
Should your own AI rat you out? It’s complicated, says the man building it

Guiseppe Cacace/AFP/Getty Images
Depending on who you ask, the future of artificial intelligence is either something to be excited about, or fearful of. Elon Musk suggests AI's ever-growing intelligence will put it at odds with humanity itself, while those who are more optimistic, like Mark Zuckerberg, think AI can help us live more fruitful, efficient lives.

Like most technology, the type of AI we end up with will depend on the people creating it. If developed with privacy and end-user control in mind, we could end up with a firmer grasp of how AI operates.

Kuna Systems is one firm looking into that possibility. The smart security camera and cloud backup provider is starting to experiment with artificial intelligence, and that's led to some interesting moral quandaries, which it's in the process of solving.

Digital Trends spoke with Haomiao Huang, Kuna's CTO, and picked his brain about the kind of problems that can be faced when developing advanced artificial intelligence. He told us that, with the right mindset, we can retain control over AI while still seeing the benefits it offers.

How AI can improve already smart technology

Modern AI, though commonplace, is limited. We see it in chatbots, image recognition systems, fraud prevention checks, and voice assistants. While useful, it's all pedestrian compared to the kind of intelligence we're used to seeing in movies and TV shows. Soon, AI could make our already smart devices smarter, removing the need for humans to manually control our technology.

IoT devices — in particular, connected security cameras — are some of the most widely hacked devices in the world.

“What [Kuna] makes is a preventative security system,” Huang told us. “Instead of waiting until someone has broken a window or door, we allow our customers to respond before a crime has taken place.” He went on, explaining that, “a traditional security system is a responsive tool to a crime, but we’re moving into the realm of preventing a crime before it happens. The system can see and respond to a crime and prevent it from happening in the first place.”

Kuna Systems’ cameras require a measure of artificial intelligence to make that possible. They must interpret what the camera feeds are picking up, and then respond accordingly.

“We already have a system in place that can detect whether that’s a person, or a car, how many people, and so on. One of the capabilities we’re working on is detecting suspicious behaviors,” Huang continued. “It’s a pretty common tactic of thieves to ring the front door and, if nobody answers, go to the back door and try to find a way in. The [AI] system we’re designing will be able to recognize that and register it as a priority, and then send an alert to our customers, or even potentially call the police.”
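The front-door-then-back-door pattern Huang describes can be sketched as a simple rule over a stream of camera events. This is an illustrative mock-up only: the event names, zones, and five-minute window are assumptions for the example, not Kuna's actual detection logic or API.

```python
from datetime import datetime, timedelta

# Illustrative: how long after a front-door visit a back-door visit
# still looks like the "casing the house" pattern Huang describes.
SUSPICIOUS_WINDOW = timedelta(minutes=5)

def is_casing_pattern(events):
    """events: list of (timestamp, camera_zone) tuples, oldest first.

    Returns True if any back-door visit follows a front-door visit
    within the suspicious time window.
    """
    front_visits = [t for t, zone in events if zone == "front_door"]
    back_visits = [t for t, zone in events if zone == "back_door"]
    return any(
        timedelta(0) < back - front <= SUSPICIOUS_WINDOW
        for front in front_visits
        for back in back_visits
    )

events = [
    (datetime(2017, 6, 1, 14, 0), "front_door"),
    (datetime(2017, 6, 1, 14, 3), "back_door"),
]
print(is_casing_pattern(events))  # True: back door visited 3 minutes after front
```

A production system would of course use learned models over video rather than hand-written rules, but the escalation idea is the same: a matched pattern raises the event's priority before anyone is notified.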

Today, such decisions are made with humans involved. The owner receives an alert that an “event” has taken place when someone, or something, trips the camera feed. They can then look at the live stream and respond accordingly. An advanced AI could automate this, responding faster than a human ever could, and do so when there’s no one around to check the camera feed.
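That escalation flow, alert the owner first, act automatically only if no one responds, could look something like the sketch below. The handler names, timeout, and "auto response" action are hypothetical, chosen to illustrate the loop the article describes.

```python
import time

def handle_event(event, owner_ack, timeout_s=30, poll_s=1):
    """Alert the owner about an event; fall back to an automated response.

    owner_ack: callable returning True once the owner has reviewed the feed.
    Returns the list of actions taken, in order.
    """
    actions = [f"alert_owner:{event}"]  # the owner is always notified first
    waited = 0
    while waited < timeout_s:
        if owner_ack():
            actions.append("owner_handled")
            return actions
        time.sleep(poll_s)
        waited += poll_s
    # Nobody checked the feed in time; the AI responds on its own,
    # e.g. sounding a siren or playing a recorded warning.
    actions.append(f"auto_response:{event}")
    return actions
```

The key design point, echoed later in the article, is that automation is a fallback on top of human review, not a replacement for it.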

“I used to be really worried about locking up my bike, but soon you’re going to be able to leave your bike by your house without locking it up, because the camera will cover it and will be able to check to see if the person taking it is authorized to do so,” Huang continued. “From there, it doesn’t make sense to steal things anymore, because you’re going to get caught and in the future, the items themselves will know whether you’re allowed to use them.”

This is similar to the work Microsoft has been doing with AI in various workplace scenarios. At Build 2017, the company showed an AI concept capable of spotting spillages, warning of workers using tools they aren't trained for, and even noting those exceeding recommended activity levels after a life-changing operation.

Having an AI keep an eye on us all has myriad benefits, but even with Huang's rosy vision of the future of AI, he and Kuna understand that there is danger in giving an AI too much control.

The moral implications of an AI in charge

Describing the authorization and oversight capabilities of future AI smart cameras as a “beautiful case,” where property crime is effectively eliminated, Huang held up a dystopian mirror to that same scenario, and showed what a murky world such technology could create.

How can artificial intelligence make decisions rooted in morality, decisions with implications an AI could never fully understand? Autonomous vehicles, for example, face the “trolley problem.” Should a car swerve off the road to avoid a family crossing the street, if doing so will endanger the lives of the passengers?

The world envisioned by Kuna would expand the issue into nearly every part of our lives.

Kuna AI can already differentiate between humans and other sources of motion, like cars and birds. Now the company is focused on teaching it to recognize criminal behavior patterns and alert you before the crime even happens.

“With smart cameras, if the AI recognizes a crime being committed against the owner, then it’s obvious what it should do,” Huang said. “But if it recognizes a crime that the owner is committing, what should it do then? I think most people would agree, if you commit a bad crime, then it should be reported and you should get in trouble for it. But there’s a gray area of small crimes. Say your camera catches you watering your lawn when you shouldn’t be — is that really something that should be reported? Probably not. If your security system sees you murdering someone, then it probably should.”

Even then, the concept of an AI security system that turns in its owner is sure to make some people uncomfortable. Security that is always on, always watching, puts society at risk of eliminating privacy altogether. And privacy isn’t the only issue that all-seeing, all-powerful AIs could bring to the table. They could also be co-opted for nefarious purposes.

IoT devices — in particular, connected security cameras — are some of the most widely hacked devices in the world, finding themselves enlisted by the millions for denial-of-service attacks. That problem would only be compounded if those products had capable artificial intelligences of their own, which could be tricked into performing their functions not at the behest of their owner, but at the whims of whoever infiltrated the device.

Giving owners the AI leash

For Huang, these problems can only be resolved by keeping the humans who own AI devices in charge of those devices. While AI can remove the need for regular human interaction, it should never eliminate human oversight.

“[It’s important to keep] the home owner involved in the loop […] It’s not just a convenience of product features, but a moral responsibility aspect of it,” he said. “Who does the responsibility actually lie with?”

“If they’re buying for it and paying for it, then they’re the one who gets to decide what the AI is going to do.”

Giving owners the option to modify the behavior of the AI they own is one possible solution. When you buy a driverless car, you could decide how it should act in certain scenarios. Do you want your car to prioritize you and your loved ones when your safety and that of a stranger must be weighed by the algorithm? What happens when the AI must decide between your safety and that of a group of jaywalking children?

When you buy a smart camera, you could decide if you want it to report crimes to the police, or only to you. You could set your preferences for crimes committed on your property, or on the street opposite. You could decide what scale of crimes it should report, and which ones it shouldn’t.

It could be that governments or developers mandate serious crimes like murder or assault are reported regardless of preference, of course. That sort of system is already in place in certain human-driven institutions, Huang points out. “School counselors are legally obligated to report abuse,” he said — so it may be that AI-powered devices have similar obligations. That’s an issue society, as a whole, will need to decide.
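Combining owner preferences with a mandated-reporting floor could be modeled as a small policy function. Everything here is hypothetical: the crime categories, the 0–10 severity scale, and the preference keys are invented to illustrate the idea, not any real product's configuration.

```python
# Crimes that would be reported regardless of owner settings,
# analogous to a school counselor's legal reporting obligations.
MANDATORY_REPORT = {"murder", "assault"}

def report_targets(crime, severity, owner_prefs):
    """Return who gets notified about a detected event.

    crime: category label for the event.
    severity: 0 (trivial) to 10 (most serious) -- an invented scale.
    owner_prefs: dict like {"notify_police_at_severity": 7}.
    """
    targets = ["owner"]  # the owner is always kept in the loop
    if crime in MANDATORY_REPORT:
        targets.append("police")  # mandated, regardless of preferences
    elif severity >= owner_prefs.get("notify_police_at_severity", 10):
        targets.append("police")  # owner opted in at this threshold
    return targets

prefs = {"notify_police_at_severity": 7}
print(report_targets("watering_lawn", 1, prefs))  # ['owner']
print(report_targets("murder", 10, prefs))        # ['owner', 'police']
```

The design choice mirrors Huang's framing: the owner sets the dial for the gray area of small crimes, while society sets a floor beneath which the dial cannot go.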

“Ultimately the decisions [these products] make come indirectly from the society they were built in and the company they were built by,” Huang said. “What we need to think about is giving that kind of authority to the users. If they’re buying for it and paying for it, then they’re the one who gets to decide what the AI is going to do in these sorts of situations.”

Despite this progressive outlook, Huang admits that Kuna could do better, and is keen to introduce more user control as AI becomes a more important facet of the service his company offers. Hopefully, others will do the same.

“When it’s automated, you explain to the user what it’s going to do and why it’s going to do it,” he said. “That’s just good design.”

Jon Martindale
Jon Martindale is a freelance evergreen writer and occasional section coordinator, covering how to guides, best-of lists, and…