We’re at a stage in the evolution of technology where artificial intelligence is starting to become seriously smart, and seriously scary at the same time. Google’s latest research into intelligent chatbots has just been published, and it offers a peek at where this kind of smart software is heading in the years to come.
Wired has the story of the new, experimental AI. The key innovation is that it learns from a database of movie dialogue rather than from conversational rules set down by programmers. That makes it a more flexible form of machine learning, and it means the software could eventually come up with new phrases that its coders hadn’t originally thought of.
At the center of this kind of technology is the neural network, a structure loosely modeled on the neurons of the human brain; such networks already power everything from photo tagging to voice input. The approach has become more common in recent years now that the wealth of data and the computing power required to make it effective are widely available.
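To make the idea concrete, here is a rough, illustrative sketch of a data-driven chatbot of this kind: a small encoder-decoder (sequence-to-sequence) neural network that learns to map an input utterance to a reply purely from example dialogue pairs, with no hand-written conversational rules. This is not Google’s code; the tiny “movie dialogue” dataset, model sizes, and training settings below are invented for illustration, whereas the real system was trained on millions of lines of dialogue.

```python
# Toy sketch of a data-driven chatbot: an encoder-decoder network trained on
# example (prompt, reply) pairs instead of hand-coded rules.
import torch
import torch.nn as nn

# Hypothetical stand-in for a movie-dialogue corpus.
pairs = [("hello", "hi there"), ("how are you", "i am fine"), ("bye", "see you")]

# Build a word vocabulary with start/end-of-sentence tokens.
words = {"<sos>", "<eos>"}
for q, a in pairs:
    words.update(q.split())
    words.update(a.split())
idx = {w: i for i, w in enumerate(sorted(words))}

def encode(sentence):
    # Turn a sentence into a tensor of word indices, ending with <eos>.
    return torch.tensor([idx[w] for w in sentence.split()] + [idx["<eos>"]])

class Seq2Seq(nn.Module):
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)
        self.encoder = nn.LSTM(hidden, hidden)  # reads the input utterance
        self.decoder = nn.LSTM(hidden, hidden)  # generates the reply word by word
        self.out = nn.Linear(hidden, vocab)

    def forward(self, src, tgt_in):
        _, state = self.encoder(self.emb(src).unsqueeze(1))  # summarize the prompt
        dec, _ = self.decoder(self.emb(tgt_in).unsqueeze(1), state)
        return self.out(dec.squeeze(1))  # scores over possible next words

model = Seq2Seq(len(idx))
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training: predict each word of the reply given the prompt and the words so far.
for epoch in range(200):
    for q, a in pairs:
        src = encode(q)
        tgt = torch.tensor([idx["<sos>"]] + [idx[w] for w in a.split()] + [idx["<eos>"]])
        logits = model(src, tgt[:-1])
        loss = loss_fn(logits, tgt[1:])
        opt.zero_grad()
        loss.backward()
        opt.step()

def reply(prompt, max_len=10):
    # Greedy generation: feed the model's own words back in until it emits <eos>.
    with torch.no_grad():
        out_words, token = [], idx["<sos>"]
        _, state = model.encoder(model.emb(encode(prompt)).unsqueeze(1))
        for _ in range(max_len):
            dec, state = model.decoder(model.emb(torch.tensor([token])).unsqueeze(1), state)
            token = model.out(dec.squeeze(1)).argmax(-1).item()
            if token == idx["<eos>"]:
                break
            out_words.append([w for w, i in idx.items() if i == token][0])
        return " ".join(out_words)

print(reply("hello"))
```

With only three training pairs the sketch will simply memorize its replies, but it shows the basic recipe: everything the bot “knows” comes from the dialogue it was trained on, which is why a model fed a large corpus can produce answers its programmers never wrote down.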
One of the sample conversations provided by Google researchers Oriol Vinyals and Quoc Le is an interestingly disjointed discussion of the meaning of life:
Human: What is the purpose of life?
Machine: To serve the greater good.
Human: What is the purpose of living?
Machine: To live forever.
Human: What is the purpose of existence?
Machine: To find out what happens when we get to the planet Earth.
Human: Where are you now?
Machine: I’m in the middle of nowhere.
Human: What is the purpose of dying?
Machine: To have a life.
Human: What is the purpose of being intelligent?
Machine: To find out what it is.
Human: What is the purpose of emotions?
Machine: I don’t know.
Not exactly the words of something ready to overthrow its human rulers, but Vinyals and Le say it’s an important stepping stone toward chatbots that can “think outside the box” by learning from dialogue previously spoken by humans. The pair admit their approach lacks consistency, but say it could eventually be used to power automated IT help desk systems.
“Even though the model has obvious limitations, it is surprising to us that a purely data-driven approach without any rules can produce rather proper answers to many types of questions,” the report concludes. “However, the model may require substantial modifications to be able to deliver realistic conversations.”