Why AI will never rule the world

Call it the Skynet hypothesis, artificial general intelligence, or the advent of the singularity — for years, AI experts and non-experts alike have fretted over (and, in some quarters, celebrated) the idea that artificial intelligence may one day become smarter than humans.

According to the theory, advances in AI — specifically machine learning systems that can take in new information and update their own models accordingly — will eventually catch up with the wetware of the biological brain. In this interpretation of events, every AI advance, from Jeopardy-winning IBM machines to the massive AI language model GPT-3, is taking humanity one step closer to an existential threat. We are, in effect, building our soon-to-be-sentient successors.

Except that it will never happen. At least, according to the authors of the new book Why Machines Will Never Rule the World: Artificial Intelligence without Fear.

Co-authors Barry Smith, a professor of philosophy at the University at Buffalo, and Jobst Landgrebe, founder of the German AI company Cognotekt, argue that human intelligence won’t be overtaken by “an immortal dictator” any time soon — or ever. They told Digital Trends their reasons why.

Digital Trends (DT): How did this subject get on your radar?

Jobst Landgrebe (JL): I’m a physician and biochemist by training. When I started my career, I did experiments that generated a lot of data. I started to study mathematics to be able to interpret these data, and saw how hard it is to model biological systems using mathematics. There was always this misfit between the mathematical methods and the biological data.

In my mid-thirties, I left academia and became a business consultant and entrepreneur working in artificial intelligence software systems. I was trying to build AI systems to mimic what human beings can do. I realized that I was running into the same problem that I had years before in biology.

Customers said to me, ‘Why don’t you build chatbots?’ I said, ‘Because they won’t work; we cannot model this type of system properly.’ That ultimately led to my writing this book.

Professor Barry Smith (BS): I thought it was a very interesting problem. I already had inklings of similar problems with AI, but I had never thought them through. Initially, we wrote a paper called ‘Making artificial intelligence meaningful again.’ (This was in the Trump era.) It was about why neural networks fail for language modeling. Then we decided to expand the paper into a book exploring this subject more deeply.

DT: Your book expresses skepticism about the way that neural networks, which are crucial to modern deep learning, emulate the human brain. They’re approximations, rather than accurate models of how the biological brain works. But do you accept the core premise that it is possible that, were we to understand the brain in granular enough detail, it could be artificially replicated – and that this would give rise to intelligence or sentience?

JL: The name ‘neural network’ is a complete misnomer. The neural networks that we have now, even the most sophisticated ones, have nothing to do with the way the brain works. The view that the brain is a set of interconnected nodes in the way that neural networks are built is completely naïve.
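
(Editor’s note: to make the comparison concrete, here is a minimal sketch, ours rather than the authors’, of what a single “neuron” in an artificial neural network computes: a weighted sum of its inputs passed through a fixed nonlinearity. The function name and numbers are purely illustrative.)

    import math

    # A single artificial "neuron": a weighted sum of the inputs plus a bias,
    # squashed through a sigmoid nonlinearity. A few arithmetic operations,
    # far removed from the electrochemical dynamics of a biological neuron.
    def artificial_neuron(inputs, weights, bias):
        z = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

    # Illustrative call with three inputs and hand-picked weights.
    print(artificial_neuron([0.5, -1.2, 3.0], [0.4, 0.1, -0.6], bias=0.2))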

Even if you look at the most primitive bacterial cell, we still don’t understand how it works. We understand some of its aspects, but we have no model of how it works – let alone a neuron, which is much more complicated, or billions of neurons interconnected. I believe it’s scientifically impossible to understand how the brain works. We can only understand certain aspects and deal with those aspects. We don’t have, and we will not get, a full understanding of how the brain works.

If we had a perfect understanding of how each molecule of the brain works, then we could probably replicate it. That would mean putting everything into mathematical equations. Then you could replicate this using a computer. The problem is just that we are unable to write down and create those equations.

BS: Many of the most interesting things in the world are happening at levels of granularity that we cannot approach. We just don’t have the imaging equipment – and we probably never will – to capture most of what’s going on at the very fine levels of the brain.

This means that we don’t know, for instance, what is responsible for consciousness. There are, in fact, a series of quite interesting philosophical problems, which, according to the method that we’re following, will always be unsolvable – and so we should just ignore them.

Another is the freedom of the will. We are very strongly in favor of the idea that human beings have a will; we can have intentions, goals, and so forth. But we don’t know whether or not it’s a free will. That is an issue that has to do with the physics of the brain. As far as the evidence available to us is concerned, computers can’t have a will.

DT: The subtitle of the book is ‘artificial intelligence without fear.’ What is the specific fear that you refer to?

BS: That was provoked by the literature on the singularity, which I know you’re familiar with. Nick Bostrom, David Chalmers, Elon Musk, and the like. When we talked with our colleagues in the real world, it became clear to us that there was indeed a certain fear among the populace that AI would eventually take over and change the world to the detriment of humans.

We have quite a lot in the book about the Bostrom-type arguments. The core argument against them is that if the machine cannot have a will, then it also cannot have an evil will. Without an evil will, there’s nothing to be afraid of. Now, of course, we can still be afraid of machines, just as we can be afraid of guns.

But that’s because the machines are being managed by people with evil ends. Then it’s not the AI that is evil; it’s the people who build and program it.

DT: Why does this notion of the singularity or artificial general intelligence interest people so much? Whether they’re scared by it or fascinated by it, there’s something about this idea that resonates with people on a broad level.

JL: There’s this idea, which emerged at the beginning of the 19th century and was then declared by Nietzsche at the end of that century, that God is dead. Since the elites of our society are no longer Christians, they needed a replacement. Max Stirner, who was, like Karl Marx, a pupil of Hegel, wrote a book about this, saying, ‘I am my own god.’

If you are God, you also want to be a creator. If you could create a superintelligence, then you would be like God. I think it has to do with the hyper-narcissistic tendencies in our culture. We don’t talk about this in the book, but that explains to me why this idea is so attractive in our times, in which there is no longer a transcendent entity to turn to.

DT: Interesting. So to follow that through, it’s the idea that the creation of AI – or the aim to create AI – is a narcissistic act. In that case, the concept that these creations would somehow become more powerful than we are is a nightmarish twist on that. It’s the child killing the parent.

JL: A bit like that, yes.

DT: What for you would be the ultimate outcome of your book if everyone was convinced by your arguments? What would that mean for the future of AI development?

JL: It’s a very good question. I can tell you exactly what I think would happen – and what will happen. I think in the medium term, people will accept our arguments, and this will create better applied mathematics.

Something that all great mathematicians and physicists have been completely aware of is the limitations of what they could achieve mathematically. Because they were aware of this, they focused only on certain problems. If you are well aware of the limitations, then you go through the world looking for these problems and solve them. That’s how Einstein found the equations for Brownian motion, how he came up with his theories of relativity, and how Planck solved blackbody radiation and thus initiated the quantum theory of matter. They had a good instinct for which problems are amenable to mathematical solution and which are not.
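
(Editor’s note: for reference, the blackbody law Planck arrived at – standard physics, not a formula quoted from the book – gives the spectral radiance of a body at temperature T as a function of frequency \nu:

    B_\nu(T) = \frac{2 h \nu^3}{c^2} \cdot \frac{1}{e^{h \nu / (k_B T)} - 1}

where h is Planck’s constant, c is the speed of light, and k_B is the Boltzmann constant. It is exactly the kind of tractable, sharply delimited problem Landgrebe describes.)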

If people learn the message of our book, they will, we believe, be able to engineer better systems, because they will concentrate on what is truly feasible – and stop wasting money and effort on something that can’t be achieved.

BS: I think that some of the message is already getting through, not because of what we say but because of the experiences people have when they give large amounts of money to AI projects, and then the AI projects fail. I guess you know about the Joint Artificial Intelligence Center. I can’t remember the exact sum, but I think it was something like $10 billion, which they gave to a famous contractor. In the end, they got nothing out of it. They canceled the contract.

(Editor’s note: JAIC, a subdivision of the United States Armed Forces, was intended to accelerate the “delivery and adoption of AI to achieve mission impact at scale.” In June of this year, it was folded, along with two other offices, into a larger unified organization, the office of the Chief Digital and Artificial Intelligence Officer. JAIC ceased to exist as its own entity.)

DT: What do you think, in high-level terms, is the single most compelling argument that you make in the book?

BS: Every AI system is mathematical in nature. Because we cannot model consciousness, will, or intelligence mathematically, these cannot be emulated using machines. Therefore, machines will not become intelligent, let alone superintelligent.

JL: The structure of our brain only allows limited models of nature. In physics, we pick a subset of reality that fits our mathematical modeling capabilities. That is how Newton, Maxwell, Einstein, and Schrödinger obtained their famous and beautiful models. But these can only describe or predict a small set of systems. Our best models are those we use to engineer technology. We are unable to create a complete mathematical model of animate nature.
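
(Editor’s note: the time-dependent Schrödinger equation is a textbook instance of such a model – our illustration, not one drawn from the book:

    i\hbar \frac{\partial}{\partial t} \Psi(\mathbf{r}, t) = \hat{H} \Psi(\mathbf{r}, t)

Exact solutions exist only for a handful of idealized systems, such as the hydrogen atom; most many-body systems can only be approximated, underscoring the point that even our best models cover a small set of systems.)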

This interview has been edited for length and clarity.

Luke Dormehl
Former Digital Trends Contributor