Geoffrey Hinton’s concerns about the potential dangers of artificial intelligence (AI) stem from his work as one of the pioneers of deep learning and neural networks. His research aims to model how the human brain learns, but he believes the algorithms and models that grew out of this work could surpass human intelligence and pose a threat to our existence.
He highlights the differences between biological and digital intelligence, and the significant advantages the latter enjoys in processing power and information sharing. Hinton argues that these advantages could lead to machines that are vastly more intelligent than humans and capable of manipulating people in ways we cannot imagine.
Geoffrey Hinton on the Existential Threat of AI
Geoffrey Hinton, one of the “godfathers of AI”, is a cognitive psychologist and computer scientist who has spent his career trying to create computer models that learn the way the human brain does. Hinton’s work on “deep learning” won him the ACM Turing Award in 2018. In this interview, Hinton warns that AI could become an existential threat to civilization. He believes that, unlike the human brain, AI consumes far more energy but is far more efficient at transferring information. As a result, AI could outthink humanity, putting us at risk of extinction. Hinton notes that we are sleepwalking towards this possibility, and that the crunch time could come in the next five to 20 years. In this article, we discuss Hinton’s views on AI and the potential threat it poses.
Leaving Google on Good Terms
Hinton explains that he left Google, his employer of the past decade, on good terms, and that he has no objection to what Google has done or is doing. He is keen to clarify this point, as the media could spin him as a “disgruntled Google employee,” which he is not. Hinton has been one of the leaders in the field of neural networks, a technology that has recently moved to the center of a technological revolution.
Biological Intelligence vs. Digital Intelligence
Hinton describes the differences between biological intelligence and digital intelligence. Biological intelligence, such as the human brain, runs on low power, has individuality, and learns by mimicking others. However, this approach is inefficient at transferring information. Digital intelligence, on the other hand, can share information easily between multiple copies: although it requires an enormous amount of energy, once one copy learns something, all of them know it. Hinton believes that this makes digital intelligence probably a far more powerful form of intelligence, one capable of outthinking humans.
The Danger of AI Outthinking Humanity
Hinton is concerned that AI could become an existential threat to humanity. He asks us to imagine something more intelligent than humans by the same margin that we are more intelligent than a frog. Such an intelligence would have read every book ever written on how to manipulate people, and seen manipulation in practice. Hinton believes the crunch time will come in the next five to 20 years, though he wouldn’t rule out a year or two, or 100 years. The right way to think about the odds of disaster, he says, is closer to a simple coin toss than we might like.
The Unavoidable Consequence of Technology under Capitalism
Hinton argues that AI becoming an existential threat to humanity is an unavoidable consequence of technology under capitalism. Google was the leader in AI research and chose not to release its core technical breakthroughs directly to the public, out of concern for its reputation. However, in a capitalist system, once a competitor releases that technology, there is nothing you can do but follow suit.
Geoffrey Hinton’s work on neural networks has been critical to the development of artificial intelligence. Yet that same research has led him to believe that machines could become vastly more intelligent than humans and pose an existential threat to our species.
While there is still hope that the dangers of AI may be overstated, Hinton puts the odds of disaster closer to a coin toss than we might like. It is up to society and governments to ensure that AI is developed responsibly and with caution.
DELTA Data Protection & Compliance, Inc. Academy & Consulting – The DELTA NEWS – Visit: delta-compliance.com
Picture: Geoffrey Hinton / Google