World-famous theoretical physicist and Cambridge University professor Stephen Hawking has a history of speaking out about his fears for the future of humanity. For years, he has warned the world about the potential dangers of artificial intelligence, expressing concern that the technology could spell doom for the human race if it is allowed to run amok or developed without an abundance of caution. Now, the Washington Post reports that the 74-year-old physicist is publicly expanding the list of disasters he believes mankind might face in the future.
Professor Hawking believes that many of the biggest threats to the continued existence of the human race are problems of our own creation, most of which are related to our ever-advancing technology. Speaking at a public Q&A session ahead of the annual BBC Reith Lectures, Hawking discussed the statistics of a worldwide disaster striking planet Earth, saying that the odds are low in any given year. He went on to explain that over the course of 1,000 to 10,000 years, however, the likelihood of a global disaster becomes a “near certainty.”
Among the potentially apocalyptic threats Stephen Hawking believes humanity might contend with in the coming century are fallout from human-induced climate change, nuclear war, and genetically engineered viruses.
So why does Stephen Hawking think that the next 100 years are so critical for the survival of humanity? In a nutshell, mankind hasn’t reached the technological level needed to escape planet Earth if things go wrong. Hawking believes that within the next century or so, humanity will have the ability to construct self-sustaining colonies on other planets or their satellites. In the interim, if we find ourselves dealing with the fallout of our technological ambitions, we won’t have anywhere to go.
“We will not establish self-sustaining colonies in space for at least the next hundred years, so we have to be very careful in this period.”
While Hawking recognizes that technology could spell the end of the world as we know it, and perhaps even bring about the extinction of our species, he doesn’t think humanity should abandon its technological ambitions. He does, however, think we should be very careful about how we move forward.
“We are not going to stop making progress, or reverse it, so we have to recognize the dangers and control them. I’m an optimist, and I believe we can.”
Stephen Hawking is most critical and wary of the development of full artificial intelligence, and in a 2014 interview he famously told the BBC that it could “spell the end of the human race.” Hawking went on to lay out the reason for his fears: fully functional artificial intelligence would be capable of redesigning itself at an ever-increasing rate. Humanity, limited by the speed of purely biological evolution, could never compete with that pace of progress and, in Hawking’s opinion, would ultimately and inevitably be superseded by its own creation.
Ironically, while Stephen Hawking is vocal in his fears of the potential harm artificial intelligence could inflict on humanity, he is dependent upon it for his survival. The professor suffers from amyotrophic lateral sclerosis (ALS), otherwise known as Lou Gehrig’s Disease. The degenerative, progressive disease has resulted in Hawking being almost completely paralyzed. As a result of a tracheotomy he underwent in 1985, he is also unable to speak. Stephen Hawking’s physical disabilities have left him utterly reliant on machines and computers for survival. His verbal communication is 100 percent dependent on a computer, which, the Washington Post reports, just got an artificial intelligence upgrade.
According to the Washington Post, Stephen Hawking had his computer’s artificial intelligence software upgraded to the latest iteration, known as ACAT (Assistive Context Aware Toolkit), in 2014.
Despite Stephen Hawking’s palpable confidence that mankind will be the cause of its own demise, most likely due to technology running amok, not everyone is taking his warnings to heart. In fact, Professor Hawking and Elon Musk, two of the most recognizable and influential figures in science and technology of the last 50 years, were recently “honored” with the 2015 Luddite Award, reports the International Business Times. The annual award is given by the Information Technology & Innovation Foundation (ITIF) to the “worst offenders of the year when it comes to foiling technological progress.”
Despite awarding the 2015 Luddite Award to Stephen Hawking and Elon Musk, ITIF President Robert D. Atkinson called both men “pioneers of science and technology” in a written statement. He added that, even so, the pair are members of a “loose coalition” of “alarmists.”
“…they and others have done a disservice to the public — and have unquestionably given aid and comfort to an increasingly pervasive neo-Luddite impulse in society today — by demonizing AI in the popular imagination.”
While Professor Hawking has repeatedly been blunt in his criticism of and warnings about unchecked technological advancement, it hasn’t stopped him from continuing to reap the benefits of the same technology. It hasn’t even prevented him from continuing his work in the very field he says poses the biggest threat to humanity’s future survival: artificial intelligence.
Just last year, Hawking (along with Elon Musk) signed an open letter calling for future artificial intelligence research to be geared toward “maximizing the societal benefits of artificial intelligence.” The pair also signed another letter urging a ban on “autonomous weapons” capable of selecting and engaging targets without human assistance or intervention. Such weapons would be the first step in an “artificial intelligence arms race,” according to Musk and Stephen Hawking.
[Photo by Tim P. Whitby/Getty Images]