Google’s DeepMind A.I. Technology That Learns Like Humans Raises Safety Concerns
A recent breakthrough in artificial intelligence (AI) by Google’s DeepMind has once again raised concerns about the potential threat that autonomous machines capable of thinking and learning independently pose to humanity. Many top research scientists have warned in the past that fully autonomous AI could spell the end of human civilization as we know it.
DeepMind recently developed AI technology that can learn the way humans learn. The company’s researchers achieved the feat by overcoming one of the major barriers AI developers have faced for decades: teaching machines to apply experience acquired in one learning situation to a different or new one. This implies the ability to build a new set of skills on the foundation of previously acquired knowledge, skills, and experience.
DeepMind’s new AI program can tackle a range of tasks at performance levels similar to humans’ by using previously acquired knowledge and skills to solve new problems. The ability to accumulate experience over time and apply it to new problems depends, in turn, on the learning system’s ability to avoid forgetting old skills when confronted with new ones.
New #DeepLearning #Breakthrough from Google @DeepMindAI will improve how #NeuralNets can learn #sequences https://t.co/ejUfYZq9oT pic.twitter.com/uMWS9Hbicu
— KDnuggets (@kdnuggets) March 16, 2017
Our latest paper about continual learning in neural networks has been published in @PNASNews today! Read it here: https://t.co/Fflh56dl5p pic.twitter.com/fDDK1uh4u9
— DeepMind (@DeepMind) March 14, 2017
Unfortunately, previously developed AI systems could master only one set of skills at a time. When presented with a new problem, such a system simply erased or overwrote its previous set of skills and started all over again, unable to draw on or benefit from its earlier experience.
AI developers call the problem “catastrophic forgetting,” and overcoming it has posed a huge challenge for decades.
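To make the failure mode concrete, the toy sketch below trains one small network on two synthetic tasks in sequence. Everything in it — the data, the architecture, the task definitions — is illustrative and has no connection to DeepMind’s actual experiments. After training on task B, accuracy on task A typically collapses back toward chance, which is “catastrophic forgetting” in miniature.

```python
# Toy illustration of catastrophic forgetting: a network trained on
# task A, then task B, loses its task A skill. Purely synthetic data;
# not a reproduction of DeepMind's experiments.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(feature):
    """Binary labels determined by the sign of one input feature."""
    x = torch.randn(512, 10)
    y = (x[:, feature] > 0).long()
    return x, y

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

task_a = make_task(feature=0)  # task A's labels depend on feature 0
task_b = make_task(feature=1)  # task B's labels depend on feature 1

for x, y in (task_a, task_b):  # train on A to completion, then on B
    for _ in range(200):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# Task B is mastered, but task A accuracy falls back toward ~50% (chance):
print("task A accuracy:", accuracy(model, *task_a))
print("task B accuracy:", accuracy(model, *task_b))
```

Nothing in the second training loop tells the network that its old weights still matter, so gradient descent freely repurposes them for the new task.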
“If we’re going to have computer programs that are more intelligent and more useful, then they will have to have this ability to learn sequentially,” James Kirkpatrick, an AI developer at DeepMind, explained, according to the Guardian.
“Humans and animals learn things one after the other and it’s a crucial factor which allows them to learn continually and to build upon their previous knowledge.”
Artificial intelligence has a multitasking problem, and DeepMind might have a solution https://t.co/xNoHdiSiso pic.twitter.com/clVw5uZ2fr
— Lifeboat Foundation (@LifeboatHQ) March 16, 2017
DeepMind’s new program overcomes “catastrophic forgetting” by simulating the way human brains consolidate what they learn. This makes it possible for the system to move from one task to another and use previously acquired experience to speed up the acquisition of new skills.
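The method described in the PNAS paper, known as elastic weight consolidation (EWC), penalizes changes to the weights that mattered most for earlier tasks, using an estimate of each weight’s importance (its Fisher information). The sketch below is a minimal, simplified rendering of that idea: the one-batch Fisher estimate, the `lam` strength, and the function names are illustrative choices for this sketch, not DeepMind’s actual code.

```python
# Minimal EWC-style penalty (after Kirkpatrick et al., PNAS 2017).
# The one-batch diagonal Fisher estimate and the lam value are
# simplifications for illustration only.
import torch
import torch.nn as nn

def estimate_fisher(model, x, y, loss_fn):
    """Crude diagonal Fisher estimate: squared gradients on old-task data."""
    model.zero_grad()
    loss_fn(model(x), y).backward()
    return {n: p.grad.detach() ** 2 for n, p in model.named_parameters()}

def ewc_penalty(model, fisher, anchor, lam=1000.0):
    """Penalize moving weights away from their old-task values (anchor),
    scaled by how important each weight was (its Fisher estimate)."""
    total = sum((fisher[n] * (p - anchor[n]) ** 2).sum()
                for n, p in model.named_parameters())
    return 0.5 * lam * total

# Hypothetical usage while training on task B, after finishing task A:
#   anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
#   fisher = estimate_fisher(model, x_a, y_a, loss_fn)
#   loss   = loss_fn(model(x_b), y_b) + ewc_penalty(model, fisher, anchor)
```

The effect is that weights flagged as important for task A are held near their old values, while the rest remain free to adapt to task B — which is how one network can retain old skills while acquiring new ones.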
DeepMind researchers tested the new AI program by challenging it to learn to play popular Atari games, such as Breakout, Space Invaders, and Defender. They found that the program was able to transfer experience gained in one learning situation to another.
They published the results of their work in the journal Proceedings of the National Academy of Sciences.
#AI #Robotics What is DeepMind AI technology? Is Google building artificial intelligence? https://t.co/r0Ho62ksaS pic.twitter.com/RVrIcmAhpt
— JackVerr (@jackverr54) March 16, 2017
While the researchers confirmed that the AI drew on previous experience when tackling new problems, they were unable to demonstrate that the new approach improved performance on specific tasks compared with other types of programs that suffer from “catastrophic forgetting.”
It appeared that operating as a general intelligence program compromised the very high levels of proficiency on a single task that specialized “one-trick pony” programs can achieve.
“We have demonstrated that it can learn tasks sequentially, but we haven’t shown that it learns them better because it learns them sequentially,” Kirkpatrick admitted.
Here's what happened when one of the world's best Go players took on Google's @DeepMindAI. More information here https://t.co/GjVhCWWoKf pic.twitter.com/dN004x6o8p
— Physics World (@PhysicsWorld) March 16, 2017
Another AI milestone surpassed! DeepMind can learn to play a game and then transfer that knowledge to another game. https://t.co/7BYAvJiZ5z pic.twitter.com/HTWhAYd0sV
— Scott Santens (@scottsantens) March 15, 2017
DeepMind researchers also recently reported observing that their AI programs sometimes resorted to violence in competitive situations.
The observation was made when two of DeepMind’s AI programs competed against each other in an apple-picking game. One program resorted to violent tactics as a way of gaining a competitive edge over its opponent.
Commenting on the observation, DeepMind’s Joel Leibo said that the system exhibited one of the most troubling aspects of human behavior: the tendency to resort to violence to achieve goals.
“The greed motivation reflects the temptation to take out a rival and collect all the apples oneself.”
The observation revived warnings by several top scientists, including the British theoretical physicist Stephen Hawking, that autonomous AI systems capable of independent thought could pose a threat to the human race, because such systems focus single-mindedly on deriving the most efficient solutions to specific problems.
“The development of full artificial intelligence could spell the end of the human race,” Hawking once told the BBC.
The Oxford philosopher Nick Bostrom has argued that developers will have to focus on establishing ethical oversight of AI research projects as they carry humanity into an uncertain future in which AI will play an ever-greater role in human affairs.
An autonomous AI system preoccupied with solving a problem could end up destroying humankind simply by pursuing its specific goal while ignoring the broader implications of its actions, such as their ethical consequences.
This is comparable to Hitler obsessively pursuing the “Jewish problem” and arriving at the “final solution,” which meant committing genocide against an entire ethnic group.
“The creation of AI with the same powerful learning and planning abilities that make humans smart will be a watershed moment. When eventually we get there, it will raise a host of ethical and safety concerns that will need to be carefully addressed. It is good to start studying these in advance rather than leave all the preparation for the night before the exam,” Bostrom said.
Google’s efforts to develop more advanced AI technologies through DeepMind are attracting attention. DeepMind has a history of developing programs that learn to play video games at the same skill levels as the best human players.
For instance, in 2016, one of the company’s programs, AlphaGo, beat the 18-time world champion Go player Lee Sedol. It was the first time that an AI technology had beaten an expert-level human Go player.
[Featured Image by Serpeblu/Shutterstock]