Elon Musk, CEO of SpaceX and chief product architect at Tesla Motors, today recommended Superintelligence by Nick Bostrom, a prominent advocate for the idea that a Terminator-style scenario could become reality if we allow artificial intelligence to develop too quickly.
Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.
— Elon Musk (@elonmusk) August 3, 2014
With someone as renowned in the science world as Musk giving his stamp of approval to the book, it's easy to assume that Bostrom's AI ideas are fairly mainstream. But his earlier work includes a 2003 essay arguing that we may be living in a computer simulation, as technology news source CNET reported.
But even if you're inclined to dismiss Musk's recommendation of Bostrom's book because of his odd-ball simulation argument, that doesn't mean his theories lack traction. Bostrom gave a TED Talk last March (as did Musk) arguing that existential risk, the category of possible outcomes that would mean the literal end of the human race, is the most frightening kind of threat. He spends the rest of the talk arguing that humans aren't properly accounting for that threat, noting that there is more published research on the dung beetle than on human extinction.
But even some of Musk's peers might not be sold on Bostrom's ideas. A recent Business Insider India profile of Vicarious, an artificial intelligence company Musk has invested in, featured one of the firm's advisors, Bruno Olshausen, downplaying the imminence of any threat from artificial intelligence.
Absent a major paradigm shift – something unforeseeable at present – I would not say we are at the point where we should truly be worried about AI going out of control. That is not to say that we shouldn’t worry about how humans will use machines or engage in warfare via machines – e.g., for domestic spying, foreign espionage, hacking attacks and the like. But in the meantime we can rest easy knowing that computers themselves are not going to take over the world anytime soon, or in the foreseeable future.
Musk certainly hasn't been shy about his belief in a possible doomsday scenario surrounding AI. In fact, he has said he invested in Vicarious partly to keep an eye on the development of artificial intelligence, The Guardian reported. Unsurprisingly, Musk didn't backtrack much in a follow-up tweet several hours later, only briefly acknowledging the likely academic criticism: the threat may be frightening, but it isn't going to arrive nearly as soon as he implied.
Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable
— Elon Musk (@elonmusk) August 3, 2014
Are science fiction fantasies becoming plausible realities now that names like Elon Musk stand behind them? Do you think these ideas deserve serious academic investigation?