Wednesday, February 4, 2015

Gates & Musk On The Inevitable A.I. Apocalypse

Bill Gates is arguably one of the smartest men in the world, and his passion for technology cannot be overstated. Gates is also one of the world's biggest philanthropists, but his outlook isn't positive on all things, namely artificial intelligence.

Image Credit: Simon Davis/DFID


Ask Me Anything - Gates

During a recent Ask Me Anything session on Reddit, Gates answered questions ranging from his biggest regrets to his favorite foods, and he outlined predictions for the future that are both optimistic and ominous.

About halfway through the Q&A, Gates was asked what personal computing will look like in 2045, to which Gates responded by saying that the next 30 years will be a time of rapid progress. Gates wrote, "Even in the next 10 years problems like vision and speech understanding and translation will be very good. Mechanical robot tasks like picking fruit or moving a hospital patient will be solved. Once computers/robots get to a level of capability where seeing and moving is easy for them then they will be used very extensively."



Personal Agent Project

Gates then went on to talk about Microsoft's "Personal Agent" project, which is being designed to help people manage their memory, attention and focus. According to Gates, "The idea that you have to find applications and pick them and they each are trying to tell you what is new is just not the efficient model - the agent will help solve this. It will work on all your devices."

Responses from the Redditors were mixed, with some users poking fun at Gates and others raising suspicions. "This technology you are developing sounds at its essence like the centralization of knowledge intake," wrote one user. "Ergo, whoever controls this will control what information people make their own. Even today, we see the daily consequences of people who live in an environment that essentially tunnel-visions their knowledge."



Summoning The Demon? - Musk

Naturally, this segued into a discussion about the threat of superintelligent machines to the human race, a question that has been at the forefront of recent discussions among the world's leading futurists, including Stephen Hawking, who stated last month that artificial intelligence "could spell the end of the human race". Back in October, at the MIT Aeronautics and Astronautics Department's Centennial Symposium, Tesla head Elon Musk referred to A.I. as "summoning the demon".

Musk said, "I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it's probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish. With artificial intelligence, we are summoning the demon. In all those stories where there's the guy with the pentagram and the holy water, it's like yeah he's sure he can control the demon. Didn't work out."

British inventor Clive Sinclair has stated that he thinks A.I. will doom mankind. "Once you start to make machines that are rivaling and surpassing humans with intelligence, it's going to be very difficult for us to survive," Sinclair said. "It's just an inevitability." In his AMA, Gates put himself on the side of the alarmed.

Should We Be Concerned?

"I am in the camp that is concerned about super intelligence," Gates wrote. "First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."


Interestingly enough, Gates' comments come just after the managing director of Microsoft Research's Redmond Lab stated that doomsday declarations about the threat to human life are exaggerated.

"There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences," said Eric Horvitz. "I fundamentally don't think that's going to happen. I think that we will be very proactive in the terms of how we field AI systems, and that in the end we'll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life." Horivtz also noted that "over a quarter of all attention and resources" at Microsoft Research are focused on artificial intelligence.


The A.I. Apocalypse

Whether or not artificial intelligence will inevitably be the end of mankind is still up for debate, but the good news, at least as far as I'm concerned, is that we still have a long time before any of that happens. Hopefully it's enough time for us to realize we're building an army of Terminators before Arnold has to come back and fix things for us.


