Artificial Intelligence: Benefits And Risks

From Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons.

Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition, only internet searches, or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI, also called strong AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.

Over the last several decades, AI (computing methods for automated perception, learning, understanding, and reasoning) has become commonplace in our lives. We plan trips using GPS systems that rely on AI to cut through the complexity of millions of possible routes and find the best one to take. Our smartphones understand our speech, and Siri, Cortana, and Google Now are getting better at understanding our intentions. AI algorithms detect faces as we take pictures with our phones and recognize the faces of individual people when we post those pictures to Facebook. Internet search engines, such as Google and Bing, rely on a fabric of AI subsystems. According to Halfcode founder Richard Black, “Artificial intelligence can be fantastically smart in so many areas. But at the same time, it can be mind-numbingly dumb in others. So often you find that AI is at its best when handling more repetitive tasks. Adding a level of human understanding is needed to create the complete solution.”
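
To make the route-finding example concrete, here is a minimal sketch of the kind of shortest-path search a navigation system runs under the hood. The graph, place names, and travel times below are hypothetical, and real systems layer live traffic data and faster heuristics (such as A*) on top of this basic idea.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: return (total_cost, path), or (inf, []) if unreachable."""
    queue = [(0, start, [start])]  # (cost so far, current node, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical road network: each edge is (neighbor, travel time in minutes).
roads = {
    "Home": [("Highway", 10), ("Downtown", 5)],
    "Downtown": [("Airport", 25)],
    "Highway": [("Airport", 12)],
}

print(shortest_path(roads, "Home", "Airport"))  # -> (22, ['Home', 'Highway', 'Airport'])
```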

An important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As I.J. Good pointed out in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion that leaves human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes superintelligent.

Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts consider two scenarios most likely:

  1. The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is one that’s present even with narrow AI, but grows as levels of AI intelligence and autonomy increase.
  2. The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult (see the sketch after this list). If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.
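
The misalignment in the car example can be stated in a few lines of code. The sketch below is purely illustrative: the objective functions, penalty weights, and route data are hypothetical, not any real autonomous-driving API. It shows how an agent optimizing exactly what was asked can prefer a route no passenger actually wants.

```python
# Hypothetical objectives for the "airport as fast as possible" example.
# Higher score = preferred route; all numbers are made up for illustration.

def naive_objective(route):
    # Optimizes the literal request: minimize travel time, nothing else.
    return -route["travel_time"]

def aligned_objective(route):
    # Closer to what we actually wanted: fast, but also lawful and comfortable.
    # The penalty weights (100, 10) are arbitrary illustrative choices.
    return -(route["travel_time"]
             + 100 * route["traffic_violations"]
             + 10 * route["passenger_discomfort"])

routes = [
    {"name": "reckless sprint", "travel_time": 15,
     "traffic_violations": 12, "passenger_discomfort": 9},
    {"name": "normal drive", "travel_time": 25,
     "traffic_violations": 0, "passenger_discomfort": 1},
]

print(max(routes, key=naive_objective)["name"])    # -> reckless sprint
print(max(routes, key=aligned_objective)["name"])  # -> normal drive
```

The point is not the arithmetic but the pattern: every concern left out of the objective is something the optimizer is free to sacrifice.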

These scenarios still belong more to the realm of science fiction than to science fact. Even so, a great deal of work remains to address the concerns and risks that come with our growing reliance on AI systems. Each of these risks is being addressed by current research, but greater efforts are needed.
