How Artificial Intelligence Could Destroy Us by Accident

From Stephen Hawking to Elon Musk, some of the most prominent figures in the world of artificial intelligence (AI) have expressed concern that it represents an existential threat to our species.

But according to a new book, what should concern us is not that robots become self-aware and rise up against their human masters, but that machines become so good at achieving the objectives we set for them that, if we set them the wrong task, we could end up inadvertently annihilated.


Stuart Russell, a professor at the University of California, Berkeley, is the author of Human Compatible: AI and the Problem of Control and an expert on the advances that machine learning has made possible.

"The Hollywood meme always consists of the machine that spontaneously becomes aware of itself and then decides that it hates human beings and wants to kill us all," he told the BBC.

But robots have no human feelings, so "it is completely wrong to worry about that."

Robots are getting better at the tasks we assign them.
(Photo: Getty)

"It is not really the evil conscience, but its capacity that has to worry us, only its capacity to reach a goal badly specified by us."

In an interview with the BBC's Today program, the expert gave a hypothetical example of the real threat that, in his opinion, AI could represent.

Imagine that we have a powerful AI system that is capable of controlling the planet's climate, and that we want to use it to return CO2 levels in the atmosphere to pre-industrial levels.

"The system discovers that the easiest way to do this is to get rid of all human beings, because they are the ones who are producing all this carbon dioxide in the first place," Russell said.

“And you could say, well, you can do whatever you want, but you can't get rid of human beings. So what does the system do? It just convinces us to have fewer children until there are no human beings left.”

The victory of the chess computer Deep Blue over Garry Kasparov was a milestone for the development of artificial intelligence. (Photo: Getty)

The example serves to highlight the risks of artificial intelligence acting on instructions that humans have not fully thought through.
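To make "badly specified" concrete, here is a toy sketch, my own illustration rather than anything from Russell's book; the actions and scores are invented. The optimizer simply picks whichever action maximizes the objective it was given, and nothing else. Leave a constraint out of the objective, and it cheerfully violates it.

```python
# Toy illustration (hypothetical actions and scores): an optimizer that
# picks whichever action best satisfies the objective it was given.

actions = {
    "plant_forests":    {"co2_reduced": 40,  "humans_harmed": 0},
    "capture_carbon":   {"co2_reduced": 60,  "humans_harmed": 0},
    "eliminate_humans": {"co2_reduced": 100, "humans_harmed": 100},
}

def best_action(objective):
    """Return the action that maximizes the given objective function."""
    return max(actions, key=lambda a: objective(actions[a]))

# Badly specified: "return CO2 to pre-industrial levels" and nothing more.
naive = lambda outcome: outcome["co2_reduced"]
print(best_action(naive))  # -> eliminate_humans

# The constraint we forgot has to be made explicit in the objective itself.
constrained = lambda o: o["co2_reduced"] - 1000 * o["humans_harmed"]
print(best_action(constrained))  # -> capture_carbon
```

The point is not that anyone would deploy code this crude, but that an optimizer only ever "cares" about what is written into its objective function.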

Most current AI systems are "narrow" applications, designed specifically to address one well-specified problem in a single domain, according to the Centre for the Study of Existential Risk at the University of Cambridge, in the United Kingdom.

An important moment for the field came in 1997, when the Deep Blue computer defeated the world chess champion, Garry Kasparov, in a six-game match.

But despite the feat, Deep Blue was designed by humans specifically to play chess and could not even manage a simple game of checkers.

That is not the case with subsequent advances in artificial intelligence. The AlphaGo Zero software, for example, reached a superhuman level of performance after only three days of playing Go against itself.

Using deep learning, a machine learning method that employs artificial neural networks, AlphaGo Zero required much less human programming and proved to be a very strong player of Go, chess and shōgi.

It was completely self-taught, in a way that is, perhaps, alarming.
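"Playing against itself" describes a concrete training loop. The sketch below is a heavily simplified, hypothetical illustration of the idea, not DeepMind's code: a single policy plays both sides of a trivial game (Nim with 10 stones, take 1 or 2 per turn, taking the last stone wins) and learns from the outcomes of its own games.

```python
import random
from collections import defaultdict

# Hypothetical self-play sketch (not DeepMind's method): the same policy
# plays both sides, and every finished game becomes training data.

values = defaultdict(float)  # (stones_left, move) -> learned value

def choose_move(stones, explore=0.1):
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)  # occasionally try something new
    return max(moves, key=lambda m: values[(stones, m)])

def self_play_game():
    stones, player, history = 10, 0, []
    while stones > 0:
        move = choose_move(stones)
        history.append((player, stones, move))
        stones -= move
        player = 1 - player
    winner = 1 - player  # whoever took the last stone
    for who, stones_left, move in history:  # reinforce the winner's moves
        values[(stones_left, move)] += 1.0 if who == winner else -1.0

for _ in range(5000):
    self_play_game()

# With no human examples at all, the policy tends to discover the optimal
# opening from 10 stones: take 1, leaving the opponent a multiple of 3.
print(choose_move(10, explore=0.0))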

Russell says that humans need to regain control of AI before it's too late. (Photo: Getty)

"As an artificial intelligence system becomes more powerful and more general, it could become super intelligent, superior to human performance in many or almost all domains," says the Existential Risk Center.

And that is why, according to Russell, we humans need to regain control.

According to Russell, giving artificial intelligence more precisely defined objectives is not the solution to this dilemma, because humans themselves are not sure what those goals are.

"We don't know that we don't like something until it happens," he says.

"We should change the entire base on which we build AI systems," he says, moving away from the notion of giving fixed target robots.

"Instead, the system has to know that it doesn't know what the objective is."

“And once you have systems that work that way, they will really be different from human beings. They will start asking for permission before doing things, because they won't be sure if that's what you want.”

In "2001: Odyssey in Space" (1968), a highly capable computer rebels against plans to shut it down. (Photo: Getty)

Above all, says Professor Russell, they would be "happy to be turned off, because they want to avoid doing things that you don't like."
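In code terms, that shift might look something like the toy sketch below; this is my illustration, not the book's, and the actions, candidate objectives and scores are invented. Instead of maximizing one fixed objective, the agent keeps several hypotheses about what the human wants, and defers to the human whenever a plausible hypothesis says its preferred action would be harmful.

```python
# Toy sketch (hypothetical): an agent uncertain about the true objective.
# It scores each action under several candidate objectives the human might
# hold; if any plausible objective rates its preferred action as harmful,
# it asks for permission instead of acting.

def choose_action(actions, candidate_objectives):
    # scores[action] = that action's value under each candidate objective
    scores = {a: [obj[a] for obj in candidate_objectives] for a in actions}
    best = max(actions, key=lambda a: sum(scores[a]))  # best on average
    if min(scores[best]) < 0:  # some plausible objective calls it harmful
        return f"ask the human before doing '{best}'"
    return best

# The agent doesn't know whether the human cares only about CO2 levels,
# or also about side effects.
only_co2 = {"capture_carbon": 5, "solar_geoengineering": 20}
side_effects_matter = {"capture_carbon": 5, "solar_geoengineering": -3}

print(choose_action(["capture_carbon", "solar_geoengineering"],
                    [only_co2, side_effects_matter]))
# -> ask the human before doing 'solar_geoengineering'
```

The same uncertainty is what makes shutdown acceptable: an agent that knows it may have the objective wrong has a reason to treat "the human is switching me off" as useful information rather than as an obstacle.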

"The way we build AI is a bit like the way we think of a genie inside a lamp. If you rub the lamp, the genie comes out and you say, 'I would like this to happen" https://elcomercio.pe/ "said Russell.

"And, if the AI ​​system is powerful enough, it will do exactly what you ask for and you will get exactly what you ask."

"Now, the problem with the geniuses in the lamps is that the third wish is always: 'Please undo the first two wishes because we could not specify the objectives correctly."

"Then, a machine that pursues an objective that is not the right one becomes, in effect, an enemy of the human race, an enemy that is much more powerful than us."
