Terminator: Dark Fate: Scientists' Criticisms of the New Film in the Saga

Image caption: The saga starring Linda Hamilton and Arnold Schwarzenegger returns with a new sequel, "Terminator: Dark Fate". (Photo: Victor Chavez)

Arnold Schwarzenegger was serious when he said, "I'll be back," but not everyone is delighted that there is a new Terminator movie premiering this week.

In laboratories at the University of Cambridge, Facebook and Amazon, researchers fear that "Terminator: Dark Fate" could give the public a misleading picture of the real dangers of artificial intelligence (AI).
AI pioneer Yoshua Bengio told the BBC that he did not like Terminator movies for several reasons.

"They paint a picture that is not really consistent with the current understanding of how AI systems are built today and in the near future," says Professor Bengio, who is sometimes called one of the "godfathers of AI" for his work on deep learning in the 1990s and 2000s.

"We are very far from super-intelligent artificial intelligence systems, and there may even be fundamental obstacles to going far beyond human intelligence."

Image caption: AI pioneer Yoshua Bengio believes we are still far from developing super-intelligent machines. (Photo: Maryse Boyce)

In the same way that "Jaws" shaped many people's opinions of sharks in ways that did not align with scientific reality, apocalyptic science fiction films like "Terminator" can create misplaced fears of uncontrollable, all-powerful AI.

"The reality is that that is not going to happen," says Edward Grefenstette, a research scientist at Facebook AI Research in London.

While enhanced human cyborgs rebel in the new "Terminator" sequel, today's artificial intelligence systems are barely able to play board games such as Go or recognize faces in a photo. And although they can perform those tasks better than humans, they are far from being able to control a body.

"Today's cutting-edge systems could not even control the body of a mouse," says Bengio, co-founder of the Canadian AI research company Element AI.

Today's AI systems struggle to excel at more than one task, which is why they are often referred to as "weak AI" rather than "strong AI".

Image caption: Neil Lawrence, professor of machine learning at the University of Cambridge, believes we should rethink our concept of AI. (Photo: University of Cambridge)

But many of today's AI technologies would be more accurately called "computation and statistics", according to Neil Lawrence, who recently left Amazon to become the first DeepMind Professor of Machine Learning at the University of Cambridge.

"Most of what we call AI is really the use of large computing power combined with large amounts of data to tease out statistical correlations," he says.

Unfounded fears

Public figures like Elon Musk have done a good job of scaring some people into thinking that "Terminator" could become reality in the not-too-distant future, thanks in part to statements such as that AI is "potentially more dangerous than nuclear weapons".

But the AI community is not sure how fast artificial intelligence will advance over the next five years, much less over the next 10 to 30 years.

There is also skepticism in the community about whether artificial intelligence systems will ever reach the same level of intelligence as humans, or whether that would even be desirable.

"Normally, when people talk about the risks of AI, they have in mind scenarios in which machines have achieved 'strong artificial intelligence' and have the cognitive abilities to act beyond the control and specification of the humans who created them," says Grefenstette.

"With all due respect to people who talk about the dangers of strong AI and its imminence, this is an unrealistic perspective, since recent progress in AI still invariably focuses on developing very specific skills within controlled domains."

Image caption: "The film can make people think about what wars will be like in the future," says Lawrence. (Photo: Kerry Brown)

We should be more concerned with how humans abuse the power offered by AI, says Bengio.

How will AI further accentuate inequality? How will AI be used in surveillance? How will AI be used in war?

The idea of relatively dumb AI systems controlling unchecked killing machines is terrifying.

Image caption: Linda Hamilton returns to the role she debuted almost four decades ago. (Photo: Skydance Productions / Paramount Pictures)

"The film can make people think about what wars will be like in the future," says Lawrence.

For Joanna Bryson, who runs the Bath Intelligent Systems group at the University of Bath, "it's good to make people think about the problems of autonomous weapons systems".

But we don't need to look to the future to see AI doing harm. Facial recognition systems are being used to track and oppress Uyghurs in China, bots are being used to manipulate elections, and "deepfake" videos already exist.

"Artificial intelligence is already helping us destroy our democracies and corrupt our economies and the rule of law," according to Bryson.

Fortunately, many of the current AI researchers are working hard to ensure that their systems have a positive impact on people and society, focusing their efforts on areas such as medical care and climate change.

At Facebook, for example, researchers are trying to work out how to train artificial systems that understand our language, follow instructions and communicate with us or with other systems.

"Our main objective is to produce artificial intelligence that is more cooperative, communicative and transparent about its intentions and plans, while helping people in real-world settings," says Grefenstette.

Image caption: Joanna Bryson will become professor of ethics and technology at the Hertie School of Governance in Berlin in February. (Photo: Bath University)

Ultimately, the responsibility for accurately conveying the true state of AI lies with the media.

The choice of photo directly affects interest in an article, but journalism schools around the world would strongly advise against misleading the public simply to chase clicks.

Unfortunately, there have been numerous cases of media outlets using stills from "Terminator" films in stories about relatively incremental advances.

Bryson says she skips past these kinds of articles as if they were advertisements, while Lawrence assumes they are clickbait.


Journalists who write AI stories "should show the cubicles of the people who really develop AI", at Google or Facebook, for example, says Bryson.

"The press must stop treating AI as a kind of scientific discovery that is dug out of the ground or found on Mars," she says. "AI is just a technology that people use to do things."
