STOCKHOLM/LONDON — Top executives in the artificial intelligence (AI) industry, including OpenAI CEO Sam Altman, joined experts and professors on Tuesday in raising the "risk of extinction from AI," urging policymakers to treat it on a par with the risks posed by pandemics and nuclear war.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," more than 350 signatories wrote in a letter published by the Center for AI Safety (CAIS), a non-profit organization.

In addition to Altman, the signatories included the CEOs of AI companies DeepMind and Anthropic, as well as executives at Microsoft MSFT.O and Google GOOGL.O. Also among them were Geoffrey Hinton and Yoshua Bengio – two of the three so-called "godfathers of AI" who received the 2018 Turing Award for their work on deep learning – and professors at institutions ranging from Harvard to China's Tsinghua University.

A CAIS statement singled out Meta META.O, where AI's third godfather, Yann LeCun, works, for not signing the letter. "We asked a lot of Meta employees to sign," said Dan Hendrycks, CAIS director. Meta did not immediately respond to requests for comment.

The letter coincided with the US-European Union Trade and Technology Council meeting in Sweden, where politicians are expected to address AI regulation. Elon Musk and a group of AI experts and industry executives were the first to cite such potential risks to society, in April. "We have extended an invitation (to Musk) and expect him to sign it this week," Hendrycks said.

Recent advances in AI have created tools that supporters say can be used in applications ranging from medical diagnosis to writing legal briefs, but they have also stoked fears that the technology could lead to privacy violations, fuel disinformation campaigns and cause problems with "intelligent machines" that think for themselves.
The warning came two months after the nonprofit Future of Life Institute (FLI) published a similar open letter, signed by Musk and hundreds of others, demanding an urgent pause on advanced AI research and citing risks to humanity. "Our letter made the demand for a pause known; this one makes the risk of extinction known," said Max Tegmark, president of the FLI, who also signed the more recent letter. "Now an open and constructive conversation can finally begin."

AI pioneer Hinton previously told Reuters that AI could pose a "more urgent" threat to humanity than climate change. Last week, Altman called the EU's efforts to create AI regulation – a world first – overregulation and threatened to leave Europe. He retracted the threat within days after criticism from politicians.