TORONTO, Canada — Geoffrey Hinton, one of the so-called godfathers of artificial intelligence (AI), on Wednesday urged governments to step in and make sure machines do not take over society.

Hinton made headlines in May when he announced that he had left Google after a decade of work in order to speak more freely about the dangers of AI, shortly after the release of ChatGPT captured the world's imagination.

The highly respected AI scientist, who works at the University of Toronto, was speaking to a packed audience at the Collision technology conference in the Canadian city. The conference brought together more than 30,000 start-up founders, investors and tech workers, most of them looking to learn how to ride the wave of AI rather than to hear a lesson about its dangers.

"Before AI outsmarts us, I think the people developing it should be encouraged to put a lot of effort into understanding how it might try to take control away," Hinton said.

"Right now there are 99 very smart people trying to improve the AI and one very smart person trying to figure out how to stop it from taking over, and maybe you want to be more balanced," he said.

AI could deepen inequality, says Hinton

Hinton has warned that AI's risks should be taken seriously, despite critics who believe he is exaggerating them.

"I think it's important that people understand that this is not science fiction, this is not just scaremongering," he insisted. "It's a real risk that we need to think about, and we need to figure out in advance how to deal with it."

Hinton also expressed concern that AI will deepen inequality, with the massive productivity gains from its adoption benefiting the rich rather than workers.

"The wealth is not going to go to the people who do the work. It is going to make the rich richer and not the poorer, and that is very bad for society," he added.

He also pointed to the danger of fake news created by ChatGPT-style bots and said he hoped AI-generated content could be marked in a similar way to how central banks watermark cash.

"It's very important to try, for example, to mark everything that's fake as fake. Whether we can do that technically, I don't know," he said.

The European Union is considering such a technique in its AI Act, the legislation that will set the rules for AI in Europe and is currently being negotiated by lawmakers.

'Overpopulation on Mars'

Hinton's list of AI dangers stood in stark contrast to the conference discussions, which focused less on safety and threats and more on seizing the opportunity created by ChatGPT.

Venture capitalist Sarah Guo said talk of AI as an existential threat was premature, likening it to "talking about overpopulation on Mars", a phrase she borrowed from another AI guru, Andrew Ng.

She also warned against "regulatory capture" that would see government intervention protect incumbents before AI had a chance to benefit sectors such as healthcare, education or science.

Opinions differed on whether the current generative AI giants, primarily Microsoft-backed OpenAI and Google, would remain unmatched or whether new entrants would expand the field with their own models and innovations.

"In five years, I still figure if you want to go and find the best, most accurate, most advanced overall model, you're probably going to have to go to one of the few companies that have the capital to do it," said Leigh Marie Braswell of the venture capital firm Kleiner Perkins.
Zachary Bratun-Glennon of Gradient Ventures said he envisioned a future where "there will be millions of models on a network, much like the network of websites we have today."
