

Marijan Hassan - Tech Journalist

Scientist warns that big tech is deliberately downplaying the existential risks of AI


A prominent artificial intelligence (AI) researcher has sparked controversy by accusing major technology companies of downplaying the potential dangers of advanced AI.



Speaking to a popular publication during the Global AI Summit in Seoul, South Korea, Max Tegmark expressed concern over the shifting focus of artificial intelligence regulation, noting that the global conversation has been diverted away from the critical existential threats AI poses.


Tegmark compared the current AI moment to the development of the nuclear bomb in the 1940s, highlighting how Enrico Fermi's creation of the first self-sustaining nuclear chain reaction in 1942 was a wake-up call for physicists.


“In 1942, Enrico Fermi built the first ever reactor with a self-sustaining nuclear chain reaction under a Chicago football field. When the top physicists at the time found out about that, they really freaked out, because they realized that the single biggest hurdle remaining to building a nuclear bomb had just been overcome.”


Tegmark says those scientists realized a bomb was just a few years away, which indeed came to pass in 1945 with the Trinity test. Similarly, he argues, AI models capable of passing the Turing test are a warning sign that humanity could lose control over AI.


Last year, after the launch of OpenAI’s GPT-4, Tegmark's non-profit organization, the Future of Life Institute, called for a six-month pause on training AI systems more powerful than GPT-4. However, despite backing from leading AI pioneers like Geoffrey Hinton and Yoshua Bengio, the call did not materialize into concrete action. Instead, AI regulation summits, starting at Bletchley Park in the UK and continuing in Seoul, have largely diluted the focus on existential threats.


Tegmark finds this shift in regulatory discussions, away from existential risks and toward issues such as privacy and job market impacts, troubling. However, he understands how it could happen, and draws a parallel to the delayed regulation of smoking despite early evidence linking it to lung cancer.


“In 1955, the first journal articles came out saying smoking causes lung cancer, and you’d think that pretty quickly there would be some regulation. But no, it took until 1980, because there was this huge push by industry to distract. I feel that’s what’s happening now.”


While acknowledging the current harms of AI—such as biases and impacts on marginalized groups—Tegmark stressed that these should not overshadow the potential for catastrophic outcomes.


Addressing people who say existential AI risks are far-fetched and that it’s better to focus on present-day harms, Tegmark noted that even top AI leaders are aware of the dangers but won’t speak about them because of the position they are in.


“I think they all feel that they’re stuck in an impossible situation where, even if they want to stop, they can’t. If a CEO of a tobacco company wakes up one morning and feels what they’re doing is not right, what’s going to happen? They’re going to replace the CEO. So the only way you can get safety first is if the government puts in place safety standards for everybody,” he said.
