
AI Researchers Urge Comprehensive Safety Assessments Before Advancing Artificial Superintelligences

Experts are calling for rigorous safety calculations, similar to those performed before the first atomic bomb test, before powerful AI systems are unleashed on humanity.

Artificial Intelligence (AI) is a term encountered everywhere today, often stretched to cover several loosely related concepts. Systems such as ChatGPT and DeepSeek dominate headlines as debate over deploying AI across every sector intensifies, with an undercurrent of anxiety reminiscent of earlier fears surrounding nuclear technology.

Comparing AI to the deployment of the atomic bomb might seem exaggerated, yet it reflects the concerns described in a recent report by The Guardian, which covered calls for safety calculations akin to those performed before the Trinity test of the first nuclear weapon.

Max Tegmark, a professor at MIT, together with his students, has published a paper calling for a protocol to assess whether an advanced AI system could evade human control. The approach parallels the calculation Arthur Compton performed before the Trinity test to estimate the risk that the detonation would ignite the atmosphere.

Tegmark's own calculation puts the probability that a highly advanced AI would pose an existential threat to humanity at a staggering 90%, a risk of a different order than patching software bugs. Such a hypothetical system is classed as an Artificial Superintelligence (ASI).

In the paper, Tegmark contends that AI developers must proactively quantify the risk of losing control by calculating what he calls the Compton constant: the probability that an advanced AI escapes human oversight. "It's insufficient to maintain positive assurances without empirical evaluations," he stated.
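To make the idea concrete, here is a minimal sketch of how such a figure might be aggregated from component estimates. This is not the methodology from Tegmark's paper; the failure scenarios, their probabilities, and the independence assumption below are all hypothetical, invented purely for illustration.

```python
# Illustrative sketch only -- NOT the method from Tegmark's paper.
# Shows, with invented numbers, how per-scenario escape-probability
# estimates could be combined into a single loss-of-control figure
# in the spirit of the "Compton constant" described above.

def loss_of_control_probability(escape_estimates: list[float]) -> float:
    """Combine per-scenario escape probabilities into one overall figure.

    Assumes the scenarios are independent -- a strong simplification
    made purely for illustration.
    """
    p_controlled = 1.0
    for p_escape in escape_estimates:
        # Probability the system stays controlled despite this scenario.
        p_controlled *= 1.0 - p_escape
    return 1.0 - p_controlled

# Hypothetical per-scenario estimates, chosen only to show the arithmetic.
estimates = [0.05, 0.10, 0.02]
print(f"Aggregate loss-of-control probability: "
      f"{loss_of_control_probability(estimates):.3f}")  # prints 0.162
```

The point of such a calculation, in Tegmark's framing, is that the overall number must be computed from explicit estimates rather than asserted from general confidence in the system.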

He is also affiliated with the Future of Life Institute, a non-profit dedicated to the safe development of AI, which in 2023 published an open letter calling for a pause on the development of the most powerful AI systems; it gathered signatures from prominent figures such as Elon Musk and Steve Wozniak.

Tegmark has also collaborated with the distinguished computer scientist Yoshua Bengio and with researchers from OpenAI, Google, and DeepMind, work that culminated in the Singapore Consensus on Global AI Safety Research Priorities report. If an ASI does emerge, he argues, there should already be established procedures for determining the risks it poses.
