Many Believe AI Superintelligence Development Needs a Pause for Safety

Prominent figures, from actors to former officials, are urging a halt to the development of superintelligent AI until its safety concerns are properly addressed.

Concerns over artificial superintelligence are mounting, and a growing number of prominent figures are calling for a temporary halt to its development until the potential safety risks have been properly evaluated. Actors, politicians, and scientists are among those who have publicly backed the cause.

“We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.”

Among those backing the call are actor and writer Sir Stephen Fry and former White House Chief Strategist Steve Bannon. The movement raises broader questions about how superintelligent AI will affect society and where it should be applied.

That Fry and Bannon find common ground on such a divisive issue illustrates how broad the support for responsible AI development has become.
