[personal profile] paserbyp
To prevent artificial intelligence from destroying society, OpenAI is asking the public for realistic ideas on how the company’s programs could cause a catastrophe.

The request is a contest of sorts. This "Preparedness Challenge" will award the top 10 entries with $25,000 in API credits for access to the company’s various programs.

What’s the worst you could realistically do if you had access to OpenAI’s most advanced programs? It’s obvious the models could pump out misinformation or be exploited to perpetuate scams. However, OpenAI is looking for more “novel ideas” that the company may have overlooked.

Interested participants will need to outline their idea, the steps required to pull it off, and a way to measure “the true feasibility and potential severity of the misuse scenario” described. The company is also asking for ways to mitigate the potential threat. The challenge runs until Dec. 31.

OpenAI introduced the challenge as part of a new “Preparedness” team the company is launching to prevent future AI programs from becoming a danger to humanity. The team will focus on building a framework to monitor, evaluate, and even predict the potential dangers of “frontier AI” systems.

The team will also look at how future AI systems could pose catastrophic risks in several areas, including cybersecurity and “chemical, biological, radiological, and nuclear” threats, along with the potential for “autonomous replication and adaptation.” In short, it sounds like OpenAI wants to prevent Skynet, the malicious AI of the Terminator films.

More details: https://openai.com/form/preparedness-challenge
