Back Door for AI
Oct. 27th, 2023 09:15 am
To prevent artificial intelligence from destroying society, OpenAI is asking the public for realistic ideas on how the company’s programs could cause a catastrophe. The request is a contest of sorts: the "Preparedness Challenge" will award the top 10 entries $25,000 in API credits for access to the company’s various programs.
What’s the worst you could realistically do if you had access to OpenAI’s most advanced programs? It’s obvious the models could pump out misinformation or be exploited to perpetuate scams. However, OpenAI is looking for more “novel ideas” that the company may have overlooked.
Interested participants will need to outline their idea, including the steps required to pull it off, and a way to measure “the true feasibility and potential severity of the misuse scenario” described. In addition, the company is asking for ways to mitigate the potential threat. The challenge runs until Dec. 31.
OpenAI introduced the challenge as part of a new “Preparedness” team that the company is launching to prevent future AI programs from becoming a danger to humanity. The team will focus on building a framework to monitor, evaluate, and even predict the potential dangers of “frontier AI” systems.
The team will also look at how future AI systems could pose catastrophic risks in several areas, including cybersecurity and “chemical, biological, radiological, and nuclear” threats, along with how artificial intelligence could be used for “autonomous replication and adaptation.” In short, it sounds like OpenAI wants to prevent Skynet, the malicious AI from the Terminator films.
More details: https://openai.com/form/preparedness-challenge