OpenAI calls for global agency focused on 'existential risk' posed by superintelligence

An international agency should be in charge of inspecting and auditing artificial general intelligence to ensure the technology is safe for humanity, according to top executives at GPT-4 maker OpenAI.

CEO Sam Altman and co-founders Greg Brockman and Ilya Sutskever said it's "conceivable" that within the next decade AI will attain extraordinary capabilities that exceed those of humans.

"In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there," the trio said in a blog post on Tuesday.

The cost of building such powerful technology is only falling as more people work on advancing it, they argued. To keep that progress in check, they said, development should be supervised by an international organization like the International Atomic Energy Agency (IAEA).

The IAEA was established in 1957, amid Cold War fears over the spread of nuclear weapons. The agency helps regulate nuclear power, and sets safeguards to make sure nuclear material isn't diverted to military purposes.

"We are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc," they said.

Such a group would be in charge of tracking compute and energy use, vital resources needed to train and run large and powerful models.
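For a rough sense of what "tracking compute" could look like in practice, here is a minimal back-of-envelope sketch using the widely cited approximation of roughly 6 × parameters × training tokens for total training FLOPs. The threshold value and model figures below are purely illustrative assumptions for this article, not anything OpenAI has actually proposed.

```python
# Illustrative sketch only: estimating a model's training compute against a
# hypothetical regulatory threshold. The 6 * N * D rule of thumb
# (N = parameter count, D = training tokens) is a common approximation;
# every number here is made up for illustration.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs via 6 * N * D."""
    return 6 * params * tokens

# Hypothetical threshold an inspecting agency might set (assumed value).
THRESHOLD_FLOPS = 1e25

if __name__ == "__main__":
    # Example: a 70-billion-parameter model trained on 2 trillion tokens.
    flops = training_flops(params=70e9, tokens=2e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    if flops >= THRESHOLD_FLOPS:
        print("Above the hypothetical threshold: subject to inspection")
    else:
        print("Below the hypothetical threshold")
```

The point of such a check is that training compute, unlike model "capability," is a concrete, measurable quantity an auditor could verify from hardware and energy records.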

"We could collectively agree that the rate of growth in AI capability at the frontier is limited to a certain rate per year," OpenAI's top brass suggested. Companies would have to voluntarily agree to inspections, and the agency should focus on "reducing existential risk," not regulatory issues that are defined and set by a country's individual laws.

Last week, in a Senate hearing, Altman put forward the idea that companies should have to obtain a license to build models with capabilities above a specific threshold. His suggestion was later criticized on the grounds that it could unfairly impact AI systems built by smaller companies or the open source community, which are less likely to have the resources to meet the legal requirements.

"We think it's important to allow companies and open source projects to develop models below a significant capability threshold, without the kind of regulation we describe here (including burdensome mechanisms like licenses or audits)," they said.

In late March, Elon Musk was one of 1,000 signatories of an open letter that called for a six-month pause in developing and training AI more powerful than GPT-4, citing the potential risks to humanity, something Altman confirmed in mid-April that OpenAI was not doing.

"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter stated.

Alphabet and Google CEO Sundar Pichai wrote a piece in the Financial Times at the weekend, saying: "I still believe AI is too important not to regulate, and too important not to regulate well". ®
