OpenAI calls for global agency focused on 'existential risk' posed by superintelligence

An international agency should be in charge of inspecting and auditing artificial general intelligence to ensure the technology is safe for humanity, according to top executives at GPT-4 maker OpenAI.

CEO Sam Altman and co-founders Greg Brockman and Ilya Sutskever said it's "conceivable" that AI will gain extraordinary abilities exceeding those of humans within the next decade.

"In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there," the trio said in a blog post on Tuesday.

The cost of building such powerful technology is only decreasing as more people work on advancing it, they argued. To keep progress in check, they proposed, development should be supervised by an international organization modeled on the International Atomic Energy Agency (IAEA).

The IAEA was established in 1957, amid Cold War fears over the spread of nuclear weapons. The agency helps regulate nuclear power and sets safeguards to ensure nuclear energy isn't used for military purposes.

"We are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc," they said.

Such a group would be in charge of tracking compute and energy use, vital resources needed to train and run large and powerful models.

"We could collectively agree that the rate of growth in AI capability at the frontier is limited to a certain rate per year," OpenAI's top brass suggested. Companies would have to voluntarily agree to inspections, and the agency should focus on "reducing existential risk," not regulatory issues that are defined and set by a country's individual laws.

Last week, in a Senate hearing, Altman put forward the idea that companies should obtain a license to build models with capabilities above a specific threshold. His suggestion was later criticized on the grounds that it could unfairly impact AI systems built by smaller companies or the open source community, which are less likely to have the resources to meet such legal requirements.

"We think it's important to allow companies and open source projects to develop models below a significant capability threshold, without the kind of regulation we describe here (including burdensome mechanisms like licenses or audits)," they said.

In late March, Elon Musk was one of 1,000 signatories of an open letter that called for a six-month pause in developing and training AI more powerful than GPT-4, citing the potential risks to humanity; Altman confirmed in mid-April that OpenAI was doing just that.

"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter stated.

Alphabet and Google CEO Sundar Pichai wrote a piece in the Financial Times at the weekend, saying: "I still believe AI is too important not to regulate, and too important not to regulate well". ®
