OpenAI loses another senior figure, disperses safety research team he led

OpenAI has lost another senior staffer, and on his way out the door he warned that neither the company nor any other AI shop is ready for artificial general intelligence.

The departing exec is Miles Brundage, who on Friday will cease working as senior advisor for AGI readiness. AGI - artificial general intelligence - is the term used to describe AI that appears to have the same cognitive abilities as a human. Like people, AGIs could theoretically learn almost anything. Preparing for the arrival of AGI is regarded as an important and responsible action, given the possibility that AGIs could outperform humans in some fields.

Brundage revealed his departure in a Substack post in which he explained his decision as a desire to contemplate OpenAI's AGI readiness - and the world's - without having his view biased by being an employee.

In his post, he wrote that OpenAI has "gaps" in its readiness - but so does every other advanced AI lab.

"Neither OpenAI nor any other frontier lab is ready, and the world is also not ready," he wrote, adding "To be clear, I don't think this is a controversial statement among OpenAI's leadership, and notably, that's a different question from whether the company and the world are on track to be ready at the relevant time (though I think the gaps remaining are substantial enough that I'll be working on AI policy for the rest of my career)."

Despite that state of unreadiness, his post reveals that OpenAI's AGI readiness team will be dispersed among other teams, as part of a re-org.

Brundage also opined that "AI and AGI benefiting all of humanity is not automatic and requires deliberate choices to be made by decision-makers in governments, non-profits, civil society, and industry, and this needs to be informed by robust public discussion." Those efforts need to consider both safety and equitable distribution of benefits, he suggested.

"I think AI capabilities are improving very quickly and policymakers need to act more urgently," he noted, but suggested recent experience in fields such as pandemic preparedness mean action won't happen unless leaders can communicate a sense of urgency.

"I think we don't have all the AI policy ideas we need, and many of the ideas floating around are bad or too vague to be confidently judged," he added.

Brundage wrote that one idea he disagrees with "is for democratic countries to race against autocratic countries."

"I think that having and fostering such a zero-sum mentality increases the likelihood of corner-cutting on safety and security," he suggested, before urging "academics, companies, civil society, and policymakers [to] work collaboratively to find a way to ensure that Western AI development is not seen as a threat to other countries' safety or regime stability, so that we can work across borders to solve the very thorny safety and security challenges ahead."

He affirmed that collaboration is important - despite his belief that it is very likely "Western countries continue to substantially outcompete China on AI."

The Middle Kingdom and other autocratic nations have enough tech to "build very sophisticated capabilities," so failing to engage with them on safety and risk management would be dangerously short-sighted.

While Brundage sees many reasons to consider AI safety, he also sees plenty of upside.

"I think it's likely that in the coming years (not decades), AI could enable sufficient economic growth that an early retirement at a high standard of living is easily achievable," he wrote. "Before that, there will likely be a period in which it is easier to automate tasks that can be done remotely."

But there may be some tough years first. "In the near term, I worry a lot about AI disrupting opportunities for people who desperately want work."

If we get it right, he thinks humanity will have the option to "remove the obligation to work for a living."

"That is not something we're prepared for politically, culturally, or otherwise, and needs to be part of the policy conversation," he suggested. "A naïve shift towards a post-work world risks civilizational stagnation (see: Wall-E), and much more thought and debate about this is needed."

Post-work is also a matter OpenAI considers quite often, as Brundage joins the org's CTO, chief research officer and research VP, plus co-founder Ilya Sutskever, as recent departures from the AI standard-bearer.

Brundage played down the significance of his own departure, writing "I have been here for over six years, which is pretty long by OpenAI standards (it has grown a lot over those six years!)." ®
