Microsoft has eliminated its entire team responsible for ensuring the ethical use of AI software at a time when the Windows giant is ramping up its use of machine-learning technology.
The decision this month to ditch the ethics and society team within its artificial intelligence organization is part of the 10,000 job cuts Microsoft announced in January that will continue rolling through the IT titan into next year.
The hit to this unit may remove some guardrails meant to ensure Microsoft's products that integrate machine-learning features meet the mega-corp's standards for ethical use of AI, and comes as some raise questions about the effects of controversial artificial-intelligence models on society at large.
On the one hand, baking AI ethics into the whole business as something for all employees to consider is kinda like when Bill Gates told his engineers in 2002 to make security an organization-wide priority. On the other hand, you might think a dedicated team overseeing that would be helpful, internally.
Platformer first reported the layoffs of the ethics and society group and cited unnamed current and former employees. The group was supposed to advise teams as Redmond accelerated the integration of AI technologies into a range of products, from Edge and Bing to Teams, Skype, and Azure cloud services.
Microsoft still has in place its Office of Responsible AI, which works with the company's Aether Committee and Responsible AI Strategy in Engineering (RAISE) to spread responsible practices across day-to-day operations. That said, employees told the newsletter that the ethics and society team played a crucial role in ensuring those principles were directly reflected in how products were designed.
However, a Microsoft spokesperson told The Register that the impression that the layoffs meant the tech goliath is cutting its investments in responsible AI is wrong. The unit was key in helping to incubate a culture of responsible innovation as Microsoft got its AI efforts underway several years ago, it was said, and now Microsoft executives have adopted that culture and seeded it throughout the company.
"That initial work helped to spur the interdisciplinary way in which we work across research, policy, and engineering across Microsoft," the spokesperson said.
"Since 2017, we have worked hard to institutionalize this work and adopt organizational structures and governance processes that we know to be effective in integrating responsible AI considerations into our engineering systems and processes."
There are hundreds of people working on these issues across Microsoft "including net new, dedicated responsible AI teams that have since been established and grown significantly during this time, including the Office of Responsible AI, and a responsible AI team known as RAIL that is embedded in the engineering team responsible for our Azure OpenAI Service," they added.
By contrast, fewer than 10 people on the ethics and society team were affected by the cuts, and some were moved to other parts of the biz, including the Office of Responsible AI and the RAIL unit.
According to the Platformer report, the team had been shrunk from about 30 people to seven through a reorganization within Microsoft in October 2022.
Team members had lately been investigating potential risks involved with Microsoft's integration of OpenAI's technologies across the company, and the unnamed sources said CEO Satya Nadella and CTO Kevin Scott were anxious to get those technologies integrated into products and out to users as fast as possible.
Microsoft is investing billions of dollars into OpenAI, a startup whose products include DALL-E 2 for generating images, GPT for text (OpenAI this week introduced its latest iteration, GPT-4), and Codex for developers. Meanwhile, OpenAI's ChatGPT is a chatbot trained on mountains of data from the internet and other sources that takes in prompts from humans - "Write a two-paragraph history of the Roman Empire," for example - and spits out a written response.
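For the curious, that prompt-in, text-out loop is also exposed to developers via OpenAI's API. Below is a minimal sketch, not a production recipe, using the openai Python package as it stood in early 2023; the API key is a placeholder, and the "gpt-4" model name assumes your account has access to it.

```python
# A minimal sketch of the prompt-in, text-out loop via OpenAI's API,
# using the openai Python package (pre-1.0 interface, circa early 2023).
import openai

openai.api_key = "sk-..."  # placeholder - substitute a real API key

# Send a single user prompt to the model and ask for a completion
response = openai.ChatCompletion.create(
    model="gpt-4",  # assumes GPT-4 API access; "gpt-3.5-turbo" also worked
    messages=[{
        "role": "user",
        "content": "Write a two-paragraph history of the Roman Empire.",
    }],
)

# The generated text comes back in the first choice of the response
print(response["choices"][0]["message"]["content"])
```

Run it and the model returns its two paragraphs on Rome, for better or worse.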
Microsoft also is integrating a new large language model into its Edge browser and Bing search engine in hopes of chipping away at Google's dominant position in search.
Since being opened up to the public in November 2022, ChatGPT has become the fastest app to reach 100 million users, crossing that mark in February. However, problems with the technology - as well as similar AI apps such as Google's Bard - cropped up fairly quickly, ranging from wrong answers to offensive language and gaslighting.
The rapid innovation and mainstreaming of these large language-model AI systems is fuelling a larger debate about their impact on society.
Redmond will shed more light on its ongoing AI strategy during an event on March 16 hosted by Nadella and titled "The Future of Work with AI," which The Register will be covering. ®