Slack tweaks its principles in response to user outrage at AI slurping

Salesforce division Slack has responded to criticism from users outraged that its privacy principles allowed the messaging service to slurp customer data for AI training unless specifically told not to. The company claims the data never leaves the platform and isn't used to train "third party" models.

The app maker said its ML models were "platform level" for things like channel and emoji recommendations and search results, and it has now updated the principles "to better explain the relationship between customer data and generative AI in Slack."

The company said it wanted to clarify how customer data is used in its models.

The privacy principles were overhauled in 2023 and contained the text: "To develop AI/ML models, our systems analyze Customer Data (e.g. messages, content and files) submitted to Slack."

Yes, that's correct - the principles that got customers so upset allowed Slack to analyze messages in order to train its models. In fact, pretty much anything dropped into Slack channels could be used. The implications of this are far-reaching, and users for whom the penny dropped were vocal in their criticism.

For its part, Slack has insisted that data will not leak across workspaces, although it did admit that its global models used customer data. Messages within the workspace, however, were presumably fair game.

The principles have since been tweaked slightly, and now read: "To develop non-generative AI/ML models for features such as emoji and channel recommendations, our systems analyze Customer Data."

A Slack spokesperson told The Register: "Note that we have not changed our policies or practices - this is simply an update to the language to make it more clear."

The slurping is also on by default, something that could raise an eyebrow with regulators. To turn it off, Slack demands the workspace owner email its customer experience team requesting an opt-out. It does not give any indication of how long that opt-out will take to be processed for customers who do not want to have their data used in the training of Slack's global models.

Opting out means a customer will still enjoy the benefits of the globally trained models, just without their data being part of them.

The Register asked Slack why it did not choose an opt-in model and will update this piece should we receive an explanation.

Slack says it uses the data to better parse queries, help with autocomplete and come up with emoji suggestions.

According to the company's privacy principles, "These types of thoughtful personalizations and improvements are only possible if we study and understand how our users interact with Slack."

Over on Threads, someone claiming to be an engineer at Slack says the company doesn't train LLMs on customer data. Aaron Maurer, an ML and AI person at Slack according to his LinkedIn profile, says the org's "policy, as we have published numerous places, is we do not train LLMs on customer data. For instance, to add another spot: https://slack.com/help/articles/28310650165907-Security-for-Slack-AI." In principle, however, and according to Slack's Ts&Cs, it can if it wants to.

Matthew Hodgson, CEO at Element, told The Reg that he found it "utterly mind blowing" that Slack was "proposing training AI on private customer data."

"It's bad enough that cloud vendors like Slack and Teams have access to your unencrypted data in the first place, but to then feed it into an opaque and unpredictable LLM model is terrifying."

For context, Slack is not the only service to use customer data for model training. Reddit getting friendly with OpenAI and adding its forum posts to ChatGPT is another example, although customers paying a subscription to use Slack would be forgiven for being a little surprised to find their data being used as global training fodder unless they opt out.

Slack's change happened in 2023, and the furor highlights the need for users to check what their data is being used for as AI hype continues to surge through the tech industry. ®
