Anthropic Claude wants to be your helpful colleague, always looking over your shoulder

Anthropic on Monday announced the research preview of Claude Cowork, a tool for automating office work that comes with the now familiar recitation of machine learning risks.

The company said Cowork was inspired by Claude Code, its tool for generating programming code using natural language prompts and whatever files and tools have been made accessible.

With Claude Code, the output is generally source code or related artifacts like documentation or configuration files. With Cowork, the output may be local files like spreadsheets or interaction with desktop or web applications.

Cowork may find fans among knowledge workers who have to create presentations but take no pride in their work, who need to move data between applications without codified import and export rituals, or who have to conduct extensive reformatting operations in spreadsheets.

It may also find favor as a crutch that allows people with minimal application skills to delegate activities beyond their competency to an AI model with a different set of limitations. Think of all the charts Cowork can produce for those untutored in data science and statistics, or the proposals and position papers it can generate for summarization by other AI models.

But to function, Cowork must have access to files and applications.

That's where the warning comes in. Anthropic makes it clear that Cowork is a research preview and should be used with caution. Users are advised not to give Cowork access to local files containing sensitive information.

When used in conjunction with the Claude in Chrome extension (which comes with its own safety warning), Cowork should be limited to accessing trusted websites. The same goes for modifying Claude's default internet settings.

Also, users are advised to "monitor Claude for suspicious actions that may indicate prompt injection." Yes, if you happen to sic the AI model on a file or website that contains text that could be interpreted as a system instruction rather than data, you could trigger an indirect prompt injection attack. Security researchers have repeatedly shown how relatively easy it is to craft such attacks.

Anthropic goes on to offer reassurance that it implements various layers of protection, like reinforcement learning to train Claude to refuse malicious instructions, and content classifiers that attempt to catch malicious content before it can be passed to the underlying model.

But the company follows its mollification with a cautionary reprise: "Important: While we've enacted these safety measures to reduce risks, the chances of an attack are still non-zero. Always exercise caution when using Cowork." You've been warned.

For those still not convinced to proceed carefully, there's a disclaimer of liability that insists "You remain responsible for all actions taken by Claude performed on your behalf."

In case that's not clear, the company says this includes content published or messages sent, purchases or financial transactions, data accessed or modified, and compliance with the terms of service of third-party websites, some of which, like Amazon, prohibit automated interaction without permission.

Once you've gotten past the disclaimers and the necessary permissions have been granted, and if you happen to meet the access requirements - a Claude Max subscription ($100 or $200 per month) and the Claude macOS desktop app - then you can tell Cowork "Please help organize my desktop." With any luck, the assistive software will rearrange your messy desktop files more thoroughly than your Mac's Clean Up Selection command.

That's not all. "Claude can then read, edit, or create files in that folder," Anthropic explains in its blog post. "It can, for example, re-organize your downloads by sorting and renaming each file, create a new spreadsheet with a list of expenses from a pile of screenshots, or produce a first draft of a report from your scattered notes."

Cowork can be extended to work with third-party apps and sites via Connectors, and with specific file formats like PowerPoint (pptx), Excel (xlsx), Word (docx), and PDF (pdf) via Skills.

The challenge is figuring out what makes sense to automate. Some batch operations on desktop files may be more efficient as CLI commands or shell scripts, which Claude might even compose if asked.
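As a rough illustration of the kind of one-off script Claude might compose, here is a minimal POSIX shell sketch that sorts a folder's files into subfolders named after their extensions. The demo directory and sample filenames are assumptions for illustration; point `SRC` at a real folder (say, `~/Downloads`) to use it in earnest, and test on copies first.

```shell
#!/bin/sh
# Sketch: sort files into subfolders named after their extensions.
# Works on a throwaway demo directory so it is safe to run as-is;
# the sample files below are placeholders, not anything Cowork-specific.
SRC="$(mktemp -d)"
touch "$SRC/report.pdf" "$SRC/notes.txt" "$SRC/invoice.pdf"

for f in "$SRC"/*.*; do
  [ -f "$f" ] || continue        # skip directories and unmatched globs
  ext="${f##*.}"                 # extension after the last dot
  mkdir -p "$SRC/$ext"
  mv "$f" "$SRC/$ext/"           # e.g. report.pdf -> pdf/report.pdf
done

ls -R "$SRC"
```

A three-line loop like this handles the spilled-downloads case in milliseconds, with no model, no permissions prompt, and no prompt-injection surface.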

Anthropic has implicitly acknowledged that users may have trouble coming up with ways to employ Claude by publishing a list of suggested use cases. These include tasks like "clean up promotional emails," "organize files in Google Drive," and "thoughtful gift giving with Claude."

Anthropic said it expects to make improvements to Cowork based on feedback, and to bring the automation service to Windows. ®
