Questions we get asked

The stuff people actually want to know before they commit.

Security & Safety

Does Anthropic keep what Cowork sees on my screen?

No. Screen content gets sent to Anthropic's API for processing and then discarded. Anthropic doesn't hold onto your screenshots or data beyond what's needed to finish the current task. Nothing is stored long-term, nothing gets fed back into training. We also help you configure which apps and screens Cowork can see, so sensitive stuff never enters the loop in the first place.

Can Cowork take control of my computer?

Only if you configure it to. Cowork sees your screen and controls your mouse and keyboard, but strictly within the boundaries you set. You pick which applications it can open, which folders it can touch, and what actions it's allowed to take. Without explicit permission, it does nothing. Most teams start locked down tight and open things up gradually as they get comfortable.
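To make the boundary idea concrete, here's a minimal sketch of how a permission policy can work in principle. The field names and structure are purely illustrative assumptions, not Cowork's actual configuration format:

```python
# Hypothetical permission policy for an AI desktop agent.
# Every name here is illustrative, not Cowork's real config schema.
policy = {
    "allowed_apps": ["Outlook", "Excel", "Chrome"],
    "allowed_folders": ["C:/Work/Reports", "C:/Work/Templates"],
    "blocked_actions": ["delete_file", "send_payment", "install_software"],
    "require_approval": ["send_email", "modify_shared_document"],
}

def is_permitted(action: str, policy: dict) -> str:
    """Classify an action under the policy: blocked, needs_approval, or auto."""
    if action in policy["blocked_actions"]:
        return "blocked"
    if action in policy["require_approval"]:
        return "needs_approval"
    return "auto"
```

The point is the shape, not the syntax: a small allow-list, a hard block-list, and a short approval-list, with everything sensitive defaulting to blocked or gated.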

Can Cowork delete my files?

Not unless you specifically allow it. File deletion is one of the sensitive actions that gets blocked by default in every deployment we configure. You control exactly what Cowork can and can't do through permission settings. Deleting, moving, or modifying files outside approved folders requires explicit approval gates that you define.

Will my data be used to train AI models?

No. Anthropic's terms are clear on this. Data sent through the API isn't used to train their models. Your screenshots, documents, and interactions stay private. This is a common concern and a fair one, but on this point the answer is straightforward.

Will it make mistakes?

It will. AI agents aren't perfect, and pretending otherwise would be dishonest. That's why every deployment includes monitoring, rollback procedures, and escalation paths. When Cowork does something unexpected, you'll know about it quickly, and you'll have clear steps to correct it. The goal isn't zero errors. It's catching them fast and limiting their impact.

Who's responsible when the AI gets something wrong?

You are. That's the uncomfortable truth, and anyone telling you otherwise is selling something. The AI is a tool your business chose to use, and responsibility sits with the business. That's exactly why we put guardrails, approval gates, and monitoring in place before anything goes live. The goal is to make sure mistakes are small, caught fast, and never reach a point where liability becomes a serious conversation.

What about prompt injection?

Prompt injection is a real risk, yes. A malicious document could contain hidden instructions that try to trick Cowork into doing something unintended. This is why we configure permission boundaries and approval gates for anything sensitive. Cowork can't send money, delete critical files, or access restricted systems unless you've explicitly allowed it. The attack surface exists, but proper deployment shrinks it to a manageable size.

What about approval fatigue?

Approval fatigue is one of the biggest real-world risks with AI agents. If people get asked to approve 50 things a day, they stop reading and just click yes. Our deployment framework addresses this by being selective about what needs approval in the first place. Low-risk actions run automatically. High-risk ones get approval gates with clear context about what's happening. Fewer prompts means each one actually gets read.

Getting Started

What do Anthropic's plans cost?

At the time of writing, Anthropic offers several plans. Free gives you basic access. Pro is $20/month (or $17/month paid annually) and adds extended features, Claude Code, and more models. Max is $100 or $200/month depending on usage tier, aimed at power users who need 5x or 20x the capacity. For teams, Standard seats run $30/month per person ($25 on annual billing) with a minimum of five members, and that includes admin controls, SSO, and centralised billing. Enterprise pricing is custom. On top of subscription costs, our consulting fees depend on your team size and deployment scope. Check claude.com/pricing for the latest, since Anthropic updates these regularly.

What do we need in place before starting?

Not much. A computer running Windows or macOS, an Anthropic account, and someone willing to be the first person to try it. That's it for the technical side. The more important preparation is deciding which workflows you want to automate first and getting your team on board with the idea. We handle the rest during deployment.

Should we roll out to the whole team at once?

Stages. Always stages. Start with two or three people who are open to trying it, let them run for a couple of weeks, learn from what works and what doesn't, then expand. Every team that's tried the big-bang approach has regretted it. Staged rollout gives you time to tune the guardrails, build internal expertise, and create advocates who help bring the rest of the team along.

Does Cowork run on Windows 11?

Yes. Cowork runs on Windows 11 Home, Pro, and Enterprise. There are some differences in how IT policies and permissions work across editions, which matters for business deployments. If you're on Home edition in a business context, we'll walk through the implications and make sure your setup is solid.

How does Cowork actually run on my machine?

Cowork runs inside a virtual machine on your computer. Think of it as a contained workspace where the AI operates, separate from your main desktop. It can see and interact with applications inside that VM, but the boundary between the VM and the rest of your machine is one of your key safety layers. We configure what crosses that boundary and what doesn't.

How long does deployment take?

Four to six weeks from kickoff to full team access. First week is assessment and planning. Weeks two and three cover configuration, guardrails, and pilot testing with a small group. Weeks four through six are gradual expansion with monitoring. Some teams move faster, some slower. Depends on your comfort level and how many workflows you're automating.

Do we need technical staff?

No. Cowork is built for non-technical people. It works with your existing applications through the same screens and clicks you already use. We handle the technical setup so your team doesn't have to. After deployment, managing Cowork is closer to managing a new employee than managing software.

Team Adoption

Will this replace anyone's job?

Not with Cowork, no. It's a tool that handles the repetitive parts of someone's job so they can spend time on work that actually needs a human brain. Nobody's getting replaced by a thing that fills in spreadsheets and sorts emails. What does change is that people's roles shift toward higher-value work. That said, you need to communicate this clearly to your team, because they're definitely worried about it.

How do we introduce this to the team?

Be honest and specific. "We're bringing in a tool to handle repetitive tasks so you can focus on more interesting work" lands very differently from "We're implementing AI across the business." Name the actual tasks it'll handle. Be clear that nobody's job is at risk. And involve people early. The worst thing you can do is surprise your team with AI on their desktops one Monday morning.

What if people resist it?

Expect resistance. It's normal and honestly reasonable. People worry about being replaced, about looking incompetent, about trusting a machine with their work. We've seen this at every company we've worked with. Our approach starts with the skeptics, not the enthusiasts. When the person who was most doubtful becomes the one showing others how they use it, adoption takes care of itself.

How do we make sure people actually review approvals?

By not drowning them in approval requests. If someone gets asked to approve 40 actions a day, they'll stop reading them. Guaranteed. The fix isn't more training or stricter policies. It's better configuration. We set up Cowork so low-risk actions run without prompts, and the approval gates that remain are few enough that people actually pay attention to them.

Should we set strict rules or let people experiment?

Both, with a lean toward experimentation inside safe boundaries. Give people a clear "you can do this, don't do that" framework, then let them explore within it. The teams that get the most value from Cowork are the ones where individuals find their own uses for it. You can't predict every good use case from the top down.

What does good training look like?

Not a two-hour workshop with slides. Good training happens in the context of each person's actual work. We sit with them (virtually or in person), look at what they spend their time on, and show them how Cowork handles those specific tasks. One-on-one, practical, grounded in their daily reality. That's what sticks. Generic "intro to AI" sessions don't change behaviour.

Use Cases

Can Cowork draft proposals and contracts?

It can draft them based on templates and past examples, yes. It's good at pulling together structured documents from existing content. But you'll want a human reviewing anything that goes to a client or carries legal weight. Cowork handles the assembly and formatting. Your team handles the judgment calls about what the document actually says.

Can it manage email and calendars?

Yes, and this is one of the most popular use cases. Cowork can sort incoming email, draft replies, flag things that need attention, and manage calendar scheduling. Most teams set it up with approval gates on sending emails (so nothing goes out without a human check) while letting it handle the sorting and drafting automatically.

What can it do with spreadsheets?

Cowork interacts with spreadsheets the same way you do, through the application on your screen. It can enter data, run calculations, create charts, move data between sheets, and generate reports. It's especially good at the tedious stuff: copying data from one system into another, cleaning up formatting, pulling numbers from multiple sources into a single report.

Will it work with our CRM?

If your team can use the CRM through a browser or desktop app, Cowork can too. It works through the screen, not through APIs. So there's no integration to build, no plugin to install, no compatibility list to check. Salesforce, HubSpot, Pipedrive, or that custom thing your industry uses. If a person can click through it, Cowork can learn to do the same.

Can we use it for HR tasks?

Tread carefully. Cowork can technically work with any application, but that doesn't mean it should have access to everything. We configure strict boundaries around sensitive data. Most businesses keep HR systems, payroll, and employee records outside Cowork's reach entirely. If you do want AI assistance with HR workflows, we set up extra approval layers and limit access to specific tasks.

Is it less useful for a small team?

Often more useful, actually. In a small team, everyone wears multiple hats and time is the scarcest resource. If one person spends 10 hours a week on data entry and admin, getting half of that back is a big deal. The ROI per person tends to be higher in smaller teams because there's less slack in the system to begin with.

Is it only good for process automation?

This is where people underestimate it. Process automation is obvious. But a busy professional making a judgment call often needs to pull information from five different places: old emails, a conversation from three months ago, external research, internal documents, a contact's history. That multi-source legwork is exactly what Cowork is good at. Ask it to gather what you need, and you get a briefing in minutes instead of spending an afternoon digging.

Creative work is similar. Cowork won't design your graphics (yet), but it's a capable note-taker, writing companion, and editor. Talk through your ideas, let it capture and structure your thoughts, clean up your prose, check your spelling as you go. For anyone who writes content or makes decisions based on scattered information, the productivity gain is real and often bigger than the process automation savings.

Compliance

Do we need a formal AI policy for a business our size?

Yes. Size doesn't matter here. If your team is using AI, you need written rules about how they use it. Doesn't need to be 30 pages. A one-page document covering what's allowed, what's not, and what needs approval is enough to start. Without it, every person is making their own judgment calls about what's appropriate, and those judgments won't always agree.

What about HIPAA and health data?

It's complicated. Cowork sends screen content to Anthropic's API, which means protected health information would leave your local environment. That creates HIPAA considerations around data transmission and business associate agreements. It's not a flat no, but it requires careful configuration and legal review. We can help you figure out which workflows are viable and which ones to keep manual.

What about SOC 2 and PCI-DSS?

Anthropic maintains SOC 2 Type II compliance for their API infrastructure. For PCI-DSS, the key question is whether Cowork ever sees payment card data on screen. If it does, that's in scope and needs to be handled accordingly. We configure deployments so that card data stays outside Cowork's view unless you've specifically addressed the compliance requirements.

Are there AI-specific laws we need to follow?

It depends where you operate and what you're using AI for. Several US states have passed or are passing AI-specific legislation, with requirements around disclosure, bias testing, and automated decision-making. The landscape is moving fast. We stay across the major state-level developments and help you understand which ones affect your specific use of Cowork. But we always recommend looping in your own legal counsel for anything compliance-critical.

Can we audit what it does?

Cowork logs what it does. We configure those logs to capture the level of detail you need: what actions were taken, what was approved, what was flagged, and what the outcomes were. For most businesses, the built-in logging plus a simple review process is enough. For regulated industries, we can set up more detailed tracking that satisfies auditor requirements.
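As a sketch of what a useful entry in such a log captures, in structured form. The field names here are assumptions for illustration, not Cowork's actual log schema:

```python
import json
from datetime import datetime, timezone

# Hypothetical audit-log entry for one agent action. The fields mirror
# what the answer above describes: the action taken, how it was
# approved, and the outcome.
entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "action": "draft_email",
    "application": "Outlook",
    "approval": "auto",   # or e.g. "approved_by:jane", "blocked"
    "outcome": "success",
}

# One JSON object per line makes logs easy to filter, review weekly,
# or hand to an auditor.
print(json.dumps(entry))
```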

Can Cowork be used in a GDPR-compliant way?

It can, but compliance depends on how you deploy it. We configure data handling, retention policies, and access controls to meet GDPR requirements for your situation. That includes making sure personal data processing has a lawful basis, that data minimisation principles are followed, and that your team knows which workflows involve personal data. We're not lawyers, but we know what your lawyers will ask about.

Costs & ROI

What's the real total cost?

Subscription costs range from $20/month for Pro up to $200/month for Max, depending on usage needs. Team seats start at $25/month per person on annual billing. Add our consulting and training fees on top of that, which depend on team size and deployment scope. Then factor in the time your team spends learning the tool during the first few weeks, because that's a real cost even if it doesn't show up on an invoice. We give you a full breakdown upfront so you can budget properly. See claude.com/pricing for current Anthropic pricing.
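As a rough illustration of how the pieces add up for a five-person team on annual billing. The seat price comes from the answer above; the consulting figure is a placeholder assumption, not a quote:

```python
# Back-of-envelope first-year budget for a five-person team.
seats = 5
seat_price_monthly = 25                 # per person, annual billing
subscription_per_year = seats * seat_price_monthly * 12

consulting_fee = 5000                   # placeholder: varies with scope
first_year_total = subscription_per_year + consulting_fee

print(subscription_per_year)  # 1500
print(first_year_total)       # 6500 under these assumptions
```

Swap in your own headcount and scope; the structure of the calculation is the point.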

How fast will we see ROI?

Most teams see measurable time savings within the first two weeks of full deployment. Whether that translates to financial ROI depends on what you do with that time. If your $80/hour employee saves 10 hours a week on data entry, the maths is straightforward. But honestly, ROI varies a lot. Some workflows save massive amounts of time. Others save a little. We help you start with the high-impact ones so the value is obvious early.

How much time will we actually save?

For process work (data entry, report generation, email sorting), 25 to 30 percent time savings is a reasonable baseline. Some tasks see 50 percent or more. But the gains don't stop at process work. Busy professionals are seeing real value in decision-making and creative work too. Need to make a call on a vendor? Ask Cowork to pull the relevant emails, research their track record, and summarise what your team discussed about them three months ago. That kind of multi-source research used to take an afternoon. Now it takes minutes. The biggest gains often come from work people assumed AI couldn't help with.

How do we know if it's worth the cost?

Do the maths on your own numbers. Team seats start at $25/month per person on annual billing. If each person saves five hours a month (a conservative estimate), and their loaded cost is $50/hour, that's $250 in recovered time against $25 in subscription costs. For most teams the numbers work, but not for every role. Some people will get enormous value. Others barely touch it. We help you identify who benefits most so you're not paying for seats that don't earn their keep.
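The worked example above, as plain arithmetic. The hours and hourly cost are the answer's illustrative figures; substitute your own:

```python
# Conservative scenario from the answer: five hours saved per month
# at a $50/hour loaded cost, against a $25/month seat.
hours_saved_per_month = 5
loaded_cost_per_hour = 50
seat_price_monthly = 25

recovered_value = hours_saved_per_month * loaded_cost_per_hour  # $250
net_benefit = recovered_value - seat_price_monthly              # $225

print(net_benefit)
```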
