How we deploy Cowork safely

Enterprise AI governance rebuilt for the 5–50 person business. No bloated tooling, no guesswork — a framework that actually fits.

Hard vs soft guardrails

What the system controls vs what the LLM controls.

Hard guardrails — the system

Deterministic rules enforced by connectors, APIs, and automation. These aren't suggestions — they're structural.

  • CRM triggers and notifications to managers
  • Access control and role-based permissions
  • Delete prevention and approval workflows
  • "Sales team can't delete orders" is a connector rule, not an LLM rule

The sharp stuff is already handled before Claude ever sees it.
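A hard guardrail like "sales can't delete orders" lives in the connector code, not in a prompt. A minimal sketch of that idea (role names, actions, and helpers here are illustrative, not Cowork's actual API):

```python
# Illustrative hard guardrail: a deterministic permission check that runs
# inside the connector before any request reaches the external system.
# Roles and actions are hypothetical examples.

ROLE_PERMISSIONS = {
    "sales": {"orders:read", "orders:create", "orders:update"},
    "manager": {"orders:read", "orders:create", "orders:update", "orders:delete"},
}

def check_permission(role: str, action: str) -> bool:
    """Deterministic rule: allowed only if the role explicitly grants the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

def delete_order(role: str, order_id: str) -> str:
    # Enforced before the CRM API is ever called. No amount of prompting
    # can route around a check that lives on this side of the model.
    if not check_permission(role, "orders:delete"):
        raise PermissionError(f"role '{role}' may not delete orders")
    return f"order {order_id} deleted"
```

However Claude is prompted, a sales-role request to delete an order raises `PermissionError` before anything happens.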

Soft guardrails — the LLM

Contextual judgment calls handled by skills and prompt context. Claude operates within the boundaries set by hard guardrails.

  • Skills, rulesets, and business context shape Claude's behaviour
  • Tone, formatting, and decision-making nuance
  • Domain-specific reasoning and workflow awareness

You don't make Claude bulletproof on everything — you design the workflow so it doesn't need to be.

The calibration process

Iterative refinement, not binary permissions.

That's not how LLMs work: you can't instruct "write a document containing X, Y, and Z" and hard-stop the model when the output falls short. Instead, you review the work, decide whether it's good enough, then layer additional skills and prompt context until the output is consistently right.

We test across scenarios, edge cases, and different user behaviours. Once calibrated, all config files are marked read-only so employees can't tamper with guardrails. Skills, rulesets, and prompt context become the deployed configuration.

Employees use the system. They don't modify the system.

This is the core consulting value — we do the calibration work, test it thoroughly, then deploy it locked down.
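Marking the deployed configuration read-only can be as simple as stripping write permission from the calibrated files. A sketch using only the standard library (the directory layout and file names are hypothetical):

```python
import stat
from pathlib import Path

def lock_config(config_dir: str) -> list[str]:
    """Mark every calibrated config file read-only so it can't be edited in place."""
    locked = []
    for path in Path(config_dir).rglob("*"):
        if path.is_file():
            # Remove write bits for owner, group, and others (e.g. 0o644 -> 0o444).
            mode = path.stat().st_mode
            path.chmod(mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))
            locked.append(path.name)
    return locked
```

In practice you would pair this with version control and file ownership so only the consultant role can unlock and recalibrate.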

Connector governance

Business rules at the API level, not the LLM level.

Connectors are integrations that hook into external systems: AWS, n8n, Gmail, Google Drive, CRMs. Anthropic maintains a reviewed catalog of options, and custom connectors can be added for anything with an API.

The API behind the connector shouldn't allow actions the role shouldn't perform. When something gets won in the CRM, a notification goes to a manager — that's a hard process, not an LLM decision. CRM workflows, approval chains, notifications — all connector/automation territory.

The LLM adds judgment within those guardrails, not around them. Permissions live on the other side of the connector.
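The "deal won, notify a manager" rule is automation territory, so it can be expressed as a plain event handler. A hypothetical sketch (the event shape and `notify` hook are invented for illustration):

```python
def handle_crm_event(event: dict, notify) -> bool:
    """Hard process: a won deal always notifies the manager.

    This fires on the CRM's webhook, not on anything Claude decides.
    """
    if event.get("type") == "deal.won":
        notify(
            to=event["owner_manager"],
            message=f"Deal {event['deal_id']} won for {event['amount']}",
        )
        return True
    return False
```

Because the trigger is keyed to the event type, the notification happens every time, whether or not an LLM was involved in winning the deal.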

Observability & audit trails

Know what Claude did. Always.

How do you know what Claude did, what decisions it made, what data it touched? You capture the full chain: what the human asked, what Claude responded, what connectors were accessed, what data was read or written, and what decisions Claude made and why.

Mature tooling already exists for this: OpenTelemetry for instrumentation, with backends like SigNoz (open source) and Honeycomb. We bring them to businesses that don't know they exist. Every workflow gets a full audit trail.

Critical for compliance, and critical for understanding when something goes wrong. Making AI-assisted work auditable and defensible.
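The full chain described above maps naturally onto one structured record per interaction. A minimal sketch of what such an audit entry might capture (field names are illustrative; in a real deployment this data would flow through OpenTelemetry spans rather than a hand-rolled log):

```python
import json
import uuid
from datetime import datetime, timezone

def audit_record(user_prompt, model_response, connectors, data_touched, rationale):
    """One append-only audit entry covering the full chain of an AI-assisted action."""
    return {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": user_prompt,        # what the human asked
        "response": model_response,   # what Claude responded
        "connectors": connectors,     # which integrations were accessed
        "data_touched": data_touched, # what was read or written
        "rationale": rationale,       # the stated reason for the decision
    }

def append_to_log(record, log_path="audit.jsonl"):
    # JSON Lines: one record per line, trivially greppable and exportable
    # for compliance review.
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

When something goes wrong, the investigation starts from these records instead of from memory.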

Data protection

Sensitive information never reaches the model.

The real concern: "What if my LLM leaks credit card details or sensitive source code?" The answer is layered protection. API gateway proxies sit between the business and Claude — inspecting requests, identifying sensitive data, blocking or redacting before Claude ever sees it.

Strip PII, credit card numbers, and secrets on the way in. Monitor what Claude outputs on the way out. DLP layers on both input and output, with integration into existing solutions like Nightfall.

Connector permissions, request proxying, output monitoring — protection at every layer of the stack.
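The redact-on-the-way-in step can be sketched with simple pattern matching. Real DLP products like Nightfall use far richer detection (Luhn validation, ML classifiers); the patterns below are illustrative only:

```python
import re

# Illustrative DLP patterns -- a production gateway would use validated
# detectors, not bare regexes.
PATTERNS = {
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "AWS_KEY": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def redact(text: str) -> str:
    """Replace anything that looks sensitive before the request reaches the model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text
```

The same idea runs in reverse on the output side, so a leak is caught even if something slips past the input filter.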

Ready to deploy Cowork the right way?

Tell us about your team and we'll design a governance framework that fits.

Talk to us