"What if my AI leaks our credit card details?" "What if proprietary source code ends up in a training dataset?" These fears are legitimate, and they're the number one blocker for AI adoption in businesses that handle sensitive data. Dismissing the concern doesn't work. Neither does hoping the LLM provider's terms of service will protect you. The answer is layered protections that prevent sensitive data from reaching Claude in the first place.

Defence in depth

The protection model has three layers. First, connector permissions control what data Claude can access from external systems. If the connector doesn't expose a field, Claude never sees it. Second, request proxying inspects and sanitises data before it reaches the LLM — stripping PII, redacting secrets, and blocking known-sensitive patterns. Third, output monitoring watches what Claude produces and flags or blocks responses that contain data that shouldn't leave the system.
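To make the second layer concrete, here is a minimal sketch of the sanitising step a request proxy might run before forwarding a prompt to the LLM. The patterns, placeholder labels, and function name are illustrative assumptions, not any particular product's API; a production proxy would use a dedicated PII-detection library rather than hand-rolled regexes.

```python
import re

# Hypothetical patterns for known-sensitive data. Real deployments
# would use a purpose-built PII detector; these regexes are only a sketch.
PATTERNS = {
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # card-like digit runs
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def sanitise(text: str) -> str:
    """Replace known-sensitive spans with typed placeholders
    before the request leaves the proxy for the LLM."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

print(sanitise("Card 4111 1111 1111 1111, contact jane@example.com"))
# → Card [REDACTED:CARD], contact [REDACTED:EMAIL]
```

Typed placeholders (rather than blanking the text) let Claude still reason about the shape of the request ("a card number was provided") without ever seeing the value itself.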

Each layer catches what the others miss. A connector might expose a field that turns out to contain embedded PII. The proxy catches it. The proxy might miss an unusual format. Output monitoring catches it. This defence-in-depth approach is what makes governed deployments trustworthy for businesses with real data sensitivity requirements.
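The third layer can be sketched the same way: a monitor that classifies each response before it leaves the system. The block/flag policy, the card-number pattern, and the internal hostname scheme below are all assumptions for illustration; a real monitor would draw its rules from the organisation's own data classification policy.

```python
import re

# Hypothetical policy: block responses that contain card-like numbers,
# flag responses that mention internal hosts for human review.
BLOCK = re.compile(r"\b(?:\d[ -]?){13,16}\b")             # card-like digit runs
FLAG = re.compile(r"\b[\w-]+\.internal\.example\.com\b")  # assumed naming scheme

def monitor(response: str) -> str:
    """Classify an LLM response before it is shown to the user."""
    if BLOCK.search(response):
        return "block"
    if FLAG.search(response):
        return "flag"
    return "allow"

print(monitor("Your total is $42."))                        # allow
print(monitor("The card on file is 4111 1111 1111 1111."))  # block
```

Separating "flag" from "block" matters in practice: hard blocks stop clear policy violations outright, while flags route borderline responses to a human reviewer instead of silently failing the user.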