Anyone can install Cowork and point Claude at a task. The hard part is getting it to do that task well, consistently, in your specific business context. That's calibration. And it's less abstract than it sounds.
Start by rethinking the process
Before you configure anything, stop and think about the process itself. An AI is going to do this work now, so ask: how would an AI do it? What are the actual steps involved? Some of those steps might need a human in the loop. Maybe the AI handles steps one through four, then a human reviews and approves before the next skill kicks in. You need to decide where those handoff points belong.
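The handoff pattern above can be sketched in a few lines. This is a hypothetical illustration, not Cowork's actual API: the step names, the review flag, and the `approve` callback are all invented here, purely to show where an approval gate sits in an otherwise automated sequence.

```python
# Hypothetical sketch of a process with an explicit human handoff point.
# Nothing here is Cowork's real API; it only illustrates the shape.

def run_with_handoff(steps, approve):
    """Run steps in order; pause before any step flagged for review."""
    completed = []
    for name, action, needs_review in steps:
        if needs_review and not approve(name, completed):
            return completed, f"paused before '{name}' for human review"
        completed.append((name, action()))
    return completed, "completed"

# Steps one through four run unattended; the final step waits for a human.
steps = [
    ("gather inputs",  lambda: "inputs collected", False),
    ("extract fields", lambda: "fields extracted", False),
    ("draft output",   lambda: "draft written",    False),
    ("format report",  lambda: "report formatted", False),
    ("send to client", lambda: "email queued",     True),
]

done, status = run_with_handoff(steps, approve=lambda name, so_far: False)
print(status)  # paused before 'send to client' for human review
```

The point of the sketch: the automated steps and the gate live in one explicit definition, so the handoff point is a design decision you made, not an accident of where the AI happened to stop.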
This is a different exercise from documenting a process for a new hire. A person fills in gaps with common sense and asks a colleague when something doesn't make sense. An AI follows what you gave it. Every assumption you carry around in your head, every bit of context that "everyone just knows" in your business, needs to be made explicit. If it's not in the configuration, it doesn't exist for Cowork.
Do the process, then let Cowork write the documentation
The best way to calibrate is to actually walk through the process with Cowork. Do it together. Step by step. Then, while it's all fresh in context, ask Cowork to write the documentation for what you just did. It's seen the whole thing. It knows the steps, the decisions, the data involved. The documentation it produces will be grounded in reality, not theory.
That documentation becomes the foundation for your skills and configuration. It's already in the language the LLM understands because the LLM wrote it.
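For concreteness, here is the general shape such a skill file often takes in the Claude ecosystem: a markdown file with a short frontmatter block, then the process steps as instructions. The skill name, description, and steps below are entirely made up for this example; yours would come from the documentation Cowork just wrote for your own process.

```markdown
---
name: invoice-triage        # hypothetical skill name
description: Triage incoming invoices and draft approval summaries.
---

# Invoice triage

1. Pull the invoice fields: vendor, amount, due date, PO number.
2. Match against the open PO list; flag anything without a match.
3. Draft a one-paragraph approval summary.
4. Stop and wait for human approval before anything is sent.
```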
Refine by doing
Put your process out there and let your first few people test it. They'll find the lumps: the places where Cowork gets stuck, misunderstands a step, or produces something off. You'd be surprised how much you assume when you write a process down, things that are obvious to you and your team but that a new person (or an AI) wouldn't know. Those gaps show up fast once real people are running real tasks.
When you find a gap, fix it. Add the missing business context to the skill. Clarify the instruction that was ambiguous. Then here's the good part: ask Cowork to rewrite its own instructions based on what it learned. It knows where it got stuck. Let it propose a better version of the process documentation. This is continuous improvement built into the tool itself.
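In practice, that feedback loop can be as simple as a prompt at the end of a bumpy run. The process name and step below are invented; the wording is just one way to phrase it:

```text
We just ran the month-end close process and you got stuck at step 3
(matching payments to invoices). Based on what you learned in this
session, rewrite your own process documentation: keep the steps that
worked, clarify step 3, and add the business context you were missing.
Show me the proposed version before changing anything.
```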
Versioning and governance
This is where governance earns its keep. You can let a team member refine a process, working with Cowork to improve the skills and instructions. But before that refined version gets pushed out to the rest of the system, it goes through review. You version it. You approve it. Then it rolls out.
Once approved, the configuration locks down. Skills and rulesets become read-only. Your team uses the system as deployed. They don't modify it on the fly. That's the difference between a governed deployment and everyone improvising their own version of the process.
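The lifecycle in this section reduces to a small state machine: a revision is editable as a draft, goes through review, and becomes read-only once approved. The sketch below is illustrative only; Cowork's actual versioning model may look nothing like this.

```python
# Illustrative state machine for a governed skill version:
# draft -> in_review -> approved (read-only). Not Cowork's real model.

class SkillVersion:
    def __init__(self, version, body):
        self.version = version
        self.body = body
        self.state = "draft"

    def submit_for_review(self):
        if self.state != "draft":
            raise ValueError("only drafts can be submitted for review")
        self.state = "in_review"

    def approve(self):
        if self.state != "in_review":
            raise ValueError("only versions in review can be approved")
        self.state = "approved"  # from here on, the body is locked

    def edit(self, new_body):
        if self.state == "approved":
            raise PermissionError("approved versions are read-only; start a new draft")
        self.body = new_body

v = SkillVersion("1.1", "refined process documentation")
v.edit("refined docs with the missing business context")  # fine while a draft
v.submit_for_review()
v.approve()
# v.edit("ad-hoc tweak")  # would raise PermissionError: the version is locked
```

The design choice this encodes is the one in the paragraph above: improvement happens in drafts, and the deployed version is immutable, so nobody improvises their own variant in production.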
