
The 3Cs of AI Adoption

Most leaders stop at Cost. The AI wins are in Compliance and Competency.

A people leader's operating model for rolling out AI: no overspending, no data leaks, no deploying to a team that isn't ready.

If you lead a team, you've probably had this conversation in the last six months: which AI tool should we buy, and how much will it cost? It's a reasonable question. It's also the wrong place to start.

Cost is the easy part. The platforms are largely commoditized at the entry tier: twenty dollars a seat, more or less, for any of them. The real questions are the ones nobody asks first: where can your team's data safely go, and is your team actually ready to use this stuff?

Three lenses, in order:

01
Cost.
What tools, what stack, what does it actually cost to arm your team end-to-end.
02
Compliance.
Where the actual risk lives: what data goes where, and how hard to scrutinize the output.
03
Competency.
Your team's real skill level, measured rather than guessed. How to survey, pilot, standardize.

Most leaders start and stop at Cost. The wins are in Compliance and Competency.

What follows is the operating model. Each lens has a specific decision attached to it, and the decisions need to happen in this order. If you skip Compliance, you'll roll out AI and discover six months later that your client data has been training someone's model. If you skip Competency, you'll roll out to your weakest users and skip your strongest. Both are unforced errors.

• • •

Cost: The Three Platforms That Matter

The AI market looks chaotic, but for a team leader trying to make a buying decision, it's actually pretty narrow. There are three platforms worth seriously considering, and one default you're probably already paying for.

OpenAI: $20 · $100 · $200 / seat
Most cost-effective. Lowest entry, biggest ecosystem.
Suite: ChatGPT (the chat interface most people already know) · Codex (agent workstation for technical work) · OpenAI Platform (prompt engineering, API, agents, audio + image)

Anthropic: $20 · $100 · $200 / seat
Premium. Best agent stack and reasoning quality.
Suite: Claude (strongest for nuanced reasoning + writing) · Anthropic Console (workbench, prompt engineering, API) · Claude Desktop (full computer-use access) · Skill Jar (pre-built skills) · Claude Design

Google: $20 · $250 · Workspace
Broadest. Drive integration is the killer feature.
Suite: Gemini (chat, deep Workspace integration) · AI Studio (Google's "Lovable" equivalent) · NotebookLM (research with cited answers) · Labs · Jules · Antigravity (creative + coding agents)
Where most of you start: most corporate teams default to Microsoft Copilot, and many employees find it inefficient for actual work. The real question isn't "should we use AI?" It's: keep Copilot, add on top, or swap?

If you're starting from scratch and want the broadest landing pad: OpenAI. Cheapest entry, biggest third-party ecosystem, and ChatGPT is what most of your team has already used at home.

If your team does serious knowledge work (long documents, regulated content, agent workflows), Anthropic. It costs the same at the entry tier but earns its premium reputation on the agent stack: Claude Projects for shared context, Computer Use for actual workflow automation, and a developing skill ecosystem.

If you live in Workspace, Google. Drive integration means no copy-paste; Gemini reads and writes Docs, Sheets, and Gmail directly. The trade-off: the broader Google AI suite is powerful but disconnected. Pick the two or three tools that fit your workflow. Don't try to use all of them.

Whichever you choose, decide the Copilot question explicitly. "Keep it for the calendar, add Claude for writing" is a real strategy. "Pay for Copilot and ChatGPT and Gemini for everyone, just in case" is how you spend $400/month/person on tools nobody actually opens.

• • •

Compliance: "Is My Data Safe?"

This is the question every team leader asks within the first ten minutes of any AI conversation. The honest answer is more useful than the marketing one.

The risk is less about the AI provider and more about the account tier. The lower the tier of account, the fewer controls you usually have.

Three things to internalize:

OpenAI, Anthropic, and Google all invest heavily in security. They have to: they're selling to enterprises, regulated industries, and governments. Their underlying infrastructure is comparable to the SaaS tools you already trust: Workspace, Dropbox, SharePoint. The provider isn't the vulnerability.

The vulnerability is in how you access the tool, not whether you use it. The same Claude or ChatGPT model behaves very differently depending on whether you reach it through an enterprise contract, a team subscription, a personal account, or a browser extension. Same brain, different walls.

Your team's risk is set by the lowest-tier account anyone on the team is using. If twelve people are on the enterprise plan and one person is pasting client data into a personal ChatGPT account on their home laptop, your real exposure is that one person's setup.

The Account Tier Hierarchy

Four tiers, top to bottom, most secure to most exposed. Your job is to position your team at the highest tier they actually need, then write down the floor for sensitive data.

T1
Enterprise API (contract / BAA required)
Zero data retention. No training on your data. Audit logs. Contractual commitments you can show a regulator. See Anthropic Enterprise or OpenAI Enterprise for terms.
For: Healthcare · Finance · Legal · anything touching PHI, PII, or financials.
T2
Team / Business subscription
Usually no training on your data. Retention varies; read the terms. Some admin controls.
For: most teams of 5+. The minimum responsible default.
T3
Personal subscription (Plus / Pro)
May be used for training unless you toggle it off. Limited admin controls. No central oversight.
For: solo experimentation only. Not for team work.
T4
Browser-based + extensions
Adds extension-intercept surface area on top of any tier underneath. The extension can see whatever the page sees.
Avoid for sensitive data. Full stop.

The shorthand to give your team: assume anything you put in a personal account or a browser extension can be seen. If that's not true, fine. You've lost nothing by being cautious. If it is true, you've saved yourself a board meeting.
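The weakest-link rule is simple enough to write down as code. A minimal sketch in Python, with illustrative names only; this is the logic of the rule, not any vendor's API:

```python
from enum import IntEnum

class Tier(IntEnum):
    """Account tiers from the hierarchy above; higher = more secure."""
    BROWSER_EXTENSION = 1  # T4: the extension sees whatever the page sees
    PERSONAL = 2           # T3: may train on your data unless toggled off
    TEAM = 3               # T2: the minimum responsible default
    ENTERPRISE_API = 4     # T1: zero retention, contractual commitments

def team_exposure(accounts: dict[str, Tier]) -> Tier:
    """Your real exposure is the lowest-tier account anyone is using."""
    return min(accounts.values())

accounts = {
    "alice": Tier.ENTERPRISE_API,
    "bob": Tier.TEAM,
    "carol": Tier.PERSONAL,  # one personal account sets the floor for everyone
}
assert team_exposure(accounts) == Tier.PERSONAL
```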

• • •

The Operating Model: What Data Goes Where

Tier hierarchy tells you which tools are okay. It doesn't tell you what's okay to put into them, or how hard to check what comes out. For that you need two more systems, working together: document classification on the way in, and a level of scrutiny on the way out.

Document classification (Traffic Light)
Green: public · non-sensitive
Marketing copy, public reports, brainstorming, ideation. → Any tier, any tool.
Yellow: internal · proprietary
Strategy docs, internal comms, draft work product. → Team subscription minimum. No personal accounts.
Red: confidential · regulated
Client data, PII / PHI, financials, legal, contracts. → Enterprise API only, with a verification process.
Human review (Level of Scrutiny)
Low (1 of 3)
Internal drafts, brainstorming, ideation. → Light review. One pass. Trust the model.
Medium (2 of 3)
Client-facing or external-facing work. → Substantive review. Edit + verify claims. SME check.
High (3 of 3)
Regulated, contractual, legally binding. → Verified human-in-the-loop. Documented. Multiple eyes.
Pair them: match the traffic light to the scrutiny level. Red doc = High scrutiny. Always. A two-line rule like this, written down once, prevents 90% of the "wait, who reviewed this?" conversations.
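If you want the rule to be checkable rather than aspirational, the pairing fits in a small policy table. A sketch with the same illustrative Tier enum as before; the mapping is the rule above, everything else is hypothetical:

```python
from enum import IntEnum

class Tier(IntEnum):
    """Same illustrative hierarchy as the earlier sketch."""
    BROWSER_EXTENSION = 1
    PERSONAL = 2
    TEAM = 3
    ENTERPRISE_API = 4

# Traffic light -> minimum account tier + matching scrutiny level.
POLICY = {
    "green":  {"floor": Tier.BROWSER_EXTENSION, "scrutiny": "low"},     # any tier, any tool
    "yellow": {"floor": Tier.TEAM,              "scrutiny": "medium"},  # team subscription minimum
    "red":    {"floor": Tier.ENTERPRISE_API,    "scrutiny": "high"},    # Enterprise API only
}

def allowed(doc_class: str, account_tier: Tier) -> bool:
    """A doc may enter a tool only at or above its classification's floor."""
    return account_tier >= POLICY[doc_class]["floor"]

assert not allowed("red", Tier.TEAM)   # client data needs T1. Always.
assert allowed("yellow", Tier.TEAM)    # internal drafts are fine on a team plan
```

The point isn't the code; it's that the rule lives in one written-down place, so a violation is a lookup, not a debate.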

• • •

Competency: Stop Guessing Where Your Team Is. Measure It.

Here's where most rollouts quietly fail. A leader picks a platform, sends out a license, runs a training, and assumes the team will figure it out. Six months later, three people are doing remarkable work with AI, eight are using it like a slightly better Google, and the rest haven't opened it since the kickoff. The gap isn't access. It's fluency, distributed unevenly across the team, and nobody measured it.

Don't deploy AI to a team you haven't measured. You'll roll out to your weakest users and skip your strongest.

The Competency Loop is three steps. It's not optional. It's not "after we figure out the tool." It's the thing you do before you scale.

01
Survey.
Anonymous baseline of your team's current state. Five minutes. Eleven question groups.
02
Score.
A 3-pillar diagnostic (Conceptual, Operational, Governance) run org-wide and by department.
03
Pilot.
Targeted rollout based on what the survey actually told you, not what you assumed going in.

The three pillars are deliberately separate. Treating them as one number hides the most useful information.

Conceptual
Do they understand the AI landscape: models, vendors, what's possible right now?
Operational
Can they actually use the tools? Prompting, files, agents, workflows?
Governance
Do they understand the risk profile and the rules for responsible use?

A team can score high on Conceptual (they read the newsletters) and low on Operational (they've never built a workflow). Or strong on Operational and weak on Governance. That second pattern is exactly the team that will eventually paste a client contract into a personal ChatGPT account. You only see those mismatches if you measure the pillars separately.
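Here's what "measure the pillars separately" can look like once the survey results come in. A sketch that assumes each pillar rolls up to a 0–100 score; the scale, the 25-point threshold, and the function name are illustrative assumptions, not part of the diagnostic:

```python
def flag_mismatches(scores: dict[str, float], gap: float = 25.0) -> list[str]:
    """Flag pillar pairs whose spread exceeds the gap threshold (assumed scale: 0-100)."""
    pillars = list(scores)
    return [
        f"{a} ({scores[a]:.0f}) vs {b} ({scores[b]:.0f})"
        for i, a in enumerate(pillars)
        for b in pillars[i + 1:]
        if abs(scores[a] - scores[b]) >= gap
    ]

# Strong Operational, weak Governance: the team most likely to paste
# a client contract into a personal ChatGPT account.
print(flag_mismatches({"Conceptual": 60, "Operational": 80, "Governance": 40}))
# -> ['Operational (80) vs Governance (40)']
```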

• • •

After the Survey, Build Three Things, in This Order

Once you have a baseline, the rollout has a shape. Three artifacts, in a specific order. The order matters because each one feeds the next.

→ 01
Prompt Library.
Standardize prompts across the team so people stop reinventing every chat. Hosted where your team already lives: SharePoint, Notion, Drive. Built using a framework (CRAFT, below).
First: highest leverage, lowest effort.
→ 02
Power-User Pilots.
Don't roll out to everyone. Start with the people the survey shows are operationally strongest: tier 5+ on the survey's behavior ladder (not the T1–T4 account tiers above). Power users → wins → pull, not push.
Second: pulls from the survey + library.
→ 03
Resource Center.
Central hub: prompt library, skill library, governance one-pager, recorded demos. Becomes the team's source of truth as you scale.
Third: caps the rollout. Emerges naturally once you have 2–3 wins.

Survey first. Then library. Then pilots. Then center. Don't skip steps. The most common failure mode is launching with a giant resource center that nobody reads, because there's no library inside it that anyone trusts and no pilot success stories pointing people toward it.

• • •

CRAFT: A Prompt That Gets the Same Answer Every Time

The prompt library is the highest-leverage artifact in the whole rollout. Done right, it does three things at once: it standardizes quality across the team, it gives new hires institutional wisdom on day one, and it turns prompts into an asset that compounds. Bad ones get replaced, good ones get reused.

The framework is CRAFT. Five parts. Use all five and the same request gets a consistent answer, no matter who on the team runs it.

C
Context.
The situation, the audience, what you're working on. "I'm a people manager prepping for a 1:1 with a direct report who missed a deadline."
R
Role.
Who AI should act as. "Act as an experienced HR coach who specializes in performance conversations."
A
Action.
What specifically you want it to do. "Draft 3 opening questions for the conversation."
F
Format.
How the output should look. "A numbered list, 1–3. Each question ≤ 15 words."
T
Tone.
The voice and register. "Curious and supportive, not accusatory."

Same request, consistent answer, no matter who on the team runs it. New hires inherit the team's accumulated wisdom on day one. Prompts become an asset that compounds.
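In the library, each CRAFT prompt can live as a fill-in-the-blanks template. A minimal sketch using the 1:1 example above; the template helper is hypothetical, and a shared doc works just as well as code:

```python
# A CRAFT library entry as a reusable template. Fields mirror the five parts.
CRAFT_TEMPLATE = """\
Context: {context}
Role: {role}
Action: {action}
Format: {format}
Tone: {tone}"""

one_on_one_prep = CRAFT_TEMPLATE.format(
    context="I'm a people manager prepping for a 1:1 with a direct report who missed a deadline.",
    role="Act as an experienced HR coach who specializes in performance conversations.",
    action="Draft 3 opening questions for the conversation.",
    format="A numbered list, 1-3. Each question 15 words or fewer.",
    tone="Curious and supportive, not accusatory.",
)
print(one_on_one_prep)  # paste into whichever tool your tier policy allows
```

Swap the filled-in fields and the structure stays constant; the structure is what makes the output consistent across the team.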

If you can only build one thing this quarter, build the library. Five prompts, written in CRAFT, in a shared doc. Add to it as the team finds patterns that work. Cut prompts that don't earn their keep. In six months you have an artifact your team would defend.

• • •

Three Things to Do This Week

The framework above is the long view. Here's the short one: what to actually do on Monday morning.

01
Send the survey.
Goal: 80%+ response rate in 5 business days. Anonymous. Five minutes.
02
Pick one pilot.
A workflow 2–3 people on your team are doing manually today: research, summaries, doc review, comms drafts.
03
Start the resource center.
Even one shared doc with five prompts is a start. Don't wait for perfect.

The Throughline

Cost is the conversation everyone has. Compliance is the conversation that prevents the worst-case scenario. Competency is the conversation that determines whether any of this actually works.

Stop guessing where your team is. Measure it. Then build in order: library, pilots, center. And don't wait for perfect.

• • •

Frequently Asked Questions

What are the 3Cs of AI adoption?

Cost, Compliance, and Competency, in that order. Cost is the platforms and the price per seat. Compliance is where data can safely go and how hard to scrutinize the output. Competency is your team's actual skill level, measured rather than guessed. Most leaders stop at Cost. The wins are in Compliance and Competency.

Which AI platform should my team use?

Three matter for most teams: OpenAI (most cost-effective, biggest ecosystem), Anthropic (premium, best agent stack), Google (broadest, deep Workspace integration). Most corporate teams already pay for Microsoft Copilot. The real question isn't whether to adopt AI, it's "keep Copilot, add on top, or swap?"

Is my data safe in ChatGPT, Claude, or Gemini?

The risk is less about the provider and more about the account tier. The lower the tier, the fewer controls you have. Your team's risk is set by the lowest-tier account anyone on the team is using: assume anything in a personal account or a browser extension can be seen.

What's the minimum responsible default for a team?

A Team or Business subscription (Tier 2). It typically excludes your data from training, gives you some admin controls, and provides a contractual surface to point to. For regulated data (healthcare, finance, legal), go straight to Tier 1: Enterprise API with a contract or BAA.

What is the CRAFT prompt framework?

A five-part structure for writing prompts that produce consistent results: Context, Role, Action, Format, Tone. Built into a team prompt library, CRAFT means the same request gets a consistent answer no matter who runs it. New hires inherit the team's accumulated wisdom on day one.

How do I measure my team's AI competency?

Run an anonymous baseline survey across three pillars: Conceptual (do they understand the landscape?), Operational (can they actually use the tools?), Governance (do they understand the risk profile?). Pilot with the people the survey shows are operationally strongest, not the people you assumed were.

• • •


Start with the diagnostic.

Five minutes, anonymous. See where your team actually is across Conceptual, Operational, and Governance. Then we'll help you build the library, the pilots, and the center.

Take the Diagnostic · Work With Us