
AI Training Week 2: Use Cases by Role (Workshop Format)
TL;DR
- Week 2 is when generic prompt drills die — split the cohort by role and run parallel workshops.
- Each role-track produces a backlog of 5-8 use cases by Friday, scored by time-saved and risk.
- The deliverable isn't a slide; it's the top 1-2 use cases each team commits to building in week 3.
After watching 30+ founders run week 2 of an AI rollout, my conclusion is this: the cohort that splits by role beats the cohort that stays as one room, every single time.
Why role-splitting matters in week 2
Week 1 was about removing fear and producing one artifact per person. By week 2, that universal lesson has hit its ceiling. A salesperson and a controller now need fundamentally different content — and if you keep them in the same room, both stop paying attention.
Stanford's research on AI deployment in 51 organizations found that the productivity gain depends heavily on how the workflow is shaped: escalation-routing AI yielded around 71% productivity uplift while approval-routing yielded around 30%. The point isn't the percentages — it's that the same tool produces wildly different outcomes depending on the role's workflow shape. Week 2 is where you start letting each team find its own shape.
Definition: Use case — a specific task, owned by a specific role, where AI replaces or augments a step in an existing workflow. Generic ("we should use AI for marketing") doesn't count.
How to structure week 2
The structure that works:
- Monday — 30-minute cohort opener. Recap of week 1 wins. Announce role tracks.
- Tuesday — 90-minute role workshop, in parallel. One AI champion per role-track facilitates.
- Wednesday/Thursday — async use-case capture. Each employee submits 2-3 candidate use cases via a shared form.
- Friday — 60-minute backlog scoring. Each role-track scores its candidates and commits to the top 1-2 for week 3.
Most SMBs need three to seven role tracks. For a 100-person company I usually split: Sales, Customer Success, Operations, Finance, Marketing, People/HR, Engineering. Below 50 people you can collapse some.
Definition: Role-track — a parallel session inside the same week, scoped to one job function, run by an AI champion who lives that function day-to-day.
The role workshop script (90 minutes)
[0:00-10:00] Champion opens. Shows ONE use case from their own week —
before/after, with the actual prompt and the actual output.
[10:00-30:00] Round-robin: each participant names ONE task they spent
more than 30 minutes on last week that felt repetitive.
No "AI-fying" yet. Just naming the pain.
[30:00-55:00] Pair-and-prompt: pairs work on each other's task. 12 minutes
per task. The prompt is written together, on a shared screen.
[55:00-75:00] Scoring drill: each pair scores their use case on three axes:
time-saved (per week, per person), risk (low/med/high),
and ownership (who builds it in week 3?).
[75:00-90:00] Backlog sort: champion projects all use cases on the screen.
Group votes top 5-8 for the role-track backlog.
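The Friday scoring and sort can be captured in a few lines of code. This is a minimal sketch, not a prescribed tool: the field names, the risk multipliers, and the "hours saved discounted by risk" formula are all illustrative assumptions layered on the three axes from the drill.

```python
# Illustrative backlog-scoring sketch for the Friday session.
# Field names and the scoring formula are assumptions, not a
# prescribed tool -- adapt the weights to your own risk appetite.

RISK_PENALTY = {"low": 1.0, "med": 0.6, "high": 0.2}

def score(use_case: dict) -> float:
    """Higher is better: weekly hours saved, discounted by risk.
    Unowned use cases score zero -- no owner, no commitment."""
    if not use_case.get("owner"):
        return 0.0
    return use_case["hours_saved_per_week"] * RISK_PENALTY[use_case["risk"]]

backlog = [
    {"task": "Variance explanation drafts",
     "hours_saved_per_week": 4, "risk": "low", "owner": "A.K."},
    {"task": "AP invoice triage",
     "hours_saved_per_week": 6, "risk": "med", "owner": "J.R."},
    {"task": "Autonomous refund approvals",
     "hours_saved_per_week": 8, "risk": "high", "owner": None},
]

# Commit to the top 1-2; everything else stays on the backlog.
top_two = sorted(backlog, key=score, reverse=True)[:2]
for uc in top_two:
    print(f'{uc["task"]}: {score(uc):.1f}')
# → Variance explanation drafts: 4.0
# → AP invoice triage: 3.6
```

Note the deliberate choice: a use case with no owner scores zero no matter how many hours it would save, which mechanically enforces "commitments owned by named people, not wishes owned by no one."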
Every part of this is negotiable except the round-robin. Skip the round-robin and you get vague aspirations instead of named tasks.
Tool tip (Course for Business): The reason role-splits beat all-cohort week-2 sessions is that "Augment, don't replace" lands differently per role. A salesperson hears "draft my outreach 5x faster"; a controller hears "explain a variance in 2 minutes instead of 30." If both are in the same room, the framing collapses to abstraction. The 6-week program at https://course.aiadvisoryboard.me/business runs week 2 as parallel role-tracks for exactly this reason. (Course for Business)
What the per-role backlog actually looks like
A real backlog from a 140-person services firm I advised — anonymized, but the shape is representative.
Sales (8 candidates, top 2 picked):
- Personalize 50 outbound emails/day from CRM context (top pick)
- Auto-summarize discovery calls into 5-bullet briefing for AE handoff (top pick)
- Draft follow-up nudge based on call transcript signals
- Generate 1-pager from prospect's website + LinkedIn
Finance (6 candidates, top 2 picked):
- Variance explanation drafts from monthly close data (top pick)
- AP invoice triage: extract vendor + amount + GL code (top pick)
- Audit-question first-draft answers
- Forecast commentary
Operations (7 candidates, top 2 picked):
- Vendor email triage and reply drafts
- Incident post-mortem first drafts (top pick)
- SOP generation from screen-recordings (top pick)
The pattern: each role-track ends with 1-2 commitments owned by named people, not 8 wishes owned by no one.
Good vs bad use cases
Bad: "Use AI for marketing." Good: "Generate 10 LinkedIn-post variants per week from our existing case studies, owned by [name], target time-saved 4 hours/week."
Bad: "AI-powered support." Good: "Draft first-response replies for tier-1 tickets in Zendesk, escalate anything mentioning churn or refund — owned by [name], target deflection 30% within 4 weeks."
The good versions name the input, the output, the owner, and the metric.
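The four required fields can be enforced mechanically, for example in the shared capture form from Wednesday/Thursday. A minimal sketch, assuming a simple record type; the class and field names are illustrative, not part of any real intake tool.

```python
from dataclasses import dataclass

# A use case only counts when all four fields are named.
# Class and field names are illustrative assumptions for a
# shared intake form, not a real product schema.

@dataclass
class UseCase:
    input_source: str  # where the raw material comes from
    output: str        # the artifact AI produces
    owner: str         # the named person who builds it in week 3
    metric: str        # the measurable target

    def is_specific(self) -> bool:
        """Reject generic wishes: every field must be non-empty."""
        return all([self.input_source, self.output, self.owner, self.metric])

good = UseCase("existing case studies", "10 LinkedIn-post variants/week",
               "J. Smith", "4 hours saved/week")
bad = UseCase("", "AI-powered support", "", "")
print(good.is_specific(), bad.is_specific())  # → True False
```

A form that rejects submissions failing `is_specific()` filters out "use AI for marketing"-style wishes before Friday's scoring session even starts.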
Team scan (what AI champions report after week 2)
- Each role-track typically surfaces 5-12 candidate use cases; ~30% are duplicates across teams.
- The single highest-leverage use case in week 2 is almost always email or document drafting — boring, but enormous time-saved.
- Sales and customer-success role-tracks generate the most candidates; legal and compliance the fewest (and the highest-stakes ones).
- Champions report that role-splitting cuts session-runtime questions roughly in half vs week 1.
- Engineering teams want to skip ahead to agents — hold them to week 4.
- Finance teams want to skip the workshop entirely and have IT build them a tool — push back, the workshop is what builds judgment.
- The top blocker to scoring is "I don't know how often I do this" — encourage rough estimates over none.
- About 1 in 4 use cases identified in week 2 turns out to be a data-access problem, not an AI problem — surface this early.
- Cross-team use cases (Sales→CS handoff, Ops→Finance close) are the highest-ROI but the slowest to ship — flag, don't kill.
- The week-2 backlog is the artifact you'll reuse for the entire 6-week program.
Micro-case (what changes after 7-14 days)
A 95-person SaaS company I advised ran week 2 as four parallel role-tracks: Sales, CS, Engineering, G&A. Each track had a champion who'd been one cohort ahead. By Friday, 31 candidate use cases had been logged — 8 made the committed-build list. By day 14, two of those eight (sales-call summary and CS first-draft replies) were already producing outputs in production. The CFO, who'd been skeptical, asked to expand the G&A track because "the variance-explanation prompt actually works." Total external spend in week 2: $0 — no consultants, just AI champions running their own role-tracks. Compare to a peer company that hired a Big-4 firm for the same exercise and produced a 60-page deck with zero shipped use cases by week 4.
Note on this case: This example is illustrative — based on typical patterns we observe with companies of 30-500 employees, not a single named client. Specific numbers are rounded approximations of common ranges, not guarantees.
Tool tip (Course for Business): The cheapest mistake in week 2 is letting one champion run all role-tracks. The whole point of AI Champions (1:15-20) is that each track has a champion who lives that workflow — a sales champion can spot a fake "AI use case" that's really a CRM hygiene problem in 30 seconds. The 6-week program at https://course.aiadvisoryboard.me/business assigns champions per role precisely so that judgment scales. (Course for Business)
FAQ
What if a role only has 2-3 people? Merge with the closest adjacent function (e.g., a 3-person legal team merges with finance, not marketing). Keep the workshop focused on workflow shape, not org chart.
Can week 2 be async? The use-case capture can. The 90-minute pair-and-prompt cannot — that's where judgment transfers. Async workshops produce backlogs full of wishes, not commitments.
How do we score risk on a use case? Three buckets: low (internal-only output, human always reviews), medium (customer-facing draft with human review), high (autonomous customer-facing or financial decisions). Week 2 should commit only to low and medium.
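The three-bucket rule above is simple enough to state as a decision function. A hedged sketch of that rule only; the parameter names are assumptions, and your own risk policy may add more dimensions.

```python
# Sketch of the three-bucket risk rule from the FAQ.
# Parameter names are illustrative assumptions.

def risk_bucket(customer_facing: bool, human_reviews: bool,
                financial_decision: bool) -> str:
    """low: internal-only output, human always reviews;
    medium: customer-facing draft with human review;
    high: autonomous output or financial decisions."""
    if financial_decision or not human_reviews:
        return "high"
    return "medium" if customer_facing else "low"

def committable_in_week_2(bucket: str) -> bool:
    # Week 2 commits only to low- and medium-risk use cases.
    return bucket in ("low", "medium")

print(risk_bucket(False, True, False))  # → low
print(risk_bucket(True, True, False))   # → medium
print(risk_bucket(True, False, False))  # → high
```

The design choice worth copying: anything without a human reviewer is high-risk by default, regardless of where the output goes.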
What if Engineering insists on building agents now? Note their use cases, hold them to week 4. Premature agent-building is the most common reason weeks 3-5 collapse. (We separately have an advisory product for day-to-day management of the broader rollout, but skip it for now.)
Should the founder attend role-tracks? Drop into 2-3 of them for 10 minutes each. Don't sit through full sessions — your presence collapses honest pain-naming.
Conclusion
Week 2 is where the program either fans out to match the shape of the company or collapses back to abstraction. Split by role. Commit to 1-2 use cases per track. Push agent-talk to week 4. The week-2 backlog is the spine of the next four weeks.
Next step: name your role-track champions and put their workshops on the calendar before Monday's opener.
If you want every employee to ship their first AI automation in five days — book a 30-min call and we'll map your team's first week: https://course.aiadvisoryboard.me/business