AI Training Week 5: Risk and Responsible AI (Case-Based)

5/8/2026 · 10 min read

TL;DR

  • Week 5 is case-based, not policy-based — Klarna, Builder.ai, Replika, and shadow-AI are the curriculum.
  • The deliverable is a one-page Responsible-AI playbook your role-tracks actually follow, not a deck.
  • Don't outsource this week to legal — run it with your champions, with legal sitting in.

When the GC of a 320-person fintech told me his board wanted "Responsible AI training" before they'd approve the rollout, I told him: don't do a compliance lecture. Run real cases your team will recognize on Monday morning.

Why "Responsible AI" sits in week 5, not week 1

Front-loading risk in week 1 kills the program. People sandbag instead of experimenting, and you never reach the use cases. Back-loading it past week 5 means risky behavior compounds — by week 6 your team has shipped agents touching real customer data and you haven't talked governance once.

Week 5 is the calibration point. By now your team has built prompt judgment (weeks 1-2), made tooling choices (week 3), and shipped agents (week 4). They have specific behaviors to govern, not abstractions. About 46% of employees in recent surveys admit to having pasted confidential data into public AI tools — that number is your baseline going into week 5, and the whole point is to bring it down with consent and clarity, not threats.

Definition: Responsible AI — the operational practice of deploying AI systems with attention to data privacy, output reliability, escalation paths, and regulatory exposure. Not a slide deck. A set of behaviors per role.
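
One way to keep "a set of behaviors per role" concrete: some teams encode each role's playbook as data, so it can be diffed, reviewed, and kept to one page by construction. A minimal Python sketch; the field names and example values are hypothetical, not a prescribed schema:

    from dataclasses import dataclass

    @dataclass
    class RolePlaybook:
        # One record per role-track; each field maps to a playbook section.
        role: str                     # e.g. "Sales"
        owner: str                    # the champion who reviews it
        sanctioned_tools: list[str]   # from the week-3 tool lab
        never_paste: list[str]        # data classes, not individual examples
        review_before_send: bool = True  # human-in-the-loop default

    sales = RolePlaybook(
        role="Sales",
        owner="A. Champion",                      # hypothetical owner
        sanctioned_tools=["approved-llm-suite"],  # hypothetical tool id
        never_paste=["customer PII", "contracts under NDA"],
    )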

The four cases week 5 actually teaches

Forget "ethical AI" abstractions. The cases that change behavior:

Case 1 — Klarna's customer-service walk-back (2025)

Klarna deployed a fully autonomous AI customer-service agent, publicly celebrated it as replacing 700 staff, then walked it back in 2025 after CSAT dropped. The case teaches the escalation-gap lesson: an AI agent without a human escalation path looks great in demos and breaks under real-world edge cases.

The takeaway your team should leave with: never deploy a customer-facing AI without a documented, fast escalation route. Pair this with the Intercom Fin pattern — AI-first, mandatory human escalation — as the operating model.
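
What a "documented, fast escalation route" can look like in practice: a minimal routing sketch, assuming a hypothetical confidence score and handoff queue, not Intercom Fin's actual internals:

    def route_reply(draft: str, confidence: float, asked_for_human: bool) -> str:
        """Decide whether an AI draft ships or escalates to a person."""
        # Hard rule first: an explicit customer request always wins.
        if asked_for_human:
            return escalate(draft, reason="customer request")
        # Low-confidence drafts go to a human, never straight to the customer.
        if confidence < 0.8:  # threshold is illustrative; tune per queue
            return escalate(draft, reason="low confidence")
        return send(draft)

    def escalate(draft: str, reason: str) -> str:
        # Hypothetical handoff: queue for a named human with full context.
        return f"ESCALATED ({reason}): {draft}"

    def send(draft: str) -> str:
        return f"SENT: {draft}"

The point isn't the threshold; it's that the escalation branch exists, is named, and is exercised before launch.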

Case 2 — Builder.ai's $1.3B collapse (2025)

Builder.ai pitched itself as "AI builds your software" while quietly relying on human developers behind the scenes. Once valued at over $1.3B, it collapsed into insolvency in 2025 owing creditors more than $100M. The case teaches the honesty lesson: don't market AI capabilities you don't have, internally or externally.

The takeaway: describe your AI-augmented work honestly, to customers and to your own team. "Drafted by AI, reviewed by [name]" is fine. Claiming "human-only" when it isn't breaks trust permanently.

Case 3 — EU AI Act fines + privacy precedents

EU AI Act fines top out at €35M or 7% of global turnover. Recent privacy enforcement shows the appetite: Replika fined €5M in Italy, Clearview AI €30.5M in the Netherlands, OpenAI €15M in Italy, all on GDPR grounds. The case teaches the regulatory tail-risk lesson: even a 30-500-person company is exposed if it processes EU personal data with AI in a non-compliant way.

The takeaway: map your AI use cases against AI Act risk tiers and against your data residency. Most SMB use cases are limited-risk or minimal-risk — but you need to know which is which.
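
A minimal sketch of that mapping exercise. The tier rules below are heavily simplified (the real AI Act criteria run much longer), so treat this as a first-pass inventory aid, not legal advice:

    HIGH_RISK_SIGNALS = {"biometric", "employment-decision", "critical-infrastructure"}

    def ai_act_tier(use_case: str, data_classes: set[str]) -> str:
        """Rough first-pass tier for an internal inventory, not a legal ruling."""
        if data_classes & HIGH_RISK_SIGNALS:
            return "high-risk: formal sign-off before anything ships"
        if "customer-facing" in use_case:
            return "limited-risk: transparency duties apply (disclose AI use)"
        return "minimal-risk: document the data flow and move on"

    print(ai_act_tier("customer-facing chat triage", {"customer PII"}))
    # -> limited-risk: transparency duties apply (disclose AI use)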

Case 4 — Shadow AI inside your own company

Stanford's "77% rule" found that most AI work in organizations is invisible — shadow, unofficial. About 46% of employees admit to having pasted confidential data into public AI tools. The case teaches the own-house lesson: the biggest Responsible-AI risk in your company is probably not a policy gap; it's the absence of sanctioned tools that meet real workflow needs.

The takeaway: week 3's tool deep-dive is the primary defense against shadow AI — not a policy memo.

Definition: Shadow AI — employee use of unsanctioned AI tools or unsanctioned data inputs into AI tools, typically because sanctioned options don't fit the workflow.

How to structure week 5

The format that works:

  1. Monday — 60-minute case session. Champions walk the cohort through the four cases. 15 minutes per case. Discussion, not lecture.
  2. Tuesday — 90-minute role-track risk audit. Each track audits its week-4 agents and week-2 use cases against the four lessons.
  3. Wednesday — async one-pager drafting. Each role-track drafts a one-page Responsible-AI playbook for their role.
  4. Thursday — 60-minute legal review. GC or external counsel sits in, role-tracks present, legal red-lines.
  5. Friday — 30-minute approval. Founder approves the cross-role playbook. Done.

Tool tip (Course for Business): The reason "Augment, don't replace" is the right operating principle for week 5 (rather than "AI is dangerous, restrict everything") is that it builds the human-in-the-loop habit by default. Most regulatory and reputational risk comes from over-autonomous deployment, not over-cautious deployment. The 6-week program at https://course.aiadvisoryboard.me/business runs week 5 as case-based discussion led by champions with legal sitting in — never legal-only briefings, which produce policy theater. (Course for Business)

The one-page Responsible-AI playbook (template)

Role-track: [Sales / CS / Finance / Ops / etc.]
Owner: [Champion name]
Last reviewed: [Date]

1. SANCTIONED TOOLS
   - Primary: [tool from week-3 lab]
   - Fallback: [secondary tool]
   - Out-of-scope: [tools not approved for this role's data]

2. DATA WE NEVER PASTE
   - [List 3-7 specific categories: customer PII, contracts under NDA, etc.]
   - Rule of thumb: if it's "you'd be embarrassed if it leaked" — don't paste.

3. HUMAN-IN-THE-LOOP RULES
   - Customer-facing output: ALWAYS reviewed by [role] before send.
   - Financial impact > $[X]: ALWAYS reviewed by [role].
   - Internal-only outputs: spot-check weekly.

4. ESCALATION
   - If AI output is wrong/harmful: notify [name] within 24 hours.
   - If a customer asks "was this written by AI?": disclose honestly.

5. REVIEW CADENCE
   - Champion reviews this playbook every 4 weeks.
   - Update after any incident.

This fits on one page. If yours grows past one page, you've added bureaucracy. Cut it back.
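
Section 2 of the playbook only works if it's checkable. Here's a minimal pre-paste screen with two illustrative patterns (email addresses and card-like numbers); a real deployment would lean on a proper DLP tool rather than hand-rolled regexes:

    import re

    # Illustrative patterns only; extend from your role's "never paste" list.
    BLOCKLIST = {
        "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def safe_to_paste(text: str) -> bool:
        """Screen text before it goes into any external AI tool."""
        hits = [name for name, pat in BLOCKLIST.items() if pat.search(text)]
        if hits:
            print(f"Blocked: looks like {', '.join(hits)}. Check the playbook.")
            return False
        return True

    safe_to_paste("Follow up with jane@example.com about the renewal")  # blocked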

Good vs bad week-5 outcomes

Bad: "We have a 40-page Responsible-AI policy that nobody reads." Good: "Each role-track has a one-pager owned by the champion, posted in the cohort Slack."

Bad: "We banned ChatGPT from corporate networks." Good: "We sanctioned [tool] for [data class] and trained the team to use it correctly. Shadow use dropped within a fortnight."

Bad: "Legal owns Responsible AI." Good: "Champions own Responsible AI behavior. Legal owns Responsible AI red-lines. Different jobs."

Team scan (what AI champions report after week 5)

  • The four cases land harder than any policy memo — Klarna and Builder.ai resonate especially with leadership.
  • About 1 in 3 role-tracks discovers a week-4 agent that needs a human-in-the-loop step added — caught here, not in production.
  • Shadow-AI conversations are uncomfortable on day 1, normal by day 5 — name the behavior without naming people.
  • The single most common finding is "we paste customer names into a tool we shouldn't" — fixable in a week with sanctioned alternatives.
  • Champions report that legal participation is essential but legal-led sessions backfire. Champion-run with legal sitting in is the working format.
  • About 1 in 4 cohorts surfaces a regulatory exposure they hadn't thought about (HIPAA, PCI, GDPR, regional financial-services rules).
  • The one-pager playbook is read; multi-page policies are not. Length is the single biggest predictor of actual use.
  • Friday approval should be from the founder/CEO, not legal — signals the operating principle, not the compliance principle.
  • Cohorts that skip week 5 entirely tend to have an incident within 60-90 days that triggers a panic policy roll-out.
  • Most week-5 risks are people-and-process, not technology — which is consistent with the BCG 10-20-70 framing.

Micro-case (what changes after 7-14 days)

A 280-person regtech firm I advised ran week 5 as four cases + per-role one-pagers. Before the program, internal counsel had drafted a 47-page policy that nobody had read. By Friday of week 5, all five role-tracks had a one-pager signed off — and three week-4 agents had a human-in-the-loop step added that wasn't there before. By day 14, an unprompted internal survey found about 70% of employees could correctly name which data class went into which tool. That's the metric — not policy completeness, but employee recall under pressure. Compare that to a peer firm that ran a one-shot 90-minute legal lecture as their entire Responsible-AI training: 6 months later, an incident with pasted customer data triggered a freeze on all AI tools and a full restart.

Note on this case: This example is illustrative — based on typical patterns we observe with companies of 30-500 employees, not a single named client. Specific numbers are rounded approximations of common ranges, not guarantees.

Tool tip (Course for Business): Watch for the trap of letting week 5 turn into a legal-led lecture. The cohort that built Shoulder-to-Shoulder judgment in weeks 1-4 will quietly check out the moment they sense a compliance briefing. Champion-led, case-based, legal-witnessed is the format that holds attention. The 6-week program at https://course.aiadvisoryboard.me/business is built around this exact split — and it's why participants leave week 5 more confident in AI use, not less. (Course for Business)

FAQ

Should we hire a Responsible-AI consultant for week 5? Probably not for an SMB. Use the four cases above, your champions, and your own counsel. External consultants tend to produce 40-page deliverables that don't survive contact with role-tracks.

What if our legal team insists on a 30+ page policy? Negotiate: keep their long policy as the legal artifact, but require a one-pager per role-track for actual use. The long doc is for audits; the one-pager is for Tuesday morning.

Do we need a separate ethics committee? Below 500 employees, no. Your champions plus your founder plus your counsel is the committee. Above 500, the question changes.

What about EU AI Act compliance specifically? Most SMB use cases sit in limited-risk or minimal-risk tiers. Map yours, document the data flows, and get formal sign-off if any agent touches biometric, employment-decision, or critical-infrastructure data. (We separately have a daily-management advisory product on aiadvisoryboard.me, but Responsible-AI structure should be in-house.)

What's the one signal that week 5 worked? A random employee, asked "what data should you not paste into ChatGPT?", can answer specifically and correctly within 10 seconds. If they can't — the playbook hasn't landed.

Conclusion

Week 5 is the calibration week for risk. Four cases, role-track audits, one-page playbooks, champion-led with legal sitting in. The output isn't a policy document — it's behavior that survives Monday morning. Run it case-based, not policy-based, and your week-6 finale lands on a stable foundation.

Next step: pick which four cases your cohort runs (the four above are a strong default) and book the legal review session before Monday.

If you want every employee to ship their first AI automation in five days — book a 30-min call and we'll map your team's first week: https://course.aiadvisoryboard.me/business
