AI Training

All Staff AI Learning Pathway

A scaffolded pathway for all employees, delivered after the Foundation GenAI course, with timely pop-up modules woven throughout that can be completed at any point.

Learn AI Skills in Minutes, Not Hours

Our microlearning pathways break down complex AI concepts into focused, 30-minute modules that fit into your busy schedule.

Tier 1 (Responsibility): building confident, responsible habits
Course 1 (30 minutes)
Data, Privacy and AI: Practical Decisions Every Day
Help employees make confident, responsible decisions about data when working with AI tools, and understand the real risks of using unapproved ones.
Takeaway
A desk-ready decision card (Before you share: five questions to ask yourself): a quick reference that gives employees a reliable gate-check before entering any data into an AI tool.
  1. Why data sensitivity matters every time you use an AI tool, even when nothing has visibly gone wrong
  2. How to recognise different levels of data sensitivity, from publicly available information to confidential business and personal data
  3. The five questions to ask before sharing anything with an AI tool, and how to make that check a reflex rather than a chore
  4. What shadow AI is: the unapproved tools employees reach for when approved ones feel slow, limited or inconvenient
  5. Why people use shadow AI: the human motivations behind it, and why acknowledging those makes the risk conversation more honest
  6. The organisational risks shadow AI creates beyond policy non-compliance: data leaving controlled environments, outputs that can't be audited, and liability that is hard to trace
  7. The gap between knowing the rules and following them under pressure, and what closes that gap in practice
  8. What actually happens to data shared with AI tools, including where it may go and what it may be used for
  9. How to build a personal habit of data hygiene that works across different tools and contexts, not just the ones currently approved
  10. How to raise a concern when you're unsure: what to say, who to tell, and why speaking up protects the organisation and the people whose data is at stake
Course 2 (30 minutes)
AI Bias and Fairness: What Every Employee Should Know
Give employees the awareness and practical tools to recognise bias in AI outputs, and understand why their judgement is the most important quality control in the process.
Takeaway
A one-page human review checklist: questions to ask before using any AI output that informs a decision about people, with a short escalation guide on how to describe and flag a bias concern internally.
  1. What AI bias actually is and why it is structural rather than accidental: a feature of how models learn, not a sign that something went wrong in development
  2. Where bias comes from: training data that reflects historical patterns, and how AI amplifies those patterns rather than correcting them
  3. The most common types of bias employees will encounter: majority-group preference, flattened diversity, outdated assumptions about roles, and embedded preference disguised as neutral language
  4. Concrete workplace examples across hiring, customer communication, performance review and content summarisation: close enough to real work that the risk feels immediate
  5. Why AI outputs feel authoritative even when they are carrying flawed assumptions, and why fluency in an output is not the same as accuracy or fairness
  6. The decision layer: the critical moment between receiving an AI output and acting on it, and why that moment belongs entirely to the human
  7. Why human judgement is irreplaceable: the contextual knowledge, lived experience and moral awareness that no model has access to
  8. How to pause, question and check before using any AI output that touches a decision about a person or group: what that looks like in practice rather than in theory
  9. A paired scenario showing two employees using the same AI output differently: one acting on it directly, one pausing to review, and why the difference matters
  10. How to flag something that feels off: who to tell, how to describe it without needing to be certain, and why raising concerns is part of the role
Responsible individually; now work responsibly together
Tier 2 (Integration): applying skills in a team context
Course 3 (30 minutes)
Working with AI as a Team: Shared Norms and Collaboration
Help teams move beyond individual AI competence to consistent, coordinated and trustworthy AI use across the group.
Takeaway
A one-page team AI norms starter template: a simple, editable agreement covering the key decisions teams need to make together about AI use, transparency and shared resources, designed to be completed in a single short team meeting.
  1. Why individual AI competence does not automatically produce effective team AI use, and what goes wrong when it is assumed to
  2. The coordination problems that emerge without shared agreements: inconsistent outputs, duplicated effort, and uneven quality reaching clients and stakeholders
  3. What team AI norms look like in practice: concrete, workable agreements rather than policy documents that no one reads
  4. How to build a shared prompt library that the whole team can access, trust and contribute to over time
  5. Transparency between colleagues: what it means to be open about AI involvement in shared work, and why it matters for accountability within the team
  6. Avoiding duplication: how to share AI workflows and reusable resources so individuals are not each solving the same problem from scratch
  7. Agreeing quality standards for AI-assisted outputs before they reach external audiences: what the team's baseline looks like and who is responsible for upholding it
  8. The risks of an unofficial team AI culture: where some people are using AI in ways others don't know about, and how to bring it into the open constructively
  9. How to give and receive useful feedback on shared AI resources so they improve rather than quietly degrade over time
  10. How to start the conversation about AI norms with your team if it hasn't happened yet, making it feel like a practical team decision rather than a compliance exercise
Course 4 (30 minutes)
Human Skills in an AI-Enabled Workplace
Reinforce the human capabilities that matter more, not less, because of AI, and build the habits that prevent over-reliance.
Takeaway
A one-page action guide with practical exercises to strengthen judgement, problem-solving and adaptability alongside AI, with prompts for weekly reflection on where human skills made the difference.
  1. Why AI raises the value of human skills rather than replacing them, and what that shift means for how employees should think about their own development
  2. Judgement: the skill AI cannot replicate, why it atrophies when over-relied upon, and how to keep it sharp
  3. Critical thinking in practice: questioning assumptions embedded in AI-assisted work before it moves further along the process
  4. Problem-solving: using AI as a thinking partner without outsourcing the thinking; knowing which part of the problem to keep for yourself
  5. Adaptability: staying effective as tools, workflows and team expectations keep changing without a stable end point in sight
  6. Learning agility: building the habit of continuous development alongside AI, not just around it
  7. Recognising the early signs of over-reliance: when AI has become a crutch rather than a tool, and what that costs in the longer term
  8. Emotional intelligence and empathy: why these capabilities carry more weight in an AI-enabled workplace, particularly in roles involving people, conflict or complexity
  9. How to position yourself as someone who uses AI well: maintaining intellectual ownership of your outputs rather than passing on AI-generated work unexamined
  10. Building a personal development approach that deliberately grows human skills alongside AI capability, rather than treating them as separate tracks
Integrated into work; now make the impact visible and lasting
Tier 3 (Extension): communicating and growing with AI
Course 5 (30 minutes)
Communicating AI-Assisted Work Transparently
Help employees communicate clearly and confidently about AI's role in their work: building trust with colleagues, managers and clients rather than undermining it.
Takeaway
A one-page disclosure decision guide: a simple framework for deciding when, how and what to communicate about AI involvement, with example phrases for common professional scenarios including client conversations, team handovers and manager check-ins.
  1. Why transparency about AI use is becoming a professional and ethical expectation, not just an organisational one, and how that expectation is shifting across sectors
  2. The spectrum of AI involvement: from light editing assistance to substantial generation, and why the degree of involvement shapes the disclosure decision
  3. How to honestly assess how much AI contributed to a piece of work, including the ways people underestimate or overstate that contribution
  4. A practical framework for when disclosure is necessary, when it is helpful even if not required, and when it is not needed at all
  5. How to talk about AI involvement with colleagues, managers and clients in a way that is honest without undermining confidence in the quality of your work
  6. What accountability looks like when AI is involved: owning the output, the judgement applied to it and the decision to use it, not just the prompt that started the process
  7. The trust implications of undisclosed AI use and what happens professionally when it comes to light unexpectedly
  8. How to handle conversations with clients or stakeholders who have strong or mixed feelings about AI: navigating those moments without defensiveness or dismissal
  9. Building a natural communication habit around AI that becomes part of how you work rather than something you have to consciously decide each time
  10. How your organisation's AI policy shapes your disclosure obligations, and how to make a sensible, defensible decision in situations the policy does not explicitly cover
Course 6 (30 minutes)
Metacognition and AI: Becoming a Reflective AI User
Strengthen the self-awareness that makes every other AI skill more effective, and build the reflective habit that sustains growth as tools keep evolving.
Takeaway
A personal reflective loop guide for observing and adjusting your own AI thinking over time, and a short presentation template for sharing AI learnings with your team in a structured, useful way.
  1. What metacognition is and why it is the underlying skill that makes all other AI capabilities compound over time
  2. Noticing your own patterns when working with AI: where you default without thinking, where you hesitate, and what those patterns reveal about your assumptions
  3. The reflective loop in practice: observe what happened, question why it went that way, adjust your approach, and repeat with intention
  4. How to recognise when AI use has become habitual rather than intentional, and why that distinction matters for the quality of your outputs
  5. Identifying your own cognitive shortcuts and how AI tools can either support or quietly amplify them depending on whether you're paying attention
  6. Using reflection to improve your prompting over time: learning from what worked and what didn't without starting from scratch after every session
  7. How to learn productively from AI interactions that went wrong: treating failures as diagnostic information rather than dead ends
  8. Building a personal feedback loop: a lightweight way to track what works across different tasks and contexts so that learning accumulates rather than evaporates
  9. How to share AI learnings with your team in a way that is useful to others rather than just interesting: turning individual insight into collective capability
  10. Creating a reflective habit that is sustainable and grows with you, so that as AI tools keep changing, your capacity to adapt changes with them