Library

Curated articles

Our library features curated AI articles from expert voices, each with a summary and analysis of the key implications for AI strategy and training – so you can quickly grasp what matters and take action.


March 2026

One Useful Thing 12 March 2026

The Shape of the Thing

Ethan Mollick argues we have entered the age of managing AI rather than working with it – and that the choices organisations make right now will set precedents for everyone else.

What This Means for AI Strategy and Training
  • Shift AI training from prompting skills to managing and delegating work to AI agents
  • Act now: organisations experimenting with AI today are setting precedents for how it gets used everywhere
  • Prepare for rolling disruption – sudden AI capability leaps that trigger rapid market, job and policy shifts
  • Build governance and norms for AI use before external frameworks are imposed


Summary

Ethan Mollick argues that AI has entered a new phase: rather than prompting AI back-and-forth, we now manage agents that can handle hours of human work autonomously. Exponential capability gains are enabling radical experimentation in how organisations work, illustrated by StrongDM's fully AI-coded Software Factory. A single week in February showed what rolling AI disruption feels like: sudden market reactions, job impacts and policy conflicts arriving together. With recursive self-improvement now on every major lab's roadmap, the window to shape how AI is used may not stay open for long.


Harvard Business Review 9 March 2026

Has AI Ended Thought Leadership?

A veteran thought leader argues that AI has commoditised expert-sounding content – and that only original thinking and lived experience will remain credible.

What This Means for AI Strategy and Training
  • Challenge leaders to develop genuine perspectives rather than rely on AI-polished summaries
  • Use AI to accelerate research and synthesis, but invest in the original insights AI cannot produce
  • Recognise that credibility now rests on lived experience, original research and authentic voice
  • Shift AI training programmes from content generation towards original thinking and judgment


Summary

John Winsor, a self-described thought leader with six books and decades of experience, argues in this HBR essay that AI has commoditised the kind of expert-sounding content on which the category depended. As AI can now synthesise, summarise and package ideas at scale, generic insight faces rapid devaluation. What AI cannot replicate – original research, lived experience and genuine perspective – becomes the only defensible source of authority. The implication for organisations is clear: building AI capability must prioritise original thinking and judgment, not just the efficient production of polished AI content.


Financial Times 7 March 2026

The AI pension advisers are already here

Millions of people are already turning to chatbots to plan their retirement – and the financial advice industry is struggling to keep pace with consumer adoption.

What This Means for AI Strategy and Training
  • Prepare financial services teams for AI-assisted client interactions as consumer adoption accelerates
  • Train advisers to work alongside AI tools rather than position them as competitors
  • Build governance and compliance frameworks before AI enters client-facing advice processes
  • Address the trust gap: adviser comfort with AI recommendations averages just 4.1 out of 10


Summary

Millions of people are already using chatbots such as ChatGPT to plan their retirement, with an estimated 2.7 million UK adults now turning to AI for financial guidance and more than half willing to act on it. Among financial advice firms, AI adoption has more than doubled in a year – from 29% to 60% – yet advisers remain cautious about client-facing use, with average comfort scores of just 4.1 out of 10. Concerns centre on trust in outcomes and regulatory compliance, highlighting a growing gap between consumer adoption and professional readiness.


Harvard Business Review 5 March 2026

When Using AI Leads to "Brain Fry"

A BCG study of 1,500 workers finds that intensive AI oversight is causing a new form of cognitive fatigue – but when AI offloads repetitive work, stress levels fall.

What This Means for AI Strategy and Training
  • Teach employees to use AI to offload repetitive tasks, not just supervise more AI output
  • Set clear limits on simultaneous AI tool use: productivity peaks at two to three tools
  • Train managers to be intentional with AI – leadership behaviour directly reduces brain fry
  • Redesign workflows before deploying AI, rather than layering it onto existing demands


Summary

A BCG study of around 1,500 workers, published in HBR, finds that AI is creating contradictory effects on employee wellbeing. When workers constantly supervise multiple AI systems or juggle several tools, cognitive fatigue – or "brain fry" – increases sharply: one in seven workers reports it, with associated rises in errors, decision fatigue and intention to quit. Yet when AI offloads repetitive tasks, stress drops around 15%. Productivity peaks at two to three tools simultaneously. The researchers argue that organisations must redesign work, rather than layer AI on top of existing processes.


Cambridge Judge Business School 4 March 2026

Report reminds us that hot tech doesn't equate with bubbles

A 126-year market analysis co-authored by Cambridge Judge's Professor Elroy Dimson finds that new technologies do not always generate bubbles – and bubbles do not necessarily imply weak long-term returns.

What This Means for AI Strategy and Training
  • Resist both AI hype and AI scepticism – evidence matters more than sentiment
  • Use historical precedent to build board confidence in AI investment
  • Avoid bubble-thinking that leads to under-investment in transformative technology
  • Focus AI investment on long-term capability, not short-term market noise


Summary

The UBS Global Investment Returns Yearbook 2026, co-authored by Professor Elroy Dimson of Cambridge Judge Business School, analyses 126 years of market data and challenges the assumption that hot new technologies inevitably produce bubbles. Railroads still outperform despite ceding dominance; technology has delivered 14.1% annualised returns over 29 years against 10.0% for the US market – even for investors who bought at the March 2000 dot-com peak. The Yearbook concludes that investors should shun neither new nor old industries: both overenthusiasm and excessive pessimism are persistent errors.


McKinsey & Company 4 March 2026

How Danone is reinventing FMCG operations

Danone's COO Vikram Agarwal explains how AI and Industry 5.0 are transforming the food company by putting operations at the centre of growth.

What This Means for AI Strategy and Training
  • Reframe operations as a growth driver, not a cost centre, when making the case for AI investment
  • Build AI use cases bottom-up from the frontline – not top-down from head office
  • Invest in workforce training at scale: Danone trained 20,000 operations staff through its Industry 5.0 Academy
  • Digitise only when it delivers measurable business value – cost, capacity or customer service


Summary

In this McKinsey interview, Danone COO Vikram Agarwal outlines the company's three-part operations transformation: digitalising planning processes, building production capacity with precision and investing in people and digital skills. AI now drives predictive maintenance, COGS forecasting and supplier partnerships, repositioning operations from a cost centre to a growth engine. Danone's Industry 5.0 Academy has already trained 20,000 of its 47,000-strong operations workforce since mid-2025. Agarwal's central lesson: digitisation must deliver measurable business value, and use cases must come bottom-up from the frontline – not simply be imposed from the top down.


Harvard Business Review 4 March 2026

Research: How AI Is Changing the Labor Market

Six years of US job postings reveal AI cutting demand for automation-prone roles while boosting demand for work that humans and AI do together.

What This Means for AI Strategy and Training
  • Invest in reskilling workers in automation-prone roles before displacement accelerates
  • Focus training on non-automatable skills: judgement, interpersonal communication and AI literacy
  • View AI as an augmentation tool, not just a cost-cutting measure, when designing workforce strategy
  • Build continuous upskilling programmes as skill requirements in augmentation-prone roles broaden


Summary

Research by Harvard Business School's Suraj Srinivasan, analysing nearly all US job postings from 2019 to March 2025, finds that generative AI is reshaping the labour market in two directions. Since ChatGPT's launch, postings for automation-prone roles fell 13%, while demand for augmentation-prone roles – those requiring analytical, technical or creative work enhanced by AI – grew 20%. Skill requirements in automation-prone jobs are shrinking, while AI-related skills are rising in others. The researchers recommend investing in reskilling, continuous upskilling and treating AI as an augmentation tool rather than a cost-cutting measure.


BCG 4 March 2026

Four Ways GenAI Improves the Lives of Frontline Workers

BCG argues that GenAI can dramatically improve day-to-day life for frontline and hourly workers – from smarter scheduling to in-the-moment training and troubleshooting.

What This Means for AI Strategy and Training
  • Use GenAI to balance customer, productivity and people outcomes
  • Build AI-powered scheduling that respects frontline preferences
  • Give workers instant, in-flow training and troubleshooting
  • Centralise technical and compliance knowledge into one AI assistant


Summary

The article explains how GenAI can ease the daily pressures on frontline workers by simplifying scheduling, delivering instant training, troubleshooting issues in real time and centralising scattered technical and compliance information. Drawing on BCG's Frontline Ops AI tool and real-world examples from hospitals, quick-service restaurants and public-sector call centres, it shows GenAI reducing complexity, cutting call and wait times, and lowering burnout and attrition. The core message is that when organisations embed GenAI into the systems that govern how work actually gets done, AI becomes a decision support engine that makes frontline work more autonomous, humane and effective.


BCG 2 March 2026

AI-First Hotels: Leaner, Faster, Smarter

AI-first hotels are already realising measurable gains in cost efficiency, customer experience and revenue growth – and the window for catching up is closing fast.

What This Means for AI Strategy and Training
  • Redesign workflows around AI from the outset, rather than layering tools onto existing processes
  • Build the people and data foundations AI requires before expecting returns at scale
  • Train staff for role shifts, from administrative work to higher-value customer interaction
  • Use co-design and change management to build employee confidence in AI tools


Summary

This BCG report, produced with New York University hospitality and AI experts, examines how AI is transforming hotels across three dimensions: commercial and customer excellence, cost advantage through automation and robotics, and faster design and construction. AI-scaling companies are already seeing measurable gains in marketing, guest experience and staffing efficiency. Hotels that treat AI as an add-on will fall behind those that rewire fundamentals. Success requires people strategy, data integration and ongoing capability-building alongside technology – yet only 2.9% of hospitality workers currently have AI skills, compared with 21% in tech.


DataCamp March 2026

The 2026 State of Data & AI Literacy Report

Based on a YouGov survey of 500+ enterprise leaders, the report reveals how organisations are building data and AI skills – and why workforce readiness is a defining competitive advantage.

What This Means for AI Strategy and Training
  • Treat data and AI literacy as a core workplace skill
  • Focus training on interpretation, judgement and practical application
  • Move beyond fragmented, role-limited training to organisation-wide capability building
  • Link workforce AI readiness to investment returns and business outcomes


Summary

Based on a YouGov survey of 517 enterprise leaders across the US and UK, the 2026 State of Data and AI Literacy Report finds that data and AI skills are now viewed as workplace fundamentals alongside writing and project management. However, most organisations face a readiness gap – not in advanced engineering, but in interpretation, judgement and practical application. Training remains fragmented, role-limited or too passive to build capability at scale. The report highlights that organisations seeing stronger returns from AI are those investing systematically in their people, and argues that workforce readiness is emerging as a defining competitive advantage.


February 2026

Harvard Business Review 25 February 2026

Our Favorite Management Tips on Building Trust on Your Team

A curated collection of HBR's best management tips on building and sustaining trust within teams, a foundation increasingly tested by AI-driven change.

What This Means for AI Strategy and Training
  • Recognise that trust is the foundation teams need before AI can be adopted effectively
  • Equip managers to maintain team trust through periods of AI-driven uncertainty and role change
  • Use transparent communication and shared learning to build confidence alongside new AI tools
  • Invest in leadership behaviours that sustain psychological safety as workflows evolve


Summary

HBR curates its best management tips on building trust within teams, drawing on practical advice for leaders navigating collaboration, motivation and team dynamics. Trust underpins every aspect of effective teamwork, from open communication and accountability to psychological safety and shared purpose. As AI reshapes roles, workflows and expectations, team trust becomes even more critical — and more easily eroded. Leaders who invest in trust-building behaviours create the conditions for teams to experiment with AI confidently, raise concerns openly, and adapt together rather than retreat into uncertainty.


BCG 24 February 2026

Five Things Boards Need to Get Right with AI

Board-level AI governance requires strategic focus, investment discipline and firsthand experience with the technology — not just awareness that AI matters.

What This Means for AI Strategy and Training
  • Align AI priorities directly with business strategy, not as a parallel initiative
  • Equip board members with firsthand AI experience to improve oversight and judgement
  • Treat technology and partner choices as long-term strategic commitments, not reversible experiments
  • Ensure leadership incentives reward AI adoption outcomes, not just pilot activity


Summary

BCG argues that boards must move beyond awareness and into active AI governance across five areas: setting pace and priorities tied to competitive strategy; protecting strategic freedom by scrutinising technology and partner lock-in; managing AI investment as a deliberate portfolio balancing near-term returns with longer-horizon bets; aligning incentives and organisational readiness so ambition does not outrun delivery; and maintaining disciplined external communications as AI raises the stakes of every public statement. The article emphasises that boards do not need to become AI experts, but must gain firsthand experience with the tools to govern effectively rather than relying on secondhand briefings.


University of Cambridge 20 February 2026

Most AI Bots Lack Basic Safety Disclosures, Study Finds

An investigation of 30 leading AI agents finds that developers readily share capability data but withhold the safety disclosures needed to assess real-world risk.

What This Means for AI Strategy and Training
  • Assess the safety disclosures and transparency of AI agents before deploying them in the organisation
  • Train teams to distinguish between vendor capability claims and verified safety evaluations
  • Build awareness of prompt injection risks and the limits of autonomous browser-based agents
  • Develop governance frameworks that require safety evidence, not just capability demonstrations


Summary

A research team led by the University of Cambridge, with collaborators from MIT, Stanford and other institutions, assessed the transparency and safety practices of 30 leading AI agents. The AI Agent Index found that while developers readily publicise what their agents can do, most withhold the safety evidence needed to assess risk. Only four agents have published formal safety documents covering the bot itself, 25 out of 30 do not disclose internal safety results, and browser-based agents — which operate at the highest levels of autonomy — have the worst safety reporting. The researchers warn of a 'transparency asymmetry' that amounts to a weaker form of safety washing, and call for governance frameworks that keep pace with increasingly autonomous AI systems acting in the real world.


McKinsey & Company 18 February 2026

The State of Organizations 2026

A survey of 10,000+ executives finds 75% of organisations struggling to build high-performance cultures, with rising productivity pressure and an AI readiness gap undermining results.

What This Means for AI Strategy and Training
  • Balance productivity pressure with investment in people to sustain performance
  • Close the AI readiness gap – 86% of leaders say their organisation is not prepared to embed AI into daily operations
  • Simplify workflows and decision-making before layering on AI automation
  • Redesign performance systems, incentives and career pathways to support AI-era ways of working


Summary

McKinsey's 74-page report, based on input from over 10,000 executives across 15 countries and 16 industries, identifies nine shifts reshaping organisations, driven by AI acceleration, geopolitical disruption and evolving workforce expectations. A central finding is that 75% of organisations struggle to build lasting high-performance cultures, with limited career progression, poor incentives and disengaged employees as the top barriers. Critically, high-pressure environments are counterproductive — organisations that push hard on output without investing in people see lower willingness and commitment, while those balancing both are four times more likely to sustain top-tier financial performance. The report also highlights a major AI readiness gap and argues that the next productivity frontier lies not in restructuring but in improving how work flows across the organisation — simplifying workflows, reducing handoffs and clarifying decisions before automating.


One Useful Thing 18 February 2026

A Guide to Which AI to Use in the Agentic Era

As AI shifts from chatbots to agents, choosing the right combination of model, app and harness matters more than picking the smartest model alone.

See our microlearning course on harnessing AI

What This Means for AI Strategy and Training
  • Understand the difference between AI models, apps and harnesses to make informed choices
  • Move beyond chatbot interactions towards agent-based workflows for real work
  • Invest in learning how to manage and delegate to AI, not just prompt it
  • Evaluate AI tools by what they can do in context, not by benchmark scores alone


Summary

The article explains that using AI now means choosing between models, apps and harnesses rather than simply picking a chatbot. As AI shifts from conversation to autonomous action, tools like Claude Code, Claude Cowork and NotebookLM let AI complete multi-step tasks independently. The guide argues that the right harness matters more than the smartest model, and that learning to manage AI as a worker rather than prompting it as a tool is the key skill shift for professionals in the agentic era.


Fortune 13 February 2026

When Will AI Kill White-Collar Office Jobs?

A senior technology leader warns that AI could significantly reshape white-collar roles within 18 months, accelerating automation across knowledge work.

What This Means for AI Strategy and Training
  • Plan for rapid automation of routine knowledge tasks within 12–18 months
  • Redesign roles around judgement, creativity and oversight
  • Accelerate reskilling in AI collaboration and critical thinking
  • Treat workforce transformation as an immediate priority


Summary

The article reports comments from Mustafa Suleyman of Microsoft, who suggests AI could begin replacing significant elements of white-collar work within 18 months. Rather than a distant disruption, he frames the shift as imminent, particularly for routine knowledge tasks. While whole professions may not disappear, many roles will be redefined as AI systems take on analysis, drafting and administrative work, requiring organisations to rethink skills, structures and workforce strategy quickly.


Business Insider 13 February 2026

McKinsey's 25,000 AI Agents Aren't a Metric of Success, Its Rivals Say

Consulting firms are racing to deploy AI agents, but rivals argue the number of agents matters less than the quality, efficiency and real business value they deliver.

What This Means for AI Strategy and Training
  • Measure AI agent value by productivity, quality and cost outcomes, not volume
  • Focus on deploying fewer, higher-impact agents rather than scaling for scale's sake
  • Align AI agent strategy with clear business priorities and performance indicators
  • Build team capability to work alongside and evaluate AI agents effectively


Summary

The article reports that McKinsey has deployed 25,000 AI agents in under two years, with plans to pair every employee with at least one agent. However, rivals EY and PwC argue that volume is the wrong measure of success. They focus instead on productivity, quality and cost outcomes, with EY noting that just a handful of agents deliver the most value. The piece highlights a broader debate about how to measure AI adoption maturity as consulting firms race to embed AI across their operations.


CNBC 13 February 2026

'Something Big Is Happening in AI' – Viral Essay Wasn't Meant to Scare People

An investor's viral essay arguing that AI's disruptive potential is under-appreciated drew over 80 million views, urging professionals to start experimenting with AI now.

Read Matt Shumer's essay here

What This Means for AI Strategy and Training
  • Encourage hands-on AI experimentation across all professional roles
  • Treat AI disruption as imminent, not distant, for knowledge workers
  • Challenge assumptions that certain roles are immune to AI impact
  • Use the growing public debate to build urgency for AI readiness programmes


Summary

Investor Matt Shumer's essay 'Something Big Is Happening' went viral with over 80 million views, arguing that AI's capabilities are widely under-appreciated. He says AI can already perform the technical work of his job and expects professionals in law, finance, medicine and other fields to face similar disruption. While he clarified the essay wasn't meant to scare, his core message is clear: people should start using AI tools now to understand what is coming rather than assume their roles are protected.


Anthropic 10 February 2026

Cowork: Claude Code for the Rest of Your Work

Anthropic's Cowork extends Claude Code's agentic capabilities to non-coding tasks, now available on Windows with file access, multi-step tasks, plugins and MCP connectors.

What This Means for AI Strategy and Training
  • Explore agentic AI for non-coding tasks such as document creation, file management and report drafting
  • Recognise that AI agents can now complete sustained, multi-step work with minimal supervision
  • Prepare teams for a shift from back-and-forth prompting to delegating tasks to AI
  • Evaluate how folder-based AI tools could streamline everyday knowledge work


Summary

Anthropic launched Cowork, a tool that brings Claude Code's agentic capabilities to non-technical tasks. Users give Claude access to a folder on their computer, and it can read, edit and create files autonomously — organising downloads, building spreadsheets from screenshots, or drafting reports from scattered notes. Now available on Windows with full feature parity, Cowork also introduces global and folder-specific instructions so users can tailor Claude's behaviour across sessions. Built on the same foundations as Claude Code, Cowork signals a broader shift from conversational AI towards sustained, agent-based work for all professionals.


Harvard Business Review 10 February 2026

Gen AI Won't Make Your Employees Experts

AI can help workers tackle complex tasks faster, but it does not turn novices into experts. Its impact depends on how it is used, understood and integrated with human skill development.

Read about our GenAI Foundation Course

What This Means for AI Strategy and Training
  • Gen AI speeds up learning but does not close the expertise gap
  • Use AI to augment, not replace, deliberate skill development
  • Pair AI deployment with structured training and mentorship
  • Evaluate AI by real performance outcomes, not just tool usage


Summary

The article argues that while generative AI can reduce the time it takes novices to become competent at unfamiliar tasks, it does not automatically make them experts. Leaders often assume that access to powerful AI tools alone will upskill employees, but achieving expertise still requires human learning, context, judgement and feedback. Effective AI strategy should integrate AI with training, mentorship and real-world practice to truly enhance capability rather than create only a superficial sense of competence.


Harvard Business Review 9 February 2026

AI Doesn't Reduce Work — It Intensifies It

Research suggests that rather than lightening workloads, AI adoption can intensify work demands by raising expectations, increasing complexity and shifting effort in unexpected ways.

What This Means for AI Strategy and Training
  • Prepare for AI to reshape workloads rather than simply reduce them
  • Set realistic expectations about how AI changes effort and output demands
  • Train teams to manage the hidden costs of AI-augmented work
  • Monitor wellbeing as AI shifts the nature and intensity of tasks


Summary

The article challenges the assumption that AI reduces work. While AI can automate routine tasks like drafting documents and summarising information, research suggests it often intensifies work by raising output expectations, increasing complexity and creating new demands. Rather than freeing time for high-value tasks, AI adoption can shift effort in ways organisations do not anticipate. The findings highlight the need for realistic planning around how AI changes work, not just whether it is adopted.


Bloomberg 4 February 2026

What's Behind the 'Saaspocalypse' Plunge in Software Stocks?

Software stocks are sliding as investors question SaaS growth models amid AI disruption, pricing pressure and shifting enterprise spending patterns.

Read what our CEO Chris Hornby has to say on this topic

What This Means for AI Strategy and Training
  • Expect pressure on SaaS firms without clear AI differentiation
  • Reassess pricing as AI lowers costs and raises competition
  • Prioritise AI integration that drives real customer gains
  • Prepare for tighter scrutiny of growth and profitability


Summary

The article examines the sharp decline in software stocks, dubbed the 'Saaspocalypse', as investors rethink the sustainability of traditional SaaS models. Slowing growth, higher interest rates and AI-driven disruption are compressing valuations. Generative AI is lowering barriers to entry, intensifying competition and challenging premium pricing. Companies that cannot clearly demonstrate durable differentiation, strong margins and meaningful AI integration face greater pressure in a more sceptical market environment.


The Next Recession 3 February 2026

AI and Creative Destruction

While AI has the potential to boost productivity, it will also disrupt existing jobs, industries and business models – and the gains from AI are unlikely to be evenly distributed.

What This Means for AI Strategy and Training
  • Treat AI as a structural shift, not just a productivity tool
  • Invest in reskilling and redeployment, not only efficiency gains
  • Prepare leaders and teams for uneven and disruptive change
  • Use training to help organisations adapt roles and work models over time


Summary

The article examines AI through the lens of creative destruction, arguing that while AI has the potential to boost productivity, it will also disrupt existing jobs, industries and business models. The piece challenges overly optimistic narratives, suggesting that gains from AI are unlikely to be evenly distributed and may take time to materialise. It highlights the risk that organisations and economies focus on technology adoption without sufficient attention to how work, skills and value creation are reshaped in practice. Learning and adaptation matter, as workers and firms will need to adjust roles, capabilities and expectations as AI-driven change unfolds.


Bernard Marr 3 February 2026

The AI Trust Paradox: Why Confidence Is Rising Faster Than Readiness

While leaders and employees may feel optimistic about AI's potential, many organisations lack the skills, governance and processes needed to deploy it responsibly and effectively.

What This Means for AI Strategy and Training
  • Treat trust as a capability to be built, not assumed
  • Invest in training that explains how AI works and where it fails
  • Build organisational readiness alongside AI enthusiasm
  • Use learning to align confidence with real competence and oversight


Summary

The article explores a growing paradox in AI adoption: confidence in AI is increasing faster than organisations' actual readiness to use it well. While leaders and employees may feel optimistic about AI's potential, many organisations lack the skills, governance and processes needed to deploy it responsibly and effectively. This gap creates risk, as misplaced trust can lead to poor decisions, over-reliance on AI outputs, or unaddressed failures. The article notes that trust must be earned through experience, transparency and understanding. Training and learning are essential to help people understand AI's limits, evaluate outputs critically and build informed confidence rather than blind trust.

click to return...

BCG 3 February 2026

Competing for the AI-Empowered Insurance Customer

Also read this: AI in Insurance: Understanding the Implications for Investors

As AI assistants reshape how customers discover and choose insurance, insurers must rethink distribution, visibility and the role of human expertise.

What This Means for AI Strategy and Training
  • • Prepare teams for AI-mediated customer journeys
  • • Train employees to collaborate with AI assistants
  • • Redesign customer workflows to blend automation with judgement
  • • Build confidence to adapt as AI becomes a primary interface

Click for article summary...

Summary

The article explains how insurers must adapt to a future where AI assistants increasingly mediate how customers research, compare and purchase insurance, fundamentally reshaping distribution and customer journeys. It describes three waves of AI-influenced distribution – augmented, assisted and autonomous – and argues that insurers need to remain visible within AI-driven ecosystems while redesigning digital touchpoints for AI agents as well as people. Rather than treating AI as a threat to intermediaries, the article shows how AI can enhance personalisation and efficiency, while human judgement remains critical for complex decisions and trust-based interactions. Success depends not just on technology, but on organisational readiness, workflow redesign and sustained investment in skills so teams can work effectively alongside AI.

click to return...

Harvard Business Review 2 February 2026

9 Trends Shaping Work in 2026 and Beyond

How AI, new work models and shifting expectations are redefining what work looks like – and what organisations must adapt to next.

What This Means for AI Strategy and Training
  • • Treat AI as a catalyst for redesigning work, not just improving efficiency
  • • Invest in continuous learning to keep pace with changing roles and skills
  • • Prepare managers to lead hybrid human–AI teams effectively
  • • Align AI strategy with broader workforce, wellbeing and performance goals

Click for article summary...

Summary

The article outlines nine interlinked trends that are reshaping how work is organised, experienced and valued in 2026 and beyond. AI features prominently as a force changing roles, performance expectations and collaboration, but the article stresses that technology alone does not determine outcomes. Instead, success depends on how organisations redesign work, develop skills and support people through ongoing change. Themes include the rise of human–AI collaboration, growing skills gaps, evolving career paths and increased pressure on leaders to balance productivity with wellbeing. Learning and adaptation emerge as central capabilities, as employees are expected to continuously update skills while organisations rethink how work gets done in an AI-enabled environment.

click to return...

January 2026

Back to top
People Management January 2026

AI, Agents and Authenticity: How 2026 Will Rewrite Recruitment

AI agents will fundamentally reshape recruitment in 2026, shifting power and scale on both the candidate and employer side while raising new authenticity risks.

What This Means for AI Strategy and Training
  • • Treat AI in recruitment as a strategic capability, not just an efficiency tool
  • • Invest in AI literacy for recruiters, focusing on judgement, validation and ethical use
  • • Update training to address authenticity risks, including AI-generated and falsified content

Click for article summary...

Summary

The article argues that AI agents will fundamentally reshape recruitment in 2026, shifting power and scale on both the candidate and employer side. As AI tools become easier to use, candidates can deploy agents to search, match and apply for roles at scale, while employers use AI to screen and shortlist more efficiently. This increases speed and reach but also raises risks around authenticity, including AI-generated CVs, exaggerated experience and deepfakes. As a result, recruitment is moving away from CVs as proof and towards verification, skills assessment and human judgement.

click to return...

One Useful Thing January 2026

The Shape of AI: Jaggedness and Bottlenecks

AI systems deliver uneven performance across teams and functions, making it essential to understand where AI works well and where human intervention is needed.

What This Means for AI Strategy and Training
  • • Identify areas where AI performs unevenly and target support accordingly
  • • Train employees to interpret AI outputs and manage exceptions
  • • Build skills to address bottlenecks and optimise workflows alongside AI
  • • Use learning and experimentation to continuously improve AI-human collaboration

Click for article summary...

Summary

The article examines how AI adoption often faces uneven performance across teams, projects and functions. AI systems can deliver impressive results in some areas while creating bottlenecks or inefficiencies in others. Understanding where AI works well and where human intervention is needed is key to maximising impact. The piece emphasises that training and skills development help employees recognise limitations, optimise AI outputs, and collaborate effectively with AI tools, turning potential bottlenecks into opportunities for learning and improvement.

click to return...

Harvard Business Review January 2026

A Systematic Approach to Experimenting with Gen AI

Effective AI experimentation requires structured, hypothesis-driven approaches linked to real business problems rather than ad-hoc pilots.

What This Means for AI Strategy and Training
  • • Shift from isolated AI pilots to structured, repeatable experimentation
  • • Train teams to frame hypotheses and evaluate AI performance rigorously
  • • Build shared learning loops so insights scale across the organisation
  • • Use experimentation as a capability-building tool, not just a proof of concept

Click for article summary...

Summary

The article argues that organisations should move from ad-hoc experimentation to a systematic approach. Effective experimentation is structured, hypothesis-driven and linked to real business problems. Teams learn faster when experiments test specific assumptions, compare human and AI performance, and capture reusable insights. Building learning loops – sharing results and refining use cases – develops internal capability. Skills and training are essential to design good experiments and interpret AI outputs critically.

click to return...

Harvard Business Review January 2026

Match Your AI Strategy to Your Organisation's Reality

AI strategies fail when ambitions outpace organisational reality, requiring leaders to align goals with actual data quality, technical maturity and workforce capabilities.

What This Means for AI Strategy and Training
  • • Ground AI strategy in a realistic assessment of organisational readiness
  • • Focus on value-chain areas where data, ownership and capabilities are strongest
  • • Invest in skills and learning to close gaps between ambition and execution
  • • Use early successes to build confidence and capability for broader AI adoption

Click for article summary...

Summary

The article argues that AI strategies fail when ambitions outpace organisational reality. Leaders should align AI goals with the parts of the value chain they control and technologies they can manage. This means being honest about data quality, technical maturity and workforce capabilities. Progress comes from focusing on high-value use cases where AI can be embedded and scaled. Skills, learning and organisational readiness are critical – without them, even well-funded initiatives struggle to deliver impact.

click to return...

One Useful Thing 27 January 2026

Management as AI Superpower: Thriving in a World of Agentic AI

Management skills like delegating, scoping problems and evaluating work are becoming the key to working effectively with AI agents.

What This Means for AI Strategy and Training
  • • Develop management and delegation as core AI skills
  • • Train leaders to scope problems and evaluate AI outputs
  • • Recognise that "soft skills" are now critical for AI success
  • • Use management frameworks as effective AI prompts

Click for article summary...

Summary

The article argues that management skills are becoming the key to working effectively with AI agents. MBA students created startups in four days using AI not because they were technical experts, but because they knew how to delegate, scope problems, and evaluate work. Traditional management frameworks like requirements documents and shot lists work remarkably well as AI prompts. The skills often dismissed as "soft" – giving clear instructions, providing feedback, recognising quality work – are now the hard skills that matter. Success with AI depends less on clever prompting and more on knowing what you want and explaining it clearly.

click to return...

World Economic Forum 26 January 2026

Davos Summit: Why Scaling AI Feels Hard – and What to Do About It

Many organisations struggle to move beyond AI pilots because the challenge is aligning data, processes, people and decision-making, not the technology itself.

What This Means for AI Strategy and Training
  • • Focus on organisational readiness, not just technical deployment
  • • Train teams to integrate AI into real workflows and decisions
  • • Build change capability alongside AI capability

Click for article summary...

Summary

Drawing on insights shared at the World Economic Forum's Davos Summit, the article explains why many organisations struggle to move beyond AI pilots and achieve impact at scale. The challenge is rarely the technology itself, but the difficulty of aligning data, processes, people and decision-making across the organisation. The article highlights that without the right skills, incentives and organisational support, AI initiatives stall. Training and learning play a critical role in building confidence, reducing friction and enabling teams to integrate AI into everyday workflows.

click to return...

Fortune 23 January 2026

Cursor's AI Agent Swarm Built a Web Browser – With No Human Help

Hundreds of AI agents coordinated autonomously for a week to build a working web browser, signalling a shift from single-task AI towards sustained, team-based autonomous work.

Read what happened when our Learning Director Philippa Cameron tried her hand at using Cursor...

What This Means for AI Strategy and Training
  • • Prepare for AI systems that sustain complex, autonomous work over days or weeks
  • • Explore how agent orchestration could reshape software delivery and project management
  • • Treat multi-agent coordination as an emerging capability with broad implications
  • • Assess readiness for a future where AI teams, not just tools, take on entire projects

Click for article summary...

Summary

The article reports on Cursor's experiment in which hundreds of AI agents, powered by OpenAI, autonomously built a web browser over a week with no human intervention. Agents were organised into planners, workers and judges, coordinating across millions of lines of code. While the result was incomplete and not production-ready, the experiment demonstrates that AI can now sustain complex, open-ended work far longer than before, pointing towards a future where autonomous AI teams take on entire projects.

click to return...

Harvard Business Review 23 January 2026

How to Articulate Your Contributions as a Senior Leader

Leaders should explain their impact in terms of direction-setting to build trust.

What This Means for AI Strategy and Training
  • • Help leaders frame AI contributions as direction-setting
  • • Clarify how AI decisions connect to business outcomes
  • • Equip leaders to explain AI investment priorities
  • • Build consistent narratives on how AI fits into strategy

Click for article summary...

Summary

The article argues that senior leaders often underestimate the importance of clearly articulating their own contributions, especially when their work is less visible and more strategic. It outlines how leaders can explain their impact by linking decisions, trade-offs and long-term thinking to tangible outcomes for the organisation. Rather than listing activities, effective leaders frame their contribution in terms of direction-setting, enabling others, and managing complexity. In periods of change – including AI-driven transformation – this clarity helps teams understand priorities, reduces uncertainty, and builds trust in leadership judgement.

click to return...

McKinsey & Company 22 January 2026

How the Best CEOs Are Meeting the AI Moment

Leading CEOs treat AI as a catalyst to reimagine processes and organisational design, with impact driven by business transformation rather than technology alone.

What This Means for AI Strategy and Training
  • • Position AI as a business transformation, not a tech rollout
  • • Build AI fluency through hands-on learning, starting at the top
  • • Invest in training that develops judgement, not just skills
  • • Redesign roles to support human–AI collaboration at scale

Click for article summary...

Summary

This podcast transcript frames AI as a defining leadership moment, arguing that its impact is driven far more by business transformation than by technology alone. CEOs who are making progress treat AI as a catalyst to reimagine processes, decision-making and organisational design, rather than something to be bolted on. Agentic AI is accelerating change, flattening hierarchies and shifting value towards judgement, learning and adaptability. A recurring theme is fluency – leaders and employees alike must actively learn through hands-on use, experimentation and curiosity. Training, access to tools and shared learning spaces are critical to moving from scattered experimentation to sustained value creation, while governance and responsible AI practices must scale alongside adoption.

click to return...

BCG 22 January 2026

The AI-First Life Insurance Company

AI-first companies redesign end-to-end workflows around AI capabilities from the outset, rather than layering AI onto existing processes.

What This Means for AI Strategy and Training
  • • Redesign workflows around AI, not layer tools onto old processes
  • • Show how AI-first design drives faster, more personalised outcomes
  • • Equip leaders to explain why AI-first requires sustained change

Click for article summary...

Summary

The article explores what it really means to be an AI-first organisation, using life insurance as a concrete example. Rather than layering AI onto existing processes, AI-first companies redesign end-to-end workflows around AI capabilities from the outset. This enables faster decisions, more personalised products, and lower operating costs, while shifting human effort toward judgement, exceptions, and customer relationships.

click to return...

Harvard Business Review 20 January 2026

Your Team Is Anxious About AI – Here's How to Talk to Them About It

Fear of AI often stems from uncertainty about job security and changing roles, and leaders who avoid these conversations make anxiety worse.

What This Means for AI Strategy and Training
  • • Treat employee anxiety as a strategic risk, not a soft issue
  • • Equip managers to have informed, honest conversations about AI
  • • Use training to build confidence and reduce uncertainty
  • • Involve teams in AI adoption to increase trust and engagement

Click for article summary...

Summary

The article explores why anxiety about AI is widespread among employees and how leaders can address it constructively. Fear often stems from uncertainty about job security, changing roles and a lack of understanding about how AI will be used. The article argues that avoiding these conversations makes anxiety worse. Instead, leaders should talk openly about what AI will and will not do, acknowledge legitimate concerns, and involve teams in shaping how AI is adopted. Training and learning are central to reducing fear, helping employees build confidence, develop new skills and see how AI can support their work rather than replace it.

click to return...

PwC 19 January 2026

PwC 2026 Global CEO Survey

CEOs' confidence in revenue growth has hit a five-year low – and uneven AI returns are emerging as a defining divide between leaders and laggards.

What This Means for AI Strategy and Training
  • • Train teams to integrate AI into core products and decisions
  • • Build strong AI foundations before scaling
  • • Use learning to help leaders navigate volatility and disruption

Click for article summary...

Summary

The report from PwC's 29th Global CEO Survey reveals that CEOs increasingly see technological change, including AI adoption, as a central challenge; 42% cite keeping pace with tech transformation as a top concern. Despite heavy AI investment, only a small minority of organisations report that AI has delivered both cost savings and revenue gains, and more than half say they have seen no significant financial benefit from AI to date. The findings highlight a widening gap between companies that have built strong AI foundations and embedded AI across products, services and decision-making and those still struggling to scale.

click to return...

OECD 19 January 2026

OECD Digital Education Outlook 2026

Generative AI is transforming education, but its impact depends on purposeful integration and pedagogical guidance.

What This Means for AI Strategy and Training
  • • Train educators to integrate GenAI into lessons and assessment
  • • Treat AI adoption as a capability challenge, not just a tech one
  • • Build foundational systems and ethical practices before scaling
  • • Support educators to navigate change and evolving needs

Click for article summary...

Summary

The OECD Digital Education Outlook 2026 examines how GenAI is reshaping teaching, learning, assessment and educational administration worldwide. While GenAI tools are increasingly accessible and can produce high-quality outputs, the report finds that without guidance, students risk offloading cognitive effort, reducing engagement and long-term skill acquisition. The report emphasises that teachers remain central, with AI serving as an augmenting tool rather than a replacement. Education systems must invest in human-centred policies, teacher training, adaptive infrastructure, and inclusive access to ensure AI supports meaningful learning and equity.

click to return...

World Economic Forum 19 January 2026

The AI Perception Gap: How to Ensure Employers and Workers Are Ready for Transformation

Professionals underestimate AI's impact on their own roles, creating a perception gap that slows upskilling and leaves organisations unprepared.

What This Means for AI Strategy and Training
  • • Align training with role-relevant AI expectations
  • • Provide personalised AI learning, not generic reskilling
  • • Embed upskilling into the flow of work
  • • Foster continuous learning to navigate rapid AI change

Click for article summary...

Summary

The article argues that professionals underestimate AI's impact on their own roles, creating a perception gap that slows upskilling. Workers often misjudge their skills and delay learning, while organisations fail to provide structured, personalised training. Optimism bias leads employees to assume their roles are safe from disruption. Well-designed, purpose-driven AI training drives high engagement. Success requires balancing technical AI fluency with soft, adaptive skills like communication and critical thinking. Proactive upskilling in both technical and soft skills is essential for employees and organisations to thrive.

click to return...

Le Monde 16 January 2026

Why Yann LeCun Is Leaving Meta to Launch His Own AI Start‑Up

AI pioneer Yann LeCun is leaving Meta to develop next-generation AI systems that understand the physical world through reasoning, planning and persistent memory.

What This Means for AI Strategy and Training
  • • Reassess long‑term AI strategy beyond language models to focus on reasoning‑capable architectures
  • • Prioritise training that builds foundational AI skills in world modelling, reasoning and multimodal learning
  • • Prepare talent for emerging AI paradigms, not just current industry trends

Click for article summary...

Summary

The article discusses AI pioneer Yann LeCun's decision to leave Meta and launch an independent AI start‑up focused on next‑generation AI systems that understand the physical world. LeCun argues that current large language model approaches are limited in reasoning and real‑world understanding. His venture will develop world models capable of reasoning, planning and persistent memory for industrial, robotics and decision‑making applications. LeCun emphasises openness and diversity in AI research, pushing back against short‑term product strategies and advocating long‑term foundational work that can redefine how AI is built and trained.

click to return...

BCG 15 January 2026

As AI Investments Surge, CEOs Take the Lead

As AI investment accelerates, leadership ownership becomes a decisive factor in whether organisations see real returns.

What This Means for AI Strategy and Training
  • • Anchor AI strategy in clear leadership ownership and accountability
  • • Invest in training to build confidence and capability across the organisation
  • • Align AI investments with business priorities, not isolated pilots
  • • Treat skills, learning and change as central to scaling AI successfully

Click for article summary...

Summary

The article argues that as AI investment accelerates, leadership ownership becomes a decisive factor in whether organisations see real returns. AI can no longer be treated as a purely technical or IT-led initiative. Instead, senior leaders are stepping in to set direction, prioritise use cases, and ensure AI efforts are aligned with business strategy. The article highlights that many AI programmes still struggle because organisations lack the skills, structures and confidence to scale. Training, learning and capability-building are essential to help leaders and teams move from experimentation to sustained value creation.

click to return...

European Commission 15 January 2026

EU Invests Over €307 Million in Artificial Intelligence and Related Technologies

The EU is investing over €307 million in AI to strengthen Europe's ecosystem, with skills development positioned as essential to translating investment into impact.

What This Means for AI Strategy and Training
  • • Align organisational AI strategies with emerging public investment priorities
  • • Invest in skills and training to take advantage of funded AI innovation
  • • Build capability to adopt AI responsibly within regulatory frameworks
  • • Treat workforce readiness as critical to turning investment into impact

Click for article summary...

Summary

The article outlines the European Union's decision to invest more than €307 million in artificial intelligence and related technologies as part of its broader digital strategy. The funding is aimed at strengthening Europe's AI ecosystem, supporting research, innovation, and the adoption of AI across sectors. A key focus is ensuring that organisations and workers are equipped to use AI responsibly and effectively, alongside investments in infrastructure and governance. Skills development, training and capability-building are positioned as essential to translating public investment into real economic and societal impact.

click to return...

World Economic Forum 14 January 2026

Cybersecurity Paradox: Training the Next-Generation Workforce

AI strengthens security but also introduces new vulnerabilities, requiring organisations to manage human-AI interactions and build workforce trust frameworks.

What This Means for Cybersecurity Strategy and Training
  • • Implement trust frameworks for human–AI collaboration
  • • Prioritise AI literacy and upskilling across the workforce
  • • Train teams to detect and respond to AI-targeted attacks
  • • Integrate continuous learning to close the cybersecurity skills gap

Click for article summary...

Summary

The article argues that as AI becomes embedded in enterprise workflows, it creates a paradox – it strengthens security but also introduces new vulnerabilities. Attackers can exploit AI agents, manipulate data, or target over-reliance on AI outputs, while human behaviour remains a central risk. Organisations must manage human–AI interactions, not just systems. Workforce trust frameworks – focusing on reliability, accountability, transparency and ethical alignment – are essential. Training and AI literacy are critical so employees can evaluate AI outputs, detect manipulation, and apply cybersecurity principles in AI-augmented environments.

click to return...

IEEE Spectrum 13 January 2026

AI Mistakes Are Inevitable – Here Is How to Handle Them

AI systems will inevitably make errors, and organisations must prepare employees to detect mistakes, evaluate outputs critically and respond appropriately.

What This Means for AI Strategy and Training
  • • Train employees to recognise, assess, and respond to AI errors
  • • Build skills to critically evaluate AI outputs and system behaviour
  • • Integrate human oversight into AI workflows to reduce risk
  • • Foster a learning culture that treats AI mistakes as opportunities for improvement

Click for article summary...

Summary

The article explains that AI systems will inevitably make errors, and organisations must prepare to manage them effectively. Mistakes often stem not from flawed algorithms, but from gaps in oversight, process design, or user understanding. The piece emphasises that training, skill development and capability-building are essential so employees can detect errors, evaluate AI outputs critically, and respond appropriately. By combining human judgement with AI systems, companies can minimise risk and maximise the value of AI adoption.

click to return...

World Economic Forum 9 January 2026

Agentic AI: How Human Purpose Can Guide the Next Wave of Intelligent Systems

As AI becomes more autonomous, human purpose becomes more important for setting goals, supervising behaviour and intervening when systems act unexpectedly.

What This Means for AI Strategy and Training
  • • Clarify organisational purpose and values before scaling agentic AI
  • • Train leaders and teams to supervise, guide and correct autonomous systems
  • • Build skills in judgement, oversight and ethical decision-making
  • • Treat agentic AI as an organisational capability challenge, not just a technical one

Click for article summary...

Summary

The article explores agentic AI systems that act autonomously and make decisions with limited human input. As AI becomes more agentic, human purpose becomes more important, not less. Without clear intent, values and direction, organisations risk deploying systems that optimise for the wrong outcomes. Guiding agentic AI requires more than technical controls – it depends on human judgement, ethical clarity and organisational capability. Training is critical so people can set goals, supervise AI behaviour, and intervene when systems act unexpectedly.

click to return...

One Useful Thing 8 January 2026

Claude Code and What Comes Next

Modern AI supports extended problem-solving and iterative building, breaking work into smaller testable steps and encouraging rapid learning.

What This Means for AI Strategy and Training
  • • Encourage experimentation with AI for sustained, iterative work
  • • Train teams to think in workflows, not single prompts
  • • Expand AI training beyond technical roles
  • • Build confidence through learning-by-doing

Click for article summary...

Summary

The article argues that today's AI systems can do real, sustained work beyond one-off prompts. Modern AI supports extended problem-solving, experimentation and iterative building – particularly powerful for programmers and those who are programming-adjacent. AI is changing how tasks are approached, breaking work into smaller, testable steps and encouraging rapid learning. Skills development and hands-on exploration are essential to harness this new mode of working effectively.

click to return...

Harvard Business Review 8 January 2026

What Companies that Excel at Strategic Foresight Do Differently

Organisations that excel at strategic foresight systematically scan for weak signals, consider multiple futures and embed foresight into decision-making.

What This Means for AI Strategy and Training
  • • Use AI to support continuous environmental scanning and pattern detection
  • • Train leaders to interpret AI-generated signals through a strategic foresight lens
  • • Embed AI-enabled foresight into core strategy and decision-making processes

Click for article summary...

Summary

The article explores what organisations that excel at strategic foresight do differently when navigating uncertainty. Instead of relying on single forecasts, they systematically scan for weak signals, consider multiple plausible futures and embed foresight into decision-making. AI can support this work by detecting emerging patterns earlier, while human judgement interprets what those signals mean. Strong foresight is as much about mindset as process – seeing uncertainty as something to engage with, not avoid.

click to return...

Gartner 6 January 2026

What AI in the Workforce Actually Means for Your Organization

AI will fundamentally reshape jobs and skills, making workforce readiness - not automation - the critical factor for organisational success.

What This Means for AI Strategy and Training
  • • Redesign roles around human–AI collaboration, not just task automation
  • • Treat reskilling and upskilling as a continuous capability as skills cycles shorten
  • • Put governance in place for privacy, consent and digital identity as AI agents scale

Click for article summary...

Summary

Gartner argues that AI's biggest workforce impact will come from job redesign and accelerated skills change rather than widespread job loss. As AI takes on routine and analytical work, human capabilities such as judgement, creativity and leadership become more valuable. The article stresses that organisations must prioritise continuous learning, proactive workforce planning and clear governance to ensure AI delivers sustainable value while supporting employees through rapid transformation.

click to return...

Cambridge Judge Business School 6 January 2026

A Look Ahead at 2026: Predictions and Hopes

Progress with AI will depend on how well humans guide and govern powerful technologies, with judgement, values and responsibility shaping positive outcomes.

What This Means for AI Strategy and Training
  • • Build future-focused skills that combine technological understanding with judgement
  • • Prepare leaders and teams to navigate uncertainty and rapid change
  • • Embed learning and reflection into AI strategy, not just execution
  • • Use foresight to shape responsible, human-centred AI adoption

Click for article summary...

Summary

The article looks ahead to 2026, exploring how technological change – including AI – is likely to shape organisations, decision-making and society. It highlights both optimism and caution, stressing that progress will depend on how well humans guide and govern powerful technologies. Rather than focusing only on technical capability, the article emphasises the role of judgement, values and responsibility in shaping positive outcomes. Learning and capability-building matter, as leaders and employees alike will need to adapt their thinking, skills and behaviours to keep pace with change.

click to return...

Harvard Business Review 6 January 2026

Why AI Boosts Creativity for Some Employees but Not Others

AI increases creativity for employees who use metacognition to reflect on problems, question outputs and deliberately adjust their approach, while passive users see little benefit.

Read about our microlearning course on AI and Metacognition

What This Means for AI Strategy and Training
  • • Invest in metacognitive training to help employees use AI more creatively
  • • Teach people how to question, refine and build on AI outputs
  • • Shift AI training beyond tools and prompts to thinking skills and self-awareness

Click for article summary...

Summary

The article explains why AI increases creativity for some employees but not for others. The difference lies less in access to AI tools and more in how people think about and manage their own thinking. Employees who reflect on problems, question AI outputs and deliberately adjust their approach tend to use AI in more creative and exploratory ways. Others use AI passively and see little benefit. The article argues that training in metacognition – learning how to plan, monitor and evaluate one's thinking – can significantly improve creative outcomes when working with AI.

click to return...

BCG 6 January 2026

Strategies to Tackle the AI Skills Gap

Many companies struggle to find employees with the right AI capabilities, highlighting the need for targeted reskilling aligned with business priorities.

What This Means for AI Strategy and Training
  • • Develop structured reskilling and upskilling programmes to close AI skill gaps
  • • Align training initiatives with organisational priorities and AI strategy
  • • Use workforce planning to anticipate future AI capability needs
  • • Invest in continuous learning to maintain and scale AI proficiency across teams

Click for article summary...

Summary

The article examines the growing AI skills gap and how organisations can address it strategically. Many companies struggle to find employees with the right capabilities to implement and scale AI initiatives. The report highlights the need for targeted reskilling, upskilling and learning programmes, combined with workforce planning and recruitment strategies. Organisations that take a proactive approach – aligning skill development with business priorities and providing structured training – are better positioned to extract value from AI investments and sustain long-term competitive advantage.

click to return...

BCG 6 January 2026

How AI Is Paying Off in the Tech Function

Technology functions are realising tangible AI benefits by integrating it into workflows and ensuring employees have the skills to use it effectively.

What This Means for AI Strategy and Training
  • • Embed AI into workflows while providing clear guidance and support
  • • Invest in training and reskilling to enable effective AI adoption
  • • Develop capabilities for both operational efficiency and strategic innovation
  • • Build teams' confidence in using AI to improve outcomes and create value

Click for article summary...

Summary

The article explores how technology functions are realising tangible benefits from AI, from automating routine tasks to improving decision-making and product development. Success depends on integrating AI into workflows, aligning teams around clear goals, and ensuring employees have the skills to use AI effectively. The piece highlights that training, reskilling and capability-building are critical to scaling AI impact, enabling tech teams to move from operational efficiency to strategic innovation. Organisations that invest in both technology and people see faster adoption and higher returns from AI initiatives.

click to return...

BCG 2 January 2026

Product Teams Make AI Sales Agents Smarter

AI enhances sales performance when product and sales teams work together to refine models with context, feedback and human oversight.

What This Means for AI Strategy and Training
  • • Train teams to work alongside AI with feedback and context
  • • Develop skills to interpret and act on AI outputs
  • • Encourage collaboration between product, sales and AI teams
  • • Build learning loops to refine AI performance

Click for article summary...

Summary

The article explains how AI can enhance sales performance when product and sales teams work together to refine and guide AI systems. Rather than relying solely on AI recommendations, teams can iteratively improve models by providing context, feedback and human oversight. This collaboration ensures AI agents deliver more accurate, actionable insights while aligning with business goals. The piece highlights that training and skill development are essential, enabling employees to interpret AI outputs, make informed decisions, and continuously improve AI-driven processes.

click to return...

Evergreen Articles

Back to top

Earlier articles that remain highly relevant

One Useful Thing 2023

Centaurs and Cyborgs on the Jagged Frontier

What This Means for AI Strategy and Training
  • • Train people to recognise where AI adds value – and where it does not
  • • Design tasks and workflows deliberately around AI strengths and limits
  • • Build judgement and experimentation into AI training, not just tool use
  • • Encourage flexible human–AI collaboration models rather than one-size-fits-all approaches

Click for article summary...

Summary

The article explores how people work with AI on what is described as the “jagged frontier” of capability – where AI performs extremely well at some tasks and poorly at others. It distinguishes between two collaboration models: centaurs, where humans and AI divide tasks, and cyborgs, where work is tightly interwoven. Performance gains depend less on the tool itself and more on how tasks are designed and how well people understand AI’s strengths and limits. The article highlights that without the right judgement, users can be misled by AI’s uneven performance. Learning and capability-building are essential so individuals can choose the right collaboration model, adapt workflows and use AI in ways that genuinely improve outcomes.

click to return...

Wes Kao 2020

Spiky point of view: Let's get a little controversial

What This Means for AI Strategy and Training
  • • Develop and articulate spiky points of view rather than producing AI-polished, consensus-friendly content
  • • Train leaders to distinguish genuine, defensible insight from generic ideas with rounded edges
  • • Use the SPOV framework to sharpen thinking before feeding ideas into AI for refinement
  • • Recognise that conviction and authentic voice are qualities AI cannot generate on your behalf

Click for article summary...

Summary

Wes Kao argues that standing out in a noisy world requires developing a "spiky point of view" – a perspective you feel strongly about and will advocate for, even if others disagree. Unlike generic insight, a spiky POV is rooted in lived experience, conviction and authentic voice, making it almost impossible to imitate. It should challenge the audience to think differently, be defensible rather than universally agreed and reflect genuine belief rather than safe consensus. As AI commoditises generic content, a spiky POV becomes an increasingly rare and distinctive competitive advantage.

click to return...

Tiago Forte 2022

Building a Second Brain

What This Means for AI Strategy and Training
  • • Build a system to capture and organise AI-relevant knowledge so learning compounds over time
  • • Apply the CODE framework to AI training: capture insights, organise by use case, distil principles and express them in practice
  • • Reduce the cognitive load of staying current with AI by externalising and structuring information
  • • Create shared knowledge systems within teams to preserve institutional AI learning

Click for article summary...

Summary

Tiago Forte's Building a Second Brain is a widely adopted personal knowledge management system for capturing, organising and using information more effectively. Its CODE framework – Capture, Organise, Distil, Express – argues that the human brain is ill-suited to storing everything we need to know, and that we should externalise memory into a trusted digital system instead. The result: ideas compound over time, creative output improves and cognitive load falls. In an era of information overload and accelerating AI change, a reliable personal knowledge system has never mattered more to knowledge workers.

click to return...

McKinsey Quarterly: Digital Edition

Back to top

Vol. 62, No. 1, February 2026

Go to Digital Edition
McKinsey Quarterly Vol. 62, No. 1

The Agentic Organization: Contours of the Next Paradigm for the AI Era

What This Means for AI Strategy and Training
  • • Treat AI as an organisational design issue, not just a technology deployment
  • • Train leaders and teams to work effectively with autonomous AI agents
  • • Redesign roles, workflows, and accountability to include AI agents by default
  • • Embed AI governance and controls into everyday operating practices

Click for article summary...

Summary

This article describes the emergence of the "agentic organization," where humans work alongside autonomous AI agents to deliver end-to-end outcomes. Rather than using AI as a support tool, early adopters are redesigning operating models, decision rights, governance, and workflows around AI agents. The shift is positioned as the most significant organisational transformation since the industrial and digital revolutions, requiring new structures, skills, and leadership mindsets.

click to return...

McKinsey Quarterly Vol. 62, No. 1

Change Is Changing: How to Meet the Challenge of Radical Reinvention

What This Means for AI Strategy and Training
  • • Build continuous learning into AI adoption rather than one-off training
  • • Equip leaders to manage constant AI-driven change, not static implementations
  • • Develop organisational capability for rapid experimentation with AI tools
  • • Train managers to lead teams through ongoing AI-enabled reinvention

Click for article summary...

Summary

The article argues that traditional change management approaches are no longer sufficient in an era of continuous disruption. Organisations must move from episodic transformation programmes to ongoing reinvention. This requires new leadership capabilities, faster decision making, greater adaptability, and the ability to integrate technological change – especially AI – into the core of how change happens.

click to return...

McKinsey Quarterly Vol. 62, No. 1

Building the AI Muscle of Your Business Leaders

What This Means for AI Strategy and Training
  • • Prioritise AI literacy and fluency for senior and mid-level leaders
  • • Focus training on business problem-solving with AI, not technical depth alone
  • • Develop shared language between business and technical teams
  • • Embed AI capability into leadership development programmes

Click for article summary...

Summary

This article highlights that competitive advantage from AI comes less from technology itself and more from leaders who can connect business problems to AI possibilities. Many organisations underinvest in developing leaders' AI literacy, leaving a gap between technical teams and strategic decision makers. Building this "AI muscle" is framed as a core leadership responsibility.

click to return...

McKinsey Quarterly Vol. 62, No. 1

Deploying Agentic AI with Safety and Security: A Playbook for Technology Leaders

What This Means for AI Strategy and Training
  • • Train teams to recognise and manage agentic AI risks early
  • • Build AI safety, ethics, and security into standard training programmes
  • • Develop clear accountability for human oversight of AI agents
  • • Treat risk management as a core AI capability, not a compliance add-on

Click for article summary...

Summary

This article outlines the new risks introduced by agentic AI systems, including autonomy, escalation, and unintended behaviour. It argues that traditional risk frameworks are insufficient and proposes a proactive approach combining technical safeguards, governance, human oversight, and organisational readiness. Security and safety are positioned as enablers of scale, not blockers.

click to return...

McKinsey Quarterly Vol. 62, No. 1

'Saying "I Don't Know" Is One of the Hardest Things a Leader Can Do': A Conversation with Delta CEO Ed Bastian

What This Means for AI Strategy and Training
  • • Encourage leaders to model curiosity and learning around AI
  • • Normalise uncertainty as part of AI adoption and experimentation
  • • Train leaders to ask better questions of AI rather than seek certainty
  • • Align AI initiatives with long-term value rather than short-term hype

Click for article summary...

Summary

In this interview, Delta's CEO reflects on leadership through uncertainty, learning, and long-term thinking. The discussion reinforces the importance of humility, adaptability, and openness to change. These qualities are increasingly essential as AI reshapes industries and decision making.

click to return...

McKinsey Quarterly Vol. 62, No. 1

How Strategy Champions Win

What This Means for AI Strategy and Training
  • • Integrate AI explicitly into core strategy, not as a side initiative
  • • Train teams to translate AI ambition into executable actions
  • • Build strategic alignment between AI investments and business priorities
  • • Develop execution skills alongside AI vision and planning

Click for article summary...

Summary

This article examines why only a minority of organisations believe they have high-quality strategy. Successful "strategy champions" excel not only at bold strategic design but also at execution and mobilisation. The article stresses clarity, alignment, and sustained focus – capabilities increasingly challenged by rapid technological change.

click to return...

McKinsey Quarterly Vol. 62, No. 1

How to Get Your Operating Model Transformation Back on Track

What This Means for AI Strategy and Training
  • • Align AI initiatives tightly with operating model outcomes
  • • Train teams on how AI changes roles, processes, and decision flows
  • • Avoid treating AI as an overlay on broken operating models
  • • Build change capability alongside technical AI skills

Click for article summary...

Summary

This article explores why many operating model transformations fail and identifies six common pitfalls. Success depends on clear outcomes, disciplined execution, and alignment between structure, processes and capabilities – all of which come under increasing strain from AI-driven change.

click to return...

McKinsey Quarterly Vol. 62, No. 1

Humanoid Robots: Crossing the Chasm from Concept to Commercial Reality

What This Means for AI Strategy and Training
  • • Prepare workforces for collaboration with physical AI systems
  • • Train organisations on safety, ethics, and human-robot interaction
  • • Integrate robotics into broader AI and automation strategies
  • • Focus training on practical deployment, not speculative capability

Click for article summary...

Summary

The article examines the rapid progress of humanoid robots and the remaining barriers to large-scale commercial deployment. It argues that cost, reliability, integration, and workforce acceptance will determine adoption, rather than technological novelty alone.

click to return...

McKinsey Quarterly Vol. 62, No. 1

Jagged Little Pill – You Learn: Your AI Briefing

What This Means for AI Strategy and Training
  • • Train users to understand AI limitations, not just capabilities
  • • Encourage critical evaluation of AI outputs in everyday work
  • • Build literacy around model behaviour, bias, and failure modes
  • • Promote responsible, informed trust rather than blind reliance on AI

Click for article summary...

Summary

This AI briefing explains the "jagged frontier" of AI capability: models can perform extraordinarily well in some tasks while failing unexpectedly in others. By examining model and system cards, the article highlights risks such as hallucinations, deception, and misalignment, reinforcing the need for informed and critical AI use.

click to return...

McKinsey Quarterly Vol. 62, No. 1

Tackling the Healthcare Worker Shortage

What This Means for AI Strategy and Training
  • • Use AI to augment, not replace, skilled healthcare workers
  • • Train healthcare staff to work confidently with AI-enabled tools
  • • Design AI training around real workflow relief and patient outcomes
  • • Embed ethical and equity considerations into healthcare AI adoption

Click for article summary...

Summary

This article analyses the global healthcare workforce shortage and argues that solving it requires rethinking training, retention, and care delivery models. AI is presented as a potential enabler in reducing administrative burden, supporting diagnostics, and empowering patients, but not a substitute for systemic reform.

click to return...