The Tool Was Available But Nobody Used It

Why access to generative AI never guarantees organizational adoption

Time to Complete: 30 minutes


Who This Is For: This lesson is for digital transformation leads, innovation managers, R&D directors and HR business partners who work inside established industrial organizations and are responsible for rolling out new technology to employees who were not consulted about the decision. It is equally relevant for organizational change consultants, internal strategy teams and department heads who have already deployed generative AI tools and are now wondering why adoption remains low despite strong investment in licenses and training. Graduate students in business, management science and engineering management will find the lesson directly applicable to the gap between technology deployment and behavioral change that their future employers are actively trying to close. Researchers in innovation management and organizational behavior will recognize the empirical grounding in Design Science Research and ethnographic fieldwork as a foundation for the theoretical arguments made throughout. The shared challenge across all of these roles is a deceptively simple one: an organization provided employees with powerful AI tools and watched most of those tools go unused.

Real-World Applications

Scania, the Swedish heavy-duty vehicle manufacturer with nearly 59,000 employees operating in over 100 countries, deployed ChatGPT Enterprise across its workforce in February 2025. Before that deployment, an internal survey revealed that between fifteen and thirty percent of employees had used ChatGPT, while sixty percent reported never using generative AI for work. Despite the company providing licenses, training tutorials, weekly Q&A sessions with OpenAI experts and dedicated digitalization conferences, many license holders remained inactive after rollout. The Research and Innovation Office -- the organizational unit charged with exploring transformational technologies including autonomous vehicles, electrification and connected transport systems -- faced the same adoption barriers as the rest of the company, with the added complication that its decentralized decision-making culture actively resisted top-down mandates. This case provides practitioners with a concrete, documented benchmark for diagnosing adoption gaps in their own organizations and gives academic researchers an empirical reference point for studying the interaction between technology acceptance, organizational diffusion and change management in large industrial settings. 

The Problem and Its Relevance

Organizations are not failing to adopt AI because the technology is hard to understand. They are failing because every governance structure, cost allocation model and time management norm inside a large company was designed for a world where change happened slowly enough for committees to manage it. When a monthly software license fee is charged back to individual business units without a visible return on investment, an eighty-euro charge becomes a structural barrier to adoption regardless of how transformative the underlying tool might be. The problem is not that leaders do not believe in AI but that belief does not exempt anyone from the calculus of competing priorities, and in that calculus, a tool no one has personally experienced always loses to a meeting that is already on the calendar. 

The second problem is less visible and more dangerous. When AI adoption proceeds as a bottom-up movement without top-down direction, the result is not democratization but fragmentation. Early adopters build impressive pilots that demonstrate individual-level value. Those pilots stall at the boundary of their own team because scaling any AI tool requires integrated data infrastructure, and most large organizations store their institutional knowledge across dozens of disconnected systems. No retrieval-augmented generation solution can surface what it cannot find. The organizations celebrating early wins are often simultaneously building the conditions that will prevent those wins from ever reaching the rest of the company. 

Core Concepts

Read each definition before beginning the lesson activity.  

Generative AI

Generative AI refers to AI systems that produce original outputs such as text, code and summaries based on user prompts. Unlike traditional AI that classifies or predicts from fixed categories, generative AI creates content by drawing on patterns learned from large-scale training data. Tools like ChatGPT Enterprise and Microsoft 365 Copilot are the generative AI systems examined in this lesson. Productivity gains of up to twenty-five percent have been reported for specific task types when generative AI is used effectively, but these gains depend on task type, user expertise and whether the tool is used at all.

Technology Acceptance Model (TAM)

TAM identifies two primary factors that determine whether an individual will adopt a new technology: perceived usefulness, meaning the degree to which a person believes the tool will improve their job performance, and perceived ease of use, meaning how effort-free they expect interaction with the tool to be. When employees ask 'What is it good for?' or 'How am I supposed to use it?', they are expressing low scores on both constructs. TAM predicts that tools perceived as difficult and of uncertain value will not be adopted regardless of their actual capabilities.

Unified Theory of Acceptance and Use of Technology (UTAUT)

UTAUT extends TAM by adding performance expectancy, effort expectancy, social influence and facilitating conditions as predictors of adoption behavior. Social influence is particularly important: managers who lack personal experience with AI tools cannot model their use for their teams, and that modeling gap is one of the most consistent adoption barriers identified across organizations. Facilitating conditions, the organizational and technical resources that support use, matter less than expected when licenses exist but learning time does not.

Innovation Diffusion Theory (IDT)

IDT describes how innovations spread through a population over time, following a predictable pattern from innovators through early adopters, early majority, late majority and laggards. It identifies five factors that determine adoption speed: relative advantage over existing practices, compatibility with existing values and workflows, complexity of use, trialability (the ability to experiment safely) and observability of results. In the Scania case, trialability was present through sandbox environments and hackathons, but observability remained low because successful AI use cases rarely traveled beyond the teams that produced them.

Retrieval-Augmented Generation (RAG)

RAG is a method that combines a generative AI model with an information retrieval system, allowing the model to search external documents before generating a response. This means the AI can answer questions based on specific organizational files rather than relying solely on its training data. RAG is particularly relevant for R&I offices that hold large volumes of project documentation in multiple formats including PowerPoint, PDF and Word. A RAG system is only as useful as the data it can access, which makes fragmented file structures a direct technical constraint on AI effectiveness.
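The retrieve-then-generate pattern can be sketched in a few lines. This is a minimal illustration of the RAG idea only: the document store, the keyword-overlap scoring and the prompt template are all illustrative assumptions, not Scania's actual system, and a production RAG pipeline would use embedding-based retrieval and a real model call.

```python
# Minimal sketch of the RAG pattern: retrieve relevant internal documents
# first, then hand them to the generative model as added context.
# Everything here (document names, scoring, prompt wording) is illustrative.

def retrieve(query, documents, top_k=2):
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda kv: len(q_terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def build_prompt(query, documents):
    """Assemble the augmented prompt the model would actually receive."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical organizational files in mixed formats, flattened to text.
docs = {
    "report_2024.pdf": "pilot results show autonomous vehicle routing gains",
    "hr_policy.docx": "license fees are charged back to each business unit",
}
prompt = build_prompt("how are license fees charged", docs)
```

The fragmentation point follows directly: if `hr_policy.docx` lives in a system the retriever cannot reach, the model never sees it, no matter how capable the model is.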

Lewin's Change Management Model

Lewin describes organizational change as a three-step process. Unfreezing involves creating awareness that current practices need to change. Changing involves implementing new behaviors and workflows. Refreezing involves stabilizing the new practices as organizational norms. The Scania Research and Innovation Office shows strong evidence of the unfreezing stage: employees and IT teams are aware of the need for change and have begun communicating that urgency. Evidence for the changing and refreezing stages, however, is largely absent. This matters because awareness without formalized practice change does not constitute adoption.

Kotter's 8-Step Change Model

Kotter extends Lewin's framework into eight steps: creating urgency, building a coalition, developing a vision, communicating the vision, removing obstacles, generating short-term wins, consolidating gains and anchoring change. Each step builds on the previous one and is measurable. The most consequential gaps in the Scania case appear in steps five through eight. Structural obstacles such as the IT cost-allocation model that makes licenses appear expensive to business units remained in place. Short-term wins from early AI use cases were not systematically shared. Governance structures for scaling adoption were not formalized. Kotter's model reveals that the work of change management begins, rather than ends, once the tools are deployed.

The Dual-Strategy Requirement

Research from the Scania case identifies a pattern that appears across large organizations attempting AI adoption: bottom-up experimentation without top-down direction produces fragmentation, while top-down mandates without bottom-up buy-in produce resistance. Neither strategy alone is sufficient. A bottom-up approach democratizes experimentation and allows innovators to demonstrate value at the individual level. A top-down approach provides the governance structures, time allocations and resource commitments that allow individual wins to scale across the organization. The two strategies are not alternatives. They are complements that must operate simultaneously. 

Lesson Goal

This lesson builds AI literacy by guiding participants through the three-phase conceptual framework developed from empirical research inside Scania's Research and Innovation Office. Learners will examine how individual acceptance, organizational diffusion and change management interact to enable or block adoption, and will apply that framework to an organizational context of their own choosing. The goal is not to produce AI strategists but informed practitioners who can diagnose where their own organization sits in an adoption journey and identify which interventions would be most likely to move it forward. 

Three Critical Questions

Engage with these questions before beginning the activity. Brief written notes will improve the quality of your group discussion. 

Roadmap

The following steps guide you through a structured analysis of AI adoption barriers. Read all steps before beginning. You have 30 minutes total. 

Step 1 (5 min): Select and Position an Organizational Context

Choose a specific type of organization where generative AI is being or could be deployed in administrative or knowledge-worker functions. This could be a hospital administration unit, a university research department, a government innovation office or a manufacturing company's R&D division. Do not choose a context merely because it is familiar. Choose one where you can identify at least three structural characteristics, such as cost allocation practices, decision-making hierarchy or data infrastructure, that would directly shape adoption outcomes. Write one sentence positioning that organization on the S-curve of AI adoption: experimental phase, early-to-mid acceleration or approaching maturity.

Tip: Avoid generic choices. The more specific your organizational context, the more precise your diagnosis can be in the steps that follow. 
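The S-curve positioning in this step can be made concrete with a logistic diffusion curve. The parameters and phase cutoffs below are illustrative assumptions for the exercise (the 16% and 84% thresholds echo the IDT adopter-category boundaries), not measured Scania values.

```python
import math

def adoption_share(t, midpoint=24.0, rate=0.15):
    """Logistic S-curve: fraction of eventual adopters at month t.
    midpoint = month of fastest growth; rate = steepness.
    Both parameter values are illustrative assumptions."""
    return 1.0 / (1.0 + math.exp(-rate * (t - midpoint)))

def phase(t):
    """Map a point on the curve to the lesson's three phase labels."""
    share = adoption_share(t)
    if share < 0.16:        # roughly innovators + early adopters (IDT)
        return "experimental"
    if share < 0.84:        # roughly early + late majority
        return "acceleration"
    return "approaching maturity"
```

Used this way, the one-sentence positioning asked for in Step 1 becomes a claim about where on the curve the organization sits and why, rather than a generic "early stage" label.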

Step 2 (5 min): Map Individual Adoption Factors

Using the TAM and UTAUT frameworks, map the individual-level factors that would shape whether employees in your chosen organization perceive generative AI as useful and easy to use. Consider how digital literacy levels, prior AI experience, current workload and job security concerns would interact. For each factor, write one sentence explaining whether it would raise or lower the likelihood of adoption and why. Pay particular attention to social influence: who are the potential modeling figures in this organization, and do they have personal hands-on experience with the tools they are expected to champion? 

Step 3 (6 min): Map Organizational Diffusion Factors

Using IDT, assess how generative AI would spread, or fail to spread, through your chosen organization. For each of the five IDT factors, rate the likely condition in your context as favorable, mixed or unfavorable, and write one sentence justifying your rating. Give particular attention to compatibility: how fragmented is the organization's data infrastructure, and would a RAG-based tool be able to access the knowledge it would need to be useful? Identify which factor poses the single greatest barrier to diffusion in your context and explain your reasoning.

Step 4 (6 min): Diagnose the Change Management Stage

Using Lewin's model and Kotter's eight steps, assess where your chosen organization currently sits in its change management journey. Be specific. Do not default to "early stage." Instead, identify which specific Kotter steps show evidence of progress and which show evidence of absence. Use the indicator language from the research: present, partially present or largely absent. The most important diagnosis here is whether the organization has formalized new practices or whether it is still operating entirely in the unfreezing stage while believing it has progressed further. 
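The Step 4 diagnosis can be recorded as a simple structured checklist. The step names follow Kotter's sequence as described above; the example ratings are hypothetical, not findings from the research.

```python
# Hypothetical diagnostic record for Step 4, using the indicator language
# from the research ("present", "partially present", "largely absent").
KOTTER_STEPS = [
    "create urgency", "build a coalition", "develop a vision",
    "communicate the vision", "remove obstacles",
    "generate short-term wins", "consolidate gains", "anchor change",
]
VALID = {"present", "partially present", "largely absent"}

def diagnose(ratings):
    """Validate the ratings and return the earliest stalled step,
    since each Kotter step builds on the one before it."""
    assert set(ratings) == set(KOTTER_STEPS), "rate every step"
    assert set(ratings.values()) <= VALID, "use the indicator language"
    for step in KOTTER_STEPS:
        if ratings[step] != "present":
            return step
    return None

# Example: first four steps done, structural work not started.
example = {s: "present" for s in KOTTER_STEPS[:4]}
example.update({s: "largely absent" for s in KOTTER_STEPS[4:]})
stalled_at = diagnose(example)   # -> "remove obstacles"
```

Forcing the rating to stop at the earliest non-"present" step guards against the failure mode named in this step: believing the organization has progressed further than its formalized practices show.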

Step 5 (4 min): Design the Dual-Strategy Intervention

Based on your diagnosis across Steps 2, 3 and 4, design a dual-strategy intervention that addresses your highest-priority barriers simultaneously from both the bottom up and the top down. For the bottom-up component, specify how you would create structured learning time, Communities of Practice or sandboxed experimentation environments without requiring executive approval. For the top-down component, specify one concrete structural change -- to cost allocation, governance, data infrastructure or performance measurement -- that only leadership can authorize. Be realistic about timeline: which indicator would you expect to shift within three months and which would require eighteen months of sustained effort?

Tip: The most common mistake in this step is designing an intervention that requires a perfect top-down champion before anything bottom-up can begin. Effective dual strategies allow both tracks to start independently and reinforce each other over time. 

Step 6 (4 min): Test for Scalability and Equity

Review your intervention design from Step 5. Ask whether it could be implemented by a team of five people with no dedicated AI budget, by a team of fifty people with moderate IT support and by a team of five hundred with centralized IT governance. If the answer changes significantly across those three scales, identify the specific design element that breaks at each scale and revise it. Then consider whether your intervention would be equally accessible to employees with low AI literacy and to those who are already early adopters. A well-designed intervention should accelerate adoption across the full distribution of digital readiness, not only among those who were already interested. 

Individual Reflection

After completing the activity, take three minutes to consider the following questions. You do not need to answer all of them. Write brief notes on any that feel most relevant to your current work or study context. 

The Bottom Line

Giving employees access to AI tools is not the same as changing how they work. The research from Scania makes this concrete: between fifteen and thirty percent of employees used ChatGPT before the enterprise rollout, and sixty percent reported never using generative AI for work. After the rollout, many license holders remained inactive. Access did not close that gap because access was never the problem. The actual barriers were time, leadership modeling and a cost allocation structure that made a modest monthly fee feel prohibitively expensive without a demonstrated return. Solving those problems requires a different kind of intervention than buying more licenses or scheduling another training session.

The more consequential insight is that fragmented data infrastructure is not a secondary concern. It is the primary reason that individual AI wins fail to scale. When institutional knowledge is distributed across twenty or more disconnected systems, a retrieval-augmented generation tool cannot function at its potential regardless of how sophisticated the underlying model is. Organizations that celebrate early adoption pilots without simultaneously investing in data coherence are building a ceiling into their transformation strategy before it begins. The question 'How do we adopt AI?' has a technical answer and an organizational answer, and most organizations are only working on the technical one. 

#GenAIAdoption   #LeadershipGap   #ChangeManagementNow   #RAGinPractice   #OrganizationalAIReadiness