Generative AI is advancing rapidly, but most organisations struggle to turn experimentation into measurable impact, exposing a persistent learning gap between innovation and implementation.
Generative Artificial Intelligence (Gen AI) has been celebrated as a revolutionary force, poised to transform industries, reinvent workflows, and augment human creativity in ways previously relegated to science fiction. From automating routine administrative tasks to becoming virtual companions, the possibilities of Gen AI seem boundless. Current rates of adoption and expansion appear to reflect this enthusiasm. The size of the generative AI market is expected to reach US$59 billion in 2025, and could grow annually at over 37 percent, resulting in a market volume of US$400 billion by 2031.
At the same time, however, a recent study revealed that nearly 95 percent of companies investing in generative AI report no measurable return on their projects. The striking number is not a reflection of a lack of ambition; firms are experimenting, piloting, and investing heavily. Rather, it signals a deeper, structural challenge: the inability to translate AI experimentation into sustained, productive, and context-aware deployment. The excitement around generative AI is palpable, yet for most organisations, its promised impact remains elusive.
The central explanation for this disconnect is what researchers have termed the learning gap. Unlike the familiar notions of a talent gap or infrastructure deficit, the learning gap is subtler. It is not about the availability of skilled personnel or high-performance computing; it is about the inability of organisations to convert knowledge and experimentation into practice that yields tangible results. Teams may be proficient in AI tools and methodologies, yet they struggle to integrate these capabilities into the specific contexts of their workflows.
The consequences of the learning gap are felt keenly in the workplace. Generative AI, in theory, promises to augment human creativity, automating repetitive work while allowing employees to focus on higher-value tasks. In practice, however, employees often find themselves correcting low-quality outputs, retrofitting AI tools into ill-suited processes, or witnessing pilots peter out after initial enthusiasm fades. Instead of empowering “superagents,” a term used by some management scholars to describe highly capable, AI-augmented workers, many organisations find themselves producing what some have called “workslop”: a stream of generic, unhelpful content that demands human intervention rather than reducing cognitive load. The gap between potential and reality underscores a central point: transformative technology alone cannot create transformative outcomes.
The learning gap is best understood as the space between what organisations experiment with and what they are able to deploy and scale effectively. It is an organisational phenomenon, as much about culture, governance, and leadership as about technology. When companies run pilots or prototypes, these experiments sometimes stay disconnected from core workflows and lack clear metrics to assess their impact. Without structured learning mechanisms, lessons from one pilot rarely inform the next. This could produce a cycle of repeated experimentation without meaningful progress, thus slowing down value creation.
This gap is compounded by the way organisations train their employees. AI training programmes frequently focus on technical literacy (understanding algorithms, mastering tools, or learning prompt engineering) but rarely connect these skills to specific business problems or organisational processes. Employees may understand how an LLM generates text, yet lack the skills and knowledge needed to use AI effectively in practice. Moreover, even when training is robust, opportunities to apply learning in real projects are limited. Safe spaces for experimentation are scarce, and the high stakes of operational failure often discourage meaningful experimentation. The result is a workforce familiar with the capabilities of AI in theory, but relatively unprepared to leverage it in practice.
Beyond training, the learning gap is perpetuated by structural and organisational barriers. One critical factor is the absence of effective feedback mechanisms. Generative AI tools are most valuable when they evolve in response to human inputs, errors, and changing contexts. Without monitoring systems and structured feedback loops, AI deployments remain static, brittle, and context-blind. Organisations that do not track performance, error rates, or user corrections fail to create a continuous learning cycle, leaving both humans and machines in a state of stagnation.
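To make the idea of a structured feedback loop concrete, the sketch below shows one minimal way a team might log human judgements on AI outputs and track a correction rate over time. The class names, fields, and threshold are illustrative assumptions, not any particular product or the approach of the study cited above; a real deployment would persist this data to a monitoring platform rather than hold it in memory.

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class FeedbackRecord:
    """One human judgement on a single AI output (hypothetical schema)."""
    prompt: str
    output: str
    accepted: bool                 # did the user keep the output as-is?
    correction: str | None = None  # what the user changed it to, if anything
    timestamp: datetime = field(default_factory=datetime.utcnow)


class FeedbackLog:
    """Minimal in-memory log of human feedback on AI outputs.

    The point is simply that every output leaves a trace that can be
    reviewed when prompts, models, or workflows are revised.
    """

    def __init__(self) -> None:
        self.records: list[FeedbackRecord] = []

    def record(self, rec: FeedbackRecord) -> None:
        self.records.append(rec)

    def correction_rate(self) -> float:
        """Share of outputs that users rejected or had to correct."""
        if not self.records:
            return 0.0
        corrected = sum(1 for r in self.records if not r.accepted)
        return corrected / len(self.records)


# Example review cycle: flag the tool when corrections exceed a chosen threshold.
log = FeedbackLog()
log.record(FeedbackRecord(prompt="Summarise report X", output="...",
                          accepted=False,
                          correction="Summary missed the key risk clause."))
if log.correction_rate() > 0.3:
    print("Escalate: outputs need prompt or workflow revision.")
```

Even a crude measure like this correction rate gives an organisation something the static deployments described above lack: a signal that tells humans when the system is drifting out of step with its context.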
Instances of leadership misalignment further deepen the problem. McKinsey’s research on workplace AI adoption highlights a paradox: employees are often more willing and able to use AI than leaders anticipate, yet leadership becomes the gatekeeper of learning and, in many cases, the bottleneck.
Silos between technical and business teams exacerbate the issue. Data science units, often responsible for AI deployment, operate in isolation from business units, compliance teams, and operational staff. This separation prevents the transfer of knowledge, reduces contextual understanding, and limits collaborative problem-solving. Cross-functional alignment is essential: AI cannot be merely bolted on to existing processes; it must be co-designed with stakeholders across the organisation to ensure relevance, usability, and impact.
Closing the learning gap requires a shift in focus from technology to organisation. Pilots must be anchored in real business problems, with measurable objectives that align with workflow needs. Incremental, context-sensitive deployment allows organisations to refine AI applications in situ, providing both employees and AI systems the feedback necessary to improve over time. Small-scale success builds confidence, generates data for iteration, and lays the groundwork for broader adoption.
Equally important is the creation of structured learning opportunities within operational contexts. Training should not exist in isolation; employees must be embedded in projects where they can experiment, make mistakes, and see the tangible consequences of AI decisions. This experiential learning is far more effective than abstract instruction, allowing teams to internalise both the capabilities and limitations of generative AI. Over time, organisations develop a collective intelligence: employees learn from AI, AI learns from employees, and both evolve together.
Leadership and governance play a pivotal role in this process. Executives must articulate a clear strategic vision, define success metrics, and align incentives to reinforce adoption. Cross-functional task forces, combining business expertise, technical skills, and operational insight, ensure that AI is integrated thoughtfully rather than applied sporadically. Transparent monitoring and reporting mechanisms allow organisations to identify and correct underperformance, thereby maintaining momentum and sustaining trust in AI systems.
Cultivating a learning culture is equally critical. Psychological safety, the freedom to experiment without fear of punitive consequences, encourages employees to push boundaries, share insights, and collectively refine practices. Celebrating iterative learning, even when immediate outcomes are imperfect, transforms failure from a setback into a source of insight. AI deployment thus becomes not merely a technical initiative but an organisational transformation, where culture, governance, and technology converge to close the learning gap.
Consider two hypothetical organisations navigating the AI frontier. A financial services firm pilots a generative tool to summarise regulatory reports. The project team collaborates closely with legal analysts, iteratively refining prompts based on errors and monitoring the system’s performance against clear metrics. Over time, the tool achieves measurable efficiency gains, reduces manual workload, and is gradually deployed across departments. The learning loop of feedback, refinement, and integration transforms a pilot into a scalable system that delivers value.
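The sketch below illustrates, in schematic form, the kind of evaluation loop this hypothetical firm runs: trying prompt variants and keeping the one that best covers the analysts' checklist. The summarise() call and accuracy_score() metric are placeholders, not a specific vendor API; a real team would plug in its own model client and domain-specific checks (for example, coverage of legal clauses).

```python
def summarise(prompt: str, document: str) -> str:
    """Placeholder for a call to a generative model (hypothetical)."""
    return f"[summary of {len(document)} characters using prompt variant]"


def accuracy_score(summary: str, required_points: list[str]) -> float:
    """Crude proxy metric: share of required points mentioned in the summary."""
    if not required_points:
        return 0.0
    hits = sum(1 for point in required_points if point.lower() in summary.lower())
    return hits / len(required_points)


prompt_variants = [
    "Summarise the report in plain language.",
    "Summarise the report, listing every obligation and deadline explicitly.",
]
document = "..."  # a regulatory report supplied by the legal team
required_points = ["reporting deadline", "capital requirement"]

# Each pass keeps the prompt that scores best against the analysts' checklist,
# mirroring the feedback, refinement, and integration loop described above.
best_prompt, best_score = None, -1.0
for prompt in prompt_variants:
    score = accuracy_score(summarise(prompt, document), required_points)
    if score > best_score:
        best_prompt, best_score = prompt, score

print(f"Keep prompt: {best_prompt!r} (checklist coverage {best_score:.0%})")
```

What matters is not the sophistication of the metric but the discipline of the loop: every iteration is scored against criteria the business users themselves defined, so lessons accumulate instead of evaporating after the pilot ends.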
Contrast this with a marketing team that experiments with AI-generated advertising content across several campaigns. The pilots are treated as isolated exercises, outputs are inconsistently evaluated, and lessons are rarely shared beyond the immediate team. After several months, results remain uneven, trust in AI erodes, and the initiative is shelved. Despite access to the same technology, the difference between success and failure hinges less on model quality than on organisational learning and integration.
The generative AI learning gap underscores a central insight: technology alone does not transform organisations. Without deliberate strategies to embed AI into workflows, structured feedback, and leadership alignment, pilots remain experiments rather than engines of productivity. Bridging the gap would require viewing AI as part of a socio-technical system, where humans, machines, and organisational structures co-evolve.
For businesses, this would mean shifting investment from merely acquiring AI tools to building the systems, culture, and processes that enable learning. For employees, it would require cultivating skills not only in AI operation but in iterative problem-solving and cross-functional collaboration. And for leaders, it would demand vision, patience, and governance mechanisms that align experimentation with measurable impact.
Generative AI has the potential to reshape industries and redefine work itself. But this promise will remain unrealised until organisations close the learning gap, turning isolated pilots into continuous, adaptive, and value-generating systems. The future of AI deployment is not a race to acquire the latest model, but a test of organisational learning, and the companies that master it will be the ones that truly harness AI’s transformative power.
Tanusha Tyagi is a Research Assistant with the Centre for Digital Societies at the Observer Research Foundation.
The views expressed above belong to the author(s).