The history of technology is often told through the lens of latency: the gap between an invention and its eventual appearance in a nation's productivity statistics. As economist Robert Solow famously remarked in 1987, “you can see the computer age everywhere but in the productivity statistics.” He was eventually proven wrong; it took fifteen years for organizational change to catch up with technical capability.
Today, we are navigating a similar, albeit more rapid, phenomenon: the “capability overhang.” This is the precarious period in which AI models already possess the technical ability to execute complex tasks, while our businesses, schools, and regulatory bodies still lack the institutional safeguards and implementation roadmaps to leverage them. As discussed in the first episode of a three-part miniseries on the Global Governance Institute’s State of the Globe podcast, this overhang is not merely a technical lag; it is a governance crisis that threatens the very foundations of how we learn and work.
In standard economic forecasting, we focus on the median scenario, the most likely outcome. Most current models suggest that AI will result in incremental productivity gains without a total labor collapse. However, recent research from institutions like the Forecasting Research Institute suggests a 14% probability of an “extremely rapid acceleration” scenario: a “black swan” event in which AI capabilities outpace human adaptation so quickly that it triggers systemic economic instability.
In any other industry, a 14% risk of total failure would prompt an immediate grounding of the fleet. If a commercial aircraft had a 14% chance of crashing, no passenger would board. Yet in the realm of AI governance, we remain largely focused on the smooth-flight scenario. Effective global governance requires us to shift our focus toward these tail risks. We must build social safety nets and regulatory frameworks not for the average day, but for the day the math breaks.
Perhaps the most insidious risk of the capability overhang is what I term the “cognitive atrophy trap.” AI is a magnifier of existing trends, and in higher education it is amplifying a decades-long shift toward transactional learning. We are witnessing a surge in “cognitive offloading,” where students outsource writing and research to Large Language Models (LLMs).
The danger here is not just academic dishonesty; it is the erosion of procedural memory. The “struggle” of writing an essay, the hours spent in a library synthesizing conflicting arguments, is not a bug in the education system; it is the feature. The struggle is the learning. When we remove the friction of thinking, we weaken the mental muscles required for critical evaluation. If a generation of students never learns to construct an original argument, they will lack the critical thinking necessary to serve as the “human-in-the-loop” that AI systems require to remain accurate and ethical.
In the labor market, the capability overhang manifests not as mass layoffs but as a silent hiring freeze. While many recent high-profile layoffs have been blamed on AI, they are often a form of “AI washing” that masks traditional cost-cutting or pandemic-era over-hiring.
The more immediate, yet less discussed, threat is an AI-induced hiring freeze at the entry level. As agentic AI begins to handle the tasks an intern or entry-level worker would normally do (e.g., basic coding, legal research, administrative drafting), the incentive for firms to hire juniors diminishes. This creates a “seniority trap”: if we stop hiring entry-level workers because an AI agent is more efficient, we destroy the pipeline of talent, and in the not-too-distant future we will face a structural deficit of senior partners and experts.
How does our education system respond to these shifts? The answer may lie in a return to ancient roots: to ancient Greece, to be precise, and to the Socratic method. As routine cognitive tasks become commoditized, the value of the human worker will shift back to the human-centric, transferable skills that the Finnish government called the “Four Cs of Innovation”: critical thinking, creativity, collaboration, and communication.
We may see a two-tier education system emerge: on one hand, a mass-market, AI-augmented model focused on efficiency; on the other, a return to the Socratic method in small-group, device-free, deep-thinking environments where the goal is not to produce an output, but to nurture the cognitive resilience of the next generation of “deep thinkers.” In the age of AI, the ultimate competitive advantage will be the ability to ask the right questions, a skill that cannot be developed through copy-pasting, but only through human-led dialogue.
The current flight path is not a given. We have a narrow window in which to nudge society toward a more positive outcome. This requires a level of “situational awareness” that transcends individual ministries or corporate boardrooms.
Governance cannot be left to the market. We need civil society leaders and policymakers to realize that AI is not just another tool; it is a fundamental rewrite of the social contract, requiring us to reimagine the traditional link between labor, income, and human dignity. Whether we use this “capability overhang” to build a new Renaissance of human creativity or fall into a trap of economic and cognitive decline depends entirely on the governance frameworks we choose to build today.
David Timis is a Senior Fellow in AI Governance at the Global Governance Institute (GGI) and a manager at Generation.org. This commentary is based on the first episode of the GGI podcast miniseries “The Governance of AI.”
Photo by Alex Kotliarskyi on Unsplash