Part Two: Can AI Rebuild The Middle Class?
AI doesn’t have to eliminate jobs. Instead, it can broaden the reach of expertise across a wider workforce.
This is the second installment in my AI series. Stay tuned for the final piece!
During a recent discussion with UK Prime Minister Rishi Sunak, Elon Musk described artificial intelligence as "the most disruptive force in history," suggesting that it could lead to a future where "no job is needed." Similarly, Geoffrey Hinton, a leading figure in AI development, recommended last year that people should consider pursuing careers in fields like plumbing.
The prevailing sentiment is clear: many fear that the future of work is at risk. According to a recent Gallup survey, three-quarters of U.S. adults are concerned that artificial intelligence (AI) will reduce the number of available jobs.
However, this anxiety may be unfounded.
In fact, the job market in industrialized countries is robust and looks set to remain so. Four years after Covid, the U.S. has not only returned to its low pre-pandemic unemployment rate but has also seen total employment climb nearly three million above its pre-outbreak level. With birth rates declining and labor forces shrinking, similar labor shortages are emerging around the world, including in China.
This situation isn't merely speculative; it's rooted in demographics. The people who will turn 30 in 2053 have already been born, and we can't increase that number. Unless there's a significant shift in immigration policies, developed nations like the U.S. are more likely to run out of workers than to run out of employment opportunities.
AI will undoubtedly influence the job market, but not necessarily in the manner some thinkers suggest. Rather than replacing human workers, AI is poised to redefine what constitutes valuable and sought-after expertise. Expertise, in this context, is the specific knowledge or skills needed to perform tasks, whether it's conducting health assessments, developing software, or managing catering services. The value of expertise hinges on its necessity for achieving goals and its rarity in the job market. To borrow a line from the character Syndrome in "The Incredibles," when everyone is an expert, no one is.
In developed economies, the primary determinant of a job’s value is the level of expertise it requires. Occupations that demand minimal training or qualifications, such as those in restaurant service, janitorial work, manual labor, and childcare, often pay less.
To illustrate, compare the roles of an air traffic controller and a crossing guard. Fundamentally, both jobs involve making quick, critical decisions to prevent collisions involving vehicles and pedestrians. However, the financial rewards are starkly different: in 2022, air traffic controllers earned a median annual salary of $132,250, nearly four times the $33,380 median salary of crossing guards.
The distinction between jobs often comes down to the level of expertise required. For instance, becoming an air traffic controller involves extensive education and hands-on training, making it a rare and valued skill. In contrast, the role of a crossing guard in most U.S. states does not require formal education, specialized knowledge, or certification. While an urgent demand for crossing guards could potentially be met by air traffic controllers, the reverse scenario is highly improbable.
Expertise is a dynamic attribute. Skills that once brought a high market value — such as farriery, typesetting, fur trapping, and spell-checking — have become either obsolete or have been automated. Meanwhile, some of today’s highest-paying professions in industrialized nations, like oncologists, software engineers, patent lawyers, therapists, and movie stars, emerged in response to new technological or societal needs. However, which areas of expertise fade or flourish can change with each new technological era. The advent of artificial intelligence is likely to be one of these transformative periods.
The early vision of the Information Age was one of leveled economic hierarchies through the democratization of information. In 2005, Marc Andreessen, co-founder of Netscape, expressed to the New York Times's Thomas Friedman his amazement that a teenager from places like Romania, Bangalore, the former Soviet Union, or Vietnam could access all the necessary tools and information to apply knowledge freely.
However, reality has unfolded differently.
It turns out that information is just a starting point for decision-making, a critical economic activity often dominated by a small, educated elite. The proliferation of computers made information and computational resources widely available, which ironically led to a concentration of decision-making power and resources in the hands of these elite experts.
At the same time, automation has eliminated many mid-level jobs in administrative support, clerical work, and blue-collar production. As a result, lacking access to better opportunities, the roughly 60% of U.S. adults without a bachelor’s degree have increasingly been pushed into low-paying, non-expert service roles.
The potential of AI is unique; it offers a chance to counteract the trends set by computerization by expanding the influence and value of human expertise to a wider array of workers. AI integrates information and learned experience to aid in decision-making, allowing a broader group of employees, given the appropriate foundational training, to engage in higher-level decision-making tasks that are presently monopolized by a small group of elite professionals like doctors, lawyers, software engineers, and college professors. Effectively utilized, AI could help rejuvenate the middle-skill, middle-class sector of the U.S. labor market, which has been eroded by automation and globalization.
While there are concerns that AI might render human expertise obsolete, historical evidence and economic principles argue against this notion. AI should be seen as a tool—analogous to a calculator or a chainsaw—that amplifies the application of expertise, not as a substitute for it.
Tools, by their nature, reduce the gap between intent and execution. They empower workers with the right training and judgment to execute tasks that were once laborious, prone to errors, or even unattainable. On the other hand, in the hands of those without the necessary skills and experience, these tools can be ineffective or even dangerous. For example, a pneumatic nail gun is an invaluable asset for a professional roofer but poses significant risks for an inexperienced DIY enthusiast.
For those equipped with foundational training and experience, AI can enhance their capability to undertake more valuable work. While AI will undoubtedly automate some jobs, making certain skills obsolete, it will also create new capabilities, goods, and services that demand new types of expertise that we cannot yet predict.
My argument is not merely predictive but emphasizes what is achievable. The future use of AI is not predestined; its applications, both constructive and destructive, are limitless. The misconception that technological outcomes are inevitable—a concept Shoshana Zuboff calls inevitabilism—robs us of our ability to shape our future actively. As Simone de Beauvoir noted, "Fate triumphs as soon as we believe in it." AI provides powerful tools that can enhance the capabilities of workers and improve the nature of work itself. It is crucial that we master these tools and employ them to our advantage.
From Craftsmanship To Widespread Expertise
Most modern-day "experts" would find themselves out of their depth if transported back to the 18th century. Before the Industrial Revolution, all products were handcrafted by artisans with specialized skills: wagon wheels by wheelwrights, clothing by tailors, shoes by cobblers, timepieces by clockmakers, and firearms by blacksmiths. These artisans mastered at least two types of expertise: procedural expertise, which involved executing highly practiced steps to achieve a specific outcome, and expert judgment, which entailed adapting these procedures to different circumstances.
For instance, if a blacksmith were to make two muskets based on the same design, no part from one could be used interchangeably with the other. Each component would be individually shaped, smoothed, and polished to fit the specific musket it was made for. Few of today’s experts could perform such tasks, especially with the rudimentary tools available at the time.
Despite the high regard for artisanal skills, their economic value was dramatically undermined by the advent of mass production in the 18th and 19th centuries. This new mode of production simplified the complex tasks of artisans into smaller, manageable steps that could be mechanized and performed by production workers without specialized skills. These workers were typically supervised by managers who had higher levels of education.
Mass production proved to be far more efficient than artisanal methods, but it often subjected rank-and-file workers to dangerous and demanding conditions for very low wages.
Furthermore, while skilled artisan work was predominantly the domain of adult men—reflecting both the lengthy apprenticeship required to hone such skills and prevailing gender norms—the early factories heavily employed children and unmarried women. The skilled British weavers and textile workers who protested against mechanization in the 19th century, known as the Luddites, are often mistakenly ridiculed for their supposed irrational fear of technology.
However, their concerns were legitimate. As economic historian Joel Mokyr and his colleagues noted in 2015, "the handloom weavers and frame knitters with their small workshops were quickly eliminated by factory production after 1815." Even though the innovations of the industrial era greatly boosted productivity, it took nearly fifty years for the living standards of the working class to begin improving.
As industry evolved, becoming more complex and sophisticated, there emerged a growing need for a new type of expertise—what might be termed "mass expertise." This was a skill set required by workers who operated and maintained advanced machinery and equipment, involving tasks like machining, fitting, welding, chemical processing, handling textiles, dyeing, and calibrating precision instruments. Beyond the manufacturing environment, roles such as telephone operators, typists, bookkeepers, and inventory clerks became crucial as they served as the conduits of information—akin to the information technology professionals of today.
Many of these skills were unprecedented. For instance, electricians only became necessary once electricity was widely adopted for industrial and consumer applications. Similarly, skilled machinists and telephone operators were roles created in response to new technologies—the machine tools and telephone networks, respectively. Mastery of these sophisticated tools and the intricate requirements of these roles often demanded literacy and numeracy.
It was no accident that during this period, a significant and growing portion of the U.S. workforce had acquired a high school diploma, equipping more workers with these essential skills and leading to better compensation. This fortunate blend of increased industrial productivity and a higher demand for skilled labor helped forge a burgeoning middle class in industrialized nations, enabling more people to afford luxuries like complete wardrobes, factory-made household items, and modern appliances such as electric toasters and irons.
However, unlike the artisans of earlier times, these "mass expert" workers were not typically required—or even allowed—to exercise much expert judgment in their roles. As noted by Frederick Winslow Taylor, a pioneering figure in management theory, in 1911: "The work of every workman is fully planned out by the management at least one day in advance, and each man receives in most cases complete written instructions, describing in detail the task which he is to accomplish, as well as the means to be used in doing the work."
Consequently, the specialized but narrowly defined nature of mass expert work, which emphasized adherence to rules over independent decision-making, made these jobs particularly susceptible to being displaced by technological advancements in subsequent eras.
From Mass To Elite Expertise In The Digital Era
Rooted in the technological breakthroughs of World War II, the Computer Era, also known as the Information Age, largely diminished the demand for the type of mass expertise cultivated during the Industrial Revolution. The digital computer set itself apart from all preceding technologies through its ability to execute cognitive and manual tasks encoded in clear, deterministic rules—what economists refer to as “routine tasks” and what software engineers know as programs.
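To make “routine” concrete, here is a toy illustration of my own (in Python; the payroll rule is invented): a task expressed as explicit, deterministic steps, precisely the kind of work the digital computer executes flawlessly.

```python
# A toy "routine task": an overtime-pay rule written as explicit,
# deterministic steps. A computer follows rules like this perfectly;
# no judgment is required or permitted.
def weekly_pay(hours: float, rate: float) -> float:
    """Straight time up to 40 hours, time-and-a-half beyond."""
    regular = min(hours, 40) * rate
    overtime = max(hours - 40, 0) * rate * 1.5
    return regular + overtime

print(weekly_pay(45, 20.0))  # 800.0 regular + 150.0 overtime = 950.0
```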
While it might seem obvious to say that machines operate under deterministic rules, the digital computer introduced a new dimension. Unlike earlier mechanical devices that performed specific physical tasks, computers function as symbolic processors that manage, analyze, and manipulate abstract information. Alan Turing's 1937 proof demonstrated that such machines could perform an infinitely varied array of tasks, as long as those tasks could be expressed as algorithms.
Before the advent of computers, the human mind was the sole tool for processing symbols. The introduction of the computer provided a powerful alternative, albeit one with its own distinctive limitations. In this new era, mass expertise meant excelling in skilled office and production work.
However, as computing technology evolved, digital machines became more adept and cost-effective than human workers at mastering tools and following explicit rules. This shift reduced the value of mass expertise much like the technologies of the Industrial Revolution had once undercut artisanal expertise.
Yet, not all tasks are governed by clear rules. Philosopher Michael Polanyi highlighted in 1966 that “We can know more than we can tell,” pointing out that our implicit, tacit knowledge often surpasses what we can explicitly articulate.
Activities such as crafting a compelling argument, telling a joke, riding a bicycle, or recognizing an adult's face in a baby photo involve complex, nuanced actions that people perform effortlessly and without a formal understanding of the underlying processes.
These "non-routine" tasks are typically mastered not through studying rules but through experiential learning. For example, a child learning to ride a bicycle doesn’t need to understand the physics of gyroscopes; trial and error is sufficient. Before AI, programming a robot to ride a bicycle required specifying every necessary step, branch, and exception—a challenge encapsulated in what is often referred to as Polanyi’s Paradox, highlighting the divide between our intuitive skills and our ability to explicitly describe them.
Polanyi's Paradox has significantly influenced the landscape of high-paying jobs, which often require handling non-routine tasks. Managers, professionals, and technical workers are frequently tasked with making judgment calls in unique, high-stakes situations—such as developing a treatment plan for an oncology patient, drafting a legal brief, leading a team or organization, designing a building, developing software, or navigating a plane through adverse conditions. While a strong grasp of rules is essential, these scenarios demand more than just rule-following; they require expert judgment.
In many ways, these modern elite experts—doctors, architects, pilots, electricians, educators—resemble the artisans of pre-industrial times. Like those artisans, today's professionals blend procedural knowledge with expert judgment and often creativity, especially when facing specific, critical, and uncertain challenges. Similar to old artisanal apprenticeships, these professionals also develop their expert judgment through extended periods of supervised practice, though this is rarely referred to as apprenticeship in white-collar professions.
While computerization has diminished the value of mass expertise, it has proved something of a godsend for those engaged in elite expert work. Computers have allowed professionals to spend less time gathering and organizing information, and more time interpreting and applying it to decisions. This shift has increased the accuracy, productivity, and depth of expert professional judgment, greatly enhancing its value.
As computer technology has progressed, the wages of individuals with four-year college degrees and particularly those with graduate degrees in fields like law, medicine, and science and engineering, have surged. However, this has a downside: computerization has also automated away the mass expertise roles that non-elite workers once occupied, roles upon which these professionals previously relied.
Ironically, computerization has also deeply affected those in non-expert work. Many of the lowest-paid jobs in industrialized nations are in direct service roles such as food service, cleaning, security, and personal care. These positions require dexterity, visual acuity, basic communication skills, and common sense—attributes necessary for non-routine tasks, which computers are ill-equipped to perform. Yet, these jobs are low-paying because they demand little specialized expertise, with most able-bodied adults able to perform them with minimal training.
Computers may not be able to perform these service tasks, but they have increased the supply of workers vying for them. Individuals who might have once found employment in clerical, administrative, or production roles requiring mass expertise are now often relegated to non-routine, hands-on service jobs, exerting downward pressure on wages in these sectors.
Therefore, rather than fostering a new era of mass expertise as the Industrial Revolution did, computerization has contributed to a prolonged trend of increasing inequality over the past four decades.
Expertise In The Age Of Artificial Intelligence
Like the Industrial and Computer revolutions before it, the advent of Artificial Intelligence represents a pivotal shift in the economic significance of human expertise. To understand why, it's crucial to consider how AI differs from the previous computing era we are transitioning from.
Before AI, the hallmark of computing was its flawless and almost cost-free ability to perform routine, procedural tasks. However, its major limitation lay in its inability to handle non-routine tasks that require tacit knowledge—knowledge that is understood intuitively and applied contextually, and not easily communicated. In contrast, AI's capabilities are virtually the opposite.
There's a sort of cosmic irony in how AI operates: it is not inherently reliable with straightforward facts and numbers—it doesn't strictly adhere to predefined rules. Instead, AI excels at acquiring and applying tacit knowledge. Unlike systems reliant on hard-coded procedures, AI learns through examples and gains expertise without explicit instructions, acquiring skills it wasn’t explicitly programmed to have.
If a traditional computer program is comparable to a classical musician playing strictly from sheet music, then AI resembles a jazz musician—improvising on themes, taking creative liberties, and inventing new melodies. Much like a human expert, AI can blend formal knowledge (rules) with experiential learning to make—or assist with—unique, high-stakes decisions.
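To make the contrast concrete, here is a minimal learning-by-example sketch (assuming Python with scikit-learn installed; the reviews and labels are invented for illustration). No rule defining “positive” is ever written down; the model infers one from labeled examples, much as Polanyi’s tacit knowledge is acquired through experience rather than instruction.

```python
# Learning by example: the "rules" are inferred from labeled data,
# never explicitly programmed.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: short reviews with human-assigned labels.
reviews = ["great service", "terrible food", "loved it",
           "awful experience", "really great", "truly terrible"]
labels = ["pos", "neg", "pos", "neg", "pos", "neg"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(reviews, labels)  # no hand-coded rules anywhere

print(model.predict(["loved the service"]))  # likely ['pos']
```

Contrast this with the payroll sketch earlier: that program’s behavior was fully specified in advance, while this one’s behavior depends entirely on the examples it was shown.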
AI's ability to deviate from a set script and improvise based on its training and experience positions it to partake in what has traditionally been the domain of elite experts: expert judgment.
Although AI is still in its early stages, its potential is staggering. As AI becomes more adept, precise, and widespread in providing expert judgment, it will increasingly become a common fixture in our professional lives. Its main function will be to support, advise, and alert decision-makers as they exercise expert judgment. This might seem like a stretch, but the influence of AI in decision-making is already permeating our daily lives subtly yet significantly.
For example, when your email tool suggests finishing your sentences, your smartwatch inquires if you've had a fall, or your car adjusts your steering to keep you centered in your lane, AI is already applying its form of expert judgment to interpret your intentions and aid your actions.
The stakes of most AI-driven decisions are currently minor (unless, say, you stop paying attention while your Tesla is on Autopilot), but their significance is set to grow as AI takes on more critical roles in our lives.
What does this dramatic advancement in machine capability mean for the future of human expertise? Although AI is a novel force, it has a notable historical parallel in its economic implications, albeit in a direction opposite to current trends.
Remember, the emergence of pre-AI computing enhanced the value and impact of professional decision-makers by making information gathering and organization faster. At the same time, it reduced the value of the procedural expertise that many middle-skill workers relied on.
Now, envision a technology that could reverse this dynamic: What would it look like? Such a technology would bolster and expand judgment capabilities, enabling a broader range of non-elite workers to participate in high-stakes decision-making. It would also challenge the exclusive control that professionals such as doctors, lawyers, software engineers, and professors have over their respective fields.
Artificial Intelligence represents this transformative technology. By offering decision support through real-time guidance and safeguards, AI could empower a wider group of workers, those with relevant but not necessarily advanced academic credentials, to undertake tasks traditionally reserved for highly trained experts. This shift could enhance job quality for non-college-educated workers, reduce earnings inequality, and, mirroring the Industrial Revolution’s impact on consumer goods, decrease the cost of essential services like healthcare, education, and legal advice.
The analogy with mass production is apt here. Just as mass production significantly lowered the cost of consumer goods, the current challenge lies in addressing the steeply rising costs of vital services like healthcare, higher education, and legal services, which are dominated by guilds of highly educated professionals.
Economists Emily Dohrman and Bruce Fallick of the Federal Reserve Bank of Cleveland have noted that over the past four decades, the costs of healthcare and education have risen by approximately 200% and 600%, respectively, relative to U.S. household incomes. These rising costs are partly due to the increasing expense of employing elite decision-makers, whose expertise, being both essential and rare, commands a substantial premium.
AI holds the promise of reducing healthcare costs by diminishing the scarcity of expertise—essentially, by enabling a greater number of workers to perform tasks that have traditionally required expert knowledge.
To illustrate this point more tangibly, let’s consider the role of the Nurse Practitioner (NP). NPs are Registered Nurses (RNs) who, after earning an additional master’s degree, are qualified to perform functions that were previously exclusive to physicians, such as administering and interpreting diagnostic tests, assessing and diagnosing patients, and prescribing medications.
From 2011 to 2022, the employment of NPs nearly tripled to about 224,000, and is expected to grow by around 40% over the next decade, which is significantly higher than the national average. In 2022, the median annual salary for an NP was $125,900.
Nurse Practitioners embody elite decision-makers. Their role requires a blend of procedural expertise and expert judgment to address unique patient cases where the need for careful decision-making is critical.
The relevance of the NP profession in this discussion lies in its demonstration of how high-stakes professional tasks—diagnosing, treating, and prescribing—have been redistributed from the most elite professionals (MDs) to another group of professionals (NPs) who possess significant but comparatively less formal expertise and training.
What facilitated the redistribution of these elite decision-making responsibilities? Primarily, the change was institutional. In the mid-1960s, facing a shortage of primary care physicians and recognizing the underutilized capabilities of registered nurses, a group of nurses and doctors created the NP role to fill these gaps.
This shift necessitated the creation of new training programs, the establishment of certification standards, and a challenging overhaul of medical practice regulations, which involved ongoing negotiations with the American Medical Association, the primary lobbying group for physicians.
A secondary but vital factor in this evolution was the role of information and computing technology (ICT). A 2012 study highlighted how ICT supports the advanced practice dimensions of NPs. The availability and completeness of electronic patient information have enhanced the timeliness and quality of diagnostic and therapeutic decisions, thereby expediting patient access to appropriate care. Additionally, centralized patient data has improved the quality of communication between healthcare professionals.
This example not only shows how AI and related technologies can extend the reach of expertise and reduce costs but also illustrates the potential for these technologies to transform traditional roles in sectors like healthcare, making high-level medical care more accessible and affordable.
To simplify: electronic medical records and improved communication tools have enhanced the decision-making capabilities of Nurse Practitioners.
Looking ahead, AI has the potential to further augment the expert judgment of NPs, expanding their ability to undertake a wider range of medical tasks. This concept isn’t limited to healthcare; it extends to various fields like contract law, educational instruction, and medical procedures. AI can enhance the capabilities of a broader workforce by complementing their skills and supporting their judgment.
Is there evidence to support this? Three recent studies offer "proof-of-concept":
1. Programming Productivity: A 2023 study by Sida Peng of Microsoft Research and colleagues from GitHub Inc. and MIT Sloan School of Management explored GitHub Copilot, an AI-based programming tool. In a controlled experiment, programmers using Copilot completed tasks about 56% faster than those without it, demonstrating significant gains in productivity.
2. Writing Enhancement: Researchers Shakked Noy and Whitney Zhang from MIT conducted a study published in 2023 focusing on writing tasks. Participants, including marketers and consultants, were divided into two groups; one used ChatGPT and the other conventional tools like word processors. The ChatGPT group showed marked improvements in both the speed and quality of writing. Notably, the least skilled writers using ChatGPT performed on par with the median of the non-AI group, indicating a substantial leap in quality.
3. Customer Service Efficiency: In another study highlighted by the National Bureau of Economic Research, researchers found that generative AI tools suggested responses to customer service agents, leading to a 14% improvement in productivity. The benefits were especially significant for novice workers, who reached the capability level of experienced agents three times faster than usual. Additionally, the AI tools appeared to reduce the turnover rate among new agents, likely because the AI buffer reduced negative customer interactions.
These studies illustrate that AI tools primarily supplement rather than replace human expertise. They do this by automating initial tasks—like drafting code or writing responses—freeing up professionals to refine these outputs. This automation saves time, while augmentation improves quality by helping less skilled workers achieve results closer to their more skilled counterparts.
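As a concrete illustration of that draft-then-refine pattern, here is a minimal sketch using the OpenAI Python client (the model choice, prompt, and helper function are my own illustrative assumptions, not details drawn from the studies above):

```python
# A minimal "AI drafts, human refines" sketch. Requires the openai
# package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def draft_reply(customer_message: str) -> str:
    """Ask the model for a first draft; a human agent edits before sending."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any chat model works
        messages=[
            {"role": "system",
             "content": "Draft a courteous, concise customer-service reply."},
            {"role": "user", "content": customer_message},
        ],
    )
    return response.choices[0].message.content

draft = draft_reply("My order arrived damaged. What are my options?")
print(draft)  # the agent reviews, corrects, and personalizes before sending
```

As in the customer-service study, the tool supplies a serviceable first pass; the human’s remaining work shifts toward judgment: checking accuracy, tone, and fit.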
However, integrating AI effectively also requires understanding and training. For instance, a study involving professional radiologists found that AI did not enhance diagnostic quality because the radiologists did not know how to use the AI predictions effectively. This underscores that successful deployment of AI tools in professional settings isn't just about the technology itself but also about properly training the users to leverage these tools to enhance their work.
These examples show that AI has the potential to democratize expertise, making high-stakes tasks accessible to a broader group and potentially reducing costs in fields dominated by highly specialized professionals.
If AI drives a significant increase in productivity across sectors like radiology, customer service, software development, and copywriting, one might wonder if this will lead to fewer workers performing tasks that previously required larger workforces. However, the impact might differ by sector.
Demand in areas such as healthcare, education, and computer programming seems nearly insatiable and is likely to grow even more if AI succeeds in reducing the costs associated with these services. Nevertheless, in some fields, yes, rapid productivity gains could lead to reduced employment. Consider the agricultural sector: in 1900, about 35% of U.S. employment was in agriculture, but by 2022, this figure had dropped to around 1% — not because we consume less food, but due to enormous productivity improvements.
Yet, what holds true for employment within a specific product or service sector does not necessarily apply to the economy as a whole. When more than a third of U.S. workers were employed on farms, sectors like health and medical care, finance and insurance, and software and computing were only beginning to develop.
Most contemporary jobs aren't merely holdovers from historical roles that have somehow dodged automation. Instead, they represent new specialties that arose in response to specific technological advances, requiring new types of expertise that didn't exist or were unimaginable in past eras.
Roles such as air traffic controllers, electricians, or gene editors didn't exist until related technological innovations created the need for such specialized skills. Furthermore, technology isn't the sole driver of new jobs. Many expert personal service occupations—like vegan chefs, college admissions consultants, and personal trainers—have emerged due to increased incomes, changing trends, and evolving economic conditions. Innovation helps by growing the economic pie, allowing societies to demand richer, more varied services.
Looking ahead, with stagnant population growth and an increasing proportion of the population aging past retirement, the real challenge for the U.S. and other developed nations isn't a lack of jobs but a shortage of workers. In Japan, which faces significant demographic challenges, strategies such as reducing store hours, using digital avatars, and hiring foreign students are already being implemented to manage labor shortages.
Ideally, if AI can enable more workers to apply their expertise more efficiently, it could increase the proportion of high-productivity jobs, alleviating some of the labor market pressures brought on by demographic shifts. This would not only maximize human potential but also ensure that economies remain resilient in the face of changing workforce dynamics.
Substitution Vs. Complementarity: Understanding The Role Of AI
If AI can provide abundant, low-cost expertise, does that render the smaller amounts of human expertise redundant? Consider this comparison to YouTube. If you’re skilled in home repair or the trades, you likely use YouTube for instructional videos—how to replace a light switch, detect a gas leak, or service a snowblower. A 2018 Pew Research study found that 51% of adult YouTube users consider the platform “very important” for learning how to do things they've never done before.
Yet, who really benefits from these tutorials? Not the experts—they're the ones creating the videos. How about beginners? Imagine I decide to upgrade the fuse box in my 19th-century house to a modern circuit breaker panel, despite having no experience with electrical tools and lacking the proper safety gear. I have a free Saturday, a nearby Home Depot, and a high level of confidence. I watch several YouTube tutorials and begin the project. Soon, however, I discover that my old fuse box doesn't match the one in the video. Whether I backtrack or forge ahead, I’m now at serious risk of an electric shock or starting a fire.
Clearly, that YouTube tutorial wasn’t meant for someone like me. To effectively use the freely available expertise, I needed basic knowledge on handling high-voltage circuits and the ability to troubleshoot unexpected problems. With that expertise, YouTube could have been the perfect resource.
The takeaway here is that tools don’t make expertise obsolete; rather, they enhance its value by broadening its effectiveness and reach. And the more potent the tool, the greater the risks involved. As Alexander Pope famously wrote, “A little learning is a dang’rous thing.”
While AI offers more than just a "YouTube for white-collar professionals," its role in augmenting expert capabilities remains critical. Most medical procedures, for example, are defined by a specific sequence of actions. However, performing these actions successfully requires practical experience and the kind of nuanced judgment that comes from hands-on practice.
It's conceivable that an experienced medical professional could learn to use a new medical device, such as a novel type of catheter, with AI guidance, or perform a rare procedure during an emergency. Similarly, an untrained person might manage to perform catheterization by following an instructional video on YouTube. However, when complications arise, the presence of someone with expert medical judgment becomes crucial.
Artificial Intelligence generally won't allow untrained, non-expert individuals to execute high-stakes tasks like catheterization effectively. However, it can help those with basic expertise enhance their skills. AI builds upon a solid foundation of knowledge, expanding the capability of expertise. Without such a foundation, its use becomes risky.
Am I ignoring the possibility that AI-powered robots will soon independently perform these tasks, eliminating the need for human experts? I believe not. While AI will continue to advance robotics, the prospect of robots autonomously performing physically demanding tasks in uncontrolled real-world settings, as opposed to controlled factory environments, is still far off.
If this seems overly cautious, consider the example of autonomous driving. Despite significant investments and bold predictions of rapid success, leading tech companies have struggled to achieve fully autonomous driving. The difficulty isn't in manipulating a vehicle's controls—that's the easy part. The real challenge lies in consistently interpreting and reacting to an unpredictable world filled with variable road conditions, pedestrians, and changing weather. Given these complexities, the cognitive and physical skills required to install a breaker box, cook a meal, or perform catheterization remain impressively demanding.
Is This The Decline Of Expertise?
One might argue that I am simply heralding the gentle decline of human expertise. Will AI render human skills as obsolete as tractors made ditch-digging, assembly lines made artisanal crafts, and calculators made manual long division?
While I doubt most would yearn for a return to forging tools from wrought iron or performing long division manually, I acknowledge the underlying concern. A future where human labor holds no economic value appears to me as an unmanageable nightmare, though some proponents of a guaranteed income might disagree. However, this conclusion does not necessarily arise from the premise.
Innovation consistently introduces new tools, which often serve as instruments of automation. Consider London cab drivers who spend years memorizing the city's streets, a skill rendered technologically redundant and economically unnecessary by smartphone navigation apps.
Tools can indeed diminish the need for certain human expertise. Yet, often the reverse is true as well. Consider air traffic controllers as discussed previously. Without radar, GPS, and two-way radios, these highly trained professionals would be reduced to merely scanning the skies. Similarly, the skills of established professions like doctors, builders, and musicians would be much less effective, even irrelevant, without the necessary tools to apply their expertise.
Economically speaking, while navigation apps have automated the skills of London cab drivers, radar and GPS have enhanced the capabilities of air traffic controllers, creating new types of expert work rather than eliminating old ones.
If innovation were solely about automation, we would have run out of work long ago. Instead, the industrialized world seems more likely to run out of workers before it runs out of jobs. This is likely because the most significant innovations have not been primarily about automation: automation did not give us airplanes, indoor plumbing, penicillin, CRISPR, or television.
Rather than merely automating tasks, these innovations have opened entirely new realms of human activity, created new jobs, and called for new forms of expertise. There were no aircraft crews, household plumbers, geneticists, or television actors until these innovations created the need for such specialized skills.
AI is set to automate some tasks, eliminate some jobs, and transform others. At the same time, it will introduce new products and services, create new demands for expertise, and unlock new opportunities for human progress—although predicting the specific outcomes remains challenging.
These opposing effects will produce both winners and losers, and the transition could be difficult. There is no economic principle guaranteeing that the effects of automation and new job creation will balance each other out; recent data suggest that automation is advancing faster than new job creation. Even if these forces were to reach a stalemate, it is improbable that the workers whose skills are rendered obsolete by AI will be the same ones who benefit from newly valuable expertise.
A Hypothetical Situation, Not A Prediction
History and academic research show that the technologies societies develop, and their uses—whether for exploitation or emancipation, broadening prosperity or concentrating wealth—are primarily determined by the institutions that create them and the incentives under which they are deployed.
The scientific mastery of controlled nuclear fission in the 1940s gave countries the capability to develop both highly destructive weapons and nearly carbon-free electric power plants. Eight decades later, different nations have prioritized these technologies in varying ways. For instance, North Korea has developed a suite of nuclear weapons but no civilian nuclear power plants, while Japan, the only nation to have suffered nuclear attacks, has no nuclear weapons but operates dozens of civilian nuclear power facilities.
Artificial Intelligence is even more versatile and widely applicable than nuclear technology, making the potential for both positive and negative uses much broader. The deployment of AI, and the resulting winners and losers, will hinge on the collective and often conflicting decisions made by industries, governments, foreign states, non-governmental organizations, universities, labor unions, and individuals.
The implications are profound, affecting economic efficiency, income distribution, political power, and civil rights. Some nations use AI to intensively monitor their citizens, suppress dissenting opinions, and identify (and punish) dissidents—and they are quickly spreading these capabilities to other authoritarian regimes. Conversely, in different contexts, the same AI technologies are being used to accelerate medical drug discovery (including Covid vaccines), facilitate real-time language translation, and provide tailored educational support to both struggling students and self-taught learners.
AI presents a significant threat to labor markets, but not in the form of a future devoid of jobs due to technology. The real danger is the devaluation of expertise—a future where human labor is generic and undifferentiated, where everyone is an "expert" thus making no one truly an expert, and where labor becomes disposable, with most wealth accumulating to the owners of AI patents. The political landscape of such a future would be dystopian, a blend of “WALL-E” and “Mad Max.”
Interestingly, this bleak economic vision is what many AI visionaries appear to anticipate. For instance, OpenAI, the creator of ChatGPT and DALL-E, defines Artificial General Intelligence (AGI) as "highly autonomous systems that outperform humans at most economically valuable work." In his 2023 bestseller “The Coming Wave,” AI pioneer Mustafa Suleyman asks, “If the coming wave is as general and wide-ranging as it appears, how will humans compete?”
The most generous interpretation of these foreboding statements is that they are probably incorrect—a simplification of the complexity of innovation to merely a dimension of automation. Do these technology leaders believe that power tools devalue the skills of contractors, or that airplanes outperform their passengers? The latter question is nonsensical. Airplanes are not our competitors; they are tools that enable us to fly.
Merely replicating existing capabilities more efficiently and cheaply is a modest achievement. The most valuable tools enhance human abilities and unlock new realms of possibility. The more mundane ones marginally improve upon existing tools.
My Maytag washing machine may have more computing power than the first Apollo spacecraft, and I can operate it remotely from anywhere in the world, but it's never going to land on the moon. If AGI merely improves our existing tools rather than enabling groundbreaking achievements like a moon landing, it isn’t AGI that has let us down; rather, we have not leveraged its full potential.
Amidst widespread media hype about an AI-induced apocalypse, it's easy to overlook that the industrialized world currently has more jobs than it does workers. The question isn't whether there will be jobs—we will have them—but whether they will be the jobs we desire.
For the fortunate, work provides purpose, community, and respect. However, the quality and dignity of a significant number of jobs have diminished over the past four decades as computerization has advanced and inequality has become more pervasive.
AI offers a unique opportunity to reverse current trends—extending the relevance, reach, and value of human expertise to a broader workforce. This could not only reduce earnings inequality and decrease the costs of essential services such as healthcare and education, but it could also help restore the quality, prestige, and autonomy that many workers and jobs have lost.
This alternative path is not a guaranteed outcome of AI development. However, it is technologically feasible, economically sensible, and morally appealing. Given this potential, we should focus not on what AI will do to us, but on what we want AI to do for us.


