The rapid advances in artificial intelligence should lead to significant productivity gains. Yet, this is not always the case. Why? Technology alone is not enough. There is no direct, linear relationship between technological use and performance gains. In some cases, technology can hinder productivity. It all depends on how technology is integrated and used. To better understand the challenges of AI, it is helpful to look back at the introduction of mechanical looms in the 19th century.
In 1842, managers of textile mills in Lowell, Massachusetts, thought they had found a miracle solution. Each worker operated two mechanical looms, and the mills decided to acquire new, more efficient machines. The reasoning was simple: production would double if each worker operated a third loom. Yet it didn’t work, and the reason it didn’t work helps us understand what is happening today with artificial intelligence.
In his case study, economist James Bessen, a specialist in the impact of new technologies on the world of work, shows that, with the mechanical loom, the worker’s real job was no longer weaving; rather, it was monitoring the weaving process. The worker inspected the fabric as it came off the machine, looking for broken threads and defects. The worker stepped in before a small flaw could ruin an entire roll of fabric. When managers added a third loom, they didn’t increase the amount of weaving a person could do. They increased the amount of checking required. This exceeded what a worker could reliably handle. The bottleneck wasn’t the machines. It was the humans’ ability to monitor them.
As a result, the factories had to slow their machines down by 15% to maintain acceptable quality. It then took a full year of retraining before workers could manage three looms at full speed without letting errors slip through.
Bessen observes that, once the mechanical loom was in place, 62% of the resulting productivity gains came from better training of the workers, who became able to supervise a greater number of looms, rather than from the machines’ performance. Sixty years later, a single worker operated 18 looms and produced 50 times more than a century earlier. To achieve this, however, factories had to triple their investment in training per worker. Machines and skilled workers were not interchangeable; they depended on one another.
AI: The New Loom?
In a recent article, three economists—Christian Catalini, Xiang Hui, and Jane Wu—argue that this same dynamic is playing out today with AI, but more rapidly and on a much larger scale. Artificial intelligence is indeed following a classic trajectory for disruptive technologies: after a fairly long incubation period, its performance increases rapidly. The cost of generating text, code, images, analyses, and virtually any other type of output is also falling rapidly. However, the cost of verifying this output—confirming its accuracy, ensuring it is not a hallucination, and establishing its reliability—is rising because it remains dependent on human judgment.
Economic value is destroyed in the gap between these two curves. We can now produce as much as we want for less and less money, but if we cannot reliably evaluate the results, a growing portion of that output is useless at best. And verification is very difficult.
The natural response to this problem would be to use AI to control AI. According to Catalini, Hui, and Wu, however, this approach is risky. AI systems trained on similar data share the same blind spots and make similar mistakes with complete confidence. Having one model verify another does not provide independent oversight; rather, it is a game of mirrors that can create the illusion of agreement while perpetuating the same errors. It is like asking a bar’s patrons to keep watch over the bar.
Verification must therefore be done by humans, and given the complexity of the task, they must be well trained for it. However, as AI technology advances, young people are no longer hired for the entry-level tasks that would have allowed them to gain practical experience over time, while experienced workers transfer their expertise directly to AI systems. This dehumanized conception of work leads to the belief that the two groups no longer need to communicate or work together. The virtuous professional cycle that once produced human judgment, a judgment developed through experience, is now blocked at both ends.
The Challenge of Training
The factories in Lowell eventually solved their problem, but it cost them dearly. Rather than permanently slowing down the machines, they thoroughly trained the people responsible for monitoring them. The training challenge is therefore significant. Moreover, this experience underscores that we cannot think of the introduction of a new technology strictly in terms of human-machine substitution. Rather, we must devise the best modes of human-machine collaboration: there are partial substitutions, but also new tasks for humans. The example of Lowell shows that this challenge exists for any new technology and that there is nothing new here. It reflects a more general idea that I have mentioned before: what matters is not the technology itself, but how it is used. The challenge lies in devising a value-creation system and an organizational structure that make the most of the technology.
Lowell’s industrialists learned this lesson the hard way, experiencing a year of declining productivity and costly retraining. Let’s try not to repeat their mistakes!
Sources: @ccatalini on X and James Bessen on arXiv.
🇫🇷 A version in French of this article is available here.
📬 If you enjoyed this article, feel free to subscribe to be notified of future ones.
