AI, Disruption, and Automation: Is the Academic World the Next Kodak?

In public discourse, the emergence of large language models is typically discussed in terms of its implications for knowledge workers such as programmers, lawyers, and accountants. Far less is said about the effects of this technology on the producers of knowledge themselves, those whose profession has for centuries consisted of reading, synthesizing, conceptualizing, and transmitting. Yet the disruption is profound. The academic world is living through its own “Kodak” crisis but does not seem to be aware of it.

Scholars devote many years to two main activities: assimilating the scientific literature in their field and writing articles based on it. Sometimes they compile empirical data as well. This represents a considerable cognitive investment, usually repaid with a position at a prestigious institution or university. The key to entry is a Ph.D., which requires five to seven years of work akin to monastic life. The entire system of doctoral training, academic publishing, and university hiring rests on the idea that scholarship equals value.

Yet this model is breaking down, at least in its most fundamental aspect. What does a researcher who wants to publish actually do? They conduct a literature review; AI can produce an initial draft more quickly and with broader coverage. They process the collected data; AI can do this faster. They draft the article according to the target journal’s guidelines; AI produces an acceptable first draft in minutes. The contributions section? Ten minutes and five iterations. The abstract? The same. Of course, AI is not infallible, and we must not confuse producing a text that resembles an article with producing a scientific contribution. The academic form is replaceable; the epistemic value is not. Still, the most time-consuming part of the job, the part that accounted for most of the initial investment, is becoming a matter of oversight and control. Supervision is neither quick nor trivial; it often requires the very expertise the tool seems to replace. But the balance of time between production and supervision has shifted decisively.

Naturally, this substitution does not affect all researchers in the same way. The hard sciences are partially protected by the laboratory: as long as researchers handle test tubes, calibrate instruments, and conduct physical experiments, the core of the profession rests on a tangible foundation that AI cannot dematerialize. The situation is more mixed in the humanities and social sciences. Some of these fields produce primary data that AI cannot generate. Others, those that work primarily with existing data and derive their conceptualizations from the literature, find most of their work replaceable.

What, then, remains that cannot be replaced? Precisely what AI cannot do. Identifying a paradox, for example. Formulating a research question. Designing a protocol. Going into the field to conduct difficult interviews, observe organizational situations, or sift through unpublished archives. Constructing concepts, arbitrating controversies, evaluating the validity of an approach, training and supervising doctoral students, and signing off on an interpretation before the community and taking responsibility for it. The function of assurance, embodied by an identifiable researcher who is methodologically accountable and answers for their errors, cannot be reduced to the production of text. It remains fully human. Within this scope, the researcher’s value is intact. It simply shifts to the portion of the profession that bookish scholarship does not cover, and that portion is numerically smaller.

The problem is that the academic system has never learned to operate primarily on this irreplaceable core. Two pre-existing weaknesses have gradually made it a marginal practice for a significant portion of researchers.

The first is growing isolation. Over the decades, the profession has become more insular: scholars publish for peers, in journals produced and read by peers, on increasingly narrow and often esoteric topics, in increasingly coded language. Links with real-world organizations, businesses, government agencies, and other fields have frayed to the point of becoming tenuous for a significant fraction of the profession. This confinement to pure theory was problematic even before AI. With AI, it becomes fatal, because it deprives researchers of the one asset AI cannot produce: situated judgment, knowledge of the field, and a nuanced understanding of a particular context.

The second is a reluctance to teach. In the current system, teaching is often viewed as a distraction from the “real” work of publishing, the work that careers and institutional incentives actually reward. For those who merely resign themselves to teaching, the educational impact is negligible. When students can obtain patient, readily available explanations tailored to their level from an AI, teachers who settle for the bare minimum become openly replaceable. What do they offer that an AI cannot?

Faced with this situation, researchers are not standing still. A veritable AI race is underway in labs, with workshops, training sessions, and seminars on “how to use ChatGPT to publish more” and “how to speed up data processing.” But this energy is being channeled into serving the existing model: producing more articles, faster, in the same journals, judged by the same evaluation criteria. This is the “cramming” reflex that Clayton Christensen described among established players facing disruption: integrating the new technology into the old model to improve its efficiency rather than rethinking the model itself. The analogy with Kodak, which was unable to reinvent itself in a digital world, has its limits. But the cognitive mechanism of the famous innovator’s dilemma is comparable: the incumbent fails to respond to a disruption because it views it through the prism of the very model that disruption is dismantling.

Institutions may eventually react, but for now the entire system of recruitment, evaluation, and symbolic hierarchies rewards precisely what is becoming a dead end: scholarship and article production. The trap is formidable because it is invisible to those involved. A scholar who has invested twenty years in becoming a recognized specialist has no interest in acknowledging how much of their profession is becoming automatable.

Therefore, the question is not whether scholars will be replaced. Rather, the question is: What is their value now, if their bookish scholarship is becoming commonplace, their isolation from the field prevents them from providing added value, and their disengagement from teaching makes them replaceable by a machine? The answer lies in producing original data, formulating new questions, ensuring the validity of a method, guiding reasoning, and immersing oneself in real-world situations. However, this requires a profound rethinking of the profession, which runs counter to current incentives.

Danger zone

But one thing is certain: scholars whose value rests primarily on what they have read and synthesized are in danger, as more and more of their work falls within the scope of what can be automated. Clearly identifying that portion, and reinventing one’s profession while there is still time, is a task facing all white-collar workers, and scholars will not escape that reckoning.

🇫🇷 A French version of this article is available here.

🔎 To explore the topic further, readers may consult the series of three articles by researcher Alexander Kustov, who takes a radical stance on the issue; the first is titled “Academics Need to Wake Up on AI.”
