Twenty-two years ago, on September 11, 2001, the world watched in horror as two planes crashed into the World Trade Center in New York in one of the most spectacular terrorist attacks in history. Behind this immense tragedy, whose geopolitical consequences are still being felt today, lies the failure of the American intelligence apparatus, and of the CIA in particular. Despite all the information at its disposal, the CIA failed to anticipate the attack. What explains this failure?
Created in 1947 with the explicit mission of preventing another Pearl Harbor (the quintessential strategic surprise), the CIA has failed in this mission on at least four major occasions: the Cuban Missile Crisis of 1962, the Iranian Revolution, the collapse of the USSR, and 9/11. Understanding how these surprises occurred despite the agency’s extraordinary resources is the subject of my 2013 book, Constructing Cassandra, co-authored with Milo Jones.
Traditional explanations of surprise
Strategic surprise has long occupied a central place in the fields of intelligence, international relations, and warfare. Several types of explanations have been proposed. The first is a communication problem between an intelligence service and the decision-maker: the possibility of an event may be mentioned in a report but never acted upon. For example, the CIA denies that it was surprised by the collapse of the USSR because one of its reports had expressed doubts about the country’s economic health. But as we know, the art of bureaucracy is to cover itself by raising every possibility. What counts, then, is the hypothesis that a service promotes, not the ones it merely mentions. Why does it promote some and not others?
The second explanation is organizational dysfunction. The compartmentalization of departments, their difficult relationships, even their competition, the culture of secrecy, or the slowness of decision-making can complicate the transmission of crucial information. This was particularly true in the case of Iraq’s weapons of mass destruction in 2002/2003. But if the problem is organizational, why has it not been possible to devise an optimal organizational form over time? Judging by the numerous reforms of American intelligence, especially after a major failure, it’s not for lack of trying.
Other explanations focus on the psychological dimensions of the decision, emphasizing the importance of cognitive or emotional biases. But if the problem is psychological, why do some surprises occur and others do not? Why are there categories of errors that are not detected by improved processes and methods?
To these explanations we can add a cybernetic approach and a contingency approach. According to the former, surprise results from the inability to distinguish good information from bad, especially in the face of ever-growing volumes of information; the goal is therefore to detect weak signals, the precursors of an emerging phenomenon. According to the latter, a strategic surprise is an unpredictable, unavoidable event with no cause to which we can respond. But if the problem is detecting weak signals, how do we extract and sort the relevant information? In her groundbreaking study of Pearl Harbor, Roberta Wohlstetter showed that there was no shortage of information about the Japanese fleet; American analysts simply did not know how to use it. Similarly, why do false assumptions persist for so long in the face of contrary, and sometimes readily available, evidence?
An alternative approach: the identity angle
In the book, we took a close look at an organization’s identity and analytical culture, including how its assumptions and beliefs are created and maintained over time. We found four fundamental and enduring characteristics:
1- A homogeneous body of analysts that prevents a diversity of hypotheses from being considered;
2- A scientistic attitude that favors a purely analytical approach, detached from a reality that is fundamentally social;
3- A preference for secret information, which determines both the choice of priorities and the type of information sought (everything that is not secret is not worthy of interest);
4- The primacy of consensus, which means that what is communicated to the decision-maker is a compromise that is politically acceptable to all stakeholders, thus blinding the decision-maker to extreme possibilities.
These characteristics have a significant impact on the four phases of an intelligence service’s activity: defining the mission, gathering information, analyzing it, and finally producing and disseminating the results of the analysis to decision-makers. They explain why certain problems are ignored, why certain data are favored, why certain hypotheses are discarded, and why certain possibilities are never raised with decision-makers. Each of these phases was significantly disrupted by the CIA’s identity, and the impact of these disruptions on the genesis of the four surprises is considerable.
Where were the Cassandras?
For each of these surprises, we were able to identify Cassandras, i.e., one or more individuals who anticipated the events to varying degrees. In the case of the missile crisis, the Cassandra was the CIA director himself, John McCone. In the case of the Iranian revolution, it was journalists, businessmen and the Israeli services. In the case of the USSR, it was economists like Igor Birman. In the case of the September 11 attacks, it was the head of the Bin Laden unit, Michael Scheuer.
What did these Cassandras have in common? They were poorly integrated into their organizations and had reputations as difficult people. John McCone was new to the job and not cut from the organization’s mold. Although he was convinced of the presence of missiles, his teams did not follow his lead and did nothing while he was away. In the spring of 1978, the French newspaper Le Monde published a series of alarming articles about the Shah’s regime. Igor Birman unsuccessfully denounced the CIA’s calculations, which overestimated Soviet GDP and thus the strength of the regime. For months, Michael Scheuer tried in vain to alert his superiors about Bin Laden, who was considered a minor player. As a last resort, he wrote directly to the head of the agency, skipping six levels of hierarchy. He was immediately fired and became… a librarian. September 11 was only a few weeks away.
The study of these examples also refutes the idea that these events were impossible to imagine. Intelligent, informed people hypothesized about them, but the identity and culture of the CIA prevented those hypotheses from being investigated and accepted. The alternative hypotheses they defended were stillborn, and the organization’s vast machine for collecting, analyzing, producing, and disseminating a more accurate vision of the field of possibilities was thwarted at every stage.
An understanding of strategic surprises based on organizational identity is a prerequisite for the other explanations (psychological, organizational, weak signals, and so on), because identity and culture create the conditions under which they operate. In essence, the lesson of Constructing Cassandra is this: what surprises us depends on who we are.
What holds for the CIA also holds for other organizations, especially businesses. The field of innovation has long observed the paradox of companies that are leaders in their field being overtaken by a disruption in their environment. Here again, the paradox is explained less by what these companies explicitly want than by what makes them what they are: the historical business model they are locked into. In questioning their approach to strategic surprise, then, organizations of all kinds would do better to focus on their identity than on their analytical tools.
➕ Constructing Cassandra, co-authored with Milo Jones, is available from Amazon.
📬 If you enjoyed this article, don’t hesitate to subscribe to receive future articles via email (“I subscribe” in the upper right corner of the home page).
🇫🇷 French version of this article here.