Chapter Two: Assessing Critical Analytical Shortfalls
Many of today's principal analytic problems arise from continued reliance on analytic tools, methodologies, and processes that were appropriate to ... the Cold War.

As astute members of the Intelligence Community have observed, intelligence capabilities (and the organizations that provide them) are not general-purpose tools but, rather, specialized and “finely honed” instruments that evolve and adapt to the specific challenges and requirements placed upon them. Many of today’s principal analytic problems arise from continued reliance on analytic tools, methodologies, and processes that were appropriate to the static and hierarchical nature of the Soviet threat during the Cold War and were, in that environment, largely successful. We possessed several decided advantages that enabled us to overcome some of the limitations on our capabilities: a careful and cautious “Main Enemy” that was also a far simpler and slower target than those we face today; more time; more numerous and more experienced analysts; and varied and well-established sources of information, including the adversary’s own vast outpouring of seemingly trivial but very useful data.
Given these advantages, the Intelligence Community was able to:
Concentrate, focus, and build a deep foundation of cumulative evidence;
Foster longstanding and deep expertise in its analytic cadre by exploiting a very dense information environment;
Rely on multiple-source collection, which generally allowed us to cross-check information and make judgments consistent with the highest professional standards;
Largely neglect intelligence collection and analysis on “soft” cultural, societal, and people issues (other than the most prominent elites), because the plans and intent of the adversary were in the hands of a small Politburo and because we “knew” that knowledge of the plans and intent of subordinate elements or nations—for example, Poland—was rarely necessary; and
Employ an intelligence model that could rely on collecting “secrets” in voluminous quantities and required a mass production approach for producing reports.
That was also a period of relative information scarcity on worldwide events, and the Intelligence Community had a substantial comparative advantage over other information providers through access to intelligence obtained by clandestine collection.
The United States had many Cold War successes, of course, but there were always significant shortcomings in American intelligence capabilities. These shortcomings resulted in surprises that had important and unanticipated consequences for US policy. Among these were the Egyptian/Syrian attacks on Israel opening the 1973 Yom Kippur War, the 1974 Indian nuclear test, the fall of the Shah and the accompanying ascendance of a fundamentalist Islamic regime in Iran in 1979, the unexpectedly rapid collapse of the Warsaw Pact in 1989, and the dissolution of the Soviet Union in 1991.
The traditional intelligence methods were even less successful against some important targets that have carried over into the post-Cold War era—such as Iran, China, North Korea, nuclear proliferation, and the global WMD black market. In these cases (exemplified by the 1998 Indian and Pakistani nuclear tests), we have had serious shortcomings in understanding because, in the face of the numerous, continuing, and competing demands for higher priority current intelligence, we were unwilling or unable to make the long-term commitments to build the deep knowledge base the task required.
Today, the United States may need to take action more frequently on the basis of ambiguous intelligence, against harder to recognize threats, from less risk-averse enemies—especially the shifting groups of radical Islamic fundamentalists who foment terrorist activities and may wield capabilities that could devastate cities, societies, or economies. These new Islamic adversaries also pose, as did the Soviet Union, a long-term ideological challenge whose character and “operational code” we do not currently understand. Unlike the Soviet Union, however, which was a hierarchical, bureaucratic state whose organizational character we did understand, the new targets are more dynamic and complex. These new groups morph frequently and metastasize like cancers, emerging with new personalities and network linkages and threatening new targets.
As a result, today’s Intelligence Community must contend with:
Little foundational knowledge on major adversaries and cultures;
Fragmentary evidence and sparse intelligence flows, which often cannot be substantiated or contextualized—even against the highest priority current targets;
Thin domain (“account”) expertise, due to multiple and shifting priorities and frequent shuffling of relatively inexperienced analysts; and
An analysis model that remains heavily dependent on “secrets,” even when the key intelligence questions may involve mostly “mysteries” and “obscurities.”
Aggravating these existing shortcomings, a cascade of significant and complex developments that will pose substantial new challenges is already evident, making the adequacy of the community’s current capabilities even more problematic. These changes involve more dynamic geostrategic conditions, more numerous areas and issues of concern, smaller and more agile adversaries, more focus on their intentions and plans and less on large physical objects and weapons systems, a more open information environment, and more widespread understanding of American intelligence methods and capabilities by our adversaries. Additionally, the sense of vulnerability to terrorist attack on US territory left by 9/11 has created huge new demands on the Intelligence Community to provide information for homeland security. The immensity of these challenges complicates the task of developing appropriate cures for the real causes of inherited shortcomings and of heading off new analytic shortcomings.
The lengthy reports of the 9/11 Commission and the SSCI report on Iraqi WMD laid out in great detail their views on why these two failures occurred. Although the failures occurred in very different substantive domains, at very different levels of analysis, and in different parts of the intelligence organizations, and exhibited significantly different failure modes and causes, both reports tended to locate those causes in problems related to information sharing and coordination, instances of insufficient collection and poor data, and “errors” of analytic judgment by individuals and groups.
The Intelligence Community now finds itself under intense scrutiny and faced with the need to transform in fundamental ways in order to meet the entire range of national security intelligence challenges only partially recognized in the legislatively mandated reforms. Addressing these challenges requires fundamentally new approaches in both collection and analysis as well as in the processing and dissemination methods that support them. As Barger aptly notes, what is needed is revolution, not reform.
Misunderstanding Analytic Processes
Major American corporations have fallen victim to a pattern of "over adaptation" and "change blindness." The Intelligence Community runs the same risk.
The litany of failures should have been a tip-off to the deep-seated nature of the analytic problems. Such a series of “idiosyncratic” errors by individuals and small groups within an organization is, however, more likely to be a symptom than a root cause, as Perrow convincingly demonstrated in case studies of Three Mile Island and Chernobyl. A pattern of repeated errors is often a signal of seriously dysfunctional methods—fundamental and systematic failures of procedures and processes throughout an organization. From this perspective, the proximate causes of the failures identified in both the report of the 9/11 Commission and the SSCI report on Iraqi WMD hardly appear to be convincing root causes of these recent intelligence failures.
A more accurate diagnosis of the sources of our intelligence shortcomings requires a deeper and more thoughtful analysis of why organizations make mistakes—causes that go beyond obvious superficial conditions created by flawed organizational structures and insufficient directive authorities. As Charles Sabel noted, “There he [Herbert Simon in Administrative Behavior] showed that modern organizations were efficient precisely because they systematically turned habits—the disposition to react to particular situations in a characteristic, but open-ended way—into rigid routines.” Not unexpectedly, these routines “work” for the specific conditions they were developed to address. They rarely perform well for off-design conditions, however, and, often, the better they work for the design conditions, the narrower the set of conditions for which they are appropriate. Paradoxically, the better they work, and, therefore, the more efficient the organization at its routine tasks, the greater the danger that the organization will fail to be sensitive to its environment and changes occurring there. As with the dinosaurs, scores of major American corporations have fallen victim to this pattern of “over adaptation” and “change blindness.” The Intelligence Community runs the same risk.
The Problem of the Wrong Puzzle
Frequent public references to “failing to connect the dots” are especially problematic for an accurate understanding of intelligence errors and failures. This view of the analytic shortfalls is particularly perverse, because it masks the true nature of the analyst’s challenges. The flawed “connect the dots” analogy flows from the image of the children’s game book in which lines are to be drawn between a set of numbered dots in order to make a recognizable picture. That analogy assumes, however, that—as in the children’s book—the dots exist and that it will be obvious which dots connect to which others and in what order. The problem is that this simple analogy overlooks a well-known phenomenon in psychology that is often illustrated by the “Rubin Vase Illusion”: evidence really does not “speak for itself”; rather, information is “perceived and interpreted.” Humans are extremely good at finding patterns, even when none exists—hence the classic intelligence aphorism, “You rarely find what you’re not looking for and usually do find what you are looking for.”
If we are to use a puzzle analogy, perhaps a more appropriate model might be that of a guest at a resort hotel who, on a rainy afternoon, wanders into the game room and finds a box holding a large number of jigsaw puzzle pieces. As the cover of the box is missing, there is no picture to guide him in reconstructing the puzzle, nor is there any assurance that all the pieces are there. Indeed, when he discovers that there are several other empty puzzle boxes on a shelf, it is not even clear that all the pieces in the box belong to the same puzzle. Reconstructing the puzzle in this example is a far different and more difficult challenge than linking numbered dots, where the outline of the image is reasonably apparent.
Talk of a "science of analysis" is a conceit .... The reality is otherwise.
Both the dots analogy and the model of evidence-based analysis (discussed in the following section) understate significantly the need for imagination and curiosity on the part of the analyst.
The Myth of “Scientific Methodology”
Many well-informed outside commentators and intelligence professionals continue to talk about the “science of analysis,” and only some of them are truly aware of the shaky foundations of this belief or of its real implications. But this talk of a “science of analysis” is a conceit, partly engendered by Sherman Kent’s dominating view of intelligence analysis as a counterpart of the scientific method. The reality is otherwise; analysis falls far short of being a “scientific method” in the common, but usually misunderstood, sense. Moreover, this view of science itself is “scientism,” which fails to recognize the important role of less “rational” and less “scientific” elements, such as imagination and intuition. As Mark and Barbara Stefik, knowledgeable and respected participants in the discipline of science, have written about science and innovation in a recent book:
The word “theory” usually connotes a formal way of thinking logically or mathematically. In this formal sense, theory takes its place in a knowledge-generating process called the scientific method. The scientific method includes hypothesis formation, experiment planning, execution, and data analysis. In this scheme, theory is used to make predictions. Theory is created by a process that includes hypothesis formation and testing.
Unfortunately, this notion of theory and the working methods of science and invention leaves out imagination. This makes it both boring and misleading….
Citing a well-known commentary by a Nobel laureate, the Stefiks add:
In [Peter] Medawar’s view, the standard scientific paper promotes an error of understanding how science is done because it confuses proving with discovering. The layout of a scientific paper makes it appear that the doing of science is the careful laying out of facts. Once the facts are in, the conclusions follow. This makes it seem like science is all about deduction. Unfortunately, this formal structure leaves out the creative part of discovery and invention. The structure of a scientific paper is only about proof, promoting the systematic marshalling of evidence. In this abbreviated story, once a scientist has by some means figured it out, the paper lays out the conclusions logically.
A more realistic and useful appraisal of the process of intelligence analysis comes from Charles Allen, a long-time senior intelligence official: “I want to speak mainly about the art and craft of intelligence…. We could have talked about the science of intelligence, but, by and large, as far as I’m concerned, the science of intelligence is yet to be invented. I don’t see it. It’s not really there.” This is not to suggest that rigor, accuracy, clarity, and precision are not required in intelligence analysis; given the stakes, they are obviously essential. But demanding a false precision from an analysis process that is itself incorrectly modeled on a common misunderstanding of the methods of science is not likely to improve the quality of analysis. Indeed, an important issue for both managers and users of analysis to consider is the likelihood that there may be little concordance between precision in the details of the answer and the accuracy of the overall (gestalt) judgment. A process and methodology too focused on provable evidence may get the details right at the cost of ignoring important inferential judgments that need to be conveyed in order to provide a true sense of the uncertainties of both evidence and judgment.
The Flaws of a “Tradecraft” Culture
A process and methodology too focused on provable evidence may get the details right at the cost of ignoring important inferential judgments.
Intelligence analysis remains largely a craft culture that is conducted within a self-protective guild system and taught by means of a broken apprenticeship process. There are other fields, such as science, medicine, and warfare, in which knowledge is also understood to be tentative and not subject to formal “proof,” as is possible in mathematics. Within such professions, the cumulative practices, habits, and mindsets of an evolved culture are especially important for the creation of knowledge and the transmission of expertise. As with intelligence, these other communities are ones in which much of the knowledge needed for effective performance relates to the often-arcane processes of the craft (tradecraft, as the Intelligence Community terms it). This knowledge is tacit and difficult to elicit from the experts, and it is usually best communicated by personal example and practice. However, the culture of intelligence lacks many of the formalized processes, such as “peer review,” and the cumulative knowledge structures that the academic, military, and medical communities have created to address similar challenges in building a solid foundation of understanding that can be passed to successor practitioners. Perhaps for these reasons, intelligence analysis is not yet a true profession. Within this culture, therefore, effective mentorship is especially important for transmitting expertise and, perhaps more significantly, for imparting professional standards and values to apprentices.
See Aris A. Pappas and James M. Simon Jr., “The Intelligence Community: 2001–2015,” Studies in Intelligence 46, no. 1: 39–47; Bruce Berkowitz, “Intelligence for the Homeland,” SAIS Review 24, no. 1 (Winter-Spring 2004): 3.
To appreciate the costs of this view, see the brief account on page 41 of Professor Murray Feshbach’s use of Soviet and Russian health statistics to derive an important conclusion.
The bureaucratized nature of the Soviet Union and the state of telecommunications throughout the Cold War allowed the United States to exploit its technological prowess and employ effective remote collection capabilities in addition to the traditional method of using human sources.
Marc Sageman, “Understanding Terror Networks,” FPRI E-Note, 1 November 2004.
In addition, inexperienced analysts may lack the ability to tap sources of deep expertise available through connections to long-standing professional networks.
Fritz Ermarth originally developed this typology. For the details, see page 40.
At the same time, the breaching of old distinctions between foreign and domestic intelligence activities has increased public concern over potential threats to privacy.
Indeed, the IRTPA was largely driven by the recommendations of the 9/11 Commission, which focused mostly on one, albeit important, aspect of intelligence needs, that of counterterrorism. But, as noted above, the IRTPA reforms mandate changes that affect all functions of the Intelligence Community.
Perrow, Normal Accidents.
For examples of the consistent nature of such errors, see the following: the “Jeremiah Report”; Interview with Richard Kerr, MSNBC, 14 July 2003 (concerning the “Kerr Report” on the Iraqi WMD NIE). An unclassified portion of that report is in Studies in Intelligence 49, no. 3 (2005) (hereinafter cited as Kerr, et al.).
Charles F. Sabel, “Theory of a Real Time Revolution.”
See Carol Loomis, “Dinosaurs?” Fortune, 3 May 1993: 36ff. An accompanying sidebar recounted how a senior Sears executive pointed behind himself to the tens of volumes of corporate practices and rules that governed the corporate response to any conceivable problem. The emergence of mid-market national discount chains wasn’t covered; and, therefore, “…wasn’t a problem they had to address.”
This relatively simple problem is known formally in mathematics as a “directed graph.”
Edgar Rubin, 1915. Heuer discusses such perceptual problems using different examples in Chapter 2 of The Psychology of Intelligence Analysis.
This remark is often attributed to Amrom Katz, a pioneer in aerial and overhead reconnaissance.
Heuer, Chapter 6.
See, for example, Frank Hughes and David Schum, Evidence Marshalling and Argument Construction.
Sherman Kent, Strategic Intelligence. See also Jack Davis, “The Kent-Kendall Debate.”
The term “scientism” is used to connote the frequent confusion between the appearance of a formal scientific methodology and the actual conduct of science, which may be intuitive, but is nonetheless subject to rigorous proof.
Mark Stefik and Barbara Stefik, Breakthrough: Stories and Strategies of Radical Innovation, 110.
The Stefiks are referring to Peter Medawar’s article, “Is the Scientific Paper Fraudulent? Yes; It Misrepresents Scientific Thought,” published in the Saturday Review 47, 1 August 1964: 42–43.
Stefik and Stefik, 110–11.
For more on this subject, see the section on “Evidence-based Scientism” in Chapter Three.
The professions of medicine and law refer to themselves as “practices,” which reflects their roots in the guild system. More importantly, perhaps, this usage conveys that the essential elements of a profession (ethos, ethics, and skills) are human values best transmitted by people.