eth•nog•ra•phy [F ethnographie, fr. ethno- + -graphie -graphy] (1834) : the study and systematic recording of human cultures; also : a descriptive work produced from such research. (Merriam-Webster’s Collegiate Dictionary, Eleventh Edition)
Because I conducted human performance–related fieldwork before I came to this project, I carried into it a certain amount of experiential bias, or “cognitive baggage.” The research findings from those other studies could bias my perspective and research approach within the Intelligence Community. For example, surgeons and astronauts do not need to deal with intentionally deceptive data. Patients are not trying to “hide” their illnesses from surgeons, and spacecraft are not thinking adversaries intent on denying astronauts critical pieces of information. This one difference may mean that intelligence analysis is much more cognitively challenging than the other two cases and that the requisite psychomotor skills are significantly less important. In an effort to counteract the biases of experience, I will attempt to be explicit about my own definitions in this work.
The three main definitions used in this work do not necessarily represent definitions derived from the whole of the intelligence literature. They draw both on the Q-sort survey of the intelligence literature described later and on the 489 interviews, focus groups, and two years of direct and participant observation collected during this project.
Definition 1: Intelligence is secret state or group activity to understand or influence foreign or domestic entities.
The above definition of intelligence, as used in this text, is a slightly modified version of the one Michael Warner proposed in a recent article in Studies in Intelligence. Warner reviews and synthesizes a number of previous attempts to define the discipline of intelligence and comes to the conclusion that “Intelligence is secret state activity to understand or influence foreign entities.”
Warner’s synthesis seems to focus on strategic intelligence, but it is also logically similar to actionable intelligence (both tactical and operational) designed to influence the cognition or behavior of an adversary. This synthesis captures most of the elements of actionable intelligence without being too restrictive or too open-ended, and those I asked to define the word found its elements, in one form or another, to be generally acceptable. The modified version proposed here is based on Warner’s definition and the interview and observation data collected among the law enforcement elements of the intelligence agencies. These elements confront adversaries who are not nation states or who may not be foreign entities. With this in mind, I chose to define intelligence somewhat more broadly, to include nonstate actors and domestic intelligence activities performed within the United States.
Definition 2: Intelligence analysis is the application of individual and collective cognitive methods to weigh data and test hypotheses within a secret socio-cultural context.
This meaning of intelligence analysis was harder to establish, and readers will find a more comprehensive review in the following chapter on developing an intelligence taxonomy. In short, the literature tends to divide intelligence analysis into “how-to” tools and techniques or cognitive processes. This is not to say that these items are mutually exclusive; many authors see the tools and techniques of analysis as cognitive processes in themselves and are reluctant to place them in different categories. Some authors tend to perceive intelligence analysis as essentially an individual cognitive process or processes.
My work during this study convinced me of the importance of making explicit something that is not well described in the literature, namely, the very interactive, dynamic, and social nature of intelligence analysis. The interview participants were not asked to define intelligence analysis as such; rather, they were asked to describe and explain the process they used to perform analysis. The interview data were then triangulated with the direct and participant observation data collected during this study.
Despite the seemingly private and psychological nature of analysis as defined in the literature, what I found was a great deal of informal, yet purposeful collaboration during which individuals began to make sense of raw data by negotiating meaning among the historical record, their peers, and their supervisors. Here, from the interviews, is a typical description of the analytic process:
When a request comes in from a consumer to answer some question, the first thing I do is to read up on the analytic line. [I] check the previous publications and the data. Then, I read through the question again and find where there are links to previous products. When I think I have an answer, I get together with my group and ask them what they think. We talk about it for a while and come to some consensus on its meaning and the best way to answer the consumer’s question. I write it up, pass it around here, and send it out for review.
The cognitive element of this basic description, “when I think I have an answer,” is a vague impression of the psychological processes that occur during analysis. The elements that are not vague are the historical, organizational, and social elements of analysis. The analyst checks the previous written products that have been given to consumers in the past. That is, the analyst looks for the accepted organizational response before generating analytic hypotheses.
The organizational-historical context is critical to understanding the meaning, context, and process of intelligence analysis. There are real organizational and political consequences associated with changing official analytic findings and releasing them to consumers. The organizational consequences are associated with challenging other domain experts (including peers and supervisors). The potential political consequences arise when consumers begin to question the veracity and consistency of current or previous intelligence reporting. Accurate or not, there is a general impression within the analytic community that consumers of intelligence products require a static “final say” on a given topic in order to generate policy. This sort of organizational-historical context, coupled with the impression that consumers must have a final verdict, tends to create and reinforce a risk-averse culture.
Once the organizational context for answering any given question is understood, the analyst begins to consider raw data specific to answering the new question. In so doing, the analyst runs the risk of confirmation bias. That is, instead of generating new hypotheses based solely on raw data and then weighing the evidence to confirm or refute those hypotheses, the analyst begins looking for evidence to confirm the existing hypothesis, which came from previous intelligence products or was inferred during interactions with colleagues. The process is reinforced socially as the analyst discusses a new finding with group members and superiors, often the very people who collaborated in producing the previous intelligence products. Similarly, those who review the new product may be the same reviewers who approved the analyst’s previous efforts.
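The contrast between weighing evidence to test a hypothesis and merely seeking confirmation can be sketched numerically. The following is a hypothetical Bayesian caricature of my own; the function, the prior, and the likelihood values are illustrative assumptions, not drawn from the study, chosen only to show how a discarded disconfirming report leaves a strong prior untouched:

```python
# Toy sketch (illustrative, not the study's method): an evidence-weighing
# analyst revises a strong prior downward when a report disconfirms the
# accepted hypothesis; a confirmation-biased analyst discards that report.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after one piece of evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1.0 - prior) * p_evidence_if_false)

# The accepted organizational line supplies a strong prior (assumed value).
prior = 0.8

# A disconfirming report: twice as likely to appear if the hypothesis is false.
weighed = bayes_update(prior, p_evidence_if_true=0.2, p_evidence_if_false=0.4)

# Confirmation bias, caricatured: the disconfirming report is ignored,
# so the prior is never revised.
biased = prior

print(round(weighed, 2), biased)
```

Under these assumed numbers, honest weighing pulls the probability down from 0.8 toward 0.67, while the biased shortcut leaves the accepted answer exactly where the previous products left it.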
This is not to say that the existing intelligence products are necessarily inaccurate. In fact, they are very often accurate. This is merely meant to point out that risk aversion, organizational-historical context, and socialization are all part of the analytic process. One cannot separate the cognitive aspects of intelligence analysis from its cultural context.
Definition 3: Intelligence errors are factual inaccuracies in analysis resulting from poor or missing data; intelligence failure is systemic organizational surprise resulting from incorrect, missing, discarded, or inadequate hypotheses.
During interviews, participants were asked to explain their understanding of the terms intelligence error and intelligence failure. There was little consensus regarding the definitions of error and failure within the Intelligence Community or within the larger interview sample. Here are some sample responses:
I don’t know what they mean.
There are no such things. There’s only policy failure.
You report what you know, and, if you don’t know something, then it isn’t error or failure. It’s just missing information.
Failure is forecasting the wrong thing.
Failure is reporting the wrong thing.
Error is forecasting the wrong thing.
Error is reporting the wrong thing.
A failure is something catastrophic, and an error is just a mistake.
Error is about facts; failure is about surprise.
Error is when nobody notices, and failure is when everybody notices.
Some responses disavowed the existence of intelligence error and failure; some placed the terms in the broader context of policy and decisionmaking; some interchanged the two terms at random; some defined the terms according to their outcomes or consequences. Despite the variability of the responses, two trends emerged: novice analysts tended to worry about being factually inaccurate; senior analysts, managers, and consumers tended to worry about being surprised. Often, participants’ responses were not definitions at all but statements meant to represent familiar historical examples:
The attack on Pearl Harbor.
The Chinese sending combat troops into Korea.
The Tet Offensive.
The Soviet invasion of Afghanistan.
The collapse of the Soviet Union.
The Indian nuclear test.
The danger of defining by example is that each case is contextually unique and can be argued ad infinitum. What is important about these examples as a whole is that they all point to one central and recurring theme: surprise. In some cases this was intelligence surprise; in others, military, civil, and political surprise. Even if the Intelligence Community itself was not surprised by one of these events, it was unable to convince the military, civil, and political consumers of intelligence that the event might occur; in that case, the failure was one of communication and persuasion.
When I began this study, my own definitions of error and failure derived from the psychological and cognitive disciplines. Specifically, I took it that human error and failure are related to measures of cognitive and psychomotor accuracy, with commission of an incorrect action at one end of the accuracy scale and omission, or failure to perform the correct action, at the other.
During the interviews for this study, I soon found that the psychological definition was insufficient. The psychological definition took into account the cognitive and psychomotor components of task-structure, time-to-task, and accuracy-of-task as measures of errors and error rates, but it did not fully take into account the notion of surprise. Surprise is the occurrence of something unexpected or unanticipated. It is not precisely commission or omission; it indicates, rather, the absence of contravening cognitive processes. Measures of accuracy may account for factual errors in the intelligence domain, but measures of accuracy are insufficient to account for surprise events and intelligence failure.
To put this in context, an analyst, while accounting successfully for an adversary’s capability, may misjudge that adversary’s intention, not because of what is cognitively available, but because of what is cognitively absent. The failure to determine an adversary’s intention may simply be the result of missing information or, just as likely, it may be the result of missing hypotheses or mental models about an adversary’s potential behavior.
 Michael Warner, “Wanted: A Definition of ‘Intelligence’,” Studies in Intelligence 46, no. 3 (2002): 15–22.
 US Joint Forces Command, Department of Defense Dictionary of Military and Associated Terms.
 The appendix lists literature devoted to each of these areas.
 In research, triangulation refers to the application of a combination of two or more theories, data sources, methods, or investigators to develop a single construct in a study of a single phenomenon.
 Intelligence analyst’s comment during an ethnographic interview. Such quotes are indented and italicized in this way throughout the text and will not be further identified; quotes attributable to others will be identified as such.
 See Appendix A for a list of literature on error.
 Sociological definitions are more akin to the definitions proposed in this study. Failure can occur through system complexity and missing data as well as through the accumulation of error. See Charles Perrow, Normal Accidents: Living with High-Risk Technologies. I would like to thank Dr. Perrow for his assistance with this work.