This study included 489 interviews with intelligence professionals, academics, and researchers throughout the Intelligence Community. It also involved participation in intelligence training programs, workshops, and focus groups; direct observation of intelligence analysts performing their duties; and participant observation in a variety of analytic tasks. My access was not restricted to specific people, locations, or organizations. I was allowed to observe, interview, and participate in whatever manner I thought would be most beneficial to the research project.
Unlike other academic studies of the intelligence discipline (case studies or topic-specific postmortems, for example), this study was process oriented. It also differed from the work of Sherman Kent, Richards Heuer, and other intelligence professionals concerned with the process of intelligence analysis. Rather than having an intelligence professional looking out to the social and behavioral sciences, this study had a social scientist looking in at the intelligence profession. Although some of the conclusions of this work may be similar to previous studies, the change in perspective has also led to some different findings.
It is important to keep in mind that cultural anthropology is a qualitative discipline and that, in general, its findings are descriptive and explanatory rather than inferential or predictive. The use of ethnographic methods to describe a culture, the environment in which that culture operates, and the work processes that culture has adopted is designed to generate testable theory that can be investigated experimentally or quasi-experimentally using other research methodologies. Additionally, ethnography is used to identify and describe the influence of different variables on cultural phenomena, again with a focus on developing testable theory. Unlike more quantitative disciplines, cultural anthropology is not traditionally employed experimentally to test theory or to generate predictive measures of statistical significance.
The findings in this work describe the data collected during this study, but they do not indicate the weight or general statistical effect of any one variable as opposed to any other variable. Although a single variable might have more effect on the error or failure rate of intelligence analysis, further quantitative research will be needed to determine those statistical values. Without additional quantitative support, it may not be possible to generalize from these findings.
This study used an applied anthropological methodology for the collection and analysis of qualitative data. A traditional approach to ethnography, the descriptive documentation of living cultures, was modified for use in post-industrial organizational settings. This method included conducting interviews, directly observing analysts performing their jobs, participating in analytic tasks and training, and conducting focus groups. The settings for this research included the 14 members of the Intelligence Community, related government agencies, universities, think tanks, national laboratories, the National Archives and related presidential libraries, and private and corporate locations.
The background data were collected using a Q-sort literature review method, which is discussed in more detail in Chapters Three and Eleven. This procedure was followed by semi-structured interviews, direct observation, participant observation, and focus groups. The Q-sort method was employed specifically because of its utility for developing taxonomic categories.
The identity of the research participants will not be revealed. Participant responses and observational data gathered during the research process have been tabulated and either anonymized or aggregated according to context and content and, thus, are not attributable to any specific individual. This is not simply the result of security procedures within the Intelligence Community; it is also the professional obligation of every member of the American Anthropological Association, as stated in the American Anthropological Association Code of Ethics.
The interview technique employed in this study was semi-structured. Several specific questions about the participant’s perception of the nature of intelligence, the analytic process, the intelligence production cycle, and intelligence errors and failures were standard throughout the interviews. Other questions, specific to the individual’s job responsibilities, were tailored to each respondent. This method allowed for a more open-ended approach, which surveys and highly structured interviews do not. The semi-structured method is more akin to an open conversation (with consistent data collection constructs and probing questions) than to a formal interview, which helps put the respondents at ease and makes the entire process seem somewhat less contrived.
Access to interview participants was made possible through the Center for the Study of Intelligence. Individuals at CSI introduced me to their contacts throughout the Intelligence Community, including active and retired senior analysts, managers, and senior leadership, as well as to academics and researchers. The various intelligence-training centers put me in touch with new hires and novice analysts. Each interviewee was asked to make recommendations and provide contact information for others who might be interested in participating in this research project. In addition, numerous interviewees were approached without a formal or informal introduction from a previous participant. Only four of the individuals contacted to date have declined to participate in this study, a participation rate of greater than 99 percent, which is unusually high for this type of research. Although a participation rate this high may be an artifact of the sampling method or of an organizational pressure to participate, it also may indicate a general desire within the Intelligence Community to support performance improvement research.
Unlike random sampling, purposive sampling is an attempt to collect data from specific data sources. In anthropological studies, purposive sampling is regularly used to address specific issues and to answer specific questions. Normally, this approach requires finding a “key informant” or someone on the inside of a specific culture who will become the researcher’s ally and access agent. In this particular study, the CSI staff acted as access agents to the Intelligence Community at large.
Relying on such a “social network” sampling method for collecting interview data does pose potential statistical biases. The likelihood that each new interviewee was referred to me because of a friendly relationship with a previous interviewee may mean that those references are “like minded” and not necessarily representative of the population of intelligence professionals. In order to counteract that bias, efforts were also made to enlist individuals without any social network-based introduction. The “cold” contacts were informed of the nature of the research project, its sponsorship, and its goals, given reference information for verification, and then invited to participate. The “cold” contact interviewees were also asked to make recommendations and provide contact information for others who might be interested in participating in the study.
This strategy was used in an attempt to reduce the effects of sampling bias by generating parallel social-network samples. The figure below is a visual representation of a parallel social-network sampling model. The central, or first-order, node on the left is a “cold” contact or unknown individual who recommends several second-order contacts, each represented as a node within the left box. The second-order “cold” contacts then make additional recommendations for third-order contacts, and so on. The central (first-order) node on the right is a “hot” contact or a known individual who recommends several second-order contacts, each represented as a node within the right box. The second-order “hot” contacts then make recommendations for third-order contacts, and so on.
In many instances, the contacts from both social network samples overlapped or converged on specific individuals, as represented by the overlapped fourth-order nodes in the central column. There are several possible explanations for this convergence. It may indicate that there are a number of respected “thought leaders” in the Intelligence Community whom each contact believed I should interview for this project, or the convergence of nodes might merely emphasize the small size of the Intelligence Community. In any case, this approach to sampling may help to ameliorate the sampling bias inherent in qualitative research.
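The mechanics of the parallel-chain referral process described above can be sketched computationally. The following is a minimal, illustrative simulation only: the population size, the number of referrals per contact, and all names (`analyst_0`, `snowball_sample`, and so on) are invented for the example, not drawn from the study. It shows how two independent referral chains, one seeded "cold" and one seeded "hot," can be compared for the kind of convergence the figure depicts.

```python
import random

def snowball_sample(seed, recommend, max_order=4):
    """Collect a snowball (social-network) sample from one seed contact.

    `recommend` maps a contact to the further contacts that person suggests;
    each wave of referrals is one "order". Returns every contact reached
    within `max_order` orders, counting the seed as first order.
    """
    sampled = {seed}
    frontier = [seed]
    for _ in range(max_order - 1):
        next_frontier = []
        for person in frontier:
            for referral in recommend(person):
                if referral not in sampled:
                    sampled.add(referral)
                    next_frontier.append(referral)
        frontier = next_frontier
    return sampled

# Toy population: each person refers three colleagues at random.
random.seed(0)
population = [f"analyst_{i}" for i in range(60)]
contacts = {p: random.sample(population, 3) for p in population}

cold_chain = snowball_sample("analyst_0", lambda p: contacts[p])    # "cold" seed
hot_chain = snowball_sample("analyst_59", lambda p: contacts[p])    # "hot" seed

# Convergence: individuals reached independently by both parallel chains.
overlap = cold_chain & hot_chain
print(len(cold_chain), len(hot_chain), len(overlap))
```

In a small, densely connected population, the two chains quickly converge on the same individuals, which is consistent with the interpretation that the Intelligence Community's small size drives the observed node overlap.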
In addition to semi-structured interviews, both direct and participant observation data collection methods were employed. The direct observation method involved watching Intelligence Community analysts perform their tasks in both actual and training environments, recording the physical and verbal interactions they had with one another, and observing the steps used to create intelligence products. Over the course of two years, I directly observed 325 individual analysts and teams of analysts performing their specific tasks. The data collected from these observations were not included in the semi-structured interview data because those interactions did not follow the formal semi-structured interview process. These observational data were recorded separately in field notes and used for triangulating the findings from the interviews.
The participant observation method is employed to give the researcher a “first-person” understanding of the context and nuances associated with a task and the culture in which that task occurs. Although the researcher possesses only an approximation of the knowledge and understanding of the actual practitioners of the task and their culture, this “first-person” perspective can lead the researcher to new insights and new hypotheses.
During this study, the participant observation was conducted during analytic production cycles, scenario development, and red cell exercises. This included monitoring my own analytic strategies, the analytic strategies of others as diagramed or verbalized, the physical and verbal social interactions among the participants, the environment in which the tasks occurred, and the steps used to create a final intelligence product. These data, along with notes on social dynamics, taboos, and social power, were recorded in field notes and created a separate data source for triangulation.
In modern anthropology, these data normally would be captured on film, on audiotape, or in some digital format. Because of the security requirements of the Intelligence Community, however, the data were captured only in the written form of field notes. As with the interview data, the identities of participants recorded in the field notes will not be disclosed. This is in keeping with both the security practices of the Intelligence Community and the professional standards described in the American Anthropological Association Statement on the Confidentiality of Field Notes.
The data from the interviews were analyzed using a method called interpretational analysis. This approach involved segmenting the interview data into analytic units (or units of meaning), coding those units into content areas, and grouping them into developed categories. From these categories, general trends and specific instances can be identified. As noted, the direct and participant observational data were analyzed separately in order to triangulate the findings from the interview data. The purpose of using multiple data sources for triangulation is to uncover internal inconsistencies in the data, to cross-check those inconsistencies with the available literature, and to verify the content validity of each category.
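The segment-code-group sequence above can be sketched in a few lines. This is a toy illustration only: the sample analytic units and category labels below are invented for the example and are not drawn from the study's actual data, and in practice the coding itself is done by the researcher, not by software.

```python
from collections import Counter

# Hypothetical coded segments: (analytic unit, assigned content category).
# In interpretational analysis, units of meaning from transcripts are
# coded by hand; here the codes are simply given as example data.
coded_units = [
    ("Time pressure forces shortcuts", "production constraints"),
    ("We rarely see feedback on published work", "feedback"),
    ("Deadlines drive the product, not the question", "production constraints"),
    ("Training stops after the first year", "training"),
]

# Group analytic units by category, then count the categories to surface
# general trends across the coded corpus.
by_category = {}
for unit, category in coded_units:
    by_category.setdefault(category, []).append(unit)

trend_counts = Counter(category for _, category in coded_units)
print(trend_counts.most_common(1))  # → [('production constraints', 2)]
```

Tallies like `trend_counts` identify general trends, while the grouped units in `by_category` preserve the specific instances behind each trend.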
As of this writing, 489 semi-structured interviews have been conducted with active and retired intelligence professionals, intelligence technology researchers, academics who teach the intelligence discipline or have published in it, and consumers of intelligence products. Of the 489 individuals interviewed, 70 percent were newly hired, active, or retired intelligence professionals; 15 percent were academics; 11 percent were intelligence technology researchers; and the remaining 4 percent were policymakers or senior consumers of intelligence products. The graph here shows the distribution of interviews by percentage for each professional category.
The table below lists each professional category and the corresponding total number (N) of individuals interviewed. The intelligence professional category is further divided into three sub-groups. The “novice” sub-group includes new hires and those with less than two years of experience. The “active” sub-group includes all those currently working in the Intelligence Community with more than two years of experience. The “retired” sub-group includes those who have spent more than fifteen years in the intelligence profession and have since gone on to either full retirement or other organizations outside of the Intelligence Community.
Of the 345 intelligence professionals interviewed, 20 percent were novices, 65 percent were active, and 15 percent were retired. The active and retired sub-groups include senior managers.
Interview Categories and Numbers
In order to ensure anonymity for the participants, I have created broader job-related functional categories and associated the number of individuals interviewed with these broader categories rather than linking them to specific organizations within the Intelligence Community. This is in contrast to aggregating the agencies according to each agency’s specific mission, process, or product. Although not an official member of the Intelligence Community, the Drug Enforcement Administration is included because of its intelligence function and resources. The table below shows how I aggregated the agencies into National-Technical, Defense, and Law Enforcement-Homeland Security categories according to the professional functions of interview participants.
Agency Aggregation According to Interviewee Job-type
| National-Technical | Defense | Law Enforcement-Homeland Security |
| --- | --- | --- |
| Central Intelligence Agency | Defense Intelligence Agency | Department of Homeland Security |
| National Security Agency | Army Intelligence | Federal Bureau of Investigation |
| National Reconnaissance Office | Air Force Intelligence | Department of Energy |
| National Geospatial-Intelligence Agency | Navy Intelligence | Department of Treasury |
| Department of State (INR) | Marine Corps Intelligence | Drug Enforcement Administration |
The figure below shows the distribution of intelligence professionals interviewed for this study according to each broader functional category. Of the 345 intelligence professionals interviewed, 214 work within the National-Technical Intelligence category, 76 in the Defense Intelligence category, and 55 in the Law Enforcement-Homeland Security category.
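The counts above convert to the percentage shares a figure of this kind would display. The following sketch simply performs that arithmetic on the reported counts; note that rounding each share to one decimal place makes the shares sum to 99.9 rather than 100.

```python
# Reported interview counts by functional category (from the text above).
counts = {
    "National-Technical": 214,
    "Defense": 76,
    "Law Enforcement-Homeland Security": 55,
}

total = sum(counts.values())  # 345 intelligence professionals
shares = {name: round(100 * n / total, 1) for name, n in counts.items()}
print(total, shares)
# → 345 {'National-Technical': 62.0, 'Defense': 22.0,
#        'Law Enforcement-Homeland Security': 15.9}
```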
 Sherman Kent, Strategic Intelligence for American World Policy; Richards J. Heuer, Jr., Psychology of Intelligence Analysis.
 Erve Chambers, Applied Anthropology: A Practical Guide; Alexander Ervin, Applied Anthropology: Tools and Perspectives for Contemporary Practice.
 Russell Bernard, Research Methods in Anthropology: Qualitative and Quantitative Approaches; Robert Bogdan, Participant Observation in Organizational Settings; Norman Denzin and Yvonna Lincoln, Handbook of Qualitative Research; Jean Schensul and Margaret LeCompte, Ethnographer’s Toolkit. Vol. I - Vol. VII; James Spradley, Participant Observation; Robert Yin, Case Study Research: Design and Methods.
 William Stephenson, The Study of Behavior: Q-Technique and its Methodology.
 American Anthropological Association, Code of Ethics of the American Anthropological Association.
 Social network sampling is also known as “snowball” sampling in sociology and psychology.
 American Anthropological Association, Statement on the Confidentiality of Field Notes.
 Leonard Bickman and Debra Rog, Handbook of Applied Social Research Methods; Meredith Gall et al., Educational Research; Jonathan Gross, Measuring Culture: A Paradigm for the Analysis of Social Organization; Ernest House, Evaluating with Validity; Jerome Kirk and Marc Miller, Reliability and Validity in Qualitative Research, Qualitative Research Methods, Volume 1; Delbert Miller, Handbook of Research Design and Social Measurement; Michael Patton, Qualitative Evaluation and Research Methods; Peter Rossi and Howard Freeman, Evaluation. A Systematic Approach.
 Additional interviews are being conducted.
 The use of two years as a divide between novice and active is derived from the total amount of experience it is possible to gain in that time. See the discussion of expertise in Chapter Five.