
 

Chapter Three: An Inventory of Analytic Pathologies

Although many of the reform proposals responding to the recent intelligence failures took diametrically opposed positions with respect to the role of the DCI, the creation of the DNI, and the functions of the CIA within a “transformed” Intelligence Community, most were quite similar in focusing on wiring diagrams and formal authorities as the mechanisms for reform and on creating a “czar” to execute those authorities through centralized management.[1] As is often the case, reform legislation took a structural route to address problems that flow from dysfunctional processes, inbred cultural practices and habits, and failures of human leadership.

Although the legislative changes have restructured the community, the IRTPA contains little significant language about reforming the internal functional processes of intelligence—collection and analysis—beyond emphasizing “competitive analysis” and more HUMINT. The legislation also failed to take into account several key distinctions that any intelligence reform proposal must recognize: differences in issue domains; tactical vs. strategic consequences; narrowly focused vs. synoptic collection and analysis; reporting vs. deep analysis; immediate vs. long-term decision horizons; foveal vs. peripheral perception; action vs. exploitation; and defined, tasked requirements vs. speculative information gathering.

Some recommendations, in fact, could aggravate the already dysfunctional conditions, further damaging analytic capabilities. The IRTPA directed the establishment of centers for priority issues, as recommended by the 9/11 Report, but these are likely to place even more emphasis on current intelligence, whether or not that was intended. Integrated centers combining operators and analysts do foster better sharing and collaboration between them, but they invariably push analysts toward operational support. Moreover, creating static structures and rigid processes designed to deal with fixed areas or issues is not a sound response to events and conditions that are fluid and fuzzy; such situations are better handled within a process that allows flexibility and discretion. Finally, with an already severe shortage of effective managers and experienced intelligence officers (including analysts, operators, and collectors), establishing more centers with mandated institutional positions and priorities can only further dilute an already over-stressed cadre of intelligence professionals.

 


Curing analytic shortcomings cannot be done by making minor modifications to the existing processes or even by wholesale replacement or upgrading of the analytic cadre.[2] Although it may be tempting to focus on better educating and training the analysts—the better for them to be able to “connect the dots”—altering the analytic model and the processes on which that model relies is almost certainly a more appropriate response to the profound problems besetting intelligence analysis. Without fixing the fundamental shortcomings in analytic processes, the system will always depend on the ability of individuals to work around the impediments, which adds further stress to an already burdensome set of tasks.

At the same time, it should be acknowledged that there are limits on the depth and quality of the expertise available within the community. The past decade has shown that the Intelligence Community is likely to find itself behind the academic and commercial sectors in many frontier areas simply because, for the most part, that is where these frontiers are first explored. Furthermore, given the unpredictable nature of emerging challenges and the often short response time they allow, the community is unlikely to have on hand sufficient regional, cultural, linguistic, technical, or specialist expertise to meet high-priority threats to our national interests. Therefore, unless community management abandons the goal of holding all needed expertise within the community—opening its boundaries, removing the barriers, and creating mechanisms to draw on external expertise and knowledge—these impediments to exploiting outside expertise will frustrate its ability to meet emerging mission challenges.

Interestingly, even if the old analytic methods and processes developed during the Cold War were themselves capable of addressing the emerging security challenges of post-Soviet adversaries, a review of postmortems of intelligence failures prior to 9/11 and the Iraqi WMD fiasco showed that these processes and procedures were routinely violated, frequently leading to failures.[3] In addition to shortcomings in the basic analytic paradigm and processes, according to that review, the system lacked enforceable self-correcting features and functioning compliance mechanisms. This suggests a more fundamental failure in leadership and in the basic institutional and management mechanisms for assuring effective oversight, assessment, accountability, and responsibility. That these failures persisted for so long should concern members of the analytic community, their leadership and management, and the oversight bodies as well. It is a warning sign that better oversight and enforceable compliance procedures are needed, because the analytic community may no longer possess the internal self-discipline or professional standards to police itself.[4] This also clearly implies that greater attention to professional ethos and standards must be an integral part of efforts to transform analysis.

Indeed, well before 9/11, several articles written by experienced community officials pinpointed fundamental shortcomings within the community’s analytical capabilities and highlighted dysfunctional processes as the causes.[5] Thus, the longstanding failure modes within the existing intelligence analysis paradigm must be identified and corrected, along with the management and oversight procedures, if the community is to meet new needs.[6]

Two elements of the current paradigm are especially worthy of attention. First is the inefficiency of the “account” structure. The account system, by its very nature, creates institutional and individual “ownership” of important intelligence domains. The benefit is that it provides a basis for accountability; the disadvantage is that ownership inhibits sharing, cooperation, and collaboration. It has also encouraged “stovepiping” by collection discipline and the control of information by collectors through originator-control (ORCON) restrictions. A fundamental redesign of analysis should start by dismantling both the notion of information ownership and the paradigm of “accounts,” while maintaining accountability for performance; information must become the common property of the community, and someone with authority must oversee sharing and configuration management.[7] Such changes are likely to involve altering the existing institutional mechanisms in order to forge interagency virtual “clusters” to perform the analytic functions while creating “mission managers” with the authority to assure that important subjects and user needs are properly serviced.

The second element is the Intelligence Community’s strong cultural orientation towards an “evidence-based scientism.” Although this approach may be appropriate for the current intelligence functions that rely heavily on gisting and reporting and that dominate both SPO and SMO, it clearly limits the ability of analysts to address the anticipatory intelligence needs of decisionmakers, which usually demand more reliance on judgments and inference chains and less on specific evidence. In return for more focus on the WEI role, decisionmakers who rely upon anticipatory intelligence must recognize its inherent properties and limitations and be prepared to accept greater uncertainty in assessments and estimates in order to obtain better gestalt understanding.[8]

We need to understand that “warning” is largely built on modeling (either explicit or implicit) and synthesis, which are deductive processes, and not on analysis, which is an inductive process. In all cases, analysis processes should also access more non-traditional sources and incorporate a wider range of information to construct corroborative fabrics that can confirm or disconfirm critical information and hypotheses. The rigor and strength of these methods must rely less on narrow tests of the quality of the evidence or adherence to formalisms than on analysts being sufficiently “mindful” to recognize the pitfalls they may encounter in blind application of approved methodologies.[9] In addition, there is a huge research literature on decisionmaking under uncertainty that could be exploited to introduce innovative techniques for analysis and decision support. What all this comes down to, however, is not to disregard the need for rigor of method and quality of evidence, but, rather, to suggest that a construct of analysis too narrowly tied to a misunderstood “scientific method” needs to be augmented and leavened with intuition, curiosity, and a thirst for discovery—all essential elements of good science.[10]

Beyond these two fundamental changes, eight other problematic features of the current intelligence environment need to be addressed.


1. The Tyranny of Current Intelligence

Over the past decade, the Intelligence Community’s efforts to be responsive to its customers’ demands for current intelligence have dominated collection and analysis. As a senior analyst noted, “The Intelligence Community really [is] focused on current intelligence, on policy support. It does very little research. It has very little understanding below the level of the policymaker and, in my view, on many issues. I think that, in some ways, these two groups are reinforcing each other’s worst habits.”[11] The disappearance of the long-term Soviet threat and the chaos of the post-bipolar geostrategic environment shifted policymaker interests to a series of crises du jour. Policymakers demanded that intelligence for their current problems receive priority as the US and Allied interventions in Somalia, Haiti, Bosnia, Kosovo, and elsewhere proceeded. At the same time, the Pentagon’s increasing demands for support in ongoing military operations, including such prolonged activities as Northern and Southern Watch in Iraq, placed tremendous demands on the intelligence system for a continuing stream of timely support products.[12]

Prior to the 1991 Gulf War, no commanders of regional or functional military forces—other than those for strategic forces—could expect to receive direct and timely support from national assets, especially to support tactical operations.[13] That war, and the glimpse it provided of the power of modern information systems to overcome shortcomings in C4ISR (command, control, communications, computers, intelligence, surveillance, and reconnaissance), fueled an intense push to obtain “information superiority.” This, in turn, created nearly insatiable appetites among military commanders for real-time intelligence support and gave birth to the concept of “net-centric warfare” (NCW).[14] Many of the new C4ISR systems (including national systems), and much of the effort and funds expended by the Intelligence Community since the Gulf War, have focused on providing direct, real-time support to forces engaged in combat by closing the “sensor-to-shooter” loop and on meeting the information needs of the senior-level commanders directing those operations. When American forces are deployed in active military operations, as they have been on a near-continual basis since the end of the Cold War, the highest priority is now accorded to providing intelligence to support them.


2. Overemphasis on Production

At the same time as demands to support military operations generated huge requirements across the Intelligence Community, standing collection and analysis requirements to fill databases and to produce routine scheduled products set priorities for a substantial part of the intelligence “phase-space.” The entire intelligence system is dominated by the demands of processing huge amounts of information gathered by collection systems whose architecture was largely designed during the Cold War to address a very different problem. This huge inflow created a production-oriented model and an “efficiency paradigm” better suited to the “Industrial Age” than to the Information Age of the twenty-first century.[15] The volume of collected intelligence is so vast that, even with automated assistance, human analysts can effectively review and evaluate only a small part of the flow.

The existing paradigm for intelligence analysis and dissemination still relies largely on published paper reports as the mechanism for delivering products to users. Furthermore, without effective metrics to assess the value of intelligence to decisionmakers as well as the impact of analysis on the quality of their decisions, it is simple measures of data collected, traffic processed, and reports produced that have influenced critical decisions on priorities and resource allocation within the Intelligence Community. Meeting the new challenges will require a greater emphasis on adaptability and agility, as well as processes that are better able to respond to non-routine requests and high-priority challenges. The price to be paid for this, however, may be a reduction in efficiency as measured by the usual output-oriented metrics of mass-produced products normally used in cost-benefit calculations of routinized processes. Such calculations underweight the ability to meet unexpected situations and the “slack” that is usually essential in an adaptable organization.


3. Over-Reliance on Previous Judgments

The problem of “finished intelligence” stems from the conceit that any intelligence product is more than a snapshot of knowledge believed to be true at that time and that “finished intelligence” is, by virtue of the formal coordination and review processes, “truer” than the pieces of raw intelligence from which it was built. The roots of this conceit date back to the period when the Intelligence Community was seen to possess domain expertise found neither in its user communities nor outside government, and its assessments could be considered authoritative. During that period, the Sherman Kent posture of standing apart from policy users could be considered an appropriate style. Fully coordinated “finished intelligence” products, such as NIEs, do convey authoritativeness as the current, agreed judgment of the Intelligence Community. These products also carry a sense that they and their conclusions can stand the test of time. These latter assumptions, however, are not necessarily warranted: because most “finished” intelligence products involve substantial interpretation, analysis, synthesis, and judgment, they are more likely to have lengthy inference chains that compound the uncertainties already present in more fragmentary reporting.

The validity of the earlier judgments expressed in finished products is especially important because of the common practice of “layering,” that is, using previous, formally coordinated products as the starting point for new assessments and estimates.[16] Although this practice helps to assure the consistency of an analytic line, building on established judgments or on prior positions of an analyst, branch, group, office, or agency is also a reason to be wary. As a former senior intelligence official noted,

Judgments which any analytical actor has the greatest incentive to defend must be subjected to the most critical scrutiny.[17]

The danger arises because, unlike academic practice, there is no sustained or sanctioned process of after-the-fact (“ex post”) review. Finished products are rarely subjected to a considered re-examination, nor do they receive explicit testing by other parties trying to replicate their findings—unless there is an obvious major intelligence failure. Moreover, errors that are recognized after publication are often not corrected, either by timely notification to readers or by corrections fully incorporated throughout the knowledge base.[18] Other knowledge-intensive enterprises, such as law, medicine, and science, also depend on cumulative foundational knowledge, but they do far better at maintaining the accuracy and currency of these critical intellectual resources than does the Intelligence Community.

Even a cursory reading of the report on Iraqi WMD highlights the mutually reinforcing dangers of the “finished intelligence” conceit and of “layering.” Too often, the presumed authoritativeness of a formal product leads users to accept its judgments as established and its underlying evidence as validated. These products then become the baseline for updated assessments, and, as a result, the cumulative impact of errors is amplified and becomes pervasive.[19] In fact, the conceit is even more damaging to the effectiveness of the Intelligence Community in an era when policymakers are increasingly likely to be their own “senior analysts” and may not believe that “finished intelligence” represents the final word on a topic. Today, these officials bring other information and their own expertise to the task of arriving at a comprehensive judgment, and intelligence material is only one input into that process.[20] Yet the community too often treats probing questions as attempts to “shape” (that is, “politicize”) analyses rather than genuine inquiries into the quality of evidence and the strength of inference chains.


4. The Neglect of Research

This emphasis on current intelligence, with its consequent time pressures and the methods needed to meet production demands, has produced a range of distorting effects that are not fully recognized.[21] First, it has severely undercut the ability of analysts to do in-depth research by requiring that most analytic effort be devoted to short-term taskings. Second, by denying most analysts the opportunity to work on deep products under the tutelage of a senior mentor, it has damaged a key element of the indispensable apprenticeship process, a sine qua non for training and developing professionally competent intelligence practitioners. Third, in order to meet daily requirements, the production aspects of the current intelligence cycle have been accorded undue priority.[22] Fourth, the emphasis on current intelligence has helped create an incentive and reward system for analysts that is biased toward short-term reporting rather than deep analysis. And, fifth, without explicit management support, the time pressures on, and implicit incentives for, analysts to focus on current production have made the pursuit of curiosity difficult. In contrast, within the scientific community, “investigator-initiated” research is a primary contributor to discovery and innovation, as well as a powerful factor in validating the research of others. The Intelligence Community needs just such a mechanism.

Over the past decade, the Intelligence Community has made several attempts to redirect attention to long-term research and products that reflect such efforts, but, without a sustained base of interested customers in the senior policy community, these have not succeeded.[23] Analysts have seen the emphasis on the President’s Daily Brief (PDB) and similar serial products as a clear indication of where interest (and therefore success) is to be found.[24] Although some suggest that current intelligence can provide the basis for deep understanding, it seems obvious that attempting to achieve this by compiling current reporting is not a satisfactory method for producing integrated, synoptic analyses that are set in full context. A better route to meeting the competing demands of producing both current intelligence and deep analytical products would be to exploit the expertise and domain understanding of the experienced analysts doing in-depth research to identify, select, extract, and put into context the important tidbits from the reporting stream—and to use those as a basis for more in-depth and sustained exchanges with users of intelligence.


5. The Neglect of Anticipatory Intelligence

The “information revolution” has unsettled intelligence officials by providing policymakers with alternative sources for both coverage of developments and in-depth research on issues of interest to them, as well as by allowing them to become their own intelligence analysts, if they so choose. These trends have left Intelligence Community managers struggling to emulate the success of television as the provider of real-time news and have prompted the worrisome notion that customers might now see intelligence as less relevant. But the community must go beyond the current interest areas of its customers if it is to perform its primary national function of preventing surprise. In the new and very dynamic geostrategic environment, perhaps the most important element of warning is the anticipatory function against the unexpected—discovering new activities that might prove inimical to US national interests and obtaining information about them. Leads to issues of this kind are not likely to come from customer requests; as noted, in science, “investigator-initiated” research is a particularly important driver of discovery and innovation.

[Graphic: Key Processes & Outputs]

Correcting this problem demands that the leadership of the Intelligence Community be prepared to spend real resources on issues that are not immediately “customer-relevant,” even when confronted by what seem to be limitless demands to focus on high-priority subjects. Waiting on the policy community to furnish requirements for anticipatory intelligence all but assures that there will be serious warning failures—and the community will shoulder the blame for the policymakers’ “surprise.” Faced with its concerns about diminishing relevance, the Intelligence Community should grasp the anticipatory role. At the same time, the dichotomy between current situational awareness based on a stream of detailed reporting and anticipatory or predictive intelligence is more apparent against a new enemy who relies on “stilettos and stealth” than it was against the Soviet Union, which amassed large stocks of advanced weapons along a visible and patterned development path that took well over a decade and then exercised those capabilities extensively, allowing us to collect hard evidence on them. For a community that frequently restates the mantra that it has an “evidence-based” culture, this divergence between reduced collection capabilities against low-signature targets and increasing demands for anticipatory judgments resting on long inference chains will create an uncomfortable problem in producing these assessments.


6. The Loss of “Keystone Species” and “Intellectual Middleware”

The relatively recent ecological concept of a “keystone species” denotes an organism that plays a central role within an environment, either as a resource or as a control mechanism. Within the Intelligence Community, perhaps the most important “keystone species” is the “journeyman” analyst.[25] These experienced analysts (say, those with seven or more years of analytic experience) are the functional equivalent of doctoral students, post-doctoral fellows, and assistant professors in academia and of the house staff (interns and residents) in medical education, both of which are also guild/craft systems. Journeymen perform the bulk of the work of producing products, making incremental improvements in process, teaching the apprentices, and disseminating knowledge and skills as they move to new communities. The journeymen carry institutional memory, transmit knowledge to junior analysts, and inculcate in them such vital professional values as intellectual curiosity, humility, and an ethos of continual learning. In addition, they form the core of a high-trust social network, built on longstanding prior contacts, that is essential to the diffusion of knowledge within the Intelligence Community and the national security community as a whole.

Recognizing the journeymen’s role as a “keystone species” helps to explain the severe disruptions caused by the disproportionate drawdown in their numbers triggered by the budget cutbacks of the early 1990s. To a considerable extent, this cadre, with its deep professional expertise, was sacrificed because of the indifference of the policy community; nonetheless, the senior management of the Intelligence Community allowed it to happen. Moreover, increasing pressures for “broadening” rotational assignments and “up-or-out” promotion policies make it more difficult to retain the remaining cadre of deeply skilled experts as working analysts.

The loss of journeymen also underscores the crucial importance for the analytic profession of soft cultural factors and people-related processes, such as mentoring. As mentors disappeared, the important socialization and professionalization processes that are essential to train, guide, and acculturate the current flood of apprentice analysts inadvertently went with them. A further difficulty is that, absent either emphasis on or demand for research products, the departure of experienced analysts and the focus on current intelligence and reporting make it difficult to train and mature newcomers in creating such deep products.

As a consequence, the community also lost the “intellectual middleware”[26] that was a central element of its knowledge base and of what one former senior intelligence officer terms its “ecology of expertise.”[27] Intellectual middleware is the profound complementary understanding of both domain and process that is gained from long experience. Along with the knowledge base built up over years, middleware is the necessary link between current intelligence and deep understanding of a domain; it provides the essential capabilities for either considered in-depth judgments or meaningful quick-response products. As a conference participant with extensive analytic experience noted, “Dick Kerr did a study…of all the analysis that was written on Iraq [by the CIA]…. He said…it was very good, there was a lot of detail, there was a lot of information, but he came away from all that analysis having no real sense of what Iraq, the country, was all about.”[28] Of course, working on research projects contributes to the production of middleware, which, in turn, serves as a resource for all other products and advice; this is something that working on a succession of current and limited studies can never do.

The Intelligence Community once had a fairly standard method for introducing analysts to the process of research, which was to have them draft a new edition of a lengthy product known as a National Intelligence Survey (NIS). Working on a project such as this created in-depth analytic expertise and domain knowledge on a country or area. Above all, the knowledge gained by producing an NIS would enable an analyst to provide policymakers with the essential context for current intelligence, to render informed judgments, and to produce timely answers to important questions. Unfortunately, the importance of the NIS process was not recognized within the community, and the practice was abandoned.


7. Failure to Develop Analytic Tools and Methods for Validation

Most of the tools available to support analysis provide help for specific analytic techniques or intelligence disciplines, such as imagery intelligence (IMINT) and signals intelligence (SIGINT), rather than for all-source analysis. Even so, there are formal methodologies taught and employed within the community for all-source analysis, especially for assessing and marshalling evidence, and there are structured methods that support hypothesis assessment.[29] Consistent with its “craft” culture, the Intelligence Community has used practical experience rather than formal validation methods to assess utility and select these tools. Rob Johnston has observed, however, that there is a large number of tools and methods available to the analytic community, but few are consistently employed—and none has been tested or validated to assure its effectiveness and utility.[30] As one seasoned analyst commented, “… imagine a nuclear powered Navy in which all the Reactor Plant Operating Manuals are written by the current ship’s engineer and whenever he got relieved they were all shredded and the new engineer had to write his own.”[31] Of course, as has often been noted by students of intelligence, that an agency or community opinion is wrong does not automatically mean that the process followed in reaching that opinion was flawed.[32] Again, quoting Charles Allen,

We’re not very good at evaluating the quality of intelligence analysis independent of the outcome. We’re outcome oriented, rather than process oriented.

Conversely, when the policy succeeds and a desirable outcome occurs, we feel satisfied with the conduct of intelligence and generally look no further. The cumulative effect of this process is that it undermines the very essence of intelligence analysis.[33]

Within the diverse domain of intelligence analysis, one would expect analytic methods and tools to vary significantly, depending not only on the sources of intelligence available and the subject matter, but also on the epistemology of the question at hand—whether the problem is, to use Fritz Ermarth’s typology, a secret, a mystery, or an obscurity. A “secret” is information that exists but must be acquired—usually by clandestine collection methods—and interpreted. A “mystery” is a question for which a set of possible outcomes may be known but whose particular outcome can be known only after the fact. An “obscurity” involves questions that are often unrecognized and not seen to be relevant until they are posed explicitly—which requires curiosity, a quality often in short supply. Data to illuminate obscurities are often available, even if found only in non-traditional sources that require imagination to identify, provided that procedures to filter signal from noise can be developed and appropriate inference chains can be constructed from the evidence. For example, Murray Feshbach drew accurate inferences about the internal strength of the Soviet system from totally “out-of-domain,” low-level, openly available statistical data on morbidity and mortality published by the Soviets themselves.[34]

The Intelligence Community, however, as a function of its history and culture, has constructed a “hierarchy of privilege” for information that still gives the most weight to secret intelligence. Similarly, many users of intelligence accord greater credence to reports with higher levels of classification. And, of course, the Intelligence Community often perceives its comparative advantage with decisionmakers to lie in the secret information that only it can access and supply. At the same time, unfortunately, the more that evidence and judgments are restricted in dissemination by compartmentation and distribution limitations, the more likely it is that questionable judgments will pass unchallenged. This is an especially serious problem for HUMINT, which, by its nature, provides only a very narrow perspective on events for which direct confirmation may not be available. Yet timely, authoritative, and credible HUMINT is sometimes—however rarely—the only way to obtain information that can determine strategic direction.

 

8. The Hindrances of the Security Mindset

The current security mindset is an additional impediment to cooperation among and within intelligence agencies, between intelligence agencies and policy agencies at all levels of government, and between community analysts and outside expertise resident in both non-government and foreign sources. This mindset is extremely risk-averse with respect to potential information loss, and it fosters procedures that make it difficult to pull together and share files of relevant information, to bring fresh perspectives to bear, and to exploit the synergies of expert collaboration. The current security paradigm views problem domains as discrete and separable and insists that protection of information (and, therefore, of sources and methods) is more important than effective exploitation and cross-fertilization. Two problem areas, in particular, are the traditional barriers between the intelligence and law enforcement functions and between foreign and domestic intelligence. Under the pressure of countering the terrorist threat, however, these distinctions are eroding.


Footnotes:

[1]See, for example, a bill proposed by Rep. Jane Harman, the “Intelligence Transformation Act of 2004” (HR4104): “The goal…should be to enhance the DNI's ability to coordinate and integrate operations, focus the community on priorities, share information better…and so on, but not detract from the support that cabinet secretaries need and expect from their intelligence organization, and not dilute competitive analysis.”

[2]One excuse for the very heavy cuts in mid- and senior-level analytic cadres in the early 1990s was that it was necessary to weed out those whose expertise was presumed to be passé and, therefore, no longer needed.

[3]See Jack Davis, “Improving CIA Analytic Performance: Strategic Warning,” and “Improving CIA Analytic Performance: DI Analytic Priorities.”

[4]Several retired, formerly very senior, CIA officials have made this point about erosion of professional élan—ethos, ethics, and standards—that help keep the analyst from, as General Kent put it, succumbing to the temptations of prostitution or proselytizing (see footnote 43 on page 22). One attribute of a profession, especially a “learned profession,” is that members can work without supervision. This is also one of the characteristics that define journeymen in a craft system.

[5]Carmen Medina, “What To Do When Traditional Models Fail,” Studies in Intelligence 46, no. 3; and Russ Travers, “The Coming Intelligence Failure,” Studies in Intelligence 40, no. 2.

[6]See Berkowitz, 3. That each organization develops tradecraft and evolves practices that may not transfer easily if challenged by new functions complicates the community’s ability to shift its focus and assets to meet new challenges.

[7]Creation of an “information commons” must be done in full awareness of the conundrum of the commons: the conflict between individual interests and the common good in the use of shared resources. Garrett Hardin, “The Tragedy of the Commons,” Science 162 (1968): 1243–48.

[8]It is like the saying, “better to be approximately correct than precisely wrong.” Josh Kerbel, “Thinking Straight: Cognitive Bias in the US Debate About China,” Studies in Intelligence 48, no. 3 (2004).

[9]In this regard, several commentators, among them Sabel, Fishbein, and Treverton, have recognized the potential applicability of practices drawn from “High Reliability Organizations” (HROs). Both this issue and “mindfulness” will be addressed in greater depth in Chapter Four.

[10]This thought owes much to comments by Stephen Marrin in a private communication on 1 November 2004.

[11]Intelligence and Policy: The Evolving Relationship, Center for the Study of Intelligence, June 2004, 7.

[12]These are not new problems. Defense Secretary Donald Rumsfeld, as chairman of the Commission on Ballistic Missile Threats, highlighted the failure of executive and legislative leadership to establish appropriate priorities while besieging the community with ad hoc taskings. See Side Letter to the Rumsfeld Report, 18 March 1999, 2.

[13]Those needs, and the impediments to gaining access to national systems, underwrote the very large military programs known as TIARA (Tactical Intelligence and Related Activities). National systems are also known as National Technical Means, the term used to refer to them in arms control agreements.

[14]In Network Centric Warfare: Developing and Leveraging Information Superiority, David S. Alberts, John J. Garstka, and Frederick P. Stein define NCW as “an information superiority-enabled concept of operations that generates increased combat power by networking sensors, decisionmakers, and shooters to achieve shared awareness, increased speed of command, higher tempo of operations, greater lethality, increased survivability, and a degree of self-synchronization. In essence, NCW translates information superiority into combat power by effectively linking knowledgeable entities in the battlespace.”

[15]Sabel, 52ff.

[16]See Conclusion 4, SSCI Report, 22–23.

[17]Private communication from Fritz Ermarth, 23 January 2005.

[18]See the “Curveball Report,” cited in the SSCI Report, 482 and 492.

[19]See, for example, the SSCI Report, 32–33 and 484.

[20]Jack Davis, “Paul Wolfowitz on Intelligence-Policy Relations,” Studies in Intelligence 39, no. 5 (1996).

[21]See Chapter Four for a more complete discussion.

[22]This is also a problem for newspapers: their daily production cycle sets them apart from magazines, which can afford to pay more attention to sustained investigative journalism.

[23]See Douglas MacEachin, “The Tradecraft of Analysis: Challenge and Change in the CIA,” Studies in Intelligence 46, no. 3 (2002): 23–28. It is also interesting to note that the NIE on Iraqi WMD was requested by the Senate Select Committee on Intelligence and the Senate Armed Services Committee, not by executive branch policymakers.

[24]See Rob Johnston, Analytic Culture in the U.S. Intelligence Community. Indeed, in response to these concerns, the Commission on the Intelligence Capabilities of the U.S. Regarding Weapons of Mass Destruction strongly urged that the PDB not become the centerpiece of the DNI’s analytic focus.

[25]“Journeyman” is not used in a pejorative sense, but, rather, in the traditional guild meaning of the term. In CIA parlance, these are now called “fully qualified” analysts. Military homologues are Navy chief petty officers (“chiefs”) and Marine gunnery sergeants (“gunnies”).

[26]This term from the computer science community denotes software that performs intermediary functions (such as translation and data exchange) between heterogeneous systems, enabling them to function in an integrated, interoperable manner. The connotation of middleware is “glue” that enables not just interoperability but, in this context, also cross-fertilization.

[27]This concept of an “ecology of expertise,” articulated by Fritz Ermarth, is a powerful tool for thinking about the Intelligence Community’s role.

[28]See Charlottesville Conference Report, 6 and Kerr, et al.

[29]See, for example, Hughes and Schum, “Evidence Marshalling and Argument Construction.”

[30]Johnston, 72.

[31]Private communication from John Bodnar.

[32]Put another way, “There are too many targets and too many ways of attacking them for even the best intelligence agencies to discover all threats in time to prevent them from happening.” Kerr, et al.

[33]Allen, 3.

[34]See, inter alia, Russia’s Health and Demographic Crises and Russia’s Demographic and Health Meltdown. Feshbach is currently a senior scholar at the Woodrow Wilson Center and Research Professor Emeritus at Georgetown University.


