
 

Chapter Four: A Program for Transforming Analysis


The following sections lay out some fundamental guidelines, or pathways, for transforming analysis, provide a set of principles upon which to base a restructured analysis paradigm, and suggest six specific areas for attention.[1] To be effective, however, necessary changes need to be implemented within the system, not just identified. Even a cursory look at the fate of previous Intelligence Community reform efforts does not lead to expectations of great success in such an endeavor. For this reason, the path suggested here offers the opportunity to make some early course corrections that can be initiated without formal sanction and with minimal changes to existing structures and authorities.[2] It also offers the possibility of winning support from the professional intelligence cadre, who must be convinced that the measures recommended are both effective and sensible. If they are not convinced, institutional inertia, if not outright resistance, will impede change. For example, groups of analysts could take ownership of these areas proposed for action, thereby legitimating the transformation effort as an organic program instead of one imposed from above.[3]

 

Developing a New Concept for Analysis

Several recent assessments contend that the Intelligence Community’s problems are manifestations of fundamental and systemic problems within each agency’s internal processes, cultures, and organizational structures.[4] Moreover, these shortcomings do not result from a flawed community architecture or from insufficient directive authorities and budgetary control over the community as a whole—although they may be aggravated by them.

The community still has not recognized that the dominant intelligence problems are not penetrating "denied areas," but understanding "denied minds."

There are two fundamental challenges in rebuilding analysis, with a substantial degree of contradiction and inherent tension between them. The first is to avoid being confidently wrong (as in the NIE on Iraqi WMD) by staying closer to available evidence. The second is to provide judgments on complex and often unfamiliar adversaries and on their likely behavior based on fragmentary and frequently ambiguous information (as in the circumstances leading up to the 9/11 attack).[5] The community has still not made the shift to recognizing that the dominant intelligence problems are not penetrating “denied areas,” but rather understanding “denied minds.” This shift requires rethinking not only the types of information that analysts need, but also the nature of the information gathering necessary to provide that information. In terms of the latter, intelligence practitioners will need to accept that the appropriate mechanisms will not be limited to remote clandestine collection systems or case officers.

The larger issue of how to allow a classic hierarchical organization to meet the challenges of a more fluid environment and dynamic networked adversaries must be faced squarely, but it is also an effort well beyond the scope of this study. Recognizing that this factor is important, however, the discussion below does focus on directly addressing the sources of analytic failures, including several problems stemming from a hierarchical organization model that has been both badly applied and misapplied.

 

Principles for a New Paradigm

A new paradigm should start with a set of basic principles to guide the rebuilding process, restore professional standards to the conduct of analysis, and help redress the effects of evolved dysfunctional practices. These principles should be mutually agreed between the leadership of the Intelligence Community and the cadre of professional analysts as the basis for an “analytic compact” that will affect priorities and taskings as well as incentives and rewards for performance. Furthermore, even if users do not fully agree, they also should understand these principles to be the foundation that guides the analytic community in its work. In the long-term, it is clearly in the interest of policymakers to rebuild the community’s expertise and knowledge so that intelligence can provide highly contextualized and meaningful judgments, whether on current or future issues.

The primary purpose of analytic effort is "sensemaking" and understanding, not producing reports ...

These principles ought, above all, to convey that “analysis” needs to be construed broadly and not solely in a narrow, “reductionist” context that seeks to “know” by decomposing a phenomenon into its constituent parts and approaching it analytically only on the basis of induction from detailed evidence. Many complex phenomena may be better comprehended by approaches that are based more on synthesis—that is, understanding the larger picture—by focusing on the relationships among the parts and on the emergent behavior produced by such interactions. These principles—which, of course, are not meant to be exhaustive—will recognize, therefore, that:

 

Philosophy and Values

  • Analysts must have a “duty of curiosity,” and the analytic process must encourage and reward a deep and meaningful understanding of the phenomena under investigation;

  • Analysts must be responsible for defining knowledge needs and, therefore, collection requirements; to do this effectively, they must understand collection capabilities and be sensitive to their limitations;

  • Analysts must be active participants in developing integrated strategies for collection and analysis, seeking information instead of being merely passive recipients;

  • The primary purpose of analytic effort is “sensemaking” and understanding, not producing reports; the objective of analysis is to provide information in a meaningful context, not individual factoids;

  • The knowledge discovered and the expertise created when an analyst researches a problem is at least as important as “finished intelligence” products that may result;

  • Learning is an activity that is valued highly by both analysts and the organization;

  • Not all intelligence need be immediately “actionable;” informing decisionmakers and enhancing the quality of the decision process is a critical objective;

  • Intuition and creative thinking, including positing hypotheses to be tested, are as important to analysis as evidence-based inductive approaches.

 

The Overall Approach

  • There are different analytic problems, and there are diverse approaches and methods for resolving them; in many cases, appropriate and intuitively usable tools may enable their consistent use and enhance the effectiveness of these approaches;

  • There should be no assumed “hierarchy of privilege” of sources or analytic methods; a wider range of methodologies needs to be employed routinely and consistently, not seen as exceptional “alternative” techniques;

  • Analytic tools are intended to support, not supplant, rigorous and structured cogitation by human analysts;

  • There should be better access to, and exploitation of, open-source information as an element of all-source analysis;

  • Deductive hypothesis-based methods should be employed more often as a complement to traditional evidence-based inductive approaches;

  • There should be more use of formal tests of diagnosticity of evidence, thereby improving the ability to confirm or deny hypotheses (a brief numerical sketch of this idea follows this list);

  • Collaboration during the analytical process should be routine, not exceptional, and workloads should be balanced accordingly;

  • Review and assessment—including peer review—must be an integral element of the analysis and should not be conducted only after-the-fact;

  • There should be greater recognition of the propaedeutic and heuristic roles of writing—as tools of discovery and learning—for the analyst; writing is not just a method of transmitting information to the user;

  • Contrarian methods and “Red Teams” should be a routine part of the analytic process;[6]

  • As opposed to the incremental approach, in which new evidence is assessed piecemeal for its effect on the course of judgments made, more use should be made of a “stock-take” approach, in which the entire collection of evidence is reviewed holistically;[7]

  • “Process watchers,” charged with recognizing cognitive impediments and process failures, should become integral parts of analytic teams;[8]

  • In addition to reviewing products at hand, analytic managers and senior Intelligence Community overseers should subject the knowledge base and domain expertise to continual appraisal in order to assess whether the scaffolding of evidence and the inferential reasoning is sufficiently strong to bear the weight of the judgments being made and the policy decisions that may rest upon them.[9]
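To make the notions of diagnosticity and of the “stock-take” more concrete, the short illustration below works through a toy example; the hypotheses, prior probabilities, and likelihoods are invented for the purpose and are not drawn from any actual case. It shows why evidence that is about equally expected under competing hypotheses should move a judgment very little, and why periodically revisiting the priors themselves, as a stock-take does, can matter more than accumulating additional reports.

# Illustrative only: invented hypotheses, priors, and likelihoods.
# Diagnosticity here means how differently a report is expected under
# competing hypotheses; non-diagnostic reports barely move the judgment.

def update(priors, likelihoods):
    # One round of Bayesian updating: P(H|E) is proportional to P(E|H) * P(H).
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

priors = {"program active": 0.5, "program dormant": 0.5}

# A non-diagnostic report: roughly equally expected under either hypothesis.
weak_report = {"program active": 0.6, "program dormant": 0.5}
# A diagnostic report: far more expected if the program is active.
strong_report = {"program active": 0.7, "program dormant": 0.1}

print(update(priors, weak_report))    # stays near 50/50
print(update(priors, strong_report))  # shifts sharply toward "program active"

# A "stock-take" would also re-examine the 50/50 prior itself; incremental
# updating alone can never correct a prior that is never revisited.

The same arithmetic underlies the caution in note 7 that incremental, Bayesian-style updating depends on earlier priors that may never be carefully reexamined.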

Interaction with Users

  • There may not be a “right” answer, and there may be limits on “knowability”;

  • Users are owed an honest assessment of the quality of evidence, uncertainties in judgments, and the “knowability” of the answer; transparency in the logic chain and application of evidence is essential;

  • Such assessments can be better conveyed through conversation and dialogue than through static “finished” products;

  • Analysts should expect users to “pull threads” and question judgments in open dialogue.


Management and Oversight

  • Management’s first responsibility should be to remove impediments to analysts’ ability to function effectively; the core management function is to ensure the process is being followed and is adapting as necessary to changing needs;

  • Commitment by analysts rather than enforced compliance by managers must become the driving force in renewed analytical practices;

  • Management must be knowledgeable about the practice of analysis and must provide appropriately focused incentives for those they supervise;

  • Managers must encourage self-awareness, questioning of existing procedures, and striving for continuous improvement;

  • The organization should create a “learning environment”—not only for domain knowledge, but also for process and methodological expertise;

  • The organization must encourage not only “near-miss” analysis and error detection, but also the consistent reporting of anomalies and errors;

  • There should be toleration of first errors, but no tolerance for repeating the same mistake; being wrong will happen, but failing to learn should be subject to sanction.

 

Cautions and Precautions

  • Self-awareness of cognitive biases and institutional prejudices, as well as a more self-reflective manner, should be intrinsic elements of an analyst’s mindset;

  • There is no “revealed truth,” and assertion by reference to evidence or previous “finished” intelligence products is not proof;

  • “Truth” is not held solely by the Intelligence Community, either here or abroad;

  • “Truth” is not necessarily the first priority in users’ questions, but it should be the first priority in analysts’ answers;

  • Ongoing re-examination and revalidation of previous judgments is very important; unless care is taken to validate and maintain the currency of the library of finished intelligence, “layering” poses significant dangers for the analytic process;

  • Keeping an open mind—framing multiple hypotheses and looking at different potential interpretations of “the evidence”—is essential if locking in premature judgments is to be avoided;

  • Regardless of the apparent rigor of the process, an analyst must approach analytic issues with “mindfulness” of the potential pitfalls in evidence and analytic methods that are being applied to a particular problem;[10]

  • Tools and methodologies should be tested and evaluated for analytic effectiveness and usability before adoption, and analysts should have an important role in evaluating them;

  • Because “mindfulness” only goes so far in detecting one’s own biases and cognitive failings, a supervisory function that watches over the conduct of the process is important;[11]

  • Differences in culture and governance mechanisms are real—not all human actions are determined by Western notions of rational thought or Enlightenment values—and they affect societal and individual preferences and decision metrics.

Additionally, if at all possible, the first audience for the knowledge gained should be the peer “community of practice” rather than the policy users.[12] Too much focus on “serving the first customer” may have served to shortchange the depth of analysis and the rigor that would be demanded by knowledgeable peers. Moreover, analysis must pay more attention to how the knowledge is conveyed to the users while, at the same time, recognizing that the message should rest on a “sound story.” The analytic process must reinstill a sense that the discipline of writing serves to discover the story, not only to convey it, and appropriate incentives to encourage these behaviors must be instituted.

 

Getting Started: Six Fundamentals

If at all possible, the first audience for knowledge gained should be the peer "community of practice" rather than the policy users.

This set of principles provides the basis for a systematic campaign to enhance analytic effectiveness in the three interrelated areas of people, process, and technology. The process dimension defines the characteristics and qualities of the enhancements needed in the people and technology areas, and that is the focus of this study. The people dimension must address, in addition to recruitment and training, impediments to good work and work practices, retention, and professional development, as well as appropriate organizational and institutional incentives to overcome them. The technology dimension must be centered around enabling and facilitating both the people and process dimensions, not simply on updating the technical infrastructure to move more data faster.

The corrective measures proposed here address six processes that are indispensable to restoring the Intelligence Community’s capability to perform effective intelligence analysis. These proposals emphasize:

1. A reconceptualized set of processes and procedures (including tools, methods, and practices) for analysis;

2. An integrated process for recruiting, training, educating, and professionalizing analysts based on a traditional graduate education model emphasizing close mentoring;

3. A new, more interactive process for communication between users and intelligence analysts throughout the intelligence cycle;

4. A fundamentally revised process for establishing “proof,” validating evidence and judgments, and reviewing those judgments;

5. A process for capturing the lessons of experience and advancing organizational learning; and

6. A process for continual collaboration and sharing.

 

A Revamped Analytic Process

Effective intelligence analysis requires the coupling of deep expertise with innovative approaches and intuition instead of the constraining formalism of “scientism.” Although adopting methods of alternative analysis[13] and setting up red teams are a useful start, creating a more coherent structure and a demanding, self-reflective analytic process must also involve more than calls for lateral, out-of-the-box, or non-linear thinking on the part of individual analysts. Real change must alter the very modes of thought that dominate the expectations and practices of today’s users, managers, and creators of all-source analysis. Both “sensemaking” and curiosity should be basic elements of this transformed paradigm.[14]

Models already exist for such a new paradigm, as a review of other domains in which failure of work processes can have large potential adverse consequences demonstrates. These domains fall into two distinct categories: 1) where the standardized procedures may not be sufficient to prevent routine but costly failures and 2) where routine procedures are clearly insufficient to face extraordinary conditions. In all of these models, however, the salient features are a high degree of self-awareness, emphasis on early error detection and correction, and the ethos of a “learning organization.”

In the first category are organizations, such as those engaged in manufacturing and transaction processing, which specialize in maximizing the effectiveness of routine operations through continuous attention to improvements in the process. For these organizations, even small variances in outcome are a signal that routine procedures need to be adjusted, or significantly altered, in order to correct errors that cumulatively could become worrisome. These organizations have adopted the emphasis of the Quality Movement on consistent, continual self-examination and improvement; perhaps the best known of these is Toyota with its formalized kaizen (continuous improvement) system. Other organizations have implemented similar techniques; among these are the “5 Whys Approach,” which focuses on recursive questioning to identify the root causes of failure rather than its superficial symptoms, and the Six Sigma Movement, which emphasizes consistency and reductions in process variance.[15]
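By way of a purely notional illustration of the Quality Movement’s treatment of small variances as process signals, the sketch below flags outliers in a made-up series of cycle times; the numbers and the two-standard-deviation threshold are assumptions chosen only to show the idea, not a proposed Intelligence Community metric.

# Notional data: days from tasking to delivery for a routine product line.
import statistics

cycle_times = [4, 5, 4, 6, 5, 4, 5, 12, 4, 5]

mean = statistics.mean(cycle_times)
stdev = statistics.stdev(cycle_times)

# Treat any large deviation as a signal that the routine procedure itself
# needs examination, in the spirit of statistical process control.
for i, value in enumerate(cycle_times, start=1):
    if abs(value - mean) > 2 * stdev:
        print(f"item {i}: {value} days deviates sharply (mean {mean:.1f}, sd {stdev:.1f}); examine the process")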

Successful day-to-day operations can give rise to mindlessness, defined as inattention to the environment and to internal procedures.

In the second category, such domains include the nuclear power industry, civil aviation, and aircraft carrier operations. These organizations recognize that routine operations can produce or face conditions that unexpectedly turn extraordinary, with serious, adverse consequences. The paradigm for organizations of this type is the High Reliability Organization (HRO) model.[16]

Some commentators would also include hospitals, and especially their high-risk specialty units, as HROs because of the large consequences of error. Built-in practices—formal protocols, structured procedures, and self-awareness measures such as mortality and morbidity (M&M) conferences—are a sign that hospitals recognize the risks of errors.[17]

A common denominator of both categories of organizations is that they “…reliably forestall catastrophic outcomes through ‘mindful’ attention to ongoing operations.”[18] As Fishbein and Treverton note, quoting Karl Weick and Kathleen Sutcliffe, “The unifying trait of HROs is that they exhibit the quality of ‘mindfulness,’ defined as:

…the combination of ongoing scrutiny of existing expectations, continuous refinement and differentiation of expectations based on new experiences, willingness and capability to invent new expectations that make sense of unprecedented events,…and identification of new dimensions of context that improve foresight and current functioning.[19]

These organizations address the possibility of extraordinary events by building procedures that are designed to do more than maximize effectiveness and efficiency in the conduct of routine operations. The organization is focused on addressing non-routine operations, so that the unexpected “…doesn’t surprise or disable them” and “…coping actions seldom make the situation worse.”[20] These organizations see small errors in routine operations, in addition to their role as process signals, as indications that organizational compliance and managerial oversight are slipping—and that such slippage could presage worse failures.

Both categories of organization understand that successful day-to-day operations can give rise, in effect, to mindlessness, defined as inattention to the environment and to internal procedures. In these circumstances, people slip into routines, fail to notice changes in a larger context, see new phenomena in old categories, and use incoming information (even if it indicates significant variances) to confirm expectations. Mindfulness, on the other hand, emphasizes continuous updating and assessing alternate interpretations and implications of incoming information; even small signs of failure can suggest serious problems in organizational processes and compliance with them. In these organizations, according to Weick and Sutcliffe, there is a “preoccupation with failure, both past and present”; and there is a concomitant stress on early error detection and correction at the lowest levels, as well as emphasis on error reporting upwards as part of the self-assessment “contract.”[21]

Mindfulness, on the other hand, emphasizes continuous updating and assessing of alternate interpretations and implications of incoming information.

Each category contains proven features that recommend themselves for inclusion in a new synthesis for a transformed intelligence analysis process.[22] From the High Reliability Organization, the feature is “mindfulness,” which is composed of five processes (and sub-processes below them): anticipating and becoming aware of the unexpected; containing the unexpected; near-miss analysis; active management; and enhancing containment. The most important elements in creating an anticipatory capability are a reluctance to simplify interpretations, since simplification increases blind spots by filtering and abstracting key details; recognizing the effect of categorization on expectations by continually reexamining categories and event coding; and reassessing the basic assumptions and keystones of one’s analysis and analysis process. In addition, the stress on “near-miss analysis” is designed to recognize imperfections and errors before they cause consequential failures.

Both [near-miss and line stoppages in a just-in-time system] trigger root-cause analysis meant to uncover not only the proximate cause of the incident, but to eliminate, through redesign of the organization if necessary, the background conditions which generated the immediate source of the danger.[23]

From the Quality Movement, the synthesis model adopts the “5 Whys Approach,” employing a recursive questioning process that emphasizes the importance of identifying root causes of errors. This involves looking beyond the obvious first answer to the “what went wrong” question to the serially deeper causes until the base source of error is found. From medicine, there are two elements worth including. First is the practice of mortality and morbidity conferences, which focus on both near-miss and failure analysis. The second is the practice of “grand rounds” in which house staff, students, and attending physicians brief and review especially difficult cases. Both measures underscore the importance of open and collaborative communication in identifying and assessing hard problems, including both those that were resolved successfully and those that failed. Such open communication is essential to both investigatory and teaching roles. Finally, the introduction of a “process watcher,” as suggested by Kahneman, is intended to bring a clear and unbiased, outside expert’s eye to analytic teams. The process watcher function, unlike that of a Red Team, is intended to focus exclusively on identifying errors in the analytic process, not on alternative interpretations of the evidence or different logic chains.
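The recursive character of the “5 Whys” drill can be pictured with the small sketch below; the chain of answers is fabricated for illustration, standing in for what an actual postmortem discussion would surface.

# Fabricated example of a "5 Whys" chain; each answer becomes the next question.
failure_chain = {
    "the estimate was wrong": "a key assumption was never re-examined",
    "a key assumption was never re-examined": "no one owned the list of assumptions",
    "no one owned the list of assumptions": "review focused on format, not logic",
    "review focused on format, not logic": None,  # candidate root cause
}

def five_whys(symptom, chain, limit=5):
    # Walk from the surface symptom toward a root cause, asking "why" repeatedly.
    cause, step = symptom, 0
    while step < limit and chain.get(cause):
        cause = chain[cause]
        step += 1
        print(f"Why #{step}: {cause}")
    return cause

root_cause = five_whys("the estimate was wrong", failure_chain)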

The process watcher function is intended to focus exclusively on identifying errors in the analytic process, not on alternative interpretations of the evidence or different logic chains.

In light of the real challenges to conducting complex analysis effectively, it is important that the Intelligence Community identify and evaluate tools and methodologies that can help analysts make sense of complex phenomena rife with ambiguous or incomplete evidence—and then actually provide them. The revamped paradigm must also include processes that are more specifically directed toward strengthening the practice and content of analytic methods. This paradigm would very likely incorporate more widespread and routinized use of formalized techniques, such as Analysis of Competing Hypotheses (ACH), to explore multiple hypotheses and would employ appropriate supporting tools to facilitate their use.[24] There would also be more emphasis on the use of negative evidence and on its implications for key assumptions and inferences, especially when the orientation is toward current intelligence.[25] Finally, new collaborative mechanisms, such as “blogs” (web logs), “wikis” (collaboratively editable web pages), and groupware could be employed to facilitate better communication and a more continuous dialogue among the parties within the community of interest.[26]
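As a concrete, if deliberately simplified, illustration of the ACH idea referenced above, the sketch below scores invented evidence against invented hypotheses and counts inconsistencies. It is not the PARC ACH Tool mentioned in the note, only a minimal rendering of the technique’s core logic: concentrate on evidence that disconfirms, and set aside evidence that fits every hypothesis equally well.

# Hypothetical ACH-style consistency matrix (invented hypotheses and evidence).
# Scores: +1 consistent, 0 neutral or ambiguous, -1 inconsistent.

hypotheses = ["deception operation", "routine activity", "internal exercise"]

evidence = {
    "unusual night-time movement":     {"deception operation": +1, "routine activity": -1, "internal exercise": +1},
    "no change in communications":     {"deception operation":  0, "routine activity": +1, "internal exercise":  0},
    "prior announcement to observers": {"deception operation": -1, "routine activity": +1, "internal exercise": +1},
}

# ACH emphasizes disconfirmation: tally the inconsistencies for each hypothesis.
inconsistency = {h: 0 for h in hypotheses}
for scores in evidence.values():
    for h in hypotheses:
        if scores[h] < 0:
            inconsistency[h] += 1

# The hypothesis with the fewest inconsistencies survives best; evidence that
# scores identically across all hypotheses is non-diagnostic.
for h, n in sorted(inconsistency.items(), key=lambda kv: kv[1]):
    print(f"{h}: {n} inconsistent item(s)")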

This new analytic paradigm is designed to lead analysts to reflect more intensively on the practice of their trade. It is also intended to develop more structured procedures and to instill in both individual analysts and analytic units the discipline to follow them. Such changes will need to be implemented carefully, so as not to interfere with individual creative processes and existing analytic practices that are effective, especially those that are unorthodox and not easy to assess with formalized metrics. It will be especially important to guard against the danger that too much introspection will cause analysts to avoid risk by dodging judgments.

 

Recruiting, Training, Educating, and Developing Professionals

In the wake of 9/11, the Intelligence Community has been able to take advantage of an upwelling of public support to tap a large pool of candidate analysts. These candidates are, on the surface, talented, diverse, and well educated; but they will require extensive training and professionalization in order to become effective and productive contributors to the analytic community. Furthermore, in current circumstances, when analysts should more frequently be addressing the challenge of “discovery” and open-ended, undefined problems than more readily defined monitoring tasks, they are confronted by time pressures that leave them with little latitude for reflection and wondering. Moreover, our educational system increasingly produces linear thinkers more comfortable “painting within the lines” and pointed more toward likely solutions than toward broader problem-solving capabilities.[27] It is unclear whether these shortcomings can be corrected by better-targeted recruitment and more effective training, or whether they must be addressed by redesigning fundamental processes and practices; it is likely that both tracks will be needed.

How an Intelligence Analyst’s Rite of Passage Might Look

As the Stefiks made admirably clear in their book about innovation:

Graduate school is a rite of passage for becoming researchers and inventors. Graduate schools create the next generation of researchers and inventors who are primed to step into positions in the world of science and innovation.

The experience of graduate school draws on a much earlier tradition than undergraduate education, or even high school and grammar school. Education prior to graduate school is dominated by a program of lectures, exercises and exams. Such educational practices have a predetermined curriculum intended to serve classes of students essentially in lock step.…

In contrast, graduate school is based on the older tradition of mentoring and apprenticeship. Graduate education is about assisting students to take on a professional practice. The curriculum is more tailored. Students acquire the practice by working with multiple mentors, adjusting the emphasis to fit their career objectives. Students discover, sometimes by osmosis, elements of practice that would seldom be encountered in a classroom setting. Graduation requires demonstrated mastery at the level of a practitioner in the field.*

*Stefik and Stefik, 85.

However the question of academic preparation is resolved, rebuilding the apprenticeship and mentoring system is crucial. It is essential, therefore, to reconstitute the “keystone species” represented by the journeyman analyst.[28] This cannot be done by bringing in a flood of young, inexperienced analysts; dousing them with a short period of classroom training (begrudged by their managers and mostly focused on the right tone and format for reports); and leaving them to learn good practices while sifting for nuggets of current intelligence.

There is no known mechanism that can turn the apprentices into “instant” journeymen; they cannot be transformed by lectures, abstract study, or classroom exercises; nor can software tools enable them to substitute for more experienced analysts.[29] This challenging task demands a focused, directed effort at deep analysis within a subject area, most likely through a tutoring and apprenticeship model.[30] In addition, revised classroom instruction, based on a significantly strengthened curriculum emphasizing analytic methodologies and methods, could have an important role within a rebuilt program for professional development. The Intelligence Community should move away from “training courses” that take analysts off-line for weeks to months and reintegrate this type of training directly into the “practice” of intelligence analysis, as is done in clinical education for medicine and law. Moreover, it is absolutely essential to create a professional “duty of curiosity,” which the training process would embed in the professional ethos and management would encourage, even in the face of time pressures to meet priority taskings. Further, a peripatetic career for senior analysts—”moving around to move up,” often increasingly farther from the actual practice of analysis—is not a useful way to foster deep expertise or to create effective role models. Reestablishing a new cadre of effective professional intelligence analysts will require basic changes in their career patterns, potentially requiring that the military practice of tracking and protecting vital sub-specialties with their own career ladders be emulated.

These factors prompt the thrust to restructure the career path of intelligence analysts, perhaps along the lines of medical or science graduate education—especially doctoral and post-doctoral—that integrates classroom learning, research, and clinical practice. In neither case are education and training separated; each is seen as an integral element in producing a practicing member of the medical or scientific community. In this context, then, deeper analytical products are essential to building the process expertise and domain knowledge of the analyst and to rebuilding the domain knowledge base. Together, these capabilities enable a skilled analyst to contextualize current intelligence for the decisionmaker.

Such a restructured program should also seek to foster an open seminar atmosphere during the training process and impart to analysts the practice of critical collaborative discussion within a professional network carried directly into the workplace. Moreover, in this process, in which socialization is seen as an important aspect of absorbing the ethics and ethos that should guide a professional, a close relationship with a mentor is crucial. The mentorship process also reinforces the HRO’s emphasis on a “culture of learning,” a habit of error reporting developed by encouraging openness, tolerance of even “stupid questions,” and professional collaboration as a norm.

The drastic shortages in the cadre of experienced analysts prompt three final thoughts on this process. First, the Intelligence Community must bring back sufficient mentors (even if on contracts that permit double-dipping), so that it can truly support the apprenticeship model with highly personalized mentoring. This will not be an easy task; mentors will need to be chosen carefully and supervised properly, so that the right lessons from experience are passed on—not cynical views carried away from previous experience with a dysfunctional process. Second, the community also needs to give up the conceit that it can develop software and tools that will make the novices into journeymen or experts without their going through this lengthy process of apprenticeship. This is not to say, however, that appropriate tools either do not exist or cannot be developed to help them do their jobs better and perhaps progress through the cycle more quickly. Third, and perhaps most important in the interim, the Intelligence Community should look for alternative ways to produce the intelligence insights that the journeymen used to provide while, at the same time, reestablishing the vital interrelated processes that created and fostered the development of “intellectual middleware.” During this effort, managers must avoid the temptation to use these mentors to supplement the analytic cadre or to force the apprentices to rush their “analytical” fences.

About Apprenticeships
There is often a gap between what can be learned in formal lessons and what needs to be conveyed in total.…


When graduate students begin working with their mentors, they are embarking on a journey with an experienced guide. Apprenticeship amounts to going around the research cycle a few times, asking questions, and getting help at the trickier steps.*

*Stefik and Stefik, 86.

 

User-Community Interactions

The current processes for interaction between the Intelligence Community and its consumers, especially senior policymakers, do not work well at either end of the “Intelligence Cycle.” The community does a less than satisfactory job of communicating its judgments to these customers.[31] At the same time, few users have any real education or understanding of the Intelligence Community’s capabilities and limitations. To a large degree, both problems can be ascribed to too little sustained dialogue, interaction, engagement, or mutual understanding; both parties fail to understand that they are in a collective process of discovery, sensemaking, and judgment. These failings have not come about by accident, however, but by community preference. They flow from what have been deeply rooted beliefs—especially strong at CIA—that too close an association with policymakers and their political concerns runs the danger of contaminating “pristine” intelligence analyses with “policy judgments.”[32] As noted earlier, maintaining an appropriate degree of objectivity is a difficult problem; but a good solution is to be found neither by substituting users’ judgments for those of intelligence professionals nor by the professionals ignoring users’ real interests and needs.

Analysts and their managers should be prepared to be forthright in admitting what they don’t know and in identifying explicitly the uncertainties in judgments they provide.

Since the collapse of the Soviet Union, this mutual lack of understanding, combined with the lack of a sustained strategic policy that would provide consistent guidance and priorities, has forced the Intelligence Community to divine targets and priorities from immediate customer requirements rather than from a longer time horizon with a more strategic, synoptic, and open-minded field of view. Moreover, this tendency was reinforced throughout most of the 1990s by budget pressures that drove the community to focus on issues “relevant” to senior policymakers—and, therefore, defensible in budget hearings. Unfortunately, this approach guarantees myopia, and the Intelligence Community winds up bearing the blame for “failures” to see threats from outside the policymakers’ fields-of-vision.

The current “over-the-transom” process for taskings and questions often leaves the working analyst without a good understanding of the real issues at stake or the purposes to which customers will put the answers once delivered. Yet, both are important if uncertainties and sources are to be addressed in a context that the policymaker can appreciate. When the Intelligence Community provides its analyses and judgments, too much emphasis is given to the format in which the information is presented, and there is too little real dialogue with the users. The layout—too frequently formal, precisely-formatted, sterile “finished products”—often masks uncertainties and points of contention unless the reader is witting enough to “pull the threads,” an effort that can lead to charges of politicization.

The community should move away from the notion that “finished intelligence” conveys certainty; highlighting and clarifying disagreements, especially over fundamental assumptions and judgments, would be of more value to high-level policymakers.[33] At the same time, the false expectations conveyed by the “conceit of finished intelligence” and the “illusion of omniscience” must be changed on both sides. Users must learn that there may not be a “right” answer; that the “more probable” case in the forecast set may not be the situation that will eventuate; and that, therefore, a range of thoughtful (and thought through) contingency responses may be necessary.

Equally, analysts and their managers should be prepared to be forthright in admitting what they don’t know and in identifying explicitly the uncertainties in judgments they provide. This means providing greater transparency and traceability as they construct inference chains based on explicitly denoted qualified evidence, other information, assumptions, and hypotheses.[34] Analysts must themselves remember and remind policymakers of Heisenberg’s rule: that we are not simply outside observers of a process who have no effect on the outcome; our perceptions and actions are integral elements in a multiplayer game with strongly coupled feedback loops and action-reaction cycles.[35] Finally, it must be recognized that the frequent lack of deep understanding of intelligence on the part of its primary policy users suggests that a serious education effort needs to be undertaken. Members of other communities—the strategic nuclear community may be the best example—deal with arcane subjects that must be communicated clearly to national decisionmakers who often lack experience or expertise in such matters. They have addressed this problem by developing formal procedures for educating their users and managing their expectations.
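One way to picture the transparency and traceability called for above is a provenance record in which every judgment points explicitly to the evidence, assumptions, and prior judgments on which it rests. The structure sketched below is hypothetical; its field names and example content are invented, and it is meant only to suggest the kind of audit trail that explicit denotation of qualified evidence implies.

# Hypothetical provenance structure for an inference chain (invented fields).
from dataclasses import dataclass, field

@dataclass
class Support:
    kind: str          # "evidence", "assumption", or "prior judgment"
    description: str
    credibility: str   # e.g., "high", "moderate", "low", "unknown"

@dataclass
class Judgment:
    claim: str
    confidence: str
    supports: list = field(default_factory=list)

    def trace(self):
        # Print the judgment and everything it rests on, so a reader can "pull the threads".
        print(f"JUDGMENT ({self.confidence}): {self.claim}")
        for s in self.supports:
            print(f"  - [{s.kind} / {s.credibility}] {s.description}")

judgment = Judgment(
    claim="Facility X is being expanded",
    confidence="moderate",
    supports=[
        Support("evidence", "overhead imagery shows new construction", "high"),
        Support("assumption", "construction pace implies state funding", "unknown"),
        Support("prior judgment", "earlier assessment of the site's purpose", "moderate"),
    ],
)
judgment.trace()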

Unfortunately, if all the policymakers see is a continuous flow of “current information” while lacking deep knowledge of a topic or the time to synthesize and integrate what they do learn into a coherent picture, an Intelligence Community version of Gresham’s law may well apply—factoids devoid of context will drive out thinking. This situation has implications not only for management of the technical systems that support and encourage collaboration and sharing, but also for the important social aspects of group behavior and the mechanisms for interaction between policy users and the analysts. It has significant implications both for the roles intelligence analysts will play and for the modalities through which they produce and communicate their analyses to users.

Communicating complex judgments and degrees of confidence in those judgments is best done through conversation among the parties, which demands different mechanisms than simple dissemination of “facts.” If the mechanisms for interaction with the users of intelligence are designed only to support the provision of individual pieces of evidence rather than to engage both parties in an extended conversation in which ambiguity and subtlety can be communicated, it is unlikely that either party will be satisfied with these interactions. Indeed, it is worth asking whether the Intelligence Community has any unique contributions to make to anticipatory judgments compared with what policymakers can provide themselves, and, if so, what are they? Posing the question this way emphasizes that such anticipatory assessments are in the realm of judgment, an area that the community, in its quest for “rigor,” has often tried to avoid. Now, given the variety of alternative information sources available to the policymaker, is even superb “current reporting” enough to make the community essential, especially to the policy users?

 

“Proof,” Validation, and Review

Failures in the pre-war estimates concerning Iraq’s WMD capabilities highlight deep-seated problems in the extremely “self-referential”—that is, customarily internal, collegial, and lacking rigor—process for reviewing and validating intelligence judgments. At the same time, there is absolutely no excuse for allowing easily correctable errors, such as “Key Judgments” that differ from the body of the text or references to earlier assessments that portray their judgments inaccurately, to be conveyed to consumers.[36] The existing process relies fundamentally on an analyst-level coordination process augmented by a hierarchical review process, most often by managers who possess less specific knowledge and are farther removed from the craft of analysis. The community needs to create new processes that capture the best of the legal system’s adversarial model of open combat and the scientific community’s truly horizontal peer review and independent replication, sprinkling in alternative analyses and red teams to do so on both process and substance.

The community needs to create processes that capture the best of the legal system's adversarial model and the scientific community's horizontal peer review.

Such processes, even if conducted totally inside the community, are bound to be distressing, as knowledgeable individuals subject an analyst’s evidence, assumptions, hypotheses, and logic to increased scrutiny. These mechanisms, including workshops and roundtables, would be useful; but if they are composed solely of Intelligence Community members, or those who defer to them, then the problem of a self-referential “proof process” will continue. Therefore, painful as it might prove to be, the Intelligence Community must step outside its usual circle and exploit a wider range of expertise.

In addition to peer review, the scientific community has fostered a range of self-correcting features, such as tension between experimentalists and theoreticians. It also relies on a wider range of alternative proof models. Such additional mechanisms for validation should be examined for incorporation in the revamped intelligence analysis process. Furthermore, as in medicine, law, and science, mechanisms need to be developed and implemented to ensure that knowledge bases are updated and corrected as necessary and that users are notified of major errors in previous reports, so that the cumulative knowledge base is as accurate as possible.[37]

Finally, both the Intelligence Community and its users must have an accurate calibration of whether intelligence is “on top” of important issues and domains. The review process must go beyond assessments of individual products and personnel performance to create processes that assess the “state of knowledge” and the community’s (and its users’) awareness of that state. This type of self-diagnosis is badly needed, especially as it creates an environment for self-criticism combined with a license to look at the adequacy of information both on specific issues and across wide domain areas.

 

After-Action Reports and Lessons-Learned Processes

To understand "what works" and "what doesn’t work," the Intelligence Community should establish an institutional "lessons-learned" process.

Unlike medicine or law, militaries do not usually have the luxury of a continuous stream of real opportunities to “practice” their craft or profession. To compensate, the US military counts on intensive individual and unit training, institutionalized after-action reports and lessons-learned processes, and the exercise of complete operational organizations before units participate in actual operations. It also instantiates understanding of what works best as formal “doctrine” that can be studied and inculcated to provide a common frame of reference and instinctive procedural basis. One can find very similar systems created in both academia and medicine to educate and train their incoming members. In these other professions, there is little distinction or separation between education, research, and training; rather, there is a continuum of learning that depends heavily on the “hands-on” transmission of domain knowledge and process expertise.

In order to understand both “what works” and “what doesn’t work,” the Intelligence Community should establish an institutionalized lessons-learned process. This would include postmortems not only on major failures but also on successes and near misses.[38] The purpose of this process is not to assign blame, which is traditionally an inspector general function, nor is it to punish; rather, it should serve as an aid to individual and organizational learning. Both after-action reports, developed originally at the US Army’s National Training Center (NTC), and lessons-learned processes, used by the Army and the US Joint Forces Command (JFCOM), are effective methods and would be good starting points for creating an Intelligence Community effort. A good complement to the lessons-learned process is the use of wargaming and scenario methods, developed on an accurate historical basis, to force the participants to examine the situation, the players, their interactions, and the outcomes in a thoughtful manner.

After action reports and lessons-learned processes can also furnish objective evidence of the utility of tools and methodologies and their suitability for addressing various kinds of problems. Without such a measured baseline of effective procedures, methods, and tools, it is difficult to create consistent processes to select and adopt appropriate analytic methodologies or to train and exercise personnel. In addition, a baseline makes credit assignment and personnel efficiency reviews more reliable. As another former analyst and thoughtful observer of the analytic process has written,

The identification of causes of past failure leads to kernels of wisdom in the form of process modifications that could make the intelligence product more useful. A more effective, more accurate intelligence capability may still be vulnerable to the cognitive and institutional pathologies that cause failure, but a self-conscious and rigorous program based on the lessons derived from the existing literature would strengthen the intelligence product.[39]

 

Collaboration

Several of the recent investigations point to the important role in these intelligence failures of lapses in sharing information and coordinating efforts among the constituent elements of the Intelligence Community. Not surprisingly, the suggested remedies often involve establishing new directive authorities to mandate coordination and collaboration and build or improve the technical information infrastructures that could support collaborative activities. In fact, solutions do not start with either directive authorities or new IT systems, although an improved, technically sophisticated information infrastructure would help. Rather, effective collaboration is fundamentally a matter of culture and values; what is needed is, first, to create appropriate incentive structures for sharing and, second, to forge expert social networks and effective “distributed trust” systems. These are problems of organizational culture that demand active leadership at all levels of management throughout the community.

Key Questions in Reviewing Lessons Learned

Veteran analyst Charles Allen provides a list of questions that a lessons-learned process can assist in answering:

  • What set of hypotheses was being considered? Was the set comprehensive, or was there bias in the selection of hypotheses? What a priori probability was attached to each hypothesis? Again, was there bias?

  • Was there a good understanding about the observables that were expected to differentiate between the hypotheses? Was intelligence collection requested on the basis of these differentially diagnostic observables?

  • Were all the available data considered? How were the data weighted? What degree of credibility was accorded the sources?

  • Was the possibility of deception considered and accounted for?

  • Was the analytic process logically correct? Was the confidence in rendered judgments correctly estimated? If so, and if the confidence was low, was additional collection requested?

  • Were the judgments presented in a timely and adequate manner?

  • And, of course, was intelligence collection responsive and timely?*

*Allen, 3–4.

Moreover, as mentioned previously, the success of the Intelligence Community depends on the promotion of an entire set of effective collaborations: among analysts; between analysts and collectors; between analysts and operations officers; between analysts and the intelligence users; and not least, between community analysts and information sources outside the intelligence or national security enterprise. For example, tasks such as target assessment and collection planning involve complex collaborative activity among analysts and collectors. Often there are subtle bits of information that appear significant only in the context of tidbits from other disciplines or aspects of tradecraft which may be little understood—or not at all—outside the ranks of its own practitioners. Each of these collaborations involves a distinct “community of interest” or “community of practice” and represents a different type of social construct; none is fundamentally dependent on building more elaborate technical infrastructures as its primary need. Fostering effective collaborations can serve an important bootstrapping function as well, because participants in the collaboration are likely to become effective champions of more collaboration.

Sharing of private information among experts is a task for which technical solutions exist, but the community's technical, organizational, cultural, and incentive structures fail to support it.

Furthermore, true collaboration within the Intelligence Community should address more than simply sharing raw intelligence among different analytic components or taking part in the coordination process for finished products. The Intelligence Community understands how to do these tasks, even if they are not implemented effectively today. The real challenge and greatest leverage will come from sharing private information, such as initial hypotheses and tacit knowledge, among networks of experts in order to increase the opportunities for discovery of previously unrecognized significance. This is a task for which technical solutions are now known and feasible, but the community’s technical, organizational, cultural, and incentive structures fail to support it. A true collaborative environment must build effective trust systems among community analysts and collectors, especially between the DI and the DO. Indeed, such trusted environments should extend beyond the Intelligence Community to policymakers and, eventually, to other sources of expertise outside of the community and even outside the United States.

Another important reason for fostering collaboration is that of reducing information costs. Such reductions can result not only from the powerful impacts of information technologies on organizational forms, but also from the role of “social networks” as a medium for sharing among trusted members. Such networks, especially those among journeymen analysts, substantially reduce the transaction costs of creating and transmitting knowledge; in particular, they are a low-cost information resource for the essential tacit knowledge, both domain and process, that is so difficult to elicit and instantiate in formal knowledge systems. Moreover, because of the wide-ranging social network that journeymen build over their years of service, these networks can both help to diffuse account-specific insights into other areas and infuse other perspectives into their domain, often cutting across formal community security compartmentation restrictions. As with many examples of military C2, it is these informal networks and processes that truly enable the system to function. The potential of using “groupware,” such as Groove and other collaboration platforms, in building and strengthening these social networks bears careful exploration and experimentation.[40]

Finally, the Intelligence Community’s security mindset should also be addressed. The overall approach to security—mandated from the top by the newly created DNI if it is to have sufficient weight to become the new paradigm—must move away from the existing risk-averse model, which really seeks to avoid problems. Despite the likelihood of significant opportunity costs, the direction must be toward a risk management model like that increasingly being adopted for information security in other government agencies and the private sector. One approach would take responsibility for personnel and IT security out of the hands of dozens of individual agencies and vest it in a single community manager who would also have the power to adjudicate equities and make risk management decisions.[41] A single authority under the new DNI would issue security clearances and set security requirements for IT that would apply throughout the federal government (or at least the executive branch), among government contractors, and, ideally, to state and local government agencies, as required. This system would also eliminate the need to have clearances passed, making expedient collaboration easier. Uniform security standards for IT systems could make the electronic sharing of information and on-line collaboration among analysts substantially easier, thereby enabling the formation of collaborative “communities of interest” within a secure information environment.

 


Footnotes:

[1]A comprehensive discussion of remedial measures is beyond the scope of this paper.

[2]Several reviewers, all of whom are experts in management theory and organizational behavior, suggested this idea in parallel.

[3]See Michael Mears, “The Intelligence Community Genome: RIA Paper #3.” It is an insightful look at the practical challenges of organizational transformation in the Intelligence Community.

[4]See Jeffrey Cooper, “Intelligence & Warning: Analytic Pathologies,” and Johnston.

[5]See Kerr, et al. for a more extensive discussion by seasoned analysts on improvements needed in analysis.

[6]Composition of the teams, however, might better be rotational and ad hoc. See also Kerr, et al., 52: “Indeed, although certain gaps were acknowledged, no product or thread within the intelligence provided called into question the quality of basic assumptions….”

[7]The “stock-take” originated in the UK atomic weapons program; it differs fundamentally from the more usual practice, which holds that new information will appropriately correct earlier judgments. But this approach, based on Bayesian logic, depends on the earlier “priors,” which may never be carefully reexamined.

[8]Professor Daniel Kahneman, who teaches at Princeton University, also suggested this approach in the slightly different military “command and control” (C2) context in an interview with me in October 2000. Johnston discusses this approach at length in his chapter on “Integrating Methodologists into Teams of Experts.”

[9]Fritz Ermarth made this suggestion in a communication to me on 23 January 2005.

[10]See Warren Fishbein and Gregory Treverton, Rethinking “Alternatives Analysis” to Address Transnational Threats.

[11]The standard approach to addressing bias and prejudice in judgment and decisionmaking has been through training to recognize one’s own cognitive errors. During my research on cognitive impediments to C2 decisionmaking for the Defense Advanced Research Projects Agency (DARPA), Professor Daniel Kahneman pointed out that this approach has not succeeded, despite more than 25 years of trying. Kahneman was awarded the Nobel Prize in Economics for his work on imperfections in decisionmaking. (Personal interview, October 2000.)

[12]This issue was highlighted in Rozak, et al.

[13]See Fishbein and Treverton, 2, for a succinct and useful discussion of “alternatives analysis.”

[14]“Sensemaking” in this context means the ability to perceive, analyze, represent, visualize and make sense of one's environment and situation in a contextually appropriate manner. See both Fishbein and Treverton, 3, and Cooper, “Sensemaking: Focusing on the Last Six Inches.”

[15] See the Six Sigma website, http://www.isixsigma.com/.

[16]A number of perceptive commentators have pointed to HRO as a useful model, although often for diverse reasons. See Sabel, Johnston, and Stephen Marrin in “Preventing Intelligence Failures by Learning from the Past,” and Fishbein and Treverton.

[17]The continuing high rates of medical errors and the increasing appearance of “iatrogenic” (that is, physician-induced) diseases in hospitals suggest, however, that these organizations have a long way to go before they can be fully recognized as HROs.

[18]Karl E. Weick and Kathleen M. Sutcliffe, Managing the Unexpected: Assuring High Performance in an Age of Uncertainty, 23.

[19]Fishbein and Treverton, 4, quoting Weick and Sutcliffe, 25.

[20]Weick and Sutcliffe, 42.

[21]Weick and Sutcliffe, quoted in Fishbein and Treverton.

[22]This study will develop a brief outline of such organizational characteristics, but a comprehensive treatment will require a separate paper.

[23]Sabel, 30.

[24]An example is the ACH Tool developed by Palo Alto Research Center (PARC) for the Novel Intelligence from Massive Data (NIMD) Program under the Intelligence Community’s Advanced Research and Development Activity (ARDA).

[25]Kerr, et al.

[26]See Calvin Andrus’s Galileo Award winning paper, “Toward a Complex Adaptive Intelligence Community,” in Studies in Intelligence 49, no. 3 (2005).

[27]As Daniel Goleman noted in his answer to the “2005 Edge Question,” an annual survey by The Edge, an internet site favored by the technical community.

[28]Additional incentives to retain these experts appear worth considering.

[29]Frank J. Hughes, Preparing for the Future of Intelligence Analysis.

[30]Hughes, 2–3.

[31]The term “Intelligence Cycle” is itself part of the problem. With its Industrial Age antecedents, it usually conveys the notion of a self-contained “batch” process rather than a continuous spiral of interactions.

[32]See Davis, “Kent-Kendall Debate.”

[33]See Kerr, et al. on “Integration with the Policy Community,” 52.

[34]This has important and challenging implications for the estimative methods used by a community self-defined as “evidence-based.”

[35]German physicist Werner Heisenberg (1901–1976). I am indebted to John Bodnar for highlighting this issue.

[36]See SSCI Report, 286 and 300.

[37]This problem is significantly aggravated by the lack of a coherent information infrastructure.

[38]This is very much like Klein’s “premortems,” cited in Fishbein and Treverton, 7. See also Gary Klein, Intuition at Work: Why Developing Your Gut Instinct Will Make You Better at What You Do, 88.

[39]Stephen Marrin, “Preventing Intelligence Failures by Learning from the Past,” International Journal of Intelligence and Counterintelligence, 2004. A recent conference on lessons learned, sponsored by the Center for the Study of Intelligence, discussed this question at some length and provides an excellent point-of-departure for implementation projects. See Intelligence Lessons Learned Conference.

[40]Groove is a software package that creates a virtual private network (VPN) within an information network that controls access to a shared collaborative space and provides a variety of tools to facilitate information sharing and collaboration. It is accepted by some components of the Intelligence Community as secure and trustworthy, but it is not widely employed at this time.

[41]The IRTPA, creating the Information Sharing Executive Program Manager, does mandate this, at least for counter-terrorism information.

