
Reducing Analytic Error

Integrating Methodologists into Teams of Substantive Experts

Rob Johnston


Intelligence analysis, like other complex tasks, demands considerable expertise.  It requires individuals who can recognize patterns in large data sets, solve complex problems, and make predictions about future behavior or events.  To perform these tasks successfully, analysts must dedicate a considerable number of years to researching specific topics, processes, and geographic regions.

Paradoxically, it is the specificity of expertise that makes expert forecasts unreliable.  While experts outperform novices and machines in pattern recognition and problem solving, expert predictions of future behavior or events are seldom as accurate as simple actuarial tables.  In part, this is due to cognitive biases and processing-time constraints.  In part, it is due to the nature of expertise itself and the process by which one becomes an expert.1

Becoming an Expert

Expertise is commitment coupled with creativity.  Specifically, it is the commitment of time, energy, and resources to a relatively narrow field of study and the creative energy necessary to generate new knowledge in that field.  It takes a considerable amount of time and regular exposure to a large number of cases to become an expert.

An individual enters a field of study as a novice.  The novice needs to learn the guiding principles and rules—the heuristics and constraints—of a given task in order to perform that task.  Concurrently, the novice needs to be exposed to specific cases, or instances, that test the boundaries of such heuristics.  Generally, a novice will find a mentor to guide her through the process of acquiring new knowledge.  A fairly simple example would be someone learning to play chess.  The novice chess player seeks a mentor to teach her the object of the game, the number of spaces, the names of the pieces, the function of each piece, how each piece is moved, and the necessary conditions for winning or losing the game.

In time, and with much practice, the novice begins to recognize patterns of behavior within cases and, thus, becomes a journeyman.  With more practice and exposure to increasingly complex cases, the journeyman finds patterns not only within cases but also between cases.  More importantly, the journeyman learns that these patterns often repeat themselves over time.  The journeyman still maintains regular contact with a mentor to solve specific problems and learn more complex strategies.  Returning to the example of the chess player, the individual begins to learn patterns of opening moves, offensive and defensive game-playing strategies, and patterns of victory and defeat.

When a journeyman starts to make and test hypotheses about future behavior based on past experiences, she begins the next transition.  Once she creatively generates knowledge, rather than simply matching superficial patterns, she becomes an expert.  At this point, she is confident in her knowledge and no longer needs a mentor as a guide—she becomes responsible for her own knowledge.  In the chess example, once a journeyman begins competing against experts, makes predictions based on patterns, and tests those predictions against actual behavior, she is generating new knowledge and a deeper understanding of the game.  She is creating her own cases rather than relying on the cases of others.

The chess example is a rather short description of an apprenticeship model.  Apprenticeship may seem like a restrictive 18th century mode of education, but it is still a standard method of training for many complex tasks.  Academic doctoral programs are based on an apprenticeship model, as are fields like law, music, engineering, and medicine.  Graduate students enter fields of study, find mentors, and begin the long process of becoming independent experts and generating new knowledge in their respective domains.

To some, playing chess may appear rather trivial when compared, for example, with making medical diagnoses, but both are highly complex tasks.  Chess has a well-defined set of heuristics, whereas medical diagnoses seem more open-ended and variable.  In both instances, however, there are tens, if not hundreds, of thousands of potential patterns.  One research study found that chess masters had spent between 10,000 and 20,000 hours, or more than ten years, studying and playing chess.  On average, a chess master stores 50,000 different chess patterns in long-term memory.2

Similarly, a diagnostic radiologist spends eight years in full-time medical training—four years of medical school and four years of residency—before she is qualified to take a national board exam and begin independent practice.3 According to a 1988 study, the average diagnostic radiology resident sees forty cases per day, or around 12,000 cases per year.4 At the end of a residency, a diagnostic radiologist has stored, on average, 48,000 cases in long-term memory.
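
Those figures hang together arithmetically.  The back-of-the-envelope sketch below assumes roughly 300 reading days per year; that assumption is made here for illustration and is not a number taken from the studies cited above.

```python
# Back-of-the-envelope check of the radiology case counts cited above.
# The 40 cases per day and 4-year residency come from the text; the
# ~300 reading days per year is an assumption for illustration.
cases_per_day = 40
reading_days_per_year = 300   # assumed
residency_years = 4

cases_per_year = cases_per_day * reading_days_per_year   # 12,000
cases_per_residency = cases_per_year * residency_years   # 48,000

print(f"Cases per year: {cases_per_year:,}")
print(f"Cases stored over a residency: {cases_per_residency:,}")
```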

Psychologists and cognitive scientists agree that the time it takes to become an expert depends on the complexity of the task and the number of cases, or patterns, to which an individual is exposed.  The more complex the task, the longer it takes to build expertise, or, more accurately, the longer it takes to experience and store a large number of cases or patterns.

The Power of Expertise

Experts are individuals with specialized knowledge suited to perform the specific tasks for which they are trained, but that expertise does not necessarily transfer to other domains.5 A master chess player cannot apply chess expertise in a game of poker—although both chess and poker are games, a chess master who has never played poker is a novice poker player.  Similarly, a biochemist is not qualified to perform neurosurgery, even though both biochemists and neurosurgeons study human physiology.  In other words, the more complex a task is, the more specialized and exclusive is the knowledge required to perform that task.

An expert perceives meaningful patterns in her domain better than non-experts.  Where a novice perceives random or disconnected data points, an expert connects regular patterns within and between cases.  This ability to identify patterns is not an innate perceptual skill; rather it reflects the organization of knowledge after exposure to and experience with thousands of cases.6

Experts have a deeper understanding of their domains than novices do, and utilize higher-order principles to solve problems.7 A novice, for example, might group objects together by color or size, whereas an expert would group the same objects according to their function or utility.  Experts comprehend the meaning of data and weigh variables with different criteria within their domains better than novices.  Experts recognize variables that have the largest influence on a particular problem and focus their attention on those variables.

Experts have better domain-specific short-term and long-term memory than novices do.8 Moreover, experts perform tasks in their domains faster than novices and commit fewer errors while problem solving.9 Interestingly, experts go about solving problems differently than novices.  Experts spend more time thinking about a problem to fully understand it at the beginning of a task than do novices, who immediately seek to find a solution.10 Experts use their knowledge of previous cases as context for creating mental models to solve given problems.11

Better at self-monitoring than novices, experts are more aware of instances where they have committed errors or failed to understand a problem.12 Experts check their solutions more often than novices and recognize when they are missing information necessary for solving a problem.13 Experts are aware of the limits of their domain knowledge and apply their domain’s heuristics to solve problems that fall outside of their experience base.

The Paradox of Expertise

The strengths of expertise can also be weaknesses.14 Although one would expect experts to be good forecasters, they are not particularly good at making predictions about the future.  Since the 1930s, researchers have been testing the ability of experts to make forecasts.15 The performance of experts has been tested against actuarial tables to determine if they are better at making predictions than simple statistical models.  Seventy years later, with more than two hundred experiments in different domains, it is clear that the answer is no.16 If supplied with an equal amount of data about a particular case, an actuarial table is as good, or better, than an expert at making calls about the future.  Even if an expert is given more specific case information than is available to the statistical model, the expert does not tend to outperform the actuarial table.17

There are few exceptions to these research findings, but the exceptions are informative.  When experts are given the results of the actuarial predictions, for example, they tend to score as well as the statistical model if they use the statistical information in making their own predictions.18 In addition, if an expert has privileged information that is not reflected in the statistical table, she will actually perform better than the table.  A classic example is the broken leg argument:  Judge X has gone to the theater every Friday night for the past ten years.  Based on an actuarial table, one would predict, with some certainty, that the judge would go to the theater this Friday night.  An expert knows, however, that the judge broke her leg Thursday afternoon and is currently in the hospital until Saturday.  Knowing this key variable allows the expert to predict that the judge will not attend the theater this Friday night.
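
The broken-leg logic can be made concrete with a minimal sketch.  The attendance counts and the override rule below are hypothetical illustrations of the argument, not a model drawn from the studies cited above.

```python
# Minimal sketch of the broken-leg argument: an actuarial base rate versus an
# expert override based on privileged case information. All numbers are hypothetical.

def actuarial_prediction(fridays_attended: int, fridays_observed: int) -> float:
    """Base-rate probability that the judge attends the theater this Friday."""
    return fridays_attended / fridays_observed

def expert_prediction(base_rate: float, knows_broken_leg: bool) -> float:
    """The expert adopts the base rate unless privileged information rules it out."""
    return 0.0 if knows_broken_leg else base_rate

# Ten years of near-perfect Friday attendance (illustrative counts).
base_rate = actuarial_prediction(fridays_attended=510, fridays_observed=520)

print(f"Actuarial table:     {base_rate:.2f}")   # ~0.98
print(f"Expert (broken leg): {expert_prediction(base_rate, knows_broken_leg=True):.2f}")   # 0.00
```

The expert outperforms the table here only because she holds one decisive variable the table never sees, which is exactly the condition the next paragraph argues is rare in practice.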

Although this argument makes sense, it is misleading.  Forecasting is not simply a linear logical argument but rather a complex, interdisciplinary, dynamic, and multivariate task.  Cases are rare where one key variable is known and weighed appropriately to determine an outcome.  Generally, no single static variable predicts behavior; rather, many dynamic variables interact, weight and value change, and other variables are introduced or omitted to determine outcome.

Theorists and researchers differ when trying to explain why experts are less accurate forecasters than statistical models.  Some have argued that experts, like all humans, are inconsistent when using mental models to make predictions.  That is, the model an expert uses for predicting X in one month is different from the model used for predicting X in a following month, although precisely the same case and same data set are used in both instances.19
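
A toy illustration of that inconsistency (the variables and weights below are hypothetical, not a reconstruction of the cited experiments): the actuarial model applies the same weights to the same case every time, while the expert's implicit weights drift from one session to the next, so identical data can yield different forecasts.

```python
# Toy illustration of judgment inconsistency: identical case data, fixed weights for
# the actuarial model, drifting implicit weights for the expert. All values hypothetical.
case = {"spending_trend": 0.7, "prior_incidents": 0.4, "alliance_strain": 0.6}

actuarial_weights       = {"spending_trend": 0.5, "prior_incidents": 0.3, "alliance_strain": 0.2}
expert_weights_january  = {"spending_trend": 0.6, "prior_incidents": 0.1, "alliance_strain": 0.3}
expert_weights_february = {"spending_trend": 0.2, "prior_incidents": 0.5, "alliance_strain": 0.3}

def score(weights: dict, data: dict) -> float:
    """Weighted sum of the case variables."""
    return sum(weights[k] * data[k] for k in weights)

print(f"Actuarial model, any month: {score(actuarial_weights, case):.2f}")        # 0.59
print(f"Expert, one month:          {score(expert_weights_january, case):.2f}")   # 0.64
print(f"Expert, the next month:     {score(expert_weights_february, case):.2f}")  # 0.52
```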

A number of researchers point to human biases to explain unreliable expert predictions.  During the last 30 years, researchers have categorized, experimented, and theorized about the cognitive aspects of forecasting.20 Despite such efforts, the literature shows little consensus regarding the causes or manifestations of human bias.  Nonetheless, there is general agreement that two types of bias exist:

  • Pattern bias—looking for evidence that confirms rather than rejects a hypothesis and inadvertently filling in missing data with data from previous experiences.

  • Heuristic bias—using inappropriate guidelines or rules to make predictions.

The very method by which one becomes an expert explains why experts are much better at describing, explaining, performing tasks, and problem-solving within their domains than are novices, but, with a few exceptions, are worse at forecasting than actuarial tables based on historical, statistical models.

A given domain has specific heuristics for performing tasks and solving problems.  These rules are a large part of what makes up expertise.  In addition, experts need to acquire and store tens of thousands of cases within their domains in order to recognize patterns, generate and test hypotheses, and contribute to the collective knowledge within their fields.  In other words, becoming an expert requires a significant number of years of viewing the world through the lens of one specific domain.  It is the specificity that gives the expert the power to recognize patterns, perform tasks, and solve problems.

Paradoxically, it is this same specificity that is restrictive, narrowly focusing the expert’s attention on one domain to the exclusion of others.  It should come as little surprise, then, that an expert would have difficulty identifying and weighing variables in an interdisciplinary task such as forecasting an adversary’s intentions.

The Burden on Intelligence Analysts

Intelligence is an amalgam of a number of highly specialized domains.  Within each of these domains, a number of experts are tasked with assembling, analyzing, assigning meaning, and reporting on data, the goals being to describe, solve a problem, or make a forecast.

When an expert encounters a case outside her expertise, her options mirror the steps she initially took to become an expert in the field.  She can:

  • Try to make the new data fit with a pattern that she has previously stored;

  • Recognize that the case falls outside her expertise and turn to her domain’s heuristics to try to give meaning to the data;

  • Acknowledge that the case still does not fit with her expertise and reject the data set as being an anomaly; or

  • Consult with other experts.

A datum, in and of itself, is not domain specific.  Imagine economic data that reveal that a country is investing in technological infrastructure, chemical supplies, and research and development.  An economist might decide that the data fit an existing spending pattern and integrate these facts with prior knowledge about a country’s economy.  The same economist might decide that this is a new pattern that needs to be remembered (or stored in long-term memory) for some future use.  The economist might decide that the data are outliers of no consequence and should be ignored.  Or, the economist might decide that the data would be meaningful to a chemist or biologist and therefore seek to collaborate with other specialists who might reach different conclusions regarding the data than would the economist.

In this example, the economist is required to use her economic expertise in all but the final option of consulting with other experts.  In the decision to collaborate, the economist is expected to know that what appears to be new economic data may have value to a chemist or biologist, domains with which she may have no experience.  In other words, the economist is expected to know that an expert in some other field might find meaning in data that appear to be economic.

Three confounding variables affect the economist’s decisionmaking:

  • Processing time, or context.  This does not refer to the amount of time necessary to accomplish a task, but rather the moment in time during which a task occurs—“real time”—and the limitations that come from being close to an event.  The economist doesn’t have a priori knowledge that the new data set is the critical data set for some future event.  In “real time,” they are simply data to be manipulated.  It is only in retrospect, or long-term memory, that the economist can fit the data into a larger pattern, weigh their value, and assign them meaning.

  • Pattern bias.  In this particular example, the data appear to be economic and the expert is an economist.  The data are, after all, investment data.  Given the background and training of an economist, it makes perfect sense to try to manipulate the new data within the context of economics, despite the fact that there may be other more important angles.

  • Heuristic bias.  The economist has spent a career becoming familiar with and using the guiding principles of economic analysis and, at best, has only a vague familiarity with other domains and their heuristics.  An economist would not necessarily know that a chemist or biologist could identify what substance is being produced based on the types of equipment and supplies that are being purchased.

This example does not describe a complex problem—most people would recognize that the data from this case might be of value to other domains.  It is one isolated case, viewed retrospectively, which could potentially affect two other domains.  But what if the economist had to deal with one hundred data sets per day?  Now, multiply those one hundred data sets by the number of potential domains that would be interested in any given economic data set.  Finally, put all of this in the context of “real time.”  The economic expert is now expected to maintain expertise in economics, which is a full-time endeavor, while simultaneously acquiring some level of experience in every other domain.  Based on these expectations, the knowledge requirements for effective collaboration quickly exceed the capabilities of the individual expert.
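
Rough numbers make the point.  The sketch below uses the one hundred data sets per day from the example above and assumes fifteen potentially interested domains, a figure chosen only for illustration.

```python
# Hypothetical scaling of the collaboration burden. The 100 data sets per day comes
# from the example in the text; the number of potentially interested domains is assumed.
data_sets_per_day = 100
potentially_interested_domains = 15   # assumed for illustration

relevance_judgments_per_day = data_sets_per_day * potentially_interested_domains
print(f"Cross-domain relevance judgments required per day: {relevance_judgments_per_day:,}")  # 1,500
```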

The expert is left dealing with the data through the lens of her own expertise.  She uses her domain heuristics to incorporate the data into an existing pattern, store the data into long-term memory as a new pattern, or reject the data set as an outlier.  In each of these options, the data stop with the economist instead of being shared with an expert in some other domain.  The fact that these data are not shared then becomes a critical issue in cases of analytic error.21

In hindsight, critics will say that the implications were obvious—that the crisis could have been avoided if the data had been passed to one specific expert or another.  In “real time,” however, an expert cannot know which particular data set would have value for an expert in another domain.

The Pros and Cons of Teams

One obvious solution to the paradox of expertise is to assemble an interdisciplinary team.  Why not simply make all problem areas or country-specific data available to a team of experts from a variety of domains?  This ought, at least, to reduce the pattern and heuristic biases inherent in relying on only one domain.

Ignoring potential security issues, there are practical problems with this approach.  First, each expert would have to sift through large data sets to find data specific to her expertise.  This would be inordinately time-consuming.

Second, during the act of scanning large data sets, the expert inevitably would be looking for data that fit within her area of expertise.  Imagine a chemist who comes across data that show that a country is investing in technological infrastructure, chemical supplies, and research and development (the same data that the economist analyzed in the previous example).  The chemist recognizes that these are the ingredients necessary for a nation to produce a specific chemical agent, which could have a military application or could be benign.  The chemist then meshes the data with an existing pattern, stores the data as a new pattern, or ignores the data as an anomaly.

The chemist, however, has no frame of reference regarding spending trends in the country of interest.  The chemist does not know if this is an increase, a decrease, or a static spending pattern—answers that the economist could supply immediately.  There is no reason for the chemist to know if a country’s ability to produce this chemical agent is a new phenomenon.  Perhaps the country in question has been producing the chemical agent for years and these data are part of some normal pattern of behavior.

One hope is that neither expert treats the data set as an anomaly, that both report it as significant.  Another hope is that each expert’s analysis of the data—an increase in spending and the identification of a specific chemical agent—will come together at some point.  The problem is at what point?  Presumably, someone will get both of these reports somewhere along the intelligence chain.  Of course, the individual who gets these reports may not be able to synthesize the information.  That person is subject to the same three confounding variables described earlier:  processing time, pattern bias, and heuristic bias.  Rather than solving the paradox of expertise, the problem has merely been shifted to someone else in the organization.

In order to avoid shifting the problem from one expert to another, an actual collaborative team could be built.  Why not explicitly put the economist and the chemist together to work on analyzing data?  The utilitarian problems with this strategy are obvious.  Not all economic problems are chemical and not all chemical problems are economic.  Each expert would waste an inordinate amount of time.  Perhaps one case in one hundred would be applicable to both experts; during the rest of the day, the experts would drift back to their individual domains, in part because that is what they are best at and in part just to stay busy.

Closer to the real world, the same example may also have social, political, historical, and cultural aspects.  Despite an increase in spending on a specific chemical agent, the country in question may not be politically, culturally, socially, historically, or otherwise inclined to use it in a threatening way.  There may be social data—unavailable to the economist or the chemist—indicating that the chemical agent will be used for a benign purpose.  In order for collaboration to work, each team would have to have experts from many domains working together on the same data set.

Successful teams have very specific organizational and structural requirements.  An effective team requires discrete and clearly stated goals that are shared by each team member.22 Teams require interdependence and accountability—the success of each individual depends on the success of the team as a whole and the individual success of every other team member.23

Effective teams require cohesion, formal and informal communication, cooperation, and shared mental models, or similar knowledge structures.24 While cohesion, communication, and cooperation might be facilitated by specific work practices, creating shared mental models, or similar knowledge structures, is not a trivial task.  Creating shared mental models may be possible with an air crew or a tank crew, where an individual’s role is clearly identifiable as part of a larger team effort—like landing a plane or acquiring and firing on a target.  Creating shared mental models in an intelligence team is less likely, given the vague nature of the goals, the enormity of the task, and the diversity of individual expertise.  Moreover, the larger the number of team members, the more difficult it is to generate cohesion, communication, and cooperation.  Heterogeneity can also be a challenge:  It has a positive effect on generating diverse viewpoints within a team, but requires more organizational structure than does a homogeneous team.25

Without specific processes, organizing principles, and operational structures, interdisciplinary teams will quickly revert to being just a room full of experts who ultimately drift back to their previous work patterns.  That is, the experts will not be a team at all; they will be a group of experts individually working in some general problem space.26

Looking to Technology

There are potential technological alternatives to multifaceted teams.  An Electronic Performance Support System (EPSS), for example, is a large database, coupled with expert systems, intelligent agents, and decision aids.  Applying such a system to intelligence problems might be a useful goal.  At this point, however, the notion of an integrated EPSS for large, complex data sets is more theory than practice.27 Ignoring questions about the technological feasibility of such a system, fundamental epistemological flaws present imposing hurdles.  It is virtually inconceivable that a comprehensive computational system could bypass the three confounding variables of expertise described earlier.

An EPSS, or any other computational solution, is designed, programmed, and implemented by a human expert from one domain:  computer science.  Historians will not design the “historical decision aid;” economists will not program the “economic intelligent agent;” chemists will not create the “chemical agent expert system.”  Software engineers and computer scientists will do all of that.

Computer scientists may consult with various experts during the design phase of such a system, but when it is time to sit down and write code, the programmer will follow the heuristics of computer science.  The flexibility, adaptability, complexity, and usability of the computational system will be dictated by the guidelines and rules of computer science.28 In essence, one would be trading the heuristics from dozens of domains for the rules that govern computer science.  This would reduce the problem of processing time by simplifying and linking data, and it may potentially reduce pattern bias.  But it will not reduce heuristic bias.29 If anything, it may exaggerate it by reducing all data to a binary state.
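
A hypothetical fragment of such a decision aid illustrates the point.  Nothing below describes any fielded system; it simply shows how a domain judgment ends up expressed as whatever thresholds and binary flags the programmer chose to encode.

```python
# Hypothetical fragment of a decision aid: the domain expert's nuanced judgment is
# reduced to the thresholds and binary flags the programmer chose to encode.
def flag_chemical_program(spending_increase_pct: float,
                          bought_precursors: bool,
                          built_lab_infrastructure: bool) -> bool:
    """Single yes/no flag; the 15 percent threshold is the programmer's choice."""
    return (spending_increase_pct > 15.0
            and bought_precursors
            and built_lab_infrastructure)

# A case an economist or chemist might read as ambiguous becomes a binary answer.
print(flag_chemical_program(spending_increase_pct=14.0,
                            bought_precursors=True,
                            built_lab_infrastructure=True))   # False: just under the threshold
```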

This is not simply a Luddite reaction to technology.  Computational systems have had a remarkable, positive effect on processing time, storage, and retrieval.  They have also demonstrated utility in identifying patterns within narrowly defined and highly constrained domains.  However, intelligence analysis is neither narrowly defined nor highly constrained.  Quite the opposite, it is multivariate and highly complex, which is why it requires the expertise of so many diverse fields of study.  Intelligence analysis is not something a computational system handles well.  While an EPSS, or some other form of computational system, may be a useful tool for manipulating data, it is not a solution to the paradox of expertise.

Analytic Methodologists

Most domains have specialists who study the scientific process or research methods of their discipline.  These people are concerned with the epistemology of their domain, not just philosophically but practically.  They want to know how experts in their discipline reach conclusions or make discoveries.  Rather than specializing in a specific substantive topic within their domain, these experts specialize in mastering the research and analytic methods of their domain.

In the biological and medical fields, these methodological specialists are epidemiologists.  In education and public policy, these specialists are program evaluators.  In other fields, they are research methodologists or statisticians.  Despite the label, each field recognizes that it requires experts in methodology to maintain and pass on the domain’s heuristics for problem solving and making discoveries.

The methodologist’s focus is on selecting and employing a process or processes to research and analyze data.  Specifically, the methodologist identifies the research design, the methods for choosing samples, and the tools for data analyses.  This specialist becomes an in-house consultant for selecting the process by which one derives meaning from the data, recognizes patterns, and solves problems within a domain.  Methodologists become organizing agents within their field by focusing on the heuristics of their domain and validating the method of discovery for their discipline.

The methodologist holds a unique position within the discipline.  Organizing agents are often called on by substantive experts to advise on a variety of process issues within their field because they have a different perspective than do the experts.  On any given day, an epidemiologist, for example, may be asked to consult on studies of the effects of alcoholism on a community or the spread of a virus, or to review a double-blind clinical trial of a new pharmaceutical product.  In each case, the epidemiologist is not being asked about the content of the study; rather, he is being asked to comment on the research methods and data analysis techniques used.

Well over 200 analytic methods, most from domains outside intelligence, are available to the intelligence analyst; however, few methods specific to the domain of intelligence analysis exist.30 Intelligence analysis lacks specialists whose professional training is in the process of employing and unifying the analytic practices within the field of intelligence.  Knowing how to apply methods, select one method over another, weigh disparate variables, and synthesize the results is left to the individual intelligence analysts—the same analysts whose expertise is confined to specific substantive areas and their own domains’ heuristics.

Intelligence needs methodologists to help strengthen the domain of analysis.  Such methodologists need to specialize in the processes that the intelligence domain holds to be valid.  In some fields, like epidemiology and program evaluation, methodologists are expected to be experts in a wide variety of quantitative and qualitative methods.  In other fields, the methodologists may be narrowly focused—a laboratory-based experimental methodologist, for example, or statistician.  In all cases, however, methodologists can only be effective if they are experts at the process of making meaning within their own disciplines.

In order to overcome heuristic biases, intelligence agencies need to focus personnel, resources, and training on developing intelligence methodologists.  These methodologists will act as in-house consultants for analytic teams, generate new methods specific to intelligence analysis, modify and improve existing methods of analysis, and increase the professionalization of the discipline of intelligence.

Conclusion

Intelligence analysis uses a wide variety of expertise to address a multivariate and complex world.  Each expert uses his or her own heuristics to address a small portion of that world.  Intelligence professionals have the perception that somehow all of that disparate analysis will come together at some point, either at the analytic team level, through the reporting hierarchy, or through some computational aggregation.

The intelligence analyst is affected by the same confounding variables that affect every other expert:  processing time, pattern bias, and heuristic bias.  This is the crux of the paradox of expertise.  Domain experts are needed for describing, explaining, and problem solving; yet, they are not especially good at forecasting because the patterns they recognize are limited to their specific fields of study.  They inevitably look at the world through the lens of their own domain’s heuristics.

What is needed to overcome the paradox of expertise is a combined approach that includes formal thematic teams with structured organizational principles; technological systems designed with significant input from domain experts; and a cadre of analytic methodologists.  Intelligence agencies continue to experiment with the right composition, structure, and organization of analytic teams; they budget significant resources for technological solutions; but comparatively little is being done to advance methodological science.

Advances in methodology are primarily left to the individual domains.  But relying on the separate domains risks falling into the same paradoxical trap that currently exists.  What is needed is an intelligence-centric approach to methodology, an approach that will include the methods and procedures of many domains and the development of heuristics and techniques unique to intelligence.  In short, intelligence analysis needs its own analytic heuristics designed, developed, and tested by professional analytic methodologists.  This will require using methodologists from a variety of other domains and professional associations at first, but, in time, the discipline of analytic methodology will mature into its own sub-discipline with its own measures of validity and reliability.

 


Dr. Rob Johnston is a postdoctoral research fellow at the CIA Center for the Study of Intelligence and a member of the research staff at the Institute for Defense Analyses.

1. More than 200 individuals contributed to this study.  The author is indebted to the researchers, fellows, and staff at the Center for the Study of Intelligence, the Institute for Defense Analyses, the National Military Intelligence Association, Evidence Based Research, Inc., and ANSER Inc.  Staff and students at the CIA University, the Joint Military Intelligence College, the Naval Postgraduate School, Columbia University, Georgetown University, and Yale University also contributed to the project.

2. W. Chase and H. Simon, “Perception in Chess,” Cognitive Psychology, Vol. 4, 1973, pp. 55-81.

3. American College of Radiology.  Personal communication, 2002.

4. A. Lesgold, H. Rubinson, P. Feltovich, R. Glaser, D. Klopfer, and Y. Wang, “Expertise in a Complex Skill: Diagnosing X-Ray Pictures,” M. Chi, R. Glaser, and M. Farr, eds., The Nature of Expertise (Hillsdale, NJ:  Lawrence Erlbaum Associates, 1988).

5. M. Minsky and S. Papert, Artificial Intelligence (Eugene, OR:  Oregon State System of Higher Education, 1974); J. Voss and T. Post, “On the Solving of Ill-Structured Problems,” M. Chi, R. Glaser, and M. Farr, eds., Op. Cit.

6. O. Akin, Models of Architectural Knowledge (London:  Pion, 1980); D. Egan and B. Schwartz. “Chunking in Recall of Symbolic Drawings,” Memory and Cognition, Vol. 7, 1979, pp. 149-158; K. McKeithen, J. Reitman, H. Rueter, and S. Hirtle, “Knowledge Organization and Skill Differences in Computer Programmers,” Cognitive Psychology, Vol. 13, 1981, pp. 307-325.

7. M. Chi, P. Feltovich, and R. Glaser, “Categorization and Representation of Physics Problems by Experts and Novices,” Cognitive Science, Vol. 5, 1981, pp. 121-125; M. Weiser and J. Shertz, “Programming Problem Representation in Novice and Expert Programmers,” International Journal of Man-Machine Studies, Vol. 14, 1983, pp. 391-396.

8. W. Chase and K. Ericsson, “Skill and Working Memory,” G. Bower, ed., The Psychology of Learning and Motivation (New York, NY:  Academic Press, 1982).

9. W. Chase, “Spatial Representations of Taxi Drivers,” D. Rogers and J. Slobada, eds., Acquisition of Symbolic Skills (New York, NY:  Plenum, 1983).

10. J. Paige and H. Simon, “Cognition Processes in Solving Algebra Word Problems,” B. Kleinmuntz, ed., Problem Solving (New York, NY:  Wiley, 1966).

11. J. Voss and T. Post, “On the Solving of Ill-Structured Problems,” M. Chi, R. Glaser, and M. Farr, eds., Op. Cit.

12. M. Chi, R. Glaser, and E. Rees, “Expertise in Problem Solving,” R. Sternberg, ed., Advances in the Psychology of Human Intelligence (Hillsdale, NJ:  Lawrence Erlbaum Associates, 1982); D. Simon and H. Simon, “Individual Differences in Solving Physics Problems,” R. Siegler, ed., Children’s Thinking: What Develops? (Hillsdale, NJ:  Lawrence Erlbaum Associates, 1978).

13. J. Larkin, “The Role of Problem Representation in Physics,” D. Gentner and A. Stevens, eds., Mental Models (Hillsdale, NJ:  Lawrence Erlbaum Associates, 1983).

14. C. Camerer and E. Johnson, “The Process-Performance Paradox in Expert Judgment:  How Can Experts Know so Much and Predict so Badly?” K. Ericsson and J. Smith, eds., Toward a General Theory of Expertise:  Prospects and Limit (Cambridge, UK:  Cambridge University Press, 1991).

15. H. Reichenbach, Experience and Prediction (Chicago, IL:  University of Chicago Press, 1938);  T. Sarbin, “A Contribution to the Study of Actuarial and Individual Methods of Prediction,” American Journal of Sociology, Vol. 48, 1943, pp. 593-602.

16. R. Dawes, D. Faust, and P. Meehl, “Clinical Versus Actuarial Judgment,” Science, Vol. 243, 1989, pp. 1668-1674; W. Grove and P. Meehl, “Comparative Efficiency of Informal (Subjective, Impressionistic) and Formal (Mechanical, Algorithmic) Prediction Procedures: The Clinical-Statistical Controversy,” Psychology, Public Policy, and Law, Vol. 2, No. 2, 1996, pp. 293-323.

17. R. Dawes, “A Case Study of Graduate Admissions: Application of Three Principles of Human Decision Making,” American Psychologist, Vol. 26, 1971, pp. 180-188; W. Grove and P. Meehl, Op. Cit. (see footnote 16); H. Sacks, “Promises, Performance, and Principles: An Empirical Study of Parole Decision-making in Connecticut,” Connecticut Law Review, Vol. 9, 1977, pp. 349-422; T. Sarbin, “A Contribution to the Study of Actuarial and Individual Methods of Prediction,” American Journal of Sociology, Vol. 48, 1943, pp. 593-602; J. Sawyer, “Measurement and Prediction, Clinical and Statistical,” Psychological Bulletin, Vol. 66, 1966, pp. 178-200; W. Schofield and J. Garrard, “Longitudinal Study of Medical Students Selected for Admission to Medical School by Actuarial and Committee Methods,” British Journal of Medical Education, Vol. 9, 1975, pp. 86-90.

18. L. Goldberg, “Simple Models or Simple Processes?  Some Research on Clinical Judgments,” American Psychologist, Vol. 23, 1968, pp. 483-496; L. Goldberg, “Man versus Model of Man: A Rationale, Plus some Evidence, for a Method of Improving on Clinical Inferences,” Psychological Bulletin, Vol. 73, 1970, pp. 422-432;  D. Leli and S. Filskov, “Clinical-Actuarial Detection of and Description of Brain Impairment with the Wechsler-Bellevue Form I,” Journal of Clinical Psychology, Vol. 37, 1981, pp. 623-629.

19. J. Fries, et al., “Assessment of Radiologic Progression in Rheumatoid Arthritis:  A Randomized, Controlled Trial,” Arthritis Rheum., Vol. 29, No. 1, 1986, pp. 1-9.

20. J. Evans, Bias in Human Reasoning:  Causes and Consequences (Hove, UK:  Lawrence Erlbaum Associates, 1989); R. Heuer, Psychology of Intelligence Analysis (Washington, DC:  Center for the Study of Intelligence, 1999); D. Kahneman, P. Slovic, and A. Tversky, Judgment Under Uncertainty:  Heuristics and Biases (Cambridge, UK:  Cambridge University Press, 1982); A. Tversky and D. Kahneman, “The Belief in the ‘Law of Small Numbers,’” Psychological Bulletin, Vol. 76, 1971, pp. 105-110; A. Tversky and D. Kahneman, “Judgment Under Uncertainty:  Heuristics and Biases,” Science, Vol. 185, 1974, pp. 1124-1131.

21. L. Kirkpatrick, Captains Without Eyes:  Intelligence Failures in World War II (London:  MacMillan Company, 1969); F. Shiels, Preventable Disasters:  Why Governments Fail (Savage, MD:  Rowman and Littlefield,  1991); J. Wirtz, The Tet Offensive:  Intelligence Failure in War (Ithaca, NY:  Cornell University Press, 1991); R. Wohlstetter, Pearl Harbor:  Warning and Decision (Stanford, CA:  Stanford University Press, 1962).

22. D. Cartwright and A. Zander, Group Dynamics:  Research and Theory (New York, NY:  Harper & Row, 1960); P. Fandt, W. Richardson, and H. Conner, “The Impact of Goal Setting on Team Simulation Experience,” Simulation and Gaming, Vol. 21, No. 4, 1990, pp. 411-422; J. Harvey and C. Boettger, “Improving Communication within a Managerial Workgroup,” Journal of Applied Behavioral Science, Vol. 7, 1971, pp.164-174.

23. M. Deutsch, “The Effects of Cooperation and Competition Upon Group Process,” D. Cartwright and A. Zander, eds., Op. Cit.; D. Johnson and R. Johnson, “The Internal Dynamics of Cooperative Learning Groups,” R. Slavin, S. Sharan, S. Kagan, R. Hertz-Lazarowitz, C. Webb, and R. Schmuck, eds., Learning to Cooperate, Cooperating to Learn (New York, NY:  Plenum, 1985); D. Johnson, G. Maruyama, R. Johnson, D. Nelson, and L. Skon, “Effects of Cooperative, Competitive, and Individualistic Goal Structure on Achievement: A Meta-Analysis,” Psychological Bulletin, Vol. 89, No. 1, 1981, pp. 47-62; R. Slavin, “Research on Cooperative Learning: Consensus and Controversy,” Educational Leadership, Vol. 47, No. 4, 1989, pp.52-55; R. Slavin, Cooperative Learning (New York, NY:  Longman, 1983).

24. J. Cannon-Bowers, E. Salas, S. Converse, “Shared Mental Models in Expert Team Decision Making,” N. Castellan, ed., Current Issues in Individual and Group Decision Making (Hillsdale, NJ:  Lawrence Erlbaum Associates, 1983); L. Coch and J. French, “Overcoming Resistance to Change,” D. Cartwright and A. Zander, eds., Op. Cit.; M. Deutsch, “The Effects of Cooperation and Competition Upon Group Process,” D. Cartwright and A. Zander, eds., Group Dynamics:  Research and Theory (New York, NY:  Harper & Row, 1960); L. Festinger, “Informal Social Communication,” D. Cartwright and A. Zander, eds., Op. Cit.; D. Johnson, R. Johnson, A. Ortiz, and M. Stanne, “The Impact of Positive Goal and Resource Interdependence on Achievement, Interaction, and Attitudes,” Journal of General Psychology, Vol. 118, No. 4, 1996, pp. 341-347; B. Mullen and C. Copper, “The Relation Between Group Cohesiveness and Performance:  An Integration,” Psychological Bulletin, Vol. 115, 1994, pp. 210-227; W. Nijhof and P. Kommers, “An Analysis of Cooperation in Relation to Cognitive Controversy,” R. Slavin, S. Sharan, S. Kagan, R. Hertz-Lazarowitz, C. Webb, and R. Schmuck, eds., Learning to Cooperate, Cooperating to Learn (New York, NY:  Plenum, 1995); J. Orasanu, “Shared Mental Models and Crew Performance,” Paper presented at the 34th annual meeting of the Human Factors Society, Orlando, FL, 1990; S. Seashore, Group Cohesiveness in the Industrial Workgroup (Ann Arbor, MI:  University of Michigan Press, 1954).

25. T. Mills, “Power Relations in Three-Person Groups,” D. Cartwright and A. Zander, eds., Op. Cit.; L. Molm, “Linking Power Structure and Power Use,” K. Cook, ed., Social Exchange Theory (Newbury Park, CA:  Sage, 1987); V. Nieva, E. Fleishman, and A. Rieck, Team Dimensions: Their Identity, Their Measurement, and Their Relationships, RN 85-12 (Alexandria, VA:  US Army Research Institute for the Behavioral and Social Sciences, 1985); G. Simmel, The Sociology of Georg Simmel, K. Wolff, trans. (Glencoe, IL:  Free Press, 1950).

26. R. Johnston, Decision Making and Performance Error in Teams:  Research Results (Arlington, VA:  Defense Advanced Research Projects Agency, 1997); J. Meister, “Individual Perceptions of Team Learning Experiences Using Video-Based or Virtual Reality Environments,” Dissertation Abstracts International, UMI No. 9965200, 2000.

27. R. Johnston, “Electronic Performance Support Systems and Information Navigation,” Thread, Vol. 2, No. 2, 1994, pp. 5-7.

28. R. Johnston and J. Fletcher, A Meta-Analysis of the Effectiveness of Computer-Based Training for Military Instruction (Alexandria, VA:  Institute for Defense Analyses, 1998).

29. J. Fletcher and R. Johnston, “Effectiveness and Cost Benefits of Computer-Based Decision Aids for Equipment Maintenance,” Computers in Human Behavior, Vol. 18, 2002, pp. 717-728.

30. Exceptions include: S. Feder, “FACTIONS and Policon:  New Ways to Analyze Politics,” H. Westerfield, ed., Inside CIA’s Private World (New Haven, CT:  Yale University Press, 1995); R. Heuer, Psychology of Intelligence Analysis (Washington, DC:  Center for the Study of Intelligence, 1999); R. Hopkins, “Warnings of Revolution:  A Case Study of El Salvador,” TR 80-100012 (Washington, DC:  Center for the Study of Intelligence); J. Lockwood and K. Lockwood, “The Lockwood Analytical Method for Prediction (LAMP),” Defense Intelligence Journal, Vol. 3, No. 2, 1994, pp. 47-74; J. Pierce, “Some Mathematical Methods for Intelligence Analysis,” Studies in Intelligence, Summer, Vol. 21, 1977, pp. 1-19 (declassified); E. Sapp, “Decision Trees,” Studies in Intelligence, Winter, Vol. 18, 1974, pp. 45-57 (declassified); J. Zlotnick, “Bayes’ Theorem for Intelligence Analysis,” H. Westerfield, ed., Op. Cit.

