REPLY TO THE NATIONAL RESEARCH COUNCIL STUDY ON PARAPSYCHOLOGY

Document Type: 
Collection: 
Document Number (FOIA) /ESDN (CREST): 
CIA-RDP96-00789R002200420001-1
Release Decision: 
RIFPUB
Original Classification: 
K
Document Page Count: 
34
Document Creation Date: 
November 4, 2016
Document Release Date: 
October 14, 1998
Sequence Number: 
1
Case Number: 
Publication Date: 
June 20, 1988
Content Type: 
REPORT
File: 
CIA-RDP96-00789R002200420001-1.pdf (1.32 MB)
Body: 
Approved For Release 2002/05/17 : CIA-RDP96-00789R002200420001-1

REPLY TO THE NATIONAL RESEARCH COUNCIL STUDY ON PARAPSYCHOLOGY

JOHN A. PALMER
CHARLES HONORTON
JESSICA UTTS

A SPECIAL REPORT PREPARED FOR THE BOARD OF DIRECTORS OF THE PARAPSYCHOLOGICAL ASSOCIATION, INC.

Copyright © 1988 by the Parapsychological Association, Inc.

Parapsychological Association, Inc.
P. O. Box 12236
Research Triangle Park, NC 27709

Contents

1. OVERVIEW
2. ANATOMY OF PREJUDGMENT: COMPOSITION OF THE NRC PARAPSYCHOLOGY SUBCOMMITTEE
3. EVALUATION ISSUES
4. TREATMENT OF THE SCIENTIFIC EVIDENCE
   4.1 Psi Ganzfeld Research
   4.2 Random Number Generator (RNG) Experiments
   4.3 Backster's "Primary Perception"
5. POTENTIAL APPLICATIONS
6. QUALITATIVE EVIDENCE AND SUBJECTIVE BIAS
7.
CONCLUSIONS
REFERENCES
APPENDIX
   On the NRC Committee's Response to Harris and Rosenthal's Postscript
THE AUTHORS

1. Overview

On December 3, 1987, the National Research Council (NRC) of the National Academy of Sciences convened a well attended press conference in Washington, D.C., to announce the release of its report, Enhancing Human Performance (Druckman & Swets, 1988). This report is the result of a study commissioned by the U.S. Army to assess various techniques of enhancing human performance. One of the six areas addressed in the report is parapsychology (research involving anomalous communication processes such as extrasensory perception and psychokinesis, collectively known as psi phenomena). At the NRC press conference, John A. Swets, the Committee Chairman, stated that "Perhaps our strongest conclusions are in the area of parapsychology." Indeed, the report concludes that "The Committee finds no scientific justification from research conducted over a period of 130 years for the existence of parapsychological phenomena" (p. 22).

After carefully reviewing the NRC report, we have found that although it is couched in scientific language, it does not represent an unbiased scientific assessment of parapsychology and that the above conclusion is totally unwarranted. In particular:

• The single chapter of the report on parapsychology is restricted to four selected areas of research conducted during the past 20 years.
The scope of the review is limited to less than 10% of the systematic scientific effort in parapsychology, which began with the Duke University work of J. B. Rhine in the 1930's, and no explanation is provided concerning the purported "130 year" history of research in parapsychology.

• The two principal evaluators of parapsychological research for the Committee, Ray Hyman and James Alcock, were publicly committed to a negative position on parapsychology at the time the Committee was formed. Both are members of the Executive Council of an organization well-known for its zealous crusade against parapsychology. Yet no attempt was made to balance the Committee with scientists who have taken a more positive or a neutral position on parapsychology.

• Even within this limited scope of review, the Committee's method of assessing parapsychology violates its own stated guidelines for research evaluation, which specify the identification and assessment of plausible alternatives. With regard to the better parapsychological experiments, the Committee admits, "We do not have a smoking gun, nor have we demonstrated a plausible alternative" (p. 200).

• The report selectively omits important findings favorable to parapsychology contained in one of the background papers commissioned for the Committee, while liberally citing from other papers supportive of the Committee's position. The principal author of the favorable paper, an eminent Harvard psychologist, was actually asked by the Chairman of the NRC Committee to withdraw his favorable conclusions.

Obviously the Committee's conclusion far outstrips the scope of its investigation. What it accomplishes is to suggest that parapsychology is a field that has had its day and failed to deliver the goods, which in turn implies that future research is not likely to be fruitful.
It thus has the effect of discouraging future research and the funding of such research. The Committee's overall conclusion, as well as many other statements in the report, encourage the reader to ignore the fact that parapsychologists have accumulated a large body of experimental findings that (a) suggest important new means of human interaction with their environment and (b) cannot be plausibly attributed to known conventional mechanisms.

2. Anatomy of Prejudgment: Composition of the NRC Parapsychology Subcommittee

"... Belief in paranormal phenomena is still growing, and the dangers to our society are real.... [I]n these days of government budget-cutting the Defense Department may be spending millions of tax dollars on developing 'psychic arms....' Please help us in this battle against the irrational. Your contribution, in any amount, will help us grow and be better able to combat the flood of belief in the paranormal...."

-- From a fund-raising letter from the Committee for the Scientific Investigation of Claims of the Paranormal (CSICOP), dated March 23, 1985, and co-signed by Ray Hyman, Chairman of the NRC Subcommittee on Parapsychological Techniques.

In a background paper solicited by the NRC Committee, Griffin (1988) outlines the difficulties in trying to evaluate evidence objectively when one is committed to a belief system. He also emphasizes the strong desire to protect those beliefs. "Probably the most powerful force motivating our desire to protect our beliefs -- from others' attacks, from our own questioning, and from the challenge of new evidence -- is commitment ... people will often react to disconfirming evidence by strengthening their beliefs and creating more consonant explanations. This drive to avoid dissonance is especially strong when the belief has led to public commitment" (p. 33).
While it was Griffin's intent to show how parapsychologists' beliefs can lead them to ignore certain evidence, we contend that the NRC report exemplifies the Committee's need to protect their beliefs. Both Hyman, Chairman of the NRC Parapsychology Subcommittee, and Alcock (1988), author of the only paper specifically on Parapsychological Techniques to be commissioned by the Committee, belong to CSICOP's Executive Council and are among its most active members. Hyman has been Chairman of CSICOP's Parapsychology Subcommittee since its inception.

CSICOP is well known for its efforts to debunk parapsychology. It was founded in 1976 by philosopher Paul Kurtz and sociologist Marcello Truzzi, when "Kurtz became convinced that the time was ripe for a more active crusade against parapsychology and other pseudo-sciences" (Pinch and Collins, 1984, p. 527). Truzzi resigned in 1977 "because of what he saw as the growing danger of the committee's excessive negative zeal at the expense of responsible scholarship" (Collins & Pinch, 1982, p. 42). He has since stated that "CSICOP has proclaimed its major aim to be inquiry while actually being centrally concerned with advocacy (i.e., discrediting claims of the paranormal)" (Truzzi, 1982, p. 4). In their own literature, CSICOP makes clear their belief that claims for paranormal phenomena are unreasonable: "Why the sudden explosion of interest, even among some otherwise sensible people, in all sorts of paranormal 'happenings'?" (CSICOP brochure, emphasis added).

Both Hyman and Alcock were members of the CSICOP Executive Council in 1985, when the NRC Committee was formed, and there was abundant evidence at that time that both men had publicly committed themselves to the belief that scientists who were convinced by the evidence for psychic phenomena were fooling themselves.
For example, Hyman (1985b) wrote:

"The total accumulation of 130 years' worth of psychical investigation has not produced any consistent evidence for paranormality that can withstand acceptable scientific scrutiny. What should be interesting for the scientific establishment is not that there is a case to be made for psychic phenomena, but, rather, that the majority of scientists who decided to seriously investigate believed that they had made such a case. How can it be that so many scientists, including several Nobel Prize winners, have convinced themselves that they have obtained solid evidence for paranormal phenomena?" (p. 7, emphasis in original)

Alcock (1981) expressed the same view somewhat more colorfully:

"Parapsychology is indistinguishable from pseudo-science, and its ideas are essentially those of magic. This does not of course mean that psi does not exist, for one cannot demonstrate the non-existence of psi any more than one can prove the non-existence of Santa Claus. But let there be no mistake about the empirical evidence: There is no evidence that would lead the cautious observer to believe that parapsychologists and paraphysicists are on the track of a real phenomenon, a real energy or power that has so far escaped the attention of those people engaged in "normal" science. There is considerable reason, on the other hand, to believe that human desire and self-delusion are responsible for the durability of parapsychology as a formal endeavor" (p. 196, emphasis in original).

Given the attitudes of these two individuals, the biased nature of the NRC report was easily predictable. The Army presumably expected the National Research Council to provide sound and unbiased advice regarding this controversial subject. Why then was the Parapsychology Subcommittee composed the way it was?
Almost half a million dollars of taxpayers' money was spent on the NRC report. These taxpayers, as well as the scientific community and the Army itself, deserve an answer to this question.

3. Evaluation Issues

"Ruling out alternative explanations or mechanisms requires intimate knowledge of a research area. Historical findings and critical commentary are needed to identify alternatives, determine their plausibility, and judge how well they have been ruled out in particular sets of experiments." -- STANDARDS FOR EVALUATING BASIC RESEARCH, Enhancing Human Performance, p. 25, italics added.

With respect to parapsychology, the Committee admits that "We do not have a smoking gun, nor have we demonstrated a plausible alternative" (p. 200). They go on, however, to suggest that it is not necessary for them to do so in order to justify their wholesale dismissal of the evidence. What they offer instead is the metaphor of a "dirty test tube," which they describe as follows:

"... the critic does not claim that the results have been produced by some artifact, but instead points out that the results have been obtained under conditions that fail to meet generally accepted standards. The gist of this type of criticism is that test tubes should be clean when doing careful and important scientific research. To the extent that the test tubes were dirty, it is suggested that the experiment was not carried out according to acceptable standards. Consequently, the results remain suspect even though the critic cannot demonstrate that the dirt in the test tubes was sufficient to have produced the outcome. Hyman's critique of the Ganzfeld psi research and Alcock's [background] paper on remote viewing and random number generator research are examples of this type of criticism" (pp. 199-200, italics added).
This approach directly contradicts the Committee's own guidelines for the evaluation of scientific evidence, as illustrated by the quote at the beginning of this section. Argument by metaphor is no substitute for systematic scientific analysis and no justification is provided for this singular violation of the Committee's own standards.

It is clear that the Committee's failure to identify plausible alternative explanations for many of the parapsychological studies it reviews is not for want of trying. For example, Hyman attempted to demonstrate a statistical relationship between putative methodological flaws and study outcomes in the Ganzfeld experiments. In his initial analysis, Hyman (1982) claimed an almost perfect linear correlation between his flaw assignments and the study outcomes. This analysis contained a large number of errors that Hyman later attributed to typing errors (communication to Honorton, November 29, 1982). His next published critique (Hyman, 1985a) was based on a complex multivariate analysis that was subsequently shown to be meaningless (Saunders, 1985). Finally, Hyman agreed that "the present data base does not support any firm conclusion about the relationship between flaws and study outcome" (Hyman & Honorton, 1986, p. 353).

The Committee's approach to evaluating evidence for parapsychological processes is scientifically sterile. Its resort to the "dirty test tube" metaphor provides an unrestricted license for the wholesale dismissal of research findings on the basis of vague and ad hoc "weaknesses." Whole domains of research are dismissed through allusions to "inadequate documentation," "inadequate controls," "overcomplicated experimental setups," and "lack of experimental rigor."
Yet, if the measure of good scientific methodology is its capacity to rule out plausible alternatives, such attributions are clearly inappropriate. No scientific experiment is so pristine that it can withstand the efforts of a sufficiently determined critic, but the fact remains that by any reasonable standard the methodology of many successful psi experiments is fundamentally sound.

4. Treatment of the Scientific Evidence

Despite the impression given by the Committee's overall conclusion, quoted above in the Overview, their assessment of the scientific evidence for parapsychological processes is restricted to a few selected areas of research. In this section, we respond to the Committee's treatment of research in these selected areas. Two of the research domains considered, the psi Ganzfeld and random number generator (RNG) experiments, have been the subject of recent meta-analytic investigations (Honorton, 1985; Hyman, 1985a; Radin & Nelson, 1987), and like a number of other psi research areas that are ignored by the Committee, these domains have been found to have replication rates comparable to those in other areas of psychology. Since the Committee members are themselves psychologists, we were surprised by their characterization of replicability in parapsychology:

"The type of replicability that has been claimed so far is the possibility of obtaining significant departures from the chance baseline in only a proportion of the experiments, which is a kind of replicability quite different from the consistent and lawful patterns of covariation found in other areas of inquiry" (pp. 174-175, italics added).
Contrary to the Committee's statement, less than perfect replicability is the rule and not the exception in most areas of the behavioral sciences, and this is one of the reasons why meta-analytic techniques for assessing whole areas of behavioral research have become so widely used. It is not uncommon in psychology for conclusions to be drawn from studies where replications have not even been attempted. Replicability problems have been widely acknowledged in recent years by experts in areas as diverse as the neurochemistry of learning and memory (Dunn, 1980) and medical studies of placebo efficacy (Moerman, 1981). Even such "hard-science" areas as laser construction have sometimes been beset by replication problems (Collins, 1974).

As for "consistent and lawful patterns of covariation," the limited scope of the Committee's review, which focuses exclusively on evidence for the existence of parapsychological effects in a few selected areas, completely ignores parapsychological research oriented toward identification of patterns of covariation of psi performance with other variables. There is, however, a considerable body of such research. While a detailed discussion of this work is beyond the scope of the present paper, three such areas may be briefly mentioned:

• Extraversion/Introversion. Beginning in the early 1940's, numerous attempts have been made to correlate experimental ESP performance with individual differences in subjects' personality and attitudinal characteristics.
Palmer (1977), in a review of 33 experiments involving the relationship between ESP performance and standard psychometric measures of extraversion/introversion, found that extraverts scored higher than introverts in 70% of these experiments (p = .017), and all eight of the significant relationships showed superior ESP performance by extraverts (p = .0039).

• Belief in ESP. In a review of 17 experiments testing the hypothesis that subjects who believed in ESP would show superior ESP performance compared to subjects who did not believe in ESP, Palmer (1971) found that the predicted pattern occurred in 76% of the experiments (p = .024), and all six of the experiments with individually significant outcomes were in the predicted direction (p = .015).

• Hypnosis. Schechter (1984) reported a meta-analysis of studies comparing the effects of hypnotic induction and nonhypnosis control procedures on performance in ESP card-guessing tasks. There were 25 experiments by investigators in 10 different laboratories. Consistently superior ESP performance was found to occur in the hypnotic induction conditions compared to the control conditions of these experiments (p = .006).

Detailed evaluations of these and other areas involving the systematic covariation of psi performance with other variables are presented elsewhere (e.g., Eysenck, 1967; Honorton, 1977; Johnson & Haraldsson, 1984; Palmer, 1985; Schmeidler & McConnell, 1973/1958; Stanford, 1987).

4.1 Psi Ganzfeld Research

ESP research using Ganzfeld stimulation, a mild form of perceptual isolation, has its origin in earlier research linking psi effects to dreaming, hypnosis, and other internal attention states which are characterized by functional sensory deprivation.
A typical Ganzfeld experiment involves a sender and a receiver, each sequestered in separate acoustically-isolated rooms. The receiver, undergoing perceptual isolation, attempts to describe a randomly selected target picture presented to the physically-remote sender by providing a continuous verbal report of ongoing imagery and associations. Upon completion of the session, the receiver, on a blind basis, attempts to identify the actual target from a judging pool containing the target and three or more control pictures.

A meta-analysis of 28 psi Ganzfeld studies by investigators in ten different laboratories (Honorton, 1985) found a combined z-score of 6.6, a result associated with a probability of less than 1 part in a billion. Independently significant outcomes have been reported by six of the ten investigators, and the overall significance is not dependent on the work of any one or two investigators. Moreover, in order to account for the observed experimental results on the basis of selective reporting, it would be necessary to assume that there were more than 400 unreported studies averaging chance results.

The NRC Committee's assessment of ESP Ganzfeld research, which is based on Hyman's (1985a) critique, clearly illustrates its systematic failure to cite information unfavorable to its case in evaluating parapsychology. For example, while Hyman has subsequently conceded "that there is an overall significant effect in [the ESP Ganzfeld] data base that cannot reasonably be explained by selective reporting or multiple analysis" (Hyman & Honorton, 1986, p. 351) and that "significant outcomes have been produced by a number of different investigators" (p. 352), neither of these important points is mentioned anywhere in the Committee's report.
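The Ganzfeld figures quoted above can be checked with a few lines of arithmetic. The sketch below is our own illustration, not code from Honorton (1985): it assumes the combined z was obtained by a Stouffer-style combination across the 28 studies, and it uses Rosenthal's fail-safe-N formula for the selective-reporting ("file drawer") estimate.

```python
# Illustrative sketch (our own, with assumptions noted above), reproducing
# the Ganzfeld meta-analysis arithmetic from a combined z of 6.6 over 28 studies.
import math

def upper_tail_p(z):
    """One-tailed probability of exceeding z under the standard normal."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def fail_safe_n(combined_z, k, z_crit=1.645):
    """Rosenthal's fail-safe N: how many unreported chance-level studies
    would be needed to pull the combined result down to p = .05 (one-tailed)."""
    sum_z = combined_z * math.sqrt(k)   # Stouffer: z_comb = sum(z) / sqrt(k)
    return (sum_z / z_crit) ** 2 - k

p = upper_tail_p(6.6)        # probability for the combined z of 6.6
n_fs = fail_safe_n(6.6, 28)  # file-drawer estimate for the 28-study database

print(f"p = {p:.1e}")              # well below 1 part in a billion
print(f"fail-safe N = {n_fs:.0f}")  # more than 400 unreported chance studies
```

Under these assumptions the fail-safe value comes out near 420, consistent with the "more than 400 unreported studies" figure in the text.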
The Committee drew heavily on Palmer's (1985) report to the Army, which includes reviews of the major research projects it evaluated. Palmer made several criticisms of these projects and the Committee cites these favorably. However, Palmer also made several criticisms of points made by other critics of the experiments (including a critique of Hyman's flaw analyses of the Ganzfeld research); these criticisms were consistently ignored.

A particularly revealing example of the Committee's selective bias is its treatment of a background paper the Committee solicited from Monica Harris and Robert Rosenthal of Harvard University (Harris & Rosenthal, 1988). Rosenthal is a leading social science methodologist and a pioneer in the development of meta-analytic techniques for evaluating entire research domains. Harris and Rosenthal have no prior involvement with parapsychology, nor have they taken a public position on this controversial subject. They undertook a comparative study of the major topics reviewed by the Committee and concluded that "only the Ganzfeld ESP studies [the only psi studies they evaluated] regularly meet the basic requirements of sound experimental design" (p. 53). On a 25-point scale of "overall quality," the Ganzfeld experiments were given a rating of 19, whereas the other (nonparapsychological) areas reviewed received ratings from 3 to 13. This relative weighting of research quality is diametrically opposed to that of the Committee.

In its evaluation of parapsychology, the Parapsychology Subcommittee extensively cites the background papers of Alcock (1988) and Griffin (1988), with whose conclusions it agrees, but it does not cite the Harris and Rosenthal paper even once.
(Their paper is, however, cited elsewhere in the report, in the evaluation of other, nonparapsychological, areas by the Subcommittee on Accelerated Learning.) Incredibly, at one stage of the process, John Swets, Chairman of the Committee, actually phoned Rosenthal and asked him to withdraw the parapsychology section of his paper. When Rosenthal declined, Swets and Druckman then requested that Rosenthal respond to criticisms that Hyman had included in a July 30, 1987 letter to Rosenthal.

If the Committee considered Hyman's critique of the Harris-Rosenthal paper to be effective, why did they not simply challenge the latter in the report itself? The overall behavior of the Committee suggests that it was eager to give Rosenthal's dissenting views as little exposure as possible.

In response to Hyman's letter, Harris and Rosenthal prepared a Postscript to their paper. Rosenthal conducted additional analyses of the putative relationship between psi Ganzfeld study outcomes and potential methodological flaws:

"The heart of the matter is the relationship of flaws to research results and that is what our analyses are designed to investigate. In a 1986 manuscript Hyman suggested that the relationship of flaws to study outcomes should be examined in a multivariate manner. Accordingly, that is the nature of our analyses in our first pass effort to examine the likelihood that methodological flaws are driving the results of the Ganzfeld studies to an appreciable degree" (Harris & Rosenthal Postscript, pp. 1-2).

Using Hyman's own most recent flaw ratings, Rosenthal failed to find any significant relationships between flaws and ESP significance levels and effect sizes in each of two multivariate analyses.
Harris and Rosenthal concluded:

"Our analysis of the effects of flaws on study outcome lends no support to the hypothesis that Ganzfeld research results are a significant function of the set of flaw variables. In addition, a series of 10 new studies designed to control for earlier presumed flaws yielded results quite consistent with the original set of 28 studies" (Harris & Rosenthal Postscript, p. 3).

The Committee's response to the Harris and Rosenthal Postscript became available to us after the completion of this report. Our comments on the Committee's response are presented in the Appendix.

4.2 Random Number Generator (RNG) Experiments

Psi research with random number generators (RNGs) involves attempts by subjects to introduce a bias in the normally random output of an electronic device, solely by intention. The randomness of the device is typically based on radioactive decay, electronic diode noise, or randomly seeded pseudorandom sources. Nearly all of these experiments involve a variety of controls against experimental error. These include automated data recording, trial-by-trial oscillation of the target definition, alternation of subjects' attempts to produce high or low scores, and control runs in which large numbers of RNG trials are collected without attempts to influence the device. A comprehensive meta-analytic review of the RNG research literature encompassing all known RNG studies between 1959 and 1987 has been reported by Radin and Nelson (1987), comprising over 800 experimental and control studies conducted by a total of 68 different investigators. The overall z-score for the 597 experimental series was 15.76 (p < 10^-35), while 235 control series yielded an overall z-score of -0.67, a result well within the range of chance fluctuation.
In order to account for the observed experimental results on the basis of selective reporting, it would be necessary to assume that there were more than 50,000 unreported studies averaging chance results.

The NRC Committee concedes that the RNG research cannot reasonably be explained by chance (p. 207). Although it criticizes the methods used to test the randomness of these devices under control conditions, the Committee admits that "the critics have not specified any plausible mechanisms that would account for the obtained differences between the experimental and control trials" (p. 187).

In contrast to the Radin and Nelson meta-analysis, the Committee's discussion of the RNG research is limited exclusively to the work of two investigators, Helmut Schmidt of the Mind Science Foundation in San Antonio and Robert Jahn of the Princeton University School of Engineering. While the Committee is correct in stating that Schmidt and Jahn have been the two largest contributors to the RNG research literature, the overall significance of this research area is not dependent on the contribution of these two investigators. When the work of Schmidt and Jahn is removed, the overall z-score for the remaining 66 investigators is still 5.48, a result that is associated with a probability of less than 1 in 40 million (Radin, May, and Thompson, 1985, p. 216).

Radin and Nelson evaluated the impact of potential flaws on study outcomes by assigning to each experiment a single quality weight based upon 16 criteria derived in part from published criticisms of RNG studies by major critics including Hyman and Alcock. These quality criteria fell into four categories: RNG integrity, data integrity, statistical integrity, and procedural integrity. No significant relationship was found between research quality and study outcome.
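The headline RNG statistics above can be cross-checked in the same way. This sketch is our own illustration, not code from Radin and Nelson (1987): it assumes a Stouffer-style combined z across the 597 experimental series and applies Rosenthal's fail-safe-N formula (z_crit = 1.645 for p = .05, one-tailed) for the file-drawer estimate, along with a normal-tail conversion for the z of 5.48 quoted for the 66 investigators other than Schmidt and Jahn.

```python
# Illustrative sketch (our own, with assumptions noted above): normal-tail
# probabilities and a fail-safe N for the RNG meta-analysis figures.
import math

def upper_tail_p(z):
    """One-tailed probability of exceeding z under the standard normal."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# z = 5.48 for the database with Schmidt's and Jahn's work removed:
p_rest = upper_tail_p(5.48)
odds = 1 / p_rest                    # comes out below the 1-in-40-million bound

# Fail-safe N for the full database (combined z = 15.76 over 597 series):
sum_z = 15.76 * math.sqrt(597)       # Stouffer: z_comb = sum(z) / sqrt(k)
n_fs = (sum_z / 1.645) ** 2 - 597    # unreported null studies needed for p = .05

print(f"p (without Schmidt & Jahn) = {p_rest:.1e}  (about 1 in {odds:,.0f})")
print(f"fail-safe N = {n_fs:,.0f}")  # more than 50,000 unreported studies
```

Under these assumptions the fail-safe value comes out above 50,000, matching the selective-reporting figure given for this database.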
Moreover, the unweighted and quality-weighted effect sizes were nearly identical.

The Committee refers to an unnamed physicist "who claims to have several years of experience in constructing and testing random number devices" who told them "it is quite possible, under some circumstances, for the human body to act as an antenna and, as a result, possibly bias the output" (p. 190). Inexplicably, the Committee makes no attempt to document this claim which, if true, could conceivably provide a plausible alternative to some RNG study outcomes. In view of the Committee's central mission for the Army, it is odd that no attempt was made to follow this up, since the military's interest in RNG effects is related to human interaction with delicate electronics. If this unnamed source's claim were valid, it could have important implications, irrespective of the "paranormality" of the effect. Yet the Committee was content to simply repeat this undocumented assertion for the purpose of casting doubt on the RNG research. Nor does the Committee acknowledge the fact that the purported antenna effect, even if it were shown to be valid in some instances, is totally irrelevant to many of the RNG studies (including studies by Schmidt and Jahn), such as those using prerecorded targets or pseudorandom sources and studies in which the target definition is oscillated on a trial-by-trial basis.

The Committee favorably cites one RNG experiment by May, Humphrey, and Hubbard (1980b), but they downgrade its importance because of its "marginal" level of significance (p = .029). However, this criticism is completely spurious. The May et al. experiment used a standard sequential sampling technique (Wald, 1947) which was
e;N02/05/17: CIA-RDP96-00789R002200420001115 designed not to achieve a highly significant p-value in a fixed number of trials but to reach a preset significance level (p _< .05) in as few trials as possible and then stop. Sequential sampling was a primary feature of the May et al., experiment and it is inconceivable that this fact could be over- looked by anyone reading their report. In order to avoid problems in- volving optional stopping, the analysis of this experiment, which was specified in advance (May, Humphrey & Hubbard, 1980a), was based on the number of subjects who successfully met the sequential sampling criteria, and the reported significance level represents the probability that two or more subjects would meet these criteria. While the Commit- tee complains that "only two of the seven subjects produced significant results," they fail to mention that this success rate is nearly six times that expected by chance. The Committee fails to cite a number of other experiments in which their criticisms are demonstrably inapplicable. A prime example is the recent experiment of Schmidt, Morris, and Rudolph (1986), the protocol for which was published prior to the beginning of the experiment (Schmidt, Morris & Rudolph, 1982). An important control feature of this experiment, which used prerecorded targets, is that responsibility for the test was shared by three different investigators in two inde- pendent laboratories. The Committee's failure to mention this experi- ment is especially surprising because Alcock (1988) who reviewed the experimental report for the Committee and even had access to the per- tinent raw data admitted that he could find nothing seriously wrong with The Committee alludes to the possibility of data tampering by sub- jects in RNG experiments, but it offers no suggestions as to how such tampering could have been accomplished. 
Subject fraud has never been demonstrated in RNG research and we are aware of no evidence that would justify suspicion of fraud in any of the experiments under review. Moreover, one of the 16 quality criteria employed in the Radin/Nelson meta-analysis was "Unselected Subjects": studies using the experimenter, self-proclaimed "psychics," or otherwise special subjects were penalized. No significant difference was found between studies using unselected and selected subjects.

4.3 Backster's "Primary Perception"

Many readers of the Committee's report who are not familiar with the technical literature of parapsychology might be impressed by the fact that the Committee's critique of parapsychology is based not only on its examination of selected portions of the research literature, but also upon "experimental work that the committee actually witnessed by visiting a parapsychological laboratory" (p. 169; italics added). Statements of this sort occur in several places in the report. In fact, the experiment witnessed by the Committee was not that of a parapsychologist, nor did it take place in a "parapsychological laboratory." The experiment was that of a polygraph specialist, Cleve Backster, and took place at the Backster Research Foundation. Neither Backster nor his institution has ever had any affiliation with scientific parapsychology. Since all attempts by serious investigators to replicate his initial claim have been unsuccessful, Backster's claims are not taken seriously within mainstream parapsychology. Nevertheless, the Committee's characterization, quoted above, encourages readers to assume that Backster's work is representative of parapsychological research in general.
This assumption is strongly reinforced by the fact that the Committee devotes nearly as much space to Backster as it does to the RNG research, and the Backster claim receives almost twice as much space as the Ganzfeld research domain. Even though the Committee acknowledges that Backster's research is "at a far less developed stage" (p. 193), the fact that it is treated in the same format as this other research and is given so much attention creates the impression that it is representative of parapsychological research in general. It most certainly is not. Backster's research illustrates a lack of sensitivity to the needs of proper experimental control that exceeds even the most problematic of the research considered by the Committee and would never be accepted for publication in a mainstream parapsychological journal. It does, however, illustrate research in which a plausible, empirically-grounded alternative explanation does exist and a negative conclusion is therefore justified.

5. Potential Applications

Since the NRC report was funded by the Army for the express purpose of assessing potential applications, the Committee has been remiss in ignoring controlled psi studies directed toward eventual applications (e.g., Ryzl, 1966; Carpenter, 1975; Puthoff, 1985; Puthoff, May, & Thompson, 1986). These experiments all used statistical averaging techniques based on information theory to enhance accuracy. In Ryzl's experiment, for example, a subject was asked to guess on repeated trials whether the green or white face of each of a set of concealed cards was uppermost. Although the subject's rate of success on individual calls was only 62%, translation of the "majority vote" for each card resulted in the identification, without error, of 15 binary-encoded decimal digits.
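The "majority vote" translation described above can be illustrated numerically. A minimal sketch, assuming independent calls at the reported 62% per-call accuracy (the repetition counts shown are illustrative assumptions, not Ryzl's actual protocol):

```python
from math import comb

def majority_correct(p_call, n_calls):
    """Probability that a majority vote over n_calls independent calls,
    each correct with probability p_call, yields the correct answer.
    n_calls is assumed odd so there are no ties."""
    k_needed = n_calls // 2 + 1
    return sum(comb(n_calls, k) * p_call**k * (1 - p_call)**(n_calls - k)
               for k in range(k_needed, n_calls + 1))

# At a 62% per-call accuracy, repetition drives the per-bit accuracy up:
for n in (1, 11, 51, 101):
    print(n, round(majority_correct(0.62, n), 4))
```

As the repetition count grows, the per-bit error rate can in principle be driven arbitrarily low; this is the information-theoretic point behind the applications studies cited here.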
Carpenter's experiment resulted in the successful identification of the binary (Morse) code equivalent of the word "PEACE." A successful effort to improve the accuracy of detection of random binary sequences generated by hidden roulette-wheel spins (red/black) and coin tosses (heads/tails) was reported by Puthoff (1985), in which initial success rates of 52.6%, 55%, and 60% were amplified, respectively, to 60%, 60%, and 71%. Later, Puthoff et al. (1986) reported encouraging results based on a more efficient statistical averaging procedure.

While no responsible parapsychologist would claim that reliable practical applications of psi abilities have yet been achieved, more intensive applications-oriented basic research is clearly justified by the success of these and similar efforts. Instead of seriously addressing the challenge posed by such research, the Committee ignores this work completely and expresses its negative bias against potential psi applications through reference to "warrior monks" and "hyperspace howitzers" (p. 171). Clearly, the Committee's mission for the Army has not been well served by this type of frivolousness.

6. Qualitative Evidence and Subjective Bias

The section entitled "The Problem of Qualitative Evidence" (pp. 200-206) vividly demonstrates the very "problem" it condemns. Perhaps aware of the weakness of its case against the scientific evidence, the Committee tries to strengthen it by insinuating that parapsychologists have been biased by personal psychic experiences. The clear implication is that they cannot be trusted to carry out basic experimental procedures in a competent fashion.
Highly prejudicial statements of the following sort riddle these pages: "More typical is the proponent who, as a result of compelling personal experience, not only has no doubt about the reality of underlying paranormal cause, but also has no patience with the refusal of others to support that belief" (p. 202). The whole purpose of this section, presumably, is to argue that conclusions of fact should be based upon empirical evidence obtained by proper scientific methods. Yet no such evidence is offered to support the assertion that attitudes such as those reflected in the above quote are shared by even a significant minority of parapsychologists.

Obviously the interpretation of profound personal experiences is open to all sorts of potential biases and distortions. Clearly such experiences are no substitute for scientific evidence. On the other hand, when careful consideration of such alternative explanations renders them implausible, it is not unreasonable to entertain the possibility of psi, unless, of course, one "knows" in advance that psi events are impossible. It does not follow that having a personal psychic experience disqualifies one from conducting rigorous scientific research any more than a deep appreciation for the grandeur of the universe disqualifies one from doing sound research in astrophysics. Scientists frequently choose to study specific research topics out of curiosity generated by events in their personal lives.

7. Conclusions

In this response we have documented some, but by no means all, of the problems with the Committee's report. As we have seen, the Committee's primary conclusion regarding parapsychology is not merely unjustified by their report, it is directly contradicted by the Committee's admission that it can offer no plausible alternatives.
This concession, coming as it does from a Committee whose principal evaluators of parapsychology were publicly committed to a negative verdict at the outset of their investigation, actually constitutes a strong source of support for the conclusion that parapsychology has identified genuine scientific anomalies.

We have documented numerous instances where, in lieu of plausible alternatives, the Committee's attempts to portray parapsychology as "bad science" have been based upon erroneous or incomplete descriptions of the research in question, rhetorical enumeration of alleged "flaws" that by its own admission frequently have no demonstrable empirical consequences, selective reporting of evidence favorable to its case, and the selective omission of evidence not favorable to its case. Moreover, with respect to the Committee's central mission for the U.S. Army, we have shown that the Committee's prejudice against parapsychology has led it to ignore research, the further development of which could have important implications for our national security.

The scientific and defense communities are entitled to a rigorous and unbiased assessment of this research area. A strong prima facie case has been made for the existence of psi anomalies, and meaningful relationships between such events and psychological variables have been reported in the literature. Further efforts and resources should be expended toward the identification of underlying mechanisms and the development of theoretical models, either conventional or "paranormal," that can provide adequate understanding.

References

Alcock, J.E. (1981). Parapsychology: Science or Magic? Oxford: Pergamon Press.

Alcock, J.E. (1988). A comprehensive review of major empirical studies in parapsychology involving random event generators or remote viewing. Washington, D.C.: National Academy Press.

Carpenter, J.C.
(1975, January). Toward the effective utilization of enhanced weak-signal ESP effects. Paper presented at the meeting of the American Association for the Advancement of Science, New York, NY.

Collins, H.M. (1974). The TEA set: Tacit knowledge and scientific networks. Science Studies, 4, 165-186.

Collins, H.M., & Pinch, T.J. (1982). Frames of Meaning: The Social Construction of Extraordinary Science. London: Routledge & Kegan Paul.

Druckman, D., & Swets, J.A. (Eds.) (1988). Enhancing Human Performance: Issues, Theories, and Techniques. Washington, D.C.: National Academy Press.

Dunn, A.J. (1980). Neurochemistry of learning and memory: An evaluation of recent data. Annual Review of Psychology, 31, 343-390.

Eysenck, H.J. (1967). Personality and extra-sensory perception. Journal of the Society for Psychical Research, 44, 55-71.

Griffin, D. (1988). Intuitive judgment and the evaluation of evidence. Washington, D.C.: National Academy Press.

Harris, M.J., & Rosenthal, R. (1988). Interpersonal expectancy effects and human performance research. Washington, D.C.: National Academy Press.

Honorton, C. (1977). Psi and internal attention states. In B.B. Wolman (Ed.), Handbook of Parapsychology. New York: Van Nostrand Reinhold. Pp. 435-472.

Honorton, C. (1985). Meta-analysis of psi ganzfeld research: A response to Hyman. Journal of Parapsychology, 49, 51-91.

Hyman, R. (1982). Hyman's tally of flaws in ganzfeld/psi experiments. Written communication to Honorton, July 29, 1982.

Hyman, R. (1985a). The ganzfeld psi experiment: A critical appraisal. Journal of Parapsychology, 49, 3-49.

Hyman, R. (1985b). A critical historical overview of parapsychology. In P. Kurtz (Ed.), A Skeptic's Handbook of Parapsychology. Buffalo, N.Y.: Prometheus Books. Pp. 3-96.

Hyman, R., & Honorton, C. (1986). A joint communiqué: The psi ganzfeld controversy.
Journal of Parapsychology, 50, 351-364.

Hyman, R. (1988). Comments on the postscript to Harris and Rosenthal. Washington, D.C.: National Academy Press.

Johnson, M., & Haraldsson, E. (1984). The defense mechanism test as a predictor of ESP scores. Journal of Parapsychology, 48, 185-200.

May, E.C., Humphrey, B.S., & Hubbard, G.S. (1980a). Phase I: Hardware Construction and System Evaluation. (SRI Project 8585.) Menlo Park, CA: SRI International.

May, E.C., Humphrey, B.S., & Hubbard, G.S. (1980b). Electronic System Perturbation Techniques. (Final Report.) Menlo Park, CA: SRI International.

Moerman, D.E. (1981). Edible symbols: The effectiveness of placebos. In T.A. Sebeok & R. Rosenthal (Eds.), The Clever Hans Phenomenon. Annals of the New York Academy of Sciences, Volume 364. New York: NYAS. Pp. 256-268.

Palmer, J.A. (1971). Scoring in ESP tests as a function of belief in ESP. Part I: The sheep-goat effect. Journal of the American Society for Psychical Research, 65, 373-408.

Palmer, J.A. (1977). Attitude and personality traits in experimental ESP research. In B.B. Wolman (Ed.), Handbook of Parapsychology. New York: Van Nostrand Reinhold.

Palmer, J.A. (1985). An Evaluative Report on the Current Status of Parapsychology. Contract DADA 45-84-M-0405. U.S. Army Research Institute for the Behavioral and Social Sciences, Alexandria, Virginia.

Pinch, T.J., & Collins, H.M. (1984). Private science and public knowledge: The Committee for the Scientific Investigation of Claims of the Paranormal and its use of the literature. Social Studies of Science, 14, 521-546.

Puthoff, H.E. (1985). Calculator-assisted psi amplification. In R.A. White & J. Solfvin (Eds.), Research in Parapsychology 1984. Metuchen, N.J.: The Scarecrow Press. Pp. 48-51.

Puthoff, H.E., May, E.C., & Thompson, M. (1986).
Calculator-assisted psi amplification II: Use of the sequential-sampling technique as a variable-length majority-vote code. In D.H. Weiner & D.I. Radin (Eds.), Research in Parapsychology 1985. Metuchen, N.J.: The Scarecrow Press. Pp. 73-76.

Radin, D.I., May, E.C., & Thompson, M. (1985). Psi experiments with random number generators: Meta-analysis Part 1. In Proceedings of the Presented Papers of the Parapsychological Association 28th Annual Convention. Volume 1, 201-233.

Radin, D.I., & Nelson, R.D. (1987). Replication in random event generator experiments: A meta-analysis and quality assessment. Technical Report 87001. Princeton, N.J.: Princeton University Human Information Processing Group.

Ryzl, M. (1966). A model of parapsychological communication. Journal of Parapsychology, 30, 18-30.

Saunders, D.R. (1985). On Hyman's factor analyses. Journal of Parapsychology, 49, 86-88.

Schechter, E.I. (1984). Hypnotic induction vs. control conditions: Illustrating an approach to the evaluation of replicability in parapsychological data. Journal of the American Society for Psychical Research, 78, 1-27.

Schmeidler, G.R., & McConnell, R.A. (1973/1958). ESP and Personality Patterns. Westport, CT: Greenwood Press.

Schmidt, H., Morris, R.L., & Rudolph, L. (1982). Channeling psi evidence to critical observers. In W.G. Roll, R.L. Morris, & R.A. White (Eds.), Research in Parapsychology 1981. Metuchen, N.J.: The Scarecrow Press. Pp. 136-138.

Schmidt, H., Morris, R.L., & Rudolph, L. (1986). Channelling evidence for a PK effect to independent observers. Journal of Parapsychology, 50, 1-16.

Stanford, R.G. (1987). Ganzfeld and hypnotic-induction procedures in ESP research: Toward understanding their success. In S. Krippner (Ed.), Advances in Parapsychological Research (Volume 5). Jefferson, N.C.: McFarland. Pp. 39-76.

Truzzi, M. (1982). Editorial.
Zetetic Scholar, Number 9, March 1982. Pp. 3-5.

Wald, A. (1947). Sequential Analysis. New York: Dover.

The authors wish to acknowledge the assistance of two consultants who made valuable contributions to this report: George P. Hansen, Research Associate, Psychophysical Research Laboratories, Princeton, N.J., and Donald J. McCarthy, Associate Professor, Dept. of Mathematics and Computer Science, St. John's University, Jamaica, N.Y.

Appendix

On the NRC Committee's Response to Harris and Rosenthal's Postscript

Hyman (1988) attempts to justify the NRC Committee's negative position on the Ganzfeld experiments in a reply to Harris and Rosenthal's Postscript. He argues that the Ganzfeld studies "are not truly independent of one another" (p. 7), presumably because of commonalities in multiple studies contributed by different investigators. The general thrust of Hyman's reply indicates that he now believes that the Ganzfeld research should be analyzed by investigators rather than by studies. He also suggests that Rosenthal's analyses may have lacked sufficient statistical power to demonstrate a significant relationship between putative flaws and study outcome. The gist of Hyman's argument is that no conclusions can be drawn from the Ganzfeld database because it lacks robustness. This claim is demonstrably false.

The robustness argument hinges on a disagreement between Hyman (1985a) and Honorton (1985) in their evaluations of the adequacy of the randomization method employed in eight experiments by Carl Sargent and his colleagues at Cambridge University. Hyman classified these experiments as inadequate on randomization and Honorton classified them as adequate.
Hyman says:

"When a data base is so unstable that just a single change on a disputed point yields a different conclusion, the data base lacks robustness. Statisticians have devised a number of indicators... to assess the robustness of a given data base. When such indicators inform the investigator that the alteration or removal of just one case -- or even a few cases -- can alter the conclusions this warns the investigator against drawing any conclusions" (Hyman, 1988, pp. 2-3, italics added).

We agree, but the disagreement between Hyman and Honorton does not, as Hyman implies, involve just a single case; it involves eight cases, or 28% of the experiments in the Ganzfeld database. While we can find no legitimate grounds for the wholesale elimination of the Sargent studies, the disposition of Sargent's experiments affects neither the effect size nor the statistical significance of the Ganzfeld database. As Harris and Rosenthal have shown (1988, Table 4A), when Sargent's studies are removed from the database, the median effect size is unchanged. When the analysis is by investigators, rather than studies, as Hyman now believes is more appropriate, dropping the Sargent studies actually increases the median effect size for the remaining nine investigators. The overall z-score for the 9 remaining investigators is still 5.07 (p = 2.1 x 10^-7; that is, about one part in 5 million). If, in addition, the studies of Honorton, the other major contributor to the Ganzfeld database, are also eliminated, the overall z-score for the 8 surviving investigators is still 3.67 (p = 1.2 x 10^-4; less than one part in 8 thousand). In other words, 50% of the studies can be removed without jeopardizing the statistical significance of the Ganzfeld database.
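The correspondence between the quoted z-scores and tail probabilities can be checked directly; a minimal sketch, assuming (as is standard for combined z-scores of this kind) a one-tailed normal conversion:

```python
import math

def one_tailed_p(z):
    """One-tailed p-value for a standard-normal z-score,
    computed via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# z = 5.07 (nine remaining investigators) and z = 3.67 (eight
# surviving investigators), as quoted above:
print(one_tailed_p(5.07))  # on the order of 2 x 10^-7
print(one_tailed_p(3.67))  # on the order of 1.2 x 10^-4
```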
Hyman points out that the median z-score for eight undisputed experiments -- those he and Honorton agree involved adequate methods of randomization -- is 0.185, which is consistent with chance. But this is misleading. The eight undisputed studies were contributed by five different investigators, two of whom obtained highly significant outcomes. When these undisputed studies are analyzed by investigator, as Hyman now advocates, the overall z-score is 2.32 (p = .01). Thus, the Ganzfeld database remains significant even when more than 70% of the studies are removed.

Hyman's claim that, "If we eliminate Sargent's experiments, all the experiments that reported significance were deficient in allowing for the possibility of sensory leakage" (p. 7), is also misleading. While the statement is true when applied to experiments involving a methodology based on the blind judging of target sets, it ignores the outcomes of studies using a different methodology based on binary coding of predefined targets. Hyman has previously acknowledged that the binary-coding methodology is not susceptible to potential sensory leakage (Hyman and Honorton, 1986, p. 355). There were five such experiments by three different investigators (Honorton, 1985, p. 69). Three of the experiments, by two different investigators, were independently significant, and the overall z-score for all of the binary-coding studies was 2.84 (p = .0023).

Harris and Rosenthal refer to a new series of experiments by Honorton, designed to control against presumed flaws in some of the earlier Ganzfeld studies. Here we find ourselves in partial agreement with Hyman. Since this new evidence has not yet been fully reported in a peer-reviewed journal, it is understandable that the NRC Committee did not consider it in its report.
Although the NRC Committee has been inconsistent in this regard (e.g., its consideration of the as yet unpublished May et al. RNG study), we subscribe to the general principle that potentially important new scientific findings should undergo rigorous peer review before they are taken seriously. We understand that the new series of Ganzfeld studies by Honorton is currently being prepared for publication.

To summarize, we have shown that (a) the significance of the Ganzfeld database is not dependent upon the studies of any one investigator; (b) analysis by investigators rather than studies, as currently advocated by Hyman, does not appreciably affect the significance of the Ganzfeld database; and (c) even when more than 70% of the studies are removed, the Ganzfeld database remains significant. Under these circumstances, and contrary to Hyman's claim, the Ganzfeld database is very robust.

The Authors

JOHN A. PALMER received his Ph.D. in experimental personality psychology at the University of Texas. He has been actively involved in parapsychological research for 16 years and has authored numerous experimental reports, review articles, and book chapters on the subject. He is a past President of the Parapsychological Association and currently serves on its Board of Directors. In 1985 he wrote a critical review of modern parapsychological research for the U.S. Army that was frequently cited by the NRC Committee in its report. He is presently Senior Research Associate at the Institute for Parapsychology in Durham, N.C.

CHARLES HONORTON has been engaged in parapsychological research since 1966. He served as Senior Research Associate, and later as Director of Research, in the Division of Parapsychology and Psychophysics, Dept.
of Psychiatry, Maimonides Medical Center, Brooklyn, N.Y., 1967-1979, and was co-recipient with Montague Ullman and Stanley Krippner of the first major federal research grant for parapsychology from the U.S. Public Health Service, National Institute of Mental Health, 1973-75. Since 1979, he has been Director of the Psychophysical Research Laboratories in Princeton, N.J. His research was cited by the NRC Committee as among the best in the country. He is a past President of the Parapsychological Association and currently serves on its Board of Directors.

JESSICA UTTS received her Ph.D. in statistics from Penn State University. She is Associate Professor in the Division of Statistics at the University of California at Davis. She is a past President of the Western North American Region of the Biometric Society. Her research interests in statistics include robust statistical methods, linear models, and Bayesian analysis. She has published articles in numerous statistics journals including the Journal of the American Statistical Association, Statistical Science, Psychometrika, and the Journal of the Royal Statistical Society. She is on the editorial board of the Journal of the American Statistical Association and is Statistical Editor of the Journal of the American Society for Psychical Research.