DECISION AUGMENTATION THEORY: TOWARD A MODEL OF ANOMALOUS MENTAL PHENOMENA

Document Type: 
Collection: 
Document Number (FOIA) /ESDN (CREST): 
CIA-RDP96-00789R003200180001-7
Release Decision: 
RIFPUB
Original Classification: 
U
Document Page Count: 
19
Document Creation Date: 
November 4, 2016
Document Release Date: 
October 27, 1998
Sequence Number: 
1
Case Number: 
Publication Date: 
April 22, 1994
Content Type: 
RS
File: 
CIA-RDP96-00789R003200180001-7.pdf (946.65 KB)
Body: 
22 April 1994

Decision Augmentation Theory: Toward a Model of Anomalous Mental Phenomena

by

Edwin C. May, Ph.D.
Science Applications International Corporation, Menlo Park, CA

Jessica M. Utts, Ph.D.
Department of Statistics, University of California, Davis, Davis, CA

and

S. James P. Spottiswoode
Science Applications International Corporation (Consultant), Menlo Park, CA

Abstract

Decision Augmentation Theory (DAT) holds that humans integrate information obtained by anomalous cognition into the usual decision process. The result is that, to a statistical degree, such decisions are biased toward volitional outcomes. We introduce DAT and define the domain for which the model is applicable. In anomalous mental phenomena research, DAT is applicable to the understanding of effects that are within a few sigma of chance. We contrast the experimental consequences of DAT with those of models that treat anomalous perturbation as a causal force. We derive mathematical expressions for DAT and causal models for two distributions, normal and binomial. DAT is testable both retrospectively and prospectively, and we provide statistical power curves to assist in the experimental design of such tests. We show that the experimental consequences of DAT are different from those of causal models except for one degenerate case.

Introduction

We do not have positive definitions of the effects that generally fall under the heading of anomalous mental phenomena (AMP).* In the crassest of terms, AMP is what happens when nothing else should, at least as nature is currently understood. In the domain of information acquisition, or anomalous cognition (AC), it is relatively straightforward to design an experimental protocol (Honorton et al., 1990; Hyman and Honorton, 1986) to assure that no known sensory leakage of information can occur. In the domain of causation, or anomalous perturbation (AP), however, it is very difficult, if not impossible (May, Humphrey, and Hubbard, 1980; Hubbard, Bentley, Pasturel, and Isaacs, 1987), thus making the interpretation of results equally difficult.

We can divide AP into two categories based on the magnitude of the putative effect. Macro-AP includes phenomena that generally do not require sophisticated statistical analysis to tease out weak effects from the data. Examples include inelastic deformations in strain gauge experiments, the obvious bending of metal samples, and a host of possible "field phenomena" such as telekinesis, poltergeist, teleportation, and materialization. Conversely, micro-AP covers experimental data from noisy diodes, radioactive decay, and other random sources. These data show small differences from chance expectation and require statistical analysis.

One of the consequences of the negative definitions of AMP is that experimenters must assure that the observables are not due to "known" effects. Traditionally, two techniques have been employed to guard against such interactions:

(1) Complete physical isolation of the AP-target system.
(2) Counterbalanced control and effort periods.

Isolating physical systems from potential "environmental" effects is difficult, even for engineering specialists. It becomes increasingly problematical the more sensitive the Macro-AP device.
For example, Hubbard, Bentley, Pasturel, and Isaacs (1987) monitored a large number of sensors of environmental variables that could mimic AP effects in an extremely isolated piezoelectric strain gauge. Among these were three-axis accelerometers, calibrated microphones, and electromagnetic and nuclear radiation monitors. In addition, the sensors were mounted in a government-approved enclosure to assure no leakage (in or out) of electromagnetic radiation above a given frequency, and the enclosure itself was levitated on an air suspension table. Finally, the entire setup was locked in a controlled-access room which was monitored by motion detectors. The system was so sensitive, for example, that it was possible to identify the source of a perturbation of the strain gauge that was due to innocent, gentle knocking on the door of the closed room. The financial and engineering resources to isolate such systems rapidly become prohibitive.

The second method, which is commonly in use, is to isolate the target system within the constraints of the available resources, and then construct protocols that include control and effort periods. Thus, we trade complete isolation for a statistical analysis of the difference between control and effort periods. The assumption implicit in this approach is that environmental influences on the device will be random and uniformly distributed in both the control and effort conditions, while AP will tend to occur in the effort periods. Our arguments in favor of an anomaly, then, are based on statistical inference, and we must consider, in detail, the consequences of such analyses, one of which implies a generalized model for AMP.

* The Cognitive Sciences Laboratory has adopted the term anomalous mental phenomena instead of the more widely known psi. Likewise, we use the terms anomalous cognition and anomalous perturbation for ESP and PK, respectively. We have done so because we believe that these terms are more naturally descriptive of the observables and are neutral with regard to mechanisms. These new terms will be used throughout this paper.

Background

As the evidence for AMP becomes more widely accepted (Bem and Honorton, 1994; Utts, 1991; Radin and Nelson, 1989), it is imperative to determine the underlying mechanisms of the phenomena. Clearly, we are not the first to begin thinking of potential models. In the process of amassing incontrovertible evidence of an anomaly, many theoretical approaches have been examined; in this section we outline a few of them. It is beyond the scope of this paper, however, to provide an exhaustive review of the theoretical models of AMP; a good reference to an up-to-date and detailed presentation is Stokes (1987).

Brief Review of Models

Two fundamentally different types of models have been developed: those that attempt to order and structure the raw observations in AMP experiments (i.e., phenomenological), and those that attempt to explain AMP in terms of modifications to existing physical theories (i.e., fundamental). In the history of the physical sciences, phenomenological models, such as Snell's law of refraction or Ampere's law for the magnetic field due to a current, have nearly always preceded fundamental models of the phenomena, such as quantum electrodynamics and Maxwell's theory.
In producing useful models of AMP it may well be advantageous to start with phenomenological models, of which DAT is an example.

Psychologists have contributed interesting phenomenological approaches. Stanford (1974a, 1974b) proposed the Psi-mediated Instrumental Response (PMIR) as a descriptive model. PMIR states that an organism uses AMP to optimize its environment. For example, in one of Stanford's classic experiments (Stanford, Zenhausern, Taylor, and Dwyer, 1975), subjects were offered a covert opportunity to stop a boring task prematurely if they exhibited unconscious AP by perturbing a hidden random number generator. Overall, the experiment was significant in the unconscious tasks; it was as if the participants were unconsciously scanning the extended environment for any way to provide a more optimal situation than participating in a boring psychological task.

As an example of a fundamental model, Walker (1984) proposed a literal interpretation of quantum mechanics: since superposition of eigenstates holds, even for macrosystems, AMP might be due to macroscopic examples of quantum phenomena. These concepts spawned a class of theories, the so-called observation theories, that were based either upon quantum formalism conceptually or directly (Stokes, 1987). Jahn and Dunne (1986) have offered a "quantum metaphor" which illustrates many parallels between AMP and known quantum effects. Unfortunately, these models either have free parameters with unknown values, or are merely hand-waving metaphors, and therefore have not led to testable predictions. Some of these models propose questionable extensions to existing theories. For example, even though Walker's interpretation of quantum mechanical formalism might suggest wave-like properties of macrosystems, the physics data to date not only show no indication of such phenomena at room temperature, but provide considerable evidence to suggest that macrosystems lose their quantum coherence above 0.5 kelvin (Washburn and Webb, 1986) and no longer exhibit quantum wave-like behavior. This is not to say that a comprehensive model of AMP will not eventually require quantum mechanics as part of its explanation, but it is currently premature to consider such models as more than interesting speculation. The burden of proof is on the theorist to show why systems which are normally considered classical (e.g., a human brain) are, indeed, quantum mechanical. That is, what are the experimental consequences of a quantum mechanical system over a classical one?

Our Decision Augmentation Theory is phenomenological and is a logical and formal extension of Stanford's elegant PMIR model. In the same manner as early models of the behavior of gases, acoustics, or optics, it tries to subsume a large range of experimental measurements into a coherent, lawful scheme. We hope this process will lead the way to the uncovering of deeper mechanisms. In fact, DAT leads to the idea that there may be only one underlying mechanism of all AMP effects, namely a transfer of information between events separated by negative time intervals.

Historical Evolution of Decision Augmentation

May, Humphrey, and Hubbard (1980) conducted a careful random number generator (RNG) experiment.
What makes this experiment unique is the extreme engineering and methodological care that was taken in order to isolate any potentially known physical interactions with the source of randomness. It is beyond the scope of this paper to describe this experiment completely; however, those specific details which led to the idea of Decision Augmentation are important for the sake of historical completeness.

May, Humphrey, and Hubbard were satisfied, in that RNG study, that they had observed a genuine statistical anomaly. In addition, because of an accurate mathematical model of the random device and the engineering details of the experiment, they were equally satisfied that the deviations were not due to any known physical interactions. They concluded, in their report, that some form of AMP-mediated data selection had occurred. They named it then Psychoenergetic Data Selection.

Following a suggestion by Dr. David R. Saunders of MARS Measurement and Associates, we noticed in 1986 that the effect size in binary RNG studies varied, on the average, as the inverse square root of the number of bits in the sequence. This observation led to the development of the Intuitive Data Sorting model that appeared to describe the RNG data to that date (May, Radin, Hubbard, Humphrey, and Utts, 1985). The remainder of this paper describes the next step in the evolution process. We now call the model Decision Augmentation Theory (DAT).

Decision Augmentation Theory - A General Description

Since the case for AC-mediated information transfer is now well established, it would be exceptional if we did not integrate this form of information gathering into the decision process. For example, we routinely use real-time data gathering and historical information to assist in the decision process. Perhaps what is called intuition may play an important role. Why, then, should we not include AC information? DAT holds that AC information is included along with the usual inputs that result in a final human decision that favors a "desired" outcome. In statistical parlance, DAT says that a slight, systematic bias is introduced into the decision process by AC.

This philosophical concept has the advantage of being quite general. We know of no experiment that is devoid of at least one human decision; thus, DAT might be the underlying basis for AMP. To illustrate the point, we describe how the "cosmos" determines the outcome of a well-designed, hypothetical experiment. To determine the sequencing of an RNG experiment, suppose that the entry point into a table of random numbers will be chosen by the square root of the barometric pressure as stated in the weather report that will be published seven days hence in the New York Times. Since humans are notoriously bad at predicting or controlling the weather, this entry point might seem independent of a human decision; but why did we "choose" seven days in advance? Why not six or eight? Why the New York Times and not the London Times? DAT would suggest that the selection of seven days, the New York Times, the barometric pressure, and the square root function were optimal choices, either individually or collectively, and that other decisions would not lead to as significant an outcome. Other non-technical decisions may also be biased by AC in accordance with DAT.
When should we schedule a Ganzfeld session? Who should be the experimenter in a series? How should we determine a specific order in a tri-polar protocol?

It is important to understand the domain in which a model is applicable. For example, Newton's laws are sufficient to describe the dynamics of mechanical objects in the domain where the velocities are very much smaller than the speed of light, and where the quantum wavelength of the object is very small compared to the physical extent of the object. If these conditions are violated, then different models must be invoked (e.g., relativity and quantum mechanics, respectively). The domain in which DAT is applicable is when experimental outcomes are in a statistical regime (i.e., a few standard deviations from chance). In other words, could the measured effect have occurred under the null hypothesis? This is not a sharp-edged requirement, and DAT becomes less apropos the more a single measurement deviates from mean chance expectation (MCE). We would not invoke DAT, for example, as an explanation of levitation if one found the authors hovering near the ceiling!

All this may be interesting philosophy, but DAT can be formulated mathematically and subjected to rigorous examination.

Development of a Formal Model

While DAT may have implications for AMP in general, we develop the model in the framework of understanding experimental results. In particular, we consider AP vs. AC in the form of DAT in those experiments whose outcomes are in the few-sigma, statistical regime. We define four possible mechanisms for the results in such experiments:

(1) Mean Chance Expectation. The results are at chance. That is, the deviation of the dependent variable meets accepted criteria for MCE. In statistical parlance, we have measurements from an unperturbed parent distribution with unbiased sampling.

(2) Anomalous Perturbation. Nature is modified by some anomalous interaction. That is, we expect a causal interaction of a "force" type. In statistical parlance, we have measurements from a perturbed parent distribution with unbiased sampling.

(3) Decision Augmentation. Nature is unchanged but the measurements are biased. That is, AC information has "distorted" the sampling. In statistical parlance, we have measurements from an unperturbed parent distribution with biased sampling.

(4) Combination. Nature is modified and the measurements are biased. That is, both AP and AC are present. In statistical parlance, we have conducted biased sampling from a perturbed parent distribution.

General Considerations

Since the formal discussion of DAT is statistical, we will describe the overall context for the development of the model from that perspective. Consider a random variable, X, that can take on continuous values (e.g., the normal distribution) or discrete values (e.g., the binomial distribution). Examples of X might be the hit rate in an RNG experiment, the swimming velocity of cells, or the mutation rate of bacteria. Let Y be the average computed over n values of X, where n is the number of items that are collectively subjected to an AMP influence as the result of a single decision, i.e., one trial. Often this may be equivalent to a single effort period, but it also may include repeated efforts.
The key point is that, regardless of the effort style, the average value of the dependent variable is computed over the n values resulting from one decision point. In the examples above, n is the sequence length of a single run in an RNG experiment, the number of swimming cells measured during the trial, or the number of bacteria-containing test tubes present during the trial.

Assumptions for DAT

We assume that the parent distribution of a physical system remains unperturbed; however, the measurements of the physical system are systematically biased by some AC-mediated informational process. Since the deviations seen in experiments in the statistical regime tend to be small in magnitude, it is safe to assume that the measurement biases might also be small; therefore, we assume small shifts of the mean and variance of the sampling distribution. Figure 1 shows the distributions for biased and unbiased measurements.

Figure 1. Sampling Distribution Under DAT.

The biased sampling distribution shown in Figure 1 is assumed to be normally distributed as

    Z \sim N(\mu_Z, \sigma_Z^2),

where the notation means that Z is distributed as a normal distribution with a mean of \mu_Z and a standard deviation of \sigma_Z.

Assumptions for an AP Model

For comparison's sake, we develop a model for AP interactions. With a few exceptions reported in the poltergeist literature, AP appears to be a relatively "small" effect in laboratory experiments. That is, we do not readily observe anomalous and obvious mental interactions with the environment. Thus, we begin with the assumption that a putative AP force would give rise to a perturbational interaction. What we mean is that, given an ensemble of entities (e.g., binary bits, cells), a force acts, on the average, equally on each member of the ensemble. We call this type of interaction perturbational AP (PAP).

Figure 2 shows a schematic representation of probability density functions for a parent distribution under the PAP assumption and an unperturbed parent distribution. In the PAP model, the perturbation induces a change in the mean of the parent distribution but does not affect its variance. We parameterize the mean shift in terms of a multiplier of the initial standard deviation. Thus, we define an AP effect size as

    \varepsilon_{AP} = \frac{\mu_1 - \mu_0}{\sigma_0},

where \mu_1 and \mu_0 are the means of the perturbed and unperturbed distributions, respectively, and where \sigma_0 is the standard deviation of the unperturbed distribution.

Figure 2. Parent Distribution for Perturbational AP.

For the moment, we consider \varepsilon_{AP} as a parameter which, in principle, could be a function of a variety of variables (e.g., psychological, physical, environmental, methodological). As we develop DAT for specific distributions and experiments, we will discuss this functionality of \varepsilon_{AP}.
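To make the three sampling assumptions concrete, the following short Python sketch (ours, not part of the original paper; the parameter values and function names are purely illustrative) generates single-trial Z scores under MCE, PAP, and DAT and checks the resulting E(Z^2) numerically. It anticipates the expressions summarized in Tables 1 and 2 below.

    import numpy as np

    rng = np.random.default_rng(0)

    def trial_z_mce(n, mu0=0.0, sigma0=1.0):
        """One unbiased trial: average n samples from the unperturbed parent, form Z."""
        y = rng.normal(mu0, sigma0, n).mean()
        return (y - mu0) / (sigma0 / np.sqrt(n))

    def trial_z_pap(n, eps_ap, mu0=0.0, sigma0=1.0):
        """PAP: the parent mean is shifted by eps_ap * sigma0; the sampling is unbiased."""
        y = rng.normal(mu0 + eps_ap * sigma0, sigma0, n).mean()
        return (y - mu0) / (sigma0 / np.sqrt(n))

    def trial_z_dat(mu_z, sigma_z=1.0):
        """DAT: the parent is unperturbed, but the sampled Z is biased, Z ~ N(mu_z, sigma_z^2)."""
        return rng.normal(mu_z, sigma_z)

    # Expected value of Z^2 over many trials:
    n, eps_ap, mu_z, trials = 100, 0.01, 0.1, 20_000
    print(np.mean([trial_z_mce(n) ** 2 for _ in range(trials)]))          # ~ 1
    print(np.mean([trial_z_pap(n, eps_ap) ** 2 for _ in range(trials)]))  # ~ 1 + eps_ap^2 * n
    print(np.mean([trial_z_dat(mu_z) ** 2 for _ in range(trials)]))       # ~ mu_z^2 + sigma_z^2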
Calculation of E(Z^2)

We compute the expected value and variance of Z^2 under MCE, PAP, and DAT for the normal and binomial distributions. The details of the calculations can be found in the Appendix; however, we summarize the results in this section. Table 1 shows the results assuming that the parent distribution is normal.

Table 1. Normal Parent Distribution

    Quantity    MCE    PAP                             DAT
    E(Z^2)      1      1 + \varepsilon_{AP}^2 n        \mu_Z^2 + \sigma_Z^2
    Var(Z^2)    2      2(1 + 2\varepsilon_{AP}^2 n)    2(\sigma_Z^4 + 2\mu_Z^2 \sigma_Z^2)

Table 2 shows the results assuming that the parent distribution is binomial. In this calculation, p_0 is the binomial event probability and \sigma_0 = \sqrt{p_0(1 - p_0)}.

Table 2. Binomial Parent Distribution

    Quantity    MCE                                           PAP                                                                             DAT
    E(Z^2)      1                                             1 + \varepsilon_{AP}^2(n - 1) + (\varepsilon_{AP}/\sigma_0)(1 - 2p_0)          \mu_Z^2 + \sigma_Z^2
    Var(Z^2)    2 + (1 - 6\sigma_0^2)/(n\sigma_0^2)           2(1 + 2\varepsilon_{AP}^2 n)*                                                   2(\sigma_Z^4 + 2\mu_Z^2 \sigma_Z^2)*

* The variance shown assumes p_0 = 0.5 and n >> 1. See the Appendix for other cases.

We wish to emphasize at this point that, in the development of the mathematical model, the parameter \varepsilon_{AP} for PAP and the parameters \mu_Z and \sigma_Z in DAT may all possibly depend upon n; however, for the moment, we assume that they are all n-independent. We shall discuss the consequences of this assumption below. Figure 3 displays these theoretical calculations for the three mechanisms graphically.

Figure 3. Predictions of MCE, PAP, and DAT.

Within the constraints mentioned above, this formulation predicts grossly different outcomes for these models and, therefore, is ultimately capable of separating them, even for very small perturbations.

Retrospective Tests

It is possible to apply DAT retrospectively to any body of data that meets certain constraints. It is critical to keep in mind the meaning of n: the number of measures of the dependent variable over which to compute an average during a single trial following a single decision. In terms of their predictions for experimental results, the crucial distinction between DAT and the PAP model is the dependence of the results upon n; therefore, experiments which are used to test these theories must be those in which experiment participants are blind to n. In a follow-on to this theory-definition paper, we will retrospectively apply DAT to as many data sets as possible, and examine the consequences of any violations of these criteria.

Aside from these considerations, the application of DAT is straightforward. Having identified the unit of analysis and n, simply create a scatter diagram of points (Z^2, n) and compute a least-squares fit to a straight line. Tables 1 and 2 show that, for the PAP model, the square of the AP effect size is the slope of the resulting fit. A Student's t-test may be used to test the hypothesis that the AP effect size is zero, and thus test for the validity of the PAP model. If the slope is zero, these same tables show that the intercept may be interpreted as an AC strength parameter for DAT. The follow-on paper will describe these techniques in detail.
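A minimal sketch of this fitting procedure (ours, not the paper's; the data values and the function name are hypothetical) fits Z^2 against n and tests the slope against zero:

    import numpy as np
    from scipy import stats

    def retrospective_dat_fit(n_values, z_squared):
        """Least-squares fit of Z^2 against n (see Tables 1 and 2).

        Under PAP the slope estimates eps_AP^2; under DAT the slope is zero and the
        intercept estimates mu_Z^2 + sigma_Z^2, the AC strength term.
        """
        fit = stats.linregress(np.asarray(n_values, float), np.asarray(z_squared, float))
        # linregress' p-value is the two-sided t-test of the hypothesis slope = 0.
        return {"slope": fit.slope, "slope_p": fit.pvalue, "intercept": fit.intercept}

    # Hypothetical data: mean Z^2 per trial observed at several sequence lengths n.
    ns = [100, 500, 1000, 5000, 10000]
    z2 = [1.03, 0.98, 1.05, 1.01, 0.99]   # flat in n, i.e., DAT-like
    print(retrospective_dat_fit(ns, z2))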
Prospective Tests

A prospective test of DAT will not only test the AMP hypothesis against mean chance expectation, but will also test for a PAP contribution. In such tests, n should certainly be a double-blind parameter and take on at least two values. If one wants to check the prediction of a linear functional relationship between n and E(Z^2) that is suggested by the PAP model, the more values of n the better. It is not possible to separate the PAP model from DAT at a single value of n. In any prospective test, it is helpful to know the number of runs, N, that are necessary to determine, with 95% confidence, which of the two models best fits the data. Figure 4 displays the problem graphically.

Figure 4. Model Predictions for the Power Calculation.

Under PAP, 95% of the values of Z^2 will be greater than the point indicated in Figure 4. Even if the measured value of Z^2 is at this point, we would like the lower limit of the 95% confidence interval for this value to be greater than the predicted value under the DAT model. Or:

    E_{AP}(Z^2) - 1.645\,\frac{\sigma_{AP}}{\sqrt{N}} - 1.960\,\frac{\sigma_{AP}}{\sqrt{N}} \geq E_{AC}(Z^2).

Solving for N in the equality, we find:

    N = \left[ \frac{3.605\,\sigma_{AP}}{E_{AP}(Z^2) - E_{AC}(Z^2)} \right]^2.     (1)

Since \sigma_{AP} \geq \sigma_{AC}, this value of N will always be a larger estimate than that derived from beginning with DAT and calculating the confidence intervals in the other direction.

Suppose, from an earlier experiment, one can estimate a single-trial effect size for a specific value of n, say n_1. To determine whether the PAP model or DAT is the proper description of the mechanism, we must conduct another study at an additional value of n, say n_2. We use Equation 1 to compute how many runs we must conduct at n_2 to assure a separation of mechanism with 95% confidence, and we use the variances shown in Tables 1 and 2 to compute \sigma_{AP}.

Figure 5 shows the number of runs for an RNG-like experiment as a function of effect size for three values of n_2. We chose n_1 = 100 bits because it is typical of the numbers found in the RNG database, and the values of n_2 shown are within easy reach of today's computer-based RNG devices. For example, assuming \sigma_Z = 1.0 and an effect size of 0.004, one we derived from a publication of PEAR data (Jahn, 1982), then at n_1 = 100, \mu_Z = 0.004 \times \sqrt{100} = 0.04 and E_{AC}(Z^2) = 1.0016. Suppose n_2 = 10^4. Then E_{AP}(Z^2) = 1.160 and \sigma_{AP} = 1.625. Using Equation 1, we find N = 1368 runs, which can be approximately obtained from Figure 5. That is, in this example, 1368 runs are needed to resolve the PAP model from DAT at n_2 = 10^4 at the 95% confidence level. Since these runs are easily obtained in most RNG experiments, an ideal prospective test of DAT, based on these calculations, would be to conduct 1500 runs randomly counterbalanced between n = 10^2 and n = 10^4 bits/trial. If the effect size at n = 10^2 is near 0.004, then we would resolve the AP vs. AC question with 95% confidence.

Figure 5. Runs Required for RNG Effect Sizes (AC effect size at n_1 = 100 bits).
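The sample-size calculation of Equation 1 is easy to script. The sketch below is ours, not the paper's; it uses the normal-distribution expressions from Table 1, and the function name runs_required is an assumption for illustration. It reproduces the worked RNG example above and the biological example discussed next.

    import numpy as np

    def runs_required(eps, n1, n2, sigma_z=1.0):
        """Equation 1: runs N at n2 needed to separate PAP from DAT with 95% confidence.

        eps is the single-trial effect size estimated at sequence length n1.
        DAT:  E_AC(Z^2) = mu_Z^2 + sigma_Z^2,  with mu_Z = eps * sqrt(n1).
        PAP:  E_AP(Z^2) = 1 + eps^2 * n2,  Var_AP(Z^2) = 2 * (1 + 2 * eps^2 * n2).
        """
        mu_z = eps * np.sqrt(n1)
        e_ac = mu_z ** 2 + sigma_z ** 2
        e_ap = 1.0 + eps ** 2 * n2
        sigma_ap = np.sqrt(2.0 * (1.0 + 2.0 * eps ** 2 * n2))
        return (3.605 * sigma_ap / (e_ap - e_ac)) ** 2

    print(runs_required(0.004, 100, 10_000))  # ~1368 runs (RNG example in the text)
    print(runs_required(0.3, 2, 10))          # ~140 runs (biological example below)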
Figure 6 shows similar relationships for effect sizes that are more typical of biological AP as reported in the Former Soviet Union (May and Vilenskaya, 1994). For biologically oriented AP experiments, we chose n_1 = 2 because using two simultaneous AP targets is easily accomplished. If we assume an effect size of 0.3 and \sigma_Z = 1.0, at n_2 = 10 we compute E_{AC}(Z^2) = 1.180, E_{AP}(Z^2) = 1.900, \sigma_{AP} = 2.366, and N = 140, which can be approximately obtained from Figure 6. We have included n_2 = 100 in Figure 6 because this is within reach in cellular experiments, although it is probably not practical for most biological AP experiments.

Figure 6. Runs Required for Biological AP Effect Sizes.

We chose n_1 = 2 units for convenience. For example, in a plant study the physiological responses can easily be averaged over two plants, and n_2 = 10 is within reason for a second data point. A unit could be a test tube containing cells or bacteria; the collection of all ten test tubes would simultaneously have to be the target of the AP effort to meet the constraints of a valid test.

The prospective tests we have described so far are conditional; that is, given an effect size, we provide a protocol to test whether the mechanism for AMP is PAP or DAT. An unconditional test does not assume any effect size; all that is necessary is to collect data at a large number of different values of n and fit a straight line through the resulting Z^2's. The mechanism is PAP if the slope is non-zero and may be DAT if the slope is zero.

Discussion

We now address the possible n-dependence of the model parameters. A degenerate case arises if \varepsilon_{AP} is proportional to 1/\sqrt{n}; if that were the case, we could not distinguish between the PAP model and DAT by means of tests on the n-dependence of results. If it turns out that, in the analysis of the data from a variety of experiments, participants, and laboratories, the slope of a Z^2 vs. n linear least-squares fit is zero, then either \varepsilon_{AP} = 0.0 or \varepsilon_{AP} is exactly proportional to 1/\sqrt{n}, depending upon the precision of the fit (i.e., errors on the zero slope). An attempt might be made to rescue the PAP hypothesis by explaining the required 1/\sqrt{n} dependence of \varepsilon_{AP} in the degenerate case as a fatigue or other time-dependence effect. That is, it might be hypothesized that human participants become AP-tired as a function of n; however, it seems improbable that a human-based phenomenon would be so widely distributed and constant and give exactly the 1/\sqrt{n} dependency, in differing protocols, needed to imitate DAT. We prefer to resolve the degeneracy by wielding Occam's razor: if the only type of AP which fits the data is indistinguishable from AC, and given that we have ample demonstrations of AC by independent means in the laboratory, then we do not need to invent an additional phenomenon called AP. Except for this degeneracy, a zero slope for the fit allows us to reject all PAP models, regardless of their n-dependencies.

DAT is not limited to experiments that capture data from a dynamic system. DAT may also be the mechanism in protocols which utilize quasi-static target systems. In a quasi-static target system, a random process occurs only when a run is initiated; a mechanical dice thrower is an example. Yet, in a series of unattended runs of such a device, there is always a statistical variation in the mean of the dependent variable that may be due to a variety of factors, such as Brownian motion, temperature, humidity, and possibly the quantum mechanical uncertainty principle (Walker, 1974). Thus, the results obtained will ultimately depend upon when the run is initiated. It is also possible that a second-order DAT mechanism arises because of protocol selection: how, and by whom, the order is determined in tri-polar protocols. In second-order DAT there may be individuals, other than the formal subject, whose decisions affect the experimental outcome and are modified by AC.

Finally, we would like to close with a clear statement of what is meant by DAT: the decisions on which experimental outcomes depend are augmented by AC to capitalize upon the unperturbed statistical fluctuations of the target system. In our follow-on paper, we will examine retrospective applications to a variety of data sets.
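A brief numerical illustration of the degeneracy just described (ours, reusing the RNG numbers from the prospective-test example; the values 0.004 and 0.04 are taken from that example): a fixed \varepsilon_{AP} makes E(Z^2) grow linearly with n, whereas \varepsilon_{AP} proportional to 1/\sqrt{n} produces an E(Z^2) that is flat in n and therefore indistinguishable from the DAT prediction.

    import numpy as np

    n_values = np.array([100, 1_000, 10_000, 100_000])

    # Fixed per-element effect size: E(Z^2) = 1 + eps^2 * n grows linearly with n (PAP).
    eps_fixed = 0.004
    print(1 + eps_fixed ** 2 * n_values)       # [1.0016, 1.016, 1.16, 2.6]

    # Degenerate case: eps_AP proportional to 1/sqrt(n) makes E(Z^2) flat in n,
    # matching the DAT prediction mu_Z^2 + sigma_Z^2.
    eps_degenerate = 0.04 / np.sqrt(n_values)  # eps = mu_Z / sqrt(n) with mu_Z = 0.04
    print(1 + eps_degenerate ** 2 * n_values)  # constant 1.0016 for every n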
Acknowledgements

Since 1979, there have been many individuals who have contributed to the development of DAT. We would first like to thank David Saunders, without whose remark this work would not have been. Beverly Humphrey kept the philosophical integrity intact, at times under extreme duress. We are greatly appreciative of Zoltan Vassy, to whom we owe the Z-score formalism, and of George Hansen, Donald McCarthy, and Scott Hubbard for their constructive criticisms and support.

References

Bem, D. J. and Honorton, C. (1994). Does psi exist? Replicable evidence for an anomalous process of information transfer. Psychological Bulletin, 115, No. 1, 4-18.

Honorton, C., Berger, R. E., Varvoglis, M. P., Quant, M., Derr, P., Schechter, E. I., and Ferrari, D. C. (1990). Psi communication in the ganzfeld. Journal of Parapsychology, 54, 99-139.

Hubbard, G. S., Bentley, P. P., Pasturel, P. K., and Isaacs, J. (1987). A remote action experiment with a piezoelectric transducer. Final Report - Objective H, Task 3 and 3a. SRI International Project 1291, Menlo Park, CA.

Hyman, R. and Honorton, C. (1986). A joint communique: The psi ganzfeld controversy. Journal of Parapsychology, 50, 351-364.

Jahn, R. G. (1982). The persistent paradox of psychic phenomena: An engineering perspective. Proceedings of the IEEE, 70, No. 2, 136-170.

Jahn, R. G. and Dunne, B. J. (1986). On the quantum mechanics of consciousness, with application to anomalous phenomena. Foundations of Physics, 16, No. 8, 721-772.

May, E. C., Humphrey, B. S., and Hubbard, G. S. (1980). Electronic System Perturbation Techniques. Final Report. SRI International, Menlo Park, CA.

May, E. C., Radin, D. I., Hubbard, G. S., Humphrey, B. S., and Utts, J. (1985). Psi experiments with random number generators: An informational model. Proceedings of Presented Papers, Vol. 1, The Parapsychological Association 28th Annual Convention, Tufts University, Medford, MA, 237-266.

May, E. C. and Vilenskaya, L. (1994). Overview of current parapsychology research in the Former Soviet Union. Subtle Energies, 3, No. 3, 45-67.

Radin, D. I. and Nelson, R. D. (1989). Evidence for consciousness-related anomalies in random physical systems. Foundations of Physics, 19, No. 12, 1499-1514.

Stanford, R. G. (1974a). An experimentally testable model for spontaneous psi events I. Extrasensory events. Journal of the American Society for Psychical Research, 68, 34-57.

Stanford, R. G. (1974b). An experimentally testable model for spontaneous psi events II. Psychokinetic events. Journal of the American Society for Psychical Research, 68, 321-356.

Stanford, R. G., Zenhausern, R., Taylor, A., and Dwyer, M. A. (1975). Psychokinesis as psi-mediated instrumental response. Journal of the American Society for Psychical Research, 69, 127-133.

Stokes, D. M. (1987). Theoretical parapsychology. In Advances in Parapsychological Research 5. McFarland & Company, Inc., Jefferson, NC, 77-189.

Utts, J. (1991). Replication and meta-analysis in parapsychology. Statistical Science, 6, No. 4, 363-403.

Walker, E. H. (1974). Foundations of paraphysical and parapsychological phenomena. Proceedings of an International Conference: Quantum Physics and Parapsychology. Oteri, E., Ed. Parapsychology Foundation, Inc., New York, NY, 1-53.

Walker, E. H. (1984). A review of criticisms of the quantum mechanical theory of psi phenomena. Journal of Parapsychology, 48, 277-332.
Washburn, S. and Webb, R. A. (1986). Effects of dissipation and temperature on macroscopic quantum tunneling in Josephson junctions. In New Techniques and Ideas in Quantum Measurement Theory. Greenberger, D. M., Ed. New York Academy of Sciences, New York, NY, 66-77.

Appendix: Mathematical Derivations for the Decision Augmentation Theory

In this appendix we develop the formalism for the Decision Augmentation Theory (DAT). We consider cases for mean chance expectation (MCE), anomalous perturbation (AP), and anomalous cognition (AC) under two assumptions: normality and Bernoulli sampling. For each of these three models, we compute the expected values of Z and Z^2, and the variance of Z^2.*

Mean Chance Expectation (MCE)

Normal Distribution

We begin by considering a random variable, X, whose probability density function is normal, i.e., N(\mu_0, \sigma_0^2).† After many unbiased measures from this distribution, it is possible to obtain reasonable approximations to \mu_0 and \sigma_0^2 in the usual way. Suppose n unbiased measures are used to compute a new variable, Y, given by:

    Y_k = \frac{1}{n} \sum_{j=1}^{n} x_j.

Then Y is distributed as N(\mu_0, \sigma_n^2), where \sigma_n^2 = \sigma_0^2 / n. If Z is defined as

    Z = \frac{Y_k - \mu_0}{\sigma_n},

then Z is distributed as N(0, 1) and E(Z) is given by:

    E(Z) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} z\, e^{-0.5 z^2}\, dz = 0.     (1)

Since Var(Z) = 1 = E(Z^2) - E^2(Z),

    E_{MCE}(Z^2) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} z^2 e^{-0.5 z^2}\, dz = 1.     (2)

The Var(Z^2) = E(Z^4) - E^2(Z^2) = E(Z^4) - 1. But

    E_{MCE}(Z^4) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} z^4 e^{-0.5 z^2}\, dz = 3,

so

    Var_{MCE}(Z^2) = 2.     (3)

Bernoulli Sampling

Let the probability of observing a one under Bernoulli sampling be given by p_0. After n samples, the discrete Z-score is given by:

    Z = \frac{k - n p_0}{\sigma_0 \sqrt{n}},

where \sigma_0 = \sqrt{p_0(1 - p_0)} and k is the number of observed ones (0 \le k \le n). The expected value of Z is given by:

    E_{MCE}(Z) = \frac{1}{\sigma_0 \sqrt{n}} \left[ \sum_{k=0}^{n} k\, B_k(n, p_0) - n p_0 \right],  where  B_k(n, p_0) = \binom{n}{k} p_0^k (1 - p_0)^{n-k}.     (4)

The first term in Equation 4 is E(k) which, for the binomial distribution, is n p_0. Thus

    E_{MCE}(Z) = \frac{1}{\sigma_0 \sqrt{n}} \sum_{k=0}^{n} (k - n p_0) B_k(n, p_0) = 0.     (5)

The expected value of Z^2 is given by:

    E_{MCE}(Z^2) = Var(Z) + E^2(Z) = \frac{Var(k - n p_0)}{n \sigma_0^2} + 0 = \frac{n \sigma_0^2}{n \sigma_0^2} = 1.     (6)

As in the normal case, Var(Z^2) = E(Z^4) - E^2(Z^2) = E(Z^4) - 1. But‡

    E_{MCE}(Z^4) = \frac{1}{n^2 \sigma_0^4} \sum_{k=0}^{n} (k - n p_0)^4 B_k(n, p_0) = 3 + \frac{1}{n \sigma_0^2}(1 - 6\sigma_0^2),

so

    Var_{MCE}(Z^2) = 2 + \frac{1}{n \sigma_0^2}(1 - 6\sigma_0^2) = 2 - \frac{2}{n}  (p_0 = 0.5).     (7)

Anomalous Perturbation (AP)

Normal Distribution

Under the perturbation assumption described in the text, we let the mean of the perturbed distribution be given by \mu_0 + \varepsilon_{AP}\sigma_0, where \varepsilon_{AP} is an AP strength parameter and, in the general case, may be a function of n and time. The parent distribution for the random variable, X, becomes N(\mu_0 + \varepsilon_{AP}\sigma_0, \sigma_0^2). As in the MCE case, the average of n independent values of X is Y ~ N(\mu_0 + \varepsilon_{AP}\sigma_0, \sigma_n^2). Let

    Y = \mu_0 + \varepsilon_{AP}\sigma_0 + \Delta Y,  where  \Delta Y = Y - (\mu_0 + \varepsilon_{AP}\sigma_0).

For a mean of n samples, the Z-score is given by

    Z = \frac{Y - \mu_0}{\sigma_n} = \frac{\varepsilon_{AP}\sigma_0 + \Delta Y}{\sigma_n} = \varepsilon_{AP}\sqrt{n} + \zeta,

* We wish to thank Zoltan Vassy for originally suggesting the Z^2 formalism.
† Throughout this appendix, this notation means: N(\mu, \sigma^2) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-0.5\left(\frac{x - \mu}{\sigma}\right)^2}.
‡ Johnson, N. L. and Kotz, S., Discrete Distributions, John Wiley & Sons, New York, p. 51, (1969).
where \zeta is distributed as N(0, 1) and is given by \zeta = \Delta Y / \sigma_n. Then the expected value of Z is given by

    E_{AP}(Z) = E(\varepsilon_{AP}\sqrt{n} + \zeta) = \varepsilon_{AP}\sqrt{n} + E(\zeta) = \varepsilon_{AP}\sqrt{n},     (8)

and the expected value of Z^2 is given by

    E_{AP}(Z^2) = E\left[(\varepsilon_{AP}\sqrt{n} + \zeta)^2\right] = n\varepsilon_{AP}^2 + E(\zeta^2) + 2\varepsilon_{AP}\sqrt{n}\, E(\zeta) = 1 + \varepsilon_{AP}^2 n,     (9)

since E(\zeta) = 0 and E(\zeta^2) = 1. In general, Z^2 is distributed as a non-central \chi^2 with 1 degree of freedom and non-centrality parameter n\varepsilon_{AP}^2, i.e., \chi^2(1, n\varepsilon_{AP}^2). Thus, the variance of Z^2 is given by*

    Var_{AP}(Z^2) = 2(1 + 2n\varepsilon_{AP}^2).     (10)

Bernoulli Sampling

As before, let the probability of observing a one under MCE be given by p_0, and the discrete Z-score be given by:

    Z = \frac{k - n p_0}{\sigma_0 \sqrt{n}},

where k is the number of observed ones (0 \le k \le n). Under the perturbation assumption, we let the mean of the distribution of the single-bit probability be given by p_1 = p_0 + \varepsilon_{AP}\sigma_0, where \varepsilon_{AP} is an AP strength parameter. The expected value of Z is given by:

    E_{AP}(Z) = \frac{1}{\sigma_0 \sqrt{n}} \sum_{k=0}^{n} (k - n p_0) B_k(n, p_1),  where  B_k(n, p_1) = \binom{n}{k} p_1^k (1 - p_1)^{n-k}.

The expected value of Z becomes

    E_{AP}(Z) = \frac{1}{\sigma_0 \sqrt{n}} \left[ \sum_{k=0}^{n} k\, B_k(n, p_1) - n p_0 \right] = \frac{(p_1 - p_0)\sqrt{n}}{\sigma_0} = \varepsilon_{AP}\sqrt{n}.     (11)

Since \varepsilon_{AP} = E(Z)/\sqrt{n}, \varepsilon_{AP} is also the binomial effect size. The expected value of Z^2 is given by:

    E_{AP}(Z^2) = Var(Z) + E^2(Z) = \frac{Var(k - n p_0)}{n \sigma_0^2} + \varepsilon_{AP}^2 n = \frac{p_1(1 - p_1)}{\sigma_0^2} + \varepsilon_{AP}^2 n.

Expanding in terms of p_1 = p_0 + \varepsilon_{AP}\sigma_0,

    E_{AP}(Z^2) = 1 + \varepsilon_{AP}^2 (n - 1) + \frac{\varepsilon_{AP}}{\sigma_0}(1 - 2p_0).     (12)

If p_0 = 0.5 (i.e., a binary case) and n >> 1, then Equation 12 reduces to the E(Z^2) in the normal case, Equation 9.

We begin the calculation of Var(Z^2) by using the equation for the jth moment of a binomial distribution,

    m_j = \left. \frac{\partial^j}{\partial t^j} (q + p e^t)^n \right|_{t=0}.

Since Var(Z^2) = E(Z^4) - E^2(Z^2), we must evaluate E(Z^4). Or,

    E_{AP}(Z^4) = \frac{1}{n^2 \sigma_0^4} \sum_{k=0}^{n} (k - n p_0)^4 B_k(n, p_1).

Expanding (k - n p_0)^4, using the appropriate moments, and subtracting E^2(Z^2), yields

    Var_{AP}(Z^2) = C_0 + C_1 n + C_{-1} n^{-1},     (13)

where the coefficients C_0, C_1, and C_{-1} are functions of \varepsilon_{AP}, \sigma_0, and p_0. Under the condition that \varepsilon_{AP} << 1, ...

* Johnson, N. L. and Kotz, S., Continuous Univariate Distributions-2, John Wiley & Sons, New York, p. 134, (1970).
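As an informal check on the binomial results above, the following Monte Carlo sketch (ours, not part of the original report; the parameter values are arbitrary) verifies Equations 6 and 7 (MCE) and Equations 11 and 12 (PAP) by simulation.

    import numpy as np

    rng = np.random.default_rng(1)

    def binomial_z(n, p, trials, p0=0.5):
        """Z = (k - n*p0) / (sigma_0 * sqrt(n)) for 'trials' independent runs of n Bernoulli samples."""
        sigma0 = np.sqrt(p0 * (1.0 - p0))
        k = rng.binomial(n, p, size=trials)
        return (k - n * p0) / (sigma0 * np.sqrt(n))

    n, eps_ap, trials = 100, 0.02, 200_000
    p0, sigma0 = 0.5, 0.5

    # MCE (Equations 6 and 7): E(Z^2) = 1 and Var(Z^2) = 2 - 2/n for p0 = 0.5.
    z = binomial_z(n, p0, trials)
    print(np.mean(z ** 2), np.var(z ** 2))   # ~1.00, ~1.98

    # PAP (Equations 11 and 12) with p1 = p0 + eps_AP * sigma_0:
    # E(Z) = eps_AP * sqrt(n) = 0.2 and E(Z^2) = 1 + eps_AP^2 * (n - 1) = 1.0396 here.
    z = binomial_z(n, p0 + eps_ap * sigma0, trials)
    print(np.mean(z), np.mean(z ** 2))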