Parapsychology Journalism: The People, The Theory, The Science, The Skeptics
To get you in the proper mood to evaluate the latest skepticism regarding psi research, I present a two-minute clip from the otherwise somewhat forgettable movie Erik the Viking. The island is sinking, and the mayor and many of the townspeople are in complete denial:
This all started from a previous blog post some years ago that made this statement:
The point of all this is not to mock parapsychology for the sake of it, but rather to emphasise that parapsychology is useful as a control group for science. Scientists should aim to improve their procedures to the point where, if the control group used these same procedures, they would get an acceptably low level of positive results. That this is not yet the case indicates the need for more stringent scientific procedures.
From there, a blog post by physician Scott Alexander, titled “The Control Group Is Out of Control,” has dragged this convoluted reasoning out from under a rock, where it should have stayed:
The results are pretty dismal. Parapsychologists are able to produce experimental evidence for psychic phenomena about as easily as normal scientists are able to produce such evidence for normal, non-psychic phenomena. This suggests the existence of a very large “placebo effect” in science – ie with enough energy focused on a subject, you can always produce “experimental evidence” for it that meets the usual scientific standards. As Eliezer Yudkowsky puts it:
Parapsychologists are constantly protesting that they are playing by all the standard scientific rules, and yet their results are being ignored – that they are unfairly being held to higher standards than everyone else. I’m willing to believe that. It just means that the standard statistical methods of science are so weak and flawed as to permit a field of study to sustain itself in the complete absence of any subject matter.
The reasoning goes something like this:
1. Psychic ability cannot possibly exist, yet . . .
2. Parapsychology routinely gets positive results, therefore . . .
3. The statistical methods themselves must be flawed.
It’s pretty obvious that this line of reasoning is, well, batshit crazy. Faced with the realization that parapsychological research is statistically and methodologically sound, these otherwise very intelligent people have gotten themselves caught up in an immature, playground-quality argument. If they can’t win, well then, the whole game must be stupid.
The renewed interest in this idiotic line of reasoning was occasioned by a new meta-analysis of Daryl Bem’s “Feeling the Future” experiments, covering 90 experiments in all. (You can find a description of the experiment here.) A little background is required to understand why this set of experiments, and not, say, the Ganzfeld, is so difficult for skeptics to brush off. For starters, there is Bem himself. There is no way to dismiss him as a fringe parapsychology nut. He is a social psychologist and professor emeritus at Cornell University. He is also the originator of the self-perception theory of attitude change. One thing he has always been good at is getting published in mainstream journals. The author or co-author of over 70 scientific papers, an editor of five different mainstream journals and a reviewer for many more, and the author or co-author of seven psychology textbooks as well as other psychology-related books, Bem is not someone who can be taken lightly. He is well known for being a good experimenter.
In early 2011, his paper, “Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect,” published in the prestigious Journal of Personality and Social Psychology, came out to the usual mixed-bag response. What was hard to ignore was that JPSP is one of the most rigorously refereed journals in the entire field of psychology, with a rejection rate of 82% in 2009. The paper was accepted based on the collective assessment of the two editors and four reviewers who vetted it for the journal. Moreover, authors’ names and other identifying information are removed from a manuscript before it is sent to reviewers, so that their evaluations are based purely on its merits and not influenced by knowledge of an author’s reputation or the prestige of his or her institutional affiliation.
The skeptical trash talk machine kicked into high gear. In January of 2011, James Alcock, writing for the unapologetically biased Skeptical Inquirer, published a factually challenged, over-the-top piece slamming not only Bem’s paper but all of parapsychology as well. Bem responded to this. The more moderate mainstream skeptical response was that there weren’t enough replications to accept the results. What was particularly interesting was that no one attacked the methodology this time; it had already been vetted by JPSP and was therefore beyond reproach.
In 2010, skeptic Richard Wiseman and his colleague Caroline Watt set up a registry for replications of Bem’s experiments. It was an ingenious attempt to grab control of the replications and make it appear as if the experiment was a complete failure. In 2012, Wiseman gathered up his meager results, wrote a paper, and shopped it around until he found a journal to accept it; it was published in March of 2012 as “Failing the Future: Three Unsuccessful Attempts to Replicate Bem’s ‘Retroactive Facilitation of Recall’ Effect.” This failure to replicate got a great deal of press, as noted in the comment section of that paper.
What Wiseman never tells people about Ritchie, Wiseman and French is that his online registry, where he asked everyone to register, came with a deadline date. I don’t know of any serious researcher working on their own stuff who is going to drop everything and immediately do a replication… Anyway, he and Ritchie and French published these three studies. Well, they knew that there were three other studies that had been submitted and completed, and two of the three showed statistically significant results replicating my results. But you don’t know that from reading his article. That borders on dishonesty.
As of 2012, as far as the mainstream press knew, Bem’s experiments had started out promisingly, but no one had been able to replicate them. In fact, other, somewhat similar types of precognition experiments had been around for many years, with successful replications. An early description of one can be found in Dean Radin’s landmark book “The Conscious Universe” (pp. 118–124), published in 2000.
In 2012, Julia Mossbridge, Patrizio Tressoldi, and Jessica Utts published a meta-analysis of similar studies dating back to 1978. (Update: The experiments have some significant differences. See Michael Duggan’s comment below for clarification.) Studies into subliminal precognition had been around a long time and weren’t anything new. As with most science, this new experiment wasn’t a bolt out of the blue, but just another iteration of an already tried and true area of study. In parapsychology, this effect was already taken for granted.
What Bem mostly did was use his knowledge and experience to tighten up the experiment, improve the protocol, make sure it was easily replicable and, most importantly, get it published in a high-profile way. There was never any real chance of failure. Over the long term, the experiment was always going to accumulate a sufficiently large number of successful replications; it was only a matter of time.
From a scientific perspective, this happened rather quickly. In early 2014, the first comprehensive meta-analysis was made public (although it was still under review at the time). A quote from the abstract sums it up:
In 2011, the Journal of Personality and Social Psychology published a report of nine experiments purporting to demonstrate that an individual’s cognitive and affective responses can be influenced by randomly selected stimulus events that do not occur until after his or her responses have already been made and recorded (Bem, 2011). To encourage exact replications of the experiments, all materials needed to conduct them were made available on request. We can now report a meta-analysis of 90 experiments from 33 laboratories in 14 different countries which yielded an overall positive effect in excess of 6 sigma with an effect size (Hedges’ g) of 0.09, combined z = 6.33, p = 1.2 × 10⁻¹⁰. A Bayesian analysis yielded a Bayes Factor of 1.24 × 10⁹, greatly exceeding the criterion value of 100 for “decisive evidence” in favor of the experimental hypothesis (Jeffreys, 1961).
An effect in excess of 6 sigma exceeds the 5-sigma proof that was required for the Higgs boson. It’s very, very significant. And so now there is a replicated, bulletproof experiment that demonstrates the existence of psychic ability. The methodology was vetted by a prestigious mainstream journal, so there is no wiggle room there. You can’t play the “selective reporting” (a.k.a. file-drawer effect) card, because the study hasn’t been around long enough for a pile of unreported studies to have accumulated. And just as importantly, care was taken to provide exact replications. It’s not really possible to dispute the results of this experiment in any truly scientific way, which is why skeptics have resurrected the whole “statistical science itself must be flawed” argument. They have completely run out of any remotely sane options for disagreement.
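If you want to check that the abstract’s numbers hang together, a z-score converts to a one-tailed p-value through the standard normal tail. A minimal sketch using only Python’s standard library (the z values are taken from the abstract quoted above and from the conventional 5-sigma discovery threshold; nothing else is assumed):

```python
from math import erfc, sqrt

def one_tailed_p(z):
    """Upper-tail probability of a standard normal distribution at z."""
    return 0.5 * erfc(z / sqrt(2))

# Combined z reported in the meta-analysis abstract quoted above
p_bem = one_tailed_p(6.33)
# The conventional 5-sigma discovery threshold, for comparison
p_higgs = one_tailed_p(5.0)

print(f"z = 6.33 -> p = {p_bem:.1e}")    # roughly 1.2e-10
print(f"z = 5.00 -> p = {p_higgs:.1e}")  # roughly 2.9e-07
```

The first result matches the abstract’s p = 1.2 × 10⁻¹⁰, and the comparison shows that z = 6.33 is indeed a smaller tail probability than the 5-sigma standard.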
So what now? Well, it’s always possible that some group of skeptics will get together to do a Bayesian analysis, set their prior probability to something impossible, and then claim that the meta-analysis actually showed no effect. It wouldn’t be the first time that has happened. Daryl Bem, Jessica Utts, and Wesley Johnson wrote an excellent and very readable paper on how this game was played with Bem’s original study. There, Wagenmakers, Wetzels, Borsboom, and van der Maas (2011) did something ridiculous:
The first and most familiar is the prior odds that H0 rather than H1 is true. It is here that Wagenmakers et al. (2011) formally expressed their prior skepticism about the existence of psi by setting these odds at 99,999,999,999,999,999,999 to 1 in favor of H0. [The odds that psychic ability does not exist.]
If they pulled the same stunt with the meta-analysis, that huge number would have to be exponentially higher, so ridiculous that even they might balk at it. But the question is: would anyone examine the skeptical paper to find out whether the prior probability had been set to a plausible number? Probably not. What the media would focus on is that someone disagreed. It’s complete garbage, but it would make the results of the meta-analysis “controversial,” and that’s all the mainstream press would need to blow it off. Aaaand, it wouldn’t be the first time this has happened.
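To see how the prior-odds stunt works arithmetically: in a Bayesian analysis, posterior odds are simply prior odds multiplied by the Bayes factor. A minimal sketch, assuming the 1.24 × 10⁹ Bayes factor from the quoted abstract and the roughly 10²⁰-to-1 prior odds Wagenmakers et al. (2011) used against psi:

```python
from math import log10

# Bayes factor in favor of the psi hypothesis, from the meta-analysis abstract
bayes_factor = 1.24e9
# Skeptical prior odds against psi (the ~10^20-to-1 figure quoted above)
prior_odds_against = 1e20

# Posterior odds against psi = prior odds against / Bayes factor in favor
posterior_odds_against = prior_odds_against / bayes_factor
print(f"posterior odds against psi: ~10^{log10(posterior_odds_against):.0f} to 1")
```

With those numbers, the posterior still comes out at roughly 10¹¹ to 1 against psi. In other words, even “decisive” evidence (a Bayes factor vastly exceeding the criterion of 100) cannot overcome a prior set that high, which is precisely why choosing such a prior guarantees the desired conclusion.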