
Slippery Skepticism

I know most of us here have pretty low expectations of ‘skeptical’ articles written by Michael Shermer. But his latest ‘Skeptic’ column for Scientific American pretty much wins the prize for plumbing the lowest depths. This time around, he manages to heavily imply that Daryl Bem – a respected and influential psychologist – ‘massaged’ the data in his recent, controversial precognition study. He finishes his article by picking out a quote from CSICOPian James Alcock, who critiqued Bem’s experiments in an article for Skeptical Inquirer titled “Back from the Future”:

Perhaps they missed what psychologist James Alcock of York University in Toronto found in Bem’s paper entitled “Writing the Empirical Journal Article” on his Web site, in which Bem instructs students: “Think of your data set as a jewel. Your task is to cut and polish it, to select the facets to highlight, and to craft the best setting for it. Many experienced authors write the results section first.”

Now, Bem has already responded to elements of Alcock’s critique, and Dean Radin has also criticised Alcock’s “rewriting” of psi history/results. But I thought that perhaps I’d check the quote that Shermer uses, to have a look at the context. First, I noticed that Shermer had only used part of a longer excerpt quoted in Alcock’s article, which featured even more ‘dodgy’-sounding suggestions from Bem on writing a research paper. I also couldn’t fail to notice a number of ellipses in Alcock’s quoting, so I went back to Bem’s actual article to see what was missing. And wouldn’t you know it, the missing bits are fairly important.

In his article, Bem is advising researchers to not just ‘go through the motions’ with their data – instead he exhorts them to look through the data for other interesting patterns, for this is how real discoveries are made. Alcock quotes the parts of his article which suggest that Bem just looks for patterns and makes post-hoc conclusions from them, but conveniently leaves out the sections where Bem warns about the dangers of this. And Shermer plays off the “write the results first” passage, even though it’s simply suggesting a method of *writing* the paper, rather than being an admission of starting the experiment with a conclusion already in mind.

Below I have quoted the section from the Bem article that Alcock quoted, with the parts that Alcock left out italicised and enclosed in brackets and ellipses. I have also bolded particular phrases of considerable importance that Alcock chose to edit out:

Once upon a time, psychologists observed behavior directly, often for sustained periods of time. No longer. Now, the higher the investigator goes up the tenure ladder, the more remote he or she typically becomes from the grounding observations of our science. If you are already a successful research psychologist, then you probably haven’t seen a participant for some time. Your graduate assistant assigns the running of a study to a bright undergraduate who writes the computer program that collects the data automatically. And like the modern dentist, the modern psychologist rarely even sees the data until they have been cleaned by human or computer hygienists.

To compensate for this remoteness from our participants, let us at least become intimately familiar with the record of their behavior: the data. Examine them from every angle. Analyze the sexes separately. Make up new composite indices. If a datum suggests a new hypothesis, try to find further evidence for it elsewhere in the data. If you see dim traces of interesting patterns, try to reorganize the data to bring them into bolder relief. If there are participants you don’t like, or trials, observers, or interviewers who gave you anomalous results, drop them (temporarily). Go on a fishing expedition for something — anything — interesting.

…[No, this is not immoral. The rules of scientific and statistical inference that we overlearn in graduate school apply to the “Context of Justification.” They tell us what we can conclude in the articles we write for public consumption, and they give our readers criteria for deciding whether or not to believe us. But in the “Context of Discovery,” there are no formal rules, only heuristics or strategies. How does one discover a new phenomenon? Smell a good idea? Have a brilliant insight into behavior? Create a new theory? In the confining context of an empirical study, there is only one strategy for discovery: exploring the data.

Yes, there is a danger. Spurious findings can emerge by chance, and we need to be cautious about anything we discover in this way. In limited cases, there are statistical techniques that correct for this danger. But there are no statistical correctives for overlooking an important discovery because we were insufficiently attentive to the data. Let us err on the side of discovery.]…

When you are through exploring, you may conclude that the data are not strong enough to justify your insights formally, but at least you are now ready to design the ‘right’ study. …[If you still plan to report the current data, you may wish to mention the new insights tentatively, stating honestly that they remain to be tested adequately.]… Alternatively, the data may be strong enough to justify re-centering your article around the new findings and subordinating or even ignoring your original hypotheses.

…[This is not advice to suppress negative results. If your study was genuinely designed to test hypotheses that derive from a formal theory or are of wide general interest for some other reason, then they should remain the focus of your article. The integrity of the scientific enterprise requires the reporting of disconfirming results.]…

Your overriding purpose is to tell the world what you have learned from your study. If your research results suggest a compelling framework for their presentation, adopt it and make the most instructive findings your centerpiece. Think of your data set as a jewel. Your task is to cut and polish it, to select the facets to highlight, and to craft the best setting for it. Many experienced authors write the results section first.

To be clear, I have left a few passages out as well, in order to highlight the specific, important passages that Alcock omitted – you can read the entire article here.

That’s not to say that there may not be genuine problems in replicating Bem’s results. Ben Goldacre mentions a new paper from Richard Wiseman, Chris French and Stuart Ritchie which attempted to replicate one of Bem’s experiments, and failed to find any significant results. I’ve seen the paper, and it seems to be pretty solid (though I of course leave it to more informed participants to debate the finer points). It was interesting to note, though, that Ben Goldacre – a ‘watchdog’ of bad science writing – himself writes quite a poor article about the experiments. He says that the researchers “have re-run three of these backwards experiments”, when in fact they re-ran only one of Bem’s nine experiments, three times. Worse though is his framing of the experiments as “cheesy”, saying “I wasn’t very interested, for the same reasons you weren’t”. Actually, dear writer, I was very interested in Bem’s experiments…so please don’t talk on my behalf.

But the ripple effect begins: for example, this io9 article tells readers that, based only on Shermer’s debunking article(!), “we can officially dismiss this study.” And why wouldn’t you trust the writer, with his obvious familiarity with Bem’s research paper, when he points out another big problem: “None of Bem’s other eight experiments showed any signs of such a precognitive effect”. Except maybe for the small fact that Bem’s paper says that of his nine experiments “all but one of them yielded statistically significant results” (though, to be fair, that note was buried in the abstract…I mean, who’s going to read that far?). Eight out of nine were significant, not eight out of nine non-significant…just a teensy error. Perhaps a little article updating might be in order, Mr Wilkins?

But beyond the validity of Bem’s precognition results, which remains undecided at this point, it’s worth standing back and looking at this reaction from the likes of Alcock, Shermer and Goldacre (and other scientists and journalists). What drives the antipathy? Bem is highly respected in his field, and he designed “kosher” experiments which turned up something interesting (though not exactly proving it). This is where science is meant to say dispassionately “let’s see if it can be replicated”. Instead we have a columnist for Scientific American going out of his way to bring Bem’s scientific approach into disrepute, based on another psychologist’s use of selective quoting to sow doubt in the reader’s mind. I think a guy by the name of Michael Shermer can probably explain this bizarre behaviour for us:

Scepticism is integral to the scientific process, because most claims turn out to be false. Weeding out the few kernels of wheat from the large pile of chaff requires extensive observation, careful experimentation and cautious inference. Science is scepticism and good scientists are sceptical.

Denial is different. It is the automatic gainsaying of a claim regardless of the evidence for it – sometimes even in the teeth of evidence. Denialism is typically driven by ideology or religious belief, where the commitment to the belief takes precedence over the evidence. Belief comes first, reasons for belief follow, and those reasons are winnowed to ensure that the belief survives intact.

Insightful!


  1. Responsibility
    As always, one concentrates on the people who write such articles, but of course the fault lies much deeper as one follows it along.
    First, the editors of the media and journals are the guilty ones for their editorial policies; next the owners of the media; next the politicians, for allowing untrue bias in all areas.
    But at the end of the day, it is the people, for being taken in and not having the time or inclination to search out and find the truth.

    This is why capitalism must keep people struggling to pay a mortgage or the gas bill, so they have no mind to delve into the deeper aspects of life, such as the fact that production since the 1950s has risen many-fold, so how can it be that people in general are not retiring at, say, 40?
    Instead of fearing the loss of jobs, we should of course celebrate every time technology frees more workers for leisure time, or to help others.
    Every need and fair luxury could be produced by less than half the workforce.
    Rossi’s Ecat in October will give the world another chance to get it “right”; we shall see.

  2. from the Zeiticism-or-Bust-Dept.
    Which always makes me wonder if people have any free will at all, at all :3

    I’ve seen Shermer be agnostic about what he has been investigating — there is this one YouTube video where he investigates one astrologer’s claims…and the results at the end show that SOMETHING is going on :3

  3. Evening the scales
    Nice article, Greg! It’s always sad to see “reputable” sources unfairly quoting and representing another’s work. It makes me even happier to know that there are people like you assessing their work critically, not just framing “evidence” in a partial manner.

    As an aside, I find all the buzz about this article much more interesting than the results myself. If I remember correctly, the precognitive effect was only found <55% of the time. I appreciate what statistics can do for us, but based on my own common sense (whatever that means), 55%, albeit better than chance (true), is not very convincing to me.

  4. Eroding the Ramparts of Dogmatic Rationalism
    Thank you for bringing this to attention!

    I am finishing up a terrific book, which many here may have read: George P. Hansen’s “The Trickster and the Paranormal.” Hansen discusses at length how the dogmatic rationalists of CSICOP (who, he notes, rarely practice science themselves) feel compelled to marginalize scientific research into the paranormal by any means necessary. They must prevent the inherently destabilizing nature of such phenomena from eroding the ramparts of rationalism’s dominance and threatening the skeptic’s own elevated status. It is, therefore, in his self-interest to debunk and ridicule any parapsychological study which may claim positive results.

    Psi blurs boundaries. It subverts the binary divisions between outer and inner, mind and matter, and perhaps most importantly, subject and object. The subject-object divide is foundational not only to science but to the whole of Western thought. Any subject which calls into question the dominant worldview should expect to come under attack by those who benefit most from the existing social order.

  5. positive replication
    Hey Greg.

    First of all, great site and post. In your Twitter feed you mentioned a positive replication (in relation to the New Scientist article). Any more details?
    Michael.

  6. How does Shermer get work?
    Seriously, he must have photos of SciAm editors. How Shermer’s “review” of Leslie Kean’s UFOs book got printed beggars belief. Not only is it blatantly obvious he didn’t read the book and only skimmed the first chapter, but he stoops to ridicule and the old “anal probe” meme:

    [quote=Shermer]… what I call Completely Ridiculous Alien Piffle (CRAP), such as crop circles and cattle mutilations, alien abductions and anal probes, and human-alien hybrids?[/quote]

    Seriously SciAm — CRAP? That constitutes a mature, scientific review? The irony is Kean is the real skeptic, whereas Shermer is just a priest of CSI dogma.

    After all, this is the guy who played with alien action figures during a live television debate about UFOs on Larry King.

