I know most of us here have pretty low expectations of ‘skeptical’ articles written by Michael Shermer. But his latest ‘Skeptic’ column for Scientific American pretty much wins the prize for plumbing the lowest depths. This time around, he manages to heavily imply that Daryl Bem – a respected and influential psychologist – ‘massaged’ the data in his recent, controversial precognition study. He finishes his article by picking out a quote from CSICOPian James Alcock, who critiqued Bem’s experiments in an article for Skeptical Inquirer titled “Back from the Future”:
Perhaps they missed what psychologist James Alcock of York University in Toronto found in Bem’s paper entitled “Writing the Empirical Journal Article” on his Web site, in which Bem instructs students: “Think of your data set as a jewel. Your task is to cut and polish it, to select the facets to highlight, and to craft the best setting for it. Many experienced authors write the results section first.”
Now, Bem has already responded to elements of Alcock’s critique, and Dean Radin has also criticised Alcock’s “rewriting” of psi history/results. But I thought I’d check the quote that Shermer uses, to have a look at the context. Firstly, I noticed that Shermer had used only part of a longer excerpt quoted in Alcock’s article, which featured even more of Bem’s supposedly ‘dodgy’ suggestions for writing a research paper. I also couldn’t fail to notice a number of ellipses in Alcock’s quoting, so I went back to Bem’s actual article to see what was missing. And wouldn’t you know it, the missing bits are fairly important.
In his article, Bem advises researchers not to just ‘go through the motions’ with their data – instead, he exhorts them to look through the data for other interesting patterns, because this is how real discoveries are made. Alcock quotes the parts of the article which suggest that Bem just looks for patterns and draws post-hoc conclusions from them, but conveniently leaves out the sections where Bem warns about the dangers of doing so. And Shermer plays up the “write the results section first” passage, even though it simply suggests a method of *writing* the paper, rather than being an admission of starting the experiment with a conclusion already in mind.
Below I have quoted the section from the Bem article that Alcock quoted, italicising the parts that Alcock left out (and enclosing them in brackets and ellipses). I have also bolded particular phrases of considerable importance which Alcock chose to edit out:
Once upon a time, psychologists observed behavior directly, often for sustained periods of time. No longer. Now, the higher the investigator goes up the tenure ladder, the more remote he or she typically becomes from the grounding observations of our science. If you are already a successful research psychologist, then you probably haven’t seen a participant for some time. Your graduate assistant assigns the running of a study to a bright undergraduate who writes the computer program that collects the data automatically. And like the modern dentist, the modern psychologist rarely even sees the data until they have been cleaned by human or computer hygienists.
To compensate for this remoteness from our participants, let us at least become intimately familiar with the record of their behavior: the data. Examine them from every angle. Analyze the sexes separately. Make up new composite indices. If a datum suggests a new hypothesis, try to find further evidence for it elsewhere in the data. If you see dim traces of interesting patterns, try to reorganize the data to bring them into bolder relief. If there are participants you don’t like, or trials, observers, or interviewers who gave you anomalous results, drop them (temporarily). Go on a fishing expedition for something — anything — interesting.
…[No, this is not immoral. The rules of scientific and statistical inference that we overlearn in graduate school apply to the “Context of Justification.” They tell us what we can conclude in the articles we write for public consumption, and they give our readers criteria for deciding whether or not to believe us. But in the “Context of Discovery,” there are no formal rules, only heuristics or strategies. How does one discover a new phenomenon? Smell a good idea? Have a brilliant insight into behavior? Create a new theory? In the confining context of an empirical study, there is only one strategy for discovery: exploring the data.
Yes, there is a danger. Spurious findings can emerge by chance, and we need to be cautious about anything we discover in this way. In limited cases, there are statistical techniques that correct for this danger. But there are no statistical correctives for overlooking an important discovery because we were insufficiently attentive to the data. Let us err on the side of discovery.]…
When you are through exploring, you may conclude that the data are not strong enough to justify your insights formally, but at least you are now ready to design the ‘right’ study. …[If you still plan to report the current data, you may wish to mention the new insights tentatively, stating honestly that they remain to be tested adequately.]… Alternatively, the data may be strong enough to justify re-centering your article around the new findings and subordinating or even ignoring your original hypotheses.
…[This is not advice to suppress negative results. If your study was genuinely designed to test hypotheses that derive from a formal theory or are of wide general interest for some other reason, then they should remain the focus of your article. The integrity of the scientific enterprise requires the reporting of disconfirming results.]…
Your overriding purpose is to tell the world what you have learned from your study. If your research results suggest a compelling framework for their presentation, adopt it and make the most instructive findings your centerpiece. Think of your data set as a jewel. Your task is to cut and polish it, to select the facets to highlight, and to craft the best setting for it. Many experienced authors write the results section first.
To be clear, I have left a few passages out as well, in order to highlight the specific, important passages that Alcock omitted – you can read the entire article here.
That’s not to say that there may not be genuine problems in replicating Bem’s results. Ben Goldacre mentions a new paper from Richard Wiseman, Chris French and Stuart Ritchie which attempted to replicate one of Bem’s experiments, and failed to find any significant results. I’ve seen the paper, and it seems to be pretty solid (though I of course leave it to more informed participants to debate the finer points). It was interesting to note, though, that Ben Goldacre – a ‘watchdog’ of bad science writing – himself writes quite a poor article about the experiments. He says that the researchers “have re-run three of these backwards experiments”, when in fact they re-ran only one of Bem’s nine experiments, three times. Worse, though, is his framing of the experiments as “cheesy”, saying “I wasn’t very interested, for the same reasons you weren’t”. Actually, dear writer, I was very interested in Bem’s experiments…so please don’t speak on my behalf.
But the ripple effect begins: for example, this io9 article tells readers that, based only on Shermer’s debunking article(!), “we can officially dismiss this study.” And why wouldn’t you trust the writer, with his obvious familiarity with Bem’s research paper, when he points out another big problem: “None of Bem’s other eight experiments showed any signs of such a precognitive effect”. Except maybe for the small fact that Bem’s paper says that of his nine experiments, “all but one of them yielded statistically significant results” (though, to be fair, that note was buried in the abstract…I mean, who’s going to read that far?). Eight out of nine experiments were significant, not non-significant…just a teensy error. Perhaps a little article updating might be in order, Mr Wilkins?
But beyond the validity of Bem’s precognition results, which remains undecided at this point, it’s worth standing back and looking at this reaction from the likes of Alcock, Shermer and Goldacre (and other scientists and journalists). What drives the antipathy? Bem is highly respected in his field, and he designed “kosher” experiments which turned up something interesting (though not exactly proving it). This is where science is meant to say dispassionately “let’s see if it can be replicated”. Instead we have a columnist for Scientific American going out of his way to bring Bem’s scientific approach into disrepute, based on another psychologist’s selective quoting designed to sow doubt in the reader’s mind. I think a guy by the name of Michael Shermer can probably explain this bizarre behaviour for us:
Scepticism is integral to the scientific process, because most claims turn out to be false. Weeding out the few kernels of wheat from the large pile of chaff requires extensive observation, careful experimentation and cautious inference. Science is scepticism and good scientists are sceptical.
Denial is different. It is the automatic gainsaying of a claim regardless of the evidence for it – sometimes even in the teeth of evidence. Denialism is typically driven by ideology or religious belief, where the commitment to the belief takes precedence over the evidence. Belief comes first, reasons for belief follow, and those reasons are winnowed to ensure that the belief survives intact.
Insightful!