Stephanie West Allen brought to my attention a blog entry written by magician, motivational speaker, and physician, Steve Bedwell, entitled “When Harvard Meets Hogwarts: What Can Scientists Learn From Magicians?”. Let me start by saying that I have great respect for Bedwell as a magician and creative thinker. The first time I saw his “Reboxed” effect performed live, it broke my brain. His essay, which is critical of research examining the psychological and neuroscientific bases of magic, follows on the heels of Teller’s recent critique in Smithsonian Magazine (and on NPR), on which we commented here. Unfortunately, as was the case with Teller’s article, Bedwell’s critique is rather short-sighted and based on limited information. He argues that:
To date, much of the work that has been published on the neuroscience of magic has involved honing in on one marginal feature of extremely complex sleight-of-hand. The neuroscientific explanation of why the trick is deceptive then becomes all about this one, often tangential, subtlety.
This faulty assertion is derived, I believe, from two different problems: One of these is a general misunderstanding of the scientific method, and one is a failure of researchers to communicate the limitations of their research. As I noted in my commentary on Teller’s article, there is redundancy built into every piece of magic in order to optimize the odds of successful deception. However, it is not always clear that each of these redundancies has a real effect on the audience’s perception. This is one of the roles that science can play in the life of the magician: to determine whether the benefit of each subtlety is “real.”
More importantly, however, this “honing in” on a single element of deception is also the best route forward, as far as the scientific method is concerned. Much research is inherently reductionistic. That is, we have to be able to manipulate variables independently of each other in order to assess the relative contribution that each variable makes to changes in the dependent variable (in this case, deception). Theoretically, it would be possible to design an experiment that independently manipulates every aspect of a magic trick, such as the coin vanish described in Bedwell’s essay, within a factorial design. However, this would be extremely difficult, and the complexity of the final experiment could introduce a host of other potential confounds. So, instead, researchers will focus on a single variable or a small number of variables that can be manipulated independently of each other without much trouble.
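To make the combinatorial problem concrete, here is a minimal sketch of why a fully-crossed factorial design gets unwieldy so quickly. The factor names below are purely illustrative (they are not drawn from any published study); the point is just that crossing every level of every variable multiplies the number of conditions an experimenter would need to run.

```python
from itertools import product

# Hypothetical features of a coin vanish that one might manipulate
# independently (names are illustrative, not from any actual study).
factors = {
    "hand_motion": ["curved", "straight"],
    "gaze_cue": ["toward_coin", "away_from_coin"],
    "patter": ["present", "absent"],
    "timing": ["fast", "slow"],
}

# A full factorial design crosses every level of every factor.
conditions = list(product(*factors.values()))

# Just four two-level factors already yield 2^4 = 16 distinct conditions,
# each needing its own group of participants (or repeated trials).
print(len(conditions))  # → 16
```

Add a fifth or sixth manipulated feature and the design doubles again each time, which is exactly why researchers isolate one variable (or a small set) and hold the rest constant.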
This was the tactic taken in a recent paper examining perception of the “French Drop,” a piece of sleight of hand often used to make a coin vanish (Otero-Millan et al., 2011). The authors focused primarily on a single variable: the type of movement that the magician’s hand made as it retreated after feigning transfer of the coin from one hand to the other. However, there’s a clear reason why they chose to examine only one variable. They had a specific hypothesis that they were testing, which related only to that variable! This is the way science works, and it should not be interpreted as ignorance of the fact that magical deceptions are multiply determined. All dependent variables are influenced by myriad factors, but a good experiment holds these extra variables constant while manipulating only the variables that are relevant to the hypothesis. It’s also important to note that science builds upon itself, so future studies may examine other variables involved in successful deception via the French Drop. The story certainly does not end with a single study.
This brings me to the second source of Bedwell’s gripe. Scientists are apt to exaggerate the implications of their findings. Unfortunately, this is a natural consequence of the publish-or-perish scientific culture and the nature of grant funding. Scientists have to convince funding agencies and the rest of the scientific community that their work is important. This is especially true for relatively new areas of inquiry, such as the science of magic. However, if an author’s exaggerations are not tempered in print by a parallel acknowledgement that additional factors are likely to be at play in the real world, a reader who is not accustomed to “discussion section exaggerations” could come away with the impression that the authors are myopic in their handling of the problem under investigation.
Returning to Bedwell’s essay, he continued by taking aim at the inattentional blindness work of Gustav Kuhn and Ben Tatler (2005; advisers to two of the authors of this blog), who used a magic trick to examine the relationship between inattentional blindness (our tendency to miss salient pieces of our environment while engaged in an attentionally-demanding task) and eye movements. In the trick, the magician makes a cigarette vanish by dropping it into his lap while attention is directed elsewhere. (Click here to see the video and learn more about the method.) Bedwell says of Kuhn’s work:
However, any study employing magic (…) is confounded by one crucial factor. The experimental subjects are knowingly watching a magic trick and being asked to self-report on whether or not it fooled them. Here’s the problem: What about subjects who don’t see the dump of the cigarette, but figure out where the cigarette must be hiding? (…) How do the researchers distinguish between fooling both the eye and the mind, and fooling the eye but not the mind? This is especially relevant with experiments that employ unsophisticated magic tricks because, while this makes for relatively simple analysis, the tricks are also easily figured out.
As was the case with much of Teller’s critique of the science of magic, Bedwell demonstrates that he has a very limited knowledge of the work that has actually been carried out in this field. The “crucial factor” that he sees as a confound for the whole research program has, in fact, been accounted for within the design. In Kuhn and Tatler’s original study, only half of the participants knew that they were about to watch a magic trick. Beyond this, they were given extra information that usually isn’t available to audience members at a magic show. They were told exactly what was going to happen in the magic trick, and that their job was to figure out how the magic trick was accomplished. The other half believed that they were about to take part in a picture-rating task.
Surprisingly (and problematically for Bedwell’s critique), it turned out that the two groups differed very little in their viewing of the magic trick. Gaze patterns during viewing of the trick did not differ between groups, and rates of inattentional blindness differed very little as well. None of the uninformed participants detected the method behind the magic trick, and only two of ten informed participants detected the falling cigarette, despite the fact that they had scads of information that should have helped them deploy their attention appropriately to detect the drop. Thus, Bedwell’s confound is hardly that.
Bedwell also fails to appreciate that the 2005 paper which he referenced was only the first in a line of experiments using this magic trick to examine inattentional blindness. Remember how science builds upon itself? Each additional paper refined the method and addressed further potential confounds and considerations. Specifically, both Kuhn et al. (2008) and Kuhn and Findlay (2010) assessed whether participant inference undermined their findings, as Bedwell suggested in his critique. It turns out that Bedwell’s criticism is another relative non-issue. If you watched the video in the link above, you may have noted that the cigarette vanish followed the disappearance of a cigarette lighter (using the same methodology). In addition to asking participants whether they detected how the cigarette vanish was accomplished, Kuhn et al. (2008) asked participants how the lighter disappeared. None of the participants who detected the falling cigarette claimed knowledge of how the lighter was made to vanish. Had they inferred information about the cigarette (detected with their minds rather than their eyes, in Bedwell’s terms), it would not have been a far leap to generalize that inference to the lighter vanish. Since none made this generalization, it suggests that there was minimal inference driving participant behavior. For this subset of participants, the cigarette drop did not fool their eyes or their minds, but the lighter vanish, using an identical methodology, fooled both!
Furthermore, Kuhn and Findlay (2010) directly assessed the inference hypothesis through a manipulation in their methodology. They instituted a “fake” condition where the falling object was digitally edited out of the video. Thus, if participants reported detection of a falling object, it could only be the result of inference, as there was no object to detect! Their eyes should be fooled, but their minds shouldn’t be. In this condition, none of the participants reported seeing how the trick was accomplished (which is good!). However, when prompted to guess at the method, 40% of participants correctly inferred that the object was dropped. In the “real” condition (where the object was visibly dropped), no participants who failed to detect the drop inferred the correct method. These results suggest that participants can successfully dissociate perception from inference and are generally honest in their self-reports, undermining Bedwell’s critique.
We at the “Science of Magic” blog do glean one very important point from Bedwell’s piece. While, as I’ve tried to demonstrate, his inference hypothesis is not a problem for Kuhn’s inattentional blindness research, it is an interesting topic for future research. What are the limits upon participants’ abilities to reconstruct the method of a magic trick post hoc? What are the variables that may interact with this inferential process? One of these is likely to be what magicians refer to as “time misdirection,” an idea that I am planning to address in the long-awaited final entry in my “Gestalt Magic” series here on the blog. Stay tuned on that one!
To conclude what I had intended to be a short-and-sweet blog entry, I must admit that I am rather befuddled by the group of magicians who are resistant to psychology’s interest in their methods. Bedwell may not be among this group; his piece opens with a generally positive perspective on the integration of science and magic. Admittedly, it is a very small but vocal subset of the magic community that opposes this research, and nearly every magician with whom I have spoken personally tends to be hugely supportive of the “science of magic” movement. Discomfort with the field of study may be due to the exaggeration inherent in popular media reporting of scientific results, which could also foster the perception that researchers have tunnel vision when it comes to the underlying mechanisms of magic. I am all for fighting against bad science, but uninformed critiques do a disservice to valid science by painting a warped image of the methods that are used in the laboratory on a day-to-day basis. In a society whose political climate tends to undervalue science, dissemination of this kind of misinformation only acts to strengthen an irrational bias against the scientific establishment as a whole.
Kuhn, G. & Findlay, J. M. (2010). Misdirection, attention and awareness: Inattentional blindness reveals temporal relationship between eye movements and visual awareness. Quarterly Journal of Experimental Psychology, 63, 136-146. (link)
Kuhn, G. & Tatler, B. W. (2005). Magic and fixation: Now you don’t see it, now you do. Perception, 34, 1155-1161. (link)
Kuhn, G., Tatler, B. W., Findlay, J. M., & Cole, G. G. (2008). Misdirection in magic: Implications for the relationship between eye gaze and attention. Visual Cognition, 16, 391-405. (link)
Otero-Millan, J., Macknik, S. L., Robbins, A., & Martinez-Conde, S. (2011). Stronger misdirection in curved than in straight motion. Frontiers in Human Neuroscience, 5: 133. (link)