Do You Believe in Eye-Beams?

By Neuroskeptic | December 23, 2018 6:20 am

Do you believe that people’s eyes emit an invisible beam of force?

According to a rather fun paper in PNAS, you probably do, on some level, believe that. The paper is called Implicit model of other people’s visual attention as an invisible, force-carrying beam projecting from the eyes.

To show that people unconsciously believe in eye-beams, psychologists Arvid Guterstam et al. had 157 MTurk volunteers perform a computer task in which they had to judge the angle at which paper tubes would lose balance and tip over. At one side of the screen, a man was shown staring at the tube.

The key result was that volunteers rated the tube more likely to tip over if it was tilted in the direction away from the man gazing at it – as if the man’s eyes were pushing the tube away. The effect was small, with a difference in the estimated tip-angle of just 0.67 degrees between tipping-away and tipping-towards the man, but it was significant (p=0.006). No such effect was seen if the man was blindfolded, suggesting that his eyes had to be visible in order for the sense of force to be felt.

Some smaller follow-up experiments replicated the effect and also showed (Experiment 4) that the effect didn’t work if participants were told the tube was full of heavy concrete, which is consistent with the idea that people believed the eye-beams to be very weak.

Guterstam et al. conclude that:

People construct an implicit model of other people’s vision as an active process that emerges from an agent and that can physically affect objects in the world. This fictitious influence of gaze on objects is extremely subtle. If it were not, people would presumably notice the discrepancy between their perceptions and reality.

This is a fun paper because the belief that vision involves a force or beam coming out from the eyes is actually a very old one. The theory is called “extramission” and it was popular among the ancient Greeks, but few people would admit to believing in eye-beams today – even if the concept is well known in recent fiction:

[Image: hero_beams]

In fact, Guterstam et al. quizzed the volunteers in this study and found that only about 5% explicitly endorsed a belief in extramission. Excluding these believers didn’t change the experimental results.

This study seems fairly solid, although it seems a little fortuitous that the small effect found in the n=157 Experiment 1 was replicated in the much smaller (and hence surely underpowered) follow-up experiments 2 and 3C. I also think the stats are affected by the old erroneous analysis of interactions (i.e. the failure to test the difference between conditions directly), although I’m not sure whether this makes much difference here.

  • OWilson

    The ancient Greek version went something like this: you can only see a small needle dropped on the floor if your eye beams happen to fall on it. It does not “find” your eyes.

    Good enough, for those days. :)

    • Erik Bosma

      Well, they just really had it backwards.

    • Kamran Rowshandel

      That’s all the proof I need to know that nobody ever actually thought eyebeams were a thing.

  • Arvid Guterstam

    Thanks for this very nice post about our study. It was a happy surprise to discover it in my Twitter feed today! I would like to address the two points that you bring up in the last paragraph, one of which a reviewer of this paper also remarked on. I was meaning to write a short comment, but it got somewhat out of hand, so let me apologize beforehand for a lengthy response!

    First, the fact that a greater number of participants was needed to observe a robust tilt bias effect in experiment 1 compared to experiments 2 & 3 is probably related to exp 1 being conducted online on MTurk while exp 2 & 3 were in-lab experiments. Conducting online studies using MTurk has many advantages, but a major drawback is that you can’t control where participants are looking on the screen, whether they have other windows open while doing your task, or whether they properly understand the task instructions, etc. Thus, the signal-to-noise ratio in exp 1 was substantially lower than in exp 2 & 3 (which were conducted in a controlled laboratory environment with eye tracking), which could explain the higher number of participants needed in exp 1.

    The issue of how we report the statistics is one that we thought about deeply, and I am quite sure we reported them correctly. First, it should be noted that each of the bars shown in the figure is already a difference between two means (mean angular tilt toward the face vs. mean angular tilt away from the face), not itself a raw mean. What we report, in each case, is a statistical test on a difference between means. If I interpret your argument correctly, it suggests that the critical comparison for us is not this tilt difference itself, but the difference of tilt differences. In our study, however, I would argue that this is not the case, for a couple of reasons:

    In experiment 1 (a similar logic applies to exp 2), we explicitly spelled out two hypotheses. The first is that, when the eyes are open, there should be a significant difference between tilts toward the face and tilts away from the face. A significant difference here would be consistent with a perceived force emanating from the eyes. Hence, we performed a specific, within-subjects comparison between means to test that specific hypothesis. Doing away with that specific comparison would remove the critical statistical test. Our main prediction would remain unexamined. Note that we carefully organized the text to lay out this hypothesis and report the statistics that confirm the prediction. The second hypothesis is that, when the eyes are closed, there should be no significant difference between tilts toward the face and tilts away from the face (null hypothesis). We performed this specific comparison as well. Indeed, we found no statistical evidence of a tilt effect when the eyes were closed. Thus, each hypothesis was put to statistical test.

    One could test a third hypothesis: any tilt difference effect is bigger when the eyes are open than when the eyes are closed. I think this is the difference of tilt differences asked for. However, this is not a hypothesis we put forward. We were very careful not to frame the paper in that way. The reason is that this hypothesis (this difference of differences) could be fulfilled in many ways. One could imagine a data set in which, when the eyes are open, the tilt effect is not by itself significant, but shows a small positivity; and when the eyes are closed, the tilt effect shows a small negativity. The combination could yield a significant difference of differences. The proposed test would then provide a false positive, showing a significant effect while the data actually do not support our hypotheses.

    Of course, one could ask: why not include both comparisons, reporting on the tests we did as well as the difference of differences? There are at least two reasons. First, if we added more tests, such as the difference of differences, along with the tests we already reported, then we would be double-dipping, running overlapping statistical tests on the same data. The tests then become partially redundant and do not represent independent confirmation of anything. Second, as easy as it may sound, the difference-of-differences is not even calculable in a consistent manner across all four experiments (e.g., in the control experiment 4), and so it does not provide a standardized way to evaluate all the results.

    For all of these reasons, we believe the specific statistical methods reported in the manuscript are the simplest and the most valid. I totally understand that, at first glance, our statistics may seem to be affected by the erroneous-analysis-of-interactions error. But on deeper consideration, analyzing the difference-of-differences turns out to be somewhat problematic and also not calculable for some of our data sets.
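
    As a toy illustration of the false-positive scenario described above (a small positive tilt effect with eyes open, a small negative one with eyes closed, each possibly non-significant on its own, yet a “significant” difference of differences), here is a minimal simulation sketch in Python; every number below is invented for illustration and nothing comes from the study’s data:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      n = 40
      # Invented per-participant away-minus-toward tilt differences (degrees):
      open_diff = rng.normal(+0.3, 1.0, n)     # small positive effect, eyes open
      closed_diff = rng.normal(-0.3, 1.0, n)   # small negative effect, eyes closed

      print(stats.ttest_1samp(open_diff, 0))    # may fall short of significance alone
      print(stats.ttest_1samp(closed_diff, 0))  # may fall short of significance alone
      print(stats.ttest_rel(open_diff, closed_diff))  # can still come out "significant"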

    • https://www.facebook.com/numberofGod Douglas J. Bender

      You misspelled “I-Beam”. 😉

    • Nurfaizatul Aisyah

      Wow! Interesting study you have here. A lot of cultures believe in the power of the eyes, or the ‘evil eye’. Muslims do believe in the ‘evil eye’, or in Arabic, ‘ain.

    • Nick

      Arvid, you wrote: “One could test a third hypothesis: any tilt difference effect is bigger when the eyes are open than when the eyes are closed.”

      It seems to me that that is exactly the hypothesis that you *are* testing when you (perhaps implicitly) invite the reader to note that the difference between the two directions was statistically significant in one condition and not in the other. Hence, you should be testing the interaction. There does not appear to be anything special about what you are doing that avoids the necessity to do this. It’s not a question of how you choose to describe your hypotheses.

      An alternative might be to compute D0 (the mean difference between the facing/away angles in the control condition) and D1 (the difference between the facing/away angles in the experimental condition) for each participant, and conduct a paired-samples t test.

      Out of interest, could you calculate the interaction (or the alternative t test) and report the result here? Alternatively, perhaps you could make the dataset available so that interested readers can do this for themselves? (I didn’t find any indication in the article that the data have been made available yet.)
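
      For concreteness, a minimal sketch of the alternative paired-samples t test described above (Python; the data are simulated, loosely using the 0.67-degree figure quoted in the post, and are not the study’s):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n = 157
        # Simulated per-participant mean tilt differences (away minus toward, degrees):
        D1 = rng.normal(0.67, 2.0, n)   # eyes-open condition (invented 0.67-degree bias)
        D0 = rng.normal(0.00, 2.0, n)   # eyes-closed control (no bias)

        # A paired-samples t test on D1 vs. D0 tests the interaction directly,
        # i.e. whether the tilt bias is larger with eyes open than with eyes closed.
        t, p = stats.ttest_rel(D1, D0)
        print(f"t = {t:.2f}, p = {p:.4f}")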

  • John Washington Heights

    If “eye beam” is the proper term then I have no difficulty in registering my belief in them. Any habitué of the subway is familiar with the mysterious effect where looking at another’s face, who may be reading a book or be absorbed in his phone, maybe 20 or 30 feet away, will cause him suddenly to swivel his glance toward the onlooker. Let any who doubt experiment.

    • koenigsking

      That may be hard to explain but it’s even harder to argue with. Nice observation.

    • https://www.facebook.com/numberofGod Douglas J. Bender

      That is not evidence of an eye-beam. It is evidence of one’s outward-projecting astral force being sucked in by the person staring at you, and you detecting the existence and direction of your diminishing astral force. It’s science.

    • Bezoar

      Rupert Sheldrake is the British guy who espouses this ‘eye-beam’ idea. I remain dubious.

  • Ahcuah

    I can see it now. In addition to Creationists and Flat-Earthers, add Eye-Beamers.

    • Bezoar

      And, geocentrists.

      • https://youtu.be/6dm5fk84HtU Lee Rudolph

        I wonder what the joint distribution of Flat-Earthism and Geocentrism looks like. Certainly (meaning, based on my probably confused memory of what I’ve read about it, coupled with a complete lack of adverbial scruples) traditional Hollow-Earthers are overwhelmingly Geocentrists. But Flat-Earthers are a whole other kettlegriddle of fish.

        • misterveritas666

          I enjoy f*cking with both Flat Earthers and Hollow Earthers by insisting, on websites that propose Flat Earth, that the Earth is hollow, and the reverse on Hollow Earth sites, that the Earth is flat.

          Or I’ll just go on about how no Flat Earther can say what is on the OTHER side of the disk. Which is easy, doh, that’s where the Reptilian shapeshifters live.

          Actually, a Flat Earth disk and a Hollow Earth might be compatible, and a proper configuration of the inner Earth (in the shape of an extreme oval, or double funnel, with the wide part at the equator) might get around the gravity problem: on a Flat Earth with an equal distribution of matter on the disk, your travel feels increasingly uphill as you move towards the rim, due to the mass gravitational vectors. An inner double-funnel shape would place most of the mass at the edges, so gravity on each surface would always be “down”.

  • Deplorablewinner

    It makes sense when considering the large brain and eyes of aliens, and their telepathic abilities. I wonder if the quantum conundrum of particles behaving differently when observed may be associated with ‘eye beam’ theory? My cat sends me a ‘feed me’ message a lot.

  • E3dmond

    Just ask hunters or bird watchers if they exist. They know never to look directly at the animal’s head/eyes or it will be spooked.

About Neuroskeptic

Neuroskeptic is a British neuroscientist who takes a skeptical look at his own field, and beyond. His blog offers a look at the latest developments in neuroscience, psychiatry and psychology through a critical lens.
