Solar Silliness: The Heart-Sun Connection

By Neuroskeptic | March 22, 2018 3:11 pm

On Twitter, I learned about a curious new paper in Scientific Reports: Long-Term Study of Heart Rate Variability Responses to Changes in the Solar and Geomagnetic Environment by Abdullah Alabdulgader and colleagues.

According to this article, the human heart “responds to changes in geomagnetic and solar activity”. This paper claims that things like solar flares, cosmic rays and sunspots affect the beating of our hearts.

Spoiler warning: I don’t think this is true. In fact, I think the whole paper is based on a simple statistical error. But more on that later.

Here’s how the study worked. The authors – an international team including researchers from Saudi Arabia, Lithuania, NASA, and the HeartMath Institute (no, really) – recorded the heartbeats of 16 female volunteers. Data collection spanned a period of five months, with the cardiac recordings running for up to 72 hours at a stretch. These ECG recordings were then used to calculate the heart rate variability (HRV) from moment to moment. HRV measures the beat-to-beat variability in the heart rate, and is thought to be an index of heart health as well as emotional arousal.

The main part of the study was the correlation of the HRV data against 9 different ‘solar and geomagnetic’ phenomena. Here’s an overview of these cosmic variables:

[Image: overview of the nine solar and geomagnetic variables]

For each participant in the study, the authors correlated aspects of the HRV timeseries against the geosolar variables. This was done using linear regression. A large number of these regressions were performed, because the authors wanted to try various ‘lags’ for each of the geosolar measures, to test whether HRV was associated with (say) the cosmic ray count 3 hours previously (or 4 hours, or 5 hours… up to 40 hours). If this sounds like a lot of statistical tests, it was – but the authors corrected for multiple comparisons in a rigorous way.
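The paper’s data and code don’t appear to be public, but to make the setup concrete, here is a minimal Python sketch of what a lagged regression of this kind looks like – hrv and solar are hypothetical hourly arrays, not the authors’ actual variables:

```python
# A sketch (not the authors' code) of a lagged linear regression:
# regress hrv[t] on solar[t - lag] for a range of lags.
import numpy as np
from scipy import stats

def lagged_regressions(hrv, solar, max_lag_hours=40):
    """Return (lag, slope, p_value) for lag = 0 .. max_lag_hours.

    hrv and solar are assumed to be equal-length, hourly 1-D arrays.
    """
    hrv, solar = np.asarray(hrv, float), np.asarray(solar, float)
    results = []
    for lag in range(max_lag_hours + 1):
        if lag == 0:
            x, y = solar, hrv
        else:
            x, y = solar[:-lag], hrv[lag:]  # pair hrv at time t with solar at t - lag
        fit = stats.linregress(x, y)
        results.append((lag, fit.slope, fit.pvalue))
    return results
```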

Based on the results of this analysis, the authors found that “HRV measures react to changes in geomagnetic and solar activity during periods of normal undisturbed activity… cosmic rays, solar radio flux, and Schumann resonance power are all associated with increased HRV.”

Unfortunately, I think the analysis is fatally flawed. The problem is one that regular readers may remember: autocorrelation, also known as non-independence of observations.

Simply put, you shouldn’t use linear regression to compare two time-series. The basic assumption of any regression (or correlation) analysis is that the data points are independent of each other. In a time-series, the points are not independent: two observations close together in time are likely to be more similar than two observations far apart in time (in technical terms, time-series are usually autocorrelated).

Non-independence is an insidious statistical problem that accounts for many spurious results. Previously I’ve blogged about two (1, 2) published papers which were, I believe, based on false conclusions caused by failing to account for non-independent data. This paper makes a third.

*

Here’s a simple analysis I ran to illustrate how autocorrelation can generate spurious correlations. I couldn’t use the data from the sun-heart study for this purpose, because the authors don’t seem to have shared it, so I took two time-series datasets from the internet.

The first dataset is the monthly temperature average for London, England. The second is the yearly number of publications on PubMed containing the words ‘heart rate variability’ for the past 12 years (2006-2017). These are the first two variables I thought up: I did not cherry-pick them.

[Figure: the two time-series plotted side by side]

Clearly, there cannot be a true relationship between these two time-series. They are unrelated in every way. They don’t even have the same timescale: one is in months, the other is in years.

However, if you calculate the correlation coefficient between these two, it is statistically significant (p < 0.05) in 7 out of 12 cases! The 12 cases are the 12 possible ways of lining up the two time-series – i.e. January = 2006, or January = 2007, or January = 2008… and so on. Remember, Alabdulgader et al. tried lots of different alignments (lags) too.

The reason for the high false positive rate is autocorrelation: both of the time-series show gradual changes over time, so each datapoint is quite similar to the previous one, meaning that the datapoints are not independent.
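I haven’t posted the temperature and PubMed numbers here, but the same point can be made with purely synthetic data. In the sketch below, two independent random walks – strongly autocorrelated, and causally unrelated by construction – come out ‘significantly’ correlated far more often than the nominal 5% of the time:

```python
# Two independent random walks: autocorrelated, but causally unrelated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_points, n_sims = 100, 1000
false_positives = 0

for _ in range(n_sims):
    a = np.cumsum(rng.normal(size=n_points))  # random walk 1
    b = np.cumsum(rng.normal(size=n_points))  # random walk 2, independent of the first
    r, p = stats.pearsonr(a, b)
    if p < 0.05:                              # nominally 'significant' correlation
        false_positives += 1

print(f"Fraction of 'significant' correlations: {false_positives / n_sims:.0%}")
```

With autocorrelation this strong, the nominal p-values from an ordinary correlation are essentially meaningless.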

Alabdulgader et al. corrected for the problem of multiple comparisons, but this correction does not solve the problem of autocorrelation. They’re two quite different problems.

If I’m right about this, the associations reported in the Alabdulgader et al. paper are most likely spurious and due to chance alone. As the data from this paper don’t seem to be public, I can’t prove that this is true, but I would be surprised if I’m wrong.

See also: Orac’s take on this paper.

  • smut clyde

    The Intertubes showed a nice example using random walks.
    https://i.stack.imgur.com/yPhrx.png
    This was literally the first hit from searching for “time series correlation significance”.

  • Marc Lustig

    Have you shared your conclusions with the authors and asked them if you could use the data? Most scholars I know don’t go through the process of publishing their data, but are happy to provide them when asked.

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      I might do this, if I make time!

  • LCND

    This is a huge problem and one that shows up in a lot of places. There are a few answers to how to deal with it; two off the top of my head are: 1. use permutation testing/bootstrapping of your data in a way that preserves the autocorrelation, or 2. remove the autocorrelation by prewhitening.

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      Yes, I think permutation testing is the way forward here. You would permute the pairings between the heart timeseries and solar timeseries (so e.g. you might pair up a heart measure from today with a solar measure from 3 weeks in the future).

      The only tricky thing is that the authors kind of already did this with their “multiple lags” analysis!

      So you might have to avoid permutations which paired up two timeseries that were within 40 hours of each other.
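      For concreteness, a rough Python sketch of that circular-shift permutation scheme – hrv and solar are hypothetical hourly arrays, assumed much longer than the excluded window, not the paper’s data:

      ```python
      # Rough sketch: build a null distribution of correlations from large
      # circular shifts, excluding offsets within `exclude_hours` of zero,
      # then compare the observed correlation against it.
      import numpy as np
      from scipy import stats

      def shift_permutation_pvalue(hrv, solar, exclude_hours=40, n_perm=1000, seed=0):
          rng = np.random.default_rng(seed)
          n = len(hrv)
          observed, _ = stats.pearsonr(hrv, solar)

          null = []
          while len(null) < n_perm:
              offset = int(rng.integers(0, n))
              # Skip shifts that leave the two series within the 'real' lag window.
              if offset < exclude_hours or offset > n - exclude_hours:
                  continue
              r, _ = stats.pearsonr(hrv, np.roll(solar, offset))
              null.append(abs(r))

          return float(np.mean(np.array(null) >= abs(observed)))
      ```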

      • LCND

        Prewhitening should work just as well if done properly. The “serial correlations” discussed in the paper below are exactly the same issue. As I recall it has been discussed in the context of fMRI resting state/functional connectivity as well, by Georgopoulos. It should take care of any linear autocorrelation issue perfectly. Permutation testing is good because it can take care of any data structure issues, not just autocorrelation, as long as the permutations are done in such a way as to preserve that structure (which is not always trivial).

        https://www.spiedigitallibrary.org/journals/Neurophotonics/volume-3/issue-3/031410/Correction-of-motion-artifacts-and-serial-correlations-for-real-time/10.1117/1.NPh.3.3.031410.short?SSO=1
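        A minimal sketch of AR(1) prewhitening along these lines – hypothetical equal-length inputs, and assuming a first-order model captures the autocorrelation:

        ```python
        # AR(1) prewhitening: estimate each series' lag-1 autocorrelation,
        # remove the predicted component, and correlate the residuals.
        import numpy as np
        from scipy import stats

        def prewhiten_ar1(x):
            x = np.asarray(x, dtype=float)
            x = x - x.mean()
            phi = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])  # lag-1 AR coefficient
            return x[1:] - phi * x[:-1]                           # innovation residuals

        def whitened_correlation(a, b):
            # a and b are assumed to be equal-length 1-D arrays.
            return stats.pearsonr(prewhiten_ar1(a), prewhiten_ar1(b))  # (r, p)
        ```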

      • smut clyde

        So the rigorous approach would be to perform a large series of correlations with random pairings and lags, to define the distribution of correlations that can arise from chance; then define the window of lags and matches where one might look for a causal effect, and see where the results fit within that distribution.

        In the present paper, the authors have applied the first part of the process, and simply taken the peak correlations to be significant.

  • OWilson

    The entire universe is inter-related.

    What does the moon have to do with a woman’s menstrual cycle?

  • Ricardo Vieira

    Potentially relevant to HRV research among non-physiologists: https://psyarxiv.com/637ym

  • Pingback: Bad Science of the Havana Embassy “Sonic Attack” – KESIMPULAN()
