Perceiving Randomness

By Sean Carroll | April 6, 2009 9:50 am

The kind way to say it is: “Humans are really good at detecting patterns.” The less kind way is: “Humans are really good at detecting patterns, even when they don’t exist.”

I’m going to blatantly swipe these two pictures from Peter Coles, but you should read his post for more information. The question is: which of these images represents a collection of points selected randomly from a distribution with uniform probability, and which has correlations between the points? (The relevance of this exercise to cosmologists studying distributions of galaxies should be obvious.)

[Image: randompoints.gif, two panels of scattered points]

The points on the right, as you’ve probably guessed from the setup, are distributed completely randomly. On the left, there are important correlations between them.

Humans are not very good at generating random sequences; when asked to come up with a “random” sequence of coin flips off the top of their heads, they inevitably include too few long strings of the same outcome. In other words, they think that randomness looks a lot more uniform and structureless than it really does. The flip side is that, when things really are random, they see patterns that aren’t really there. It might be in coin flips or distributions of points, or it might involve the Virgin Mary on a grilled cheese sandwich, or the insistence on assigning blame for random unfortunate events.
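
A quick numerical sketch of this point (not part of the original argument, just an illustration): simulate batches of 100 fair coin flips and check how often the longest run of identical outcomes reaches six or more. It happens far more often than most hand-written “random” sequences would suggest.

    import random

    def longest_run(flips):
        """Length of the longest run of identical outcomes."""
        best = current = 1
        for prev, nxt in zip(flips, flips[1:]):
            current = current + 1 if nxt == prev else 1
            best = max(best, current)
        return best

    # In 10,000 batches of 100 fair flips, count how often the longest
    # run reaches at least 6; it is the norm rather than the exception.
    trials = 10_000
    hits = sum(
        longest_run([random.choice("HT") for _ in range(100)]) >= 6
        for _ in range(trials)
    )
    print(f"fraction of batches with a run of 6+: {hits / trials:.2f}")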

Bonus link uncovered while doing our characteristic in-depth research for this post: flip ancient coins online!

CATEGORIZED UNDER: Science
  • http://backreaction.blogspot.com/ Bee

    Well, you know, I was already told in high school that when asked to produce a random string, humans won’t include sufficient repetitions. Now guess what I do when I’m asked to write down a sequence of random numbers. I wouldn’t be surprised if one day somebody repeats this exercise and finds humans actually produce too many runs of the same number, just to make sure.

  • TimG

    For those who enjoy memorizing irrational numbers, you can cheat on things like this.

    E.g.: “random” coin flips (corresponding to digits of pi modulo 2)

    HHTHHHTTHHHTHHHHTHTTTTTTHHTHT…
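
    A few lines of Python reproduce this mapping (just a sketch: the digits of pi are hard-coded rather than computed, and odd digit = H, even digit = T):

        # First fifty digits of pi, mapped to coin flips via digit mod 2.
        PI_DIGITS = "31415926535897932384626433832795028841971693993751"

        flips = "".join("H" if int(d) % 2 else "T" for d in PI_DIGITS)
        print(flips)   # -> HHTHHHTTHHHTHHHHTHTTTTTTHHTHT...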

    Of course, I’m assuming that the digits of pi are statistically random. Is this proven?

  • Andy C

    I feel I must add ‘Technical Analysis’ to that list (an exercise in identifying patterns in charts of financial instruments in an attempt to predict their future direction). As Mandelbrot noted on a number of occasions, it is frightening how much money changes hands on the basis of this faulty thinking.

  • http://telescoper.wordpress.com Peter Coles

    Sean

    Thanks for adding the link to my page. It’s nice to get a few hits on items other than the doom and gloom about physics funding in the UK!

    Peter

  • Aatash

    The main assumption in Technical Analysis is that the behavior of stock prices is NOT random. It is based on the idea that if you dig into historical data you can identify patterns that repeat over the lifetime of the instrument (which is not totally unreasonable, given the swings of mood and psychology in the market). Technical Analysis then exploits those identified patterns to make money, betting that the historical patterns will repeat. Paul Wilmott says ‘Technical Analysis is bunk!’, but that’s perhaps out of his academic prejudice. Who gives a damn whether you made money with technical analysis or with sophisticated stochastic quantitative finance models? Indeed, we have now seen a few times how efficient stochastic models can be at hiding and camouflaging the big events of the market, such as the formation of bubbles.

  • Matt

    Sean-

    I’ve noticed from your recent work on cosmology and the arrow of time that you seem to use the term entropy somewhat more broadly than its strict definition, to include the concept of algorithmic (or Kolmogorov) complexity, which characterizes the disorder of a single state rather than of an ensemble of states weighted by a probability distribution. See, for instance, the corresponding Wikipedia entries:

    http://en.wikipedia.org/wiki/Entropy_(statistical_thermodynamics)
    http://en.wikipedia.org/wiki/Kolmogorov_complexity

    See also, for example, Zurek’s paper (Phys. Rev. A 40, 4731–4751 (1989)) carefully distinguishing between the two concepts:

    http://prola.aps.org/abstract/PRA/v40/i8/p4731_1

    Since you’re writing a book that will feature entropy and disorder in a big way, perhaps you could take the opportunity to enlighten the public on this subtle but important distinction…

  • Matt

    (One important distinction being that the algorithmic complexity of a pure state can increase, even under unitary time evolution.)

  • Matt

    Algorithmic complexity, in particular, is what distinguishes the two quantum states:

    |1111111111>

    and:

    |1011101011>

    despite both states being pure. The algorithmic complexity is likewise what distinguishes a pure state in which all the gas molecules are in one tiny corner of a box from a pure state in which they are spread randomly throughout the box, despite the fact that both states are pure and hence have vanishing entropy.
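
    A crude but computable stand-in for this is compressed length (true algorithmic complexity is uncomputable), and a short Python sketch shows the contrast: a constant string compresses to a few dozen bytes, while a random string of the same length hardly compresses at all.

        import os
        import zlib

        # Compressed length as a rough proxy for algorithmic complexity.
        uniform = b"\x01" * 10_000      # analogue of |1111111111>
        scrambled = os.urandom(10_000)  # analogue of |1011101011...>

        print("uniform:  ", len(zlib.compress(uniform)))    # a few dozen bytes
        print("scrambled:", len(zlib.compress(scrambled)))  # ~10,000 bytes, no real compression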

  • Brian

    TimG:

    Of course, I’m assuming that the digits of pi are statistically random. Is this proven?

    No, it’s not proven. Pretty much everybody believes that they are, but it’s a very difficult thing to actually prove.

    Search on “is pi normal” for more info.

  • Matt

    The question over the randomness of the digits of pi is a perfect example of algorithmic complexity, as opposed to entropy.

  • Matt

    Here’s another example of the difference, with implications for the 2nd law of thermodynamics:

    Consider a box filled with 100 rubber superballs, which all start out crowded together in one of the top corners. The initial disorder is obviously very low. Release the superballs, and keep track of how they all move. (For 100 superballs, this task is eminently reasonable, with negligible impact on their trajectories.) After a few seconds, they will have spread out to occupy the whole box in a very disordered configuration.

    But the entropy the whole time has been precisely zero, since we have always known the system’s exact state! What’s increasing is the complexity, not the entropy. This scenario represents an important version of the 2nd law of thermodynamics, in which entropy plays no role.

  • Low Math, Meekly Interacting

    Apophenia is one of my favorite words.

    (A seemingly random comment…or IS IT?)

  • uncle sam

    The issue of perceiving randomness should be connected to the issue of talking about it. There are big problems in talking about randomness, and in how to rate the truthfulness of statements about probability. I have long wondered how to treat the truth value of a statement like “70% chance of rain today.” How can we rate the truth value of such statements? Neither raining nor not raining can show the statement either true or false! Do such statements need a “collective” truth value? Can we say that, if we gather 1,000 such predictions from a given forecaster and it rained only 40% of those times, the statements are collectively “not very true,” etc.? But then, what rightly defines the relevant “collection”?

    BTW, there seems to be a limit on how many characters per post, but I don’t see that advertised. Is there, what is it, and it would be good CS to post that info. tx

  • uncle sam

    (I mean, per comment.)

  • Jesse M.

    Not that this is the sort of thing that really requires an attribution, but Peter Coles seems to have gotten his two visual examples from an illustration in Stephen Jay Gould’s book “Bully For Brontosaurus” (where Gould says the illustrations came from a computer program whipped up by his colleague, the physicist Ed Purcell); see pages 266 and 267 here (the nonrandom example is rotated 180 degrees, the random example is oriented the same way). I mention this mainly because it’s a great Gould essay and worth checking out. It obviously made an impression on me if the illustrations in this post immediately reminded me of it, although I didn’t notice they were actual reproductions until I compared them; my memory for random dots isn’t that good!

  • Matt

    And, as a final example, Peter Coles’ two pictures: both pictures have zero entropy, since we know the precise state in each case. But one picture is more random than the other, and hence has a larger algorithmic complexity. Less information would be required by a second party to reproduce the first picture than the second.

  • Ras

    Don’t you need to know the velocities to define the state, Matt?

  • Matt

    Oh, yes. But with only 100 macroscopic superballs, that information can easily be tracked (or simulated) by a computer.

    Of course, the concepts of complexity and entropy converge in the thermodynamic limit (say, of a box with zillions of molecules, even in the classical case) when our memory device is limited to a finite information storage capacity and therefore simply cannot store the full details of the exact state of the subject system.

    In that case, our memory device can only manage to record enough information to define the macrostate of the subject system—that is, information that defines a probability distribution. If the subject system is truly in a state of low (high) complexity, then our memory device can (must) employ a probability distribution exhibiting low (high) entropy, where the entropy -(sum)rho log rho of the probability distribution is of order the complexity of the state.

    That’s why we often use the terms “complexity” and “entropy” interchangeably. But in Peter Coles’ example pictures in the blog post, the two terms are not equivalent: the two pictures both have zero entropy, since they are both exact states and not represented by probability distributions, but one is more random than the other.

  • Elliot

    That is not the Virgin Mary. I’m pretty sure I dated that girl in high school….

    e.

  • TimG

    Brian, thanks for the answer regarding normal numbers.

    Matt, I’m with you on macroscopic disorder being distinct from entropy, although it’s not so clear to me how one quantifies macroscopic disorder. Even if 100 balls all start out clustered close together, it seems to me that I’d still need 100 x, y, and z coordinates to tell you precisely where they are. If on the other hand the balls follow some pattern like “One ball exactly every ten centimeters” then it’s more clear to me how this allows an abbreviated description.

    Regarding your example of two quantum states |1111111111> and |1011101011>, isn’t the greater complexity of the second case merely a consequence of our choice of basis, rather than some inherent property of the system itself?

    Then again, I suppose the entropy of a system is likewise a function of our description of the system, in that it depends on how we partition microstates into macrostates. To be honest it’s never been completely clear to me why we have to group the states by the particular quantities we use (pressure, volume, temperature, etc.) — other than that these happen to be the things we’re good at measuring in the macroscopic system.

  • wererogue

    This comes up pretty quickly when you’re writing videogames – nobody likes it when the same random effect/sound bite keeps being played again and again.

  • operator

    Hey Matt, it’s the 19th Century on the line. They say they’re looking for their definition of entropy and they’re wondering if you’ve seen it.

  • Arnold

    The most interesting thing is that we can compare stochastic datasets.
    The sequence of n = 15 two-digit numbers

    03, 09, 27, 81, 43, 29, 87, 61, 83, 49, 47, 41, 23, 69, 07 (A)

    looks as random as the sequence

    37, 74, 11, 48, 85, 22, 59, 96, 33, 70, 07, 44, 81, 18, 55 (B)

    But the degrees of their stochasticity can be more objectively measured by the Kolmogorov parameter. It can be proved that the stochasticity probability is approximately 4,700 times higher for the sequence (A) than for the sequence (B).

  • Interested Bystander

    Umm… all the superballs would end up sitting on the bottom of the box.

  • http://telescoper.wordpress.com Peter Coles

    Jesse M,

    You are right, I got the pictures from Stephen Jay Gould’s book and used them with appropriate credit in my book From Cosmos to Chaos.

    You will also find the same pair of images in various places around the web.

    Peter

  • Matt

    These are all great questions. Let me answer them in turn.

    So, as you know, given a probability distribution rho_i, where i labels the possible states, the entropy is defined by:

    entropy = -(sum i) rho_i log rho_i.

    It vanishes for a pure state–that is, when rho_i is equal to one for a single state and zero for all other states. The entropy is maximal if we have no information and must assign equal probability to all states. Given knowledge of the average energy of the system, we must maximize the entropy (i.e., our ignorance) subject to the constraint that the average energy has the given value, and we then obtain the familiar canonical ensemble of thermodynamics.

    Entropy can be measured in bits, provided that we take our logarithm base 2. (Changing the base is equivalent to multiplying the formula by an overall constant.) As an example, the entropy of an unknown binary string n bits long, and therefore having 2^n possible states each with equal probability 1/2^n, is just n bits, as expected.
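
    In code, with base-2 logarithms (just a sketch of the definitions above, nothing more):

        import math

        def entropy_bits(probs):
            """Shannon entropy, -(sum) p log2 p, of a discrete distribution."""
            return sum(-p * math.log2(p) for p in probs if p > 0)

        n = 10
        print(entropy_bits([1 / 2**n] * 2**n))  # unknown n-bit string: 10.0 bits
        print(entropy_bits([1.0]))              # pure state: 0.0 bits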

    Meanwhile, the algorithmic complexity of a single state can be defined in several ways. One simple definition is that it’s the number of bits in the shortest string needed to communicate the value of the state to a second individual. A set of 10 vertical arrows obviously requires a shorter description than a set of 10 randomly oriented arrows.

    Another definition of the complexity of a state is that it’s the number of bits in a minimal algorithm or computer program needed to generate the value of the state.

    Now, you are correct that there remains some ambiguity in the definition of complexity. I could always change my labelings so that a state |1010111001> became |1111111111>. Obviously the definition of a complexity function depends on our labeling scheme for the states.

    But the crucial fact is that once we’ve chosen a complexity scheme, whatever our choice, there will always be exponentially more states of high complexity than low complexity, because the complexity function defines a one-to-one correspondence between the states of the system and the set of minimal binary messages representing them. That’s the crucial feature of complexity. That fact ensures the various properties I described earlier.
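
    (To make the “exponentially more” claim concrete with a standard counting argument: there are at most 2^k - 1 binary descriptions shorter than k bits, so at most 2^k - 1 of the 2^n possible n-bit states can have complexity below k. Taking k = n - c, fewer than a fraction 1/2^c of all states can be described with even c fewer bits than their length.)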

  • Matt

    For example, the exponential property of the complexity ensures that for a system with large entropy in the thermodynamic limit, all but a negligible number of states in the probability distribution will have high complexity, of order the entropy. One can show that the complexity is peaked at an average value of order the entropy.

    Also, if our memory device is very limited, then we have no choice but to describe very complex states by means of a probability distribution with comparable entropy.

    Finally, given an initial single state with low complexity, the system is highly likely to be found in a state of high complexity later on, again because there are exponentially many more states of high versus low complexity. This is the complexity version of the 2nd law of thermodynamics, valid even for perfectly closed systems with unitary time evolution.

  • Andy C

    @Aatash,

    I’m not sure if you were trying to correct me, or just adding to my comment; but to clarify, I’m well aware that Technical Analysts do not consider stock price movements random (by ‘adding to the list’, I meant that TAs see patterns in random fluctuations). My reference to Mandelbrot was a nod to the fact that it has been shown that many (perhaps all) of the features that a TA would look for in stock price movements can be generated by, for example, multifractal models of the stock market. With regards to “Who gives a damn [how you make money]”, I think the key issue here is that if you have been making money by chance, based on a deeply flawed idea (be it technical analysis or some inappropriate stochastic model), then one day you may find that those flaws catch up to you, and relieve you of your capital.

  • NewEnglandBob

    Old topic.

  • Brian2

    Maybe, in terms of characteristics favored by evolution, correctly perceiving that the situation is not random can alert one to possible danger and perhaps save one’s life, whereas incorrectly thinking that things are not “normal” may merely lead to a minor waste of time and energy and a bit of unnecessary angst.

  • Pingback: Randomness: our brain deceive us « Mirror Image

  • Alfonso

    Maybe humans try to find a pattern where it SEEMS to be random.
    What we usually think is random (like the stars in the sky) is not.
    For that reason we are good at finding patterns where they are, but we are not good at generating random series.

  • Pingback: » Randomness is the materialist’s delusion

  • Pingback: Arrows and Demons « In the Dark

  • jpd

    Carl Pilkington seems to be a supporter of Boltzmann brains.

  • http://www.physics.usyd.edu.au/~brewer/ Brendon Brewer

    >>the two pictures both have zero entropy, since they are both exact states and not represented by probability distributions<<

    Yay! I've been saying this for ages but it rarely seems to get through. Another example: a data set is never Gaussian.

  • Pingback: Perception, Piero and Pollock « In the Dark

  • John

    Why do you believe that random-without-correlations is ‘more random’ than random-with-correlations? The output, for instance, of a Markov chain can have any correlation length whatsoever.

    But you’re not alone. My physics colleagues almost universally say random when they mean uncorrelated.
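
    As a small illustration of that point (a sketch only; the function name and the stay parameter are mine, not anything from the post): a two-state Markov chain produces fair, perfectly “random” flips whose correlation length you can dial to anything you like.

        import random

        def markov_flips(n, stay=0.9, seed=None):
            """Two-state Markov chain: each step repeats the previous outcome
            with probability `stay`.  The lag-k autocorrelation is
            (2*stay - 1)**k, so `stay` sets the correlation length, while
            each individual outcome is still 50/50 overall."""
            rng = random.Random(seed)
            x = rng.choice("HT")
            out = []
            for _ in range(n):
                out.append(x)
                if rng.random() > stay:
                    x = "T" if x == "H" else "H"
            return "".join(out)

        print(markov_flips(60, stay=0.9))  # long runs, strongly correlated
        print(markov_flips(60, stay=0.5))  # reduces to independent fair flips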

  • Pingback: Looking For Patterns - Tips for Creative Problem Solving « ZenStorming - Where Science Meets Muse…

  • Pingback: Random Links XXXXI « Random Musings of a Deranged Mind
