Valuing Negativity

By Mark Trodden | June 13, 2006 10:36 pm

Ben Goldacre’s most recent Bad Science section of The Guardian has a thought-provoking discussion of the relative ease of publication and degree of press coverage devoted to positive results in science, as opposed to negative results.

I expect that it is not particularly surprising to anyone that the media generally report sparsely, if at all, on negative results, while reserving significant acreage, hyperbole and optimism for new claims, however speculative. It is equally easy to identify the broad reasons why this happens. From the researcher’s side, one is often much more excited about new proposals to explain unsolved problems and is therefore more likely to talk about them. I also imagine that it would be much more difficult to persuade the Public Relations department at any university to publicize a new negative result to local and national media outlets.

From the media’s side, there are fewer and fewer resources for science reporting and, indeed, fewer and fewer science reporters to get the job done. A tightly stretched science reporter is going to find it much easier to find stories about positive results than negative ones, and will have a more straightforward time finding scientists willing to give their time to discuss such a result.

Finally, it’s clear that the public is much more likely to find a positive result or speculation interesting; and who can blame them, really?

However, Goldacre’s column also points out that such bias against negative results is not confined to newspapers or television, but that it also extends to professional scientific publications. Admittedly, his focus is mostly on medicine and the extent to which trials contesting highly positive claims about a drug typically receive far less attention than the original papers.

Major academic journals aren’t falling over themselves to publish studies about new drugs that don’t work. Likewise, researchers get round to writing up ground-breaking discoveries before diligently documenting the bland, negative findings, which sometimes sit forever in that third drawer down in the filing cabinet in the corridor that nobody uses any more. But it gets worse. If you do a trial for a drug company, they might – rarely – resort to the crude tactic of simply making you sit on negative results which they don’t like, and over the past few years there have been numerous systematic reviews showing that studies funded by the pharmaceutical industry are several times more likely to show favourable results than studies funded by independent sources. Most of this discrepancy will be down to cunning study design – asking the right questions for your drug – but some will be owing to Pinochet-style disappearings of unfavourable data.

But I think such an attitude exists in a much wider scientific context and that it is a bit of a shame.

There have been a few times during my career when I’ve written a (what I, naturally, consider to be very cute) negative paper. Now, I personally have no beef with how I’ve been treated by scientific journals, so don’t take this as a complaint; but it has been clear to me that referees and the journals themselves tend to be much less excited about such papers. In fact, referees sometimes comment that they don’t really consider negative results sufficiently interesting. I’ve heard similar complaints from a number of colleagues over the years and I must say that I find related thoughts going through my mind when faced with refereeing a negative paper (although I do my best not to act on them).

I think that this kind of bias against negative results is a shame because they play a remarkably important role in the progress of science.

There are a number of ways in which this is true. First, there is the rather obvious statement that if one can demonstrate cleanly that a given idea or set of ideas is inconsistent or at odds with an established piece of data, then one provides an invaluable service to science, pruning the tree of speculations. Good science indeed!

Second, it is perhaps less well known outside the physics community that theorists look on the strongest negative results – the decisively-named no-go theorems – as distinct challenges to their physicisthoods (OK, not really a word, but it should be). When something gets called a theorem, it means that it starts with some clearly expressed assumptions from which the deathblow result then logically follows. Those juicy assumptions are just asking for it in the eyes of aggressive young physicists.

Perhaps the best-known example of this is the story that led to the idea of supersymmetry. There is a famous and beautiful theorem known as the Coleman-Mandula theorem, after its discoverers – Sidney Coleman and Jeffrey Mandula. Titled All Possible Symmetries of the S Matrix, and published in 1967, it has a great negative-paper first sentence in its abstract:

We prove a new theorem on the impossibility of combining space-time and internal symmetries in any but a trivial way.

The basic point is that if one assumes that the generators of the internal symmetry group are commuting operators (and that their commutation relations define the group – i.e. that they comprise a Lie algebra), then the only possible total symmetry is a direct product of the space-time symmetries (the Poincaré group) and the internal symmetry group. This is what they meant by trivial in the abstract.
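In symbols, the conclusion can be sketched like this (my own shorthand, not the theorem’s precise technical statement):

```latex
% Coleman-Mandula, schematically: the total symmetry algebra is a
% direct sum of the Poincare algebra and the internal symmetry algebra,
\mathfrak{g} \;=\; \mathfrak{iso}(3,1) \,\oplus\, \mathfrak{g}_{\mathrm{internal}} \, ,
% equivalently, every internal generator T^a commutes with the
% translations P_\mu and the Lorentz generators M_{\mu\nu}:
[\, P_\mu \,, T^a \,] = 0 \, , \qquad [\, M_{\mu\nu} \,, T^a \,] = 0 \, .
```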

If this had been the end of the story, then bosons and fermions (and therefore force carriers and matter) would be destined to forever remain distinct. But here comes the loophole. The 1975 Haag-Lopuszanski-Sohnius theorem (after Rudolf Haag, Jan Lopuszanski, and Martin Sohnius) pointed out that if one relaxes one of the assumptions, and allows anticommuting operators as generators of the symmetry group, then there is a possible non-trivial unification of internal and space-time symmetries. Such a symmetry is called supersymmetry and, as you know, constitutes a large part of current research into particle physics.
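For the curious, the loophole can be sketched with the defining relations of the simplest (N = 1) supersymmetry algebra – a standard schematic form, not taken verbatim from the papers above:

```latex
% The supercharges Q_\alpha are anticommuting (fermionic) generators;
% their anticommutator closes on the space-time translations P_\mu,
% so internal and space-time symmetries mix non-trivially:
\{\, Q_\alpha \,, \bar{Q}_{\dot\beta} \,\} = 2\, \sigma^{\mu}_{\alpha\dot\beta} \, P_\mu \, ,
\qquad
\{\, Q_\alpha \,, Q_\beta \,\} = 0 \, .
% Acting with Q changes spin by 1/2, pairing each boson with a fermion.
```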

No-go theorems are fun in physics because they formalize where the important barriers lie and provide guidance about the directions of future attacks on the problem in question. Negative results in general, although not quite as glamorous or exciting, are still great stuff. We should celebrate them. Plus, we don’t want to be like the medical community do we?

CATEGORIZED UNDER: Science, Science and the Media
  • http://electrogravity.blogspot.com/ Science

    ‘Criticism is not a Popperian quick kill, by refutation. Important criticism is always constructive: there is no refutation without a better theory. Kuhn is wrong in thinking that scientific revolutions are sudden, irrational changes in vision. The history of science refutes both Popper and Kuhn: on close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths: what normally happens is that progressive research programmes replace degenerating ones.’

    — Imre Lakatos, Science and Pseudo-Science, pages 96-102 of Godfrey Vesey (editor), Philosophy in the Open, Open University Press, Milton Keynes, 1974.

    What amazes me is that people see no contradiction between fighting wars for freedom, and censoring out or sneering at negativity.

    To really understand the key points of an argument, you need to not only know what the positive hype is, but you also need to know precisely where the difficulties lie.

    As Lakatos points out, good negativity itself doesn’t kill a theory. It merely puts the positive hype into context, and allows a clearer picture to be seen.

    The real, deep danger to any claim or theory comes from alternatives. By automatically deeming all alternatives to be crackpot by definition, the danger of progress past the present paradigm is averted, or at least delayed.

  • Thomas Larsson

    The multi-dimensional generalization of the Virasoro algebra beats SUSY at negativity, since it is forbidden not just by one, but by two no-go theorems:

    1. In field theory, there are no diff anomalies in 4D.
    2. The diffeomorphism algebra has no central extensions except in 1D.

    Fortunately, I didn’t know about these no-go theorems when I started to look. Sometimes ignorance is a good thing.

  • http://insti.physics.sunysb.edu/~siegel/plan.html Warren

    I don’t get it. Most of the news is negative. This is certainly true of politics. Your arguments are proven false by that simple example. “Soft” news, like sports and entertainment, tends to be neutral at worst. So why is science news always positive? Why is a negative scientific result any less interesting (or entertaining) than a bomb or a divorce or your country losing the World Cup? I don’t think it has anything to do with the public, or the number of reporters, or the attitudes of universities. It isn’t very hard to find a negative scientific result, especially in the day of the Internet, and certainly a lot easier to find than who’s having an affair with whom. It can only be stupidity or incompetence. Most likely the whole modern style of public science reporting was invented ages ago by some idiot, and the vast majority of news agencies are too unoriginal to change it. The only reason we aren’t seeing much negative science news is because the reporters aren’t looking. After all, reporting isn’t about broadcasting university propaganda, it’s about asking questions — just like science! If so-called science reporters behave as if they were reporting news in a communist country, their inability to recognize news is no excuse. I wouldn’t be surprised if being assigned to science reporting is not recognition of expertise in that area, but rather promotion from the mail room.

  • http://blogs.discovermagazine.com/cosmicvariance/mark/ Mark

    Hi Warren,

    I’m not sure what you mean by

    Your arguments are proven false by that simple example.

    I would agree that most news is negative. It seems that you agree that most science news is not. We also seem to agree that in order to get consistent, decent science reporting, one needs many more and much more highly trained science reporters. What precisely is being proven false?

    I think the reasons I gave all do contribute to the reporting phenomenon, and I find no contradiction in the public being fascinated by the negativity of who’s having an affair with whom and the positivity of science stories (so perhaps that is where we disagree), but I agree these obstacles would be overcome if the right reporters were around. But they aren’t.

    The main point I found interesting about Goldacre’s article was about medical professional journals.

    I wouldn’t be surprised if being assigned to science reporting is not recognition of expertise in that area, but rather promotion from the mail room.

    I wouldn’t go that far, but it is certainly true that at many newspapers one need have no particular demonstrated skills to become a science reporter. If papers were really interested in good science reporting, they’d be looking to hire people like K.C. Cole, Tom Siegfried and Dennis Overbye, who are clear examples of how to do it right.

  • http://countiblis.blogspot.com Count Iblis

    It happens here on Cosmicvariance too. :)
    Not so long ago there was a posting here about an interesting paper on quantum computing:

    http://blogs.discovermagazine.com/cosmicvariance/2006/02/28/paul-kwiat-on-quantum-computation/

    Yesterday a preprint appeared that shows that the results of the paper are not as strong as they seemed:

    http://blogs.discovermagazine.com/cosmicvariance/2006/02/28/paul-kwiat-on-quantum-computation/#comment-31408

  • Steve

    I’ve long thought that there is a market for a special journal — the Journal of Negative Results — which would specialize in such things. Plus it would sound cool when cited: “J. Neg. Res.”.

  • http://insti.physics.sunysb.edu/~siegel/plan.html Warren

    Mark,

    I disagree with you on the causes. You don’t need to be a science expert to get good science news, just a good reporter. If you look for negative science news, you can find it. If you ask a movie star about their recent divorce, they won’t want to talk about it, but that never stopped a real reporter. So you can ask biting questions of a scientist, and they will hardly ever punch you, throw your camera at you, or have you barred from their press conferences. So I don’t blame the public or academia. Their behavior is typical human behavior. But science reporters’ behavior is not typical reporter behavior.

    P.S. I here distinguish science from technology. Technology news doesn’t suffer from this problem. Medicine is the technology of biology, just as engineering is the technology of physics. Some news services have even merged their science sections into “science & technology”, with science taking a back seat.

  • http://muon.wordpress.com/ Michael

    Hi Mark,

    your point is an interesting one. My friends who are successful theorists are sometimes very upset or frustrated when, after a long and difficult effort, they find that their project or theoretical model fails because it is incompatible with solid measurements, or the parameters of their model must take on extreme values in order to satisfy experimental constraints. So I observe the problem that you describe second-hand and can verify that theorists really do want positive results that they can publish, and not negative ones that journal referees will find uninteresting.

    The situation in experimental physics is quite different. Unfortunately, all we have right now are negative results when we search for phenomena beyond the Standard Model. And while some of our “searches” papers can be dull to read, they do get published, since that is all that experimental particle physics can offer. Now, imagine that the journals rejected all papers which reported negative results on searches for new phenomena. First, that activity would be difficult to sustain within experimental HEP, and second, we would not know what the bounds on the Higgs boson are, or whether gravity deviates from Newton’s law at short distances, etc. etc. That would be ridiculous, for sure!

    So I wonder whether theoretical referees could take a lesson from that, and listen to your complaint seriously. I hope so…

  • http://motls.blogspot.com/ Lubos Motl

    I completely agree that negative results are great. In fact, there is no entirely objective way to distinguish positive and negative results: one can formulate positive ones negatively and vice versa. But there is a huge difference between results and non-results. Empty, bitter criticism is something entirely different from negative results.

  • Chris W.

    From Richard Feynman’s “Cargo Cult Science” (from a Caltech commencement address given in 1974—also included in Surely You’re Joking, Mr. Feynman!):

    I would like to add something that’s not essential to the science, but something I kind of believe, which is that you should not fool the layman when you’re talking as a scientist. I am not trying to tell you what to do about cheating on your wife, or fooling your girlfriend, or something like that, when you’re not trying to be a scientist, but just trying to be an ordinary human being. We’ll leave those problems up to you and your rabbi. I’m talking about a specific, extra type of integrity that is not lying, but bending over backwards to show how you’re maybe wrong, that you ought to have when acting as a scientist. And this is our responsibility as scientists, certainly to other scientists, and I think to laymen.

    For example, I was a little surprised when I was talking to a friend who was going to go on the radio. He does work on cosmology and astronomy, and he wondered how he would explain what the applications of his work were. “Well,” I said, “there aren’t any.” He said, “Yes, but then we won’t get support for more research of this kind.” I think that’s kind of dishonest. If you’re representing yourself as a scientist, then you should explain to the layman what you’re doing– and if they don’t support you under those circumstances, then that’s their decision.

    One example of the principle is this: If you’ve made up your mind to test a theory, or you want to explain some idea, you should always decide to publish it whichever way it comes out. If we only publish results of a certain kind, we can make the argument look good. We must publish BOTH kinds of results.

    I say that’s also important in giving certain types of government advice. Supposing a senator asked you for advice about whether drilling a hole should be done in his state; and you decide it would be better in some other state. If you don’t publish such a result, it seems to me you’re not giving scientific advice. You’re being used. If your answer happens to come out in the direction the government or the politicians like, they can use it as an argument in their favor; if it comes out the other way, they don’t publish at all. That’s not giving scientific advice.

    Other kinds of errors are more characteristic of poor science. When I was at Cornell, I often talked to the people in the psychology department. One of the students told me she wanted to do an experiment that went something like this–it had been found by others that under certain circumstances, X, rats did something, A. She was curious as to whether, if she changed the circumstances to Y, they would still do A. So her proposal was to do the experiment under circumstances Y and see if they still did A.

    I explained to her that it was necessary first to repeat in her laboratory the experiment of the other person–to do it under condition X to see if she could also get result A, and then change to Y and see if A changed. Then she would know that the real difference was the thing she thought she had under control.

    She was very delighted with this new idea, and went to her professor. And his reply was, no, you cannot do that, because the experiment has already been done and you would be wasting time. This was in about 1947 or so, and it seems to have been the general policy then to not try to repeat psychological experiments, but only to change the conditions and see what happened.

    Nowadays, there’s a certain danger of the same thing happening, even in the famous field of physics. I was shocked to hear of an experiment being done at the big accelerator at the National Accelerator Laboratory, where a person used deuterium. In order to compare his heavy hydrogen results to what might happen with light hydrogen, he had to use data from someone else’s experiment on light hydrogen, which was done on different apparatus. When asked why, he said it was because he couldn’t get time on the program (because there’s so little time and it’s such expensive apparatus) to do the experiment with light hydrogen on this apparatus because there wouldn’t be any new result. And so the men in charge of programs at NAL are so anxious for new results, in order to get more money to keep the thing going for public relations purposes, they are destroying–possibly–the value of the experiments themselves, which is the whole purpose of the thing. It is often hard for the experimenters there to complete their work as their scientific integrity demands.

    All experiments in psychology are not of this type, however. For example, there have been many experiments running rats through all kinds of mazes, and so on–with little clear result. But in 1937 a man named Young did a very interesting one. He had a long corridor with doors all along one side where the rats came in, and doors along the other side where the food was. He wanted to see if he could train the rats to go in at the third door down from wherever he started them off. No. The rats went immediately to the door where the food had been the time before.

    The question was, how did the rats know, because the corridor was so beautifully built and so uniform, that this was the same door as before? Obviously there was something about the door that was different from the other doors. So he painted the doors very carefully, arranging the textures on the faces of the doors exactly the same. Still the rats could tell. Then he thought maybe the rats were smelling the food, so he used chemicals to change the smell after each run. Still the rats could tell. Then he realized the rats might be able to tell by seeing the lights and the arrangement in the laboratory like any commonsense person. So he covered the corridor, and still the rats could tell.

    He finally found that they could tell by the way the floor sounded when they ran over it. And he could only fix that by putting his corridor in sand. So he covered one after another of all possible clues and finally was able to fool the rats so that they had to learn to go in the third door. If he relaxed any of his conditions, the rats could tell.

    Now, from a scientific standpoint, that is an A-number-one experiment. That is the experiment that makes rat-running experiments sensible, because it uncovers the clues that the rat is really using–not what you think it’s using. And that is the experiment that tells exactly what conditions you have to use in order to be careful and control everything in an experiment with rat-running.

    I looked up the subsequent history of this research. The next experiment, and the one after that, never referred to Mr. Young. They never used any of his criteria of putting the corridor on sand, or being very careful. They just went right on running the rats in the same old way, and paid no attention to the great discoveries of Mr. Young, and his papers are not referred to, because he didn’t discover anything about the rats. In fact, he discovered all the things you have to do to discover something about rats. But not paying attention to experiments like that is a characteristic example of cargo cult science.

  • Supernova

    I think there is some conflation in the previous comments between “negative” news reporting and negative scientific results. They aren’t the same thing. “Negative news” generally refers to stories about war, death, destruction, crime, homelessness, etc. … things that most people agree are detrimental to a community or to society. A negative scientific result is not negative in the same way; it simply disproves or puts constraints on an existing idea. I see the two as only tangentially related, if at all. Like all scientists and writers, we should be careful about the definitions of the terms we use.

  • Q

    Negative results are “bad” news in medical sciences, surgery and the pharmaceutical industry.

    One does not continuously report on the failures of heart surgery; one waits till there is a “success”. The first heart transplant survivor is hailed and aired with much fanfare, and if the patient dies the next day, he is swept under the carpet and replaced with a ‘live’ patient. Desperate people can be duped into trying anything. But to secure government funding and attract or appeal to the vanity of the well “endowed” who can make you rich & famous, you need to enhance the probabilities and odds of a good prognosis.

    That is the continuing story of the organ transplant industry. They show you their survivors as their success story; the deceased are not mentioned. Well, they would have died anyway, no? Well, maybe. But in that case the survivors too may have survived without the transplant, no?

    If the walls of transplant clinics were filled with photos of deceased patients, would they get as much funding or as many clients and paying customers? If the walls of transplant centres were filled with photos of some of the butchery and carnage, would they have got so many to put their lives in their hands?

    If the walls of the plastic surgery clinics were covered with photos of those patients butchered by plastic surgery it would turn the stomachs of, and turn away prospective patients with more cash or hope than sense.

    If the pharmaceutical industry filled its reception walls and glossy magazines with photos of the real tests or results of lab experiments on animals and photos of thalidomide children, they would get fewer volunteers or human guinea pigs, thus being unable to meet the legal requirements to market their drugs as “safe” or proven.

    Incidentally ever read the disclaimers on pharmaceutical products: “If it doesn’t kill you, it may cure you” and if it doesn’t kill you or cure you, here TRY THIS ONE!

  • http://blogs.discovermagazine.com/cosmicvariance/mark/ Mark

    I would agree with that comment Lubos – empty criticism is pointless, and certainly not what I mean by “negative results”.

  • Matt

    I was taught to use the phrase null results (this was in the context of the Michelson-Morley experiment). The word null doesn’t carry the connotation of failure that the word negative does.

    I vaguely remember enjoying a cartoon duo named Null & Void. I wish I could remember their special powers, but I recall thinking they were cool. After a fair bit of Googling, I think I must be recalling a pair of friends (not villains, I hope!) in the TV cartoon Inspector Gadget. I don’t know how the brain works, but I’m sure that storage area could have been put to better use. *sigh*

  • Elliot

    Mark,

    Very interesting perspective. I am reminded of an analogy from art and architecture where the use of “negative space” is part of the artistic value of an object.

    On the issue of results of various clinical trials of drugs, supplements, other alternative treatments or lifestyle, it is very important to understand what is considered statistically significant in order to identify negative and/or positive results. This is a tricky business, and the myriad of seemingly contradictory studies (coffee is good/bad for you; vitamins are good/have no effect; alcohol is good for your heart/bad for your liver/might be good for your prostate) are often difficult to decipher for laypeople, not to mention the doctors who should be making recommendations to their patients.

    Elliot
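Elliot’s point about statistical significance can be made concrete with a toy calculation. Here is a minimal sketch in Python (all numbers hypothetical, and a plain two-proportion z-test standing in for the far more careful analyses real trials use): an apparent 10-point improvement that nevertheless fails to reach the conventional p < 0.05 threshold, i.e. a negative result.

```python
import math

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pool the two samples to estimate the common proportion under
    # the null hypothesis that the two arms are identical.
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical trial: 30/100 improve on the drug vs 20/100 on placebo.
z, p = two_proportion_z_test(30, 100, 20, 100)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these made-up numbers the test gives z ≈ 1.63 and p ≈ 0.10, so the trial would be reported as finding no significant effect even though the drug arm looked better; whether such a null result ever gets written up is exactly the issue above.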

  • Richard

    I wonder if there would have been as much commotion in the press if Perelman had provided a solid counterexample to the Poincaré conjecture rather than a road map to a proof. Unfortunately, I’m guessing not, although to me this would have been a fascinating situation. Negative results often define bounds that help us avoid getting lost down blind theoretical alleys. But negative results, particularly in relation to intuitively appealing conjectures such as this one, can also be considered an opportunity: “whoa … what does this mean then? We still must be missing something here!”

  • http://www.haloscan.com/comments/59de/115017290815882577/?a=26669#197784 Plato

    Maybe, that the value can be gained, from those in science who purposely “align themselves in opposition” to progress furthering ideas on the issues of “quantum gravity?”

    So who have you seen in “opposition to strings/loops,” and what have you gained from their reactions?

    Susskind and Smolin? Maybe, Gell-Mann and Feynman in another forum?

    So one may seek to create these “relationships,” in the true spirit of dialogue? :)

  • Q

    Hi Elliot, of course Biology & Medical Sciences aren’t as precise as physics.

    There is more than one way to skin a cat; oh, and there is more than one way to propel flight, cars or trains (maglev possibly my favourite for the latter, for now), and there are many ways to produce energy: large for industrial use + metropolis, smaller alternatives for certain greener rural needs or desert areas.

    But yes milk is good, but not for all
    Milk is bad, well only for certain people or Vegan beliefs

    Meat is good, well for most carnivores.
    Meat is necessary, nope, not for vegetarians or vegans

    Beer has good qualities, unless you abuse it
    Wine has good qualities, unless you abuse it

    Not eating pork will not save you from cardiac problems
    Smoking will aggravate cardio-thoracic problems.

    But I was being more specific, where some breakthrough is hailed as a miracle with much fanfare, but the negative results are kept quiet for commercial reasons. Or miracle drugs are promised that are not so miraculous and given a ridiculous price tag to recoup the investment on all the other tens of failed drugs and research. This is not ‘good’ science, unless you call marketing and economics science. But as we know, econometrics and economic & political sciences are not precise sciences, in that they can be ‘false’ truths upheld by inertia, vested interests, shareholders’ profits or, if necessary, armies of lawyers, and when necessary, as with other resources (oil, gas, land or other interests), by the military abroad & the ‘police’ at home.

    Yep, inevitably all research and funding, especially big league, is dependent on political will & government, whether NASA, space exploration, colliders, Boeing, Airbus, bullet trains, nuclear power, large hydroelectric dams or wave power (estuaries, deltas, river mouths), because of the scale of the projects. Private enterprise just plays an administrative role to bankroll projects. Incidentally, despite allusions to the contrary, the pharmaceutical industry and medical research are wholly dependent on state funding; it is just that private companies siphon as much away to pay bonuses and pie-in-the-sky dividends (smoke + mirrors) to make them attractive to investors and pension funds. Whereas a lot of that money could actually be better spent on personal care of patients. But that is more costly and time consuming and not as profitable as prescribing some magic bullet or pill. Talk about gold dust or added value. Something that costs less than 10 cents per pill to make can be prescribed for $14,000 a year and you don’t even have to sugar coat it.

    WOW! Am I missing something?

  • http://electrogravity.blogspot.com/ Science

    ‘I would agree with that comment Lubos – empty criticism is pointless, and certainly not what I mean by “negative results”.’ – Mark

    I hope you are consistent in making this statement. String theorists have blocked me from discussions of alternatives while at the Open University in 1997 (Dr Bob Lambourne used stringy stuff to dismiss the factual proof, under the false impression that Popper had disproved the existence of factual proofs in science).

    In fact, the laws of buoyancy aren’t falsifiable: they are based on empirical facts and logic by Archimedes. Popper simply ignores non-falsifiable science.

    The use of vacuous speculation like strings to “discredit alternatives” is what is pointless. You can falsify string theory by proving that the gravity mechanism – which makes non-ad hoc predictions which tally with observations – doesn’t rely on strings. Put it like this, since a proved non-string mechanism accounts for 100% of gravity, strings aren’t involved.

    However, the string people control arxiv and the minds of everyone who wants to hold their job in academia. They are bitter, and pathetic. Arxiv administrators in email with me in 2002 told me to get my own internet site. That shows what level of interest there is in science there. None. They are just bitter and silly.

  • http://blogs.discovermagazine.com/cosmicvariance/mark/ Mark

    Science; I expect I am consistent, but don’t see why you’re asking me. The rest of your comment is just a rant, aimed at string theorists (which I am not, by the way). I get it – you feel maligned that people don’t pay attention to your ideas and are annoyed at string theorists. Once again though, I’m asking you not to clog up my comment threads with comments like this – they are very out of place in this particular discussion.

    Also, after using the word “bitter” twice in such a comment, I would suggest looking up the word “irony”.

    In the future I will be deleting such comments on threads where they are off topic.



Cosmic Variance

Random samplings from a universe of ideas.

About Mark Trodden

Mark Trodden holds the Fay R. and Eugene L. Langberg Endowed Chair in Physics and is co-director of the Center for Particle Cosmology at the University of Pennsylvania. He is a theoretical physicist working on particle physics and gravity— in particular on the roles they play in the evolution and structure of the universe. When asked for a short phrase to describe his research area, he says he is a particle cosmologist.
