The Ethics of Citation

By Neuroskeptic | March 12, 2017 2:04 pm

Earlier this week, Jordan Anaya asked an interesting question on Twitter:

This got me thinking about what we might call the ethics of citation.

Citation is a little-discussed subject in science. Certainly, there’s plenty of talk about citations – about whether it is right to judge papers by the number of citations they receive, whether journals should be ranked by their impact factor (average number of citations per paper), and so on. But citation, the actual process of choosing which papers to cite when writing papers, has largely escaped scrutiny.


I think citation is an ethically meaningful process. Like it or not, citations are the currency of success in science. By citing a paper, we are not simply giving a helpful reference for the readers of the paper. We are giving the cited paper an accolade, and we are tangibly rewarding the authors for publishing it. To not cite a certain paper is, likewise, an act with consequences.

So if we care about fairness and the just distribution of resources, we as publishing scientists should take citation seriously.

What are the specific ethical problems of citation? Here are three that I think matter:

  • The tendency for authors to preferentially cite their friends, colleagues and acquaintances; we could loosely call this “nepotism”. In any other scientific context, this kind of preferential treatment would be considered wrong or at least concerning: in the peer review context, for instance, many journals do not invite the authors’ colleagues to review a given paper. But in citation, nepotism happens all the time. Should it?
  • Review papers. Authors like citing review papers because they offer a way to cite a single paper in support of multiple statements. It’s also easier to locate a recent review paper than to find the originals, which might be quite old. This is why review papers are often highly cited. But is this fair? The review paper authors may not have contributed anything to the discoveries they summarized, yet they end up getting (some of) the credit for them.
  • Citing papers we’ve never read. I’m guilty of this. In fact I confess that I’ve cited papers without even reading the abstracts: I just searched for a paper whose title suggests it supported the point I was making, and cited it. I think this is very common. But is that really how citations – the ‘coins’ of value in science – should be minted? By someone who knows absolutely nothing about the quality of the paper?
  • Nick

    The other day I was reading article X, which mistakenly cited article A instead of article B. A simple honest mistake; A and B were on the same topic. I looked for the cited information in A, didn’t find it, went Googling with some well-chosen words, and quickly found the cited argument in B. Both A and B are quite extensively cited throughout article X; this was apparently just a simple slip of the fingers.

    Then I read article Y, published a few years after X on the same topic. The author of Y also mistakenly cited article A instead of B, using words that were very clearly a paraphrasing of how article X had cited it. Busted! This was a shame, as I quite liked the arguments made in Y, but this was on a fairly complex technical subject, which makes me wonder what else the author of Y has just skimmed.

    • Omnes Res

      This conversation is getting me thinking about a paper I cited that I didn’t completely understand but now understand a little better. Maybe I’ll write a blog post as a confessional and post feedback on the paper.

  • Leonid Schneider

    I can tell you why I had to cite reviews instead of original papers back when I was still in science, writing papers: the journal told me my text was too long, and references count not just numerically but as part of the text. So I trashed all the original research papers and put in reviews instead.

    • Neuroskeptic

      Some journals also have maximum citation limits now. This does help to prevent citation inflation, but it also forces you to cite review papers in many cases.

      • smut clyde

        One result of this perverse incentive is to encourage plagiarists to concentrate on review papers, where there is more chance of being cited. Did that show up in your plagiarism survey?

  • smut clyde

    “Citing papers we’ve never read. I’m guilty of this.”

    As Nick says, sometimes you can trace back the process of a misattribution becoming part of general knowledge… someone mistakenly cites a paper that does not sustain the argument they’re trying to make, then someone else repeats that mis-citation rather than bother with the primary literature, and so it goes. I’ve seen it often enough to trust no-one.



  • smut clyde

    How about the issue of “reviewer-flattery-by-citation”? That is, suspecting that one’s research colleague or rival X will probably be a peer reviewer, so giving priority to a few of X’s publications from the range of possible citations.

    • Andrew Collmus

      Right. And the similar but related issue of journal flattery. Some journal editors will “encourage” authors to cite papers from that same journal, which is probably not the best practice toward the objective goal of science.

    • Nick

      Heh, I did this in my undergraduate thesis. I knew who the external examiner had been the previous year and guessed they would ask the same person again, so I cited a couple of his books that were only very peripherally related to my work…

  • Bernard Carroll

    Under the nepotism header, don’t overlook commercially driven citations. When a paper comes out from a corporation it is likely to cite their own stuff or that of their KOLs. Remember, the marketing department scrutinizes all corporate publications with the goal of maximizing advertorial messaging.

    • Neuroskeptic

      Oh yes, this is another problem.

  • Bernard Carroll

    Citation counts don’t necessarily reflect incisive scientific advances. Most papers that have very high counts describe methods or tools rather than fundamental discoveries. In dementia, for instance, the #1 cited paper is from Folstein, Folstein and McHugh (all good friends of mine) for the Mini Mental State scale to assess severity of dementia, with ~40,000 cites. Meanwhile, #25 on the list is the 1976 paper by Davies and Maloney describing loss of cholinergic neurons in Alzheimer disease. It is the basis of most current treatment efforts for AD and it has been cited ~2500 times. But the original paper by Alois Alzheimer in 1906 describing the case of Auguste Deter was rarely cited until a new English translation appeared in 1995, and even that has been cited only ~250 times.

  • non_sig

    I find these really difficult issues. Take citing your friends’ papers, for example: these may also be the papers you know best (which may also be a minus, depending on the papers themselves and the circumstances under which they were produced), so how long are you supposed to search for other papers? Of course, it could be argued that you should be aware of any other relevant papers anyway… but the papers “everybody” is aware of are the ones that are already widely cited (and that may only hold for the main topic of the study, not side issues). So how long should you search?

    “I just searched for a paper whose title suggests it supported the point I was making”. Whether you then read the paper or not, I’m not sure searching for papers that support a point is (ethically) the right strategy anyway, though it surely is common practice. Shouldn’t we search for both evidence and counter-evidence for the points we are trying to make? But that would take a lot of time, and it might make arguments look weaker instead of stronger, so the people who did it would lose out. Which of course doesn’t mean it isn’t the right thing to do, but it’s really difficult: what is gained if you can’t finish anything, or if what you do finish isn’t valued by others because it doesn’t tell a straight story?

    This may also be one reason why reviews are cited so much. If reviews didn’t exist, original research papers would have to do both jobs: review the literature and present the new research (and of course there’s neither space nor time for a review of topics that are not the main point of the paper).

    I think it’s really difficult… and common practices aren’t much help…

    • Neuroskeptic

      I agree, there are no easy answers.

      Citing your friends is often fine because often they are the ones who write the relevant papers. However I have heard of some institutions and some labs which explicitly encourage people to cite each other whenever possible.

  • Brenton Wiernik

    It is far too much of an overgeneralization to say that review papers “may not have contributed anything to the discoveries they summarized.” Most contemporary review papers are creative endeavors in and of themselves, particularly if the “review” is a systematic review or meta-analysis. Science is a cumulative process; it is often only when aggregating studies on a topic that the true insights and discoveries can be made. It takes profound creativity to examine a noisy literature and identify the key factors underlying a phenomenon across studies.

    • jrkrideau

      Indeed, it can be a very difficult exercise and provides an extremely useful resource. Just don’t pretend you have read the original papers by putting them in your reference list. Oh, and pray that the reviewer has read the originals.

      See my earlier comment where I was ranting a bit.

      • Brenton Wiernik

        Agree on all points there.


  • Jon

    Relevant paper on the topic:

  • a6z

    There is also the immoral use of citations for political, religious, sexual, national or other cultural favoritism, or its opposite: advancing women scientists, say, or whatever other group you favor; or withholding them from someone whose politics, religion, or nationality you despise.

  • Peggy Heppelmann

    I would love to see more critical assessment of the quality of cited papers. Papers that are terrible from a methodological perspective are often cited to support an author’s hypothesis. They should lend no credibility to the citing paper, yet the more bad studies are cited, the more credible the paper appears.

    For an example of a recent study with terrible methodology, look at the study on gluten intake and diabetes by Geng Zong, a research fellow in the Department of Nutrition at Harvard University’s T.H. Chan School of Public Health. The study included data from three previous studies consisting of 4.24 million people followed from 1984–90 to 2010–13, finding that the lowest level of gluten intake was associated with a 13% higher risk of diabetes than the highest intake. However, no effort was made to correct for diabetes risk in the samples at the lowest and highest levels of gluten intake. Common sense says that before the anti-gluten fad developed (which is the period in which the data were gathered), the lowest intakes were likely to be found in traditionally low-gluten food cultures like Asians and Latin Americans. These ethnic groups in America have diabetes rates 50% or more higher than Caucasian Americans, so if this sample correction had been made, one might have concluded that low levels of gluten protect against diabetes (a conclusion that would also be wrong, since it would still only show an association). Despite its terrible methodology, now that this paper is published it will be cited over and over again to support other nutritional studies and reviews.

  • Uncle Al

    The functional predictor of research success violates managerial practice: Hire those you despise and fear (e.g., venture capital). Numbers are irrelevant for the next discovery.

    A burrito assembly line is amenable to PERT charts, oversight, spreadsheet analysis; then quantitative optimizing feedback. Real world research begins with unlikeable young faculty and tossed in resources. Return to your office and sweat blood (Bell Labs, Skunk Works, Google).

    To solve the puzzle, to even define the puzzle, you need autists. They ignore your world, they are unpleasant, and they screw around burning resources. 90% will not crack the nut. That is easily 5% more success than obtained with traditional management paradigms. It insults good business practice.

    Management cannot manage discovery. Management forwards its own appetites. “The R&D Function” Harvard Business Review 61(6) 195 (1983) is the terrible alternative.

  • Frazier Mccollum

    “The tendency for authors to preferentially cite their friends…” I actually just realized this when I was citing journals on one topic. At first there was nothing too odd about seeing an author cite a colleague, but I soon noticed that this was a persistent trend. I brushed it off, thinking there was probably a limited number of studies in the field, but that turned out not to be the case. I have since heard that researchers who also teach or work in academia tend to cite their colleagues not just out of preference, but as a way of helping each other improve their rankings, with the institution providing financial benefits when certain tiers are reached. Can anyone verify this?

    • Neuroskeptic

      It’s true. I’ve heard that some institutions encourage all their people to cite others from the same institution. And most people see nothing wrong with it, because citing papers is not seen as an ethical issue.


  • Jane E. Rosen, Ph.D.

    What can one do when one’s doctoral mentor publishes the first peer-reviewed paper from your dissertation in a reputable journal, without your knowledge or permission? What makes it totally egregious is that the mentor did not even cite the dissertation in the bibliography! When the reputable peer-reviewed journal was notified, they claimed they would investigate, then did nothing, and the paper remains in their database as well as on PubMed. To date the manuscript has been cited by others, but the true author is unknown to the world, along with her rather well-done dissertation, which the mentor had approved before he decided to publish it. He removed the graduate’s name from the byline as well. When the ORI and the journals promise to investigate and then do nothing, they merely encourage the lack of integrity around proper co-authorship and citation where it is well deserved. Loss of authorship and citations will harm a scientist’s career, so why do the alleged authorities aid and abet this conduct?

    • Ceca

      A similar situation happened to someone I know, too. :-( I believe it is more common than reported. The mentor tends to have tenure, and the doctoral student can only do so much. As a librarian I hear similar stories of this kind of scooping. I’m sorry this happened to you.

      • Jane E. Rosen, Ph.D.

        In my case the ORI agreed to review it for misconduct. They made a final decision that set a dangerous precedent: they stated that unless I could prove that EVERY idea in my dissertation was solely my own, I had no right to authorship of my original work. Of course, during one’s Ph.D. training we have many discussions with our mentor, so how is it possible to prove that ALL the ideas were solely one’s own? One would have to retain a video crew and tape every little conversation over a period of years. Totally impossible. So basically the ORI gave every mentor the green light to steal their students’ work, because no one can ever prove that ALL the ideas were their own. This is a travesty, and graduate students need to stand together to reverse this published decision, otherwise it can and will be used over and over again. Eventually NYU must have realized that the mentor had done a terrible thing, because they forced his resignation from his tenured faculty position. But that was no restitution to me at all: I still wish to have authorship credit and citations for my work. Forcing his resignation did not repair the harm he caused.

  • Csaba

    There is one more common citation-related ethical problem: the failure to cite prior papers that are directly related to the work being published. There are two possible reasons (I’m not sure which is worse): 1) intentionally leaving them out in order to boost the perceived novelty of the paper, and/or 2) not being familiar with the literature. At least 30% of the papers I see are like this.

  • MrEdmonton

    I’ve always thought that you cite papers that are the basis for the work you are describing in your own paper, possibly papers giving more detail about some point than you can include in the paper you are writing, and also papers that you disagree with and dispute in your paper.
    The idea of citation as a measure of quality or importance is ridiculous at its very foundation. Citation should be a way of framing your work within the larger universe of the field of study.




About Neuroskeptic

Neuroskeptic is a British neuroscientist who takes a skeptical look at his own field, and beyond. His blog offers a look at the latest developments in neuroscience, psychiatry and psychology through a critical lens.

