Influence

By Julianne Dalcanton | April 18, 2008 1:39 pm

Much of the April 15th angst that Sean described comes from students’ questioning “Will I be a success if I go to this particular graduate school?”. They place a tremendous weight on this decision (and rightly so, given the 5+ year duration of a typical PhD). The decision of where to go to school presents a clean, well-defined juncture, where you can imagine two clear paths before you, one that leads to a happy land filled with unicorns and flowers and all-night coffee shops and independent record stores, and another that leads to a sad grey land where you spend your time shuffling piles of paper for The Man. However, having been in the game from the faculty side for nearly a decade, I can say that much of what determines whether one is a “success” is largely independent of this decision. (An aside: for this discussion I’m going to assume “success” equals working as a research scientist, which is the typical goal of an entering grad student. I don’t mean this as a value judgement, since “success” is really “whatever career path you find fulfilling”, and I’m just as happy to train phenomenal future high school science teachers as future faculty at Harvard.)

I think the essence of what determines your long-term success as a scientist is your ability to influence the scientific discussion. When you’re at a point in your career where people pay attention to your work, and want to know “What does <your name> think about this?”, you are on a near-certain path to a stable position as a research scientist. If, instead, no one is reading your papers (to the extent that you’ve published them at all), or wants to hear what you say at conferences, or calls you up to ask you about your area of expertise, then you’re in danger of drifting out of the field.

Now, the factors that lead to having scientific influence are many. Among the most important are:

  • Writing lots of papers
  • Writing interesting papers
  • Writing papers using novel or superior data sets
  • Writing papers on a timely topic
  • Being recognized as leading the above papers, rather than being directed by others
  • Communicating your ideas with clarity
  • Being socially well-connected in your field
  • Being really, really, really, unusually smart and/or creative
  • Having influential mentors promoting you

To be scientifically successful, you don’t need to have all of these factors, or even most of these factors. You just need to have enough of them, or a long enough suit in one or two of them, that people can’t ignore what you’re doing.

Of this list, at least half are almost entirely under a student’s own control, no matter where they go to graduate school. You can pick inspiring mentors, write lots of papers on interesting, timely topics, and give riveting talks about them, no matter where you are. You can fail to write any papers (on topics boring or not) and give lousy talks, under the negative guidance of ineffective advisors, even if you go to a top-ranked school. Some of the other factors probably do have some correlation with top-ranked programs, in that such programs are more likely to have the luxury to admit only students with early evidence of brains and creativity, and they tend to have more of the resources that lead to superior data access, or a larger pool of productive theorists (postdocs & faculty). [However, in astronomy at least, there is sufficiently rich access to public resources (SDSS, NASA's Great Observatories, 2MASS, etc.) that one can usually create "novel or superior data sets" no matter where you are. For lab-based physics, this is likely less true.] Nowhere in this list does the relative “prestige” of your graduate program have much direct impact on your eventual scientific influence. When I hire postdocs, or evaluate fellowship applications, I am drastically more impressed by what someone actually did than by where they went to school.

Beyond its import for deciding where to attend school, the above list also elucidates why “climate” issues can have such a large impact on your eventual career success. If you’re at an institution that places obstacles in your path that make it difficult to write papers, to find good mentors, and to make scientific connections in your field, then you’ve got a problem. You’re going to be struggling uphill.

However, the same list also provides the recipe for climbing that hill, if you find that you’re on it. The number one thing you can do is to write papers (and preferably interesting and timely ones). People cannot ignore a large body of high quality work for long. Sometimes it takes a while before they notice, it’s true. But the more you publish, the more likely it is that people will begin to notice your work, and be influenced by it. As that happens, they will start noticing you as well, and will tend to deem you “someone worth having around”, whether as a postdoc, or at their conference, or as their next faculty colleague. This process is vastly easier with a good mentor behind you, but if you wound up without one (or gawd forbid with an anti-mentor), it’s going to be your only route out.

I think the clearest evidence of this is a relatively jaw-dropping preprint that was recently posted to the arXiv (h/t to Zuska). A former particle-physics postdoc (and current grad student in statistics) carried out a very detailed analysis of the productivity of postdocs on the Run II Dzero experiment, and how that translated into giving conference presentations and being hired into faculty positions. The paper found that the postdocs’ success in eventually landing faculty jobs was highly correlated with productivity (as measured by internal papers), with conference presentations (which were awarded by the leadership of the project), and with the degree of “physics socialization”. These correlations are all what you would expect, and reinforce the above list of what leads to being scientifically influential.

The jaw-dropping aspect of the paper is that the awarding of conference presentations was grossly gender biased (as was the fraction of service work assigned to the women). The female postdocs had drastically higher levels of productivity (indeed, half the men were less productive than the least productive woman), but were allocated far fewer conference presentations than men with comparable productivity. (Note: this is a paper you actually have to read, rather than just flipping to the table at the end. It’s a very well-done piece of statistical analysis, and can’t be fully appreciated from just comparing two means in a table.)

In this exercise, we see the influence game writ large. You need to be productive and visible. If some sort of bias (against women, or shy people, or people from state schools, or whomever) is present that conspires to make you less visible, you’re going to have to be even more productive. It’s not fair, and people in positions to fight against the bias in their institution should do so. But, at least it’s something that you have a chance of controlling.

CATEGORIZED UNDER: Academia, Women in Science
  • http://kea-monad.blogspot.com Kea

    How true.

  • http://startswithabang.com/ Ethan

    This is a great post with a link to a great article. I have a related question to pose to you: given that the vast majority of graduate students who enter graduate school wind up, for a variety of reasons, not choosing to define success as “becoming a research scientist,” do you know of any efforts to help them find non-academic careers?

    This didn’t exist at any of the four Universities I’ve called home at various times, and yet, it seems like this is just the sort of framework that might make having an advanced degree in the natural sciences more palatable and “normal-seeming” to the general public. Any thoughts?

  • Ambitwistor

    This post has an interesting quantitative critique — including a simple Monte Carlo simulation of an alternate null hypothesis — of the gender-bias preprint you mention.

  • Pingback: HowTo: succeed in grad school « Entertaining Research

  • Pingback: On becoming and influencial researcher - The TestMagic Forums

  • Ellipsis

What does your D0 colleague Gordon Watts — a directly knowledgeable and, I think, certainly non-sexist individual — have to say about that paper?

  • Harold

    Thank you Julianne. That was a very good post.

  • http://blogs.discovermagazine.com/cosmicvariance/julianne Julianne

Gordon is off in Europe on sabbatical, so I don’t know his take. He might blog about it.

And while I didn’t state it above, I doubt that anyone participated in conscious “Girls can’t do math” types of sexism. I think you get a net effect from a series of small decisions that seem defensible when taken individually, but that add up to a real net bias. We have a natural tendency to recognize talent that looks like our own (i.e., people whose strengths lie in careful detailed analyses often find fast-moving creative types to be shallow and showy, and fast-moving creative types are more likely to find careful deliberate scientists to be plodding and dull). Thus, when passing out conference presentations, you naturally want to give them to someone who you think will do a good job, where “a good job” frequently means “do the job the way I would have done it”. This leads to a tendency to give breaks to people who fit into a mold that you already know and respect, and to require extra proof of merit from people who lie outside that norm. The net result can be gender biased, even if the decisions that produce the result aren’t intrinsically gendered.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    I have discovered an interesting proof by example that the accumulated impact of many tiny effects can be quite dramatic.

  • http://blogs.discovermagazine.com/cosmicvariance/julianne Julianne

    But Sean, that rock was soft and just didn’t want to stay put.

  • http://gordonwatts.wordpress.com Gordon Watts

    Believe me when I say folks inside D0 are taking this quite seriously.

This is the first time anyone has done data mining like this of D0’s actual performance (that I know of). I very much like the idea of doing something like this – I think of it like a “closure” test. You say you aren’t biased – here is a way to test it.

    As for me blogging about it – and the specifics of the study – I can’t see me really adding anything to the conversation until/unless D0 releases more information (i.e. numbers). Otherwise I will just add to the noise. And, frankly, I can’t see how I wouldn’t be seen as a biased source as both an active member (and supporter) of D0 and a man. I’d prefer to wait for numbers. :(

I think it goes without saying that everything I write here is my opinion.

  • Terry

    The use of “productivity” in the Towers paper is not what most active research physicists would consider as productivity. In particular, productivity was measured by the number of internal reports written. I note that “Writing lots of internal reports” is not in your list of important factors for scientific influence. Towers does not address the number or importance of refereed publications for the male and female groups. What is interesting to me is the willingness of the females to spend time writing internal reports, which (I would guess) is not an effective way to boost your chances of getting a tenure-track job.

  • http://www.physics.usyd.edu.au/~brewer/ Brendon Brewer

    I haven’t read that preprint properly yet, I just had a glance. However, it worries me that people would address such an important question with the discredited, obsolete machinery of R.A. Fisher’s null hypothesis testing. I guess it’s not as much of a problem as the fact that it’s used extensively in medical research, but still…

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Terry, that’s not the way that large particle-physics experiments work. Every refereed publication has exactly the same list of authors — the entire collaboration, listed in alphabetical order. Completely useless for judging productivity. The “internal reports” are the way in which actual technical results are communicated within the collaboration. They are a very sensible proxy for productivity.

    It is always fascinating, when results like this come out, to see people grasp at any straw available, rather than just face up to the obvious. Of course I would like to see a much larger dataset and much cleaner controls, but studies like this aren’t telling us anything that shouldn’t already have been obvious.

  • http://www.math.columbia.edu/~woit/wordpress Peter Woit

According to the AAUW website, the author of this article is in the middle of a lawsuit against her former employer over the issue of how she was treated by her supervisor in the D0 experiment when she went on maternity leave.

    She may very well be right that D0 physicists are biased against women, but the lawsuit she has filed involving them seems to me to be worth mentioning.

  • http://mingus.as.arizona.edu/~bjw/ Ben

    She may very well be right that D0 physicists are biased against women, but the lawsuit she has filed involving them seems to me to be worth mentioning.

    Indeed, I’m sure that whenever her former supervisor is hiring for a job, he also mentions that his institution is being sued over his previous supervisory actions.

    One consideration I’d add to what matters in grad school: your fellow students. If you continue in the field, these people are going to be your friends, network, and sometimes collaborators for a long time. It’s important to go someplace where there will be lively people that you can talk to, that will be helpful, that you’ll learn things from. If you visit a place and everybody is unhappy and competing with each other, or never comes out of their offices, that is a huge red flag. Of course, the tenor of grad student behavior is partly set by how the faculty encourages them to behave, and you control how outgoing you yourself will be. But the “quality” of a department is its students almost as much as its faculty (even postdocs matter, if anyone lets them).

  • Haelfix

Absent some control over the actual quality of the internal papers in this case, it’s hard to make a clean judgement. For instance, if you write one revolutionary internal paper, that’s worth at least five basic reviews.

Incidentally, this contradicts some other research I’ve seen in the field, where women tend to write longer papers, versus men, who write shorter letters more frequently (shrug).

Having said that, I find the socialization argument compelling. If you have a lot of friends, you naturally will have more people reading your papers. If women or shy people find it difficult to socialize within a group because of unconscious gender bias, it should add up to a net effect like the one seen here.

Alas, it’s hard to correct for that, particularly in a large collaboration where you tend to lose your individual identity and where such factors are probably important for the net success of a project. It’s not hard to imagine a similar scenario where, say, a project is dominated by a group who speak mostly Chinese. It strikes me as obvious that an outsider, even one who speaks Chinese well, will need to display his worth to a much higher degree. An all too human situation.

  • Haelfix

    I’ve sometimes wondered if we could correct for this effect by doing a little simple tinkering with the presentation of papers.

What if (thinking aloud) the authors’ names were actually listed at the end, rather than the beginning? It would be inconvenient (particularly if it’s part of a multi-threaded line of research), but it might make people scanning abstracts a little more likely to actually read the paper, rather than rely on name recognition to weigh its merits before reading further.

I’m surely guilty of this at times, as I suspect most people are (especially when in a time crunch).

  • Mike M

The most depressing thing about that paper is that it is so obviously prejudiced. Surely, an honest appraisal of a dataset such as this would highlight that other, simpler and more robust tests, like comparing mean metrics for male and female samples, fail to indicate any gender discrimination, and that the ultimate “output” of this alleged discrimination is that a higher fraction of the female postdocs went on to faculty positions than the male ones.

    Personally, I think it highly likely that there is gender discrimination in many fields, including physics, but such papers which so clearly set out to reach a specific conclusion, and were not going to quit until they found a metric that “proved” the point, do not go any way toward demonstrating it. Indeed, it might be argued that they achieve quite the opposite by allowing those who would deny that such practices occur to dismiss legitimate research by lumping it in with this kind of carefully engineered propaganda.

  • Count Iblis

    Is Eq. 4 on page 10 also a commonly accepted measure for socialization in theoretical/mathematical physics? I would score very low using that measure…

  • Dave

I am puzzled by Mike M’s claim that the Towers paper is obviously biased. Certainly, the author has her own experience within the D0 collaboration, which has resulted in a lawsuit based on related issues. So, some bias might be suspected. But I don’t see any evidence of bias in the paper, although I can’t say the same for Mike M’s comment.

The fraction of D0 postdocs that went on to get faculty jobs is not likely to be a good statistic for such a study. I believe that my own university was hiring in high energy physics during the time considered by this study, and if I recall correctly, several of the leading candidates were actually married to each other. I think that as many as two pairs of married postdocs received double offers from other universities, although it may be that one of these pairs was from CDF. In any case, the search for double jobs makes this a very difficult statistic to use. Plus, the difference in faculty job success (4/9 females vs 16/48 males) is obviously not statistically significant.

Perhaps even stronger evidence for bias in physics is the difference in the number of postdocs: 9 females vs. 48 males. This is not so likely to be bias by D0, because the pool of physics graduates to hire from probably has a similar ratio. But the ratio of male to female physics students varies very significantly from country to country. In France, Italy and Argentina, the fraction of female physics students approaches 50%, and the Argentine students I’ve met were surprised to learn that there were many more male physics students in other countries.
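A quick way to check Dave’s claim above, that 4/9 versus 16/48 is not a statistically significant difference, is a Fisher exact test on the 2×2 table of outcomes. A minimal sketch, assuming Python with scipy (the counts are the ones quoted in the comment, nothing more):

```python
# Check whether 4/9 female vs 16/48 male faculty outcomes differ significantly.
from scipy.stats import fisher_exact

# 2x2 table: rows = (female, male), columns = (got faculty job, did not)
table = [[4, 9 - 4],
         [16, 48 - 16]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.2f}")
# With counts this small, the p-value comes out well above 0.05,
# consistent with "not statistically significant".
```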

  • amused

From what I’ve seen, the typical successful research career trajectory goes like this: PhD with an advisor who is prominent in a fashionable field, followed by a couple of postdocs with leading groups in the field, followed by a tenure-track faculty position. Typically, but not always, the PhD and postdocs will be at top institutions, since that’s where the top groups tend to be. (However, from what I’ve seen, it is extremely rare for someone to land any faculty position at a research uni without at least one of the PhD/postdocs having been at a top uni.) I’m not so sure that the person has to be “influential” at the time they land the faculty job. Typically, the person will have been junior author on some influential papers during his/her time with the hot-shot folks. He/she may have some single-author papers as well, but it seems these don’t need to have had much impact, and often they don’t. (E.g., looking up some random people who followed this trajectory, I notice that some of their papers as junior author with the big shots have 100s of cites, while their single-author ones have around 10 cites.)

    I would guess that “influence” becomes more of an issue when the person is up for tenure. Getting the initial tenure-track job seems often more based on perceived potential. A cynic might say that it also has a lot to do with influential senior people feeling that they have something personal at stake in seeing the person succeed.

As someone trying to “climb the hill”, I’ve found it all but impossible to compete with people on the “success trajectory” described above, and I basically survive on the leftover scraps. That’s no doubt due as much to my own shortcomings as anything else, but one thing I can say for sure is that single-author PRL papers are no match for their junior authorships of papers with the big shots.

  • Mike M

I am puzzled by Mike M’s claim that the Towers paper is obviously biased. Certainly, the author has her own experience within the D0 collaboration, which has resulted in a lawsuit based on related issues. So, some bias might be suspected. But I don’t see any evidence of bias in the paper, although I can’t say the same for Mike M’s comment.

Of course there is bias in my comment. There is bias in everyone’s comments. That is what comments are. But as for evidence of bias in the paper, well, what would you have done with the data presented (assuming that you were sufficiently arrogant to believe that your personal interest in the case had not completely compromised your objectivity)? Surely, the obvious first step is to calculate a simple, relatively robust statistic such as a t-test to check for differences in mean properties between the subsamples. Once you had discovered that this test failed to reveal any particularly interesting differences between the male and female subsamples, hopefully you would have stopped, because you would be a good enough statistician to know that if you keep playing around you will always find a statistic that returns a significant result, but that such an analysis is fatally flawed because you had to cast around to find it. At the very least, you would have reported the null results of the t-test and anything else you tried before you found the one you wanted, to demonstrate your scientific integrity in not emphasizing only the results that reinforce your prejudices. But not in this paper, because it has a mission.

    The fraction of D0 postdocs that went on to get faculty jobs is not likely to be a good statistic for such a study.

    On the contrary, it is the only unequivocal statistic that one can quote since anything else is subject to interpretation, while this “output” figure is the sole statistic that does not rely on trying to unpick the operations of a black box. And if your goal is to look for the places where the pipe leaks, it is exactly such statistics that are pertinent.

    Indeed, your comment on the number of postdocs of each gender involved in the experiment implies that, when it suits you, you are very happy to consider issues of career progression. Unfortunately, as with the author of that paper, you do the argument no favors at all by dismissing those elements that do not support your perspective.

    As to why some countries do very much better in terms of numbers, sadly the answer is fairly obvious: gender equality in physics anti-correlates quite closely with the level of esteem in which the subject is held. My female friends in Spain and Italy have made this point very forcefully, by pointing to the gender inequalities in subjects like engineering that are held in high esteem in those countries.

  • http://mingus.as.arizona.edu/~bjw/ Ben

    A t-test only tests for differences in means of some quantity. It does not tell you if two quantities are correlated, or if (for example) the distributions have different shapes. At the end of Sec 4.1, the paper points out that males and females have very different shaped productivity distributions. In this case, applying a t-test is completely bogus.
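Ben’s point, that a t-test compares only means and can be blind to a difference in distribution shape that a Kolmogorov-Smirnov test would catch, can be illustrated with a toy example. A minimal sketch, assuming Python with numpy and scipy, using invented distributions rather than the D0 data:

```python
# Two samples with the same mean but very different shapes:
# the t-test usually sees nothing, the KS test usually does.
import numpy as np
from scipy.stats import ttest_ind, ks_2samp

rng = np.random.default_rng(42)
n = 500
skewed = rng.exponential(scale=1.0, size=n)         # mean 1, strongly skewed
symmetric = rng.normal(loc=1.0, scale=1.0, size=n)  # mean 1, symmetric

t_stat, t_p = ttest_ind(skewed, symmetric, equal_var=False)
ks_stat, ks_p = ks_2samp(skewed, symmetric)

print(f"t-test p  = {t_p:.3f}")   # means match, so usually not significant
print(f"KS-test p = {ks_p:.2e}")  # shapes differ, so usually highly significant
```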

  • Pingback: tom-mcgee.com: the blog » How To Succeed in ________

  • ike

    “It pays to be one of us…”

    Not too pleasant, but let’s be realistic here, for example…

    Persistent Nepotism in Peer Review

Abstract: In a replication of the high-profile contribution by Wenneras and Wold on grant peer-review, we investigate new applications processed by the medical research council in Sweden. Introducing a normalisation method for ranking applications that takes into account the differences between committees, we also use a normalisation of bibliometric measures by field. Finally, we perform a regression analysis with interaction effects. Our results indicate that female principal investigators (PIs) receive a bonus of 10% on scores, in relation to their male colleagues. However, male and female PIs having a reviewer affiliation collect an even higher bonus, approximately 15%. Nepotism seems to be a persistent problem in the Swedish grant peer review system.

    Face it – many people worked their way into positions of authority by political maneuvering, not scientific merit – and there is, in most universities today, a certain ideological barrier that has been raised.

    Essentially, if you open your mouth about the corporate takeover of the universities, the obsession with patents, proprietary research, and murky public-private partnerships, and the resulting wave of questionable research, you’ll also find your career facing a possible termination. Everyone knows this is true, but few people want to talk about it.

  • ike

    Also regarding the productivity issue… quick story here:

There was a famous professor at a prestigious university who published a phenomenal number of papers – and his graduate students grew tired of his manic tendency to slice one paper up into three, and so they hung a running tally of how many papers he had published on the office door.

While the graduate students did this to mock the ridiculous number of publications being put out (“way more prolific than Feynman, but way more forgettable” was how I heard it), the professor took it as a compliment…

    Many graduate students, when asked what they hate most about their jobs/lives/careers, will immediately reply “the politics”. The personal vendettas, the underhanded decisions about funding distributions, the decisions over who teaches classes and who doesn’t have to – it’s as bad as the court of the French Kings in many places.

    There’s an argument to be made that scientific merit takes second place to political maneuvering in our nation’s universities these days.

  • Hiranya

    #23 Mike M: What makes you think that physics is held in high esteem (compared to say, engineering, which is your example) in the US? By whom does it have to be held in high esteem for gender imbalance in numbers to be correlated with esteem?

  • Mike M

    A t-test only tests for differences in means of some quantity. It does not tell you if two quantities are correlated, or if (for example) the distributions have different shapes. At the end of Sec 4.1, the paper points out that males and females have very different shaped productivity distributions. In this case, applying a t-test is completely bogus.

    Of course it isn’t completely bogus: it addresses the question of whether, on average, the population of males and the population of females are treated differently, which is the sensible first question to ask. As I am sure you know, the central limit theorem tends to make statistics based on means a lot more robust than other statistics, since there is some hope of knowing the underlying distribution of the statistic. Any higher-order statistics may be more powerful, but they suffer greatly in robustness, as the fact that someone else can reach completely the opposite conclusion based on the same statistics but a different test demonstrates.

  • Mike M

    #23 Mike M: What makes you think that physics is held in high esteem (compared to say, engineering, which is your example) in the US? By whom does it have to be held in high esteem for gender imbalance in numbers to be correlated with esteem?

    Mainly I am reporting what I have been told by people I know who have worked in southern European countries and in the US as their explanation for the difference in fraction of women doing physics. My own experience is that physics in the US is held in relatively high esteem compared to, say, the UK, but I have less direct experience of Italy and Spain.

  • Ben

    Of course it isn’t completely bogus: it addresses the question of whether, on average, the population of males and the population of females are treated differently, which is the sensible first question to ask. As I am sure you know, the central limit theorem tends to make statistics based on means a lot more robust than other statistics, since there is some hope of knowing the underlying distribution of the statistic. Any higher-order statistics may be more powerful, but they suffer greatly in robustness, as the fact that someone else can reach completely the opposite conclusion based on the same statistics but a different test demonstrates.

    Mike, this is not good statistical procedure for several reasons. The “productivity” here is the independent variable; we might wish to measure conference presentations as a function of productivity. From what the author says, I gather that the productivity is not normally distributed, which is not surprising. Many things aren’t; you can have an asymmetric distribution with a small number of highly productive people, for example. A Kolmogorov-Smirnov test to compare the distributions of productivity is then more appropriate than a t-test.

As a hypothetical example, it’s possible to construct two populations with the same distribution of x, one in which y is correlated with x with slope +1, and one in which y is correlated with x with slope -1. (Let us say, x is hair color, y is success as a Hall impersonator or an Oates impersonator, respectively). These populations could have the same means and pass the respective t-tests. However, the relation y(x) is directly opposite, which would be apparent by measuring a correlation coefficient.
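A minimal sketch of that hypothetical, assuming Python with numpy and scipy (the groups, slopes, and noise level are invented purely for illustration): the two groups have matching means of x and y, so t-tests see nothing, while the correlation coefficients have opposite signs.

```python
# Same means, opposite x-y relations: t-tests pass, correlations disagree.
import numpy as np
from scipy.stats import ttest_ind, pearsonr

rng = np.random.default_rng(0)
n = 300
x1 = rng.normal(0.0, 1.0, n)
y1 = +1.0 * x1 + rng.normal(0.0, 0.3, n)   # slope +1 group
x2 = rng.normal(0.0, 1.0, n)
y2 = -1.0 * x2 + rng.normal(0.0, 0.3, n)   # slope -1 group

print("t-test on x, p =", round(ttest_ind(x1, x2).pvalue, 3))   # usually > 0.05
print("t-test on y, p =", round(ttest_ind(y1, y2).pvalue, 3))   # usually > 0.05
print("corr in group 1 =", round(pearsonr(x1, y1)[0], 2))       # close to +1
print("corr in group 2 =", round(pearsonr(x2, y2)[0], 2))       # close to -1
```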

  • String Theorist

    Excellent post on what contributes to a “successful” career!

About women in physics: I don’t know about experimenters; they often seem like large corporations where things work differently. But in Theory, I have known one occasion when a very smart female theorist did extremely well for herself, two instances where pretty smart female theorists sank after postdocs like many others in the field, and two cases where not-so-smart female theorists did *extremely* well for themselves. I cannot say that I have met many guys who fit into this last category in string theory. Of course I am going to be attacked on my personal views on “who is smart” and on my relying on anecdotal evidence,…

In any case, in the (obviously biased) view of some of their male colleagues, the last category had easy entry into male-dominated collaborations, their mentors were more generous to them with their time, they often were able to associate themselves with the “superstars”, etc., because they were not unattractive.

Society has a way of trying to correct past wrongs by doing compensating wrongs now. I am certainly not saying there isn’t a problem (women are so extremely rare in theory, after all!), but the problem is more complicated than how it is often discussed.

  • ike

    It seems pretty clear that when people talk about discrimination, there are a number of issues that can’t be easily measured – but money is not one of them. Thus, any responsible analysis of discrimination and/or nepotism in science, say physics in particular, say one high-energy physics experiment in particular – ah, yes, that is essentially an anecdotal event if you want to extrapolate back and make meaningful statements about discrimination in science in general. . . excuse the disjoint, but you have to look at faculty appointments and research funding decisions if you want to really look at discrimination.

    Here is another anecdote that also indicates a good degree of gender discrimination at top U.S. universities: Lawrence Summers, while President at Harvard

    However, a one-dimensional statistical test along the gender axis misses important “hidden variables” and results in systematic biases, the ever-present issue in any statistical analysis. You would need to do a two-dimensional statistical analysis along the axis of gender and the axis of nepotism, at least. Nepotism is often gender-independent, for example.

It’s high time that some independent sociologist types – ideally an external investigation, not an in-house review – took a look at this issue in the United States. It’s been done elsewhere with far more statistical rigor:

Wenneras & Wold (1997), “Nepotism and sexism in peer-review”, Nature 387.

    In the multiple-regression analysis, we assumed that the competence scores given to applicants are linearly related to their scientific productivity. We constructed six different multiple-regression models, one for each of the productivity variables outlined above. In each of these models, we determined the influence of the following factors on the competence scores: the applicant’s gender; nationality (Swedish/non-Swedish); basic education (medical, science or nursing school); scientific field; university affiliation; the evaluation committee to which the applicant was assigned; whether the applicant had had postdoctoral experience abroad; whether a letter of recommendation accompanied the application; and whether the applicant was affiliated with any of the members of the evaluation committee.
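For concreteness, here is a minimal sketch of that kind of multiple-regression model with an interaction term, assuming Python with pandas and statsmodels. The variables mirror a few of the factors listed above (productivity, gender, reviewer affiliation), but the data are synthetic placeholders, not the Swedish applications:

```python
# Regress a "competence" score on productivity, gender, reviewer affiliation,
# and a gender-by-affiliation interaction, on synthetic illustrative data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "productivity": rng.gamma(shape=2.0, scale=5.0, size=n),
    "female": rng.integers(0, 2, size=n),
    "reviewer_affil": rng.integers(0, 2, size=n),
})
# Hypothetical score: baseline + productivity effect + affiliation bonus + noise
df["competence"] = (2.0 + 0.05 * df["productivity"]
                    + 0.3 * df["reviewer_affil"] + rng.normal(0, 0.5, n))

model = smf.ols("competence ~ productivity + female * reviewer_affil", data=df).fit()
print(model.summary().tables[1])   # coefficients, including the interaction term
```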

  • jack brennen

    Reading her article I started to get the feeling that the whole field is broken even before I got to the sexism part. She basically says being good at doing experiments is not valued either internally or by the universities when hiring, and that consequently everyone spends the minimum possible time actually working on the experiment. From the way she states it I’m guessing this isn’t even a controversial statement.
    I can understand why this is (universities want chiefs not indians), but it seems exactly the wrong incentive structure if we want more/better science for a fixed amount of funding.

  • http://virteal.com/SecteDesBouders JeanHuguesRobert

    “But, at least it’s something that you have a chance of controlling.”

If only that were true. Unfortunately, one does not control things merely because one is aware of them.

Gravity rules me despite my understanding of its mechanism.

You point out that success is social, even in science. OK. Now, what can we do about this? How can we advance science *despite* the social obstacle?

  • Mike M

    From what the author says, I gather that the productivity is not normally distributed, which is not surprising. Many things aren’t; you can have an asymmetric distribution with a small number of highly productive people, for example. A Kolmogorov-Smirnov test to compare the distributions of productivity is then more appropriate than a t-test.

Indeed, given that the metric for productivity is fairly arbitrary, it can have pretty much any distribution you want. I suppose you could use a KS test to see whether male and female productivity differs, but I wouldn’t recommend it, since the numbers are sufficiently small that you would have to MC the significance, plus a positive result doesn’t tell you how they differ, which would seem to be the salient issue. Comparing the means of the distributions, on the other hand, is relatively robust even with the small numbers here, and directly addresses the interesting question “are female postdocs on average more productive than their male counterparts,” to which the answer is “no.” So, the facts remain that on average the productivity of male and female postdocs is indistinguishable, and on average the fraction who went on to faculty positions is indistinguishable. Faced with those very basic results, you would be hard pressed to find evidence of sex discrimination in the data.
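A minimal sketch of what “MC-ing the significance” of a small-sample difference in means might look like, assuming Python with numpy: a permutation test that makes no distributional assumptions. The productivity numbers below are placeholders, not the D0 data:

```python
# Permutation (Monte Carlo) test for a difference in mean productivity
# between a group of 9 and a group of 48, using placeholder values.
import numpy as np

rng = np.random.default_rng(7)
women = np.array([8.0, 11.0, 9.0, 14.0, 10.0, 7.0, 12.0, 9.0, 13.0])  # 9 placeholders
men = rng.gamma(shape=2.0, scale=4.0, size=48)                         # 48 placeholders

observed = women.mean() - men.mean()
pooled = np.concatenate([women, men])

n_perm, count = 20000, 0
for _ in range(n_perm):
    rng.shuffle(pooled)                       # random relabelling under the null
    diff = pooled[:len(women)].mean() - pooled[len(women):].mean()
    if abs(diff) >= abs(observed):
        count += 1

print(f"observed mean difference = {observed:.2f}")
print(f"two-sided permutation p  = {count / n_perm:.3f}")
```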

  • Pingback: Sherry Towers: Geschlechterdiskriminierung in der Teilchenphysik? « Begrenzte Wissenschaft

  • Mike M

    However, a one-dimensional statistical test along the gender axis misses important “hidden variables” and results in systematic biases, the ever-present issue in any statistical analysis. You would need to do a two-dimensional statistical analysis along the axis of gender and the axis of nepotism, at least. Nepotism is often gender-independent, for example.

It’s high time that some independent sociologist types – ideally an external investigation, not an in-house review – took a look at this issue in the United States. It’s been done elsewhere with far more statistical rigor:

But the question is, what should we do if we find evidence for nepotism? It seems to me that at some level it is unavoidable. Given the choice between employing someone you know personally, and therefore already know you can have a useful working relationship with, and an academically equivalent but otherwise unknown applicant, which do you go for? When reading two superlative references, one from someone you know and trust and the other from someone you have never met, to which do you give more weight?

A recent very clear example was highlighted in The Times. The article points out that in the recent prioritization of UK astronomy and particle physics facilities required by funding cuts, those facilities in which the panel members had a vested interest seemed preferentially to end up at high enough priority to avoid being shut down. I have the greatest respect for the members of the panel, who were confronted with an impossible job and did the absolute best they could under appalling conditions. I am sure they took all the appropriate steps to ensure that their interests were declared and mitigated, but in any such exercise a number of things have to be taken on trust, because no-one is an expert on everything, and it is a lot easier to have that trust when you have some personal knowledge.

    This kind of bias seems to me unavoidable, and if we try to deny its existence and strive for some completely unachievable gold standard of objectivity, surely all we are doing is deluding ourselves.

  • Martin

    Mike M: some bias is unavoidable; asking a small panel to vote on the future of projects in which they all have some vested interest is not the best way to minimize it, though. STFC could perfectly well have organized a review panel from outside the UK community. The fact that they didn’t is symptomatic of the slipshod way in which the whole thing has been handled. But we digress.

  • http://www.mpe.mpg.de/~erwin/ Peter Erwin

    Mike M @ 23:
    As to why some countries do very much better in terms of numbers, sadly the answer is fairly obvious: gender equality in physics anti-correlates quite closely with the level of esteem in which the subject is held. My female friends in Spain and Italy have made this point very forcefully, by pointing to the gender inequalities in subjects like engineering that are held in high esteem in those countries.

    Is this actually true? (I should point out that I’ve heard a somewhat similar argument from a female Turkish friend about the roles of physics and engineering in Turkey — that is, Turkish men who might be interested in physics are encouraged to study engineering instead, on the grounds that they’ll be the ones supporting a family and so need a better-paying career — so I’m inclined to take the argument seriously. But I have this weird habit of wanting to look for actual evidence, so…)

    Here are some numbers for the fraction of undergraduate physics degrees going to women in countries you’ve mentioned, for 2004:
    US: 21%
    UK: 21%
    Italy: 36% [*]
    Spain: 27%

    According to your argument, we should expect lower female fractions in engineering in Italy and Spain (where engineering has higher “esteem”), and higher fractions for the US and UK.

    Fraction of undergraduate degrees going to women in engineering:
    US: 21%
    UK: 16%
Italy: 28%
    Spain: 31%

    Hmm… looks like countries with higher fractions of women physicists also have higher fractions in engineering. I’d say your “fairly obvious” explanation isn’t.

(Data mostly from the NSF’s “Science and Engineering Indicators 2008” report, plus a few other reports found here and there.)

    [*] This is actually the number for 1998; I haven’t been able to find any newer numbers for Italy. Tentatively, I would assume that the 2004 value is, if anything, slightly higher.

  • http://okham.livejournal.com Massimo

    papers which so clearly set out to reach a specific conclusion [...] do not go any way toward demonstrating it. Indeed, [...] they achieve quite the opposite by allowing [...] to dismiss legitimate research by lumping it in with this kind of carefully engineered propaganda.

    Precisely the conclusion at which I arrived after reading the paper. I really do not believe that her case is strong enough to make the claims that she makes, even if we accept all of her premises (some of which are dubious).

  • Hiranya

    #40 Peter, thank you – this was what I was trying to get at with my question above. This “esteem” correlation can’t be true because of the much more equitable situation in the medical profession, which if anything is held in much higher esteem by the general population than either engineering or physics. Of course someone is going to trot out the old chestnut about “women preferring caring professions” but that’s not the point for the argument in question, unless you are also positing that the men in the medical profession are nice guys who gave up the hotly competed-for slots in medical schools to women out of charity!

  • Mike M

    Hmm… looks like countries with higher fractions of women physicists also have higher fractions in engineering. I’d say your “fairly obvious” explanation isn’t.

    Except that we weren’t talking about undergraduate degrees, but rather the numbers further along in career progression. The figures that you report reflect the higher priority of STEM subjects in some countries. The more salient question is how these numbers translate into PhD students, postdocs and faculty positions in physics compared to engineering. Clearly, the relatively small difference between the fraction of female undergraduates studying physics in the UK (21%) and Spain (27%) is not sufficient to explain the much larger differences further down the pipe.

  • ike

There are a number of places where nepotism plays a key role. My vote for the most likely datasets to look at would be the initial pools of applicants to faculty positions vs. the ones who are put on the short list, vs. the ones who are actually hired.

    Notable anecdotal stories of this process include faculty members who load the short list with obvious non-starters in order to make their favored candidate look better, as well as “job listings” that, after inquiry, turn out to be in-house promotions of junior members that have to be presented to the public as “job searches”.

    Does this really produce the best scientific research teams, whether big or small? In other words, “why is nepotism bad”?

    Nepotism is the showing of favoritism toward relatives and friends, based upon that relationship, rather than on an objective evaluation of ability or suitability. For instance, offering employment to a relative, despite the fact that there are others who are better qualified and willing to perform the job, would be considered nepotism. The word nepotism is from the Latin word ‘nepos’, meaning “nephew” or “grandchild”.

In such situations, the candidate’s “parental lineage” becomes a more important factor than the candidate’s actual abilities in terms of research and teaching. The reasons such tendencies are unhealthy should be obvious to anyone who wants to maintain a high-quality research and teaching institution.

    Right now, it should be noted, academics have a pretty cushy position – but imagine a situation in which external political considerations control all university activities – as was the case in Lysenko’s Soviet science “community”, and as was the case at all the German scientific institutions during the 1930s and 40s. The main external influence over science institutions in the U.S. today is the growth of secretive public-private relationships and the new emphasis on proprietary corporate research within all of the U.S. public universities. Extrapolate the trends of the past two decades forward 20 years, and what do you get?

    This is why people are calling for more openness and transparency in all aspects of academic life – but especially in hiring and funding decisions.

  • http://www.mpe.mpg.de/~erwin/ Peter Erwin

    ike @ 44:

In such situations, the candidate’s “parental lineage” becomes a more important factor than the candidate’s actual abilities in terms of research and teaching. The reasons such tendencies are unhealthy should be obvious to anyone who wants to maintain a high-quality research and teaching institution.

    In Spain, this is known as the “endogamy problem” (endogamy = marriage within the tribe or extended family). It occurs because, as I understand it, university hiring committees usually include several prominent citizens from the local city, and these people will tend to favor applicants who are locals over those from other regions (or foreigners). The result is that people who aren’t from that locality can sometimes have a harder time getting faculty or equivalent positions.

    (This isn’t nepotism in the strict sense — the people being favored are not necessarily directly related to the hiring committee; they’re just “locals”.)

  • http://gordonwatts.wordpress.com Gordon Watts

There is now a Nature article on this – http://www.nature.com/news/2008/080423/full/452918a.html – which has some hard numbers that D0 was able to put together on very short notice, as well as some context. Comments like those from Freya are pretty common in our field in the USA, unfortunately (see article). There is clearly work to be done.

  • Mike M

The more recent data highlights the “publication bias” that is unavoidable when a result is claimed to be significant on the basis of such limited statistics: had Dr Towers based her analysis on the data from 2006/7 and discovered that women were over-represented in conference presentations over the period (though presumably not at a statistically significant level), then there would presumably have been no arXiv preprint and no heated discussion here. Or perhaps she would have kept looking for some other metric of perceived value until she found one that matched her prior, and published that instead.

    And once again let me reiterate that I think it highly likely that sex discrimination exists in many places including particle physics: my point is only that those who “prove” it by such dubious means are actually doing more harm than good in making the case.

  • Nicole

    Mike M, Dr. Towers sent a copy of her manuscript to the collaboration in 2006. This is why the situation for female post-docs subsequently improved. This information is in the Nature article.

  • Mike M

    Nicole: you have no way of knowing whether that is true or not. The only facts that are apparent are that Dr Towers produced a paper that claimed a statistically significant degree of gender discrimination. An alternative analysis of the data found no such effect. So the conclusion is demonstrably questionable.

    The obvious way to check is to repeat the experiment and see if the statistics hold up or whether they are just noise. Such a repeat failed to reproduce any claim of discrimination. So either you can conclude that something has radically changed or that the original conclusion was flawed.

    On the basis of the evidence provided, there is no way of telling which is the case.
