As I observed before, modern medicine is subject to some of the same statistical issues as social science in its tendency to put an unwarranted spotlight on preferred false-positive results. Trials and Errors – Why Science Is Failing Us:
This doesn’t mean that nothing can be known or that every causal story is equally problematic. Some explanations clearly work better than others, which is why, thanks largely to improvements in public health, the average lifespan in the developed world continues to increase. (According to the Centers for Disease Control and Prevention, things like clean water and improved sanitation—and not necessarily advances in medical technology—accounted for at least 25 of the more than 30 years added to the lifespan of Americans during the 20th century.) Although our reliance on statistical correlations has strict constraints—which limit modern research—those correlations have still managed to identify many essential risk factors, such as smoking and bad diets.
Summary: Crowdfunding science is a good idea to add additional support to underfunded missions or to enable small projects. It is not a good idea to draw upon public opinion to fund research projects from scratch. It might appear as if public money is put to good use, but that use is likely to be very inefficient and misdirected and doesn’t actually solve any systemic problem. If you must, go occupy Wall Street, vote, and make sure your taxes are put to good use.
I have a few of the same questions as Sabine overall. This despite the fact that I solicited funds in a genotyping project. The key is that if you’re going to do crowdfunding/crowdsourcing you have to be clear and precise about the aims. In the abstract I think most people understand that most science fails, but I think it will be hard to get funds if you continue to fail because you’re aiming for “home-runs.” Rather, the best option if you want to go in this direction is to be modest, and aim toward a low-risk, low-reward project. This will minimize the disappointment on the part of your numerous funders, who are going to be more engaged and curious as to the specific result than the NSF or NIH would be.
Though one issue that does need to be pointed out is that at the early stage the people donating to these projects are not the typical citizen. I know the identities of the people who donated to the Malagasy genotyping project, and well over half of them were faculty, postdocs, or grad students. In other words scientists were funding science.
But I think the bigger issue here in terms of the “crowd” isn’t going to be in the area of funds. Rather, I suspect it will be collaboration and labor input. Something analogous to the open source movement. And just as open source software doesn’t mean that firms like Google and Microsoft aren’t eminently profitable, open source science isn’t going to replace “traditional” science. Rather, it’s going to complement it.
Over the past few weeks I’ve been observing the response to Rick Scott’s suggestion that Florida public universities focus on STEM, rather than disciplines such as anthropology. You can start with John Hawks, and follow his links. More recently I notice a piece in Slate, America Needs Broadly Educated Citizens, Even Anthropologists. There are several separate issues here: superficial concerns about money going to your political antagonists, commonsense considerations of the best use of public educational resources, and broader reflections upon the nature of a ‘liberal’ education.
One example of cyclicality that continues to today is the practice of law. The basic principles of Roman private law and the complaints that people made about lawyers and litigation were remarkably similar in the 300s to what they are today.
In the 6th century Justinian the Great sponsored a compilation of the body of law which was being widely practiced in the Roman Empire at the time, what is now known as the Corpus Juris Civilis. This is not an abstract or obscure point in the history of modern law:
The present name of Justinian’s codification was only adopted in the 16th century, when it was printed in 1583 by Dionysius Gothofredus under the title “Corpus Juris Civilis”. The legal thinking behind the Corpus Juris Civilis served as the backbone of the single largest law reform of the modern age, the Napoleonic Code, which marked the abolition of feudalism.
Imagine that the astronomical models of Ptolemy served as a basis for modern astrophysics! There’s only a vague family resemblance in this case. The difference is that law is fundamentally a regulation of human interaction, and the broad outlines of human nature remain the same as they were during the time of Justinian and Theodora. In this way law resembles many humanities, which don’t seem to exhibit the same progressivity as science. Our cultures may evolve, but there are constraints imposed by our nature as human beings. Human universals in humanistic enterprises speak to us across the ages. The story of Joseph and his brothers in Genesis speaks to us because it is not so different from our own. The meditations of Arjuna are not incomprehensible to the modern, even if they come from the imagination of Indians living thousands of years in the past. The questions and concerns of the good life are fundamentally invariant because of the preconditions of our biology.
Our single biggest concern when examining research is publication bias, broadly construed. We wonder both (a) how many studies are done, but never published because people don’t find the results interesting or in line with what they had hoped; (b) for a given paper, how many different interpretations of the data were assembled before picking the ones that make it into the final version.
The best antidote we can think of is pre-registration of studies along the lines of ClinicalTrials.gov, a service of the U.S. National Institutes of Health. On that site, medical researchers announce their questions, hypotheses, and plans for collecting and analyzing data, and these are published before the data is collected and analyzed. If the results come out differently from what the researchers hope for, there’s then no way to hide this from a motivated investigator.
As the example of the NIH illustrates this is not just a social science problem, it is rife in any science which utilizes statistics. Statistical methods have become metrics to attain by any means necessary, when in reality they should be guidelines to get a better grasp of reality. The only solution to the problem of conscious and unconscious bias in statistical sciences seems to me to be radical transparency of some sort. There’s a fair amount of science ethnography which suggests that how science is done departs greatly from the clean and rational enterprise which one might presume based on the final product. The only way to clean up some of the natural human bias in the enterprise is to shed some light on it.
I’ve been running the African Ancestry Project for a while now on the side on Facebook. But it’s getting unwieldy, so I finally set up the website. The main reason I started it up is that there have been complaints for a while now of problems with the 23andMe “ancestry painting” and such for some African groups. For example, a Nubian might be 70% “European.” One might argue that this is due to Arab admixture, but this is clearly not so if you look at the PCA plot. What’s going on? Probably a problem with the reference populations (only Yoruba for Africa), ascertainment bias in the chip (they’re tuned to European variation), and the fact that African genetic variance can cause some issues. I don’t know. But the problem has been persistent, and since most of the other genome blogging projects exclude Africans because they’re so genetically diverse I decided to take it on.
Three groups of people have submitted:
- People of the African Diaspora in the New World
- People from Africa, disproportionately Northeast Africans (Horn of Africa + Nubia, etc.)
- People of some suspected or known minor component of African ancestry
I’m at ~70 participants now. As one reference population set I’ve been using a subset of Henn et al. as well as some populations from Behar et al. I call this my “thin” set, since there are only ~40,000 SNPs. A “thick” set has on the order of 300,000-400,000 markers, but fewer populations. I’ve been putting the AAP members through ADMIXTURE in batches of 10, but I also run them all together sometimes for apples-to-apples comparisons. Yesterday I ran AF001 to AF070 from K = 2 to K = 14, unsupervised, with the thin reference. If you want to see all the results, go here. Doing all this myself over and over has given me some intuition as to the pitfalls in this sort of analysis, especially in the area of confirmation bias.
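For readers curious about the mechanics, a sweep like that is usually scripted rather than typed run by run. Below is a minimal Python sketch that just builds the ADMIXTURE command lines for an unsupervised K = 2 to K = 14 sweep; the input file name (`aap_thin.bed`) is hypothetical, and it assumes PLINK-format .bed/.bim/.fam files, which is what ADMIXTURE expects.

```python
def admixture_commands(bed_file, k_min=2, k_max=14, threads=4):
    """Build shell commands for an unsupervised ADMIXTURE sweep over K.

    Each run writes <prefix>.<K>.Q (per-individual ancestry fractions)
    and <prefix>.<K>.P (allele frequencies per inferred cluster).
    """
    cmds = []
    for k in range(k_min, k_max + 1):
        # -jN runs ADMIXTURE with N threads
        cmds.append(f"admixture -j{threads} {bed_file} {k}")
    return cmds

# Print the commands rather than executing them, so the sketch is inspectable.
for cmd in admixture_commands("aap_thin.bed"):
    print(cmd)
```

In practice you would feed these to a shell or a job scheduler, and ADMIXTURE’s `--cv` flag can be added to compare cross-validation error across K values.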
First, if it is clear that you haven’t read the post itself and leave a comment I won’t just not publish it, but I’ll ban you. Second, if you complain about this in the comments, I’ll ban you too. Now that you feel appropriately welcome, I want to explore some of the issues beneath Chris Mooney’s post, Vaccine Denial and the Left:
So I want to further explain my assertion that vaccine denial “largely occupies” the political left. It arises, basically, from my long familiarity with this issue, having read numerous books about it, etc.
First, it is certainly true that environmentalists and Hollywood celebrities have been the loudest proponents of anti-vaccine views. To me, that is evidence, although not necessarily definitive. So is the fact that we see dangerously large clusters of the unvaccinated in places like Ashland, Oregon, and Boulder, Colorado, which are very leftwing cities.
What’s tricky is, there’s not a standard left-right political ideology underlying this. Rather, it seems more associated with a Whole Foods and au naturel lifestyle that, while certainly more prominent on the bicoastal left, isn’t the same as being outraged by inequality or abuses of the free market.
This is a tricky issue. There is a stereotype that liberals who reject religion tend to gravitate toward New Age/environmentalist spirituality: the “mind abhors a vacuum” model. I used to accept this, but if you poke around the General Social Survey the reality is more complicated. For example, you can look up attitudes toward genetically modified food and astrology. The results don’t fall neatly into a Left-Right dichotomy. Part of the issue is that distinct groups have been aggregated into one catchall category. Consider me. I identify as a conservative, which would indicate far higher odds of me being a Creationist, but I’m clearly not.
There aren’t any questions about vaccination in the General Social Survey, but there are several about trust and faith in science, or lack thereof. First I pruned all of the questions which were asked before 1998, so the results below are by and large from the 2000s. After that I had a set of variables to play with, to serve as replicates in terms of observing trends. Below are three tables with my results.
Table #1 is just a set of results which shows how political ideology, party identification, and educational attainment correlate with attitudes toward science. In that table the columns add up to 100%. So below, 4% of liberals strongly agree with the assertion that “we trust too much in science,” while 21% strongly disagree.
The second table is limited to self-identified liberals. I wanted to query how attitudes toward science vary by demographic among liberals. In this case the rows add up to 100% on the margin (rotated from the first table). So in terms of those who strongly agree that we trust too much in science, 29% are male and 71% female, among self-identified liberals. Remember that in some classes there won’t be a 50/50 breakdown, so look for the variation in relative trends.
Finally, for the third table I have a regression. I divided the sample into liberal and conservative groups, and ran a set of variables to predict opinions on the questions which I’ve covered so far. The first row has the R-squared, the magnitude of which illustrates how much the listed variables predict variation on the question. Subsequent rows have beta values for the variables, which indicate the direction and magnitude of the effect of a given variable. The questions are all either numerical or easily recoded as numerical (e.g., atheist, agnostic…to total belief in God is 1, 2…6). To get an intuition as to what’s going on, just look at each variable and its value. Those which are bold are statistically significant at p = 0.05. For example, among liberals greater belief in God seems to decrease trust in science. Socioeconomic status seems to increase it.
Please note that I’ve omitted some categories for variables where the sample size is too small, so some rows/columns may be less than 100% (e.g., Jews in “religion”). Additionally I’ve removed some response classes where N < 25, as the noise can confuse the trend line.
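To make the setup of the third table concrete, here is a toy Python sketch of that kind of regression, run on simulated data rather than the GSS. The variable names (`belief_in_god`, `ses`) just mirror the text, and the true coefficients are invented for illustration; the point is only that OLS recovers beta values and an R-squared of the sort reported in the table.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated predictors: a 1-6 belief-in-God scale and an SES index.
belief_in_god = rng.integers(1, 7, n)   # 1 = atheist ... 6 = total belief
ses = rng.normal(0, 1, n)

# Simulated outcome: trust in science, with invented effect sizes plus noise.
trust_science = 0.3 * ses - 0.2 * belief_in_god + rng.normal(0, 1, n)

# OLS with an intercept column; beta[1] and beta[2] are the reported betas.
X = np.column_stack([np.ones(n), belief_in_god, ses])
beta, *_ = np.linalg.lstsq(X, trust_science, rcond=None)

# R-squared: share of outcome variance explained by the predictors.
resid = trust_science - X @ beta
r_squared = 1 - resid.var() / trust_science.var()
print(beta, r_squared)
```

With this setup the belief-in-God beta comes out negative and the SES beta positive, echoing the direction of the effects described above among liberals.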
This morning I received an email from the communication director of the American Anthropology Association. The contents are on the web:
AAA Responds to Public Controversy Over Science in Anthropology
Some recent media coverage, including an article in the New York Times, has portrayed anthropology as divided between those who practice it as a science and those who do not, and has given the mistaken impression that the American Anthropological Association (AAA) Executive Board believes that science no longer has a place in anthropology. On the contrary, the Executive Board recognizes and endorses the crucial place of the scientific method in much anthropological research. To clarify its position the Executive Board is publicly releasing the document “What Is Anthropology?” that was, together with the new Long-Range Plan, approved at the AAA’s annual meeting last month.
The “What Is Anthropology?” statement says, “to understand the full sweep and complexity of cultures across all of human history, anthropology draws and builds upon knowledge from the social and biological sciences as well as the humanities and physical sciences. A central concern of anthropologists is the application of knowledge to the solution of human problems.” Anthropology is a holistic and expansive discipline that covers the full breadth of human history and culture. As such, it draws on the theories and methods of both the humanities and sciences. The AAA sees this pluralism as one of anthropology’s great strengths.
Changes to the AAA’s Long Range Plan have been taken out of context and blown out of proportion in recent media coverage. In approving the changes, it was never the Board’s intention to signal a break with the scientific foundations of anthropology – as the “What is Anthropology?” document approved at the same meeting demonstrates. Further, the long range plan constitutes a planning document which is pending comments from the AAA membership before it is finalized.
Anthropologists have made some of their most powerful contributions to the public understanding of humankind when scientific and humanistic perspectives are fused. A case in point is the AAA’s $4.5 million exhibit, “RACE: Are We So Different?” The exhibit, and its associated website at www.understandingRACE.org, was developed by a team of anthropologists drawing on knowledge from the social and biological sciences and humanities. Science lays bare popular myths that races are distinct biological entities and that sickle cell, for example, is an African-American disease. Knowledge derived from the humanities helps to explain why “race” became such a powerful social concept despite its lack of scientific grounding. The widely acclaimed exhibit “shows the critical power of anthropology when its diverse traditions of knowledge are harnessed together,” said Leith Mullings, AAA’s President-Elect and the Chair of the newly constituted Long-Range Planning Committee.
Chemistry likes to think of itself as the “central science.” Is that true? Intuitively it makes sense. But how can we measure that more rigorously? In comes the Stanford Dissertation Browser:
The Stanford Dissertation Browser is an experimental interface for document collections that enables richer interaction than search. Stanford’s PhD dissertation abstracts from 1993-2008 are presented through the lens of a text model that distills high-level similarity and word usage patterns in the data. You’ll see each Stanford department as a circle, colored by school and sized by the number of PhD students graduating from that department.
When you click a department, it becomes the focus of the browser and every other department moves to show its relative similarity to the centered department. The similarity scores are computed using a supervised mixture model based on Labeled LDA: every dissertation is taken as a weighted mixture of a unigram language model associated with every Stanford department. This lets us infer, that, say, dissertation X is 60% computer science, 20% physics, and so on. These scores are averaged within a department to compute department-level statistics (the similarities shown), and need not be symmetric. For instance, Economics dissertations at Stanford use more words from Political Science than vice versa. Essentially, the visualization shows word overlap between departments measured by letting the dissertations in one department borrow words from another department. Which departments borrow the most words from which others? The statistics are computed for each year in the data.
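The asymmetric “word borrowing” idea can be illustrated with a toy sketch. This is not the Labeled LDA model the browser actually uses; it is a simplified stand-in on two invented department corpora: each department gets a Laplace-smoothed unigram model, and a department’s tokens are attributed to whichever model makes them most probable, which naturally yields asymmetric scores.

```python
from collections import Counter

# Invented two-department corpora, not Stanford data.
corpora = {
    "econ":    "price market equilibrium vote election price market".split(),
    "polisci": "vote election party state vote election state".split(),
}

def unigram_model(tokens, vocab, alpha=1.0):
    """Laplace-smoothed unigram probabilities over a shared vocabulary."""
    counts = Counter(tokens)
    total = len(tokens) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / total for w in vocab}

vocab = sorted({w for toks in corpora.values() for w in toks})
models = {dept: unigram_model(toks, vocab) for dept, toks in corpora.items()}

def borrowed_fraction(source, target):
    """Fraction of `source` tokens better explained by `target`'s model."""
    toks = corpora[source]
    wins = sum(1 for w in toks if models[target][w] > models[source][w])
    return wins / len(toks)

# Asymmetric by construction: econ "borrows" vote/election from polisci,
# while polisci tokens are all at least as probable under its own model.
print(borrowed_fraction("econ", "polisci"), borrowed_fraction("polisci", "econ"))
```

In this toy example economics borrows words from political science but not vice versa, mirroring the asymmetry the Stanford team reports for those two departments.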
You can play around with the browser here. I’m assuming at some point in the near future this sort of analysis is going to get much, much easier, because powerful software will be able to extract and visualize patterns out of a sea of data. Below the fold are five screen shots I thought were of interest: genetics, biology, and chemistry dissertations in 2008, and anthropology in 2007 and 1998.
David Dobbs has a link roundup and commentary on what’s been going down with l’affaire Hauser. It doesn’t look good for Hauser et al., though it seems that the downfall was ultimately precipitated from within, if press reports are to be believed. Part of the issue here seems to be that there’s a level of opacity in the scientific process, and you have to trust the scientists themselves over the short term. Over the long term the system of science and its general culture tends to self-correct, at least in the natural sciences, but in the interim careers can rise and fall, and science is produced by human beings. We know that science is possible; it’s been done for at least a few centuries even with the most constrained definitions, but we also know that it isn’t necessarily entailed by the existence of any complex society. A particular set of contingent conditions needs to come together to allow for its emergence and perpetuation. So it’s all fine and good to observe that science as a system self-corrects, but without the individual incentives and institutional checks & balances it may never have a chance to flower.
This brings me to Dobbs’ comment about more “open science”:
Are Top Scientists Really So Atheistic? Look at the Data asks Chris Mooney. He’s referring to a new book, Science vs Religion: What Scientists Really Think by Elaine Howard Ecklund. Here’s the Amazon description:
… In Science vs Religion, Elaine Howard Ecklund investigates this unexamined assumption in the first systematic study of what scientists actually think and feel about religion. Ecklund surveyed nearly 1700 scientists, interviewed 275 of them, and centers the book around vivid portraits of 10 representative men and women working in the physical and social sciences at top American research universities. She finds that most of what we believe about the faith lives of elite scientists is wrong. Nearly 50 percent of them are religious. Many others are what she calls “spiritual entrepreneurs,” seeking creative ways to work with the tensions between science and faith outside the constraints of traditional religion. Her respondents run the gamut from Margaret, a chemist who teaches a Sunday-school class, to Arik, a physicist who chose not to believe in God well before he decided to become a scientist. Only a small minority are actively hostile to religion….
Some of Chris’ readers are rather agitated about this all, and he suggests that perhaps they should read the book to answer their questions. I haven’t read the book, but you can read much of it on Google Books or Amazon’s text search feature. Skimming a bit I encountered the term “spiritual atheist,” which many might find an oxymoron. Rather than present her interpretation, let me post some of the tables which have data in them.