I should be careful about being flip on this issue. As recently as the mid aughts (see Mutants) the details of this trait were not entirely understood. Today the nature of inheritance in various populations is well understood, and a substantial proportion of the evolutionary history is also known with reasonable clarity, as far as these things go. The 50,000-foot perspective is this: we lost our fur millions of years ago and developed dark skin, and many of us lost our pigmentation after we left Africa ~50,000 years ago (in fact, it seems likely that hominins in the northern latitudes were always diverse in their pigmentation).
- Genesis 16:12
By now you may have seen or read two important papers which just came out in Science, 2000 Years of Parallel Societies in Stone Age Central Europe, and Ancient DNA Reveals Key Stages in the Formation of Central European Mitochondrial Genetic Diversity. The details have been extensively explored elsewhere. If you don’t have academic access I highly recommend the supplement of the second paper. It’s also very illuminating if you don’t have a good grasp of the nuts and bolts of archaeology (I do not). I can’t, for example, confirm whether the merging strategies of different archaeological cultures were appropriate or not, because I’m not totally clear in my own head about the nature of these distinct archaeological ‘cultures’ (quotation marks because archaeologists infer culture from material remains, and so these may not be cultures in the sense we usually understand the term). But the overall finding is clear: in ancient Europe thousands of years ago there were multiple demographic replacements and amalgamations. The post-World War II thesis in archaeology that one could not infer changes in demographic character from material remains (because the latter can diffuse purely through memetic means) seems to be false. The correspondence is surprisingly tight.
National Geographic has an interesting article up, unoriginally titled Australia’s Aboriginals. There are lots of great data in there, though not much novel for anyone who has tread this territory before. For example, Aboriginals tend to have much lower morbidity and mortality when they are living their “traditional” lifestyle. This isn’t a particularly novel or surprising outcome. Rather, it seems like a supercharged version of the same problem which occurs when immigrants move from developing to developed societies, and shift toward massive portions and processed food. This modern regime is even impacting native-born segments of America’s population in a negative manner. Interesting and true.
But what concerns me is the background assumption that Aboriginals are timeless and static, arriving ~50,000 years ago from Sundaland and remaining in stasis ever since. My issue isn’t normative. And I’m fascinated by the inferences some archaeologists have made about the continuity of specific motifs in Aboriginal art. Additionally, from what I understand the material culture of Aboriginals is especially changeless in relation to other populations in the world. But one thing we know about H. sapiens is that cultural forms of expression are quite protean, especially symbolic aspects which might not preserve well. Would the Aboriginals of Australia be immune to this? I doubt it.
An email from a long time correspondent who recounts some graduate school interview experiences:
Hi, Razib. Last week I attended a 2-day long interview for grad school, during which I spoke with about 15 faculty, all of whom were biological anthropologists, though of varying specialities. During these informal meetings, the topic of bio vs cultural anthropology came up a few times and a couple of professors spoke very candidly about the divide that exists between the two disciplines and their desire to have bio anthropology split from the rest of anthro. A very common argument was the one you’ve made: that many cultural anthropologists have become glorified activists. This sort of ran counter to the attitude I encountered during my undergrad [identifying information redacted] wherein a ‘four field’ approach was pumped up. I thought this was an interesting little quirk. Basically, when bio anthropologists are amongst only their own (the grad program is separate from the 3 other subfields), they speak openly about the need for separation from cultural anthro because of the latter’s non-scientific ways, but when some of those same bio anthropologists are in the same building as their cultural anthro colleagues, they tout a holistic approach to the field as a whole. I suppose this is to cultivate a positive attitude in the young minds of students interested in all subfields, but it doesn’t seem crazy to think it could have a little bit to do with cultural anthro’s domination of department politics.
Anyway, long story short: your name popped up! It was referenced by a paleoanthropologist who was particularly keen on bashing of cultural anthro. I just thought it was a little amusing and that you should know that the biological anthropologists are with you! Although, I’m sure you know that based on the twitter conversation you had with John Hawks the other day.
The broader concept of finding “patterns of culture” isn’t worthless. And I’m pretty sure that the biological anthropologists above wouldn’t be ashamed to be on the same faculty as someone like Joe Henrich, who is asking serious questions with sound and transparent methods. Then there is someone like Michael Scroggins, who can write with a straight face that “in this conception, a gene is more rhetorical topic than scientific fact”, and who makes a big point of noting that I used the term gene in the singular. Are there really people who reduce everything to linguistic analysis? Why yes! Cato had the right of it in some ways. Perhaps sadder is the fact that Scroggins’ ruminations are awesomely persuasive to his colleagues. I’ll leave you with a typical example of “Scrogging”:
I encountered Joe Henrich’s work about 10 years ago. He is a fellow traveler of Robert Boyd, and admired by Dan Sperber; none of this is coincidental. These are the sorts of cultural anthropologists whom I can understand in my bones. Beneath the jargon there is no attempt at signalling artifice. For a taste of Henrich’s research, see the anthology Foundations of Human Sociality: Economic Experiments and Ethnographic Evidence from Fifteen Small-Scale Societies. For those who prefer more theoretical heft, The Origin and Evolution of Cultures will satisfy you (Not by Genes Alone is a popularized, condensed form of this book).
If you haven’t heard of Henrich & his colleagues, you’ve heard of their work. They’re behind the popularization of W.E.I.R.D., “Western, Educated, Industrialised, Rich and Democratic.” The concept refers to the fact that much of psychology consists of observations and experiments on exactly such populations, and then extrapolation from those results to make general assertions as to the character of human nature. This is a very popular and widely known idea that crops up in everyday conversation. I’ve been patronizingly lectured about it numerous times by individuals who perceive that I’m being too insensitive about some barbarous cultural practice (in my personal communication I don’t make it a secret that I prefer small-l liberal Western values; there’s no shame in W.E.I.R.D.ness). Speaking of insensitivity, it seems Henrich was accused of exactly this early in his career by the usual suspects:
I read Noble Savages, Napoleon Chagnon‘s memoir, last week. There isn’t much to say about this book that’s revelatory, but it definitely was a page turner. As far as my personal tastes go there was a little too much autobiography, and not enough science, in Noble Savages. But it’s a long work, so in absolute terms there’s a lot of science to dig into if you want to skim over the personal sections (frankly, I had a hard time keeping all the various tribes and individuals straight). There have been many reviews of Noble Savages since it came out last week. If you haven’t read the profile in The New York Times Magazine, I advise you to do so right now. At Scientific American John Horgan put up a post which illustrates how Chagnon has become a sort of token in the tribal wars between scientists (or scientists and non-scientists). You can see this in two reviews at The New York Times, one which consists of an extended sneer from a professor of cultural anthropology and gender studies, Elizabeth Povinelli, while the second treatment from Nicholas Wade reads almost as a panegyric. Charles C. Mann navigates the middle path in his review, being critical in some instances, but by and large praising the memoir.
Just pre-ordered a Kindle Edition of Napoleon Chagnon‘s new book Noble Savages: My Life Among Two Dangerous Tribes — the Yanomamo and the Anthropologists. I didn’t even know this was coming out next week, but The New York Times Magazine has a piece up, The Indiana Jones of Anthropology, which chronicles the controversial life & times of Chagnon. My previous posts about cultural anthropology were written with no knowledge about the impending publication of this article, or Napoleon Chagnon’s memoir. But the timing is fortuitous. One complaint by rightfully enraged cultural anthropologists (I didn’t deny that I was attacking their profession in the most extreme terms) is that I didn’t really offer an argument. As I said, the reason is that life is short and I’m not interested in convincing anyone.
But here’s a section of the article above which reflects just what I was alluding to:
In light of my two jeremiads against cultural anthropology, some readers may be curious if I have any positive vision, in the sense of any alternative model. To get a sense of my own orientation, Explaining Culture by Dan Sperber and The Origin and Evolution of Cultures by Peter Richerson and Robert Boyd would be sufficient (Scott Atran’s In Gods We Trust has been highly influential in my thought, but it is a rather dense work whose central topic may not be of interest to everyone). If books are not to your liking, see the resources at the Culture and Cognition Institute. Just to be explicit, an understanding of evolution or genetics is not necessary to gain a first order understanding of the nature of the phenomenon of human culture, but cognition is. When I say cognition, I mean the cognitive revolution and its rivals. An anthropology which binds disparate aggregate social phenomena and explains the variation which we see to any satisfaction must be rooted in what we know of the science of the mind.
If I have something to share, why not share it? Over the past few weeks I’ve been ruminating on some of the possible intersections between historical population genetics and anthropology, especially in light of the discussion that I’ve had in the past with Robin Hanson about ‘farmers vs. foragers’. Stipulating for the record that such a dichotomy is too stark, and only marginally useful (i.e., I think it is important to separate farmers and foragers into their own sub-classes, as some farmer types may share more with some forager types, and so forth), it may be that after the first wave of the Neolithic expansion the descendants of the foragers “bounced back” in many regions of the world. It does seem that ancient European hunter-gatherers have left modern descendants. They were not totally swamped out. Using autosomal patterns some genome bloggers have inferred the same pattern, and perhaps even a counter-reaction by “Mesolithic” populations which adopted some aspects of the “Neolithic” cultural toolkit.
Dienekes has an important post up, The womb of nations: how West Eurasians came to be. He outlines a scenario where a rapid expansion of a farming population has overlain much of Western Eurasia, atop aboriginal substrata. A few years ago you’d have laughed at such a model, mostly due to the authority of archaeologists and phylogeographers relying on mtDNA lineage distributions. No longer. This is not necessarily an orthodoxy, and the details of the model vary, but here is my verbal rendering of the simplest scenario:
1) ~50 thousand years before the present, hybridization between Eurasian hominins and the “Out of Africa” population
2) ~40-10 thousand years before the present, crystallization of the Paleolithic order of human population structure, derived from groups seeded in the original migration
3) ~10 thousand to a few thousand years before the present, the Paleolithic order is replaced and assimilated by farmers expanding from a few hearths
Below the fold is a stylized tree representation of what I have in mind.
But the other big feature is that the lake-filling events that occurred after 50,000 years ago were much smaller than those which occurred before. Climatically, the conditions 10,000 years ago should have been the same as the conditions 115,000 years ago. But the lake was only a fraction of the size. The authors find no natural causes which can explain this. So they suggest that the aridity starting around 50,000 years ago is related to the reduction in forest and increase in grasslands which occurred at this time. This vegetation change was a result of a huge increase in the frequency of fire in central Australia, which allowed fire-adapted plants to prosper at the expense of moisture-retaining forest. The increase in fire at this time is generally associated with the arrival of the first people on the Australian continent. It is known that much of Australia’s megafauna went extinct at this time, but Magee et al. (2004) show that even the tropical rains were affected by human migration, with drastic changes to the continent’s largest river basin.
If you read some of the academic literature on fire ecology you have a hard time not coming to the conclusion that modern humans terraformed the planet Earth! The hallmark of modern H. sapiens seems to be extinction of large organisms, a propensity to go where no hominin has gone before, and copious utilization of the “red flower.”
There are two interesting and related papers out today which I want to review really quickly, in particular in relation to the results (as opposed to the guts of the methods). Taken together they do change our perception of how the world was settled by anatomically modern humans, and if the findings are validated via replication (I think this is likely, in at least some parts) I was clearly wrong and misled others in assertions I made earlier on this weblog (more on that later). The first paper is somewhat easier to parse because it is in some ways a follow up on the paper from 2010 which documented admixture into Near Oceanian (Melanesian + Australian Aboriginal) populations from a distant hominin lineage, the Denisovans.
In this paper in The American Journal of Human Genetics they extend their geographic coverage. Denisova Admixture and the First Modern Human Dispersals into Southeast Asia and Oceania:
It has recently been shown that ancestors of New Guineans and Bougainville Islanders have inherited a proportion of their ancestry from Denisovans, an archaic hominin group from Siberia. However, only a sparse sampling of populations from Southeast Asia and Oceania were analyzed. Here, we quantify Denisova admixture in 33 additional populations from Asia and Oceania. Aboriginal Australians, Near Oceanians, Polynesians, Fijians, east Indonesians, and Mamanwa (a “Negrito” group from the Philippines) have all inherited genetic material from Denisovans, but mainland East Asians, western Indonesians, Jehai (a Negrito group from Malaysia), and Onge (a Negrito group from the Andaman Islands) have not. These results indicate that Denisova gene flow occurred into the common ancestors of New Guineans, Australians, and Mamanwa but not into the ancestors of the Jehai and Onge and suggest that relatives of present-day East Asians were not in Southeast Asia when the Denisova gene flow occurred. Our finding that descendants of the earliest inhabitants of Southeast Asia do not all harbor Denisova admixture is inconsistent with a history in which the Denisova interbreeding occurred in mainland Asia and then spread over Southeast Asia, leading to all its earliest modern human inhabitants. Instead, the data can be most parsimoniously explained if the Denisova gene flow occurred in Southeast Asia itself. Thus, archaic Denisovans must have lived over an extraordinarily broad geographic and ecological range, from Siberia to tropical Asia.
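Inferences of this kind typically rest on allele-sharing statistics such as the D-statistic (the "ABBA-BABA" test). As a rough sketch of the logic only, not the paper's actual pipeline:

```python
def d_statistic(sites):
    """ABBA-BABA D-statistic from per-site derived-allele frequencies.

    sites: iterable of (p1, p2, p3, p4) tuples of derived-allele
    frequencies for populations (P1, P2; P3, Outgroup).
    D > 0 indicates excess derived-allele sharing between P2 and P3,
    consistent with gene flow between them."""
    abba_sum = baba_sum = 0.0
    for p1, p2, p3, p4 in sites:
        abba = (1 - p1) * p2 * p3 * (1 - p4)  # P2 shares the derived allele with P3
        baba = p1 * (1 - p2) * p3 * (1 - p4)  # P1 shares the derived allele with P3
        abba_sum += abba
        baba_sum += baba
    return (abba_sum - baba_sum) / (abba_sum + baba_sum)
```

The intuition: if Aboriginal Australians (as P2) share more derived alleles with Denisovans (P3) than mainland East Asians (P1) do, D comes out positive; if neither modern population experienced archaic gene flow, allele sharing is symmetric and D hovers around zero.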
I have criticized the “pots not people” paradigm on this weblog before. In short, the idea is that material cultural changes reflected in the archaeological record are an indicator of memetic, not genetic, evolution. So a shift from pottery style X to pottery style Y informs you of a cultural switch. This is not implausible on the face of it. In the year 450 the dominant religion in the Roman Empire was a derived Jewish sect, Christianity. The only other de jure recognized religious organization within the Empire was another derived Jewish sect, an early form of Rabbinical Judaism.* But most people assume that there was far less genetic gain to Jews and Jewish-derived people. Rather, it was Jewish ideas which spread to non-Jews, and superseded non-Jewish ideas.
For most of my life I have had an implicit directional view of Holocene human culture. And that direction was toward more social complexity and cultural proteanism. Ancient Egypt traversed ~2,000 years between the Old Kingdom and the fall of the New Kingdom. But it is rather clear that the cultural distance which separated the Egypt of Ramesses and that of Khufu was smaller than the cultural distance which separates the Italy of Berlusconi and the Italy of Augustus. Not only is the pace of change more rapid, but the change seems to tend toward complexity and scale. For most of history most humans were primary producers (or consumers as hunter-gatherers). Today primary producers are only a small proportion of the labor force (less than 2% in the USA), and there are whole specialized sectors of secondary producers, service workers, as well as professionals whose duty is to “intermediate” between other sectors and smooth the functioning of society. The machine is more complex than it was, and it has gotten more complex faster and faster.
This is an accurate model as far as it goes, but of late I have started to wonder if simply describing in the most summary terms the transition from point A to Z, and omitting the jumps from B to C to … Y, may hide a great deal of the “action” of human historical process. My post “The punctuated equilibrium of culture” was inspired by my deeper reflection about the somewhat staccato character of cultural evolution. Granting that the perception of discontinuity is a function of the grain at which we examine a phenomenon, I think one can argue that to a great extent imagining the change of cultural forms as analogous to gradualistic evolution or the smooth descent of a ball toward the center of the earth is deceptive. The theories of history which many pre-modern peoples espoused can give us a window into perception of changes in the past: history was quite often conceived of as cyclical, rising and falling and rising. And yet even in the days of yore there were changes and increases in complexity. The Roman legions of Theodosius the Great in 390 A.D. were more complex institutions than those of Scipio Africanus in 200 B.C. The perception of stasis, and even decline, is due to the fact that the character and complexity of societies did not seem to exhibit direction toward progress over the short term. And that short term can be evaluated over centuries, far longer than any plausible human lifetime. So while it is all well and fine to focus on the long term trend line, the details of how the trend emerged matter a great deal when attempting to construct a model of the past which can allow us to make robust and rich inferences. The people of the past made robust inferences over any scale of time which mattered to them. The world was nearly as likely to get less rich as more rich.
I just finished reading a review of the literature since 1984 on the bioarchaeology of the transition to agriculture. Stature and robusticity during the agricultural transition: Evidence from the bioarchaeological record:
The population explosion that followed the Neolithic revolution was initially explained by improved health experiences for agriculturalists. However, empirical studies of societies shifting subsistence from foraging to primary food production have found evidence for deteriorating health from an increase in infectious and dental disease and a rise in nutritional deficiencies. In Paleopathology at the Origins of Agriculture (Cohen and Armelagos, 1984), this trend towards declining health was observed for 19 of 21 societies undergoing the agricultural transformation. The counterintuitive increase in nutritional diseases resulted from seasonal hunger, reliance on single crops deficient in essential nutrients, crop blights, social inequalities, and trade. In this study, we examined the evidence of stature reduction in studies since 1984 to evaluate if the trend towards decreased health after agricultural transitions remains. The trend towards a decrease in adult height and a general reduction of overall health during times of subsistence change remains valid, with the majority of studies finding stature to decline as the reliance on agriculture increased. The impact of agriculture, accompanied by increasing population density and a rise in infectious disease, was observed to decrease stature in populations from across the entire globe and regardless of the temporal period during which agriculture was adopted, including Europe, Africa, the Middle East, Asia, South America, and North America.
The abstract makes the conclusion more cut & dried than it is. It’s the result of aggregating their literature review and arriving at a net conclusion. Yes, on the balance agriculture did result in the deterioration of health. The old truism that farmers are a small and ill lot in comparison to hunter-gatherers seems to be correct in the generality. But the literature review also makes it clear that when it comes to something like stature there are often periodic reversals of the trend toward decrease in size. There may be spottiness of the record, and sampling error, but I began to wonder if we might not be seeing evidence of evolution & innovation in action!
Consider the checkered history of the potato in Ireland. In the 18th century the Irish shifted toward the potato faster than most other European peoples, and so entered into a phase of massive population expansion. On a per unit basis the potato was nutritional gold. Unfortunately we all know that the blight of the 19th century triggered a series of social and demographic catastrophes.
Until relatively recently the spread of agriculture in Europe, and to some extent the whole world, was pigeon-holed into two maximalist models: cultural or demographic diffusionist. Neither model was maximalist in the sense of denying the impact of culture or demographics in totality, but they tended to be brandished rhetorically in a manner where it was clear which dynamic was held to be the dominant mode of explaining the nature of cultural and genetic variation and their origins. Here are two representative headlines from the BBC:
- Genetic roots of Europe, “New DNA evidence suggests that a few hundred Stone Age hunter-gatherers were the ancestors of many modern day northern Europeans.”
For whatever reason archaeologists themselves haven’t been able to resolve these issues. To me it seems that ultimately, even if genetics is not determinative, or even especially insightful, it will at least sharpen the discussions and move scholars away from arguments of rhetorical excess.
One of the broader issues of which I’ve been becoming more conscious is the idea that all pre-literate societies were diffuse to the point of being in a state of band-level anarchy. The “demic diffusion” model to some extent seems to play into this, where simple demographic population growth, due to the ability of farmers to extract more calories per unit of land, allowed them to “swamp” the hunter-gatherers. This is a “low level” fundamental explanation which does not require any sort of collective complexity beyond that of the village. It is a classic illustration of a “social physics” model of human behavior. Similarly, the cultural diffusion model often seems predicated on imitation-through-proximity, as a serial adoption of farming practices occurs through choice or necessity (as an analogy, consider the adoption of firearms).
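The demic diffusion picture does have a classic quantitative form: Ammerman and Cavalli-Sforza's "wave of advance" model, in which a growing, diffusing farming population expands as a front with speed v = 2√(rD). A toy calculation of the sort of "social physics" I mean (the parameter values below are illustrative assumptions chosen to land near the often-quoted ~1 km/year Neolithic spread rate, not estimates from any particular study):

```python
import math

def wave_of_advance_speed(growth_rate, diffusion):
    """Front speed of a Fisher wave: v = 2 * sqrt(r * D).

    growth_rate: intrinsic population growth rate r (per year)
    diffusion: mobility coefficient D (km^2 per year)
    Returns the front speed in km per year."""
    return 2.0 * math.sqrt(growth_rate * diffusion)

# Illustrative values: 3% annual growth and D ~ 8.3 km^2/yr give
# roughly 1 km/yr, the ballpark figure often cited for the spread
# of farming across Neolithic Europe.
speed = wave_of_advance_speed(0.03, 8.3)
```

Note what the model leaves out, which is exactly the point of the paragraph above: the front moves by births and local mobility alone, with no institutions, identities, or politics anywhere in the equation.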
In hindsight I think the major problem with these models is that they downplay by understandable omission the higher order social complexity of institutions and identities which characterize humans. I do not believe that supra-band level identity emerged only with the emergence of writing. This seems clear from the ethnography, and what we know from the fringes of history. The Inca for example did not have fully elaborated literacy, and yet had political dominion and cultural hegemony from Ecuador to central Chile. We need to consider pots, people, and politics for human prehistory.
Dienekes points to a very useful review in Current Anthropology, which attempts to take a more subtle and nuanced view of the data, supplemented by genetics. Westward Ho! The Spread of Agriculture from Central Europe to the Atlantic:
The pith: there are differences between populations on genes which result in “novelty seeking.” These differences can be traced to migration out of Africa, and can’t be explained as an artifact of random genetic drift.
I’m not going to lie, when I first saw the headline “Out of Africa migration selected novelty-seeking genes”, I was a little worried. My immediate assumption was that a new paper on correlations between dopamine receptor genes, behavior genetics, and geographical variation had come out. I was right! But my worry was motivated by the fact that this would just be another in a long line of research which pushed the same result without adding anything new to the body of evidence. Let me be clear: there are decades of very robust evidence that much of the variation in human behavior we see around us is heritable. That the variation in our psychological dispositions, from intelligence to schizophrenia, is substantially explained by who our biological parents are. This is clear when you look at adoption studies which show a strong concordance between biological parents and biological children on many metrics as adults, as opposed to the parents who raised the children. This doesn’t mean that environment doesn’t matter, but I believe we tend to underweight genetics in individual outcomes in our contemporary Zeitgeist, just as we may have overweighted it in the past.
At this point some of you may be wondering, “what, I hear about genes for [fill in the blank] constantly!” So why am I saying we underweight genetics? I think there’s a disjunction between the fixation that the public (and therefore the popular press) has on a specific biophysical candidate gene which is given almost magical powers of causal necessity and the more abstract and diffuse statistical genetic reality of correlations between parents and offspring whose effects seem to be distributed diffusely across the genome. The latter is a robust and ubiquitous phenomenon, but because it is not possible to frame the narrative as a “gene for X” it lacks power. In contrast, when you have a powerful gene of large effect whose variation in state has a concrete and comprehensible outcome the narrative is clear, precise, and distinct. There’s an unfortunate problem with this though: quite often the narrative is wrong because it is not robust. It won’t be replicated and stand the test of time.
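The contrast can be made concrete with a toy simulation (every number here is arbitrary, chosen only for illustration): a trait built from a thousand loci of equal, tiny effect is entirely genetic in aggregate, yet no single locus explains enough variance to headline a "gene for X" story.

```python
import random

random.seed(42)

N_LOCI, N_PEOPLE = 1000, 2000

# Each person carries 0, 1, or 2 copies of the "+" allele at each
# locus (allele frequency 0.5); every locus nudges the trait equally.
people = [[random.randint(0, 1) + random.randint(0, 1) for _ in range(N_LOCI)]
          for _ in range(N_PEOPLE)]
traits = [sum(genotype) for genotype in people]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# The trait is 100% genetic by construction, yet any one locus
# accounts for only ~0.1% of the trait variance: a robust aggregate
# signal with no single gene worth a narrative.
share = variance([genotype[0] for genotype in people]) / variance(traits)
```

This is the disjunction in miniature: parent-offspring resemblance on such a trait would be strong and replicable, while any candidate-gene study of locus #1 would chase a vanishingly small effect.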
The example of the “language gene,” FOXP2, is illustrative of the issues I’m pointing to in the most broad of terms. As a matter of fact FOXP2 is a much better candidate for being the “language gene” than is usually the case for the gene for X, but ultimately it is probably not always useful to term FOXP2 the language gene when the faculty for speech is such a complex trait subject to many biological pathways. The putative “God gene” was a much worse case of a gene for X, and probably a good example of the problem I’m talking about. There’s a pretty robust body of evidence that religiosity has a heritable component, but there isn’t much evidence for a gene for belief in God.
What does all this have to do with dopamine receptors and novelty? The DRD4 locus has been implicated in a lot of behavior genetic variation, and dopamine receptor genes are often pointed to as “master controllers” of a sort for various aspects of personality and life outcomes. Dopamine as a neurochemical has myriad functions, so variation in its production controlled by genetics is a natural candidate of interest for researchers. The problem is that the nature of this sort of statistical and sexy science is that there’s going to be a natural gravitation toward significant results which later turn out to be false positives. Before moving on, I do want to reiterate that as a gene for X the dopamine receptor loci are a much better set of candidates than the “God gene,” but here the devil is in the details.
Let’s see what the argument in the paper which triggered The New Scientist piece is. Novelty-seeking DRD4 polymorphisms are associated with human migration distance out-of-Africa after controlling for neutral population gene structure:
One of the things that happens if you read ethnographically thick books like Nicholas Dirks’ Castes of Mind: Colonialism and the Making of Modern India is that you start to wonder if most castes were simply created by the British and for the British. Granted, even Dirks would not deny the existence of Brahmins prior to the British period, but those who work within his general paradigm might argue that a group like Kayasthas were the product of very recent developments (e.g., the uplift of a non-Brahmin literate group willing to serve Muslim and British rulers). The emergence of genomics complicates this sort of narrative, because you can examine relationships and see how plausible they would be given a particular social model.
Zack Ajmal is now at 90 participants in the Harappa Ancestry Project. He’s still undersampling people from the Indo-Gangetic plain between Punjab and Bengal, but that’s not his fault. Hopefully that will change. He posted K = 4 recently for the last 10 participants, but I notice K = 12 in his spreadsheets. So this is what I did:
1) I aligned the ethnic identification information with the K = 12 results.
2) I removed relatives and those who were not 100% South Asian.
3) I added some reference populations in. These are all upper case below. All other rows are individuals (HRP numbers provided).
4) I removed five ancestral groups: the three African ones, the Papuans, and the Siberians.
Then I arranged the rows alphabetically by ethnic identification. Helpfully many people provided their caste information as well. I’ve uploaded a csv with the information. But skim the plots & table below. Those of you who are brown can probably make more sense of them than I can. But I think some of the patterns are pretty interesting already. For me the big thing that jumps out is how uniform some of these caste groups are. Remember that HRP22 and HRP23 are my parents. If the British made these groups up, they were very punctilious about their ancestral make-up in constituting them!
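For concreteness, the tidying steps above might be scripted something like this. The file layout, column names, and component labels are all hypothetical stand-ins; Zack's actual spreadsheets are organized differently.

```python
import csv

def prepare_k12_table(admix_path, info_path, out_path):
    """Merge K=12 admixture proportions with participant metadata,
    filter, prune components, and sort by ethnic identification.
    All column names here are hypothetical examples."""
    # (1) align ethnic identification with the K=12 results
    with open(info_path, newline="") as f:
        info = {row["id"]: row for row in csv.DictReader(f)}

    rows = []
    with open(admix_path, newline="") as f:
        for row in csv.DictReader(f):
            meta = info.get(row["id"], {})
            # (2) drop relatives and anyone not 100% South Asian
            if meta.get("relative") == "yes" or meta.get("south_asian") != "100":
                continue
            row["ethnicity"] = meta.get("ethnicity", "")
            # (4) drop the African, Papuan, and Siberian components
            for comp in ("african1", "african2", "african3", "papuan", "siberian"):
                row.pop(comp, None)
            rows.append(row)

    # then arrange the rows alphabetically by ethnic identification
    rows.sort(key=lambda r: r["ethnicity"])
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)
```

(Step 3, splicing in the upper-cased reference populations, would just append extra rows from a third file before sorting; it's omitted to keep the sketch short.)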
The economist Samuel Bowles recently had a paper out in PNAS which caught my attention, Cultivation of cereals by the first farmers was not more productive than foraging. This naturally raises the question: why did farming conquer foraging as a lifestyle? First, let’s look at the abstract:
Did foragers become farmers because cultivation of crops was simply a better way to make a living? If so, what is arguably the greatest ever revolution in human livelihoods is readily explained. To answer the question, I estimate the caloric returns per hour of labor devoted to foraging wild species and cultivating the cereals exploited by the first farmers, using data on foragers and land-abundant hand-tool farmers in the ethnographic and historical record, as well as archaeological evidence. A convincing answer must account not only for the work of foraging and cultivation but also for storage, processing, and other indirect labor, and for the costs associated with the delayed nature of agricultural production and the greater exposure to risk of those whose livelihoods depended on a few cultivars rather than a larger number of wild species. Notwithstanding the considerable uncertainty to which these estimates inevitably are subject, the evidence is inconsistent with the hypothesis that the productivity of the first farmers exceeded that of early Holocene foragers. Social and demographic aspects of farming, rather than its productivity, may have been essential to its emergence and spread. Prominent among these aspects may have been the contribution of farming to population growth and to military prowess, both promoting the spread of farming as a livelihood.
My own working assumption is that the “first farmers” existed in a state of land surplus, and so, like the medieval peasants in the wake of the Black Death, found themselves released from Malthusian constraints, at least until their natural increase swallowed up their affluence. Bowles gives several reasons to be skeptical of this conjecture. The list in table 2 shows the positive and negative biases in the model when one back-projects later stages of farming onto the initial period. Metal tools, well-developed distribution channels, and more productive varieties were features of mature agricultural societies. On the time dimension, the necessity of planning ahead and waiting patiently for the crop to ripen counts against the gains of the farming way of life as well. The main variable which would weigh in favor of farming is the land surplus alone, though Bowles argues that the ethnographic data as to the benefits of a surfeit of this input factor of production are mixed. I am skeptical of this point, though I can’t say I’ve dug deeply into the literature.
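My land-surplus conjecture can be put in toy quantitative form. Here is a bare-bones Malthusian sketch (all parameters invented for illustration): with land fixed, per-capita income starts high while population is small, and natural increase grinds it back down toward subsistence.

```python
# A toy Malthusian model of the land-surplus conjecture. Output is
# Cobb-Douglas in labor and a fixed land endowment; population grows
# whenever per-capita income exceeds subsistence. Parameters are invented.

ALPHA = 0.6          # labor's share of output
LAND = 100.0         # fixed land endowment
SUBSISTENCE = 1.0    # income at which population growth stops
GROWTH = 0.05        # sensitivity of population growth to surplus income

population = 10.0
incomes = []
for year in range(300):
    output = (population ** ALPHA) * (LAND ** (1 - ALPHA))
    income = output / population          # per-capita income
    incomes.append(income)
    population *= 1 + GROWTH * (income - SUBSISTENCE)

# Early affluence, later stagnation: incomes[0] is well above subsistence,
# and by the end of the run income has converged back toward it.
```

The qualitative shape is the whole point: an initial windfall from abundant land, then a long squeeze as headcount catches up. Bowles’ biases above are about whether that initial windfall was ever as large as the back-projection suggests.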
There are other factors, such as the fact that farmers were immobile, and so subject to attack. A shift toward a few crops also reduced the diversity of the diet, trading one rich in micronutrients, fiber, protein, and fat for one overloaded with carbohydrates. Finally, a reliance on a few crops also means heightened “tail risk.” Think of the Irish potato blight. Hunter-gatherer populations would usually have a more diverse portfolio, and so be buffered more from environmental shocks.
In my post below I quoted my interview with L. L. Cavalli-Sforza because I think it gets to the heart of some confusions which have emerged since the finding that most variation on any given locus is found within populations, rather than between them. The standard figure is that 85% of genetic variance is within continental races, and 15% is between them. You can see some Fst values on Wikipedia to get an intuition. Concretely, at a given locus X in population 1 the frequency of allele A may be 40%, while in population 2 it may be 45%. Obviously the populations differ, but the small difference is not going to be very informative of population substructure when most of the difference is within populations.
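To make the arithmetic concrete, here is the simple variance-based Fst for the toy example above (40% vs. 45%), next to a nearly fixed locus of the sort I discuss next. The 98%/2% frequencies are illustrative round numbers, not measured ones:

```python
# Variance-based Fst for a two-population toy case: the fraction of total
# allelic variance that lies between the populations rather than within them.

def fst(p1, p2):
    p_bar = (p1 + p2) / 2                       # mean allele frequency
    var_between = ((p1 - p_bar) ** 2 + (p2 - p_bar) ** 2) / 2
    return var_between / (p_bar * (1 - p_bar))  # normalize by total variance

# The 40% vs. 45% example above: almost all variance is within populations.
print(round(fst(0.40, 0.45), 4))  # 0.0026

# An illustrative nearly fixed SNP (98% vs. 2%): mostly between populations.
print(round(fst(0.98, 0.02), 4))  # 0.9216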
But there are loci which are much more informative. Interestingly, one controls variation on a trait you are familiar with: skin color (unless you happen to lack vision). A large fraction (on the order of 25-40%) of the between-population variance in the complexion of Africans and Europeans can be predicted by substitution at one SNP in the gene SLC24A5. The substitution has a major phenotypic effect and exhibits a great deal of between-population variation. One variant is nearly fixed in Europeans, and another is nearly fixed in Africans. In other words, the component of genetic variance on this trait that is between populations is nearly 100%, not 15%. This illustrates that the 15% value is an average across the genome, and in fact there are loci with significant differences on the genetic level which can be ancestrally informative. You can take this to the next level: increase the number of ancestrally informative markers to obtain a fine-grained picture of population structure. In the illustration above the top panel shows the frequencies at the SNP mentioned earlier on SLC24A5. The second panel shows variation at another SNP controlling skin color, SLC45A2. This second SNP is useful in separating South and Central Asians from Europeans and Middle Easterners, if not perfectly so. In other words, the more markers you have, the better your resolution of inter-population difference. This is why I found the following comment very interesting: