How Intelligent is IQ?

By Neuroskeptic | December 24, 2012 12:28 pm

“If your IQ is somewhere around 60 then you are probably a carrot”, according to a British spokesman for high-IQ club Mensa.

IQ’s in the news at the moment thanks to a paper called Fractionating Human Intelligence from Canadian psychologists Adam Hampshire and colleagues. Some say it ‘debunks the IQ myth’ – but does it?

The study started out with a huge online IQ test…

Behavioral data were collected via the Internet between September and December 2010. The experiment URL was originally advertised in a New Scientist feature, on the Discovery Channel web site, in the Daily Telegraph, and on social networking web sites including Facebook and Twitter.

The test involved 12 different cognitive tasks, based on the usual IQ test kind of things, and they got a huge 45,000 usable responses.

However, the main part of the study used functional MRI (fMRI) to measure brain activity caused by each of the 12 tasks. There were only 16 volunteers in the brain scan study, which is pretty small.

The key finding was that although each of the 12 tasks made a different pattern of brain regions light up, there were two main components to this: one lit up mostly in response to tasks requiring short-term memory, and the other was associated with reasoning and logic. (EDIT: Picture corrected, oops.)

They did various other analyses that confirmed this, and they also found evidence for a third network responsible for language (verbal) skill.

Finally, the killer conclusion was that there was no reason to introduce the infamous ‘g factor’ – a number representing general intelligence affecting performance on all tasks. Although there was a ‘g factor’ statistically, it was explained by the fact that tasks required both the memory and the logic networks (although to different degrees).

g is the most controversial aspect of IQ testing, because if it exists, that means that some people are just smarter than others across the board – not just better at a particular kind of thing. So has this study killed g?

Well, not by itself. There’s a huge literature on IQ and g, going back almost 100 years. This stuff is not based on brain imaging, but just on IQ test scores, and it’s a complex topic. I don’t think one brain study with 16 people can really overturn that, although it does lend weight to the anti-g camp, who have been making their case for decades.

There’s a sense, though, in which it doesn’t matter. If all tasks require both memory and reasoning (and all did in this study), then the sum of someone’s memory and reasoning ability is in effect a g score, because it will affect performance in all tasks.

If so, it’s academic whether this g score is ‘really’ monolithic or not. Imagine that in order to be good at basketball, you need to be both tall, and agile. In that case you could measure someone’s basketball aptitude, even though it’s not really one single ‘thing’…
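
Here's a toy illustration of that point – a quick Python sketch with invented numbers, nothing to do with the paper's actual data or analysis. Give a simulated population two completely independent abilities, make every task draw on both, and a statistical 'g' emerges from the correlation matrix anyway:

```python
# Toy simulation: two independent abilities, every task loads on both.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
memory = rng.standard_normal(n)      # ability 1
reasoning = rng.standard_normal(n)   # ability 2, independent of ability 1

# Six hypothetical tasks, each a different mix of the two abilities plus noise
loadings = [(0.9, 0.1), (0.7, 0.3), (0.5, 0.5), (0.3, 0.7), (0.1, 0.9), (0.6, 0.4)]
tasks = np.column_stack([m * memory + r * reasoning + 0.5 * rng.standard_normal(n)
                         for m, r in loadings])

corr = np.corrcoef(tasks, rowvar=False)
print((corr[np.triu_indices(6, k=1)] > 0).all())  # True: every pair of tasks correlates positively

eigvals = np.linalg.eigvalsh(corr)
print(eigvals[-1] / eigvals.sum())  # the first component soaks up the largest share of variance
```

A 'g'-like first factor appears even though, by construction, there is no single underlying cause.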

Hampshire, A., Highfield, R., Parkin, B., & Owen, A. (2012). Fractionating Human Intelligence. Neuron, 76(6), 1225-1237. DOI: 10.1016/j.neuron.2012.06.022

  • Anonymous

    There is perhaps another view of the g factor. Imagine that you have created a hypothetical task that requires only working memory, and another one that requires only reasoning. If performances in these two tasks were correlated, it means that there is some underlying factor (g) that plays a role in both working memory and reasoning (e.g. speed of processing). The study says nothing about this view (at least in my understanding).

  • Anonymous

    There will be a big scandal about this paper soon. Basically it's very bad methodologically – not just the brain scan stuff, the factor analyses too. Unsurprisingly, 100 years of IQ research aren't wrong and these authors just messed up, which was correctly pointed out by people asked to comment. The comment wasn't published and the criticisms didn't find their way into the manuscript. “Neuron” screwed up, as did the authors, who just found what they wanted to find (the result that is, for some reason, politically correct).

    Hopefully we'll see the comment in another outlet soon. Meanwhile: don't even bother with this BS paper.

    Stupid quote from that Mensa guy too. Assholes.

  • Anonymous

    The previous comment is obviously a troll, right? Vague generalities, conspiracy theory, and a hint of personal vendetta (or suppressed jealousy). Well done!

  • Anonymous

    I'd recommend looking at this paper: http://journal.sjdm.org/11/111031/jdm111031.pdf. This theory makes specific predictions about the interactions between memory processes and reasoning ability.

  • Anonymous

    The 13.28 post is not a troll. I understand (from first-hand sources) that there is a response to the paper being put together at the moment, and it will show that the paper is horribly flawed. Obviously I can't go into the details right now – but the behind-the-scenes story I've heard is pretty much the same one that the 13.28 post describes. Watch this space.

  • http://petrossa.me/ petrossa.me

    Having scored anywhere between 160 and 125 myself, I'd say the only thing an IQ test measures is how good you are at doing an IQ test at that point in time.

    Don't do it with a hangover.

    Other than that, it only serves as a very crude method to separate the very inept from the somewhat inept.

    But that's usually quite obvious anyway, making it a superfluous activity.

  • Anonymous

    “If performances in these two tasks were correlated, it means that there is some underlying factor (g) that plays a role in both working memory and reasoning (e.g. speed of processing).”

    No, it doesn't. It could mean working memory contributes to reasoning, or that reasoning contributes to working memory performance, or that some other, third factor that has nothing at all to do with g contributes to performance on both (motivation? distractibility? sleepiness? time since last meal?). In order to rule out other possibilities like these, you have to first think of them, and then you have to do a study where you at least measure them, or ideally find a way to hold them constant or manipulate them in an experiment.
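
    To illustrate with a toy simulation (invented numbers, nothing to do with the actual study):

    ```python
    # Two independent abilities plus a shared nuisance factor (e.g. motivation)
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    motivation = rng.standard_normal(n)        # shared third factor
    memory_skill = rng.standard_normal(n)      # independent of...
    reasoning_skill = rng.standard_normal(n)   # ...this ability

    memory_task = memory_skill + motivation
    reasoning_task = reasoning_skill + motivation

    # The tasks correlate at ~0.5 despite sharing no ability whatsoever
    print(np.corrcoef(memory_task, reasoning_task)[0, 1])
    ```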

    The problem I find with much work on intelligence is the seeming lack of interest most intelligence researchers have in understanding performance at a mechanistic level, e.g., what are the actual psychological operations and processes underlying performance, as opposed to characterizing performance at a “trait” level. Not all regularities in behavior are indicative of underlying traits. Factor analysis is fine for testing ideas about traits, if you actually have traits; but it can also make things look like traits even when they aren't. And it is not particularly useful for teasing apart mechanisms.

    The finding that lots of different kinds of tests correlate with each other, and that common variance among performance on those tasks predicts lots of different things does not mean you have actually measured a true latent cause — rather, you may simply have measured a collective effect, which is, to my understanding, what the paper in question is trying to suggest (whether the study provides evidence for the latter is another question, of course).

    But the mere fact that there is “100 years of IQ research” does not necessarily mean that said research has been correctly conceptualizing the causes of performance at a theoretical level.

  • http://www.blogger.com/profile/06647064768789308157 Neuroskeptic

    “The 13.28 post is not a troll. I understand (from first-hand sources) that there is a response to the paper being put together at the moment, and it will show that the paper is horribly flawed. Obviously I can't go into the details right now – but the behind-the-scenes story I've heard is pretty much the same one that the 13.28 post describes. Watch this space.”

    Well, I will certainly watch this space (and I had my own suspicions about the factor analysis – it's a tricky technique) but so far, this is all scuttlebutt.

  • http://www.blogger.com/profile/06647064768789308157 Neuroskeptic

    Update: I've now heard from several sources that a critical comment about this paper was written, submitted to Neuron but rejected, and will soon be published independently.

    Guess it wasn't scuttlebutt after all.

  • Anonymous

    Neuroskeptic,

    This seems like a mostly accurate assessment of the article; however, your closing example is a bit misleading. The article claims that different types of intelligence relate to different brain networks. It also claims that while one can generate a higher-order ‘g’ factor from cross-component correlations, the neural basis of that factor is ambiguous. The article also suggests that the brain imaging data may be used to determine the likely neural basis of that ‘g’ factor. The cross-component correlations that may be used to generate a higher-order ‘g’ factor are reported in one of the main figures in the article. However, what is evident is that those correlations are accurately predicted by the fact that some of the tasks have substantial loadings on multiple brain networks.

    You write that ‘although there was a “g factor” statistically, it was explained by the fact that tasks required both the memory and the logic networks’ and that, consequently, ‘it doesn't matter. If all tasks require both memory and reasoning, then the sum of someone's memory and reasoning ability is in effect a g score’.

    In one sense this is the case: the tendency for tasks to load on multiple systems in the brain is likely to be a large part of the basis of the ‘g’ factor. Indeed, this is the conclusion drawn in the article. However, the problem is that not all tasks did require both networks, or at least, not to a significant extent. Specifically, in some task contexts, the networks were very strongly dissociated when measured relative to rest. That is, some tasks had very little in the way of loading on one functional brain network alongside a very heavy loading on another – this is also reported in the article. This observation from the brain imaging analysis is paralleled by the very weak bivariate correlations between the self-same tasks in the behavioural analysis. For example, the short-term memory task – basically a variant on Corsi block tapping – correlated at about r=0.05 with the deductive reasoning task. Clearly, these depend upon quite separate abilities, as both have good communalities with the battery of tasks as a whole but have a miniscule correlation with each other. One can design all sorts of tasks that load heavily on multiple processes; undoubtedly complex tasks will always load on many different systems in the brain and multiple abilities. However, the study provided little evidence for the influence of a monolithic intelligence factor over those abilities when the brain imaging data were taken into account. Thus, they should be considered independent from one another.

    As for whether a composite score generated from all factors is a better predictor of demographic variables: this issue is also addressed directly in the article. There are instances in which such a score would show differences in two distinct population measures when the underlying basis of those differences was quite distinct. Thus, a multifactor model is more informative. Similarly, some correlations were greater when first-level components were examined separately. Thus, a multifactor model may be more sensitive to population differences as well.

    Finally, a critical comment was submitted to Neuron; however, there was no ‘conspiracy’. It was decided, based on feedback from an independent reviewer, that the author of the comment was heavily biased and that the criticisms raised were lacking in substance. Also, the authors of the article demonstrated that they were both willing and able to address all of those criticisms point by point if the journal chose to publish them. This is a highly controversial topic. No doubt many other researchers will wish to comment on the results and will hold different views. Only those that raise sensible questions should be published. As for comment 13:28, anyone who takes that type of tone in a scientific debate is self-evidently a troll!

  • http://www.blogger.com/profile/07125980057827981457 Manoel

    I'd like to point only to these old posts by Cosma Shalizi on IQ, especially the second one. They're long, but I think they go to the (statistical) point, especially about what factor analysis can and cannot do regarding g.

    http://masi.cscs.lsa.umich.edu/~crshalizi/weblog/520.html

    http://masi.cscs.lsa.umich.edu/~crshalizi/weblog/523.html

  • http://www.blogger.com/profile/06647064768789308157 Neuroskeptic

    Anonymous: Thanks for the detailed comment. But you say: “However, the problem is that not all tasks did require both networks, or at least, not to a significant extent.”

    Yet in Fig. 1B they show that, in the fMRI, each task is associated with activation in both the 'memory' and 'reasoning' networks (albeit weakly in some cases), and in Table 2, in the PCA of the behavioural data, 10 of the 12 tasks loaded positively onto each of the 'memory' and 'reasoning' components.

  • Anonymous

    Skeptical??? Neuroskeptic says: 'I understand (from first-hand sources) that there is a response to the paper being put together at the moment, and it will show that the paper is horribly flawed.' Good to see Neuroskeptic has swallowed the troll's attack hook, line and sinker before detailed criticisms have even been published and authored… Neurotraditionalist, neurodisciple or neurogossip perhaps?

  • http://www.blogger.com/profile/06647064768789308157 Neuroskeptic

    I didn't say that, someone else did.

    A lot of people tell me that the response exists and will be out shortly, that's all I'm saying.

    I have no idea if it's any good or not.

  • Anonymous

    The critical comment about this paper was rejected by Neuron. Wonder why? Suppose the peer review process was flawed and the journal editor was out of line to dare allow this challenge to orthodoxy into print. After all, as first-hand sources tell us, 100 years of IQ research can't possibly be wrong.

  • http://www.blogger.com/profile/06647064768789308157 Neuroskeptic

    Well, I guess we'll just have to wait until the comment is out, and then see whether it's valid. Only carrots speculate about unpublished works.

  • Anonymous

    Something about g that's always confused me: what about people with learning disabilities, such as myself? I score all over the place on an IQ test because of this, and sure, you can do the math and come up with a full-scale IQ score, but that just covers up the fact that my IQ alone doesn't tell you about my strengths and weaknesses without looking at the subtests as well.

    But the whole concept of g seems to be based on general evenness of ability. How do psychometricians reconcile the concept of g with the existence of people with learning disabilities?

  • Michał Kulczycki

    The concept of g is based on the observation that performance in various mental tasks is highly correlated. It is not an assumption, but a fact (which we struggle to explain). But it is a statistical phenomenon: no one claims that the task-g correlation matrix is the same for everyone. There is a lot of individual variance there, and learning disabilities are just one more source of it.

  • Anonymous

    Anonymous wrote:

    “Finally, a critical comment was submitted to Neuron; however, there was no ‘conspiracy’. It was decided, based on feedback from an independent reviewer, that the author of the comment was heavily biased and that the criticisms raised were lacking in substance.”

    25 December 2012 17:23

    Come on, this is obviously from one of the authors of the paper. Why hide behind the anonymity label? Also, when you write “independent reviewer”, do you mean one of the original 'independent' reviewers who reviewed the paper? If so, don't you think this reviewer may have been biased by his original review and not too happy to have been caught with his pants down?

  • Anonymous

    Dear Neuroskeptic,

    To me this is an interesting paper not because of its results but because it is a prime example of how circular reasoning can lead to misleading conclusions. In essence, the authors used a factor analytic method that imposed independence on their factors and then claimed that the factors were independent…

    Note that they did observe correlates of these independent factors at the brain level and I can see how this was compelling to them (possibly convincing them that they were on the right path). However, had they allowed their factors to correlate and hence found a higher-order factor ('g'), they likely would have also observed a compelling brain correlate of this higher-order factor (for the sake of simplicity, let's disregard the fact that any brain imaging finding of this sort from only 16 subjects is highly suspect). Both solutions would be equally valid at the brain level and nothing in their paper warrants choosing one (their solution of independent brain factors) over the other (one general brain factor).
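
    To see the problem concretely, here is a toy sketch (my own, not a re-analysis of the paper): generate test scores from two latent factors that truly correlate, then extract principal components. The components come out uncorrelated by construction, so their 'independence' is a property of the method, not a finding.

    ```python
    # Latent factors correlate at r = 0.6, yet principal components are
    # orthogonal by construction: "independence" is baked into the method.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 10_000
    latent = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=n)

    loadings = np.array([[0.8, 0.1], [0.7, 0.2], [0.1, 0.8], [0.2, 0.7]])
    observed = latent @ loadings.T + 0.4 * rng.standard_normal((n, 4))

    z = (observed - observed.mean(0)) / observed.std(0)    # standardise scores
    corr = np.corrcoef(z, rowvar=False)
    _, vecs = np.linalg.eigh(corr)
    scores = z @ vecs[:, -2:]                              # top two components

    print(np.round(np.corrcoef(scores, rowvar=False), 2))  # identity: orthogonal by fiat
    ```

    An oblique rotation, by contrast, would at least be free to recover the correlation between the factors.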

    Now, I am not saying that g is unitary at the brain level. In fact, I tend to think that it is not. Whatever may be the case, this paper is not a proof of the non-unitary aspect of g and only shows that g MAY be non-unitary at the brain level (which, of course, is uninformative). To me, choosing one position over the other, as the authors have, is not science, it is religion.

    As much as I like the exercise, I am therefore forced to dismiss the conclusions of this paper. The authors understandably jumped on what they thought was a demonstration of a fundamental flaw of the ‘IQ’ concept and published it. However, in the final analysis, this paper is only an intriguing (and politically correct) exercise that, if misunderstood (thanks to the authors' overreaching conclusions, hubris, and PR attitude), has the potential to set the field back 40 years. The authors appear to have a certain degree of misunderstanding of factor analytic methods and of the state of intelligence research. A little learning is a dangerous thing…

    Finally, note that this paper was initially submitted to Nature Neuroscience and was rejected. This lends credence to the alleged forthcoming critique.

  • http://www.blogger.com/profile/06647064768789308157 Neuroskeptic

    I thought there was a danger of using a method that would be guaranteed to produce orthogonal factors… but I wasn't confident I was right so I left that out of the post.

    On the other hand, being rejected by Nat Neurosci is no shame. It is a very selective journal.

  • Anonymous

    Neuroskeptic wrote:

    “On the other hand, being rejected by Nat Neurosci is no shame. It is a very selective journal.”

    Yes, agreed, there is no shame in being rejected by Nat Neurosci. However, while we will probably never know for sure, given the high 'general interest' of the conclusions, my guess is that the rejection was on methodological grounds.

  • Anonymous

    RE Anonymous post – 2 January 2013 21:04

    What a hilariously transparent piece of rumor spreading. Could it be, perhaps, that someone who is writing a rebuttal is attempting to lay the groundwork so that they can garner as much attention for themselves as possible on the back of someone else's work? Aren't there a couple of major flaws in your slightly too sympathetic-sounding story, though?

    For example, how would you even know if an article had been rejected from one, none, or half a dozen journals in the past? No credible journal or reviewer would share such information. Any that did would clearly be heavily biased and lacking in integrity, which would call any such review process into doubt.

    Also, isn't the picture of over-excited researchers missing a potential confound in their analysis rather fanciful? The exact issue that you raise has an entire section dedicated to it in the discussion, and the authors appear to rule it out with a supplemental analysis.

  • Anonymous

    G is nothing more than useful bullshit. You are not the first to have found this. In fact no one has ever found g, because it doesn't exist. It's a factor of individual differences. Sure you can find neural correlates of this individual differences factor, but that doesn't change anything. Now hurry up and go do something useful!

  • Anonymous

    On 4 January 2013 19:19
    Anonymous wrote:
    “Also, isn't the picture of over-excited researchers missing a potential confound in their analysis rather fanciful?”

    It certainly wouldn't be the first time!

    “The exact issue that you raise has an entire section dedicated to it in the discussion, and the authors appear to rule it out with a supplemental analysis.”

    I think that the points raised by 2 January 2013 21:04 are actually pertinent. If you are referring to the ICA analysis, it doesn't rule out the concern at all. It's just another method, doing something similar that finds very similar things. If you are referring to some other part of the discussion, please point that out.

    As an aside, I am really not too convinced by your 'ground-laying' charges, as 1) I'm not really sure that posting here will have any impact on whether or not one would draw attention to one's work but, most importantly, 2) the issues raised by the blogger you chastise are actually things that would be rather obvious to most who know factor analysis well, and I am sure many will have picked up on them.

    History will tell…

  • Anonymous

    “For example, the short-term memory task – basically a variant on Corsi block tapping – correlated at about r=0.05 with the deductive reasoning task. Clearly, these depend upon quite separate abilities, as both have good communalities with the battery of tasks as a whole but have a miniscule correlation with each other.”

    You must correct for unreliability if you want to know what the correlations between different tasks are. Many of the measures they use are rather unreliable. Moreover, their IQ data are from a highly heterogeneous (culturally, linguistically, age-wise, etc.) convenience sample, which means that the same constructs are probably not measured across different subgroups. (A specific problem is that online volunteers tend to be of higher than average ability, which in itself decreases correlations between abilities.) This also means that their “demographic” analyses are mostly meaningless. There are plenty of studies with more adequate samples that investigate associations between abilities, and all of them report much higher correlations between short-term memory and reasoning than this study.
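
    For reference, the standard correction here is Spearman's disattenuation formula, which divides the observed correlation by the geometric mean of the two reliabilities. A quick sketch, with invented reliabilities rather than the study's actual figures:

    ```python
    def disattenuate(r_xy, r_xx, r_yy):
        """Spearman's correction for attenuation: estimate the true-score
        correlation from the observed correlation and the two reliabilities."""
        return r_xy / (r_xx * r_yy) ** 0.5

    # Invented numbers: an observed r = 0.05 between two tests that are each
    # only 50% reliable implies a true-score correlation of 0.10.
    print(disattenuate(0.05, 0.5, 0.5))  # 0.1
    ```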

    In general, despite the crappiness of both the brain scan data and the IQ data, they found that g factors explained much of the variance in both. That they nevertheless concluded in a cocksure manner that g does not exist can only be explained by incompetence. The authors also seem to be ignorant of much of the relevant research on these topics.

  • Anonymous

    “There are plenty of studies with more adequate samples”

    What, with more than the 40,000 participants or whatever it was in the Neuron paper?

    Looking forward to the publication of the rebuttal, so we can find out who is behind all this juicy 'scuttlebutt'!!!

  • Anonymous

    “What, with more than the 40,000 participants or whatever it was in the Neuron paper?”

    You are missing the point; the sample must be representative of the general population. The sample size is clearly large enough and impressive but the likely sampling bias (given that it's a sample of convenience) is a major issue as it bears directly on the claims of the paper. This is very well known in psychometrics.

    The tone and insults of the person who made the comment about the sampling bias have little place here (as does your comment about “juicy 'scuttlebutt'”).

  • Anonymous

    I am just a random commenter with some knowledge of psychometrics. I have nothing to do with any published or to-be-published articles about this particular study. What galls me about the study and the comments its authors have made about it (apparently including anonymous comments in this thread) is that their arrogant claims are at variance with their modest understanding of the methodological questions involved.

    Their ignorance of psychometrics is well demonstrated by their pride in their big sample size. A big N cannot offset problems with representativeness. They appear to not understand the nature of psychometric data, treating them as if they were measures like height or weight.

    Moreover, the belief that 40,000 is a uniquely large sample in investigations of intelligence just further shows their ignorance. There are many samples of cognitive ability data which are not only larger than theirs but also representative. For example, in the Project Talent study, dozens of cognitive ability tests were administered to a nationally representative sample of more than 400,000 high school students in 1960. Here's a recent study of the structure of intelligence using data from the Project Talent. Another big sample of IQ data is the Scottish Mental Survey of 1932 where the same cognitive test battery was administered to all 11-year-old Scots on the same day (N>80,000). Some of the largest IQ studies that I know of are those using data from the Swedish conscription tests, with Ns>1,000,000.

  • Anonymous

    There seems to be circular reasoning in the above random comment. I don't think a defined population, e.g. one in Scotland in 1932, or among Swedish conscripts, is any more 'representative' (of anything other than itself, rather than general human cognition) than a big sample of convenience.

  • Anonymous

    To anon above:

    To investigate the structure of cognitive ability you would ideally have a sample where the mean and variance of abilities are the same as in the general population, age does not vary much between subjects, and subjects share the same cultural background.

    The mean must be close to the population mean because if you have, say, a disproportionately high ability sample (as is generally the case with online volunteers), correlations among abilities will be lower than in a sample that is representative of most people (Spearman's law of diminishing returns).

    The range of ability must not be smaller than in the general population, because otherwise correlations among abilities will be reduced.
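
    A toy sketch of that restriction effect (invented numbers): two tests that share an ability correlate at about 0.5 in the full simulated population, but noticeably less in a self-selected high-ability subsample of the kind online volunteering produces.

    ```python
    # Range restriction: selection on ability lowers between-test correlations
    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000
    g = rng.standard_normal(n)
    test_a = g + rng.standard_normal(n)
    test_b = g + rng.standard_normal(n)

    full_r = np.corrcoef(test_a, test_b)[0, 1]

    selected = (g + rng.standard_normal(n)) > 1.0   # noisy self-selection on ability
    restricted_r = np.corrcoef(test_a[selected], test_b[selected])[0, 1]

    print(round(full_r, 2), round(restricted_r, 2))  # ~0.5 vs. clearly lower
    ```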

    Age differences between subjects must not be large because measurement invariance is generally not found in between-generation comparisons (i.e., the Flynn effect represents test bias), which means that the factors underlying ability variation are not the same across generations.

    For the same reason, there must not be large cultural differences between subjects. The sources of variation within and between different groups are probably not the same.

    Mental test scores represent differences in ability between members of some defined population. They have no meaning outside of that population, at least not unless such meaning has been explicitly established statistically. The Scottish and Swedish studies discussed above are ideal because they are based on entire birth cohorts from the same country, thus fulfilling all of the requirements above. The sample used by Hampshire et al. fails in all respects.

  • Anonymous

    still unsure how you know the actual abilities of a 'general population' for comparison, how one can even really expect a population to be particularly genetically and culturally homogeneous (even in Scotland), and still unclear why you can't simply segment the tests on a big diverse population with info on age, gender, occupation, location etc. Could it be that g only rules among identically raised clones?

  • Anonymous

    “still unsure how you know the actual abilities of a 'general population' for comparison, how one can even really expect a population to be particularly genetically and culturally homogeneous (even in Scotland), and still unclear why you can't simply segment the tests on a big diverse population with info on age, gender, occupation, location etc.”

    If you factor analyzed or ran regressions on each of those distinct “segments” separately (e.g., only same-aged people from one country), then you would get more meaningful results (but even then you'd probably have problems with range restriction at least). For example, Hampshire et al. report that in their online IQ data old people have much lower IQs than young people. However, it is well known that cross-sectional data like these produce extremely misleading results about age effects on cognition because of the test bias introduced by the Flynn effect. Age changes in cognition in longitudinal studies are a lot smaller than those in cross-sectional studies because measurement invariance is maintained in the former. Hampshire et al. also report that individuals who played computer games had higher memory and reasoning scores but not verbal scores, but I'd think that this is just another age-related effect and gaming is a red herring.

    When so much of the variance in the IQ data is completely unrelated to the constructs supposedly being measured (because the meaning of the constructs is not invariant across different subgroups), the idea of relating this variation to the structure of intelligence or brain scan data is absurd. For example, why would the fact that non-native English speakers have lower verbal scores than native speakers have any implications about the structure of their brains?

    See here for a good discussion of between and within-group sources of variance in factor models.

    “Could it be that g only rules among identically raised clones?”

    Quite the contrary. A sample of genetic clones would, of course, be completely unsuitable for the study of the structure of intelligence because, due to the high heritability of intelligence, phenotypic variance in clones would be greatly reduced compared to the general population. With clones, there would be no g.
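
    A deliberately crude sketch of that point (toy numbers, and assuming for simplicity that all of the shared variance between tests is genetic):

    ```python
    # Each test score = genetic ability + unique noise. In clones the genetic
    # component is constant, so the between-test correlation collapses.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 10_000
    genes_population = rng.standard_normal(n)   # varies between people
    genes_clones = np.zeros(n)                  # identical in clones

    def between_test_r(genetic):
        a = genetic + 0.7 * rng.standard_normal(n)
        b = genetic + 0.7 * rng.standard_normal(n)
        return np.corrcoef(a, b)[0, 1]

    print(between_test_r(genes_population))  # ~0.67: shared variance, a "g" signal
    print(between_test_r(genes_clones))      # ~0.0: nothing shared, no g
    ```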

    A g factor emerges pretty much always when a bunch of cognitive tests is administered to a sample of individuals. There's nothing elusive about it. Sampling errors and other statistical artifacts may diminish it, but even then it rarely vanishes. The Hampshire et al. data, despite the many problems in them, still yield a far-from-small g factor. There's a long history of attempts by psychometricians to make g go away, but they have all failed. As one researcher said, despite strenuous attacks on g it just keeps reappearing, like an insistent relative.

  • Anonymous

    I have yet to see a convincing rebuttal to any of the comments posted here and showing very serious flaws with the Hampshire paper.

    For now, we have seen the likes of “scuttlebutt”, “this was addressed” (which wasn't really the case) and attempts at discrediting any anti-Hampshire comment by suggesting some hidden association with an upcoming formal response.

    From what I have read up to now, this paper appears dead in the water.

    Is there any strong response to the critiques?

    Any at all?

  • Anonymous

    To the above comment (presumably not written by a scientist): a peer-reviewed paper in Neuron is best rebutted by a peer-reviewed response in a journal (one is in the works from the person spreading all the scuttlebutt), not a comment thread on a blog.

  • Anonymous

    To the above comment,

    Now THAT's what I call a response. Full of wit and substance… sigh.

  • Anonymous

    Regarding 15 January 2013 21:17

    Dear paranoid blogger:

    You do not seem to grasp what 'scuttlebutt' means.

    Another issue is that this thread is for a discussion of the Hampshire paper. As such, one would expect arguments on both sides of the fence. Yet there seem to be only one-sided arguments, with no counterweight responses other than insipid replies like yours. This makes one wonder whether there are any convincing responses to the critiques.

    Whether I am a scientist or not is completely irrelevant. You may not want to respond to the critiques and that's fine. But having these critiques formulated here does not make them ipso facto invalid, and we do not need to wait for a formal response to discuss the issue here. This is the point of this blog!

  • Anonymous

    on the 18 January 2013 17:23 post… we have to be careful not to be naive. The negative remarks could simply tell us that there's a multimillion-dollar industry that makes money out of measuring IQ for schools, companies, government, the military and the rest – it is not going to help their profit margin if someone shows it is flaky, and posting anonymous blog comments is a great way to discredit it before it damages the bottom line. Ditto academics who have spent all their time using IQ in an unquestioning way. However, the jury is out until we get more detailed discussions in the literature. Besides, people are always more likely to be negative than positive in blog comment threads.

  • Anonymous

    To the above:

    Thank you for this very relevant (and refreshing) comment. I think one must indeed be careful not to buy into the critiques too quickly. Having said this, they do appear valid (from what I will admit is my limited perspective) and I still await a convincing counter-response if there is one out there.

    While one must be careful not to swallow the critiques too quickly, one must also be careful not to accept the Hampshire paper as the final word simply because it was published in Neuron.

  • Neurocritic

    via https://twitter.com/autismcrisis/status/453030856324808705 — the commentary on Hampshire et al. has finally been published, and it mentions some of the comments on your post.

    A comment on “Fractionating Intelligence” and the peer review process (Intelligence, Available online 5 April 2014)

    http://www.sciencedirect.com/science/article/pii/S0160289614000270

    Includes the authors’ ultimately unpublished Preview of the paper for Neuron and a detailed timeline, e.g. “pre-publication concerns raise issues about the peer review process.”

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      Ooh! Thanks!

      The above-mentioned commentary was much discussed in the comment thread, below.
