Data matters too
A quick commentary on Bloom (2024)
LAB NOTE
Thomas Chazelle
9/24/2025 · 4 min read
I recently came upon a paper by Paul Bloom in Theory and Society (thanks to Adrien Fillon and his Psycho Papers newsletter – in French) in which he argues that we’re doing developmental research wrong.
“Much of developmental psychology is not worth doing”?
Bloom raises a compelling point: developmental psychology studies aiming to determine the age at which children acquire a skill “lack clear theoretical justification”. To summarise, the argument is that determining the developmental profile of performance in a task is often useless because it neither tests a theory nor yields direct real-world implications.
Here are a couple of quotes from the paper, which has the merit of being clear and straightforward (a “going for the throat” style, one could say):
"Many of the talks I attended had a certain structure, and I realized that I’ve been seeing this for a long time—including in colloquium talks, student presentations, and journal articles. They reported work that followed this recipe:
1. Start with an observation about adults—some ability, intuition, opinion, or understanding that adults in our society have.
2. Develop a task to test for the presence of this ability, intuition, etc. in children.
3. Test children of different age groups, usually looking at (a) an age where you don’t expect them to be adult-like and (b) an age where you do expect them to be adult-like.
4. If you find that neither age group is adult-like, test an older age group.
5. If you find that both age groups are adult-like, test a younger age group.
6. State your conclusions. A typical pattern of results is that 3-year-olds don’t get it at all, 5-year-olds are better, and 7-year-olds pretty much nail it.
7. If it’s a talk, graciously bask in applause. Be prepared for questions, but don’t worry; they’ll be easy to deal with. You might be asked if younger children would have succeeded if you made the task easier. (Answer: Great question. Yes, maybe so! We’re hoping to simplify our design for future studies.). Would you expect to get the same findings if you tested children in other societies? (Answer: Great question. We’re hoping to do cross-cultural work in the future!) Maybe someone will push back and say something like: “Why do you think so-and-so’s lab in Berkeley (say) finds that even 4-year-olds get this thing, and you only find it in 5-year-olds?” A good answer is: “Well, they must have smarter kids in Berkeley!” That’ll usually get a laugh, and maybe it’s true.
[…]
Suppose someone were to stand up during the question period and ask:
Sorry, I must have missed this, but why does it matter? Who cares whether this nugget of knowledge shows up at age 3 or 5 or whatever? Obviously, it must come in sometime between babyhood and adulthood. Who cares precisely when? What theory would your data support? Who would be surprised if the answer is one thing or another? Who would be pleased? Why is this experiment worth doing?"
Beyond the mild rudeness of the style, this is the kind of concern that I think needs to be raised about our research strategies. It speaks to the big debate between theory-testing-oriented and data-oriented research strategies in Psychology. The criticism is sound, but I want to raise an objection: Psychology is just not ready for this.
Is Psychology ready for this?
For paradigmatic sciences with well-established theories and methods, it makes perfect sense to aim for theory testing and real-world implications. Psychology, however, is still largely in a preparadigmatic stage – we do not agree on the theories, on the methods, sometimes not even on the concepts we are studying, and the field is generally “fractured” into many more-or-less independent subfields. If we want to build good models, we need good, reliable data (especially about the methods and tasks we’re using) – a need the replication difficulties we’ve encountered in Psychology have made abundantly clear.
Our current publication system sometimes pushes us to “fake” an ad hoc theory for the sake of the paper. I’m not even talking about HARKing (hypothesising after the results are known) – what I mean is that we might pretend there’s a very clear, established theory we’re testing when, most of the time, we’re at best testing a specific, stand-alone hypothesis in an even more specific operationalisation. I agree with Bloom’s analysis that the systemic pressure to publish results in publication-oriented research, and that some of the research he criticises does fall into that category. I just don’t think that publishing papers that pretend to have great implications, or to critically test a theory, solves that issue.
If we want to build formal models – a promising research direction that allows us to make more precise and constraining hypotheses – we need to feed them quality data. The more data we gather about, for example, the age at which children master a skill (or, more precisely, perform well at a task), the better equipped we’ll be to form good, robust theories that try to parsimoniously make sense of all these contexts and tasks. If the emergence of a skill is thought to be linked with, e.g., the maturation of a specific brain structure, or a particular social event, then knowing at what age performance changes in a set of tasks becomes crucial.
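To make this concrete, here is a minimal sketch of what feeding a formal model with developmental data could look like: a toy logistic curve fitted to proportion-correct scores per age group, where the age of transition becomes an estimated parameter rather than an eyeballed conclusion. Every number and name below is made up for illustration – this is not from Bloom’s paper or any real dataset.

```python
# A minimal sketch (hypothetical data): fitting a logistic
# "developmental curve" so that the age at which performance
# shifts is an estimated model parameter.
import numpy as np
from scipy.optimize import curve_fit

def developmental_curve(age, age50, slope):
    """Proportion correct as a logistic function of age.
    Floor at 0.5 (chance in a two-choice task), ceiling at 1.0."""
    return 0.5 + 0.5 / (1 + np.exp(-slope * (age - age50)))

# Hypothetical data: mean proportion correct per age group.
ages = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
p_correct = np.array([0.52, 0.61, 0.78, 0.90, 0.96])

# age50 is the estimated midpoint of the transition; slope, its steepness.
params, _cov = curve_fit(developmental_curve, ages, p_correct, p0=[5.0, 1.0])
age50, slope = params
print(f"Estimated transition age: {age50:.1f} years (slope {slope:.2f})")
```

With richer data of this kind across tasks and contexts, the fitted parameters could then be compared against, say, the maturation timeline of a candidate brain structure – which is exactly where knowing when performance changes starts to matter.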
Now, obviously, I would love for all psychological science to be devoted to theory testing and to developing better applications. I just think we’re not quite there yet. The trick is that our fragile theories don’t allow us to know in advance which data will be important, and it’s a tad optimistic (pretentious?) for a preparadigmatic science to act as if we really knew what we were doing.