It turns out dog parks are not a hotbed of canine rape culture. But academia has a problem, and “grievance studies” isn’t it.
Dog parks are “petri dishes for canine rape culture”, claims a paper published in the mid-tier academic journal Gender, Place & Culture.
It sounds like nonsense. And it turns out that it is. The WSJ reports that three academics submitted 20 hoax papers to academic journals, of which seven were accepted for publication. The perpetrators of this fraud claim it was done to illustrate the “absurd and horrific” standard of scholarship in what they call “grievance studies” (a dismissive term they coined to cover gender studies, queer studies, and a number of other fields they appear to disdain).
This was not a prank, they say, but ethnographic research into the world of “grievance studies”, to bring to light the dangers this supposedly shoddy scholarship holds for all areas of academia. They argue that “the risk of letting biased research continue to influence education, media, policy and culture” is worth any personal consequence or ethical breach.
We know that it is fairly easy to get a hoax or fraudulent paper published in academic journals, regardless of field. Plenty of people have done it. This new “ethnographic study” does nothing to prove that the so-called “grievance studies” fields have a worse track record than any other.
These rogue academics seem oblivious to the irony of calling for a reduction in research influenced by bias and ideology, when their own arguments are so clearly steeped in bias and ideology of a different kind. I guess your ideology is bias, but my ideology is common sense?
There are many different ways of understanding the world around us. Science is an incredibly powerful method of developing useful knowledge about the world. The humanities are another way of understanding the world: getting inside the individual human experience and finding ways to show us that the personal is universal. (Indigenous societies have their own knowledge systems that I honestly just don’t know enough about.)
Social science has seen a new clash of these two titans of Western thought, as science has been brought to bear on many fields that used to be the sole domain of humanities. In these fields we see a variety of different research methods across the science-humanities spectrum. This should be of no great concern. Social science is barely 200 years old — we’re still experimenting and working out the kinks. Now, personally I’m no fan of postmodernism in social science — it seems like a dead end. But we couldn’t be sure that it would be before we started, and there are certainly many people smarter than me who think there is plenty still to be learned here.
Our rogue scholars argue that they are acting to defend the reputation of scholarship, and particularly a science-based way of knowing and Enlightenment values. However, their main complaint seems to be that these articles were plainly ridiculous and therefore should never have been accepted for publication. They appear to have forgotten that Enlightenment values also include free speech. Free speech is so crucial for good scholarship that it has its own name: “academic freedom”. This means that any academic is free to research any strange and bizarre thing they choose. (There is one limit on academic freedom, truthfulness: for example, the University of Canterbury apologized for awarding a Master’s degree to a thesis containing elements of Holocaust denial.) These academics seem to think a further limit, of ridiculousness, should be placed on academic freedom.
How would Copernicus have fared if he had faced a ridiculousness test on his assertion that the Earth orbits the sun? Or Darwin when he claimed that humans weren’t made by God but evolved from an ancestor we share with chimpanzees? The Age of Reason was built on the notion that no one gets to say what is too bizarre to be true. Instead, we fall back on our evidence in an open contest of ideas.
This is where many people, even academics apparently, can misunderstand the process of knowledge accumulation we call science. Doing good science doesn’t mean that only true and correct things are ever published in academic journals. Academic journals are where this contest of ideas plays out. We should publish many different articles about the same topic, with many different research designs and points of view. Only after many different studies have been published on the same issue over many years, sometimes decades, can we even tell who was on the right track. One ridiculous paper is just one of many on the subject. Over time, the wheat should be separated from the chaff.
The hyperbole of this stunt risks overshadowing some very real threats to this process of knowledge accumulation in academia. Because the wheat doesn’t get separated from the chaff on its own; it takes work.
We should do better at weeding out papers that are fraudulent or that simply contain mistakes before they are published. It is hard to know the scale of the problem, but both are likely fairly common due to weaknesses in journals’ review processes. Recently an eminent nutrition researcher was revealed to have engaged in p-hacking: running many analyses on the same data and reporting only the ones that cross the significance threshold, essentially a form of statistical fraud.
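To see why p-hacking works, here is a minimal sketch; the simulation setup and parameter values are my own illustration, not drawn from the case above. It relies on the fact that when there is no real effect, p-values are uniformly distributed, so a researcher who runs 20 tests and reports only the “best” one will find a “significant” result most of the time:

```python
import random

random.seed(42)

ALPHA = 0.05        # conventional significance threshold
N_TESTS = 20        # number of hypotheses the researcher tries
N_SIMULATIONS = 100_000

# Under a true null hypothesis, a p-value is uniformly distributed
# on [0, 1], so each single test has exactly ALPHA chance of a
# false positive.
false_positive_runs = 0
for _ in range(N_SIMULATIONS):
    p_values = [random.random() for _ in range(N_TESTS)]
    if min(p_values) < ALPHA:       # report only the "best" result
        false_positive_runs += 1

simulated = false_positive_runs / N_SIMULATIONS
analytic = 1 - (1 - ALPHA) ** N_TESTS   # probability of >= 1 false positive

print(f"P(at least one 'significant' result): {simulated:.3f} "
      f"(analytic: {analytic:.3f})")
```

The analytic value, 1 − 0.95²⁰ ≈ 0.64, shows the inflation isn’t a simulation artifact: given enough tries, a false positive is the expected outcome, not the exception.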
The peer review system is trust-based and assumes all parties are acting in good faith. There are no processes for weeding out those who aren’t. In the past that hasn’t really been necessary — because there weren’t significant incentives for publishing work you know to be false. With the increasing publish or perish mentality that may no longer be true.
Many people aren’t aware that many academic journals fund themselves by charging authors to publish in their pages. These article processing charges run into the hundreds, sometimes thousands, of dollars. This creates an incentive for publishers to accept low-quality work.
Not only do academics have to pay to publish their work, but on the flip side they provide free labour, as they are expected to peer review the work of others without pay. This means review is often done under time pressure. Reviewers aren’t about to bring extra work on themselves by asking to see the data and analysis behind a paper. This is how fraud, and well-intentioned but mistaken analysis, gets published.
Journals need to strengthen peer review practice to catch basic statistical errors. They should also stop prioritising statistical significance over the size of the effect; that imbalance is what makes p-hacking a viable publication strategy.
Of much greater concern is our ability as a research community to build on our knowledge over time. This is best shown by the replication crisis in psychology, as many findings that were considered well-established have turned out not to be repeatable under controlled conditions.
What universities require for career advancement (publish a lot, especially in high-reputation journals) and what journals actually publish (novelty, a clear result, statistical significance) have created a dysfunctional environment that focuses on single articles. When academics are each rushing to write as many papers as possible, each novel in some way and each reporting a clear, statistically significant result (a low p-value), this creates exactly the wrong kind of research output for advancing knowledge.
Advancing knowledge isn’t just about writing as many papers as possible. Even if they were all excellent papers the real progress comes when we compare them to each other, to build a more detailed picture.
Our current academic culture is like trying to build a brick wall when everyone has been encouraged only to make bricks. So we are studiously making as many bricks as possible, as quickly as we can, and dumping them all in a pile. Look how many bricks we made!
At the moment it is much easier to build a successful academic career as a brick maker than a brick layer, and that is threatening the whole process. We need to make brick laying valued again.
We do that by changing what gets published by journals and what gets rewarded with career advancement by universities.
We can do this by making it much easier to publish null results (when a study finds no significant effect), replications (when a study repeats an earlier research design in a new setting to see if the result still holds), and meta-analyses (when a study pools all of the research papers on a specific question to establish the current state of knowledge).
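For concreteness, here is a minimal sketch of the simplest kind of meta-analysis, fixed-effect inverse-variance pooling. The five study estimates and standard errors are hypothetical, invented purely for the example:

```python
import math

# Hypothetical effect estimates (e.g. mean differences) and their
# standard errors from five independent studies of the same question.
studies = [
    (0.30, 0.15),
    (0.10, 0.20),
    (0.25, 0.10),
    (-0.05, 0.25),   # a null-ish result still carries information
    (0.18, 0.12),
]

# Fixed-effect inverse-variance pooling: weight each study by 1/SE^2,
# so more precise studies count for more.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled estimate = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
```

The pooled estimate is more precise than any single study, which is the whole point: knowledge advances by combining results, and that only works if null results and replications actually make it into the published record.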
We need to make it standard practice to include your data and analysis when submitting a paper for review. That data and analysis should then be published online for anyone to access when the paper is published. This makes it easier for others to conduct replications and meta-analyses.
In tandem with this, university research rating schemes and career promotions need to give greater credit for publishing null results, replications, and meta-analyses.
Some progress is being made on the journal side. PLOS ONE is an open-access, online-only journal that publishes replications, null results and meta-analyses. In New Zealand, economists at the University of Canterbury have founded SURE, the Series of Unsurprising Results in Economics.
These are steps in the right direction. Now we need to move from isolated innovations to system-wide transformation. A recent report by the Dutch science academy sets out what that should look like. We won’t accomplish it by publishing hoax papers in fields we disdain.