Science, a peer-reviewed journal, recently published an article lambasting the quality of peer review in many journals that are not Science. The article described itself as “the first global snapshot of peer review across the open-access scientific enterprise”, and found peer review in that context to be lacking.
As one who leans toward the theoretical and the methodological, I naturally wonder what model underlies the claim that “peer review across the open-access scientific enterprise” would be of low quality. My understanding is that an “open-access” journal is defined as one that does not charge subscription fees but instead allows readers free access via the Web. So we need some sort of model that explains why the absence of reader fees would lead to consistently lower referee effort.
Generally speaking, the discussion about scientific peer review tends to be…lacking in scientific rigor. Those who have written on the matter, including some involved in open-access journals, all seem to agree that a claim that open access would induce lower referee effort makes little sense. It’s basically impossible to write down as a real model.
So in this and the next column, I attempt to fill the gap and provide a theoretical framework for describing a one-paper peer review process. I get halfway: I stop short of the general-equilibrium model covering the entire publication market. I also don’t specify the cost functions that one would need to complete the model, because they wouldn’t make sense in a partial equilibrium model (i.e., there’s no point in a specific functional form for the cost function without other goods and a budget constraint).
Nonetheless, we already run into problems with this simple model. The central enigma is this: what incentive does a referee have to exert real effort in reviewing a paper?
After the break, I will give many more details of the game, but here are the headline implications of the partial model so far, which don’t yet address the central enigma:
- The top-tier journal does not necessarily have the best papers, because the lower-tier journals have papers that have gone through more extensive review: a paper rejected at the top journal and resubmitted down the ladder accumulates additional rounds of refereeing along the way.
- More reviews can raise reader confidence that a paper is good, yet a paper is published after only a handful of reviews. Stepping out of the game: if dozens or hundreds of readers vetted the paper before publication, both false positives and false negatives in the publication decision would be much reduced.
- Readers are more likely to read journals that maintain a high standard.
- Readers are also more likely to read journals whose referees exerted a high level of effort in reviewing the papers, and they can then read those papers with less effort of their own. The risk of trusting a false result is mitigated, because careful reviews produce fewer false positives. However, referee effort is not observable.
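The claim in the second bullet, that many independent pre-publication readers would shrink both error rates, is essentially the Condorcet jury theorem in miniature. Here is a minimal sketch of my own (not the column’s formal model), assuming each referee independently reaches the right verdict with some fixed probability and the publication decision follows the majority:

```python
from math import comb

def majority_error(n, p):
    """Probability that a majority of n independent referees, each judging
    correctly with probability p, reaches the wrong verdict on a paper
    (ties count as wrong)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1))

# A handful of reviews vs. a crowd of pre-publication readers,
# with each reviewer correct 70% of the time:
handful = majority_error(3, 0.7)    # about 0.216
crowd = majority_error(101, 0.7)    # vanishingly small
```

With three referees, roughly one paper in five gets the wrong verdict; with a hundred readers, essentially none do, which is the intuition behind the bullet above. The independence assumption is doing heavy lifting here, of course, and is exactly the sort of thing a full model would have to justify.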
All of this is still under the assumption that referees have an incentive to put real effort into the review process, an assumption I’ll discuss further next time.
After the jump, the rest of this entry will go into more precise detail (~3,000 words) about the game and some of its implications.