Where European scholars are publishing in Public Administration and why we already knew this

The European Union recently commissioned a study to “reflect on the state of the discipline and general trends within the discipline and in practice” of public administration (brought to you by the EU’s “Coordinating for Cohesion in the Public Sector of the Future” Group, or COCOPS). The subsequent report produced a ranking of public administration/management journals from a survey of European scholars, which asked respondents to rank, in order of preference, the journals to which they would submit a good paper.

At my own school, faculty have vigorously (and in a healthy manner, I might add) debated the relative importance of journal rankings. And this debate is certainly not isolated to my current place of employment. But one might question whether any of this debate really matters. Once a given metric becomes an established point of reference among those judged on that metric, is there any reason to believe that any other metric (qualitative or quantitative) will adequately replace it?

For instance, Journal Citation Reports (JCR) and Google Scholar Metrics are two rather widely accepted quantitative measures of journal prominence in a given field. JCR, in particular, has been used for years and is prominently featured as the metric of choice on most social science journals’ websites.

Below, I show tables derived from the COCOPS study, JCR, and Google Scholar Metrics. I have eliminated distinctively “policy”-oriented journals from the lists in the “Public Administration” category in both JCR and Google Scholar. Even keeping in mind the obvious European bias in the COCOPS report, an almost identical list would emerge based on five-year impact factor or Google Scholar Metrics. In ALL three lists, the top five journals in the field of public administration are PA, PAR, JPART, Governance, and PMR.

Note that some journals do not yet have a five-year impact factor score (e.g., IPMJ). Nonetheless, it seems to me that there are a couple of things you could derive from the COCOPS report… (1) traditionally accepted quantitative rankings are endogenous to choice; (2) they aren’t a bad rubric for some fields; or (3) both.



Peer reviewer incentives and anonymity

Last time, I sketched a model of the peer review process as an extensive-form game. The model described the review process as a noisy measurement: the paper has some quality, and the review measures that quality plus some bias and some variance. With greater effort, the review’s variance can be lowered. The game I described was one-shot, about a single paper going through the process.
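For concreteness, here is a minimal sketch of that measurement step in Python. The 1/(1 + effort) form for the noise, and the particular numbers, are my own illustrative assumptions; the model itself does not pin down any functional forms:

```python
import numpy as np

rng = np.random.default_rng(0)

def review_score(quality, bias, effort, base_sd=2.0):
    """One referee report: true quality, plus referee bias, plus noise.

    The noise shrinks as effort rises; the 1/(1 + effort) form is an
    illustrative assumption, not part of the model in the post.
    """
    sd = base_sd / (1.0 + effort)
    return quality + bias + rng.normal(0.0, sd)

# Hypothetical numbers: the same paper reviewed carelessly vs. carefully.
careless = [review_score(quality=0.2, bias=0.1, effort=0.0) for _ in range(5)]
careful = [review_score(quality=0.2, bias=0.1, effort=5.0) for _ in range(5)]
print(careless)  # scattered widely around the true quality
print(careful)   # clustered tightly around quality + bias
```

Nothing hangs on the specifics; the only feature that matters is that more effort means a tighter signal.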

I didn’t describe the reviewer’s incentive to exert effort to carefully evaluate a paper, because within the one-shot game, there is none. To get the referee to exert nonzero effort, there has to be another step inserted into the game:

  • Based on the referee’s observed effort level, the editor, author, or reader rewards or punishes the referee.

This post will discuss some of the possible ways to implement this step.
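To see why the extra step matters, here is a minimal sketch with placeholder functional forms of my own choosing for the reward and the cost of effort (the model deliberately leaves these unspecified):

```python
import numpy as np

def referee_payoff(effort, reward_rate=0.0, cost_rate=1.0):
    """Referee utility: a reward tied to observed effort minus the cost of effort.

    Linear reward and quadratic cost are placeholder assumptions for illustration.
    """
    return reward_rate * effort - cost_rate * effort ** 2

efforts = np.linspace(0.0, 3.0, 301)

# One-shot game as originally described: no reward step, so zero effort is optimal.
print(efforts[np.argmax(referee_payoff(efforts, reward_rate=0.0))])  # 0.0

# With a reward attached to observed effort, positive effort becomes optimal.
print(efforts[np.argmax(referee_payoff(efforts, reward_rate=2.0))])  # 1.0
```

The point is only that some observable signal of effort has to enter the referee’s payoff; otherwise the maximization above always lands at zero effort.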

My big conclusion is that anonymity in peer review is more of a barrier than a help. Having reviewers sign their names opens up the possibility of publishing the reviews, which turns a peer reviewer into a public discussant of the paper, and turns the review itself into a petit publication. Journals in the 1900s couldn’t do this because of space limitations, but in a world where online appendices are plentiful, this can be a good way to reward reviewers for putting real effort into helping readers and editors understand and evaluate the paper.


A model of peer review

Science, a peer-reviewed journal, recently published an article lambasting the quality of peer review in many journals that are not Science. The article described itself as “the first global snapshot of peer review across the open-access scientific enterprise”, and found peer review in that context to be lacking.

As one who leans toward the theoretical and the methodological, I naturally wonder what model underlies the claim that “peer review across the open-access scientific enterprise” would be of low quality. My understanding is that “open-access” is defined to include any journal that does not charge subscription fees but instead allows readers free access via the Web. So we need some sort of model that explains why the lack of reader fees would lead to a consistently lower quality of referee effort.

Generally speaking, the discussion about scientific peer review tends to be…lacking in scientific rigor. Those who have written on the matter, including some involved in open-access journals, all seem to agree that the claim that open access would induce lower referee effort makes little sense; it is basically impossible to write down as a real model.

So in this and the next column, I attempt to fill the gap and provide a theoretical framework for describing a one-paper peer review process. I get halfway: I stop short of the general-equilibrium model covering the entire publication market. I also don’t specify the cost functions that one would need to complete the model, because they wouldn’t make sense in a partial equilibrium model (i.e., there’s no point in a specific functional form for the cost function without other goods and a budget constraint).

Nonetheless, we already run into problems with this simple model. The central enigma is this: what incentive does a referee have to exert real effort in reviewing a paper?

After the break, I will give many more details of the game, but here are the headline implications of the partial model so far, which don’t yet address the central enigma:

  • The top-tier journal does not necessarily have the best papers. This is because the lower-tier journals have papers that have gone through more extensive review (a paper that lands at a lower-tier journal has typically been rejected and re-reviewed at least once along the way).
  • More reviews can raise reader confidence that a paper is good. However, the paper is published after only a handful of reviews. Stepping out of the game, situations where dozens or hundreds read the paper before publication would do much to diminish both false positives and false negatives in the publication decision (see the simulation sketch after this list).
  • Readers are more likely to read journals that maintain a high standard.
  • Readers are also more likely to read journals where referees exerted a high level of effort in reviewing the papers, and they can then read those papers with less effort of their own. The problem of trusting a false paper is mitigated, because careful reviews produce fewer false positives. However, referee effort is not observable.
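As a rough illustration of the false-positive/false-negative points above, here is a minimal simulation sketch. The quality distribution, the noise, the acceptance threshold, and the accept-if-the-average-score-clears-the-threshold rule are all assumptions of mine, chosen only to show the direction of the effect:

```python
import numpy as np

rng = np.random.default_rng(0)

def publication_error_rates(n_referees, effort, n_papers=20000,
                            base_sd=2.0, threshold=0.0):
    """Accept a paper if the mean of its noisy referee scores clears a threshold.

    Papers whose true quality exceeds the threshold 'should' be accepted;
    everything here (distributions, threshold, decision rule) is illustrative.
    """
    quality = rng.normal(0.0, 1.0, n_papers)
    sd = base_sd / (1.0 + effort)
    scores = quality[:, None] + rng.normal(0.0, sd, (n_papers, n_referees))
    accepted = scores.mean(axis=1) > threshold
    good = quality > threshold
    false_pos = np.mean(accepted & ~good)   # bad papers accepted
    false_neg = np.mean(~accepted & good)   # good papers rejected
    return false_pos, false_neg

for n in (1, 3, 30, 300):
    print(n, publication_error_rates(n_referees=n, effort=1.0))
```

Both error rates shrink as the number of reviews grows or as effort (and hence precision) rises, which is all the bullets above claim.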

All of this is still under the assumption that referees have an incentive to put real effort into the review process, an assumption I’ll discuss further next time.

After the jump, the rest of this entry will go into more precise detail (~3,000 words) about the game and some of its implications.

big sky, big money

A genuine “big sky” shout-out to grad school friends Dave Parker and Erika Franklin Fowler, both PROMINENTLY featured in this week’s fascinating PBS Frontline “Big Sky, Big Money” examining “dark money” in Montana politics and in campaigns around the country in the wake of Citizens United (2010). Parker, a coauthor on research looking at congressional investigations, is a dedicated student of American politics. He’s driven countless miles this year collecting information on campaign advertisements from local television stations, filling a vital gap in available information about what’s going on in American politics. And he is rewarded with a spot on PBS Frontline, the coolest show on TV’s nerdiest channel. Not bad!

even more confounded

I can’t shake this finding by Young and Karr (2011) I mentioned in June.

The authors identify 52 interventions published in leading medical journals that compare observational and experimental evidence – in other words, a correlation was observed and then subjected to a randomized experimental design. They find that 0 of the 52 interventions – again, zero percent – yielded significant results in randomized trials. Zero percent? Five findings were apparently significant in the contrary direction, and not one false positive? Anyway, the article seems like a pretty fundamental indictment of a whole way of doing business, but their prescription is unworkable. Step 1: Cut all data sets in half. The notion that half of all data be placed in a “lock box” and subjected to an elaborate replication process elevates an important principle to the level of absurdity.

the texas tribune

On the Media, an NPR show about, big shocker, the media, re-ran an episode about data last week. One segment was about The Texas Tribune, a non-profit, non-partisan media organization that compiles data from Texas. The data is posted on their site both in raw form and with analysis from The Texas Tribune. Lots of interesting information there.

For example, the annual salaries of Texas government workers can now be found there. The more I ponder it, however, posting government employees’ salaries seems akin to the Swedish custom of posting everyone’s tax returns online. The practical effects of this policy are reviewed by an admittedly biased writer for The Telegraph in this article.

As they say though, sunshine makes the best disinfectant, so let there be light. (Is that even true? I will run that down next…)

the real csi

How reliable is the forensic evidence collected and analyzed by crime scene investigators? Forensic science is an essential part of the criminal justice system and a staple of American TV culture. It’s easy to see the appeal. Clever investigators whose expert eye and powerful techniques for collecting and analyzing data compel powerful inferences about right and wrong. But PBS Frontline’s “The Real CSI” offers a very different picture. Continue reading