Governing Magazine survey says… Go to a Policy School (and pay close attention in your Public Management and Policy Process classes)!
The European Union recently commissioned a study to “reflect on the state of the discipline and general trends within the discipline and in practice” of public administration, conducted by the EU’s “Coordinating for Cohesion in the Public Sector of the Future” (COCOPS) group. The resulting report ranked public administration/management journals based on a survey of European scholars, who were asked to rank, in order of preference, the journals to which they would submit a good paper.
At my own school, faculty have vigorously (and in a healthy manner, I might add) debated the relative importance of journal rankings. And this debate is certainly not isolated to my current place of employment. But one might question whether any of this debate really matters. Once a given metric becomes an established point of reference among those judged on that metric, is there any reason to believe that any other metric (qualitative or quantitative) will adequately replace it?
For instance, Journal Citation Reports (JCR) and Google Scholar Metrics are two widely accepted quantitative measures of journal prominence in a given field. JCR, in particular, has been used for years and is prominently featured as the metric of choice on most social science journals’ websites.
Below, I show tables derived from the COCOPS study, JCR, and Google Scholar Metrics. I have eliminated distinctively “policy”-oriented journals from the lists in the “Public Administration” category in both JCR and Google Scholar. Even keeping in mind the obvious European bias in the COCOPS report, an almost identical list emerges based on five-year impact factor or Google Scholar metrics. In ALL three lists, the top five journals in the field of public administration are PA, PAR, JPART, Governance, and PMR.
Note that some journals do not yet have a 5-year impact factor score (e.g., IPMJ). Nonetheless, it seems to me that there are a couple of things you could take away from the COCOPS report: (1) traditionally accepted quantitative rankings are endogenous to choice; or (2) they aren’t a bad rubric for some fields; or (3) both.
A genuine “big sky” shout-out to grad school friends Dave Parker and Erika Franklin Fowler, both PROMINENTLY featured in this week’s fascinating PBS Frontline “Big Sky, Big Money,” which examines “dark money” in Montana politics and in campaigns around the country in the wake of Citizens United (2010). Parker, a coauthor on research looking at congressional investigations, is a dedicated student of American politics. He has driven hundreds of miles this year collecting information on campaign advertisements from local television stations, filling a vital gap in available information about what’s going on in American politics. And he is rewarded with a spot on PBS Frontline, the coolest show on TV’s nerdiest channel. Not bad!
The authors identify 52 interventions published in leading medical journals that compare observational and experimental evidence; in other words, a correlation was observed and then subjected to a randomized experimental design. They find that 0 of 52 interventions (again, zero percent) yielded significant results in randomized trials. Zero percent? Five findings were apparently significant in the contrary direction, and not one false positive? In any case, the article reads like a fairly fundamental indictment of a whole way of doing business, but the authors’ prescription is unworkable. Step 1: Cut all data sets in half. The notion that half of all data be placed in a “lock box” and subjected to an elaborate replication process elevates an important principle to the level of absurdity.
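For concreteness, the “lock box” prescription amounts to something like the following split: analyze one random half of the data freely, and seal the other half away for confirmatory replication. This is only a minimal sketch of the idea as I understand it; the function name and details are my own, not the article’s.

```python
import random

def lock_box_split(records, seed=0):
    """Randomly split a dataset in half: one half for exploratory
    analysis, the other sealed in a 'lock box' until a confirmatory
    replication is pre-registered. (Hypothetical illustration.)"""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = records[:]      # copy so the original order is untouched
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

# Example: 100 records become two disjoint halves of 50 each.
exploratory, lock_box = lock_box_split(list(range(100)))
```

Even in sketch form, the practical objection is visible: every analysis immediately forfeits half its statistical power, which is why the principle strikes me as sound but the blanket prescription as unworkable.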
I like that the poster is placed downtown, in a location where many people walk by. Lots of bureaucrats there. Unfortunately, people tend to ignore ads these days.
Hat tip to Boingboing.net.
For the past thirty years, students of American government have leaned hard on a metaphor contrasting “police patrol” and “fire alarm” oversight. It’s an interesting and useful idea, but it is basically unsupported by careful empirical work. My esteemed colleague David C.W. Parker (who blogs about Montana politics here) and I have examined the partisan dimensions of congressional oversight in a couple of academic articles: a 2009 article published in Legislative Studies Quarterly and a forthcoming article in Political Research Quarterly. This summer we published a short essay, “Oversight: Overlooked or Unhinged?” in Extension of Remarks, the newsletter of the Legislative Studies Section of the American Political Science Association. It’s basically an effort to work through the critique of the “fire alarm” metaphor with an eye on current events. Did you miss it? Here it is again.
Harper’s runs a feature every so often that the bureauphile will seek to emulate in the coming months. Until then, please enjoy the most recent installment from Harper’s chronicling President Reagan’s relationship with the FBI.