The European Union recently commissioned a study to “reflect on the state of the discipline and general trends within the discipline and in practice” of public administration (brought to you by the EU’s “Coordinating for Cohesion in the Public Sector of the Future” group, or COCOPS). The resulting report ranked public administration/management journals based on a survey of European scholars, which asked respondents to rank, in order of preference, where they would submit a good paper.
At my own school, faculty have vigorously (and in a healthy manner, I might add) debated the relative importance of journal rankings. And this debate is certainly not isolated to my current place of employment. But one might question whether any of it really matters. Once a given metric becomes an established point of reference among those judged by that metric, is there any reason to believe that any other metric (qualitative or quantitative) will adequately replace it?
For instance, the Journal Citation Reports (JCR) and Google Scholar Metrics are two widely accepted quantitative measures of journal prominence in a given field. JCR, in particular, has been used for years and is prominently featured as the metric of choice on most social science journals’ websites.
Below, I show tables derived from the COCOPS study, JCR, and Google Scholar Metrics. I have eliminated distinctively “policy”-oriented journals from lists in the “Public Administration” category in both JCR and Google Scholar. Even keeping in mind the obvious European bias in the COCOPS report, an almost identical list would emerge based on five-year impact factor or Google Scholar metrics. In ALL three lists, the top five journals in the field of public administration are PA, PAR, JPART, Governance, and PMR.
Note that some journals do not yet have a five-year impact factor score (e.g., IPMJ). Nonetheless, it seems to me that there are a couple of things you could derive from the COCOPS report… (1) traditionally accepted quantitative rankings are endogenous to choice; or (2) they aren’t a bad rubric for some fields; or (3) both.
Today’s post is based on a great paper by Peter Meyer, on the invention of the airplane. He also has a set of slides summarizing the paper and offering lots of pictures of early plane designs.
The data set that he put together covers the writings, correspondence, and patents regarding air travel during the period before anybody had worked out whether air travel was even possible.
He paints the picture of a real community of interacting researchers. Letters are sent, ideas are shared. Patents are obtained, but then immediately pledged to the world at large. We get a sense of a small community of people that everybody else thought was crazy (until they were proven right), and who longed to see flight happen. Some people, most notably Octave Chanute, worked hard on being an information hub to keep the conversation going.
And then, they stopped. Two members of the community, the Wright Brothers, were especially active until about 1902, at which point they realized that their design could actually fly, and they stopped sharing. By the next decade, the correspondence stops and the patent battles commence:
This rapid takeoff of the industry, unmoored from the original inventors, suggests that much of the key knowledge was widely available. There were great patent battles after 1906 in the U.S. (and after 1910 in Europe) and industrial competition, but the key knowledge necessary to fly was not in fact licensed from one place or closely tied to any particular patent.
Looking to somewhat more recent history, the software world followed a similar pattern. Before the mid-1990s, software was largely seen as not patentable. That was the period when people came up with word processors, spreadsheets, databases, compilers, scripting languages, windowed GUIs, email, chat clients, the WWW. Then, after a series of federal circuit rulings which I will not rehash now, patents showed up in the software industry. If Rip van Winkle fell asleep in 1994, he’d see modern computing technology as amazingly fast and tiny and beautiful, the product of ten thousand little incremental improvements, but a basically familiar elaboration on what was in the commons in 1994.
The 3D printing world has a different history, because the early innovations were deemed patentable from the start. Many authors characterize the 3D maker world as being in a holding pattern, because key patents from the mid-1990s claimed the fundamental technologies. For airplanes and software, the fundamental building blocks were out in the public before the lawyers showed up. For 3D printing, the patents came from the start, so the field had to wait out the 17-year patent term before the common tools became commonly available.
[By the way, I found that last link to be especially interesting. It lists 16 patents that the authors identify as key to 3D printing, though the authors refuse on principle to say that their being freed up will advance the industry. Five of the sixteen are listed as having “current assignment data unavailable”, meaning that even if you wanted to license the described technology, the authors—a Partner and Clerk at an IP law firm—couldn’t tell you who to contact to do so. Orphan works aren’t exclusive to copyright.]
These are loose examples of broad industries, but they make good fodder for the steampunk alt history author in all of us. What would the 1910s and 1920s have looked like if airplanes were grounded in a patent thicket? What would our computer screens look like today if WordPerfect Corp had had a patent on the word processor? What would the last decade of our lives have looked like if the cheap 3D printer technology emerging today were patent-free then?
Last time, I sketched a model of the peer review process as an extensive-form game. The model described the review process as a noisy measurement: the paper has some quality, and the review measures that quality plus some bias and some variance. With greater effort, the review’s variance can be lowered. The game I described was one-shot, about a single paper going through the process.
I didn’t describe the reviewer’s incentive to exert effort to carefully evaluate a paper, because within the one-shot game, there is none. To get the referee to exert nonzero effort, there has to be another step inserted into the game:
- Based on the referee’s observed effort level, the editor, author, or readers reward or punish the referee.
This post will discuss some of the possible ways to implement this step.
My big conclusion is that anonymity in peer review is more of a barrier than a help. Having reviewers sign their names opens up the possibility of publishing the reviews, which turns a peer reviewer into a public discussant of the paper, and turns the review itself into a petit publication. Journals in the 1900s couldn’t do this because of space limitations, but in the world where online appendices are plentiful, this can be a good way to reward reviewers for putting real effort into helping readers and editors understand and evaluate the paper.
To summarize most of the story so far, the USPTO has no incentive to reject applications on obvious claims. It’s easy to find places where the USPTO refers to applicants as customers, and where its rhetoric leans more toward serving those customers than promoting Progress. Remember all that during the election campaign where President Obama promised to maintain balance at the USPTO and ensure that it serves Progress, not maximizing patent count? You don’t, because it didn’t happen.
This time we’ll consider the incentives of the applicants themselves.
Dave Kappos, director of the USPTO, whom I’ve mentioned a few times on this blog, will be speaking on the invention of software patents.
Director Kappos will address the topic of high-tech innovation and the role of software patents in that innovation. He’ll examine how software came to be patented; how those patents are featured across the innovation landscape; and how the USPTO in the last three years has taken concrete steps to ensure the highest level of quality in issued patents while providing avenues for re-examination of existing patents.
I thought this was worth posting for two reasons:
- If you’re in DC, well, you should go. I expect it will be fun and/or hilarious.
- This little phrase: “how software came to be patented”. That is, the director of the USPTO believes that there was a time when software was not patentable, but it now is. Given that the Supreme Court has done little but invalidate patents and tighten the range of what is patentable, and Congress hasn’t done anything on the subject since the 1950s, how did this shift occur?
I want this to be a brief post, so I’ll leave that question as an exercise for the bureauphile. Next time, some more mechanism design.
Hi, my name is Ben Klemens, and I’m honored to say that I’ve been invited to write a bit here on Bureauphile. My background is mostly in game theory, statistics, computational modeling, and other sundry methodological pursuits. But I have read far too much on patent law, so my first few posts will likely be on that subject.
Obvious patents almost seem to be the norm these days, because of all of the Bureauphile’s favorite problems, including regulatory capture and a budget-maximizing bureaucracy.
The parade of idiotic patents has been a common trope since Amazon’s one-click patent in the ’00s: the slide-to-unlock feature on the iPhone, Google’s patent on doodles, Friendster’s patent on circles of friends. I mean, if I posed this problem to you—
I have a smartphone with a screen and only one button. I need a way to keep it from turning on in users’ pockets.
—how long would it take before it occurred to you to require a gesture on the screen to unlock the telephone? As for making it work, here’s an implementation in about 450 lines of code—get it before the author gets sued.
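To underline how little machinery the idea requires, here is a toy sketch of the core check. The coordinates, tolerance, and touch-event format are all made up for illustration; this is not the linked 450-line implementation:

```python
# A "slide to unlock" gesture, reduced to its essence: did the finger
# trace the slider track from start to end without wandering off it?
# All constants below are hypothetical screen coordinates in pixels.

UNLOCK_PATH_START = (40, 300)   # where the slider track begins
UNLOCK_PATH_END = (280, 300)    # where the slider track ends
TOLERANCE = 20                  # how far off the track a touch may stray

def near(point, target, tol=TOLERANCE):
    """True if point is within tol pixels of target on both axes."""
    return abs(point[0] - target[0]) <= tol and abs(point[1] - target[1]) <= tol

def is_unlock_gesture(touch_points):
    """touch_points: the (x, y) samples of one finger drag, in order."""
    if not touch_points:
        return False
    if not near(touch_points[0], UNLOCK_PATH_START):
        return False
    if not near(touch_points[-1], UNLOCK_PATH_END):
        return False
    # Reject drags that drift off the horizontal track.
    return all(abs(y - UNLOCK_PATH_START[1]) <= TOLERANCE
               for _, y in touch_points)
```

A clean left-to-right drag along the track unlocks; a drag that wanders off the track, or a stray touch, does not. That is roughly the whole invention.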
GAO won the Ig Nobel Literature Prize: “The US Government General Accountability Office, for issuing a report about reports about reports that recommends the preparation of a report about the report about reports about reports.”
REFERENCE: “Actions Needed to Evaluate the Impact of Efforts to Estimate Costs of Reports and Studies,” US Government General Accountability Office report GAO-12-480R, May 10, 2012.
According to the internets, GAO did not send anyone to collect the award.
Today is the 225th anniversary of our nation’s Constitution. Good work America! Read more here.