Managing complexity: encoding the tax code

I put together an interactive tool to explore the U.S. individual tax calculation.

Here is a screenshot of the most basic tax form. You can click through to https://b-k.github.io/1040.js and check the boxes to add components for kids, a mortgage, student loans, rents and royalties, and so on.

[Screenshot: the basic 1040-EZ form.]



The space between the signal and the action

In the game-theoretic world, the gunner never shoots: the other side looks at the options down the game tree, realizes that one action will lead to his or her getting shot, and doesn’t take that action. In game theory textbooks, cases never go to court: both sides calculate the risk-adjusted expected payoff from trial, and if it is positive for one hyperrational side, then it is negative for the other hyperrational side, and a settlement can be calculated based on that. In both cases, knowledge that an event could occur largely has the same effect as the event itself.

Bureaucrats at their desks

Dutch photographer Jan Banning has traveled the world documenting the consequences of war, the homeless and impoverished, and victims of human trafficking. Asked to photograph a story on the administration of international development aid, something he thought to be “un-photographable,” Banning and a journalist set out to visit hundreds of local government offices worldwide. Between 2003 and 2007, they met civil servants in eight countries on five continents. “Though there is a high degree of humour and absurdity in these photos,” Banning says, “they also show compassion with the inhabitants of the state’s paper labyrinth.”

Where European scholars are publishing in Public Administration and why we already knew this

The European Union recently commissioned a study to “reflect on the state of the discipline and general trends within the discipline and in practice” of public administration (brought to you by the EU’s “Coordinating for Cohesion in the Public Sector of the Future” Group–or COCOPS). The subsequent report produced a ranking of public administration/management journals based on a survey of European scholars, which asked respondents to rank, in order of preference, where they would submit a good paper.

At my own school, faculty have vigorously (and in a healthy manner, I might add) debated the relative importance of journal rankings. And this debate is certainly not isolated to my current place of employment. But one might question whether any of this debate really matters. Once a given metric becomes an established point of reference among those judged on that metric, is there any reason to believe that any other metric (qualitative or quantitative) will adequately replace it?

For instance, Journal Citation Reports (JCR) and Google Scholar Metrics are two rather widely accepted quantitative metrics of journal prominence in a given field. JCR, in particular, has been used for years and is prominently featured as the metric of choice on most social science journals’ websites.

Below, I show tables derived from the COCOPS study, JCR, and Google Scholar Metrics. I have eliminated distinctively “policy”-oriented journals from lists in the “Public Administration” category in both JCR and Google Scholar. Even keeping in mind the obvious European bias in the COCOPS report, an almost identical list would emerge based on five-year impact factor or Google Scholar metrics. In ALL three lists, the top five journals in the field of public administration are PA, PAR, JPART, Governance, and PMR.

Note that some journals do not yet have a 5-year impact factor score (e.g., IPMJ). Nonetheless, it seems to me that there are a couple of things you could derive from the COCOPS report… (1) traditionally accepted quantitative rankings are endogenous to choice; or (2) they aren’t a bad rubric for some fields; or (3) both.


Airplanes and the intellectual commons

Today’s post is based on a great paper by Peter Meyer, on the invention of the airplane. He also has a set of slides summarizing the paper and offering lots of pictures of early plane designs.

The data set he put together covers the writings, correspondence, and patents regarding air travel during the period before anybody had worked out whether air travel was even possible.

He paints the picture of a real community of interacting researchers. Letters are sent, ideas are shared. Patents are obtained, but then immediately pledged to the world at large. We get a sense of a small community of people that everybody else thought was crazy (until they were proven right), and who longed to see flight happen. Some people, most notably Octave Chanute, worked hard on being an information hub to keep the conversation going.

And then, they stopped. Two members of the community, the Wright Brothers, were especially active until about 1902, at which point they realized that their design could actually fly, and they stopped sharing. By the next decade, the correspondence stops and the patent battles commence:

This rapid takeoff of the industry, unmoored from the original inventors, suggests that much of the key knowledge was widely available. There were great patent battles after 1906 in the U.S. (and after 1910 in Europe) and industrial competition, but the key knowledge necessary to fly was not in fact licensed from one place or closely tied to any particular patent.

Looking to somewhat more recent history, the software world followed a similar pattern. Before the mid-1990s, software was largely seen as not patentable. That was the period when people came up with word processors, spreadsheets, databases, compilers, scripting languages, windowed GUIs, email, chat clients, the WWW. Then, after a series of Federal Circuit rulings, which I will not rehash now, patents showed up in the software industry. If Rip van Winkle fell asleep in 1994, he’d see modern computing technology as amazingly fast and tiny and beautiful, the product of ten thousand little incremental improvements, but a basically familiar elaboration on what was in the commons in 1994.

The 3D printing world has a different history, because the early innovations were deemed patentable from the start. Many authors characterize the 3D maker world as being in a holding pattern, because key patents from the mid-1990s claimed the fundamental technologies. For airplanes and software, the fundamental building blocks were out in the public before the lawyers showed up. For 3D printing, the patents came from the start, so it took the 17-year wait until their expiration for the common tools to become commonly available.

[By the way, I found that last link to be especially interesting. It lists 16 patents that the authors identify as key to 3D printing, though the authors refuse on principle to say that their being freed up will advance the industry. Five of the sixteen are listed as having “current assignment data unavailable”, meaning that even if you wanted to license the described technology, the authors—a Partner and Clerk at an IP law firm—couldn’t tell you who to contact to do so. Orphan works aren’t exclusive to copyright.]

These are loose examples of broad industries, but they make good fodder for the steampunk alt history author in all of us. What would the 1910s and 1920s have looked like if airplanes were grounded in a patent thicket? What would our computer screens look like today if WordPerfect Corp had had a patent on the word processor? What would the last decade of our lives have looked like if the cheap 3D printer technology emerging today were patent-free then?

23andme and the FDA and me

23andMe provides a service wherein you send them a sample of your spit, they run it through a machine that detects 550,000 genetic markers, and then they express a likelihood that you are susceptible to certain genetic disorders.

To do this, they combed through the genetics literature for GWASes: genome-wide association studies, searching for the link between certain genes and certain disorders. I was on the team that did one of them: a study searching for genetic causes of bipolar disorder. Our study involved interviewing and drawing blood from over 2,500 subjects (a thousand at NIH and 1,500 at a German sister lab). As of this writing, Google Scholar says it’s been cited 418 times. If you have a 23andMe account, you can find our study as one of a few under the section on bipolar disorder.

The FDA considers this service—running spit through the Illumina machine, then comparing the data to correlations like those we reported—to be a “medical device”, and has ordered 23andMe to cease and desist from distributing this medical device.

After the genotyping itself, this is an informational product, so it gives us a chance to see how an institution built to handle food and drugs that people ingest will handle a product that is almost entirely a list of statistical correlations.

We have limited information about the storyline to date, and I’ll try to avoid hypothesizing about what’s not in the letter, which already gives us enough for discussion. The 23andMe people may have been uncooperative, or it might even be a simple case of clashing personalities, or who knows what else went on. The labeling regulations in the CFR go on for pages, and I won’t pretend to know whether 23andMe complies or not. [More useful trivia from 21 CFR 801(164): “artificial flavorings, artificial sweeteners, chemical preservatives, and color additives are not suitable ingredients in peanut butter.” That’s the law.]

Snake oil

The FDA should exist. It should be policing health claims. We joke about snake-oil salesmen, but that’s a phrase because there was once a time when people really did sell snake oil and people really were dumb enough to use it instead of seeing a real doctor (not that real doctors at the time were much better…). Pseudoscience lives to this day, and if the FDA didn’t exist, we’d see a lot of late-nite ads for guaranteed cures for cancer for the low low price of $19.95, which would cause people to delay seeking real treatment, which would kill people.

But there are differences. As above, the genotyping service is almost purely informational. You can go to the drug store and buy a thermometer, a bathroom scale, a mirror, or any of a number of other self-inspection tools to find out about yourself. While you’re at the drug store, you can check your blood pressure on that automated machine in the back. Then you can go home and compare your data to Wikipedia pages about hundreds of different maladies. As a first pass (of several), Illumina’s genotyping machine is exactly like these other tools for measuring the body, apart from the fact that it is unabashedly a flippin’ miracle of modern science. Perhaps there was a time when people said the same thing about the thermometer.

Is the machine somehow unreliable? When working with the Illumina machines(*), what blew my mind was that the machine is very accurate, as in over 99.9% correct over 550,000 data points. So it’s like a thermometer, except the results are more accurate and reliable. A big chunk of academia’s publications is about surmounting problems in doing research, and in that vein it is easy to find abundant articles on the process and reliability of using the machines themselves. Its quirks are well understood.

[(*) They’re BeadStations, but we always called them the Illumina. It’s good to be reminded that the provision of cheap, universal genotyping is a plot by the Illuminati.]

Diagnostics

But 23andMe provides more than just a list of SNPs (single-nucleotide polymorphisms, which I colloquially refer to as genetic markers here). They also provide the odds of being susceptible to certain diseases, using studies such as the one I worked on.

The FDA’s letter explains how this can cause problems:

For instance, if the BRCA-related risk assessment for breast or ovarian cancer reports a false positive, it could lead a patient to undergo prophylactic surgery, chemoprevention, intensive screening, or other morbidity-inducing actions, while a false negative could result in a failure to recognize an actual risk that may exist.

Here, it seems that the FDA is protecting against doctor incompetence: performing a mastectomy or prescribing chemotherapy because somebody walked into the office with a printout from 23andMe is not the action of a competent doctor. So there is an implication here that the FDA feels that doctors can’t be trusted. What research there is shows that the ability of MDs to do basic statistical analysis is not stellar. Perhaps the FDA should require that patients may only receive medical test results under the supervision of a trained statistician.

How it works

The false negative part is a little closer to the story of the snake-oil doses that prevent somebody with a real problem from seeking real treatment. However, here we run into a problem as well: the results from 23andMe do not report positives or negatives. They only report probabilities, binned into average, above-average, &c.

After all, our paper doesn’t report anything definite either. We report a statistical association between certain SNPs having certain values and the odds ratio for bipolar disorder.

Given a series of such odds ratios, we can apply Bayes’s rule. You may not remember it from Stats 101, but the gist is that we begin with a prior likelihood of a state (such as ‘the person who mailed in this spit has blue eyes’), then update that likelihood using the data available. If we have several studies, we can apply Bayes’s rule repeatedly. In reality, we would do this with probability distributions—typically bell curves indicating the mean likelihood of blue eyes and confidence bands around that mean. Repeated addition of new data, using multiple SNPs described in multiple studies, typically narrows the variance of the distribution, increasing our confidence in the result.
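To make that bookkeeping concrete, here is a minimal sketch in C of the repeated updating, done on the odds scale and treating each reported odds ratio as a likelihood ratio—a rough-and-ready shortcut compared to the full distributions just described. The prior and the per-SNP odds ratios are made-up numbers for illustration only, not figures from our study or from 23andMe’s reports.

#include <stdio.h>

int main(void) {
    /* Made-up prior: say a 25% baseline probability of blue eyes. */
    double prior = 0.25;
    double odds = prior / (1 - prior);

    /* Made-up odds ratios, one per SNP, as one might pull from published studies. */
    double odds_ratios[] = {1.5, 1.3, 1.25};
    int n = sizeof(odds_ratios) / sizeof(odds_ratios[0]);

    /* Each study multiplies the running odds. */
    for (int i = 0; i < n; i++)
        odds *= odds_ratios[i];

    double posterior = odds / (1 + odds);
    printf("prior: %.0f%%   posterior: %.0f%%\n", 100 * prior, 100 * posterior);
    return 0;
}

The real calculation carries full distributions and confidence bands rather than single numbers, but the odds-scale arithmetic is the core of it.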

I don’t have the report in front of me, but they found my brother to be at over 50% risk for obesity. We thought it was funny: he lifts weights and can talk your ear off on all matters of diet. But that’s just how the system works. His behavior has had a greater influence on his odds of becoming obese than the genetic markers, but 23andMe just has a spit sample from a U.S. resident. In the United States, over 30% of males are obese, so if I only knew that a person is male and living in the U.S., I’d guess that he has 30% odds of being obese. That’s my prior, which could then be updated using the genetic information to produce a more personalized image. It looks like his genes raised the odds of obesity from baseline. Perhaps this percentage over 50% is the sort of “false positive” that the FDA letter referred to.

[To balance this “false positive”, the tests also did a good job of picking up some rare risks that we know from our family’s medical history to really be risks. It’s probably TMI to go into a lot of detail here.]

FDA v NIH

To summarize the story so far, by my conception there are three steps to the genotyping service:

  1. Run spit through the Illumina machine.
  2. Gather data about marker/disorder correlations from the literature.
  3. Use statistical methods like Bayes’s rule to aggregate the data and studies to report the best guess at the odds of each disorder.

From the FDA letter: “we still do not have any assurance that the firm has analytically or clinically validated” the genotyping service.

Starting from step three, it is reasonable for the FDA to take a “guilty until proven innocent” attitude toward the data analysis and to require that 23andMe show its work to the FDA. But although the updating problem is far from trivial in practice, a review in good faith by both sides could verify or disprove the statisticians’ correctness in well under the five years that the FDA and 23andMe seem to have been bickering. Some of the results may not even need to go as far as applying Bayes’s rule, and may be simple application of a result from a paper.

The application of the FDA’s statement to steps one and two, where the bulk of the science happened, is the especially interesting part from a bureauphilic perspective. As above, the academic literature has more than enough on the analytic and clinical validity of step one (get good SNPs from the machine) and abundant studies such as the one I worked on that used 2,500 subjects to verify the analytic and clinical validity of step two (calculate disorder odds from SNP results). E.g., here is a survey of 1,350 GWASes.

Yet the authors of the FDA letter do not have “any assurance” of validity.

A paper’s admissibility as evidence to the FDA depends on the “regulatory pathway” under which the study was done. If a study is done to approve a drug on a European market, that study is not admissible as evidence at the FDA. Evidently, our study done at the NIH is not admissible as evidence at the FDA. Perhaps I need to parse the sentence from the letter more carefully: “we still do not have any assurance that the firm has analytically or clinically validated” the genotyping service, where the use of “the firm” indicates that research from the NIH doesn’t count, but if the product vendor replicates the study under the explicit purview of the FDA (which, given the scale of some of these studies and their sheer number, is entirely impossible) then that does count.

It’s good that the FDA has standards on testing, because drug companies have strong incentives to put their thumb on the scale in every trial. It could indeed be the case that the FDA has determined the correct way to test medical devices and derive statistical results and, at the same time, everybody else is doing it wrong. Or it could be that the FDA as an institution has an acute case of NIH syndrome.

500 million lines of code

Here at Bureauphile, we care about the measurement of performance. This post is about one terrible way to do it: lines of computer code. You may have seen the claim that healthcare.gov is backed by “about 500 million lines of software code”. That figure is from the last line of an NYT article, from an unnamed source. This was picked up by many people, probably because it makes for a nice headline about the bloat of government contracts, with a punch that a simple statement like “this site doesn’t work for a lot of people” doesn’t have. I ran into it again at Putative, which prompted me to give this some debunking.

First, let’s do the simplest of calculations. If it takes you one second to write a line of code, and you are a contractor working a solid eight-hour shift each day, it will take you 17,361 days to write half a billion lines. With a 250-day person-year, that’s 69 person-years. Of course, it takes a lot more than a second to write a line of code: more typical would be—I am not making this up—ten lines of code per day. Let’s give them 100 lines of code per day; then we’re still at 5 million days, or 20,000 person-years to write all this up. The contractors started this project at the beginning of the year, and did not have 20,000 people working on it.
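For those who want to check the arithmetic, here it is as a few lines of C, using the same assumptions as above (an eight-hour day and a 250-day person-year):

#include <stdio.h>

int main(void) {
    double lines = 500e6;                    /* half a billion lines of code */
    double sec_per_day = 8 * 60 * 60;        /* a solid eight-hour shift     */

    double days_fast = lines / sec_per_day;  /* at one line per second       */
    double days_slow = lines / 100;          /* at 100 lines per day         */

    printf("1 line/sec:    %.0f days = %.0f person-years\n", days_fast, days_fast / 250);
    printf("100 lines/day: %.0f days = %.0f person-years\n", days_slow, days_slow / 250);
    return 0;
}

Its output matches the figures above.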

So the number is prima facie fishy.

But lines-of-code counts, even on the best of days, should not be taken seriously, because it is so difficult to define what counts as a line of code.

First, different languages have different whitespace customs. In C, you’ll find people who write

if (x==0)
{
    return INFINITY;
}
else
{
    return y/x;
}

whereas in other languages the custom is much closer to

if (x==0) return INFINITY; else return y/x;

So this simple sentiment could be one or eight lines of code.

One day, when avoiding work, I wrote a one-line script that counts lines of C-family code. The gist is that I omit comments, then look for lines that have something more than a }, a ), or whitespace on them. Here it is, in one line:

sed -e 's|/\*.*\*/||' -e 's|//.*||' $* | awk -e '/\/\*/ {incomment=1}' -e '{if (!incomment) print $0}' -e '/\*\// {incomment=0}' - | grep '[^}) \t]' | wc -l

Following the ingrained UNIX tradition, this is four programs piped together, one of which (awk) has a three-line program specified on the command line. Readers good with awk will notice cases where this is inaccurate; it’s not worth caring.

I’m coming to a relative stopping point with my work on the Apophenia library for statistical and scientific computing. How many lines of code does it take to implement—if I may be immodest for a moment—a darn solid statistical library?

14,860: using the nontrivial line counter above.

27,934: Simply counting lines, regardless of their content, including documentation and blanks [wc -l for the POSIX geeks].

39,947: There is a testing suite to verify that Apophenia calculates the right numbers, which includes several data sets, totalling 10,013 lines, which we can add in. More on this below.

41,397: It is a not uncommon technique to write code that generates other code. I use the m4 macro language to autogenerate 1,450 lines of code that gets distributed in the package (by a rough count), which we can add to the above.

56,330: I use GNU Autotools for installation, which takes the generation of code using m4 to the extreme: it produces a 16,933-line script based on a 119-line pre-script that I wrote. Can’t use the library without installing it, so add that on.

325,687: When I sit down to a fresh computer (say, one that just solidified in Amazon’s cloud), I have to install other libraries before Apophenia will run. Apophenia relies upon the GNU Scientific Library, which is 197,452 lines by my nontrivial line counter (they use the super-sparse format—it’s 299,716 lines by raw line count), and SQLite 3, which is 71,905 nontrivial lines of code. One could argue that those are a necessary part of Apophenia.

So Apophenia is somewhere between about 15,000 and 325,000 lines of code. If I’m bragging to friends about how efficient my codebase is, it’s the former; if I’m on a Dickensian government contract, it’s the latter.

Getting back to healthcare.gov, one question we might really want to ask is: what lines of code might a maintainer one day have the responsibility of manually revising? The press has been reporting that 5 million lines of code have to be changed, which might be getting toward this question, though I am comfortable assuming that somebody just made up that number too.

We know that the site relies on code lifted from other projects, because the press has reported that they screwed up a copyright attribution for one of them. Apophenia is at arm’s length from the GNU Scientific and SQLite libraries, but web projects are more likely to function by cutting/pasting code from javascript libraries into what gets served up. However, if a bug is found in one of these libraries, the .gov maintainers would first file a bug report with the library maintainers, not try to fix it themselves (depending on the situation).

Is data code? The reader will be unsurprised to hear that there are about 141,000 procedure and diagnosis codes in the ICD-10 system. If there’s a database with 50 lines of description for each procedure (also plausible, especially in a sparse-on-the-page format like some XML), then you’ve got 7 million lines of “code” that needs maintaining right there.

The people running healthcare.gov surely bought whatever underlying data sets they needed from a provider, and if an insurance company submits bad pricing files to healthcare.gov, the legal contracts probably say that it is the responsibility of the insurer to fix them. But in managing the real-world project, responsibility isn’t quite so clear-cut: end users will just see a wrong number and declare that healthcare.gov is broken.

Nobody expects the site to run in fifty lines of javascript that the developers tweeted to each other. Healthcare.gov is no doubt tens of thousands of lines of code by any measure, because health care insurance in the United States is in the running for the most complex system on Earth, and this web site has to simplify it and deliver it securely to millions of people in diverse contexts. But defining where this project ends and the projects, databases, and other underlying structures begin is futile, as is measuring the complexity of the task by lines of code.

Peer reviewer incentives and anonymity

Last time, I sketched a model of the peer review process as an extensive-form game. The model described the review process as a noisy measurement: the paper has some quality, and the review measures that quality plus some bias and some variance. With greater effort, the review’s variance can be lowered. The game I described was one-shot, about a single paper going through the process.
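In symbols (my notation, not necessarily the wording of the previous post), the measurement model is something like

\[ r = q + b + \varepsilon, \qquad \varepsilon \sim N\bigl(0, \sigma^2(e)\bigr), \quad \text{with } \sigma^2(e) \text{ decreasing in effort } e, \]

where \(r\) is the review’s assessment, \(q\) the paper’s quality, and \(b\) the reviewer’s bias: more effort buys a less noisy reading of \(q\).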

I didn’t describe the reviewer’s incentive to exert effort to carefully evaluate a paper, because within the one-shot game, there is none. To get the referee to exert nonzero effort, there has to be another step inserted into the game:

  • Based on the referee’s observed effort level, the editor, author, or reader rewards or punishes the referee.

This post will discuss some of the possible ways to implement this step.

My big conclusion is that anonymity in peer review is more of a barrier than a help. Having reviewers sign their names opens up the possibility of publishing the reviews, which turns a peer reviewer into a public discussant of the paper, and turns the review itself into a petit publication. Journals in the 1900s couldn’t do this because of space limitations, but in the world where online appendices are plentiful, this can be a good way to reward reviewers for putting real effort into helping readers and editors understand and evaluate the paper.
