At their convention this summer, after a year of primary wrangling and many more years of labor activism, the Democratic Party officially adopted a platform that calls for a federal minimum wage of $15 per hour. That’s more than double the current federal minimum wage of $7.25 per hour. They base their number on the principle that a single parent of two children, working full time, should not fall below the federal poverty line.
In contrast, the Republicans have consistently opposed raising the minimum wage, implicitly supporting it at its current level, set under the George W. Bush administration. They often argue that raising the minimum wage would eliminate many jobs whose marginal value to the employer is lower than the new minimum.
In essence, the parties appear to agree that there should be a minimum wage, and they agree about most aspects of labor law (e.g. the 40-hour week, overtime pay at a 50% premium). They just disagree about the number, by roughly a factor of two. Both sides make principled, philosophical, and emotional arguments in favor of their position. Neither proposes any data. What actually happens depends on which party gets 51% control of the legislature and the executive. In mathematics we would call this a step function: policy doesn’t change, and your vote doesn’t matter, until a crucial point where one last vote crosses a threshold and produces a discontinuous change.
This is a silly way to run a democracy. It’s an even sillier way to run an economy. I think we can do better.
We have two big problems to solve: we make important decisions without data, and we make them discontinuously. This is like trying to drive while blindfolded, in a car with a lightswitch instead of an accelerator. Solving either problem independently is somewhat unrewarding; the system is still broken. Solving them together might be easier.
Let’s focus on the data problem first. How can we get real evidence of the effect a change to the minimum wage would have? Many economists have tried, for example, to estimate how many people would lose their jobs as a result of an increase to the minimum wage. The results are all over the map; they don’t even agree on whether the effect exists.
The lack of agreement in economic studies is no surprise; they have almost nothing to go on. The best data available for this kind of measurement comes from studies like Card and Krueger’s, which surveyed 410 fast-food restaurants near the New Jersey–Pennsylvania border when New Jersey raised its minimum wage. That’s a worthwhile study, but it only applies to one industry, in one location, at one time, and only if you are willing to accept the sweeping assumption that there’s nothing else really “different” between one side of the border and the other.
The statistical power of a study with, in effect, two data points is not very high. Getting more statistical power is hard. In serious sciences, there’s only one universally trusted way to get clean data about the effect of some intervention: a randomized controlled trial. If economics has a bad reputation for reliability, it’s mostly because economists don’t do them. Economists can’t raise the minimum wage for half the businesses at random, and then observe everybody’s tax records to see how the two groups fared.
But the government can.
A randomized trial of this kind, known in Silicon Valley as an “A/B test”, seems a little unlikely when comparing the two parties’ minimum wage proposals. Surely business owners would not stand for their minimum wage varying by a factor of two, controlled by a coin toss. Luckily, there’s no need. Gigantic randomized trials of this kind would be capable of detecting extremely small effects. For example, there are 28 million small businesses in the US. A randomized trial across these businesses would be able to detect effects smaller than 1/5000th of the natural variation between them. That’s less than 0.02%.
Suppose you believe that doubling the minimum wage would cause small businesses to fall ten percentiles in growth, relative to their current distribution. With an A/B test of 28 million businesses, you could detect that effect by raising the minimum wage by 1 cent per hour for half of them and lowering it by 1 cent for the other half.
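To put rough numbers on this, here’s a back-of-envelope sketch in Python. It leans on illustrative assumptions of mine: growth is treated as a standardized, roughly normal outcome, the “ten percentiles” are read as a shift from the 50th to the 40th percentile, and the effect is assumed to scale linearly with the size of the wage change. Treat it as an order-of-magnitude illustration, not a real power analysis.

```python
# Back-of-envelope sketch of the argument above. Assumptions (illustrative,
# not an exact model): "growth" is a standardized outcome with standard
# deviation 1, the hypothesized effect scales linearly with the size of the
# wage change, and we compare the means of two random halves.
from math import sqrt
from statistics import NormalDist

N = 28_000_000            # small businesses in the US (figure from the text)
n_per_arm = N // 2

# Standard error of the difference between the two halves' mean outcomes,
# measured in units of the natural between-business standard deviation.
se = sqrt(1 / n_per_arm + 1 / n_per_arm)
print(f"standard error of the comparison ~ 1/{1 / se:,.0f} of natural variation")

# Hypothesis from the text: doubling the minimum wage (a $7.25/hour increase)
# drops a typical business ten percentiles, e.g. from the 50th to the 40th.
shift_per_doubling = NormalDist().inv_cdf(0.50) - NormalDist().inv_cdf(0.40)
shift_per_dollar = shift_per_doubling / 7.25

# The trial's contrast is 2 cents: +1 cent for one half, -1 cent for the other.
effect = shift_per_dollar * 0.02
print(f"implied effect of the 1-cent trial ~ {effect / se:.1f} standard errors")
```

Under those assumptions the 2-cent contrast comes out at a couple of standard errors of the comparison, so the exact detectability depends on the modeling details, but the broader point holds: with samples this large, the statistical power comes from the head count, not from the size of the perturbation.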
I know people might find any randomization of the law distasteful, but we’re talking about 1 cent per hour. For tiny fluctuations like this, I think randomization is politically viable.
Of course, this is not the only possible randomization scheme. An economist might want to randomize at a coarser grain, like entire zipcodes, sacrificing statistical power in order to study larger scale dynamics. Conversely, we could imagine randomizing at a finer scale, maybe down to individual employees, maybe subdivided in time, to gain tremendous power to resolve microscopic effects.
Remarkably, you can run many such studies simultaneously without creating interference. Specifically, they are independent to first order, and the first-order approximation is extremely good when considering these kinds of small perturbations.
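Here’s a toy simulation of that claim. The additive outcome model and the particular effect sizes are illustrative assumptions on my part, but they show the mechanism: because the two assignments are drawn independently, each trial contributes (almost exactly) the same noise to both arms of the other, so analyzing each trial naively still gives an unbiased estimate.

```python
# Toy simulation: two randomized trials run over the same population at once.
# The outcome model (additive effects plus noise) and all effect sizes are
# illustrative assumptions, chosen only to make the mechanism visible.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000                       # units, e.g. businesses or employees

# Independent coin-flip assignments for the two trials.
t1 = rng.integers(0, 2, n)
t2 = rng.integers(0, 2, n)

effect_1 = 0.010                    # small relative to the outcome's sd of 1
effect_2 = -0.020
outcome = rng.normal(0.0, 1.0, n) + effect_1 * t1 + effect_2 * t2

# Analyze each trial as if it were the only one: a simple difference in means.
est_1 = outcome[t1 == 1].mean() - outcome[t1 == 0].mean()
est_2 = outcome[t2 == 1].mean() - outcome[t2 == 0].mean()
print(f"trial 1: true {effect_1:+.3f}, estimated {est_1:+.3f}")
print(f"trial 2: true {effect_2:+.3f}, estimated {est_2:+.3f}")
```

The trials do interact at second order, since a unit receiving both treatments isn’t necessarily the exact sum of the parts, but for perturbations this small those interaction terms are lost in the noise.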
Out of all these countless possible studies, how do we decide which ones to do? The current political process will always choose None of the Above; whichever party is in power would rather pursue their agenda than collect data that might refute it. What would an experimentation-focused process look like?
In typical legislative systems, 50% or more of the legislature is empowered to take actions that alter the lives of 100% of the population, including the power to run a randomized trial with half the country in the control group (no change) and half in the treatment group (under the change). What if X% of the legislature could pass randomized-trial laws whose treatment group is X% of the population, selected at random? Even individual legislators could run “small” trials to test their own hypotheses about the consequences of government actions.
As much as I love randomized trials, the idea of having my life upended by random chance and one crazy legislator sounds awful and insupportable. To avoid this situation, we can also limit the “strength” of trials. To define this, first consider that even the slimmest legislative majority can pass bills of “full strength”, whose magnitude is limited only by the constitution in effect. Proportionally, we could allow a minority representing X% of the legislature (X < 50) to run trials at 2X% of full strength. How do we establish full strength? In principle, one can imagine a constitutional court, presented with a legislative template with blanks for various parameters, establishing the constitutional limits on those parameters. That certainly seems unlikely in the context of contemporary forms of governance, so let’s look for an easier alternative, at least as a first step.

Suppose a Y% majority, Y > 50, passes some piece of new legislation that changes some value from A to B. Maybe it’s the number of weeks of unemployment insurance eligibility. The minority (X% = 100% – Y%) opposes the change, and believes that a randomized trial is worthwhile. Both the “before” and “after” states are presumptively constitutional, so the difference between them sets a lower bound on “full strength”. Then, with our proposed rule, the minority can run a trial that keeps X% of the population closer to the old plan. In the strongest allowed trial, this treatment group would be eligible for B – (B – A) * 2X% weeks of unemployment, which works out to B – (B – A) * X / 50.
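To make the arithmetic concrete, here’s a tiny sketch of that rule, using the unemployment-insurance example; the specific week counts (26 before, 13 after) are hypothetical numbers chosen only for illustration.

```python
# Sketch of the proposed strength rule: an X% minority (X < 50) may run a
# trial at 2X% of full strength, where full strength is the A -> B change
# the majority just enacted. The week counts below are hypothetical.
def treatment_value(a, b, x):
    """Value experienced by the treatment group in the strongest trial an
    X% minority may run against a law that changed the value from a to b."""
    return b - (b - a) * x / 50

A = 26   # weeks of unemployment eligibility under the old law (hypothetical)
B = 13   # weeks under the newly passed law (hypothetical)

for x in (1, 10, 25, 40, 49.9):
    weeks = treatment_value(A, B, x)
    print(f"{x:>4}% minority -> {x}% of people get {weeks:.1f} weeks; "
          f"everyone else gets {B}")
```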
There are two obvious extreme cases. One is the small minority, X -> 0. Naturally, in this case, the number of people affected goes to zero, as does the magnitude of the effect.
The other special case is X -> 50, approaching an equally split legislature. In this limit, the minority can run a trial affecting 50% of the population, and people assigned to the treatment group will continue to experience value A, unchanged from before the bill passed. This means we have satisfied our continuity goal!
In case that isn’t clear, let me break it down. Suppose the two parties have two preferred states of affairs, and the legislature is always nearly evenly split. Under current systems, whichever party gets >50% of the legislature enacts their agenda, and we toggle back and forth between the two states whenever control switches between the parties. Under this system, the majority enacts its agenda, but the minority can roll it back for half the people, resulting in half the population experiencing version A, and half experiencing version B. This is true regardless of which party holds the majority!
I think my favorite thing about this arrangement is that, when public opinion is split on what policy is best, the resulting trial has the strongest treatment and the largest sample size, resulting in the greatest possible statistical power. That means that instead of an endless deadlock, we will quickly have the best possible data to tell us about the relative effects of each party’s policy.
Obviously not all policies are subject to this kind of trial. I’m not suggesting we randomize our next declaration of war, or constitutional amendment. Still, almost any domestic policy that includes a number can reasonably be treated this way, from tax rates to statutes of limitations.
There are many questions left to answer. How do we prevent legislators from “re-rolling” until a specific person ends up in the treatment group? How do we prevent legislators from creating a multiplicity of redundant trials that add up to excessive strength? How do we maintain budgetary balance, coupling taxation and spending while allowing this kind of randomization? How do we ensure that trials have sufficient statistical power? I don’t know the answers, but I have a feeling that with a bit of help from game theory, we could construct a set of bylaws that would achieve the goal while limiting abuses.
Note that I said “bylaws”. Everything here can be done by most any legislature on its own initiative, just by altering its own bylaws. A constitutional amendment might help, but it shouldn’t be necessary.
Of course, this raises the final question: would any legislative body ever actually do this? As I noted before: whoever has power now is usually happy with the structural status quo.
That might be broadly true, but in this case I think there’s reason for hope. This proposal does reduce the absolute power of the majority, but it does so by increasing the autonomous power of each individual legislator, including members of the majority. If the majority legislators are all truly of one mind, then perhaps there is no advantage, but that’s never true. Fissures and factions are always visible within the majority. A randomized-trial bylaw would empower those factions to show off their own proposals, and arm them with the data to convince everyone else.
In a sense, I think this proposal is plausible because it appeals to arrogance. Everyone believes that their policy proposals are the right ones, and their opponents are wrong. That means both sides ought to favor a trial, for it will surely prove them right.