How Britain's Getting Public Policy Down to a Science
Monday, April 28, 2014
This story is part of Governing's annual International issue.
In medicine they do clinical trials to determine whether a new drug works. In business they use focus groups to help with product development. In Hollywood they field test various endings for movies in order to pick the one audiences like best. In the world of public policy? Well, to hear members of the United Kingdom’s Behavioural Insights Team (BIT) characterize it, those making laws and policies in the public sector tend to operate on some well-meaning mix of whim, hunch and dice roll, which all too often leads to expensive and ineffective (if not downright harmful) policy decisions.
A lot of policy and spending is based “on what people think is going to be successful rather than on evidence of what actually is successful,” says Owain Service, a member of the founding BIT team, and now managing director. It is a best-guess approach to ginning up programs and policies, a method that, if practiced in other fields, would be considered “bizarre or even reckless,” noted a BIT white paper. That characterization is hardly a stretch considering the huge amounts of public money expended on programs and policies that very directly impact the lives and the well-being of citizens. Get it wrong, and it’s not just money down the drain. It could add up to actual human or societal harm.
One of the prime BIT examples for why facts and not intuition ought to drive policy hails from the U.S. The much-vaunted “Scared Straight” program that swept the U.S. in the 1990s involved shepherding at-risk youth into maximum security prisons. There, they would be confronted by inmates who, presumably, would do the scaring while the visiting juveniles would do the straightening out. Scared Straight seemed like a good idea -- let at-risk youth see up close and personal what was in store for them if they continued their wayward ways. Initially the results reported seemed not just good, but great. Programs were reporting “success rates” as high as 94 percent, which inspired other countries, including the U.K., to adopt Scared Straight-like programs.
The problem was that none of the program evaluations included a control group -- a group of kids in similar circumstances with similar backgrounds who didn’t go through a Scared Straight program. There was no way to see how they would fare absent the experience. Eventually, a more scientific analysis of seven U.S. Scared Straight programs was conducted. Half of the at-risk youth in the study were left to their own devices and half were put through the program. This led to an alarming discovery: Kids who went through Scared Straight were more likely to offend than kids who skipped it -- or, more precisely, who were spared it. The BIT concluded that “the costs associated with the programme (largely related to the increase in reoffending rates) were over 30 times higher than the benefits, meaning that ‘Scared Straight’ programmes cost the taxpayer a significant amount of money and actively increased crime.”
It was witnessing such random acts of policymaking that in 2010 inspired a small group of political and social scientists to set up the Behavioural Insights Team. Originally a small “skunk works” tucked away in the U.K. Treasury Department, the team gained traction under Prime Minister David Cameron, who took office evincing a keen interest in both “nonregulatory solutions to policy problems” and in spending public money efficiently, Service says. By way of example, he points to a business support program in the U.K. that would give small and medium-sized businesses up to £3,000 to subsidize advice from professionals. “But there was no proven link between receiving that money and improving business. We thought, ‘Wouldn’t it be better if you could first test the efficacy of some million-pound program or other, rather than just roll it out?’”
The BIT was set up as something of a policy research lab that would scientifically test multiple approaches to a public policy problem on a limited, controlled basis through “randomized controlled trials.” That is, it would look at multiple ways to skin the cat before writing the final cat-skinning manual. By comparing the results of various approaches -- efforts to boost tax compliance, say, or to move people from welfare to work -- policymakers could use the results of the trials to home in on the most effective practices before full-scale rollout.
The various program and policy options that are field tested by the BIT aren’t pie-in-the-sky surmises, which is where the “behavioural” piece of the equation comes in. Before settling on what options to test, the BIT takes into account basic human behavior -- what motivates us and what turns us off -- and then develops several approaches to a policy problem based on actual social science and psychology.
The approach seems to work. Take, for example, the issue of recruiting organ donors. It can be a touchy topic, suggesting one’s own mortality while also conjuring up unsettling images of getting carved up and parceled out by surgeons. It’s no wonder, then, that while nine out of 10 people in England profess to support organ donations, fewer than one in three are officially registered as donors. To increase the U.K.’s ratio, the BIT decided to play around with the standard recruitment message posted on a high-traffic gov.uk website that encourages people to sign up with the national Organ Donor Register (see “‘Please Help Others,’” page 18). Seven different messages that varied in approach and tone were tested, and at the end of the trial, one message emerged clearly as the most effective -- so effective, in fact, that the BIT concluded that “if the best-performing message were to be used over the whole year, it would lead to approximately 96,000 extra registrations completed.”
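The mechanics of a multi-arm message trial like this one can be sketched in a few lines of Python. The variant names and counts below are invented for illustration, not the trial's actual data:

```python
# Hypothetical results from a multi-arm message trial: each variant is
# shown to a comparable slice of visitors, and sign-ups are counted.
results = {
    "control":      {"shown": 10000, "signed_up": 230},
    "social_norm":  {"shown": 10000, "signed_up": 310},
    "loss_framing": {"shown": 10000, "signed_up": 280},
    "reciprocity":  {"shown": 10000, "signed_up": 340},
}

def best_arm(results):
    """Rank message variants by sign-up rate and return the winner."""
    rates = {arm: d["signed_up"] / d["shown"] for arm, d in results.items()}
    winner = max(rates, key=rates.get)
    return winner, rates[winner]

winner, rate = best_arm(results)
print(winner, rate)  # → reciprocity 0.034
```

Because every variant is exposed to equivalent traffic at the same time, the comparison isolates the message itself; the winning rate can then be projected over a full year's visitors, which is how estimates like the 96,000 extra registrations are produced.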
According to the BIT there are nine key steps to a defensible randomized controlled trial, the first and second -- and the two most obvious -- being that there must be at least two policy interventions to compare and that the outcome the policies are meant to influence must be clear. But the “randomized” factor in the equation is critical, and it’s not necessarily easy to achieve.
In BIT-speak, “randomization units” can range from individuals (randomly chosen clients) entering the same welfare office but experiencing different interventions, to different groups of clientele or even different institutions like schools or congregate care facilities. The important point is to be sure that the groups or institutions chosen for comparison are operating in circumstances and with clientele similar enough so that researchers can confidently say that any differences in outcomes are due to different policy interventions and not other socioeconomic or cultural exigencies. There are also minimum sampling sizes that ensure legitimacy -- essentially, the more the merrier.
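A minimal sketch of those two ingredients -- random assignment and an outcome comparison -- might look like the following Python. The taxpayer scenario and all counts here are invented; a real trial would also run a power calculation up front to fix the minimum sample size:

```python
import random
import math

def assign(units, seed=42):
    """Shuffle units and split them evenly into treatment and control,
    so the two arms differ only by chance before the intervention."""
    rng = random.Random(seed)
    shuffled = units[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def compare_rates(successes_t, n_t, successes_c, n_c):
    """Two-proportion z-test: is the treatment's success rate
    distinguishable from the control's, given the sample sizes?"""
    p_t, p_c = successes_t / n_t, successes_c / n_c
    pooled = (successes_t + successes_c) / (n_t + n_c)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_t + 1 / n_c))
    return p_t - p_c, (p_t - p_c) / se

# Synthetic example: 1,000 taxpayers, new letter vs. standard letter.
treatment, control = assign(list(range(1000)))
# Suppose 68% of the treatment arm paid on time vs. 65% of control.
diff, z = compare_rates(340, len(treatment), 325, len(control))
print(f"difference: {diff:.3f}, z: {z:.2f}")  # → difference: 0.030, z: 1.00
```

Note what the numbers say: with only 500 units per arm, even a 3-point improvement yields a z-score of about 1.0, well short of the conventional 1.96 threshold for significance -- which is exactly why the BIT insists on minimum sample sizes.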
As a matter of popular political culture, the BIT’s approach is known as “nudge theory,” a strand of behavioral economics based on the notion that the economic decisions that human beings make are just that -- human -- and that by tuning into what motivates and appeals to people we can much better understand why those economic decisions are made. In market economics, of course, nudge theory helps businesses tune into customer motivation. In public policy, nudge theory involves figuring out ways to motivate people to do what’s best for themselves, their families, their neighborhoods and society.
When the BIT started playing around with ways to improve tax compliance, for example, the group discovered a range of strategies to do that, from the very obvious approach -- make compliance easy -- to the more behaviorally complex. The idea was to key in on the sorts of messages to send to taxpayers that will resonate and improve voluntary compliance. The results can be impressive. “If you just tell taxpayers that the majority of folks in their area pay their taxes on time [versus sending out dunning letters],” says the BIT’s Service, “that adds 3 percent more people who pay, bringing in millions of pounds.” Another randomized controlled trial showed that in pestering citizens to pay various fines, personal text messages were more effective than letters.
There has been pushback on using randomized controlled trials to develop policy. Some see it as a nefarious attempt at mind control on the part of government. “Nudge” to some seems to mean “manipulate.” Service bridles at the criticism. “We’re sometimes referred to as ‘the Nudge Team,’ but we’re the ‘Behavioural Insights Team’ because we’re interested in human behavior, not mind control.”
The essence of the philosophy, Service adds, is “leading people to do the right thing.” For those interested in launching BIT-like efforts without engendering immediate ideological resistance, he suggests focusing first on “non-headline-grabbing” policy areas such as tax collection or organ donation that can be launched through administrative fiat.
Recently the BIT moved out of Treasury to become a quasi-governmental operation. The move, says Service, was so that the BIT could expand both the countries and the sectors in which it operates, inasmuch as it’s not just governments that are trying to help people make better decisions for themselves and society. Randomized controlled trials, for instance, are now in widespread international use among NGOs doing antipoverty work.
One interesting thing about the whole BIT phenomenon is that the inspiration for it came from the United States, including the work of key academics from heavyweight institutions like the University of Chicago, Yale and Harvard. Applying behavioral insights to policy has caught the interest of the Obama administration, too. The White House has established a behavioral sciences team within its Performance Improvement Council, an interagency group that serves the federal performance community. The team is reportedly working with key regulatory agencies testing out different types of letters to noncompliant parties. The effort has already caught the attention of the likes of Fox News, which quoted a Utah State University professor as saying, “Ultimately, nudging ... assumes a small group of people in government know better about choices than the individuals making them.”
It’s no surprise, really, that the White House’s initiative has caught that kind of attention from critics who decry nudge efforts as “mind control.” But most skeptics here aren’t so much worried about mind control as they are about a more down-to-earth issue: whether elected officials in the U.S. -- particularly legislators, who haven’t always been enthusiastic adopters of results-informed policymaking and budget decisions -- can learn to embrace facts and data alongside emotion and politics.
Service has a sunny take on the topic, one that will face a tough test on this side of the pond. “We find that elected representatives, ministers, senior officials get really interested when we’re able to show the impact of our work. Rather than saying, ‘We’ll evaluate a program for you,’ we are more likely to get traction by saying, ‘We’re going to put this great new program in place, but we’re going to run it as a trial, so that we can see how effective it is.’ The trial is then your policy.”
(By Jonathan Walters | MAY 2014)