Don’t shoot the messenger (or evaluator)

The non-partisan Congressional Budget Office (CBO) last week released its cost estimate of the American Health Care Act, the Republicans’ plan to replace the Affordable Care Act, colloquially known as Obamacare.

The CBO looked at a range of impacts. The headline numbers from its estimate are a $337 billion reduction in the federal deficit between 2017 and 2026 and a total of 52 million people uninsured by 2026 (with 14 million losing insurance next year). There’s something to like (deficit reduction) and something to dislike (loss of health insurance for millions), depending on where you stand on these issues. Without passing judgment on the significance of the potential effects of the new bill, let’s focus on the reaction of the bill’s backers, including the White House, to the CBO and its work.

Even before the CBO report was published on March 9, potshots were being taken at the normally highly respected office. Forbes characterized them as a “pre-emptive, coordinated attack.” Joe Barton, a Republican former House Energy and Commerce Committee Chairman, had this to say about the CBO: “I don’t know what they do, they sit around and think great thoughts and everything on the issues…One of the things we need to do is reform the CBO folks.” And Gary Cohn, director of the White House National Economic Council, said on Fox News that “in the past, the CBO score has really been meaningless.”

The reactions suggest that some supporters of “repeal and replace” already sensed that the new healthcare proposal would not follow Trump’s professed goal of providing all Americans with great healthcare at lower costs than Obamacare. It is also worth remembering that the CBO director, Keith Hall, was named to his post by Republicans. This doesn’t mean that the CBO always gets its numbers right. It doesn’t. But its analysis is transparent and explained in enough detail that one can understand how it reaches its conclusions.

As an evaluator, part of whose work involves estimating the impacts of policy reforms, I can sympathize with the CBO being targeted for attack. Conducting evaluations, which is essentially what the CBO has done in tallying the costs and benefits of replacing Obamacare, is a great way to lose friends and alienate people. Evaluators are never the most popular kids on the block. We don’t control pots of money, we aren’t trumpeting success stories, and our job doesn’t involve being ingratiating in order to sell stuff. We dig around and find out what worked and what didn’t, who’s winning and who’s losing. It’s necessary (and hopefully useful) work, but it’s not a popularity contest. And evaluations always turn up shortcomings. Nobody’s perfect. As the messenger, you can expect to get (metaphorically) shot at.

At a minimum, people get a bit nervous when their organization or program is evaluated. Even if the client who commissions the evaluation outlines the questions they want answered, evaluators are still being allowed ‘inside’; they’re able to ask questions of pretty much anyone connected to or benefiting from the project. Good evaluators pry through reports, extract data from whatever sources they can get their hands on, and double check everything they hear. Sometimes, the evaluation can seem a lot like an investigation.

I’ve conducted evaluations all over the world, some of them under fairly hostile circumstances. Even if the main client wants evidence on the impacts of a reform, that doesn’t mean everyone wants to know. There are potential winners and losers who have a stake in the outcome of your evaluation. There are vested interests. Trade union representatives, for example, can be a tough bunch.

I once worked on an evaluation of the potential impacts of a mine privatization in eastern Serbia. Layoffs were expected. When I conduct this type of work, it is my policy to meet with representatives of all the affected groups. In this case, everyone knew that the restructuring was going to lead to the loss of about 2,500 jobs, and it was the task of my evaluation team to estimate what would happen to those workers’ income and job prospects afterwards. The workers’ concerns were legitimate and completely understandable from their perspective, even though the mine depended on tens of millions of dollars of budget support annually. My approach to dealing with the trade unions was to open a line of communication with them and keep it open throughout the study preparation, fieldwork and reporting period. This involved meeting with them periodically, listening to their concerns, and explaining what we researchers were doing.

On a similar study, this one collecting evidence on the impact of downsizing Croatia’s shipbuilding industry, we had a very different experience. There was unfortunately not enough budget or time to meet with the trade union representatives more than once, and the antagonism toward the evaluation was considerable. Fieldwork included conducting an employee survey in a room on the premises of the shipyards. Our survey supervisor, a young Croatian woman, was asked by a shipyard manager to turn over the list of (randomly selected) employees she was interviewing. When she refused, he locked the door to the room and threatened not to let her out unless she complied. She resolutely stuck to her guns, however, risking her safety and wellbeing in the name of evaluation ethics. Luckily, she was able to call someone in the Ministry from the locked room on her cell phone and secure her own release, but the episode left her shaken. I have even heard of survey interviewers in some countries being detained and jailed for doing their work.

In some respects, evaluators are indeed like investigative reporters. That makes the work interesting, and occasionally risky. But the evaluator as investigator is not really the association you want to create. It can sound, well, threatening. Another, more constructive analogy is that of the evaluator as a “critical friend,” a concept proposed by Costa and Kallick in a 1993 article. They noted that critical friendship must begin with building trust, and went on to highlight the qualities such a friend provides: listening well, offering value judgments only when the learner (i.e., the client) asks for them, responding with integrity to the work, and advocating for the success of the work (p. 50). As an evaluator, you are not trying to establish guilt, attack anyone, or push an agenda. You are there to help the organization or policy maker better understand the impacts of their programs or proposals, and to improve them so that their goals can be attained.

Going back to the CBO’s report, it reads like a levelheaded, thoughtful piece of analysis. If its critics have a problem with it, you might expect them (at least in a less frenzied atmosphere) to respond by questioning its assumptions or offering counter-evidence. When critical voices fail to do this, it is probably because they don’t have good answers.

This does not mean that, as evaluators, we can be smug. We live in a world where the idea of “evidence-based” does not have a strong hold on the public’s imagination and is anathema to many politicians. We need to work harder, and use the evidence we have to tell a more compelling tale.