Evaluation, the Unpopular Profession

Popularity vs. accountability

If you want to be popular, probably best not to go into evaluation. Pick another role, another profession.

However, evaluation performs a necessary and valuable function. Like street cleaning and colonoscopies, someone has to do it.

Evaluators are paid (tolerated?) to deliver sometimes unpleasant truths or hard-to-swallow advice. The role has evolved, slightly, since medieval times, when the king’s fool, among other things, had to speak truth to power, presumably in some palatable way, like mixing humor and self-deprecation. Mercifully, we evaluators no longer need to dress up in funny costumes and make fools of ourselves (although sometimes we inadvertently do a bit of the latter).

In all honesty, the point of this blog post is not to deter would-be evaluators from entering the field. Rather, it is to warn you that you may not make as many friends as you would if you were, say, working in sales or marketing.

Also, when I say “unpopular” I do not mean that demand for evaluators is low. To the contrary, there is (still) lots of evaluation and evaluation-type work out there.

The fact is, auditors, tax collectors, inspectors, evaluators and their ilk – in what are sometimes called the accountability professions – are not really meant to be liked. 

President Trump’s firing of six Inspectors General in the last few months notwithstanding, most people know that accountability is important. Like taking medicine or going to the gym, being evaluated or investigated can be disagreeable for the object of the evaluation, but there is also a good chance it will make whatever or whoever is being evaluated better.

Why you need a thick skin

I have known of evaluators who have been threatened, fired, had their work trashed, or even been held against their will. Here are a couple of examples (one personal).

I once was leading an evaluation in Croatia on the impact of employee redundancies at the country’s shipyards. The data collection supervisor on our team, an intrepid young Croatian woman, was asked by a shipyard manager to turn over the list of (randomly selected) employees she was interviewing. When she refused, he locked her in the interview room and threatened not to let her out unless she complied with his request. She still refused, risking her safety and well-being in the name of professional integrity and respondent confidentiality. Luckily, she had a contact in the Ministry she was able to call, and the manager relented and unlocked the door. She was shaken, but able to continue with her interviews. The evaluation was completed and well received. The reforms, on the other hand, did not happen.

It happens to evaluators – it has happened to me – that your findings and conclusions are rejected, even when you think the analysis is strong.

Last year I conducted an evaluation-type study in an African country for the World Bank. It involved assessing the likely social impacts of a $100 million program. The manager who commissioned my work (in one of the Bank’s divisions – referred to as “Global Practices” – with a focus on social issues) was happy with the analysis. However, the manager whose program I reviewed (from the Global Practice responsible for the assistance program) actually refused to speak with me. After reading my report and its conclusions, he rejected the analysis and brought in someone else to redo it. Not a happy experience, but you develop a thick skin in this line of work.

A popular way of rejecting evaluation findings is to question or attack the evaluation methods, or the evaluator’s qualifications. These are good ways of deflecting attention from the findings.

Often, you won’t know whether the client was unhappy with the quality or scope of the work, or if there were other internal politics at play.

These types of unpleasant experiences tend to be rarer, and less of an issue, when the client commissioning the evaluation is not the one whose work is being evaluated. This is often the case with donors and US Government agencies such as USAID or the Millennium Challenge Corporation. They fund the work while another organization implements it, so they are generally, and genuinely, interested in whether the money is being well spent: Is the program going according to plan and getting results?

You may not be popular, but you still have to be nice

With evaluators, when people are nice to you, it isn’t necessarily because they like you or see you as a linchpin in their career progression. Of course, hopefully you’re a decent person with a disarming personality! But quite possibly, their chumminess reflects a, shall we say, slight bias. You are evaluating their programs, after all, and they most likely prefer that you see them at their very best. If people are quite nice to you at the beginning but start cooling once they realize you’re serious about your job, you’ll know that the amicability was more of a tactic than anything else.

It is your job to look past the surface, dig into the data, find out what’s really happening, and report fairly. Yet you yourself need to adopt an attitude of goodwill and cordiality toward others, regardless of what your findings are, no matter how useless, inept, or corrupt the program is. (I honestly have evaluated very few programs that fall into that category, quite possibly because everyone knows that the evaluators will be showing up.)

Why is being nice to others who might not be nice to you important?

First, being nice is simply part of being professional.

Second, you want to build relationships. You need others to trust you, share information and, if all goes well, accept your findings.

Third, maintaining a pleasant demeanor is simply a good default attitude: it helps contain whatever feelings you may have about the program you’re evaluating. Whether you think it is amazing or terrible, you want to keep those feelings separate from the work.

The bottom line is that being decent to others is a soft skill you want in your toolbox.

The evaluator as outsider

Closely linked to being “unpopular” is being an outsider.

Professional independent evaluators are, by dint of their position, outsiders. (This is different from internal evaluators who work within an organization.) You need to accept and embrace that role, even while building trust with the client and stakeholders you meet. You need to obtain information from them, and want them to accept your findings.

However, it is a fine line. As an evaluator, you arrive in a new place, with its own professional or work culture, maybe in a new country. You start poking around, and asking questions. That’s the job. People will be on their guard.

Outsider status is beneficial in that it can shield you from certain biases: the kind you would bring if you were part of the system you are evaluating, such as belonging to one or another of the political parties, ethnic groups, clans, or other groupings whose influence you are probably not even aware of.

A corollary of this is that foreign governments often value non-nationals precisely because they are outsiders: independent, not connected to a particular faction. Sometimes this is justified, sometimes not. Even though an outsider is often only partially aware (or entirely unaware) of the unwritten and unspoken codes and connections, there is value in standing outside, in not being part of the culture.

One can be less beholden, less biased, and face lower risk of consequences from producing unpopular findings, since everyone knows that when it’s over the evaluator will board the plane and leave the country.

The international consultant, very much an outsider position, also brings an international perspective to the table, based on evaluation experience in multiple countries and cultures.

The evaluator as friend?

I like the concept of critical friend, which I have found very useful in understanding and accepting my role as an evaluator. It implies that you are there to help through constructive criticism. One of the best descriptions comes from John MacBeath, a Cambridge University academic, in a 1998 article on improving school effectiveness:

The Critical Friend is a powerful idea, perhaps because it contains an inherent tension. Friends bring a high degree of unconditional positive regard. Critics are, at first sight at least, conditional, negative and intolerant of failure. Perhaps the critical friend comes closest to what might be regarded as ‘true friendship’ – a successful marrying of unconditional support and unconditional critique.

Good evaluators should be respected for their work. They are not going to be the most popular kid on the block and should not strive for that.

Sometimes evaluation findings are accepted, sometimes rejected, sometimes ignored. Sometimes you are hired again, sometimes you are not. As an independent consultant moving from one assignment to the next, one client to the next, you often never learn of the outcome of your work.  It comes with the territory and you should not be disheartened by this.

I began this post by noting how evaluation is not a popular profession. In this age of online calumny and fake news, where many suspect any criticism of being driven by ulterior motives, the notion of accountability is more important than ever.

Nonetheless, if you stick to your guns, maintain your integrity and deliver credible and useful advice, you may be the best critical friend the people who hired you have ever had.


The art of the written critique: on giving and receiving

On sharing your writing

For writers, the road to perfection passes through review purgatory.

The fact that the review process is collaborative makes writing both easier and harder. Easier because the burden of improving the writing is shared. Harder, because you as the writer are exposed to the scrutiny and criticism of others.

In this blog post I will propose some ways of smoothing the rocky passage, from the perspective of both the giver of feedback and the receiver. 

In the fields of international development and evaluation, sharing written drafts for comment is standard practice. While essential to good quality outputs, it is also a laborious process. A report can go through multiple rounds of revisions over a period of weeks, even months.

A silent dialogue

Imagine, in this fast-paced world, a slow-motion conversation in which the speaker takes as much time as he or she needs to reflect and ruminate on a given subject. It is an extended back-and-forth dialogue with the audience that starts…and…stops…and…starts…and…goes…on…for…weeks. The dialogue continues until…no one has more to say, and it is finally over.

On top of that, imagine that the issues under discussion are technical in nature, and would mean little to most people listening in.

Such a conversation might sound agonizingly dull. But it mirrors the way good reports get written. Most of this dialogue is not spoken, of course. It takes place on the page. The back-and-forth is the writing, reviewing, commenting, editing, revising, and rewriting that happens in the “document space.” It is usually very effective. I have seen plenty of reports transformed from sub-par to excellent as a result.

A far from dull process

Being the responsible writer in this ‘dialogue’ is far from boring. It can, in fact, be nerve-wracking, waiting to see/hear what the other person thinks, knowing they will find and point out weaknesses (which is their job), and wondering how tough they will be.

If you are the primary team member responsible for the writing, after you have labored over your draft, there’s always a moment of trepidation after hitting the “send” button. Is the report on the right track? Is it broadly acceptable? How difficult will the comments be to address? How many comments will there be? Should I have spent another day revising before sending it in? If you are a consultant, you may even be wondering, Will I ever get hired by this client again?

In the fields of evaluation and development, you will be in the role of reviewer as well as reviewee. You need to be able to dish it out as well as take it. That is, to give and accept critiques of the reports that you and others write. These are skills most development professionals learn over the course of their career.

(I should note that the word “critique” is not normally used in these circumstances. It’s a little too loaded, perhaps. The preference is for “comments,” “feedback” or “review.” But critiquing is essentially what’s happening.)

When I first began working in this field, I was delighted to find weaknesses in a given document. Being asked to review the work of others gave me the sense that I had arrived, that I could hold my own among colleagues, most of whom were older and far more experienced. It gave me confidence.

Unfortunately, it also caused me at times to become a bit cocky in my reviews. I may have expressed my reservations in language that was a little too harsh. I’ve learned that, on the written page at least, while being straightforward is fine, being severe is unnecessary and unhelpful.

Who comments?

Comments can be generated by the client, the manager overseeing the task, other members of the team, and by specialists from outside the team. For official publications, an editor will be hired. For certain documents, the feedback process may be formalized as a peer review process, as is done for academic journal articles.

If you are new to the field, you’ll need to get comfortable with feedback, because it will always be there. If you don’t receive feedback from someone, it is not because you are brilliant, I’m afraid. It’s because they either didn’t review your draft, or didn’t read very carefully.

On the flipside, you can always find constructive ways that work written by others can be improved. It may take a couple of read-throughs, but issues will come into focus, like those Magic Eye pictures, or autostereograms, that reveal a 3D image if you stare at them long enough.

Receiving a lot of comments means additional work, of course. But you want substantive feedback: it will make the report that much better. What you want to avoid is receiving feedback along the lines of “I don’t understand what you’re trying to do here” or “this is not what we were expecting” or “the quality is unacceptable.” That generally means you need to start over.

Then there is the pedantic reviewer, who finds fault with every minor issue or who gratuitously asks for everything to be explained ad nauseam, which can also be stressful.

Below are a few things to keep in mind:

On receiving feedback

  • All comments from the client or your manager will need to be addressed, either by incorporating them in the text or by making a good case for why not.
  • Other than the above two cases, you don’t need to address every single comment. Indeed, some comments may contradict one another.
  • When deciding the order in which to address comments, consider plucking the low-hanging fruit first. Addressing the easier comments first will give you an encouraging sense of progress, and additional time to reflect on how to tackle the trickier feedback.
  • Keep in mind that it is a good sign if you get no feedback suggesting that the work’s approach was wrong, and if no one asks for a complete redo.
  • Before sharing your work with others, it is paramount to edit it yourself first. That is one way of reducing the number of comments you will receive.
  • To avoid going down a cul-de-sac, it is a good idea to communicate with people who will be reviewing the piece before you send them the final draft to review. Share ideas and outlines with them early on, and incorporate feedback. This generates interest and buy-in for the work.
  • Don’t take criticism personally. It’s not you, it’s the writing.

On giving feedback

  • As you review, ask yourself:
    • Is the piece addressing the stated objectives, the questions it poses?
    • Is anything important missing? 
    • Does the structure work?
    • Does the work flow in a natural progression?
    • Are certain elements underemphasized or overemphasized?
    • Are there any errors?
  • Avoid framing comments in a negative way, e.g. “this is incorrect” or “you didn’t understand.” Positive turns of phrase include “I suggest” or “think about phrasing it this way.” You don’t want to demotivate the writer with harsh criticism.
  • Although you probably have not been asked to copyedit, if you do come across grammatical errors or typos, it’s not inappropriate to simply make the correction. When I do that, I’ll add a note to the effect that I “took the liberty of doing some light editing” or “made a few edits along the way.”
  • There will be reports that are of poor quality or that completely miss the mark. Remedial measures may be needed, including a complete rewrite, or even bringing in another person to do it. Even this situation should be handled diplomatically.
  • You can write comments directly into the report, and also include general comments in the body or in your email response.
  • Use the sandwich approach — start with what you like about the report, and end on a positive note. Highlight the strengths. That’s encouraging for the writer.

As a general rule, feedback makes everyone’s work better. It is the essence of quality control. Having more eyes poring over a report, more brains scanning it, is effective for uncovering issues before a written work is signed, sealed and delivered.

Sometimes it may feel as though you are getting hammered by critics. If your critics are insightful and forthright, what they’re really doing is helping you hammer your work into shape. And that’s a good thing.


Monitoring and Evaluation in the time of coronavirus: Unprecedented real-time tracking of a pandemic

Life was simple. I had planned to devote this blog to helping organizations set up monitoring and evaluation (M&E) systems.

Then global events overtook me. And all of us.

With the coronavirus, the invisible Covid-19, we have woken up to find ourselves in a not-so-brave new world. Socializing is over — social distancing is in. As our economic, social, and cultural lives are shut down by the health scare and accompanying protocols, it seems almost impossible to have a conversation in which the virus doesn’t intrude. It is equally rare to read news, or any article, that isn’t about the topic in one way or another. And I find it impossible to go back to my rough draft on M&E systems. The new coronavirus reality has occupied our minds.

Unlike anything in living memory, this invisible, odorless and often symptom-free virus has abruptly changed our world, affecting nearly everyone. Perhaps remote Amazon communities and young infants are the only humans on the planet still unaware, as country after country shuts down all public life and economies are pushed to the brink of collapse.   

We now live in a world of isolation and uncertainty — over our personal health and economic well-being. This uncertainty is fueled by the fact that for the first time in over a century we are experiencing a pandemic of this scope. Many of us have gone, in a matter of weeks, from fixating on the costs to humanity of climate change (a medium-to-long-term civilizational threat) to the costs of coronavirus, which are immediate and life-threatening in a very personal way. (Ironically, the global coronavirus shutdown seems to be the best thing that has happened for emissions reductions in decades, although don’t count on the effect lasting after the social distancing era ends.)

Covid-19, this shrouded, faceless phantom with a scythe, silently stalking the globe, has triggered massive, rapid policy changes and behavioral changes, each with economic consequences. Last week, 3.3 million Americans filed unemployment claims. Whole sectors — travel, hospitality, dining, entertaining — are on their knees and it is far from clear how many will rise again from the rubble.

Who knows how bad it will be, or what world we will re-emerge into? After countless deaths, will humanity emerge healthier, having survived, and become inoculated against the virus, or will it be more vulnerable? Will we socialize less, now that we have grown accustomed to virtual meetings, or socialize more, because we’ll have been starved of real human contact? Whoever reads this blog years from now will know the answer. I don’t.

Notable from an M&E perspective is our ability to track the number of cases — infected, recovered, deceased — in real time across the globe, as illustrated by this Johns Hopkins University dashboard. I will venture to say there has been nothing else like it in the history of monitoring: everyone with access to the internet (several billion people, now) can follow a global phenomenon as it unfolds, with almost hourly updates. This is not the Olympics (now postponed), but it is a deadly kind of score-keeping nonetheless.

Of course, the numbers we see in many countries must be taken with a grain of salt — would anyone like to hazard a guess about why Russia (population 146 million) is reporting fewer cases than Luxembourg (population 602,000)?

Aside from the nature of a country’s political regime, the amount of testing seems to be correlated with the number of cases — the more tests that are carried out, the more infections are found. Much more accurate data on infections and mortality rates would require testing a large, randomly selected sample of the population in each country.
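
To make that concrete, here is a minimal sketch of what a random testing sample would buy you: a prevalence estimate with a quantifiable margin of error. All figures below are invented for illustration.

```python
import math

# Hypothetical random testing sample -- all figures invented for illustration
sample_size = 10_000   # randomly selected people tested
positives = 120        # tests that came back positive

p_hat = positives / sample_size                     # estimated prevalence
se = math.sqrt(p_hat * (1 - p_hat) / sample_size)   # standard error
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se    # 95% confidence interval

print(f"Estimated prevalence: {p_hat:.1%} (95% CI: {low:.1%} to {high:.1%})")
# Estimated prevalence: 1.2% (95% CI: 1.0% to 1.4%)
```

Self-selected testing, by contrast, produces a numerator with no meaningful denominator.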

At present, those who get tested are mainly people who think they have symptoms and are able to get a test. Many are infected and don’t know it. And some who would like to get tested cannot, for reasons of access or a lack of test kits or hospital resources.

So the tracking is not an exact reflection of reality, but it is a near approximation, and we must use our own powers of reasoning to judge the numbers’ accuracy, what they mean, and why they differ. Nonetheless, this sharing and publicizing of data is a remarkable phenomenon, with political implications.

Governments, once they finally decided to react, have been passing policies and stimulus measures remarkably swiftly, with massive interventions in public life and in the economy, pumping in trillions of dollars to cushion the blow. This is not to say that the measures are always well designed, and the lack of coordination between countries is lamentable. But, from a policy perspective, it is quite astounding to see how quickly evidence and evaluation of an issue — chiefly by epidemiologists such as Neil Ferguson and colleagues at Imperial College London — are taken on board and turned into policy.

As a thought experiment, imagine if the world were tracking, on a daily basis, every death from malaria, every case of child mortality, every woman killed by her partner, every rise in greenhouse gas emissions, every time another person slips into poverty. Armed with this real time information, citizens would be busy educating themselves on the issues, how to prevent them, following the rise and fall of deaths, or emissions, in each country. Imagine if governments were spending billions and trillions of dollars to mitigate these problems and find solutions.

Of course, it is quite difficult to imagine such a thing. Why? Because the people affected are too few, relatively speaking, and they are too poor. The problems are too distant, geographically or temporally speaking, from those in power.

But we now have a pretty good idea of where the tipping point is. That is, the point at which society and government suddenly become willing to act. It occurs when the threat to people in middle and high-income countries is immediate and potentially fatal. It is too early to know the mortality rate for this pandemic; it might lie between 1 and 2 percent, although, judging by the Johns Hopkins data, the range is remarkably wide right now. At the time of writing (March 27, 2020), among countries with at least 5,000 cases, mortality is just 0.6 percent in Germany and 0.8 percent in Austria, but 7.6 percent in Spain and an alarming 10.2 percent in Italy.

Coronavirus cases (countries with at least 5,000 cases)

Country Cases Deaths Mortality rate
Germany 47,373 285 0.6%
Austria 7,317 58 0.8%
South Korea 9,332 139 1.5%
United States 86,012 1,301 1.5%
Switzerland 12,311 207 1.7%
Belgium 7,284 289 4.0%
China 81,897 3,296 4.0%
United Kingdom 11,816 580 4.9%
France 29,581 1,698 5.7%
Netherlands 7,469 547 7.3%
Iran 32,332 2,378 7.4%
Spain 64,059 4,858 7.6%
Italy 80,589 8,215 10.2%

Source: Own calculations, using data from Johns Hopkins University Coronavirus Covid-19 Global Cases by the Center for Systems Science and Engineering
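
For transparency, the mortality rates in the table are simple arithmetic: deaths divided by confirmed cases. A minimal sketch, using a few of the table’s own rows:

```python
# Crude case-fatality (mortality) rates: deaths divided by confirmed cases,
# using a few rows from the table above (March 27, 2020 snapshot)
cases_deaths = {
    "Germany": (47_373, 285),
    "Austria": (7_317, 58),
    "Spain": (64_059, 4_858),
    "Italy": (80_589, 8_215),
}

for country, (cases, deaths) in cases_deaths.items():
    print(f"{country}: {deaths / cases:.1%}")

# Germany: 0.6%
# Austria: 0.8%
# Spain: 7.6%
# Italy: 10.2%
```

Run over live data instead of a snapshot, this is essentially the calculation behind the dashboard’s mortality figures.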

No one is quite sure why, although various theories have been advanced, including access to and prevalence of testing (Germany is doing better), demographic factors (South Korea has a younger population) and, of course, government policy on social distancing.

For the historical record, at the time this blog is posted, there have been 585,040 confirmed Covid-19 infections and 26,455 deaths worldwide.  (Between the time I drafted this post in the morning and publishing it this afternoon, the number of cases jumped by more than 35,000, and the number of deaths by over 1,500. That is how bad things are.) For those who are reading this in the future, however, the tally will be many millions of cases, and hundreds of thousands, maybe even millions, of deaths.  

This is not a joyful post to write. Hopefully, in the weeks and months to come there will, again, eventually, be positive news and issues to write about. For now, one can take some small comfort in knowing that M&E systems, if properly deployed, can be used to inform decisions for the common good. In the meantime, stay safe and keep well.



Beneficiary assessments: Questions, questions, questions

This blog post about beneficiaries is built around a series of questions. Why? Because whether you are a program designer, a program implementer or an evaluator, you spend much of your professional life trying to find answers to them.

What policymakers want to know

Policies are generally designed with the aim of allocating resources to, or creating opportunities for, one or more segments of a given population. A question policymakers often ask themselves is: Who will benefit if…?

Addressing issues related to program beneficiaries depends a lot on how the questions are framed. Typical questions might be: Who will benefit from a new policy? Who is benefiting from the status quo? What are my policy options? or What will be the effect of choosing policy A over policy B?

Existing conditions, and who is benefiting from them, are themselves influenced by prior policy decisions. (And don’t forget that not implementing a policy change is a policy in itself.)

Policy choices and questions like those above also concern actors in the foreign aid / development assistance sector. Governments in rich countries need to be able to justify their assistance programs to their taxpayers. One important way they do this is by demonstrating that their foreign aid is making a positive difference in people’s lives.

Policymakers in recipient countries want to exert a positive impact for some or all of their constituents. And they generally want to be rewarded for it, for example by getting re-elected, or their bosses getting re-elected.

Design, deliver and evaluate

The task of program designers is to demonstrate how investments will deliver. How will the money flowing into one end of the foreign aid funnel be transformed into better lives at the other end? This has to be figured out. It is a program logic issue.

The task of program implementers is to deliver on the program’s design, to realize the goals and reach the targets that are set out. Implementers address the question: How do I use the resources I have to produce the results that are wanted? Think of it as a form of foreign aid alchemy. The “base metal” of financing, resources, plans, etc. is, if all goes well, turned into a golden opportunity. This is a management issue.

The task of program evaluators is to figure out whether the desired benefits, in fact, came through. Evaluators apply a set of methods to determine What happened? Why, or why not? This is an analytical issue.

When a beneficiary isn’t a beneficiary

Some organizations prefer to avoid the term “beneficiary,” as it has connotations of passivity. Alternatives include client, customer, end-user, program participant, affected group, or program-affected population. Each has a slightly different connotation. Because “beneficiary” is a catch-all term, I’ll stick with it in this post.

The case of roads

Let’s use investments in roads as an example. (I am currently involved in three evaluations of road projects for the Millennium Challenge Corporation, and so find myself thinking quite a bit about this issue.) While road networks connect virtually everyone nowadays, roads deteriorate over time. They need maintenance, resurfacing, and complete rehabilitation. 

Roads facilitate the movement of people and goods. Like roads, canals (mostly for cargo), railways (people and cargo), sewerage systems (wastewater), pipelines (oil and gas), power lines (electricity), telephone lines (communication), and broadband cables (information) improve the flow of things people need and value.

By reducing resistance, these various channels save time and energy compared to alternatives for moving things or people from point A to point B.  When these channels are cut, blocked, or destroyed, havoc can occur. Access is impeded. Very quickly, everything becomes more difficult and costly. 

The beneficiary perspective

From an individual’s viewpoint, the impact will depend on, for example, where that road leads, how often they use it, what they use it for, and how easy it is to reach.  Is there a paved feeder road, a gravel road, or a path? Is there a mountain or river that must first be crossed?

Many other questions can be asked. Are you a farmer who sells produce at a weekly market along the road?  Is your village close by and connected to the road via a well-maintained access road?

Maybe the new road won’t make any difference to you financially, but it saves you time, and improves your quality of life. It’s much more pleasant to drive along a smooth asphalt road at 80 km per hour than a bumpy one with perilous potholes. You might not see money in your pocket, but you will feel more connected to the outside world if you live near a good road instead of a bad one.

It’s also possible that you don’t see the effect today, but will see it years later. If your parent has a stroke five years from now, the new road could make a big difference in how quickly you can get them to the hospital.

The evaluator perspective

Now consider the issue from the perspective of a researcher or evaluator. You’re looking not at individuals but at aggregate effects. These are less precise, but more useful. You don’t want to collect five hundred stories of how the improved infrastructure has changed, or not changed, lives. You want to measure trends and aggregate changes. You want to know the effect of the new road on the population as whole.

And there are questions about distance: What if an old road with a rough surface and full of potholes is rebuilt? Are you a beneficiary if you live in a town 2 km from the newly rehabilitated road? What about 5 km or 10 km away? And do only people who use vehicles count, or pedestrians too?

In the past, it was common for road evaluations to pick a distance on either side of the road in question, draw two imaginary lines, and consider anyone within this band (the “corridor of influence”) a beneficiary. Over time, 5 km came to be considered too wide, and 2 km a more reasonable distance. An alternative, more conservative way of estimating road beneficiaries is to survey only people directly using the road. This ignores indirect beneficiaries, but road users are easier to count, which gives more confidence in the result.

The data you collect will depend very much on how you draw the line, who you choose to include among your potential beneficiaries, and what assumptions you make.
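
To see how much the beneficiary count depends on where you draw the line, consider a toy sketch. The villages, populations, and distances below are all invented for illustration:

```python
# Hypothetical villages along a rehabilitated road:
# (name, population, distance from the road in km) -- all invented
villages = [
    ("Village A", 1_200, 0.5),
    ("Village B", 3_400, 1.8),
    ("Village C",   800, 4.2),
    ("Village D", 2_100, 7.5),
]

def beneficiaries(corridor_km: float) -> int:
    """Count everyone living within the 'corridor of influence'."""
    return sum(pop for _, pop, dist in villages if dist <= corridor_km)

for corridor_km in (2, 5, 10):
    print(f"{corridor_km} km corridor: {beneficiaries(corridor_km):,} beneficiaries")

# 2 km corridor: 4,600 beneficiaries
# 5 km corridor: 5,400 beneficiaries
# 10 km corridor: 7,500 beneficiaries
```

A real evaluation would work from census data and GIS buffers around the road rather than a hand-typed list, but the sensitivity to the cut-off is exactly the same.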

Analytical tools

The previous questions in this blog post are just the tip of the iceberg. When thinking about how people are affected, we can ask many, many more.  For example, here are questions about (potential) beneficiaries that evaluators will ask:

  • What is our population of interest?
  • How many beneficiaries are there?
  • Where do they live?
  • How many are direct / indirect beneficiaries?
  • How are they benefiting?
  • By how much are they benefiting, in relative and absolute terms?
  • Among project-affected persons, how many are benefiting?
  • Why are some people benefiting and not others?
  • How are the benefits distributed among income groups? Among different stakeholders?
  • Are some people taking advantage of the benefits more than others?
  • Are some people dis-benefiting, i.e. negatively affected as a result of the project?
  • What methods should we use?
  • What are the critical information sources?
  • Who should we talk to?
  • How should we talk to them?
  • How many people should we survey?
  • How big should the survey sample be? (A rough sketch after this list shows one way to answer this.)
  • Where (what population) should the sample be taken from?
  • What types of groups should be sampled?
  • What questions should be asked?
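
On that sample-size question, here is the promised sketch: a minimal illustration using Cochran’s formula for a proportion with a finite-population correction. The population figure and margin of error below are placeholders, not recommendations.

```python
import math

def sample_size(population: int, margin_of_error: float = 0.05,
                z: float = 1.96, p: float = 0.5) -> int:
    """Cochran's formula for a proportion, with finite-population correction.

    z = 1.96 corresponds to 95% confidence; p = 0.5 is the most
    conservative (largest-sample) assumption about the true proportion.
    """
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# e.g., a corridor population of 50,000 people and a +/-5% margin of error
print(sample_size(50_000))  # 382
```

Notice how weakly the answer depends on the population size: beyond a few tens of thousands of people, the required sample barely grows, which is why sample sizes of a few hundred keep turning up in survey work.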

Although beyond the scope of this blog post, common measurement tools for addressing questions about road project beneficiaries are origin-destination (O-D) surveys and traffic counts. A range of approaches and methods exist for each.

Your decision on what approach to take will be influenced by resources at your disposal – money, time, expertise, etc. Your decision will also be guided by what others have done before you.

But, perhaps more than anything else, your findings will be influenced by the questions you ask. Assuming your methods are sound, the findings may vary, but none will be entirely wrong.


Why you may want to avoid independent consulting, especially overseas

Some things to keep in mind

On the face of it, independent consulting in international development is not an appealing career choice.

You’re on your own, with no institution to back you up.  You’re an outsider, a transient professional, an interloper. You touch down for a few weeks in a foreign country and have little time to acclimatize or develop relationships.  You often find yourself counting on team members who up until yesterday were complete strangers. You have to pray that they’re competent.

Of course, there are plenty of independent consultants for whom their career path was less a choice than a default position. It might have been thrust upon them. They may have originally sought the stability, structure and institutional opportunities that come with being part of a big development agency or a consulting firm of whatever size. But that didn’t happen.

Freedom is not always a blessing

Certainly, independent consulting comes with a lot of freedom. But freedom is only a positive thing insofar as you enjoy being untethered and don’t mind not belonging. There are a lot of reluctant gig workers out there.

Although it is rarely acknowledged, there are non-negligible advantages to being told what to do: a professional life where you can mostly focus on completing the tasks you are given. There is less decision making and less need for self-discipline. Plus, there is no need to file estimated taxes every quarter (as the self-employed in the US must do).

If, in addition to being an independent consultant, you are so “unlucky” as to work as an evaluator, you can expect to enjoy several additional drawbacks. While it is true that someone is paying you to look into a program or project, to collect data and information and ferret out the truth, a lot of people involved in that program won’t exactly appreciate your poking your nose around and asking sometimes uncomfortable questions.

They say evaluators play the role of “critical friend,” the person you can trust who will also point out your faults. Not everybody is reconciled to that concept. Who likes a party pooper? Who likes to get a diagnosis that they aren’t as healthy as they thought?

In other words, independent consulting ain’t for everyone.

But if you must…

Still, there are rewards to be had. A few of us are out there doing this type of work, after all, and not all of us plan to throw in the towel…

If you happen to fall into the sub-sub-sub-category of a) being a consultant who b) works independently, c) is active in the field of international development, and d) conducts evaluations, then here are a few observations on what you might face.

Last month, my co-author Svetlana Negroustoueva and I published an article “Bridging divides and creating opportunities in international evaluation consulting” (behind paywall) in the Winter 2019 edition of New Directions in Evaluation, a volume devoted to independent consulting in evaluation.*  

In the article, we discuss common divides, and some useful competencies that consultants belonging to this sub-sub-sub-category use to navigate them.

We consider various divides that consultants are likely to deal with while working abroad. We identified divides along cultural, power, gender, national–international, language, and geographical lines. None of these is insurmountable but, in one way or another, they all require a bit of navigation.

Language is a common and obvious divide. Not speaking the language won’t necessarily prevent you from getting an assignment (except in French- or Spanish-speaking countries). However, relying on interpreters does pose some risks. Things do get lost in translation. It adds yet another layer of complexity to your work.

Because you are not part of the system, probably lack a deep understanding of the country, don’t have the relationships, or necessarily speak the language, you come with a built-in disadvantage.

If you are young and female, you may face further challenges. You may find, at least in some cultures, that you are not taken as seriously as your male counterparts.

Privilege and power – those perennial aspects of life that insinuate themselves into so much of our political and social life – are part of the equation, too. Independent consultants have both more and less privilege and power than meets the eye. On the one hand, as professionals who are independent, well-remunerated, and often based in Western countries, we have certain advantages. On the other hand, we face limitations. As outsiders who (often) don’t know the local language, lack the connections, and lack the institutional backing that our full-time employed colleagues enjoy, our influence is certainly limited.

Most of the divides we identified spring from disparities between you, the consultant, and the social, political and cultural environment you work in.

What doesn’t kill you makes you stronger

I’ve emphasized the difficult and less appealing sides of independent consulting for two reasons.

If you have doubts about this path, maybe reading this will help you clear them up, and push you in a different direction.

However, if you still think it’s a good idea, then embrace the challenge with open eyes.

On a related note, I like the concept of cognitive disfluency. It refers to the benefits that come from the mental effort of completing a task. If something is too easy to do or to learn, your mind is, according to the theory, less likely to retain it. Learning to play the piano is hard. But by practicing day after day, you improve. The same applies to many other skills people acquire. Although a more nebulous skill than mastering a musical instrument, working as an independent consultant, at least until you get the hang of it, is fairly effortful.

This brings me back to our article: we conclude that the very process of overcoming these divides and dealing with these issues can strengthen you as a professional, while also making the work more interesting and enjoyable. There is satisfaction to be had from overcoming life’s tribulations.

———-

*Junge, N., & Negroustoueva, S. (2019). Bridging divides and creating opportunities in international evaluation consulting. In N. Martínez-Rubin, A. A. Germuth, & M. L. Feldmann (Eds.), Independent Evaluation Consulting: Approaches and Practices from a Growing Field. New Directions for Evaluation, 164, 127–139.


Don’t navigate blind — let the evaluation questions guide you

Why do we need good evaluation questions?

In this post, I’m going to tell you five ways in which evaluation questions can help evaluators.  

Like ship captains of yore, program evaluators rely on the stars to get where they’re going. Well, not stars, exactly, but a few key questions. Both serve pretty much the same purpose — they help you navigate. In an ocean of data, where you can find yourself submerged in too many choices, these key questions, commonly referred to as “evaluation questions,” are your lodestars.

Good evaluation questions will guide you in making decisions, ensuring that you are heading in the right direction. After over 15 years of conducting evaluations, this has become a truism I swear by. Now, to see how far we can stretch this metaphor before it snaps, imagine that the ship, the crew and the navigational instruments represent the resources and methods the evaluation team has to work with. The evaluation questions are what guide the team.

Who determines the evaluation questions?

With most evaluations, evaluators are hired to address a pre-determined set of queries. These are normally provided by the client and embody what the client wants to know about the program, the intervention, the project, the policy, or whatever needs examining.

When the work is done, the analysis conducted and report is submitted, what people will want to know is “What are the answers to the questions we gave you?” Even if you don’t like the questions, you need to find a way of answering them.

If the client hasn’t developed the evaluation questions already, then the evaluator can propose them, based on the client’s objectives. Sometimes the questions are not clearly thought out, or they are difficult to answer. That applies particularly to questions about a program’s sustainability: how can you answer them when the program is far from complete?

Evaluation questions are not the same as interview questions

Evaluation questions are not the same as interview questions, which are what evaluators use when interviewing people such as beneficiaries, key informants, program implementers and so on. Think of a police officer investigating a murder. The questions driving the investigation are: Who is the murderer? Why did he do it? And where is the weapon? But the officer doesn’t put those questions directly to a suspect. Likewise, evaluators don’t ask the people involved in, or benefiting from, an agriculture project, “Was the project effective?” Interview questions are more specific, a way of collecting multiple data points that will inform the body of evidence.

Evaluation questions, by contrast, tend to be more along the lines of: Are stakeholders satisfied with the program? Is it sustainable? How effective is it at achieving its objectives?

Nevertheless, for both cops and evaluators, it comes down to asking the right questions in order to collect the evidence they need.

Questions should drill down from the evaluation objectives

The evaluation questions should not only embody the evaluation objectives, they should drill down into those objectives. They need to be specific.

Evaluation objectives can be broad, and open-ended. They are useful for explaining why the evaluation needs to be conducted, but not as useful for developing a methodology. For example, if the evaluation objectives are “To assess the project’s effectiveness” or “To draw lessons about the project,” evaluation questions should be much more specific. They should ask, for example, “Are women farmers using the new technology as intended?” or “Do stakeholders consider the hands-on technical assistance they receive to be effective?” However the questions are formulated, evaluators almost always have an opportunity to review them and propose modifications. I recommend doing this as early as possible in the process.

How do evaluation questions help the evaluator?

There is plenty of material out there providing guidance on developing and selecting good evaluation questions. That is not the subject of this post. Instead, I’d like to point out that there are multiple ways in which questions, once decided upon, can be extremely useful to the evaluator.

Evaluation questions are crucial to keeping you on track and staying relevant to your topic. Sometimes there is a temptation, while in the field, to go off on a tangent. For example, the substantive or technical aspects of a project are often very interesting in and of themselves. You or your colleagues may get caught up in discussions on different types of irrigation water pumps, the political roots of the disparities between northern and southern regions, or some other issue. That’s all good to know for context, but such lines of inquiry shouldn’t distract from your main purpose. That’s not what is being asked of the evaluator.  

Regardless of how the questions are generated, once agreed upon, they become your guide, serving your effort in a number of valuable ways. The questions should help you with these five aspects of your evaluation:

  • What issues to focus on
  • What evaluation methods to use
  • Where and whom to collect the data from
  • What interview questions to ask people
  • How to structure and draft the final report

So, keep the questions close at hand, and check in with them regularly. Use them to guide you and help make decisions. Conducting an evaluation is far from being a gentle boat ride down the river. (If it were that boring, I would have bailed out long ago). No, it is often difficult, sometimes treacherous, and (predictably) full of uncertainties. Almost inevitably there is someone or something — on the client side, among the program stakeholders, or even on your team — that will make your life a challenge. Don’t let that distract you. Factor those challenges into your work.  With your evaluation questions to help you navigate, you’ll know where to set your course and be able to focus from there.


Incompetence can torpedo your team. What are you going to do about it?

The year 2018 is over (thank goodness) and we have a chance for a fresh start. For many of us that means time for personal stocktaking. What did you accomplish last year? What did you learn? How can you apply those hard-won lessons to the coming year? Should you keep striving to outdo yourself, or should you settle for what you’ve got and ease into the comfort of routine?

Rifling through the mental files in my “2018 evaluations” folder, I’ve come up with a few of my own lessons. The one I’ll share today is this: One thing you can count on is that you can’t always count on people.  And you need to prepare for that.

As I’ve observed in an earlier post, we live in a world where professional failure is more common than conventional wisdom would allow. Failure is also less interesting than is portrayed by the media and in the self-help industry. It can become a serious headache, however, when it is your fellow team member who is doing the failing. I can use myself as a prime example: I don’t always live up to my own professional expectations. It won’t come as a shock to readers that people are not always up to the task. The question is, how do you handle it?

First, let’s get a few obvious things out of the way. Humans are complex, multi-faceted and, not infrequently, multi-talented. This is a marvelous thing, accounting for some truly astounding cultural, engineering, and intellectual feats that have enriched life on this planet. Indeed, in many professions, it is assumed that employees bring multiple talents to the table. We are not like robots, programmed to do only one or two tasks at a time. This truism applies very much to the evaluation field, where evaluators are called upon to deploy a range of both soft and hard skills.

The fun starts when you suddenly discover that key talents are missing from a team member. While it is rare that a new team member is brilliant across the board, most bring at least basic levels of competence to the table. Most score at least a seven on a 10-point scale across the range of necessary competencies. But every now and then, someone doesn’t. They’re a “one” or a “two” in some important area. That’s the thing with being human. We may be multi-talented, or at least multi-capable, but we also come with built-in limitations, which sometimes leads to a giant team-implosion. Oops!

What competencies are we talking about? I would offer that, in the evaluation field, you must be able to:

  1. communicate comfortably with others;
  2. put together words, sentences and paragraphs in a clear and logical manner;
  3. analyze the information you have collected;
  4. collaborate with others like a mature and responsible adult;
  5. be pleasant and respectful;
  6. do what you say you will do; and
  7. manage your time and priorities.

On top of these soft, but necessary skills, you may also be expected to be equipped with technical skills and experience in:

  1. the sector being evaluated, i.e. agriculture, education, environment, gender, etc.;
  2. qualitative or quantitative evaluation methods; and, if applicable,
  3. effectively leading a team.

Nothing listed above is rocket science, that particular field generally not falling within the scope of international development projects. You still find yourself surprised, however, when a fellow team member is — how to put this delicately? — totally incompetent.

Of course, the safest solution is to only work with people you have worked with before, and whom you can count on. For individual consultants, however, that is a luxury. Instead, what is more typical is that you join a new team on almost every new assignment. Every year, for example, I end up working on maybe half a dozen different teams, the majority of which are composed of folks I have never laid eyes on. On the one hand, it’s a great way to meet people, make new friends, and learn from your peers. On the other hand, you can end up in some frustrating and stressful scenarios.

I’ve had experiences where it soon became obvious that a team member had pretty serious deficiencies in the interpersonal skills department. For example, Team leader Mr. A, a very plausible stand-in for Ricky Gervais in the TV comedy series The Office, would spend the first 10 minutes of a meeting boasting about his own experience and often end the meeting by insulting the people on the other side of the table. Other times you get a bad case of weak ethics and poor writing skills, as with Dr. B, a native English speaker, who couldn’t write proper English despite her academic pedigree. When I came across passages that were surprisingly well-written, a quick check on Google revealed she had been happily plagiarizing them. (Always good to find out that kind of thing sooner rather than later.) Or someone might impress you in person, but not on paper. Local team member Ms. C knew the sector and country very well and asked the right questions during stakeholder interviews, but couldn’t string two sentences together in a logical way in a report. These were all setbacks which it fell to me to remedy, through many hours — and sometimes days — of extra work.

I have to admit that I only had the pleasure of working with one of these people in 2018; I’d worked with the others before that. But it was last year that it finally hit home: I needed a coping strategy for the next time this happened.

So, what to do on occasions when capabilities are missing? For starters, if the shortcomings are yours, it’s a good idea to reflect and take concrete actions to perform better. If the shortcomings belong to others, cursing under your breath or venting to your significant other can have a wonderfully calming effect, but may not be enough to rectify the situation. Is it possible to overcome such defects through mentoring or teaching? Unfortunately, I have found that it is totally unrealistic to attempt to build someone’s capacity (even if you are in a position to do so) over the course of a single assignment. In any case, you’d first need to spell out their failings to them. That could be pretty awkward, right? Furthermore, you don’t really have much time for capacity building — you need to get the bloody job done.

What you need is a back-up plan, especially if you are ultimately responsible for the work (if you’re the team leader) or because you were asked to pick up the slack (by the team leader). Here are three suggestions:

  1. Build in a time buffer: Provide enough slack in your schedule to take into account the extra time that you might need to address the shortfall. For example, if you think a task will take two weeks, try to allocate three weeks.
  2. Build in a human resource buffer. Identify persons, either on the team or not, who could step into the breach. Maybe the organization that put the team together (if you are subcontracted) has the resources to bring on extra help.
  3. Build in a mental buffer: Prepare yourself not to be surprised or upset when colleague X lets you down. Unless you’ve worked with them before, and therefore know their strengths and weaknesses, assume that people have at least one weakness, and that it will impact the work at hand.

In a word, contingencies!

Let 2019 be a year of contingency planning. The plan comes with its own reward:  if you have a decent contingency plan, you will end up with more time, energy, and even inspiration, to focus on the interesting and fun stuff.  


The information pyramid

We are swimming in a sea of information

Like other forms of inquiry, evaluation involves sorting, filtering and distilling information in order to communicate something of importance. (Academics, journalists, attorneys, and private investigators do this, too.) When the work is done, you want to be able to present your findings in a clear, convincing, and attractive manner for easy consumption.

The problem is, there is a vast amount of information out there.  It can easily overwhelm. With the Internet entering its mature phase, we swim in an information glut. I leave for another time a discussion on the differences between data, information, knowledge, intelligence, and wisdom, except to say that (from what I can tell, anyway) there is a lot less wisdom than there is data in the world…

A big part of your job, if you are an evaluator, is to know what information you need, and where and how to find it. To do this effectively, you want to be able to zero in on the essential stuff, while still being open to any interesting findings you may not have considered.

Let the purpose of the evaluation be your guide. Keep the reason you are searching at the front of your mind. Perhaps you seek to understand how well a program has built the capacity of agronomists to introduce innovative irrigation techniques to farmers? In that case, keep your focus on factors that may have a direct bearing on capacity building efforts, while limiting the amount of time you spend on learning about other things. Take note of them, but try not to let them distract you from the main question.

As an evaluator, you do not have the luxury of time that you would if, say, you worked in academia, to produce a dissertation or journal article (often years!). Evaluation, which is often about collecting and applying evidence to problems (in programs, policies, etc.) is relatively fast-paced. So even though you must review the literature, reports and data relevant to your evaluation, you simply will never have the time to read every word. You need to prioritize what you read and develop the ability to scan a document for what is essential.

When it comes to writing, you again will need to be strict with yourself. Avoid padding your reports with unessential information. Have you ever read a report (or a section of report) where you don’t understand what the point is? Have you found yourself asking, why am I reading this? Don’t put your readers through that.

A hierarchy of information

Imagine now that information, and everything derived from it, exists in the form of a pyramid.

At the bottom of the pyramid is, let’s say, all the information in the world: every observable and non-observable phenomenon, from myriad perspectives. This amounts to untold trillions of bits of data, constantly accumulating and constantly changing. Think of it as an ocean of information. For all intents and purposes, this ocean is infinite and growing, much like the universe. It cannot be encompassed. All you can do is dip a sieve into these waters and try to collect what is most suitable to your purposes.

At the next level up is all the available information and knowledge. This is any information that has been processed somehow, whether printed or digital, spoken or written. Much, but far from all of it, is searchable using an Internet search engine. It is still a huge, overwhelming and unwieldy amount. But it has at least been produced by someone. It must also have some meaning, which is why I combine it with the concept of knowledge.

Next comes all the topical information that is out there. Maybe you’re writing about the electricity sector, or artificial intelligence, or breast-feeding. Depending on the area you are looking at, you will still find plenty to review, and many experts, authors, and practitioners you could talk to. If you were writing a general overview of or introduction to these topics, you would synthesize all of this. Generally, however, you will not have such a broad focus.

Next comes the information that addresses your subject. It will be quite narrow in focus. For example, what is the impact on the poor of rising electricity tariffs? What does the introduction of artificial intelligence mean for workers in the fast food industry? What is the correlation between breastfeeding and the immune system? Now we are closer to where we want to be. It is still more information than you need, but the amount is manageable. You will draw only on the research, practices, and reports that exist, plus any new primary data that you collect yourself.

The next level in our pyramid is all the data and information collected for the specific purpose of the evaluation. This is the material you have reviewed with the aim of understanding your subject and informing your audience. It may include a database with thousands of observations and a hundred variables or more. You may have a bibliography of dozens or hundreds of sources. You may have hundreds of hours of interview or focus group discussion recordings. This is your personal store of information and it should, ideally, all be somehow relevant to the purpose of your evaluation.

Still higher up and narrower in scope are the evaluation findings. This is where the rubber meets the road. The findings, which normally come with conclusions and recommendations, are the core information, transformed into knowledge. This is what answers the evaluation questions and backs those answers up with evidence. In the evaluation world, reports should generally be no longer than 20-30 pages, excluding annexes. That is about as much detail as specialist readers interested in your subject can stomach.

The summary findings, which include the executive summary of a report and may also exist as a standalone short note or slide presentation, are how the essence of the report is presented. This is what most people will read or watch. If the findings are a distillation of all the information you have collected, the summary findings are a distillation of your broader findings. As a rule, the length should be about 10 percent of the full report: from 2 to 5 written pages at most, or no more than 10-30 slides.

Finally, the main story. This is the quick, one-minute story you tell your significant other, friends, or colleagues who ask what you learned, without boring them with all the details. It could take the form of a few paragraphs and bullet points in a one-page policy brief that goes, for example, to the Minister of Energy. For instance: “We found that most of the poor didn’t suffer as a result of electricity tariff increases because electricity expenditures fell as a share of their total expenditures. And all households now have 24-hour service;” or “We project that artificial intelligence will eliminate, on average, one job per restaurant, while customers have shown a preference for interacting with humans when they order fast food;” or “Breastfeeding was shown to reduce the incidence of illness in infants under 5 if they were weaned only after x months.”
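
If it helps to see the whole pyramid at a glance, here is a minimal sketch in Python. The labels are the levels described above; the “scales” are just my shorthand for their relative size, not measurements.

    # The information pyramid, bottom to top: (level, rough scale) pairs.
    PYRAMID = [
        ("all information in the world", "effectively infinite"),
        ("available (recorded) information and knowledge", "vast"),
        ("topical information (your sector or theme)", "large"),
        ("information on your specific subject", "manageable"),
        ("data collected for this evaluation", "your personal store"),
        ("evaluation findings", "20-30 pages"),
        ("summary findings", "about 10% of the report"),
        ("the main story", "one minute"),
    ]

    # Print from the pinnacle down, the order in which most readers consume it.
    for level, scale in reversed(PYRAMID):
        print(f"{level}: {scale}")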

And with that we have reached the pinnacle of our pyramid. Time for the next project.

Post edited July 1, 2019


Not reaching our goals is very normal

One country after another is exiting the 2018 World Cup, packing their bags and leaving Russia. Only Croatia and France are left to play in the final on Sunday. Fans from around the world have had to face the fact that their country lost. My conclusion? Now is as good a time as any to reflect on failure.

Aside from sports, many, many other areas of human endeavor are some mix of success and failure. Depending on how we measure it, I’d argue that failure is generally far more widespread than success. And that’s not only because there can be only one winner, as in sports championships like the World Cup. Let’s look at examples from completely different fields:

Every year, thousands of bills (proposed legislation) are introduced in the US Congress, but only about 4 to 5 percent of them end up as laws. That’s a lot of effort – given that bills can run to hundreds and sometimes thousands of pages – going into something that doesn’t succeed.

Now let’s take an example from the private sector. In the US, hundreds of thousands of new businesses are established every year. In 2015 that number was 679,072, according to the Bureau of Labor Statistics. Based on historical trends, 50% of all new businesses shut down within five years. Not always because they failed, of course, but presumably most people start businesses they hope will last much longer.

Returning to the subject of sports – in baseball, America’s pastime, the chances of making it to the Major Leagues from the minors, i.e. the lower professional leagues, are only about one in 10. Once players reach the Majors, there is more failure – the average batter fails to get a hit (the primary, if not the only, reason for stepping up to the plate) in almost three out of every four at-bats.

What about New Year’s resolutions? Studies have shown that less than 10 percent of people who make resolutions stick to them; many fail to keep their resolutions for more than a few weeks. Let’s take a longer view. Many, if not most, of us don’t achieve the life goals we set ourselves when we were young. I myself certainly failed to earn a living as an actor (after two years of theater school in Russia and over four years of trying to break into the theater in New York), never ran a marathon as fast as I wanted, and never wrote the novel I started (and tore up) umpteen times in my younger years. The list goes on. Even high-performing individuals and companies often don’t reach their goals, as evidenced by the sports examples above or the losers in Presidential elections.

Do these examples, and the statistics, mean we are simply doomed to failure in our greatest aspirations? Do they suggest that we humans are hopeless at everything we try? I would argue no, not at all.

First, what they indicate is just how much effort we put into trying. How much thinking, planning, time, and money are invested in reaching goals! Think about it: Tens of thousands of kids work hard to become professional athletes or reach the Olympics. Hundreds of thousands of people start businesses every year (even in the depths of the Great Recession over half a million new businesses were started in the US). Millions of us still make New Year’s resolutions.

I’m not saying that merely trying is good enough, or that we should excuse failure because it is the norm. But we should understand it in context – not getting all the way there is human and quite normal. And I’m not saying that we should lower the bar. You need to make a darn good effort to reach whatever it is you are aiming for. Falling short of a goal when you try is very different from not getting there because you didn’t bother. Even if most of us don’t get there, we still get something out of it; we land somewhere. And hopefully we learn a few things and become a little smarter in the process.

And now for the evaluation angle: when evaluating the performance of a project, it is really important to ask why a goal was not achieved, as well as what was achieved. Don’t just focus on the binary succeeded/didn’t-succeed parameter, and don’t pass judgement too quickly because a performance metric wasn’t met.

Naturally, it also comes down to how we define and measure achievement. It’s hard not to be impressed by people who start their own businesses, run for high office, train for the Olympics, or get selected to play on their country’s World Cup team. On a CV, all those things look pretty good. And a lot of sweat and time goes into reaching those goals.

And remember, most people who are, or seem, successful have gone through many failures before they got there. Failure is, after all, the norm. So buck up, and go out there and try one more time.


How the human brain beats artificial intelligence…or why I like going to meetings

As the title suggests, in this post I am going to try to tackle three things: meetings, the human brain, and artificial intelligence. Bear with me.

I go to meetings, like most of you. They make up a small but significant part of my work, when I’m not doing background research, writing reports, or travelling. I actually like meetings. This is not just because, in my line of work as a freelance consultant, you develop a real appreciation for periodic human contact, but because meetings – when they focus on specific goals or have a clear agenda – can be extremely productive. Interviews, which is the form many of my meetings take (the other types being team meetings and policy discussions), are probably the most efficient way of obtaining the information I need when evaluating a program, a project, a sector, or some other topic.

What are the alternatives to these meetings for learning new things? Mainly culling information from reports, books, and, of course, the internet, usually via a search engine or social media. A large amount of Google’s search activity now uses artificial intelligence (AI). (I like to think of AI as just the latest manifestation of brainless intelligence, but that’s another blog topic.) Yes, the internet has made our lives a lot easier. But we’re fooling ourselves if we think everything that’s knowable can be found at the click of a mouse.

But before I get to this, yes, I am aware of the complaints. I’ve read a lot about how office meetings are unproductive – a waste of time, money, and brain cells. That may be so in the private sector or in management, areas where I am quite happy not to work. However, I find meeting with other people extremely valuable, for two reasons. First, it is a quick and efficient way to learn the most important things about an issue. Second, it promotes cooperation through relationship-building. And without cooperation, things tend to fall to pieces. (I’ll try to get to that in yet another blog post.)

For now, I’ll focus on why holding meetings is great for information gathering and, in some important ways, much better than Google. Why? It comes down to this – humans are exposed to, immersed in, and able to reflect upon a breathtakingly large amount of real-world experiences, interactions, visual stimuli, and sensations. We also feel and use our judgement. This is something computers and artificial intelligence can hardly do, despite the recent advances so breathlessly talked up in the media. In fact, search engines are limited to what they can find on servers.

I am not a fan of the reductive approach, e.g. reducing the human mind, or the soul, to biological impulses to be digitally mimicked. But I think there is a useful comparison to make. Strides in computing power and artificial intelligence notwithstanding, humans still have some serious comparative advantages. You can read online about how the human brain compares to a supercomputer, with some saying it has been surpassed, and others saying not yet, not by a long shot. There are also some interesting comparisons and discussions regarding the human brain vis-à-vis search engines, especially Google.

You’ll see that, in a narrow sense and along quantitative parameters, search engines may be superior: processing power to retrieve keywords, access to data, speed, etc. But here is one parameter where search engines don’t perform anywhere near as well as humans – they are limited to the written, numerical, and recorded information they can find online. That misses out on a huge amount of information. What might that be? Well, everything that isn’t recorded: conversations, events, personal notes, observations of others, email exchanges (that Google doesn’t have access to), and so on. An expert or stakeholder who is engaged in the field you are studying – whether it be education policy in the Maldives, the Uzbekistan irrigation sector, or energy efficiency in Ukraine – will be able to draw on a depth and breadth of information that the most powerful search engine in the world can only dream of (if androids could dream of electric sheep, that is). Although the information stored on all the world’s servers is vast and growing, it is still a fraction of all the information in the world and inside the heads of its population.

Ask your interlocutor, your key informant (the term used in evaluation), a question, and he or she will be able to draw on countless non-digital resources in order to answer you. Google is limited to giving you what it finds on the web. Certainly useful, but limited. Humans still have some value, it seems. That is why meeting them, if you ask good questions, is so invaluable. If you are an evaluator, an investigator, a journalist, or in a similar line of work, you quickly realize that you get more from holding a few meetings with key individuals than from plowing through hundreds or thousands of pages of documents.

Caveats, caveats. There are always caveats. So yes, it is true that not everyone’s memory functions at an optimal level. And some key informants, you quickly realize, don’t have much to say. Or maybe you find yourself talking to the wrong person. And, naturally, you still need to consult the thematic literature, the reports and journal articles and so on, to complement the meetings you hold. But overall, as a professional doing my job, I’ll keep going to those meetings. And here’s a(n open) secret: most people actually like to talk about what they do and what they know. Most are happy to share. Also, unlike a search engine, they don’t show you those annoying ads before answering your questions…