Can We Monitor and Evaluate US Foreign Policy Strategy?
Can the State Department learn how to learn?
PART 1 of the M&E series
As a newly arrived Foreign Service officer, I was assigned to help lead a multi-million-dollar program to prevent violent extremism in a fragile South Asian country. The project was well-designed. The strategy was to flood the country with pro-peace messages and content. Everything seemed to be going smoothly until a nationwide survey we deployed revealed a startling finding: the more people interacted with our program, the more support for political violence they expressed. The program was likely doing harm!
Counterintuitively, encouraging peace appeared to trigger a negative reaction in the people our program touched. Without a carefully designed plan to study whether the project was working – called monitoring and evaluation (M&E) – we would have continued the program for years while convincing ourselves that we were doing a great job.
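To make the anecdote concrete, here is a minimal sketch of the kind of survey analysis that can surface a "doing harm" signal. The data, column names, and coefficients are entirely hypothetical and simulated for illustration; they are not from the program described above, and a real evaluation would rely on an experimental or quasi-experimental design rather than a simple cross-sectional regression.

```python
# Hypothetical sketch: regress respondents' expressed support for political
# violence on their exposure to the program, with a couple of background
# controls. A positive, significant exposure coefficient is the red flag.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1_000  # simulated survey respondents

# Illustrative survey columns (names are assumptions, not from the post).
df = pd.DataFrame({
    "program_exposure": rng.integers(0, 5, n),  # 0 = never saw content, 4 = frequent
    "age": rng.integers(18, 65, n),
    "urban": rng.integers(0, 2, n),
})

# Simulate the troubling pattern from the anecdote: more exposure,
# more expressed support for political violence (0-10 scale).
df["support_violence"] = (
    3 + 0.6 * df["program_exposure"] - 0.02 * df["age"] + rng.normal(0, 1.5, n)
).clip(0, 10)

X = sm.add_constant(df[["program_exposure", "age", "urban"]])
model = sm.OLS(df["support_violence"], X).fit()
print(model.summary().tables[1])
```

Even in this toy form, the point stands: without collecting and analyzing outcome data, the association between program exposure and support for violence would simply never be seen.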
How many other foreign policies might be failing to achieve their objectives, or even causing harm?
We don’t really know. But research from other fields offers a stark warning for foreign policy practitioners: most attempts at changing the world fail. In biomedicine, over 90% of drugs fail clinical trials. In education, 90% of interventions tested failed to achieve impact. In the private sector, 80% of new strategies tested by Google and Microsoft failed to move the needle. Employment and training programs studied by the Department of Labor failed 75% of the time.
The good news is that M&E is an important component of all foreign assistance programs developed by the State Department and USAID, and a lot of smart people are working to improve those practices. The bad news is that there is no expectation of M&E when developing a policy or a strategy – that is, any effort not classified as foreign assistance. The fact is that the State Department does not routinely study the impact of its policy choices.
This is a major weakness in the practice of US foreign policy. The lack of feedback and learning processes helps explain the State Department's poor reputation for policy effectiveness, its weak culture of merit, its insufficient training, and its underdeveloped strategy processes.
Strategic-level M&E practices offer a great deal of promise. Applying M&E practices to policy and strategy can help officials ensure programs are proceeding according to plan, understand the impact of their efforts, and learn what works. As this Substack has explored, high-quality feedback is an essential ingredient for the development of expertise.
Over the next few weeks, I plan to explore how the State Department might develop a process to implement M&E for all meaningful strategic and policy initiatives. I have framed my research around some key questions:
What is M&E, and what are the potential benefits?
What are the drawbacks and pitfalls of M&E practices?
What is the current state of the art in M&E practices?
Can practices developed for M&E of foreign assistance translate to the complexities of higher-level strategic environments?
What are the existing laws and regulations shaping M&E in the government?
What kind of organizational features facilitate effective M&E?
I’ll do my best to answer these questions in the coming weeks, but, as always, my goal here is to start a conversation about this complex topic. I would love to hear about your expertise, experience, and perspectives. Comment here, or email me directly, if you have ideas about how (or how NOT) to apply M&E to higher-level strategic policy challenges.
Dan, you may want to look at OIG reports from prior years. In 2014 I led an OIG study for the U.S. Department of State on how to measure and evaluate the effectiveness of diplomacy itself. That is, work done by economic and political officers.
Dan, as always, you have excellent questions. However, I suggest you are confusing (perhaps conflating) "strategy" with "tactics" and "implementation." Unless I missed something, the point of your piece is to help evaluate and improve how the State Department implements a piece of some so-called "strategy." Except, as your narration indicates, the "strategy" in question is a bunch of tactical efforts that may or may not be synchronized with policy initiatives and actions over time by other elements of the government, and with the work of partners.
Your point about the private sector's tolerance for failure in certain areas is helpful, to a point, but you fail to carry it forward to where this discussion needs it. In your first anecdote, you claimed the "project was well designed," but so what? Design does not equal effectiveness, just as smartly designed medicine or software may fail in contact with the real world. Was "flooding" the zone effective? And, by the way, the program you described as "going smoothly" was not, and this wasn't realized until the survey indicated a design failure (or, best case, adversarial activity that undermined the program's potential, which is still a failure since that activity wasn't detected if it happened). This highlights that the failure, in this part of the narrative, was tactical rather than strategic.