Special to the Philanthropy Journal
By Herb Baum, PhD
Evaluators agree that funders should examine the impacts of the programs they fund. However, we have provided few, if any, tools that funders can use to compare impacts across programs with different purposes. In addition, programs themselves have no easy-to-use tools for examining impacts within their own work so they can decide where to direct their scant resources. Economic valuation is one way to accomplish both.
As an example, many programs try to increase the education level of their participants. Education level and wages are directly linked: the US Bureau of Labor Statistics reports that in 2015 the median weekly wage for a person with a high school diploma was $185 greater than for somebody without that diploma. That translates into roughly $9,600 per year. Somebody with an associate's degree earns approximately $6,200 more per year than somebody with only a high school diploma. These valuations can be used to describe a program's impact. Furthermore, encouraging somebody to earn an associate's degree is more likely to result in that person earning a livable wage, and breaking the cycle of poverty, than encouraging them to obtain only a high school diploma.
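For readers who want to reproduce the arithmetic, a minimal sketch follows (in Python). The $185 weekly differential is the BLS figure cited above; the associate's-degree weekly figure is an assumption, back-calculated from the approximately $6,200 annual difference in the text.

    # Annualize the BLS 2015 median weekly wage differentials cited above.
    WEEKS_PER_YEAR = 52

    # High school diploma vs. no diploma: $185 more per week.
    hs_weekly_gain = 185
    hs_annual_gain = hs_weekly_gain * WEEKS_PER_YEAR          # 9,620 -- "roughly $9,600"

    # Associate's degree vs. high school diploma: assumed ~$120 more per week,
    # back-calculated from the ~$6,200 annual figure cited in the text.
    assoc_weekly_gain = 120
    assoc_annual_gain = assoc_weekly_gain * WEEKS_PER_YEAR    # 6,240

    print(f"High school diploma: ~${hs_annual_gain:,} more per year")
    print(f"Associate's degree:  ~${assoc_annual_gain:,} more per year")

The same two lines of arithmetic are all a program needs to translate an education outcome into a dollar figure it can report.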
For over a decade, colleagues and I have worked with a federal agency (the Administration for Children and Families, part of the Department of Health and Human Services) and with community-based organizations, using this approach to help them understand the value they provide and better promote their services. Economic valuation has proven to be a tool that a broad audience can use, and it offers a standard methodology for comparing impacts.
For funders, a valuation approach to impacts can reduce planning costs, make investing more efficient, and assist with planning future investments. Specifically, planning costs are reduced because grantees who cannot provide outcome or impact data can be weeded out early in the grant review process. Investing is more efficient because funders have a standard metric to a) judge whether an investment will have value, b) make comparisons within and between programs, and c) cite the value of their investments. Finally, as funders plan for the future, they can direct their resources to where they are likely to get the best return, thus maximizing the impact of those resources.
The valuation approach my colleagues and I employ focuses on the monetized value of an impact over the next year. It differs from cost-benefit analysis, which examines long-term impacts and uses a discount rate to arrive at the current value of future events. By focusing on one year out, we make fewer assumptions (people can disagree on the discount rate and other assumptions), and we avoid projecting impacts that may change over time. For example, somebody employed in a minimum-wage job may work their way up into a job that pays much more. Estimates of the value of soft impacts (e.g., less psychological stress) can vary greatly, and thus we exclude them.
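To illustrate why we avoid the discount-rate assumptions of cost-benefit analysis, the hypothetical sketch below shows how the present value of the same stream of wage gains shifts with the rate chosen. The $9,600 annual gain comes from the earlier example; the ten-year horizon and the rates are assumptions for illustration only.

    # Hypothetical illustration: present value of a $9,600 annual wage gain
    # over 10 years under different discount rates. The horizon and rates
    # are assumptions; the one-year approach sidesteps them entirely.
    annual_gain = 9_600
    years = 10

    def present_value(annual_amount, rate, years):
        # Sum the discounted value of each future year's gain.
        return sum(annual_amount / (1 + rate) ** t for t in range(1, years + 1))

    for rate in (0.03, 0.05, 0.07):
        pv = present_value(annual_gain, rate, years)
        print(f"Discount rate {rate:.0%}: present value ~${pv:,.0f}")

    # One-year valuation used in our approach: simply the $9,600 itself.

Moving the assumed rate from 3 percent to 7 percent shifts the estimate by well over $10,000, which is exactly the kind of disagreement the one-year approach avoids.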
Data for the valuations come from existing studies documented in refereed journals and/or from results published by federal and state agencies. By not including any original research, we keep our biases out of the calculations, and the numbers have been vetted by others. Furthermore, when feasible, we recommend that calculations account for the counterfactual (what would have happened had the program not been there), in keeping with the principles of rigorous program evaluation. Continuing with the education example, the literature indicates that 35 percent of individuals who do not receive their high school diploma will go back and earn a GED within two years of their expected graduation date. We use that information to discount the program's claims of success.
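As a hypothetical illustration of that counterfactual adjustment, the sketch below nets out the 35 percent of participants who likely would have earned a GED anyway. The participant count is invented for the example, and the per-person value reuses the diploma wage differential from earlier, which is an assumption, since GED and diploma earnings are not identical.

    # Hypothetical counterfactual adjustment for a GED program.
    reported_completions = 100       # assumption: program reports 100 GED completions
    counterfactual_rate = 0.35       # literature: 35% would have earned a GED anyway
    value_per_completion = 9_600     # assumption: reusing the diploma differential above

    # Credit the program only for completions beyond the counterfactual.
    net_completions = reported_completions * (1 - counterfactual_rate)
    net_value = net_completions * value_per_completion

    print(f"Completions attributable to the program: {net_completions:.0f}")
    print(f"Estimated one-year value of those completions: ${net_value:,.0f}")

In other words, a program reporting 100 completions would be credited with 65, and its claimed dollar impact would be discounted accordingly.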
As stewards of your organization's money, you should be aware of all the tools that can demonstrate the impact of your work. Economic valuation should be one of them.
Herb Baum, PhD, has been a quantitative researcher for nearly 40 years; for the last half of that time he has focused on program evaluation. His goal is to promote evidence-based decision-making among program executives.