From the September 2008 issue of Wealth Manager Web

Valuing Evaluation

In the old days, giving was easier. You made a donation to a charity of your choice, and you hoped for the best. But faith alone doesn't cut it anymore.

Demand is soaring for hard numbers, deeper analysis and more compelling evidence that a charitable strategy is effective. Alas, coming up with satisfying answers is not always easy. Sometimes it's downright impossible. Meanwhile, some critics warn that expending resources in search of better answers isn't always in the best interests of non-profit organizations.

Debates are raging in the 21st century over the merits of going the extra evaluation mile. Justified or not, there's no denying the push for sophisticated evaluation and greater transparency in modern philanthropic endeavors.

"There's much more scrutiny in terms of the impact that charitable organizations are making," says Nina Cohen, managing director of philanthropic advisory services at Glenmede Trust Co. in Philadelphia. "There's much more emphasis on being businesslike than before, and so the level of oversight by grant makers has increased in recent years."

Studying the business of giving is hardly new. One of the motivations for creating charitable foundations a century ago was the quest for greater efficiency, control and accountability in redistributing the wealth minted during the Gilded Age. Industrialist Andrew Carnegie, writing in 1889, advised that it was the duty of the moneyed class to donate their surplus "in the manner which, in his judgment, is best calculated to produce the most beneficial results for the community."

That's still true, although the definition of "judgment" continues to evolve. What passed for intelligent giving by the philanthropists of yore pales by current standards. Individual and institutional donors are asking tougher questions than their predecessors. But it's debatable whether they're getting better answers, or if the extra effort leads to superior results for charitable programs. Some even argue that it's all a net loss.

Worthwhile or not, the trend rolls on, and it promises to intensify in the years ahead. Philanthropy seems headed for a new age of data proliferation and democratization.

It's anyone's guess if the extra effort will pay off. What's clear now is that there are several catalysts driving the demand for more accountability. One is skepticism in the wake of various scandals in the charitable world in recent years. The rise of a new generation of philanthropists is also a factor. Many hail from business and finance backgrounds, and they're intent on using their skills for documenting what works--and what doesn't--in the world of charitable giving.

Consider 26-year-old Holden Karnofsky of GiveWell. This former hedge fund employee started a charitable organization with friends--in part out of frustration with what he says is a dearth of publicly available information for assessing charities. Gathering relevant data on non-profit organizations is easier if you're in the business, he asserts.

Karnofsky is quick to distinguish between data that tracks effectiveness versus numbers associated with efficiency. It's fairly easy to find data on how much a charity spends on basic operating costs, such as rent and electricity. But efficiency metrics such as those don't tell you much, he complains. More meaningful analysis falls under the heading of effectiveness, a broad label that focuses on assessing strategic outcomes, such as success rates in reducing poverty, feeding the hungry, etc. The problem is that evaluation of this caliber is hard to find if you're an individual donor, Karnofsky claims.

Ideally, one could look to the various charitable organizations for insight, or perhaps to the foundations that have analyzed a particular slice of the non-profit sector. Karnofsky tried both routes as an individual, but the result was frustration rather than enlightenment.

"When you start looking into whether charitable programs work, a lot of what we heard from charities is that many don't have evidence about what they're doing and if it's successfully helping people," Karnofsky reports. Why? "Because collecting that evidence is expensive and, the charities told me, their donors don't want them wasting money on that stuff; they want them spending on the programs."

By Karnofsky's reasoning, that's shortsighted thinking. He opines that when tackling difficult challenges--whether in philanthropy or other industries--"a lot of times it makes a ton of sense to spend more on salaries, on management and more on data collection and generally learning about what's going on in order to improve your system." Yet the opposite view is widespread in philanthropy, he says, and that leads to a bias for spending every penny on programs--even when it's unclear that the programs are working.

Trying to shine more light on philanthropy's effectiveness, Karnofsky and several of his friends decided to analyze charities on a full-time basis and publish their findings on the Internet for all to see. The result is GiveWell's growing list of reports that highlight its best non-profit picks, along with the runners-up. Certainly the Web site's breadth of data and the level of detail are startling compared with the usual fare available for general consumption. For example, the analysis of Population Services International--GiveWell's top choice in the "saving lives" category--ranges from numbers on the non-profit's role in supplying insecticide-treated nets in Madagascar to estimates of the cost per HIV infection averted for populations around the world (see Table above).
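A cost-per-outcome figure of the kind GiveWell publishes reduces to simple division: total program spending over estimated outcomes achieved. A minimal sketch, using invented numbers rather than GiveWell's actual estimates:

```python
# Hypothetical cost-effectiveness calculation in the style of
# "cost per HIV infection averted." Both inputs are assumptions
# made up for illustration, not figures from GiveWell's analysis.
program_spend = 2_500_000.0   # dollars spent on the program (assumed)
infections_averted = 3_100    # estimated infections averted (assumed)

cost_per_infection_averted = program_spend / infections_averted
print(f"${cost_per_infection_averted:,.0f} per infection averted")
```

The arithmetic is trivial; the hard part, as the article notes, is producing a credible estimate for the denominator.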

But GiveWell's solution creates a new problem: interpreting the data. Certainly for the novice, poring over lengthy profiles is time consuming, if not confusing. You could, of course, accept GiveWell's conclusions about where to direct your social investment.

In fact, hiring someone to do the heavy lifting on research is very much part of the new world order of philanthropy at a time when demand for evaluation is soaring. No wonder that the philanthropic advisory business is expanding in a world with a growing appetite for more analysis and data.

"We're looking to apply investment principles to see measurable results of lives being changed for the good in the developing world," says Andy Lower of Geneva Global, a for-profit philanthropic consultancy based in Wayne, Pa. that focuses on grant making in the developing world. Serving a mix of wealthy individuals and foundations, Geneva claims to offer objective analysis in searching the globe for the best social-investment returns for clients.

"We encourage looking at the difference between the outputs and the outcomes," explains Lower. Traditionally, he says, people focus on the outputs, asking, for instance, "How many school books are we providing? How many schools are we building?" Those are measurable outputs, but they don't necessarily reflect results, Lower warns. Outcomes, on the other hand, would measure such things as whether children really did receive better educations and whether that had a positive effect on their lives.

Is more analysis really the path to enlightened philanthropy? In theory, yes. But no one should expect analytics alone to suddenly improve results. Many of the challenges targeted by philanthropy are stubborn social ills with deep roots and no obvious answers--much less quick fixes.

Meanwhile, ratcheting up evaluation efforts runs the risk of raising expectations too high. In turn, that may foster unnecessary frustration and dry up funding for a program that was fundamentally sound.

Another risk is thinking that deeper analysis will reveal a silver bullet--a lone strategy for achieving better results. Acting on that assumption may prompt counterproductive decisions.

For instance, a successful charitable effort may be the byproduct of multiple strategies that look mediocre or worse on an individual program basis. Ending the ones deemed inferior may inadvertently impair the broader success that was a byproduct of several programs.

Studies on after-school programs suggest that no one program has a monopoly on success, according to the Campbell Collaboration, a social policy group that studies social welfare and education. There's "no evidence that any one program model is more effective at changing students' context or improving academic outcomes," counsels one of the organization's reports.

There's also the possibility of rushing to judgment once the data is in hand. Consider the Ypsilanti, Mich.-based High/Scope Perry Preschool intervention of 1962. Aimed at helping poor black children, initial results in the first few years looked middling, according to a 2006 article in Stanford Social Innovation Review (SSIR). Four decades later, the adult recipients of the original aid were faring rather well in terms of employment, holding a college degree, owning a home and other measures. Early assessments, in other words, can be misleading.

There are other pitfalls associated with relying on greater data-gathering efforts, including a lack of methodologies for properly analyzing results. There are also charges that additional information is sometimes wasted.

"A lot of philanthropic foundations have made demands [on non-profit organizations] for more evaluation, but didn't always use the information in decision making," says Julia Coffman, senior consultant at the Harvard Family Research Project, a group that studies childhood and family education topics. "For example, some foundations request information on results for the end of a grant-making cycle, but then make decisions on whether to re-fund before the grant cycle ends."

The various limitations and hazards convince some observers that progress lies, at least in part, in better organizing and disseminating research rather than simply producing more of it. Putting the idea to the test, Teresa Behrens will soon launch The Foundation Review, a quarterly journal focused on bringing more transparency to philanthropic strategies managed by foundations. The intended readers are foundation staffs, and "each issue will include articles on what I'm calling 'results,'" says the former director of evaluation at the Kellogg Foundation. "The articles will focus on what was accomplished with the foundation funding."

Foundations generally are trying to improve their decisions by paying closer attention to results, Behrens explains. Easier said than done. "It's very difficult to figure out what results are important and how to track them when you're talking about trying to create large-scale community changes."

A major obstacle is time. Even a successful program may not show results for years, as the High/Scope Perry Preschool intervention illustrates. Complexity is another challenge, at least for large foundations. A charitable program may be fundamentally sound, but success or failure may turn on management and implementation details, says Behrens. Unfortunately, attributing outcomes to changes in systems, personnel and other non-core variables is "analytically challenging," she laments.

There's no shortage of self-analysis among the larger foundations, Behrens adds. But a foundation's evaluation may have limited relevance beyond the institution. There are countless approaches to fighting poverty, and some--perhaps most--are intertwined with the particulars of each foundation's operational bias. The bottom line: Individuals are not likely to find a lot of valuable information from the foundations, she advises.

Still, there's plenty of insight that can be mined by comparing and contrasting programs. "My hope," Behrens says, "is that by creating a peer-reviewed journal where different approaches are revealed, we can learn a lot more of the decision-making behind strategies and the context for why some things work, or don't work."

James Picerno is senior writer at Wealth Manager.

