Are We Confusing Impact With Outreach?


Rachel Oldroyd of the Bureau of Investigative Journalism discusses how to measure impact. Picture: Daylin Paul

The rise of not-for-profit journalism has brought with it a need to quantify the impact of stories in order to gain funding. But how do you define impact? Where is the line between journalism and advocacy? To whom are journalists answerable – their readers or their funders? And should journalists’ work be quantified at all?

Peter Cary — a consultant at the Center for Public Integrity, contributor to the GIJN report “Investigative Impact,” and one of the speakers at a workshop entitled Measuring Story Impact — recalled a comment by moderator Charles Lewis that, before the era of measuring a story’s impact, donors might say “write me a check and tell me how it goes in a few years.”

In his 2016 book “Democracy’s Detectives,” author James Hamilton analyzed the impact reported by stories submitted for IRE prizes. What he found backed up something Richard Tofel, the director of ProPublica, once said — that “true impact in the real world-change sense … is relatively rare.”

Hamilton’s study found that only 15% of investigative stories submitted for IRE awards led to further investigations, 5% led to hearings, 13% led to indictments, resignations, or firings, and 1.5% resulted in law changes.

Even if we add together all of Hamilton’s categories, only about a third of IRE prize submissions were able to cite impact.

“One or two hits a year is relatively good news,” agreed Rachel Oldroyd of The Bureau of Investigative Journalism, another speaker at the workshop.

Oldroyd noted that in a paper entitled “Investigative Journalism Works: Measurements of Impact,” Christopher Heid found that stories that generate impact often come from several stories reported on a single topic over a sustained period of time, rather than one solo piece.

The fundamental problem with measuring impact is that we often confuse impact with outreach, said Cary.

We are too concerned with who’s reading our stories — our unique visitors, media pickups, tweets and retweets, he said. But this raises the question: Does a larger readership mean more impact?

This can lead to “the danger of dilution,” said Cary. If we have too many ways to succeed, then nothing stands out at all. Alternatively, if we treat popular readership as our measure of impact, this can lead to “the sex reporter” effect: reporters writing only what they think the public wants to read.

We are often writing to the elite, said Giannina Segnini, director of the master of science data journalism program at Columbia University. To remedy this, she uses what she calls “the taxi measurement” system. Has the local taxi driver heard of the story? This can help determine how far your story has actually reached.

“We want to measure impact to know how our work impacts society, not as a means to justify our work to donors,” added Floreen Simon, training director at the Philippine Center for Investigative Journalism.

Peter Cary’s Measuring Story Impact presentation can be found here.


Zita Campbell graduated from Otago University, New Zealand, in 2015 and completed the last year of her Bachelor’s degree at the Univ. of California at Santa Cruz. She then moved to London and worked freelance. She hopes to create influential long-form articles and documentaries on humanitarian and environmental issues.
