One of the biggest challenges in The Center for Investigative Reporting’s ongoing quest to define and measure media impact is finding a common vocabulary for communicating it outside our own newsroom. As we’ve noted, “impact” can mean different things to different people, and the metrics best suited for measurement can vary depending on your definition, content and goals.
To help work through these challenges and foster a community of practice around media impact, CIR has hosted a series of events called “Dissection,” convening journalists, data scientists, academics, media funders and engagement specialists for discussion.
During Dissection D, held in Washington, D.C., in April, Steve Mulder of NPR Digital Services made a crucial point: There is no one-size-fits-all solution for measuring impact. While CIR focuses on investigative reporting that reveals injustices, other media organizations create a wide array of content and may not share the same goals.
Mulder cited, for example, this NPR article on different ways to cook an egg, an informative news feature that enjoyed a healthy life on social media. How would NPR go about measuring the broader social impact of that work?
It’s an important question at the heart of our efforts. Even different CIR investigations have a variety of potential outcomes that require distinct indicators for measurement and analysis. But stepping back, let’s look at the definition that CIR’s media impact analyst, Lindsay Green-Barber, has recently posited: Impact is a change in the status quo resulting from a direct intervention.
In CIR’s case, the interventions are the stories and facts we uncover and distribute across various media platforms. But that core definition also fits nicely with pretty much any piece of content, be it a YouTube video, radio story, multipart investigative series or an online news article about eggs. And by boiling down the finite universe of actions and outcomes that constitute the process leading to impact, we think that we’re starting to see similarities that can be used across media organizations and content types.
What’s missing is an internal system to help classify different types of change that can occur as a result of media and a shared list of indicators – both quantitative and qualitative – that media can use to track and measure the impact of their own work. In short: We need a taxonomy for impact.
Sketching out an initial framework for that taxonomy was the key goal during our Dissection E Workshop at Columbia University earlier this month. Co-hosted by CIR and Columbia’s Tow Center for Digital Journalism, with support from the University of Southern California’s Media Impact Project, the daylong brainstorm invited a small group of stakeholders to discuss specific types of actions that demonstrate outcomes as a result of media.
To kick things off, participants got a tour of three ongoing efforts to track and measure impact: the Media Impact Measurement Methodology from USC’s Media Impact Project; Newslynx, a project of Tow Center fellows Brian Abelson, Michael Keller and Stijn Debrouwere; and the Outcome Tracker, CIR’s internal tool for cataloguing real-world change.
Each tool is a work in progress, but the three share complementary approaches and features that, together, can offer media organizations a more robust picture of how their work is performing both online and in the real world. (You can see an example of how CIR used the Outcome Tracker to analyze the impact of our Rape in the Fields investigation here.)
From there, participants began the core exercise for the day: identifying and categorizing various types of actions that constitute impact. By jotting down individual examples of real-world change on Post-it notes, we compiled an expansive list that helped outline the scope of “impact.”
Once the ideas were all plastered up on the wall, we began grouping them into related categories. Some of the key categories that emerged were legislative action, institutional change, media response and collective action.
Segmenting these indicators of impact helped cement and connect some of the common outcomes that are shared across media organizations. But the real challenge is structuring them in a taxonomy that’s flexible enough to let different organizations pick the indicators that are most meaningful to them.
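To make the shape of that flexibility concrete, here is a minimal sketch of how such a taxonomy might be modeled: categories hold indicators, and each organization selects only the categories meaningful to it. The category names come from the workshop; every field name and example indicator below is hypothetical, for illustration only, not a description of any actual tool.

```python
# Illustrative sketch of a flexible impact taxonomy. Category names are from
# the Dissection E exercise; indicator examples and structure are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    kind: str  # "quantitative" or "qualitative"

@dataclass
class Category:
    name: str
    indicators: list = field(default_factory=list)

# The key categories that emerged from the Post-it exercise
taxonomy = [
    Category("legislative action",
             [Indicator("bill introduced citing reporting", "qualitative")]),
    Category("institutional change",
             [Indicator("internal policy revised", "qualitative")]),
    Category("media response",
             [Indicator("follow-up stories by other outlets", "quantitative")]),
    Category("collective action",
             [Indicator("petition signatures", "quantitative")]),
]

def select(taxonomy, wanted):
    """An organization picks only the categories relevant to its goals."""
    return [c for c in taxonomy if c.name in wanted]

# A hypothetical newsroom that cares about two of the four categories
newsroom_view = select(taxonomy, {"media response", "collective action"})
```

The point of the sketch is the selection step: a shared vocabulary of categories and indicators, from which each organization assembles its own subset rather than adopting the whole list wholesale.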
With a day of discussion in our pocket, CIR and USC are now in the process of synthesizing the key takeaways to produce such a taxonomy. We’ll share a draft in the coming weeks (sign up for our newsletter so you don’t miss it!) and ask for your feedback.
With your input, we hope to create a flexible framework that can help media organizations both define and measure the real-world changes their work can provoke, and better communicate with the public about why their work matters. We want to promote a dialogue for evaluating journalism that goes beyond Web metrics and the pursuit of clicks and into something deeper, even if it requires a bit more work.
As always, this is an ongoing conversation, so stay tuned for more updates. In the meantime, if you have ideas or feedback on how a taxonomy for impact could benefit you or your media organization, let us know in the comments or email us at email@example.com and firstname.lastname@example.org.
And if you’d like to join us for Dissection F on Sept. 24, before the Online News Association conference in Chicago, drop Green-Barber a line: email@example.com.