Reimagining impact
Impact means different things to different people, depending on things like positions of power, units of measurement, and personal biases. So is impact really the thing we should be striving for, or is there something else we should be considering?
In this episode, we speak to two leaders who are reimagining impact evaluation systems to make them more meaningful and effective.
Featured in this episode:
- Adrian Brown (Host), Centre for Public Impact
- Pravallika Valiveti (Co-host), Centre for Public Impact
- Zazie Tolmer, Evaluation Practitioner & Director at CoIntent ApS
- Andera Delfyna, Learning & Innovation Expert at Light for the World
Transcript
[00:00:00] Samantha Yamada: This is a call to action for governments, for leaders, for all of you, for all of you to have the courage to measure the impact that we’re making because by measuring impact, we can actually create the momentum for impact.
[00:00:17] Adrian Brown: Hello, and welcome to Reimagining Government. My name is Adrian Brown. Now, you would think that we at the Centre for Public Impact know a thing or two about impact, but that all depends on what you mean.
Impact means different things to different people, depending on all sorts of factors, like positions of power, units of measurement, and personal biases. So is impact really the thing we should be striving for? Or is there something else we should be considering? Joining me today in this impactful conversation is the Global Senior Manager of Knowledge, Learning, and Impact at CPI, Pravallika Valiveti.
Welcome to the show.
[00:00:57] Pravallika Valiveti: Hi, Adrian. Happy to be here.
[00:00:59] Adrian Brown: [00:01:00] Now you have impact in your job title. So what better person to join me for this conversation? Let’s start though, with what does impact mean to you?
[00:01:11] Pravallika Valiveti: I can start off with the general definition of impact: essentially, impact means anything and everything that happens as an effect of an activity.
So inherently, there’s a cause-and-effect, attribution-related notion in the definition. You know the butterfly effect, where the wings of a butterfly flap and then there’s a tornado in a different city. That means we can never really trace the effect, or all the effects, of a particular activity.
So, my definition is that impact is all about perspective and what you give importance to. If there are numbers or metrics that are important to you as evidence, then that’s impactful for you. Otherwise, there are also mental models and shifts that you can witness. That can also [00:02:00] be impact. So it’s all about perspective.
[00:02:03] Adrian Brown: That makes sense, and you’re making an important distinction there between the theoretical simple model, where you have inputs, outputs, outcomes, impact, and everything can be traced through in a sort of linear, causal chain, one to the next, and what we usually experience in the real world, which is much greater complexity than that, where there are many things leading to many different outcomes, and they’re all interrelated in many different ways.
Right, am I getting that correct?
[00:02:34] Pravallika Valiveti: Yeah. And some of the consequences that you see may not all be because of the activity that you implemented in your program. I think all of us need to humble ourselves a little bit and understand that we work in a huge system with so many interrelated dependencies.
So maybe an activity did add up to a particular consequence, but maybe it’s not the only thing that added up to that consequence. All we can do is step [00:03:00] back and observe what has happened since we implemented a certain activity, rather than claiming that we are the ones who caused all of this.
[00:03:12] Adrian Brown: Who’s the first guest we’re gonna hear from today?
[00:03:15] Pravallika Valiveti: So the first person we spoke to was Zazie Tolmer.
[00:03:19] Zazie Tolmer: Hi, I’m Zazie Tolmer. I’m an evaluator. I work mostly with teams and organisations that work on systems- and complexity-informed initiatives. And I’m based in Copenhagen, in Denmark.
[00:03:32] Pravallika Valiveti: To help us unpack her work, Zazie offered a few fun analogies to describe what she does.
[00:03:39] Zazie Tolmer: I often think about evaluation reports as like the worst whodunnit in the world: from the very introduction, you know who’s been killed, why, and how. Right? And then the rest of the report is all about substantiating the statements that you’re making at the beginning. And I kind of liken the work I [00:04:00] do to trying to get onto a merry-go-round that’s already started.
What’s the opportunity? How much speed can I build up to get on and start working with these people? I think equally when you’re doing an evaluation, there’s a lot of kind of unknown or unrecognised work that you do, which is again about understanding the value and understanding the context, understanding the team, building up the relationships so you can have the real conversations, all of that, that takes a really, really, really long time and is ongoing throughout the activity.
[00:04:39] Pravallika Valiveti: The development sector that we work in already has a lot of structures within the evaluation field, as you mentioned. So, what are some of these structures that exist?
[00:04:50] Zazie Tolmer: So the evaluation field, right, is made up of evaluators, for sure, who will come in and do the evaluation as external evaluators, and [00:05:00] that sits within a particular paradigm.
And then there are a whole lot of people who do more kinds of monitoring and small evaluations, who are working in programs on the ground and in organisations that are actually doing the lion’s share of the work. There are lots of different ways in which people work in the field. Some are working for funders and donors, advising on evaluation policy, etc. Right. So it’s a complex space. Some people might have only worked in those kinds of positions, but often people move around quite a bit, so they do have a strong sense of what it’s like to be on the other side. I think people are trying really hard to make evaluation count, to make it worthwhile for the programs, and to do good evaluation that’s honoring and respectful of the work that’s being done, but that also brings in useful judgment and learning for the work. But the other thing we are being asked to do, which is [00:06:00] really interesting, is evaluators are being asked to help pitch to funders a different way of doing evaluation.
How do you help us pitch a better way of reporting and being in communication with funders in particular? How do we bring them into the room and have a different relationship with them? You know, monitoring, evaluation, and learning is now a bit of a bridge, potentially, to changing those dynamics and changing those relationships.
[00:06:28] Pravallika Valiveti: So what does impact mean to Zazie?
[00:06:32] Zazie Tolmer: When I think about impact, I like the simplest, uh, way of thinking about it, which is that it is simply the consequences of your actions, and they can be negative, positive, intended, unintended, et cetera. Right? There are lots of people who think about impact very precisely and who do not agree with that definition. They would say, actually, impact is a specific part of what you’ve just described. But nonetheless, what we’re implying is that there’s a causal relationship. And the other thing that I find interesting [00:07:00] about how we’re defining impact through a causality lens is it’s very intervention centric.
And when you move into systemic and complexity work, you are part of the system. So you’re part of what you’re trying to change. You’re never acting on your own; you’re not gonna change the system on your own. So it’s always about collective effort, basically. And unpacking causality becomes a lot more complex.
The other thing I find really interesting to challenge, and maybe this is my personal take on the word impact, but I find it quite disempowering. It feels like something we do to others. Shifting into system and complexity is an opportunity to also shift into thinking about the work we do with a different ambition.
The reason why I’m working in this space, and the reason I see a lot of people working in this space, is because there’s inequity in the world. Ultimately, it comes down to discrepancies between people’s access and life experience. So if we are moving into that space, and if we [00:08:00] accept that we’re trying to deal with deep systemic inequities, inequities that are felt across the system, so that some are benefiting and others are not, then impact becomes a matter of perspective, right? How you experience change. And I think something very interesting happens there. There’s no end goal anymore. It’s ongoing work, right? We’re always going to be dealing with inequity, and with how we think about our contribution to that change, how we think about who’s experiencing impact, and whether impact is the right word, or whether we should be thinking about something else.
Thinking about impact in system and complexity brings up some really interesting challenges.
[00:08:42] Pravallika Valiveti: So this is a perspective that Zazie developed over time as she continued her work as an evaluator. She mentions a point in time when she noticed problems with how evaluation was traditionally done, and she found herself questioning the effectiveness of evaluation reports, and whether they were being [00:09:00] impactful and causing the right change.
[00:09:01] Zazie Tolmer: So the company I used to work for in Australia is called Clear Horizon, and they’re a dedicated evaluation company. And every financial year, towards the end, right, in the last three or four months, people are trying to spend their money, and they’re going, oh, let’s do an evaluation. Or there would be a lot of end-of-investment evaluations.
We got to a point where we decided to stop doing those because we just were like, well, what is the point? First of all, no one actually really uses these, like they don’t get published. I’m not sure they fully use them from a learning perspective. It’s hard to know what actually happens.
But the other thing is that it felt like it was often focusing on things that were actually less important, less at the core of what the interventions were trying to do. When you do impact evaluations on programs, those impacts are often [00:10:00] decided during a design phase, which is often two or three years before the program starts. You start implementing, and then you’re doing your impact evaluation often three years later.
And so the thinking you’re basing your evaluation on is six years old, plus everything has changed around you, right? So there are a lot of assumptions that I think need to be revisited. And I remember speaking to an evaluator friend who I used to work with. We were at a consultancy together, and she then went over and started working on the programming side, right?
They were evaluated, and she said it was a horrible experience. They did a very extractive, fly-in, fly-out two-week data collection session and then produced this report, which she said completely missed all of the amazing work the team had been doing and the actual change it had been creating. Also, often with these programs, you get this impression that they’re these [00:11:00] three-year windows, but these programs are often just repackaged, right? So they can get more funding. And maybe it’s nine years of work, or 12 years of work. You don’t know. And I think that really stayed with me. So when you’re asking what to look for, I like the idea of doing more goal-free approaches or open, explorative approaches, particularly in systems- and complexity-informed work.
Right? Because we, particularly as evaluators coming in, are not gonna know. And if we’re just doing an accountability exercise, well, let’s make it a quick one that’s based on documents or something, because otherwise it feels like a very irresponsible and unfair assessment of so much work, and of the resources that go into everyone’s work, right?
It doesn’t matter whether you’re an international program, or [00:12:00] you’re working in a government service, or what have you.
[00:12:03] Pravallika Valiveti: when approaching impact valuation, Zazie shared these helpful points to consider before getting stuck in.
[00:12:09] Zazie Tolmer: The first thing I think we should all do, and I forget to do this, but I see a lot of people forget to do this, is first of all, write down what you mean by impact and what the implication and assumptions are below that, right?
Because we don’t; we just jump in and start to do impact evaluations without actually laying out the implications of how we are using that term. Secondly, I would really challenge the term: given how we’ve described it and how we understand it, is this actually still what we wanna be using when we are thinking about evaluating the merit and worth of what we’re doing?
Is it just about impact? And if it still is, then I would also question: are you thinking about impact only as what is at the end of the [00:13:00] causal chain, which we might never get to, by the way, or are you gonna think about it with more complexity, at the different layers within that causal chain, but also within the system?
There’s this idea that it takes a system to change the system, so where do you start? Well, let’s actually talk to the people we believe are not getting the most out of the system, or who are experiencing negative effects from it, and let’s start better understanding where they’re at, et cetera, et cetera.
So, the designers, the funders, the people who are basically getting this thing started, are working in very different ways that are much more about changing that power dynamic and flipping the expertise towards people who are least benefiting from the system, if you like. And then equally, that’s the challenge for the evaluation, too, right?
So how do we do that? And again, there are lots and lots of techniques and tools to help us do that. [00:14:00] The challenging thing is where you have the paradigm clash, where you know people from paradigms that are more kind of quantitative will really challenge the rigour of those approaches and the way in which they’re being delivered.
Lastly, I would really encourage us to separate the terms impact and measure. What can be measured is not that many things, and in the way we are often framing that, you know, we are saying we care about this outcome for people, we’re taking an indicator, which is a tiny slice of that outcome, and then we’re trying to measure some contribution to it.
And then we’re making all these assumptions about the value of that. And the other thing about impact measuring that I see all the time is, and see this, for example, with CO2 emissions, there’s a lot of reporting on we’ve reduced, you know, this many whatever, CO2 emissions in [00:15:00] the last year, or through our investments or what have you.
When you report those things, are you actually really reporting impact, or are you just reporting that you’ve been really busy? You know, that’s what it sounds like to me sometimes. It’s like waste: people report about how much less waste they’ve produced, for example, but we’ve got no idea of how much waste we’ve actually produced.
So like, how do we actually use these measurements? What kind of judgements are we making? What are we trying to insinuate? I think a lot of that’s not unpacked enough, and we’re just like sheep going over the cliff all the time when we use these terms.
[00:15:44] Adrian Brown: So I really like everything that Zazie is saying there. And of course, we try to incorporate many of these ideas into our work at CPI, and, Pravallika, you’ve been a huge part of that [00:16:00] conversation over the last year or so, as we’ve thought about how that connects in with our impact story. So, I suspect you’ll say that you agree with what Zazie is arguing for.
Because I think it’s sort of CPI’s core philosophy, really. But what are your reactions, what are your summarising thoughts, as we end this section?
[00:16:18] Pravallika Valiveti: Yeah, that’s absolutely my thought, Adrian. That’s what we’re trying to do within CPI. Maybe I can share a bit of experience if we all agree that this is the way to go ahead.
What I am dealing with now is step two. So what does it really take for us to define impact as a team? Who all needs to be a part of this conversation? How would the conversation pan out if a few people or a few stakeholders are missing? Something I can flag is that it’s a very emotionally heavy conversation.
It’s not something as logical as a metric could essentially be. To acknowledge in the beginning what impact means to us is something that’s actually not very easy to do. [00:17:00] We’ve spent hours looking at different dimensions of impact. What does impact mean in terms of equity?
What does impact mean in terms of shifting power, or making a systemic change, or changing mindsets and beliefs around a certain topic? And what I’m learning, more and more, is that it’s all about organisational culture. The culture that people have around talking about these issues does not come naturally; it takes a lot of effort to get to that stage.
[00:17:30] Adrian Brown: Yeah. And you’ve been enormously instrumental in helping us at CPI, which, as we said at the top of this podcast, has impact in its name, right? Helping us to have those difficult conversations and to be willing to step into a perhaps less comfortable space when we are asking about the impact we’re having as an organisation, and of course the impact of the work we are doing with partners around the world. And it’s not [00:18:00] easy at all. I would perhaps go as far as to say that if you are in an organisation that’s finding impact easy, you may be missing something important. Perhaps that’s a warning sign, if it’s a very straightforward conversation that you’re having.
Well, after the break, we’ll be hearing from another expert in impact about how legacy should play a role when measuring this type of work. Don’t go anywhere.
Welcome back to Reimagining Government.
[00:18:39] Andera Delfyna: Hello, I’m Andera Delfyna. I am a learning and innovation expert at Light for the World. Their work improves health systems, enables education for all, and amplifies the voices of people with disabilities in the workplace and beyond. I currently work in disability-inclusive development.
I started off [00:19:00] very heavy into monitoring and evaluation, but now I’m in kind of a niche space around learning, knowledge management, and also innovation management. There’s an innovation space within our programming, and it’s really an open space to look through challenges that we face in programming, involve different stakeholders in co-creation processes to figure out new solutions, and test them out in programming and see how that works.
Pravallika Valiveti: So what problems has she seen whilst working in the impact measurement space?
[00:19:35] Andera Delfyna: Impact measurement is seen as something to be done halfway through a program and at the end of the program, and a lot of times, it tends to be a box-ticking activity. It’s in the work plan; we have to do this, and the donor asks for it.
You know, those are the reasons that people tend to do evaluations, and there’s a challenge when it comes to that. Sometimes it’s too late. You know, halfway through a program, or at the end of the program, [00:20:00] even if you identified something that was going grossly wrong, it’s too late to pivot and do something different.
Also, they tend to be report cards, you know, for a program or organisation, based on how well they did. Also, you involve external parties most of the time. This is someone external coming in to really investigate what you did and score you. Combine this with evaluation approaches that are really numerical and statistical, and, working in social impact and development, it’s impossible to really rely on those very quantitative approaches to measure an impact. You know, we’re working in such a complex sector. There are so many issues interwoven together that it’s really impossible for one program to completely unravel them. There are so many social issues, and so many sectors that really need to work together to really create things.
[00:20:59] Pravallika Valiveti: Andera says [00:21:00] impact is much more than a box-ticking exercise. Impact is the impression a project makes on the participants. It’s legacy.
[00:21:08] Andera Delfyna: I look at impact as a legacy. You know, what is the legacy of us as organisations, what’s the legacy of us as programming? What are we leaving behind with our target communities, and beyond?
If I look at impact that way, in relation to legacy, then it becomes about more than meeting donor requirements; it becomes about more than, you know, executing a program and ticking those boxes. It becomes about how we can utilise our unique position as a program. Each program has that. It can come from the resources you have, whether financial or human, the expertise that you have on board, or the partners that you get to work with.
Some are in a position to be really influential on a wider systemic level. Even if it’s a community-based [00:22:00] organisation, you still have a unique position in the program, being close to the ground and being able to see at such a micro level what’s happening, what works, and what doesn’t.
[00:22:10] Pravallika Valiveti: It’s worth noting that this is not the legacy of us as individuals but of the work itself. Knowledge needs to be shared in order to reach this shared sense of legacy.
[00:22:21] Andera Delfyna: So I feel like sometimes we tend to guard our knowledge as organisations or programs, and sharing with other like-minded organisations that may be in the same field, you know, can be looked at almost as helping the competition.
But what’s the point of watching them head down a path you’ve already gone through, and discover the same things down the line, when this is something you could have shared earlier on and enabled them to do better? You know? So to me, knowledge should be fluid. It should be something that’s shared, and we should be intentional about sharing it with others as well, just so that we do [00:23:00] better as a sector.
[00:23:02] Pravallika Valiveti: As well as sharing knowledge between projects, it’s also important to ensure the participants have their say in the program design.
[00:23:09] Andera Delfyna: So in our programming, we’ve been really intentional about program participants being not just at the receiving end of interventions but also part of determining what those interventions are. I feel that in traditional approaches to program design, we tend to take away a lot of voice and agency from program participants; you know, they are the recipients of something, they’re the beneficiaries. We decide what you need and what will really benefit you. And then: here, take it, you know, and then report back and tell us how great we are. But what we’ve been intentional about is involving the people that we intend to directly impact in the program’s design, its implementation, and reporting about [00:24:00] what’s working and what’s not, which also informs the adaptation of the program, all the way to evaluation. And we’ve done that quite successfully, and we’ve continued to see how we can expand that and really cement it as the way to go.
So program participants, in the truest sense of the word: they are participating in our program, from designing it to benefiting from it, but also reporting back and giving us really constructive input, enabling us to do better as well.
[00:24:30] Pravallika Valiveti: Allowing the participants to have a voice in projects has a profound effect on their enthusiasm when reflecting on the work.
[00:24:36] Andera Delfyna: If you notice, a lot of the time, organisations tend to refer to programs with the name of the donor: program X, Y, Z, or the program with X donor, or something like that. If we really investigate that language alone, you know, there is this detachment: this is not ours; this is for this donor. And it goes beyond the language to the mindset [00:25:00] that people have when they are implementing a program.
You know, the way that you treat something you’re doing as a favour to a friend or an acquaintance isn’t the same way you approach something that you feel is yours. So in really taking this approach of involving program participants, including, you know, implementing staff, in designing a program and deciding what the approach is, what we’re really trying to achieve, and how we achieve it, there is this level of ownership that really grows.
[00:25:34] Pravallika Valiveti: Andera shared an example of where this happened during her work at Light for the World.
[00:25:39] Andera Delfyna: We did this in a program with young people with disabilities, working with them to investigate the challenges when it comes to big companies and organisations, and why they feel they cannot be inclusive; involving them in that pre-research before a program, in that kind of barrier analysis of what’s really going on on their end, and really empathising from the side [00:26:00] of the mainstream actors with what they feel around inclusion and some of the fears and concerns that they have. They also came in with their own lived experience of having disabilities, bringing this together to figure out, okay, how can we change this? When they talked about the program, it was theirs.
It was: we did this, our program does this, this, and that. One of the things that came up from that process was also them determining what they felt was the biggest impact the program had. One of the things that they mentioned was the increase in their voice and agency, their feeling that their voice mattered.
You know, feeling that they have a say in what their life becomes, and that they can influence people to change. And also seeing the change in people’s attitudes and mindsets, you know. In the monitoring and evaluation framework, you have targets around, maybe, opportunities that are opened and companies that commit and make changes towards [00:27:00] inclusion and the like. But the program participants are saying: no, the confidence that I’ve gained is the biggest impact that this has had.
[00:27:10] Pravallika Valiveti: Finally, I asked Andera her thoughts on how evaluating impact should be adapted based on her learnings. Here is what she had to say.
[00:27:18] Andera Delfyna: If the approach to evaluations is as report cards, you know, a chance to really show a donor what you did and whether you were value for money; this is the end, and we really want to work with this donor again, so we really have to shine and show that we did something; then there’s a lot of pressure that comes when evaluations are framed in this way. If, instead, it’s intentional and it’s known that we do these evaluations to learn and do better, in terms of program teams opening up about what’s really happening and their role in a program, and them knowing that their suggestions are valid [00:28:00] and can be taken in and used to adapt programming, there is also that increase in voice and agency for teams on the ground, and in the power of their experiences to create change.
And you know, you can never underestimate where that can go and how that can really change approaches to evaluations and what we discover from them.
[00:28:31] Adrian Brown: So again, I found myself agreeing with a lot of what Andera was talking about there, and many of the things she was describing sounded similar, or certainly had similarities, to what Zazie had been describing in the top half of the show. But Pravallika, from your perspective, how do you compare the approach that Andera has explained to us with what we heard earlier from Zazie?
[00:28:58] Pravallika Valiveti: Absolutely. There [00:29:00] are three things that really stand out for me. Both of them talked about impact work as a journey rather than a destination. They mentioned that impact is not something that you do to prove a point as a piece of evidence; instead, you can use the activity in a more meaningful way: using it to learn, using it to provide agency and voice to stakeholders. Sharing power was another connotation that I could see. Then they also mentioned involving the system, and spoke a lot about program participants and moving away from the word beneficiaries, which is a more passive role, giving them a more active role by making sure they’re part of the discussion.
And, as you also mentioned, the wider system and the different web of stakeholders that function in the system. And then they also shared a little more about the intention behind this whole activity. There, Andera spoke about report [00:30:00] cards, and, you know, instead of it being a report card or a box-ticking exercise, can we make this a little more about learning?
[00:30:07] Adrian Brown: Yeah. I’m reminded actually of the episode we did on paradigm shifts, where we were looking at how, if you come from a paradigm which is based around management and measurement and rational problem solving, that’s just a completely different perspective from one which is informed by complexity and systems and emergence.
It seems to me that both our guests today have been arguing for a paradigm shift within the impact conversation. It’s almost like a litmus test: If you ask somebody to talk about impact, you’ll quickly realise where they are on that spectrum or which paradigm they’re sitting in.
And so I’m now thinking of impact almost as a gateway, a door you can step through [00:31:00] to get into the broader conversation about paradigms and the mental models we’re bringing to all of our work. Not just the evaluation and impact aspects; it’s actually the door you can step through to ask, well, how are we even approaching everything we’re doing? Does that make sense?
[00:31:18] Pravallika Valiveti: Absolutely. I think that’s the gap we’re bridging: between evaluation activities, where it’s a questionnaire and we are writing things down and pulling things together, and thinking about impact from the beginning and understanding the system. This would help design better programs. This would help us deliver programs in a more effective way. And it’s just that added perspective; that small change would probably lead to a bigger shift in the way we work, and even the way we show up for work. If the definition we have for the kind of objectives we wanna reach through a program is not just one-dimensional but [00:32:00] multidimensional, then I believe that it changes how we show up for work.
It changes how we take decisions, and how we choose who needs to be in the room, and who does not, when decisions are taken. So you’re right, it’s a doorway to many more things that it’s connected to.
[00:32:19] Adrian Brown: And that’s why your job is quite difficult, Pravallika. So, thank you for everything you do at CPI.
And that’s been a fantastic conversation. Thank you for helping us navigate through it, and that concludes this episode of Reimagining Government. Thank you to my co-host, Pravallika. And if you’re a public servant or policymaker, we want to hear from you. What does impact mean to you? You can call into the show through our answer machine: head over to speakpipe.com/reimagininggovernment and leave us a message. Please be aware that we may play these out on the show. If you prefer, you can write to us the traditional way: email comms@centreforpublicimpact.org to let us know what topics we should cover in future episodes.
And finally, please remember to leave us a review on your favourite podcast platform and let us know your thoughts on the series. Until next time, I’ve been Adrian Brown. Goodbye.