
‘Double literacy’: harnessing AI for humanitarians and social change 

💬 “AI is not happening to you, but it’s happening with you. And no matter how overwhelming it might feel, you always have a choice. It’s important to be aware of that and to take a proactive stance when it comes to AI.”
– Dr Cornelia C. Walther

Listen to the episode, now streaming on Spotify, Apple Podcasts, Amazon Music, Buzzsprout and more!

In this thought-provoking episode, Ka Man Parkinson speaks to Dr Cornelia C. Walther to hear her expert take on the implications of AI for humanitarians – and actions we can take today to keep abreast of developments.

Cornelia’s experience positions her as an insightful and authoritative thought leader in this space: a former humanitarian leader with over 20 years of experience at UNICEF, she is now a Wharton/University of Pennsylvania fellow pioneering research on hybrid intelligence and prosocial AI.

In this discussion, Cornelia introduces us to the concept of ‘double literacy’: the mutual influence between artificial and human intelligence – and the dual fluency needed to navigate both. Cornelia explains how understanding AI algorithms and adapting your mindset can help you curate your own AI, gaining deeper insights into both the technology and your own thinking.

Explore how developing this mindset can drive innovation in the humanitarian sector – empowering practitioners to use AI intentionally, stay grounded in ethics, and adapt with clarity in fast-changing contexts.

Tune in for new ways to rethink your AI approach and lead with purpose in the face of rapid societal and technological change.


Listen to the episode, now streaming on major platforms including Spotify, Apple Podcasts, Amazon Music, and Buzzsprout.

Keywords: AI for humanitarians, hybrid intelligence, double literacy, algorithmic literacy, critical thinking, prosocial AI, silent issues of AI, humanitarian coaching.

About the speakers

Cornelia C. Walther, PhD, is a thought catalyst in hybrid intelligence and prosocial AI. Her work combines theory and practice. As a humanitarian practitioner, she worked with the United Nations for two decades in large-scale emergencies in West Africa, Asia, and Latin America, focusing on hybrid advocacy and social and behavioural change. She collaborates with universities across the Americas and Europe as a lecturer, executive coach, and researcher.

She is presently a senior fellow at the University of Pennsylvania’s School of Dental Medicine and the Wharton Initiative for Neuroscience (WiN)/Wharton AI and Analytics. She is affiliated with MindCORE and the Center for Social Norms and Behavioral Dynamics at the University of Pennsylvania. Since 2021, her focus has been on aspirational algorithms and the potential to harness technology for social good – in short, artificial intelligence for inspired action (AI4IA). In 2017, she initiated POZE (Perspective, Optimisation, Zenith, Exposure) in Haiti, which has since grown into a global transdisciplinary network of like-minded researchers and practitioners committed to inclusive social change.

Ka Man Parkinson is a Communications and Marketing Specialist at the Humanitarian Leadership Academy, where she leads on community initiatives including the Fresh Humanitarian Perspectives podcast and the HLA webinar series. With nearly 20 years of experience driving international marketing and communications across the nonprofit space, Ka Man has led impactful campaigns for the British Council and UK higher education institutions.

Passionate about creating meaningful change through compelling storytelling, Ka Man crafts audience-focused content that informs, connects and inspires global communities. She holds a BSc in Management and IT, an MA in Business and Chinese, and a Chartered Institute of Marketing (CIM) Professional Diploma in Marketing. Ka Man is based near Manchester, UK.

Highlighted resources

Read: Cornelia is a Forbes contributor, writing regularly about AI. Explore her articles.

Learn: Read about Cornelia’s POZE paradigm which she highlights in this discussion.

Connect: Cornelia is also part of the HLA pro bono coaching pool. Explore coaching and mentoring opportunities through the HLA.

Explore: During this discussion, Ka Man mentioned the GANNET AI tool which was demonstrated at our Humanitarian Xchange webinar by Karin Maasel from Data Friendly Space. Watch the webinar recording.

Did you enjoy this episode? Please share with someone who might find it useful! Let us know which aspect of AI you’d like us to explore next!

Feedback/enquiries: please email info@humanitarian.academy or connect with us on social media.

The views and opinions expressed in our podcast are those of the speakers and do not necessarily reflect the views or positions of their organisations. 

Episode transcript

[Music]

Welcome to Fresh Humanitarian Perspectives, the podcast brought to you by the Humanitarian Leadership Academy.

Not a day goes by without hearing something about artificial intelligence. It’s already transforming the way we work, and – like me – you might be finding the pace of change dizzying.

But what does AI mean for us here in the humanitarian sector – where systems matter, but people matter most? Is there a disconnect? And if so, how do we reconcile this? What does that mean, practically speaking?

I’m Ka Man Parkinson, and in today’s episode, I’m delighted to share an illuminating discussion with Dr. Cornelia C. Walther – a seasoned humanitarian leader turned AI researcher.

Cornelia offers thought-provoking and practical reflections on our relationship with AI. We’re at a watershed moment: AI is happening with us, not to us, she says – and it’s up to us to define that relationship.

Tune in – I’m sure you’ll find this conversation as eye-opening as I did. And afterwards, let us know: what stood out, and where should we go next in this exploration of AI for humanitarians? We’re here to share this journey with you.

[Music fades]

Ka Man: Hi Cornelia, welcome to the podcast!

Cornelia: Hi Ka Man, thank you for having me.

Ka Man: Oh, it’s so great to have you here, and I’m really excited to discuss AI and dive deeper into this really important topic.

So to start off, could you introduce yourself to our listeners and tell us a little about your journey from humanitarian leadership to your current work in research, including in AI?

Cornelia: Sure. So I started out actually with UNICEF, working for 20 years in humanitarian operations, mostly in West Africa, but also Afghanistan and Haiti. Then I worked in headquarters, developed UNICEF’s global communication strategy, and then decided it was time for radical change and left. I started to refine and write down POZE – a paradigm for social change via individual transformation that I had prototyped whilst working in Haiti, and which looks at individual behaviour and experience as a composition of twice four dimensions: aspirations, emotions, thoughts, and sensations at the individual level, and individuals as part of communities, countries, and the planet at the collective level. So if we really want to bring about sustainable social change or individual transformation, we need to not only look at each of these dimensions, but start to recognise and influence their interplay.

So I wrote a first book about that, which was published and found appeal, so it led to a second one, which applied this POZE paradigm in the institutional context – in particular, in humanitarian operations. Then Covid happened, and it seemed like we needed a holistic understanding now more than ever. So I wrote another book, which also came out.

And then, starting in 2021, I became more interested in the potential of technology for positive social change at scale, wrote a book about aspirational algorithms, and then looked deeper into inspiring leadership and wrote a book about that. And then, since January 2023, I’ve been a Senior Fellow at the University of Pennsylvania, where I’m, on the one hand, a Senior Fellow at the Wharton Initiative for Neuroscience and, on the other hand, at the School of Dental Medicine, working on prosocial AI and hybrid intelligence. And my background – actually, my PhD was in law. So I’m a very out-of-the-box person, I guess.

Ka Man: Amazing, amazing, Cornelia – what an impressive background you have! You’re something of a polymath, I think, with all the different hats that you wear and the different research interests that you have. I like the way that you said, seemingly with ease, “and I published a book on this [laughs], and I published a book on this” – but obviously that’s no mean feat, those steps you took along the way.

Cornelia: I guess that just happened. I never planned it that way. But maybe two other things that might be of interest: one, I’m also a pro bono coach in the Save the Children Coaching Alliance – and I think that’s how we met at the beginning – and two, I’m a trained yoga and meditation teacher.

Ka Man: Wow! Amazing. You have a very holistic view of the world, then, sort of scientific and spiritual, I guess, as yoga can be a spiritual practice. I like it. I think that’s probably a conversation for [laughs] maybe a separate discussion. But wow, really interesting.

So I just wanted to go back a little to what you said about your experience in humanitarian leadership working in communications, and you’re working in different fragile contexts. And then you said that in Haiti you started to work on this paradigm, the POZE paradigm – is that how you call it? So could you just tell us a little bit more about that?

Cornelia: So POZE stands for inner peace, and it’s actually what Haitian colleagues came up with when they experienced it the first time, because they said, well, this is actually POZE – inner peace – and it just stuck. And over the years I realised that it’s also a handy acronym for the four core pillars that POZE is all about: P for a new perspective on life and living, O for the optimisation of interplays, which leads to the successive achievement of Z – zenith – and E, exposure to life as it is and could be.

And it’s also a meditation exercise that we can do at the end if we still have time. And I started that because I realised that we are always talking so much about telling people what they have to do and what they should acquire from the outside in, but actually the most sustainable skill or asset that you can offer somebody is to help them find everything that they already have inside.

So this famous word, resilience, is something that I think has been used and overused too many times. But that’s what POZE very practically seeks to nurture. And yeah, so the tagline of POZE, if you like, is change from the inside out, and I firmly believe that is the most important thing we can offer anyone: to find all the resources that they have inside to flourish.

Ka Man: That’s fascinating. So out of interest because you mentioned, like you say, I was introduced to you via Charlotte Balfour-Poole, who’s our Head of Coaching here at the HLA. And I’m curious, so this POZE paradigm, does this inform your work as a coach, then does this underpin your sort of philosophy?

Cornelia: Yes, very much so, because it’s looking at sometimes overwhelming situations of personal or professional change in this multidimensional way, which makes it more tangible and approachable, because you can distil it into the four components – the aspirational, the emotional, the intellectual, and the material side of things. So you can choose the entry point that feels most comfortable to address right now, but which is connected to the other ones. So if you think about yourself, it’s like this four-dimensional spiral where one part is part of the others. So it’s a continuum. And each of us is this organically evolving kaleidoscope, and when you address one part of that kaleidoscope, then the other ones are changing as well. So it’s a ripple effect. But you have the choice: where do you kick that ripple off?

And the four macro questions that derive from that, which I like to use very much in coaching and which have turned out to be very helpful, are the why, who, where, and what. So why are you here? Who are you as a human being? Where do you stand in life? And what are you doing to align your aspirations and your actions? And I would actually invite listeners, even if this has nothing to do with AI, to maybe think about these questions whenever they have a free moment.

But linking this to AI now: it’s this multidimensional understanding that underpins POZE which also underpins what I call natural intelligence, which I position as the counterpart that stands in complementarity to artificial intelligence. Although one might argue that the ‘intelligence’ of artificial intelligence is actually a misnomer, because natural intelligence – human intelligence – is so much more multidimensional and complex than what we find in the AI space.

Ka Man: That’s fascinating. I can see that naturally you’re grappling with all the big questions – that’s the way that you’re viewing the world, and you’re coming at it from what you described with your previous experiences. So that explains the progression, if you like, into AI. And like you say, it’s not artificial as such – it’s not a discrete binary of natural intelligence versus artificial intelligence. It’s much more holistic than that. So that’s something that we’ll delve into a bit more shortly.

But yeah, when we had a quick chat before today’s podcast discussion, I said to you at the time that I think you’re probably quite a, in inverted commas, ‘rare species’ [laughs] – someone with such rich humanitarian leadership experience with UN organisations, working in so many different contexts, as well as now your experience in academia, research and AI. I don’t think I’ve encountered anyone with your profile.

So I want to know – I’d be quite interested to hear: what’s one key lesson or insight that you’ve learned from these experiences at this intersection of AI research and the humanitarian sector that you’d like to share with our listeners?

Cornelia: I think there are at least two that keep on intriguing me. On the one hand, there’s this aspect that all this hype around AI, in my sense, is actually an invitation to take a step back and face our own humanity. Because whatever happens right now in the AI space is actually a reflection of what has happened in humanity ever and again.

And there’s a lot of discussion about what is going to happen with AI – about the gloom and glory. People are either very, very excited about all the efficiency and effectiveness and how fantastic AI is, or there’s a lot of gloominess: oh my God, AI is so terrible! It’s going to eradicate all the jobs, and in the meantime – and in the long run – actually decimate humanity and the species overall.

And I think what both camps are missing is that AI is a mirror, and it’s as good or as bad as the humans that design, deliver and deploy it.

And we can’t expect the technology of tomorrow to live up to values that the humans of today do not manifest. The old saying ‘garbage in, garbage out’ still holds true. But we have a choice in that regard: values in, values out. And that’s up to us.

And that then leads me to the second part of this, which is the potential of prosocial AI, which refers to AI systems that are tailored, trained, tested, and targeted to bring out the best in and for people and planet.

And again, this starts with human intentions. So in a way, it’s very non-technological. But it comes back to something else: agency amidst AI.

Because we are right now navigating a very slippery slope: experimentation with AI leads to gradual integration of AI, and from there it’s very treacherous territory that leads to reliance and full-blown addiction. You and I and our listeners, I presume, had the opportunity to grow up in a time and age where AI was not omnipresent – although you could argue that, actually, the microwave is a very rudimentary form of AI. But not the kind of generative AI that popped onto the scene in November 2022, when OpenAI released ChatGPT.

But the next generations won’t have that luxury. They will be born into a world where AI is increasingly everywhere. And where the human being, lazy as it is, just has an abundance of opportunities to offload cognitive effort.

And that is very dangerous, because if we can, then we walk the path of least resistance – and the brain being like a muscle, we use it or we lose it.

And in that sense, agency is something that I feel requires double literacy, which encompasses, on the one hand, human literacy – an understanding of this holistic composition of who we are, of self and society – and, on the other hand, algorithmic literacy: not just what’s the latest tool, but how it works and why.

Ka Man: That’s all really fascinating. There’s so much I want to ask you. Just delving into double literacy: with this algorithmic understanding, how can we go about developing that? So, me as a layperson, I’m tinkering with tools like ChatGPT or Copilot on a day-to-day basis, and I experiment to see what I’m putting in and, with the different variables, what comes out. But how can I, without that expertise or access to more powerful systems, make the system less opaque? How can I delve into that and understand how the algorithm is working under the bonnet?

Cornelia: Something that is really useful, and which I have found working with teachers and with people from various different sectors, is to start to curate your own critical thinking: in each interaction that you have with AI, think about your own thinking. To start curating this double type of literacy, it’s useful to systematically cultivate curiosity and observe your own thinking.

So in a way, metathinking, which can be done using the four A’s of the A-frame: awareness, appreciation, acceptance, and accountability. And that links back to being aware of how you interact. So in a way, observe yourself whilst you are typing into ChatGPT, or whilst you are putting a voice prompt to Copilot.

But don’t do it as though you’re just delegating – remain in the driver’s seat. Then appreciate what you get, but at the same time appreciate where your own contribution lies in this. Accept where the limitations of both are, and in all of that, remain accountable.

I think this sense of accountability means that no matter how powerful our systems are or become, we as human beings remain accountable for what we put in, but also for how we use what comes out.

Ka Man: That’s really interesting. Thank you. So are you kind of saying, advocating that we are much more intentional about how we’re using AI systems and being conscious of our own actions, what we’re asking it to do, what we’re expecting the systems to do, and then being aware of how we sort of combine our own thinking, together with what AI is generating. Is that kind of what you mean?

Cornelia: Yeah, I think this deliberate intention is really the key to a positive cohabitation with AI, because there are four silent AI issues, if you like, that I’ve named the ABCD of silent AI issues, which are important to keep in mind as we’re getting ever more excited about our technological toys. The A is agency decay. Then B, bond erosion – because let’s face it, there are more and more people who establish an emotional connection with their chatbots. There are studies coming out showing that they use AI as a coach, as a companion tool, as a teacher. And all of that is fantastic. But right now we’re gradually moving into another stage where these tools will be embodied in robots.

Now beam yourself five years into the future. You will have an always-available, always-friendly, always-polite, intelligent someone – something – around you, at your fingertips. Why would you bother with another human being that is just as quirky and cumbersome as you are?

Then the third one is C for the climate conundrum, because it might not be visible, but free does not mean free. The simple fact that AI is a huge energy consumer is also important to keep in mind.

And finally, the D for divided society. We might tend to forget that, but for us working in the humanitarian space, it’s maybe easier to remember that this whole AI discussion right now happens in a very luxurious space. Yes, there are now supposedly one billion people using AI. But there are still a couple of billion people out there who do not even have access to safe water, food or electricity – and they are not represented in these AI systems that are increasingly influential in decision-making. So I think it’s important to keep this ABCD in mind when we’re thinking about AI.

Ka Man: So I know that you’re a prolific contributor to Forbes, where you’ve published a lot of articles on AI, on these topics that you’re talking about now. So we’ll include the link in the show notes, so that people who are interested in the concepts you’ve introduced can delve deeper into your thoughts there.

Just to follow up on something that you mentioned there: wearing your humanitarian hat, with all of the major challenges that you observed and experienced throughout your time in the sector, and now with the future-facing lens that you wear – where would you like to see AI go next in the humanitarian space?

I know that there are organisations and individuals who are proactively making strides in this area. For example, I mentioned to you, when we had a quick catch-up, that we had a Humanitarian Xchange webinar, where we had a guest talk about GANNET, an AI tool contextualised for the humanitarian sector.

So that was really well received by the people who were watching and listening and wanted to hear more. So if we’re looking a little bit further down the line, what would you like to see come to the market, so to speak, or what application would you like to see in the next year or two?

Cornelia: I think there are at least two. One is this aspect of optimisation. There’s a lot of bureaucratic work right now which is taking up people’s time – thinking about my time at UNICEF.

I started out spending a lot of time with the people we were there to serve – I started out in Mali and in Chad, in the border area to Darfur, and I spent a lot of time interacting with people in the field. Over my 20 years with UNICEF, I gradually noticed a change in the proportion of time that I spent with people versus the time that I spent on screens.

And I think AI can help us to take away some of that screen time and take over some of the more redundant and routine tasks that are more computer based.

And maybe open up more time for people actually to spend time with other people, be that with other colleagues, be that with people from other agencies or other organisations, be that with local communities or local stakeholders, but to open up more space for quality time with other human beings, which, in my sense, is still core to the humanitarian operation.

And the second one is the journey of continued learning. In that sense AI can be very, very helpful, in that it gives everyone the opportunity to constantly level up their skills. AI can be that personalised tutor that is always with you, helping you to gradually grow your own strengths and maybe also to unleash skills that you never thought you would have a chance to hone more systematically.

Those two areas are, in a way, looking at the micro level – the individual – and then at the meso level – the community, the interrelational. Now, if we widen this further out to the collective sphere, we can look at the macro dimension, which is more the country level, and there AI can serve a lot in order to optimise resource allocation.

When you think about countries as this kind of macro matrix, there is a lot of efficiency to be found in terms of needs versus resources – and the same at the global level.

So I actually wrote about that in a book in 2021, the M4 metrics, which is not that new in the sense that there was the social accounting matrix, publicised in the seventies, where the micro, meso, macro and meta levels were, in a way, brought together in one coherent logic. Now we can supercharge that with AI. So, in a way, a social accounting matrix 4.0.

But all of this comes back to the logic that we need prosocial AI, not just AI. Because the first, second, and third Industrial Revolutions were driven by commercial interests, let’s face it. And society today is not really a very convincing case that we are flourishing everywhere and in an equitable way.

So that’s why I think this time around, and what some call the fourth Industrial Revolution, we need to systematically configure it in a way that brings out the best in and for people and planet, rather than considering the social aspects as collateral benefits.

This is not to say that it shouldn’t, or couldn’t, also be good for business. But I think we need to reverse the equation and aim for the social and planetary benefits first, and then look at the profitable side of things.

Ka Man: So how can we make that happen collectively? Is this a case of advocating at the policy level? How can we go about ensuring this prosocial paradigm is front of mind when developments are being made, and not focused on the commercial aspect, which I assume is dominating the agenda currently? So what tools do we have at our disposal to try and influence this?

Cornelia: Yes, definitely, advocacy is a big piece, and an organisation like Save the Children especially has a huge opportunity here to systematically stand up for prosocial uses of AI.

And, on the other hand, the personal mindset – every single interaction that we have with AI: why are we using it? Coming back to the four macro questions that I mentioned before: why are you using it? Who are you without technology, without AI? Where do you see the complementarity?

And what would you never delegate to AI? So in a way, this whole transition through unknown territory that we’re currently navigating as a species has very real and practical implications for each of us. But it also has far-reaching implications for the organisations that we’re part of.

Ka Man: I was thinking as you were speaking – obviously, a lot of organisations, most organisations, will have policies around IT use generally, social media policies. I’m not sure – I’m making assumptions, I don’t have any insights on this – but I’m assuming that people won’t be ahead of the curve in terms of AI policies. Or even if there is one, things are changing so rapidly that it may have to be updated regularly. And I was thinking, as you were talking, that maybe developing one at an individual level – like, what’s our own personal philosophy towards using AI – might be a practical step, and that might work towards that intentionality. Does that sound like something that might resonate with what you’re advocating?

Cornelia: That’s exactly it, because ultimately you can’t really be a convincing advocate if you are not aligned with it personally speaking. So I would call that double alignment: the alignment offline between your aspirations and your actions, and the alignment between your aspirations and your algorithms.

But you first need to get the offline component right before you can tackle the online component. And in that sense, take all the discussion around the alignment conundrum, where there is debate about our algorithms not being aligned with our human values – well, let’s face it, we haven’t spent that much time on defining and pursuing values ourselves.

So in that sense, and coming back to something that I said earlier, the AI debate is a humanity debate, because it forces us to look at the white elephants that we have shoved underneath the table because they are uncomfortable.

But if we want an AI that is good for humanity, then we need a humanity that is good for humans.

Ka Man: Oh, indeed, yeah. So this leads me to my next question around ethics, because that obviously underpins everything that we do as humanitarians, or even just as individuals operating in this world. With the rapid evolution of AI, how can we, as humanitarians, ensure that ethical considerations are adopted and embedded into what we do? Do you have any thoughts on that?

Cornelia: Yeah, it’s coming back to what we said earlier: it starts with intentionality. And maybe we can use the AI label, which right now is sexy and finds a lot of attention, to couch in it the humanitarian values that we stand for and that matter to us – and use every single debate around AI as a space to bring in ‘yes, and’: what are the humanitarian values we need to make AI not just a force for efficiency, but a force for good, a prosocial force? And that requires hybrid intelligence.

So the complementarity of natural intelligence and artificial intelligence. And in that sense I would say the success equation of the future is actually NI plus AI is HI.

Natural intelligence plus artificial intelligence gives us hybrid intelligence.

Ka Man: There’s so much to take in and process around AI generally, and you’ve shared so many thought-provoking perspectives and views today. So what’s one key message that you’d like to leave our listeners with today?

Cornelia: AI is not happening to you, but it’s happening with you, and no matter how overwhelming it might feel, you always have a choice, and it’s important to be aware of that and to take a proactive stance when it comes to AI. It can feel overwhelming, because there seems to be something new happening every single day.

But that’s one additional reason why it’s important to cherish and cultivate your own agency. What kind of relationship do you want to have with AI? What do you want to delegate? Where is it helpful to you? Where is it making you better, where is it helping you to grow? And what space do you not want to delegate? I’m not saying that line might not shift. But it’s important to always be practically aware of where you draw the line.

Ka Man: And how critical or time-sensitive is it to proactively give thought to this, and actually take steps?

Cornelia: I would say it should have happened yesterday, but if it didn’t, then maybe today is a good time.

Ka Man: That’s a very good call to action – very good advice. I loved this conversation, Cornelia. I know that you have such a wealth of expertise in this area, drawing on all of your humanitarian experience. I feel very lucky to have shared this time and space with you, to have this gentle introduction into AI for humanitarians.

So we’ve been discussing the potential for a follow-up discussion – either as a podcast or webinar. So this is where we’d like to invite our listeners to let us know if this is something of interest, and if so, what kind of topics or areas would you be interested in? Would you like Cornelia to deep dive into any of the concepts she introduced today?

So you can email us on info@humanitarian.academy, or if you’re listening to us on Spotify, you can also leave a comment on there. So we’d love to hear from you.

So thank you once again for taking the time to talk to us today, Cornelia, and thank you to our listeners for joining us for today’s episode of Fresh Humanitarian Perspectives from the Humanitarian Leadership Academy.

Note

This transcript was generated using automated tools. While efforts have been made to check its accuracy, minor errors or omissions may remain.
Episode produced by Ka Man Parkinson, March 2025.
