Sophie and Sarah reflect on AI as a mirror for culture and power, the growing importance of leadership judgment, and why the real challenge now is not speed or adoption but choice. They explore what disciplined experimentation looks like in practice, the vulnerability leaders need to model as they learn in public, and how organisations can create just enough structure to explore AI without rushing to premature certainty. This is a conversation about leading well when the tools are powerful, the future is unclear, and discernment has become perhaps the critical organisational capability.
If this sparks your interest, please get in touch to hear more about our AI for Changemakers offerings and our research agenda for 2026.
Thanks so much for listening! Keep in touch:
Transcript
Sophie: [00:00:00] Hello and welcome to Mayvin's podcast. AI is everywhere right now, as you might have noticed, louder, faster, and more confident than most of us feel. This conversation grew out of a December webinar that myself, Sophie Tidman, and my colleague Sarah Fraser hosted on AI for Change Makers, where a surprising number of people showed up just before Christmas to think together about what AI is really doing to organizations, leadership, and our sense of work.
This podcast picks up where that live conversation left off, with a little bit more space to reflect, challenge ourselves, and follow the threads we didn't have time to pull on at the time. Enjoy!
Sarah: I am not ready for 2026. You were saying this earlier. I'm not either.
I might be by February, but trying to find my way in.
If you're staying grounded to [00:01:00] one thing, what are you holding onto at the beginning of 2026?
Sophie: I love to have a word for the year. So definitely this year, I think a lot is about self-care. Being really connected with myself and compassionate towards myself. And secondly, discernment is a really lovely word in relation to the world today.
So where we have so much content, so much noise, discerning what's important to you, what you choose to act on, what you choose to let go of. That's the real key differentiator, not just of leaders, but in your quality of life. And I think that's something you learn by really looking inwards, really connecting to yourself.
Sarah: Actually, yes. I'm similar to you. Discernment, 'cause that feels like almost a form of self-care.
Letting go of the things that you realize are not so important and holding onto the things that are. [00:02:00] Mine last year was brave and bold. This year feels like it's more about, yeah, the ambition's still in there.
Ambitious, but it's already enough. Keeping things in balance.
Sophie: Yeah.
Sarah: It's like, yeah, be ambitious and push myself and, you know, push what we're trying to do at Mayvin as well. But what we're doing is enough as well.
Sophie: Yes.
Sarah: Don't push it too far.
Sophie: The ambition, the drive isn't coming from a sense of lack.
Sarah: Hmm.
Sophie: Yeah. It's coming from a sense of, this is already enough, abundance, because we love this work.
Sarah: Yeah. And it absolutely plays into the world that we find ourselves in, and the world that our clients are finding themselves in, with the AI hype and the pace of change, which is partly why we want to keep [00:03:00] talking about the impact and implications of AI from an OD perspective. It feels really important. It feels like an important part of work now.
Sophie: I've done several sessions and webinars around AI in recent months.
You don't get everything in, and people always want to talk more and say more, and you always miss stuff. It's just a huge area. When we were developing the session, and a lot of our offerings in this area, the discernment, the decision about what to include: what are the really key things that people need to be aware of in order to move forward?
Actually, that's the really tricky thing. It comes back to something we talked about in the webinar about the human value of storytelling, of prioritizing, of creatively framing a challenge. All these things are even more important in the age of AI, in the age of mass intelligence, a huge amount of content.
Sarah: Mm-hmm.
Sophie: Being held.
Sarah: Should we share a bit more then? Yes. Starting off with what came up in the webinar, what we wanted to share more widely than those who were able to come [00:04:00] to that event? What would you pick out as the core to that conversation and the interesting snippets?
Any things that stuck with you from last year?
Sophie: It struck me how many people were there at that time of year, the 18th of December. There was a huge amount of energy around it, whether that was incredibly enthusiastic daily users or principled skeptics. So there's just a lot of emotional charge to it.
This is the point we're trying to make: likely everyone's going to become some kind of technologist in the decades ahead. The real shift for people is in terms of their identity, who they are in this new world, and trust as well. Do we trust AI? Do we trust the organizations we're working for, with this increase in capability, when we don't know where it ends? A lot of our work is helping leaders, helping change makers, be in uncertainty and ambiguity, and this is just [00:05:00] next level on that.
But it is the same emotions that are evoked. They're just stronger with AI.
We talked about AI being like a mirror for organizations.
Sarah: It's big picture and smaller picture. But in terms of AI as a mirror, it's a realization that as organizations are trying to integrate, or are integrating, AI into their practice, into their ways of working, it is showing things up in organizations.
It acts as a mirror of organizational culture. It acts as a mirror of our thinking and the dominant views. So really big picture: we know that AI is created by all of the thinking and ideas that human beings have already put out there. It's just amalgamating all of it and playing it back to us in a way that it thinks we want to hear.
So already it's playing out power structures, it's [00:06:00] playing out dominant voices in our society, and that's a challenge that we need to be aware of and conscious of as we use it. So that's the really big picture. Next level down, if you think about AI at an organizational level, it's really showing up aspects of organizational culture: in terms of how it's rolled out, in terms of how people feel about the integration of AI into their roles, whether it's causing anxiety, whether it feels like there are increasing power dynamics at play in terms of who gets to use it, who can overtly say they're using it, how it is used in terms of decision making. Lots of people are using AI bots to turn up to meetings instead of going in person. But what does it say about your organizational culture if it becomes a habit to do that rather than turn up to the meeting in person? And what does that do to the [00:07:00] conversation? So straight away, that's a very tangible, relational example of how AI might act as a mirror of your organization.
Sophie: And I think what a lot of people do in the discussion is say, oh look, this is what's happening, isn't it really scary? Mirrors are very powerful though, aren't they?
Yes, it's amplifying, but it's giving you a lot of information. It's making things difficult to ignore. So if you have a meeting culture which people find extremely draining, not collaborative, not productive, not creating the kind of psychologically safe spaces for different conversations to surface conflict and tension in a useful way, well, yeah, you're gonna send an AI bot, aren't you? But if you are all AI bots and nothing's getting done in meetings, it does rather force you to have the conversation of, what are we here for then? And the same thing with fetishes around outputs rather than outcomes,
[00:08:00] PowerPoints, just a lot of content in organizations, a lot of process going round and round, a lot of governance boards which don't actually come to key decisions. You can imagine superimposing AI on top of that, and you just create more. We definitely shouldn't be using AI to create more PowerPoints.
That's not what the world needs right now.
Sarah: It's definitely not what the world needs. Never needed that.
Sophie: Nobody's ever said, I need more PowerPoints in my life. And actually, we just can't function with that much content. So in a way, my optimism is that it brings that kind of fetishization of content, of process, to a bit of a head, and we start to have conversations about what really matters and what we are really here for.
And in that way, mirrors can be incredibly useful. But we have to be curious and open to it.
Sarah: Yeah. I really like that idea of bringing curiosity into that process of engaging with AI, integrating AI into your ways of working. It's a responsive process.
It's relational. [00:09:00] It's a conversation with AI. It's going to mirror some things back to us. Individually it will do that, but organizationally we'll start to see patterns in what it's reflecting back. And it's such useful data.
So if you're an organization doing culture change work, which most organizations nearly always are in some shape or form, then AI actually becomes a really helpful tool, providing data to be part of that change process rather than a separate tech change process. It's one and the same thing.
Sophie: Yes. It's a very context-specific tech. How you use it can have vastly different outcomes depending on how you're using it, what content you're giving it, how you're giving it the data, what you're allowing it access to, how you are working with it as a partner versus just as a tool. That requires people capability, but also people to feel empowered to do [00:10:00] that and to feel trusted to do that. If frontline workers don't feel empowered to do it, and feel like they could be doing themselves out of a job because they don't have psychological safety in the organization and the decision-making power, that organization is not going to be able to survive the competition.
Sarah: Mm-hmm.
Sophie: I am very aware of the risk of making any kind of predictions in the current climate. But I think someone in the webinar was talking about, well, it's always leaders that make the difference, and it doesn't really matter about AI, because whether it's a good leader or not will really make or break somebody's experience in an organization.
Which I somewhat agree with. And also I think there's the potential for this to radically change the structure of our economy. So organizations who lack trust, who lack really positive cultures, just won't survive, potentially.
Some organizations are thinking, okay, well what's [00:11:00] the best tool to use? I mean, there's amazing tools out there, right? So you could get a marginally better one if you paid a bit more money or looked a bit harder. But actually, the way it's going to be differentiated is in how it's used and how it's integrated into your organization.
Sarah: Hmm.
Sophie: The tech is not the bottleneck right now.
Sarah: No, and I love what, is it Ethan Mollick, says: hold onto the fact that this is the worst version of AI you are ever gonna use. It's only gonna get better from here. So don't get limited or slow yourself down because of it.
Sophie: Yeah. Don't fixate on the tech. If you're gonna fixate on anything, experiment with it in a really disciplined way, as a team, not as an individual productivity hack.
Sarah: So, autumn last year, we ran an experiment on teaming with AI. It was about bringing people together in person, keeping that human, relational aspect in the room.
That's a really important part of this in terms of building trust and confidence around using AI, but bringing AI [00:12:00] into a team conversation rather than it being an individual endeavor. It becomes a team endeavor. And what really stuck with me from that workshop, from that experiment and conversations since, and some of the research we've done, is that it starts to help teams create their practice, in effect a culture, around using AI, where there is a sense of norms: this is how we use it round here. There's a sense of learning from each other, not just thinking, I dunno if I'm quite doing this right, but I'll give this prompt a go and see if it gives me something useful. But what it also keeps in the loop is that very human skill of critical thinking.
It avoids that sense of, right, I use AI to spurt some information out to me, which I can then use, and it's just a productivity hack. What it turns into is using AI to [00:13:00] develop creativity, but with other people, being critical about what it spurts out, not just swallowing it whole.
Sophie: Yes. And you're learning so much more. I mean, when we've done it in Mayvin, when we've done it as a team, it feels like the learning's accelerated. And it also just feels so much nicer. I've done this with a few teams, and before you start the teaming, people have all their stories about AI and their challenges, and the speculation about what's gonna happen, what it can never do, and what it's really replacing.
And you just get these cycles of speculation, which is a bit unsatisfying. And then when you start teaming, people just get into relationship with it. And I don't say that in a way that's kind of like, it's not human, we can't be in relationship with it.
I mean, I'm in relationship with my garden. And I'm in relationship with the natural world when I'm going out for a walk. We are healthy when we're in relationship with the things around us. AI is here. It is a form of more-than-human intelligence. [00:14:00] Let's be in relationship with it.
And as soon as we can start using it in a practical, grounded way, you can start losing all that projection. Because some of that, as it's a mirror, right? Some of it is ours: we don't trust AI, it's out to get us. Yeah, some of that's a mirror of our own attitude towards authority, for example.
So that mirror at the individual level can help gain a bit of personal insight as well, I think.
Sarah: And you're making me think of something you said, which I loved, on the previous podcast that you did with Hugo. If you team with AI, then you collectively decide: okay, this is what I want you to do, this is who you are, this is who I am, or this is who we are, and this is the role I want you to play. So you decide collectively how it plays a role in your team, and you start to create some guidelines, rules, boundaries.
Sophie: Which is all about having great conversations, making sense together with the AI and the rest of your [00:15:00] team. These are all OD skills. These are all deeply human skills.
We talked a bit about, what is our real value then? Let's not get into, oh no, we are not worth anything now, we're gonna be superseded. But actually, what really brings us joy, and what do we really offer to the rest of our teams, our organizations, our societies? It's deeply human things about care, about storytelling, about framing purpose.
We need to define those things, otherwise they're gonna be determined for us, by AI, or tech bros, or governments. We need to be active. So a lot of what I think we see in AI is patterns we see more widely: this sense of disempowerment.
We have been getting conceptual with AI, even in this conversation. It raises some big questions, right? But if you are in that space too much, it becomes very ungrounding and potentially quite disempowering. I'm sitting at home and I'm just completely overwhelmed by all this content I'm getting from my colleagues, generated by ChatGPT, which is sometimes what I do to my work colleagues.
So actually, let's get on a call and think about, are we on the right track, or should we scrap that? What's the real question here? And that's just a much better conversation. It's those moments of taking charge.
Sarah: Yeah, I love that. We caught ourselves in that moment a couple of months back. We were like, hang on a minute, we are just throwing a lot of info around and lots of research. We're using AI to summarize and draw out key points, but it was just producing a lot of content. Let's actually get in a room together, look at it together, give it a bit of a critical review, have the conversation about what's standing out, what's important to us, and what we're trying to do here.
An hour and a bit together made a huge difference. So remembering that that's a critical part of working practice. It doesn't need to take up a lot of time.
It can be an hour in a week where, as part of your team meeting, you use that time to team with AI on a question or a problem you're all part [00:17:00] of. So it doesn't need to be big and on top of other work. It's just a different way of working together.
Sophie: Yes. So identifying the right use cases and using it with discernment is really important. I think what we're talking about here is that shift from mass intelligence, which is a huge amount of content without discernment, and can feel very overwhelming, but also a huge amount of great ideas.
Just look at science at the moment, how many great discoveries are coming through AI's ability to churn through data and spot patterns. We need to move towards a kind of co-intelligence, where we're able to decide in a dynamic way: what's the artificial intelligence doing?
What are humans doing? In a way where there aren't such sharp boundaries. It's not just, I'm gonna delegate this task. I had this idea, but actually, was it ChatGPT that had that idea? It wasn't me that had that idea.
I had it in collaboration, in conversation. And when I researched this idea of [00:18:00] co-intelligence, which I'd heard in the AI debate, it didn't come from AI. It came from before that. It's a very OD term, and it's basically about intelligence as a collective quality that emerges in relationship.
I think that's an important thing when we're talking about teaming. It's not just pooling knowledge, like, I've learned this and you learned that, and we're gonna swap that knowledge. You're actually creating new knowledge because of the different pieces you're putting together and the different perspectives you are putting together.
And AI is a way of providing different perspectives.
Sarah: Yeah, so that's a great prompt and way of using it: playing around with AI and getting it to play a role in your team that maybe isn't there.
Sophie: Yes. We wanted to get people to experiment, and some of the best experiences people had with that were when they prompted the AI to be a provocateur, or to challenge them.
Tony Nichols, one of our previous colleagues, was there, and as a trustee, he was asking ChatGPT to challenge him on his role as trustee of an organization:
think about how they could be more inclusive and the things he should be looking for. And he just got so many ideas. He was buzzing at the end of the call. It was really nice. [00:19:00] That's a kind of co-intelligence. Again, we're sort of subverting the narrative, which is that technology needs to be seamless,
that we just need to be, you know, not even conscious of it. That's somehow the dream. Whereas actually, and you've talked a lot about this, we need to be adding friction
Sarah: mm-hmm.
Sophie: into how we work. And by that, I think what you are talking about is those moments of reflection, being really conscious, like, oh, this seems like it's moving towards groupthink, let's stop right now and add a bit of friction, add a bit of provocation here to help us diverge a little bit, 'cause we might be missing something. And be more clear on the boundaries of when we're using AI and when we're not.
Sarah: The worst case could be if AI in our organizations becomes part of our groupthink, because all it'll do is just reinforce it. So I love that idea that it's gotta be counterculture. You've gotta consciously use it, at times, to be a bit of a provocateur.
What am I not thinking of here? What is the [00:20:00] perspective that might really challenge my thinking? And get it to bring that in.
In a team, you might realize that actually that is how one of you was thinking, but they just didn't quite feel able to say it. And what brilliant material for some team development and culture change as well.
Sophie: I was talking to a client recently, and she was talking about being the only woman in an exec team, and how she thought, I'm just gonna keep track of how much I get interrupted, and whether that
Sarah: mm-hmm.
Sophie: is more than the men in the team. I was like, just use AI. Because yes, you can be a really powerful intervention, but it's bloody tiring being the one person in the team who's always intervening. Actually, AI is, in many situations, seen as neutral. We do this in our own team meetings.
It looks at who speaks the most, what kind of interventions different people make, where the energy was highest. And it could easily go, well, on average, Sarah was interrupted seven times by Sophie in this podcast [00:21:00] and only interrupted Sophie once.
Sarah: Yeah, so use it for data, use it as a provocateur. I think that's a good point to jump into the value of human doing and human being versus what AI does, and how that's really shifting. It is really confronting for people, thinking about, what is the value I bring if actually I can use AI to do a lot of this?
I think there's something important about talking about how we're using AI and bringing that out in the open. That's a real leadership thing, you know, lead the way on that, in order to ensure AI is seen as of value and people can develop and adapt their roles. Some roles, we know, will inevitably be changing as a result of AI, but for most it's gonna be about adaptation and a reevaluation of the human value in your role.
Sophie: And the role for leaders is huge there. So [00:22:00] there's huge ramifications for how you might organize, what forms of organizing might become so much more possible and even more preferable. Smaller organizations, for example, that are much more dynamic. Definitely smaller teams in organizations. So the ones really seeing big benefits seem to be organizations with small teams who have a lot of scope and are very empowered.
Sarah: Mm-hmm.
Sophie: And cross-disciplinary. So they have a bit of tech knowledge, but they also have the specific subject matter expertise to have the discernment that you need. The boundaries between teams will become less distinct. The value of central control will lessen hugely. So how do you maintain coherence in that organization without controlling bureaucratic structures, so you are able to be more agile and adaptive? That's where AI starts to intersect with principles around self-organizing, and it's a fascinating area.
I think [00:23:00] we'd love to get into a bit more depth on it in 2026.
Sarah: Yeah, then getting into AI and the octopus organization. I think there's a lot to say about that, as we decide we can cope with jumping into 2026.
Sophie: Yes. Well, the "start before you are ready" maxim, which I think I borrowed from Steve Chapman, and the 70% rule from Oliver Burkeman.
These things are very present in my mind for this year.
Sarah: 70% is good enough. That's my kind of "be ambitious and it's enough".
Sophie: Yeah. By the time you get to perfection, the world has moved on.
Sarah: Vastly moved on. You're too late. Yeah.
Sophie: Yeah.
Sarah: I feel like your point around the importance of leadership in this, and what that leadership might need to look like in 2026 and beyond, is really critical. We do a lot of work around leadership in complexity, adaptive leadership, you know, a very responsive, collective [00:24:00] approach to leadership, and bringing AI change and transformation into the mix puts another layer on that.
Sophie: Yeah, it's definitely something that leaders need to role model in their organization.
Leaders need to know what they're talking about, right? They need to be grappling with it. That makes it very exposing, doesn't it? If you're grappling and experimenting, you're hopefully doing something like the 70% rule, but that's very public. So yeah, the vulnerability that's required as a leader.
That's fascinating. And in doing that, we are talking a lot about teaming, but often the teams are less intact teams. They may be communities of practice.
And how we can sanction those communities of practice, support them, be part of them.
Sarah: Yeah. Nice. So it's about providing the structure, enough containment, that people in your organization can experiment.
Sophie: Yes. [00:25:00]
Sarah: They can explore and develop what AI can do for the organization. It's not about leaders needing to be at the forefront of that, but providing the vision, and the safety, and the boundaries so that people can work with it.
Sophie: Yes. And I think there's a bit of a tussle at the moment. A lot of consultancies are saying, develop your AI strategy, you need a plan. And actually, it's moving too fast for that.
I'm not saying you don't need a vision. You know, we are talking in Mayvin about, let's have some really clear use cases that we're resourcing fully and are obviously really committed to, because it's not enough to just say, go and experiment, and here might be some ideas. I think actually it's more disciplined experimentation, as Ethan Mollick says, with, as you said, the kind of principles and guidelines that people can feel safe within.
Communities of practice are a brilliant way of giving confidence, developing collective intelligence about AI, [00:26:00] creating a bit more safety around it.
Sarah: Yeah. And then, you know, as leaders, as leadership teams, really engaging with those use cases, sharing the learning from them, and thinking about,
what can we do with this? How does this help us move forward? Yeah. And in all of that, keeping the human aspect of the organization, building trust, focusing on relationship. All of that is core.
Sophie: It's not an afterthought. You do it at the same time.
Sarah: Yeah. Yeah.
Tech teams and people teams do it together.
This is work to do together.
Sophie: And there we have it. If this conversation resonates, it's exactly the territory we're exploring through our AI for Change Makers work, not AI as a technical rollout or a productivity hack, but AI as a catalyst for better judgment, healthier cultures, and [00:27:00] wiser leadership. Our programmes create space for people to experiment together, build confidence, and develop shared practices that keep humans firmly in the loop.
You can find out more about our upcoming offerings on the Mayvin website, and we would love to continue the conversation with you.

