Transcript
Sarah Fraser 0:00
So welcome to Mayvin's Research Hub podcast series. We're imagining futures for people change today. Mayvin's Research Hub is exploring the key trends and advances impacting our ways of working and our ways of organizing as we look ahead into the future. In each podcast, we'll delve into a futures trend and consider how, here in the present, we can start to create the future that we want in our organizations. I'm Sarah Fraser, and with my trusty partner in imagination, Sophie Tidman, we're going to try to cut through the noise, and, to be honest, the anxiety, about what the future holds, and give you some thought starters and hopefully some provocations to work with this future thinking and bring it back to today. Listen on, and we'll delve into the first of these trends. Ed Shrager from Charity Culture Catalyst joined us in the podcast cabin recently to discuss the potential of integrating AI into the way we work and our ways of organizing. With lots of experience of starting projects and experimenting with AI with various organizations in the charity world, he was able to highlight AI's potential in streamlining data crunching and aspects of working, you know, reducing human bias and increasing efficiency. So that all sounded great, and we also recognized in the conversation the skill that he, and others who are AI savvy, have in applying some guiding principles, and the need to do this really well, because the danger is that generative AI could have a detrimental impact on the relational fabric of our organizations if we grow to depend on AI rather than using it to enhance and support human collaboration. So here's where we started.
Ed Shrager 1:46
I'm Ed Shrager. I work for a small organization called Charity Culture Catalyst, and we work with charities. So it sort of does what it says on the tin: trying to help people improve culture, diversity, inclusion, change, all that sort of stuff. And recently, well, within the last couple of years, as AI has really become more present in the public sphere, we've been looking at that, and how it can help with the work we do with people in terms of changing culture, improving inclusion and diversity.
Sarah Fraser 2:29
Fab, loads I want to link to, and we'll come back to all of that. Thank you. And for those who haven't met me before, I am director of Mayvin, leading on Mayvin's research hub into the future of organizations. We got connected through one of my colleagues who was starting to have conversations with you around AI, and I investigated what you're doing, because my background is mainly in working with charities and not-for-profits. So then I saw AI and not-for-profit, and I was like, this is perfect, this is someone I need to speak to. So that's how we got connected. And we had a really good first conversation, then a really good second conversation, and now here we are. We've had some exploratory conversations around AI, and I feel like it might be helpful if I frame how I got into the conversation with you, and what I was trying to dig around in. So as part of the research hub, we're looking at a few different themes. One of them is around how organizations are integrating AI into the way that they work. I am no AI expert, I am not a techie in any way, a declared non-techie, but it's really interesting starting to speak to organizations and even playing around with it for Mayvin, and noticing and considering how the way that we work will change, obviously, as AI becomes more of a feature of working life. There's all the big conversations about how some jobs may go, etc., but just for, you know, the average person in an office-type job, the work may change, or the way that you work may change, and I'm interested, from an OD perspective, in how that affects the way people work together, because AI almost becomes another member of the team. Lots of people talk about it as a tool, which it is, but it also has a place in an organisation's culture, in the way that people then end up working together.
So I've been digging around in that in various conversations, but as we spoke, there were just some really interesting stories in terms of you using it in some charities, noticing the value that you put on AI, and then sort of what I'm reading into that from the questions that I bring. So that's sort of setting out how I'm coming into this podcast, and the questions I'm curious about asking. I wonder if there are particular things that you're coming into this with as well.
Ed Shrager 5:14
Yeah, so just thinking about what you said there, and from our previous conversations, I think the thing I see is two buckets, I can't think of a better way of saying it than two buckets. One is the future. So kind of what you're saying about how organizations will integrate AI into everything that they do, and it'll become a member of the team. There's that bit, which needs to be worked out: how is that all going to work, and how is that going to play out? And then there's the bit which I'm more into and exploring at the moment, which is using AI to help work out how we transform organizations to get to that point. So there's a lot of stuff in organizations, and my experience is mainly in charities, but I think it's pretty much everywhere, where a lot of organizations, particularly the bigger ones, because of the nature of organizing, are in this sort of command and control culture. It's often done really nicely, but it's still got that command and control sort of vibe, with lots of hierarchy and lots of bureaucracy and admin stuff. So thinking about how we can use AI to help do some of that stuff, to help the transformation of the organizations, and then be able to think about how this might actually be integrated in the future. So there are two phases to this: how can we use it now, to shape us up, maybe into a better place, to then be able to think about how we integrate it. An example of that is something I've been doing a lot of, which is the whole job description thing. You know, I've been involved in various projects of working out job descriptions in the past, and it can turn into a bit of a rabbit hole of time, of people spending a lot of time getting together,
Sarah Fraser 7:30
Everyone's opinion coming in, trying to work out how it all fits together.
Ed Shrager 7:34
Yeah. And then writing the job descriptions, and then, because people are people, they write them very differently. So you get this whole set of differently written job descriptions that are meant to be the same, but they're not. So you just get this whole muddle that's actually not really very helpful.
Sarah Fraser 7:51
Someone's got to unravel the policy thing
Ed Shrager 7:53
Yeah, someone's trying to unravel it. The person who's on the end of it, the person in the job, doesn't really understand what the job is from the description. So it's not actually very helpful. So there's a whole load of that. So getting AI to help people write job descriptions or role profiles or whatever they are, more quickly, more consistently, more accurately and just better, basically, feels like quite an easy win for that first phase of how we just do some of this stuff a bit more
Sarah Fraser 8:28
Okay. And I want to dig straight into one of the questions that I keep coming back to. So that sounds brilliant, and I hear how it does such a large amount of work that would take a person hours, and sort of cuts through the complexity of it. And is there a risk, or what, I suppose, is the risk, in AI doing that job for you? Because I imagine, like when we're working with organizations, thinking about operating models and even getting down to role profiles, etc., one of the most important aspects of that is the conversations that happen in the process: the conversations, the debates, the disagreements, the working through, coming to agreement, coming to a common understanding of, okay, this is what we are trying to do here. Or, well, you think this, but actually where the organization is, is this. So how can we find, you know, the right space in between? So is there a risk that you lose that relational process, you know, the very OD aspect of a design process, or the change process, by letting AI do some of the work? I don't know if you've experienced that in some of the projects that you've done?
Ed Shrager 10:02
So I think it's more that you create more time to have those conversations. Because that is the important bit: whoever the people are involved, having the conversation about, what is this job? What do we actually need in our team and our organization? What are we trying to achieve? Having all of those conversations, but then being able to just say, in your own kind of words, right, this is the kind of job we need. And then the tool creates the job description out of that.
Sarah Fraser 10:33
So it does the writing up of your notes, basically
Ed Shrager 10:37
Basically, yes. So still having all those conversations, because the AI doesn't know what your strategy is. Well, it can have your strategy inside its brain, so it pulls out your strategic things and your enablers, and it's got the language of your values and behaviors and all of that. So a really key thing just to put into the conversation is, when we're talking about generative AI, there's this thing called RAG, retrieval augmented generation. That's the thing that I'm doing, that people need to do. You know, RAG is quite a nice way of saying it, but what it does is you put in things like the organization strategy, copies of the values and behaviors, and the team structures, and everything that you've got about the organization.
Sarah Fraser 11:33
Yeah, so here's your training, here's what you need to know. Here's the brain, or the filing cabinet, that you need in order to give me the answers that I want
Ed Shrager 11:41
Exactly. So you can give it all of that filing cabinet, all of the documents you want, and it could be your job evaluation structure, your role profile template, how you want it to look, all of that stuff. You put all of that in, and then it refers to all of that and uses it to process what you put in. So that's key. You're not just getting this stuff and putting it into ChatGPT and hoping it comes up with something that you like. You've actually trained the AI using this RAG thing, so that it then draws on stuff you actually want. So that's a key thing to put in: know how you want it to come out.
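The "filing cabinet" idea Ed describes can be sketched in a few lines. This is only an illustration, not a real RAG system: relevance here is crude word overlap rather than embeddings, the documents are invented, and the final call to an actual model is left out. The function names (`score`, `retrieve`, `build_prompt`) are made up for this sketch.

```python
# A minimal sketch of retrieval augmented generation (RAG): pick the
# organisational documents most relevant to a request, then ground the
# prompt in them before it ever reaches a model. Scoring is simple word
# overlap -- a real system would use embeddings and a vector store.

def score(query: str, doc: str) -> int:
    """Count words the query and document share (a crude relevance proxy)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: dict[str, str], k: int = 2) -> list[str]:
    """Return the names of the k documents most relevant to the query."""
    ranked = sorted(docs, key=lambda name: score(query, docs[name]), reverse=True)
    return ranked[:k]

def build_prompt(request: str, docs: dict[str, str]) -> str:
    """Assemble a grounded prompt: retrieved context first, then the request."""
    context = "\n\n".join(f"[{name}]\n{docs[name]}" for name in retrieve(request, docs))
    return f"Use only the context below.\n\n{context}\n\nTask: {request}"

# Hypothetical "filing cabinet" of organisational documents.
docs = {
    "strategy": "Our strategy focuses on community fundraising and digital services.",
    "values": "Our values are inclusion, kindness and transparency in everything.",
    "jd_template": "Job description template: purpose, key tasks, person specification.",
}

prompt = build_prompt("Draft a job description for a fundraising manager", docs)
# `prompt` now contains the template and strategy text; in a real setup
# it would be sent to whichever model you use.
```

The point is the shape: the model only ever sees the request together with the organisation's own material, which is why the output "draws on stuff you actually want" rather than generic internet text.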
Sarah Fraser 12:25
I feel like I'm going to be the critic all the way through. I'm not trying to do that, I'm just trying to explore. But one of the things that anyone I've spoken to about AI, the experts I've spoken to, they always say: we just need to remember, it is only as intelligent as we make it at the moment. Well, to an extent. You know, there's a lot of talk of how AI is going to go beyond what we can do as human beings. But there's that sense that it will reflect back to us the values, exactly as you're saying, the biases that we put into it. It's not going to sort those out for us, it's just going to reflect back what's already there. And I know lots of the tech companies are looking at ways to deal with those biases, so thinking about, you know, ensuring that it offers a bit more diversity, etc., in the opinions it reflects back. But in doing so, it then becomes less, I've heard stories of it becoming less relevant, or feeding back really obscure opinions which don't actually make sense to us, in a way. So you can train your AI, and I think there's something about, or is it just that we, the human beings looking at what AI spits out to us, need to be really aware of the fact that we've trained it? It's not challenging anything that we don't know already, in a way; it's just helping us to, I don't know, get to a conclusion faster. You see what I mean?
Ed Shrager 14:24
I do. And I think that thing of being the human, there's this guy called Ethan Mollick who's written a really good book about AI, and one of his principles is: be the human in the loop. So with all of these things, generating a role profile or a job evaluation, whatever it is, having a human who actually looks at it and goes, is this the kind of thing that we're looking for? is always going to be key. So having that human in the loop to make sure it's not gone off on one. But I think as well, I do sort of see that, but then reflecting back on doing this kind of thing before, with humans, without AI, particularly going down the job evaluation route,
Sarah Fraser 15:13
Is it worse?
Ed Shrager 15:15
In my experience? Yes. The biases of humans, yeah,
Sarah Fraser 15:20
Because then you've got emotions involved, haven't you and opinions about different people, whereas the AI is just going to give you a
Ed Shrager 15:27
Just the basic thing of
Sarah Fraser 15:29
A data driven answer
Ed Shrager 15:30
Yeah. And I can't remember where the research is from, but there was a study of parole panels for prisoners. If you were among the first people to go in front of the panel at 9am, you were this likely to be given a chance of parole. But if it was just before lunch and they were all hungry, no chance. Then after lunch, the percentage of people getting their parole requests accepted was much higher. So all those things, you know, if you're a job evaluator, or writing job profiles or whatever, if you've got all your emotions going on, you're hungry, all those basic things really affect you, whereas with AI, it's consistent all the time. So I think with the bias thing, you know, it needs training, and we need to review what it's coming up with and all of that. But compared to how humans make decisions in this world we're in now,
Sarah Fraser 16:38
it's more effective. Yeah, yes.
Ed Shrager 16:42
And it's more obvious. If the bias is there, then it's more obvious, whereas with humans, people can, you know, clever people can kind of smudge it over and, you know, make a mess of it,
Sarah Fraser 16:56
Make a mess of it, we do that brilliantly. I mean, that's why Mayvin does what it does, because people make things messy. And yeah, so maybe when there's more AI, there'll be less OD. I think it's the opposite. My thinking is that we as OD practitioners need to get alongside AI and understand how it is part of the organizational makeup, and be working with it, which is exactly what we're doing here. So I know you've got some brilliant examples of projects; you've talked about doing the job profiles, but are there others to talk about before I start critiquing everything about AI? I don't feel like that; I think it's really exciting as well. What are your other stories of using AI?
Ed Shrager 17:51
So other ones are the more sort of culture side of OD work. So, you know, helping organizations understand what their culture is and what the culture they might want in the future.
Sarah Fraser 18:07
How did that work? With AI, what did you do?
Ed Shrager 18:10
So the bit with that is, and I'm sure you've done these kinds of projects before, where you go out and you talk to people. So you might have one-to-one interviews, you know, do 10, 20 people, do some workshops, put out some surveys, run focus groups, all that sort of stuff. Yeah. And out of those, you generate a load of text,
Sarah Fraser 18:33
Yep, loads of data,
Ed Shrager 18:34
Loads of data. So if the data is, like, yes/no stuff, or ratings, analyzing that in spreadsheets or whatever, easy peasy. You don't really need AI for that. But in the projects I've done where I've used AI to summarize and analyze all of the text that you get, I find that so powerful. One, for actually just transcribing the conversations, so that instead of sitting there writing up your notes, you can, I find, listen better, because you don't have to be taking the notes, because you've got something else doing the notes, yeah, the transcription. And then the analysis of it. I mean, I've spent hours, well, probably days, going through line by line of text: this person said this, and this person said that
Sarah Fraser 19:31
You're bringing back memories of trudging around London, or back from far-off places, with wads of flip chart paper, and then going back, trying to decipher my own writing as well as everyone else's, and distilling it into something that gives some clear next steps or conclusions for a client. And I'm always very aware in that process of how I'm reading it through my own eyes, through my own perspective of which information is important, which is less important, what message I'm trying to create, whereas AI, I would assume, well, yeah, it cuts through that, doesn't it? It kind of doesn't allow you to put your own opinion on it.
Ed Shrager 20:19
Yeah, yeah. And exactly that. And, you know, those flip chart papers: if your writing's decent, just get a camera, take a picture of it, and then again get the AI to transcribe it into actual words, and then use the AI to do the analysis, you know, if you're using some sort of culture model, to theme it based on that. So doing that whole bit. And it's not to say that, in the projects I've done, I'm not going to do it myself as well, but it's getting a head start, getting a first pass, looking at stuff. What I find with my own work is, if there's someone in the room who I remember, maybe because I liked them or I identified with them, what they say comes through more strongly. And I go, oh, remember, that person said that, that's powerful.
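The theming step Ed describes, sorting a pile of interview quotes under the themes of a culture model, can be sketched as follows. This is a deliberately simplistic stand-in: a real project would hand the raw text and the model's theme definitions to an LLM, whereas here plain keyword matching just shows the shape of the output. The themes, keywords and quotes are all invented for the example.

```python
# A toy stand-in for AI-assisted qualitative analysis: group interview
# quotes under each culture-model theme whose keywords they mention.
# Hypothetical theme dictionary -- not from any real culture model.
CULTURE_THEMES = {
    "hierarchy": ["sign-off", "approval", "manager", "permission"],
    "collaboration": ["together", "team", "share", "help"],
    "bureaucracy": ["forms", "process", "admin", "paperwork"],
}

def theme_quotes(quotes: list[str]) -> dict[str, list[str]]:
    """Return a mapping from theme name to the quotes that touch on it."""
    themed = {theme: [] for theme in CULTURE_THEMES}
    for quote in quotes:
        lowered = quote.lower()
        for theme, keywords in CULTURE_THEMES.items():
            if any(word in lowered for word in keywords):
                themed[theme].append(quote)
    return themed

# Invented example quotes, standing in for transcribed interview text.
quotes = [
    "I need my manager's approval for everything.",
    "We solve problems together as a team.",
    "There's so much admin and paperwork.",
]
result = theme_quotes(quotes)
```

The output is a first pass to react to, exactly as Ed says: the practitioner still reads the quotes, but starts from a themed grouping rather than a wall of raw text.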
Sarah Fraser 21:18
I think back to other workshops where, you know, you're really aware of the power dynamics and the, you know, the relative power and in terms of who contributed what. So you know is there, is it a director who said that? And therefore, is it more important, compared to someone else in the organization, having to be really careful about how you yeah, how you reflect that back
Ed Shrager 21:44
And then the bit which is a slightly different situation, but the same kind of thing, is working on a project which had a collective consultation period going on. And, you know, I've seen it a lot of times before, when people give feedback on new structures, a reorg, people giving their thoughts on it. Often what they're saying is very emotionally charged. They really go off on one, rightly, because they're passionate about it, and it's very important. But they've got an actual point, which is really important, and it can get lost in the emotion of the way they say it. So if you're reading through that stuff, you're just like, oh god, that was just that person, forget it, just ditch it. But actually, if you use AI to process that kind of stuff, it doesn't really care about the emotions, and so it can strip out
Sarah Fraser 22:50
It's got this value as a neutral observer, as a neutral participant, in the way that we might work. So I hear the real value in that, and I've used it in that way before as well. And I've spoken to lots of other practitioners and people in organizations working like you, using it to consolidate a lot of data, to make sense of it against models, etc. So it's got that value. And I also heard a story recently from a client where AI started to be used in their organization. They've got a sort of internal AI system that they can use, so questions get asked of the AI, and they use it to prepare for meetings and things. And interestingly, the story was, in this one example of bringing back a perspective, I think it was a strategic decision that needed to be made, and greater value was placed on the response that someone declared they had got from AI than on the perspectives of others in the room. It was just like, oh, AI, this is what AI says. Almost a sense of, I mean, I wasn't there, but almost a sense of, well, that must be right, maybe because it is the neutral observer, it is analyzing a much greater amount of data than we can hold in our heads, obviously. So I'm aware that it ends up having almost quite high status in organizations. I can hear that the way you're using it in your projects, it hasn't got that feel yet, or has it? Have you ever come across that sense?
Ed Shrager 22:58
That AI is kind of regarded as the source of the truth,
Sarah Fraser 23:29
All knowing, yeah, yeah.
Ed Shrager 24:55
Not really. I think that would be a classic "it depends" thing for me. So if someone was sitting in the meeting and they just said, oh, why don't we just ask ChatGPT and see what that thinks, then I'd be skeptical about that answer. Yeah. But if they were using, you know, RAG, retrieval augmented generation, and they had actually fed it some good data, and they were saying, actually, what we've put in is this, this and this, these are the things that we've put in, and this is the question that we asked it, and this is how we've prompted it, and this is what we've got out of it, then I'd be more likely to think, oh, well, that's a person who's used it in a good way. So it's how you use it.
Sarah Fraser 25:44
But this is it. So you're landing on another really important point for me, which is that, I mean, we know this, and it's a reminder, there are some real skills in using AI well. It's kind of like, we all know we need a bit of training, and there's some interesting data around even Gen Zs being sort of anxious about the lack of training and skills that they feel they have to use AI, and, you know, people wanting that training and support from their organizations, and the fact that women are more concerned about whether they'll be able to use AI and integrate it into their roles, and sort of anxious about their skill level. But, you know, coming back to this fact that it is a real skill to use it well, it's about understanding the human element in using it, in creating a culture around using AI in an organization, and understanding where the human responsibility lies. I think you said that in one of our previous conversations: understanding who's responsible at the end of the day is critical. Yeah. And I was just wondering, there was almost a sense, when we spoke once, that you kind of realized that you have an implicit skill in using AI. And as we spoke, I think you mentioned that sense of, oh yeah, I do have very particular processes, standards, sort of ethics around using AI. I don't know if you want to say anything,
Ed Shrager 25:44
Yeah, yeah, yeah.
Sarah Fraser 25:54
Such an interesting conversation where we got to it,
Ed Shrager 27:17
Yeah, yeah, yeah. And I think we shared a bit of Myers-Briggs stuff before that as well.
Sarah Fraser 27:40
You're an explorer.
Ed Shrager 27:42
I'm an explorer and sort of creative innovation, you know, just see what comes out of it. Sort of person,
Sarah Fraser 27:50
I want to know how it works
Ed Shrager
And so I've just been exploring for the last two or three years, however long it is, without really noticing, until having conversations like we've had, and this conversation, and then thinking, oh, actually, yeah, I have been using certain skills and principles. I think the key one, which fits with what we're talking about with the world of OD and organizations and change and all of that, is that a key thing we do is ask people questions. Whether you're coaching or consulting or facilitating or whatever you're doing, it's all about asking people questions. It's not really turning up and giving a big presentation, you know, or a training thing, telling people what you know from your position as an expert, blah, blah, blah. It's not that; it's, how can you craft a really good question? And that's always been a thing with OD and change stuff.
Sarah Fraser
The question you ask creates the change.
Ed Shrager 29:01
What's the question that I could ask this person to help them move on, or this organization, or this team? And that is one of the key things, if not the key thing, with AI: asking it the right question. So if you're good at asking questions, which I think most people in the world of OD are, then you can ask a good question of the AI and go, ah, this is the kind of
Sarah Fraser 29:32
How do you know if you've asked it a good question, that it comes up with the stuff that you're looking for?
Ed Shrager 29:40
It comes up with good stuff. So I think it is all very exploratory. And the other part of the mindset is thinking that its first answer isn't necessarily the right one. Like, I don't know if it's an AI phrase, but there's this thing about "one shot": people expecting, if I ask it a question and it gives me an answer, that's it, done. But actually, you can ask it a question and then, unlike a lot of people, it really doesn't mind if you say, that was rubbish, try again.
Sarah Fraser 30:17
Do you actually tell it that was rubbish? Sometimes?
Ed Shrager 30:20
I've not. I'm very polite.
Sarah Fraser 30:22
People talk about being very polite with AI, I think it's hilarious.
Ed Shrager 30:25
Yeah, I am. I'm very polite with it. And strangely, I haven't actually tested it, but I think it does react better if you're nice and you say, you know, can you give me the best answer so I can impress my manager, or something like that. It does seem to come out better. It's like people. But it doesn't mind if you say, that wasn't quite right, it wasn't what I'm looking for, can you try something a bit more inclusive, or a bit more this, or a bit more that? And it'll keep going and going and going. So it's that iterative thing, which again is very much part of what I see as the OD mindset: being emergent, seeing what comes out of it, being iterative, asking questions. So it's getting into that mindset of thinking through it like that.
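The iterative loop Ed describes, taking the first answer, critiquing it, and feeding the critique back, has a simple programmatic shape. The "model" below is a stub that just appends whatever refinement it is asked for; a real loop would resend the conversation history to an actual chat API. All names here are invented for the sketch.

```python
# A sketch of iterative prompting: don't accept the one-shot answer,
# resubmit with each critique, like telling a colleague "try something
# a bit more inclusive". stub_model stands in for a real LLM call.

def stub_model(prompt: str, feedback: list[str]) -> str:
    """Pretend model: the answer 'improves' as feedback accumulates."""
    answer = f"Draft answer to: {prompt}"
    for note in feedback:
        answer += f" (revised: {note})"
    return answer

def refine(prompt: str, critiques: list[str]) -> str:
    """Ask once, then iterate: resubmit with each critique in turn."""
    feedback: list[str] = []
    answer = stub_model(prompt, feedback)  # the one-shot first answer
    for critique in critiques:
        feedback.append(critique)
        answer = stub_model(prompt, feedback)  # ask again with feedback
    return answer

final = refine(
    "Write a role profile for a volunteer coordinator",
    ["more inclusive language", "shorter sentences"],
)
```

The design point is that the loop, not any single call, produces the result, which is why treating the first response as final throws away most of the value.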
Sarah Fraser 31:16
So there are two things then that feel really important in the skill set, both at the level of thinking about how organizations integrate AI into their culture, and at the individual skill level. One is being very conscious and aware of the data that you're feeding it, and of the implications of that. And two, developing the ability to ask really good questions, being really intentional about that, but letting it be an iterative process. Yeah, I like that. What else would you say?
Ed Shrager 31:57
So I think, with all the different models, the one that pops into my mind, and most people's minds, is ChatGPT. But, you know, there are lots of others, and it's possible to use different ones for different things, and almost use them in combination with each other. So you can get something out of one, and then put it into another one, and then rinse it out through a different one. So it's using them in those ways, in quite a flexible kind of way, which is,
Sarah Fraser 32:36
It's almost like having a few colleagues that you're chatting to and getting their different opinions, yeah, yeah, okay, yeah,
Ed Shrager 32:41
Same sort of thing. And that's probably easy for me to say, not being part of a big organization that's kind of locked into one Microsoft or Google or whatever; it's going to be harder for people who are in that situation. But it's a bit like the iterative thing, you know, just keeping going around, saying, you know, put a different hat on. And I think that is a real creativity thing. I really like being creative, so having that creative mindset: oh, just imagine if I could do this, and then just trying it. A really nice video I saw a while ago about AI, I can't remember who did it, used this image of AI being like having Einstein in the basement. If you realize that you've got Einstein in the basement, you'll go and ask him about all sorts of things. But if you don't realize that he's there, you'll just ask it to, you know, write a song like David Bowie, or write a poem about this, or write a recipe, all the stuff it's really good at, but it's really basic. But actually, if you can get into your mind that this thing can basically do anything, and knows all sorts of things, then you can ask it anything, yeah, and it'll come up with some really good stuff.
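The "few colleagues" pattern of combining models can be sketched as a small pipeline: one model drafts, another critiques, a third polishes. All three "models" below are stubs standing in for calls to different providers; the point is the pipeline shape, not any real API, and every name is hypothetical.

```python
# A sketch of chaining different models, as Ed suggests: rinse one
# model's output through the next. Each function stands in for a call
# to a different model or provider.

def drafter(task: str) -> str:
    """Stub 'drafting model': produces a first attempt."""
    return f"DRAFT: {task}"

def critic(draft: str) -> str:
    """Stub 'critiquing model': annotates the draft with feedback."""
    return f"{draft} | CRITIQUE: tighten the wording"

def polisher(draft_with_critique: str) -> str:
    """Stub 'polishing model': produces the final pass."""
    return f"{draft_with_critique} | FINAL: polished version"

def pipeline(task: str) -> str:
    """Pass work between the stub models, like colleagues with
    different strengths looking at the same piece in turn."""
    result = drafter(task)
    result = critic(result)
    return polisher(result)

out = pipeline("Summarise the staff survey themes")
```

In practice each stage would be a different real model chosen for its strengths; the flexible, exploratory wiring between them is what this sketch is trying to show.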
Sarah Fraser 34:16
But it is, it's definitely, so I've been playing around with AI a little bit, and it definitely feels like a new muscle. It's a new way of exploring ideas, developing ideas, using it as a tool, as a conversational tool, to develop something, and that feels very different. And I almost feel guilty when I use it still. I think that's just because I'm a bit old and a bit old school, but I almost feel guilty because I feel like I'm cheating, like getting to an answer that I really should get to myself; I should do the work myself. I wonder if others will feel like that. I need to get over that, I get that. But what I'm curious to go to is: what do you think is the possibility? As you've started to experiment with using AI in the projects that you're doing, you know, transformation projects, organizational change projects, what do you see as the potential? What are you excited about as you look ahead?
Ed Shrager 35:30
So I think for me, in the world of OD, coming back to my Charity Culture Catalyst thing, you know, we started that up, me and my business partner, because we wanted to give some of the tools and support to charities, and we're talking small charities mostly, who wouldn't normally be able to afford that kind of stuff, because, you know, traditional consultancy is very expensive. So that's where we started: giving people tools so they can do this kind of thing themselves,
Sarah Fraser 36:09
And where it can potentially create so much more value: that small organization still needing to do all the things that big organizations are doing, how can they do it with fewer resources, by using AI? Yeah, amazing.
Ed Shrager 36:24
We sort of started with giving them tools, and then AI comes in, and we can go, oh, and now you can use this stuff. So it's just made that even more possible. And the thing in the OD world, in the literature, when you read about people who have done big OD projects, particularly if it's American stuff, you know, they're doing it for big pharmaceutical companies, or big oil and gas companies, or big finance institutions. They've got big budgets; they can afford consultancies to do all of this really groundbreaking OD work, which is great work, but it doesn't translate into smaller organizations, whether they're charities or whatever they are. Yeah. So if we can use AI to make more of that good stuff available to more organizations, it could lead to... The thing I have in my head is, I've got two daughters who are teenagers. They're going to go into the workforce at some point. One of them wants to be a doctor, probably in the NHS. I would really like the world of work to be a much nicer place by the time they get there, particularly for women, but for everyone.
Sarah Fraser 37:43
I remember us having, yeah, our initial email exchange, and sort of getting to know you a little bit, and conversations, and a sense of sharing a bit of why we're doing this, and I'm in it for the same thing. I've got two kids, and it's like, how can we think about the future of organizations and the workplace being healthier than it is now? Some organizations have amazing organizational cultures, and there are some great examples. And also, workplaces are not necessarily easy, like post-pandemic. There are new challenges, and still lots of issues around equality, diversity, inclusion, etc. And there are new challenges coming in with Gen Z, and then you think about Gen Alpha coming in, Gen Alpha being the generation after Gen Z. They're going to be the first generation that will come into the workplace with AI in their bones, or, as Ray Kurzweil talks about, potentially even the singularity, where we actually might have the World Wide Web in our heads. Well, I really hope that's not going to happen. But yeah, it could.