
From Human Doing to Human Being in the age of AI


AI is moving fast. But the real challenge is not technical. It is human. This is a reflective, practical conversation for leaders, change-makers and anyone trying to stay human while facing into the future of work.




In this episode of the Mayvin Podcast, Sophie Tidman is joined by Hugo Pickford-Wardle to explore where AI is leading us, and what it asks of us as humans at work. Together they unpack why AI can feel both unsettling and deeply empowering, how it acts as a trickster technology that exposes our assumptions about value, productivity and intelligence, and why integrating AI is less about figuring out tools and more about choosing who we are becoming.


To explore more of Hugo's work, take a look at:

📬 The Free Newsletter: AI Optimist – your weekly dose of practical AI optimism
🎓 AI Night School: AI Courses for you to become AI Fluent

Thanks so much for listening! Keep in touch:


Transcript

Sophie: [00:00:00] Hello and welcome to the Mayvin Podcast where we explore fresh thinking on how we organize, lead, and make sense of work as old models meet new realities. In this final episode before the Christmas break, I'm joined by Hugo Pickford-Wardle. Hugo's been working in AI since long before it entered our everyday conversation.

He founded London's first AI consultancy in 2010, partnered with IBM Watson, built a range of AI products and, through startup Sherpas, has brought teenagers into real AI projects. He now teaches thousands through AI Night School and his newsletter, The AI Optimist. In this conversation, we explore the paradox that AI is both deeply unsettling and deeply empowering.

We talk about AI as a trickster technology, why integrating it is ultimately human work, and what it means to move from frantic doing to more conscious being at work. We also explore why purpose and connection matter so much as we face into what comes next. [00:01:00] Enjoy.

Sophie: Welcome to the Mayvin Podcast. How is your run-up to Christmas going? What's on your mind at the moment?

Hugo: Hello, Sophie. Lovely to be here. Thank you for inviting me to the podcast. On my mind at the moment is really making sure that I can get the right presents for the kids, and put that on my to-do list with all of the AI stuff that's going on in my world.

Sophie: Could you not get AI to do all of the Christmas stuff?

Hugo: That is exactly what I am doing, testing out some of the AI shopping features. It's quite interesting actually: when someone wants a pair of headphones but has a really specific set of requirements, it's really interesting seeing what's possible.

Sophie: Yes. Children get to the age where they get really specific about presents, don't they? If [00:02:00] somebody could get me an AI tool that would work Vinted, that would be great, 'cause I spend a lot of my time on it.

Hugo: All of the sizings, and being able to understand what each of the different labels means in terms of that.

Yeah, I've got this kind of idea of, like, the data made me do it, because all of those sorts of challenges always come back down to: do we actually know what Adidas versus Nike sizings actually mean to people in the real world?

Sophie: Right. Okay. We got into it very quickly there. Yeah. Look at that.

It's everywhere. So you are the AI Optimist. Tell me a bit about how you came to be the AI Optimist. What was your journey?

Hugo: Yeah, I am the AI Optimist, and it's probably worth me first framing what I mean by being the AI Optimist. For me, it's a response to what's going on in the world, where we don't want to be in the AI-phobia camp.

'cause it's really hard to make great decisions when you are feeling fear. And you also don't necessarily wanna be drinking the Kool-Aid, 'cause [00:03:00] that's not always a good thing to do either. You actually want that pragmatic middle ground, and that's what I mean by AI optimism: knowing that actually we have agency and control.

And I guess that I play that role because I came to the space much earlier than most people did. I created London's first AI consultancy back in 2010. I have a background in innovation and creating digital products, and I am also a property data geek. And so I wanted to create a prediction of property prices, because I saw people, our family included, buying houses in the completely crazy ways that humans do. And I thought, actually, this isn't really an asset manager's view of this house; it's a very human view of the house. And I thought if we could create some sort of thing that can use all [00:04:00] of that information to create predictions, it'd be awesome.

And so we created a neural network, which is a type of AI, to factor in a whole bunch of human things to make human predictions about property prices, and smashed it out of the park with better results than Rightmove and Zoopla really quickly.

And that led me to discovering this thing called MLOps, which, if you're a person in the world of AI and machine learning, you'll know is the hard part of turning something that's a research model into something you can use in the real world.

So that was a real wake-up call to what was going on in the space. And then we ended up partnering with IBM Watson, doing a bunch of due diligence on machine vision algorithms and building a whole lot of chatbots. And I actually exited that business to create an AI startup to try and replace conveyancing lawyers, having just moved house.

Didn't realize there was no intelligence in that space to artificialise, so that didn't quite go according to plan. But that gave me a really early understanding of the space and how transformative it was going to be. And I took some years to step back and created a social enterprise to teach teenagers how to prepare for that world.

So Sherpas is my social enterprise, where we've trained over 5,000 teenagers now through a program called 16 by 16, where we teach them how to earn 16,000 of their own money by the time they're 16, which covers all of the entrepreneurial skills that they need. Wow. And then the other program is paid work experience, where they get to work on AI projects with big corporates and get a reference from an actual company on their CV, so that when they leave school they've got the skills to stand apart and support that.

And now the technology's caught up, so I'm back working with adults, working out [00:06:00] how to adopt AI in a way that's gonna create the tomorrow that we all want to live in. So obviously there's AI Night School.

Sophie: Mm-hmm. 

Hugo: I've got the AI Optimist newsletter and I work with companies.

Sophie: So you've seen a lot of people go through learning journeys on AI, and you are still in the midst of your own, which you talk very openly about, which is very refreshing for me as a participant on AI Night School. What do you notice about the twists and turns of that learning journey, and how would you recommend that people embark on it?

Hugo: Yeah. So I think the first thing is, actually, starting by starting is really important, and being able to go in with a novice mindset, which is something that we're really not taught in the world of work, right? We're always taught to be the specialist, to be the expert, to be, you know, already fully formed.

In reality, you know, this space is moving so fast that anyone who tells you that they're not [00:07:00] learning, well, that's a bit of a red flag, really. And then the second thing is that it's so transformative that you have to really go in with a novice mindset and be able to go into explore-and-play mode to start off with.

Sophie: Hmm. 

Hugo: So I often run a workshop where we chase a pollen thief, a bear that has stolen pollen from the Chelsea Flower Show, which is just an exploration of different AI tools in a fun, non-work-related game environment, just to go into that play mode.

And you know, obviously what happens when you play is you start to kind of have questions form in your head.

Like, if it can do this, maybe it can do this. And then you'll eventually kind of see into the rabbit hole and see how deep it goes and decide how far you want to go down that hole. And I know you've been experiencing that firsthand. 

Sophie: Yeah. It's such a weird experience, 'cause I think a lot of [00:08:00] things with AI just massively speed up, including your own psychological reactions.

So one moment you're like, oh my God, I've nailed this, this is totally awesome, I can do this and this and this. And then you're like, and what about this? This could be possible. And how did you describe it? Looking over the edge, like the vertigo?

Hugo: That was Francis. Yeah, Francis basically said he felt like he was looking over the edge and had vertigo, and I just thought that was a great way of describing the experience on some days.

And I still experience that. I've just had a day of having that feeling of, yay, I've got superpowers, and I know that tomorrow I may feel that vertigo again. Mm-hmm. Because there is just so much to comprehend and so much change happening.

Sophie: Yeah, absolutely. And even on a team level, I think sometimes, you know, we did the Teaming with AI jam with Marcus. We've got a podcast on that as well, and you were there for that too. I think the whole of the consultancy team in Mayvin went to that. So we were like, yeah, teaming with AI: we'd use it as a [00:09:00] team, not just one-on-one.

Which I think is a lovely example of, well, the dominant narrative of AI is that it's an individual tool. There are lots of other dominant narratives around AI that we don't necessarily need to buy into, actually, and in some ways we need to create our own narratives about how we live with AI and how we work with AI, to feel empowered and to design our own relationship with it.

But even then, back to that story: we were doing lots of one-on-one, we were doing lots of teaming with it, and then a few weeks ago it was like, oh, we're all just using it one-on-one and we're just creating loads of content, and this is a bit overwhelming. We're not doing enough of the curation stuff, we're not doing enough of the relational stuff, and we're just getting completely overloaded.

So we had a collective moment. But it's so funny how that stuff can happen very quickly. So, as an organization development consultancy, what we often say is go slow to go fast; build in the moments of reflection. Those things hold even more true with AI, and you need to be even more on it with noticing what's happening with your team dynamics and [00:10:00] with your relationships with team members, as well as with the AI itself.

Hugo: And at the same time, you're doing that against the backdrop where the rate of change in the world is increasing, that there's higher levels of uncertainty and you're having to make bolder decisions more quickly, usually with less data against this uncertain world, which is a massive ask. And then as you look across the team, you have people kind of asking questions like, is my job safe?

Big one. Are my kids' futures safe? Really big one, and one that crosses that boundary of work and life. Is this good for the environment, or are we just adding more fuel to the fire? And you know, even those questions are massive questions, and they also really go to the core of who you are as a human.

Sophie: Yeah. I was noticing, we were with a group of quite advanced change-makers a few weeks ago, and we were talking about the [00:11:00] future of organizing. AI was obviously one of the topics.

There was a lot of resistance to even talking about it, and yet they kept on coming back to it, and they kept on talking about it in a way that was quite speculative. That was like, oh, what about the environment?

What about this, what about that? And we don't wanna talk about it. Why do we keep talking about it, then? So I'm noticing that, because they hadn't formed their own relationship with it yet, it was just a mirror of a lot of different projections that we were placing on it.

When we started experimenting with the AI on stuff that wasn't necessarily related to AI in the workplace, but were just using it as a partner, in some ways that was very anxiety-reducing, 'cause it was like, okay, it's just this thing; we're just normalizing it.

Hugo: I think what I see with most people's relationship with AI is that it is driven by what they're absorbing from the world around them. And there are a lot of agendas pushing out. One of the things I talk [00:12:00] about is the silicon supremacy, like this idea that we're being told that this can replace humans.

That is a really confronting thing to say, that you are replaceable, and it's also just simply not true. This is a silicon-based system that's basically doing maths, and it's creating some really powerful outcomes that can be amazing to use in the way that we work. And it's absolutely gonna transform the world of work.

But it's not human. It can't replace humans. I've actually been getting people swearing at me on the internet recently, because I've started putting out the AI Optimist videos. And, well, do you wanna guess the most controversial video that I've posted about AI? What's the topic? The headline?

Sophie: There's one on climate change, isn't there? 

Hugo: The one that's really getting people swearing at me is that AI is not destroying art. [00:13:00]

Sophie: Oh, yes,

Hugo: And the point is that if art is an expression of humanity, and AI is not human, it can't be destroying art. What it can be doing is being something in the world that art responds to, as art generally responds to things in the world. And there's a whole economy associated with art that it is really impacting.

But actually what people are doing, because there are all these agendas, is getting overwhelmed. They're not having space in their life to really sit back and look at what it's doing to their relationship with themselves and with others. And so they're starting to absorb the things that they're hearing.

And what we try to teach at AI Night School is, first of all, all of the skills to work with it at first principles, so you have an anchor as everything's moving so quickly, and a really holistic view, so you can understand what's happening at the top line through to what you need to do in your every [00:14:00] day.

The cost of doing is dropping down towards zero. And it has been for a long time, like well before AI, you know, even through the internet age, the cost of doing was dropping. That's why we've had such cheap goods produced in such high volumes, and that line's now accelerated the drop because of this technology, particularly in the knowledge space.

Mm-hmm. And the consequence of that is that we need to do some first-principles work that we're not used to: what is a business? How do you create and store value in a world where doing is no longer that store? That's a massive transformation that is happening right now. There are also upsides, right?

What actually increases in value to get balance in the system? If the cost of doing is dropping, what's gonna go up? For me, the two things I've identified are: one, meaningful human connection is gonna skyrocket in value, and I mean business value, not [00:15:00] societal value, because all of your digital channels will get flooded with synthetic content and people are gonna seek out community and humans.

So it is gonna be a great time for religions, and it's gonna be a great time to go and create communes all over the place. And as you actually start to design your strategy, it really puts humans at the heart of it. And then the second thing that's gonna increase in value is the actual value of holding risk wisely, because we're gonna be able to make predictions more effectively at a cheaper cost.

The organizations that really think about how to change all of their profit models, and the way that they organize themselves, to actually predict and therefore hold risk in their business are gonna do really well. And by that, it could be as simple as changing from time and materials billing to fixed-price billing in a consultancy.

It could be as [00:16:00] complicated as creating a risk-bearing product in insurance. Those are just two that I've identified.

But they're huge things, right?

When you think about any organization that you're working out how to develop and design, the thing that I see is that this is innovation work. This has been around for a really long time; it just wasn't a priority, because we didn't have the burning platform that we do now with AI. So much of my work actually ends up being innovation and service design through a lens of AI, as opposed to needing to go and do the AI work immediately.

Sophie: Yeah, absolutely. One thing I've also thought of: I think you've talked about storytelling before as being of high value, so I'd love to talk more about that. And also intentionality, if that would come into your list. AI doesn't have any intentionality, well, at least not yet; we give it that.

Yeah. And some of the technical things, like this is [00:17:00] how you write a good prompt, for example. I mean, that comes from good contracting, in the language of an OD professional: being really clear. This is what I want, this is what I need, this is who I am, this is who you are.

And I think we've sort of forgotten a lot of that in organizations. It's a really difficult time; we've just had the Budget. As a recent participant in one of our leadership programs said, cutting costs is not a purpose. A lot of organizations, particularly in the public sector, are treating it as if it is, because it is dominating the narrative.

GDP dominates our idea of what the good life is, and therefore the narrative about AI is often: let's do what we are doing, let's just do it faster. But the challenges in our society and in our economy are not about becoming more efficient.

They're about meeting human needs more, being more creative with what we have. We have abundant resources. I see [00:18:00] abundant skill and talent all the time that is not getting used well at all, and people who are deeply suffering in organizations. So, yeah, I would love to see more creativity around the AI agenda, and more ambition.

Like, I think one of my early rules of thumb for using AI is: be radically ambitious. I mean, people in this group that I was with the other week were saying, oh, Elon Musk creating a new society on Mars, like, that's just crazy. It's really counterproductive.

It's really difficult. It's like, well, yeah, but there's a feeling associated with that ambition. And if all we're doing on the other side is saying, oh, that's terrible, that'll never work, why can't we be really radically ambitious in a different direction that's gonna bring people with us?

Hugo: I think it hits on one of the reasons why I'm an AI optimist, which is what I've seen: a lack of places in [00:19:00] our economy for visionaries, particularly in the UK. One of my skills is being able to have a vision that is far beyond the scope of most people's ability to see what's coming.

I tend to freak people out when I tell them, because a lot of people can't really see that far ahead. So I had to learn in my career to be able to pull back and cut it up into smaller pieces. What I've seen at Sherpas is being in a social enterprise, being in a purpose-driven organization.

I love the intentionality; I think I'm going to totally steal that and add it to my list, because purpose is gonna be the glue that holds organizations together, for sure. Because the question really is, as that cost of doing drops to zero, you've gotta ask yourself: why would I have other people in my organization? And the reality is I have to be on a mission that is big enough to warrant more than one person doing it.

And actually, a lot of businesses have [00:20:00] had a really hard time finding product-market fit, finding something that's a scalable business, and once they do, you lock it down and that's your cash machine, so you don't wanna change it.

Which means organizations are generally status quo machines; they're trying to keep things exactly the same. And that has been because the cost of getting to that point has been so high, you need to absolutely do that. And now we are seeing those organizations facing a really uncertain world where the rate of change is increasing.

And that's gonna cause all sorts of changes. But at the same time, the ambition level has been really low compared to what humans need and the problems we need to solve. 'Cause I love your point about the abundance of resources and talent. I mean, I've been working with teenagers; we have an abundance of talent, a hundred percent.

We also have an abundance of problems to solve. So we've got an abundance of talent and an abundance of problems, so where are we thinking that we're gonna run out of jobs to do? [00:21:00] Well, partly I think a lot of the employment models will probably change, and that's gonna require a lot of self-development, self-exploration and self-knowledge to be able to go on that journey.

And that's gonna be hard for a lot of people. Everything we are talking about is going to require people to be really aware and to move from human doings to human beings, which is a massive ask at population scale. I also think that it's immensely exciting that we have a technology that is the most empowering technology we've ever had.

I worked on a project at Sherpas that was trying to solve systemic-level change, and the reality is, to solve any systemic-level change, let's say removing microplastics from the Atlantic Ocean three miles off the coast, right? A systemic issue, but kind [00:22:00] of quite a narrowed-in systemic issue.

You're probably looking at funding in the billions, with a cast of tens of thousands of humans to coordinate, and probably half of that money, at least, is coordination, logistics, sharing knowledge and communications. And the reality is that's gonna take you 15 years to get off the ground, and even trying to track it and measure the outcomes is gonna be so unwieldy that you can't wrap your arms around it.

Now with AI, what it means is each of the jobs to be done in that massive challenge becomes tiny. And what you can start to do is create a system that a human can use to start doing a bigger piece of that job.

That means that you can get fewer humans to work on a bigger challenge, and that becomes manageable. So you're gonna see the development of these purpose-driven organizations, focused on solving systemic issues, using a platform driven by AI to be able to [00:23:00] manage an immense level of complexity and connectedness, and that's gonna be amazing, meaningful work.

The way that we can actually do that work becomes more human, because if you're saying, actually, to create value I need to be a human being instead of a human doing, then less of my time is spent in a machine, on a laptop or a desktop, doing the doing, and more of it is about us bringing ourselves to it.

And that means that things like voice mode mean that I'll be spending a lot of time walking around in forests and talking. And that is a complete shift in the world of work. And I know that vision is far away from what it feels like today, on a dreary, gray December day in the UK, just after the Budget.

But if you think about how much change needs to happen during that time, how much learning needs to happen in that time, it needs to [00:24:00] start now. Because I think there is an option to reject AI: I think that you can create a commune in the middle of Wales and go back to a sort of agricultural way of life.

And I think there probably is gonna be a trend for people doing that. 

Sophie: Hmm. 

Hugo: Because I think that there is gonna be quite a separation in how people view it. But also there's this opportunity to really work out how it actually changes things. In this week's AI Optimist newsletter, there was an article about Gross Domestic Flourishing as a metric instead of Gross Domestic Product.

And that, for me, is a really lovely way of thinking about where we need to be headed: towards what I see as an impact-driven economy in the next cycle.

Sophie: Yeah. Yeah, absolutely. I'm definitely going for an AI-supported commune. That's my life vision for the next 10 years.

We've had this chat [00:25:00] about wicked problems before, haven't we? Because while I agree with you on some of the aspects of AI getting better at predicting things and doing a lot of the doing, there's another aspect of complexity, and Ralph Stacey talks about this in his landscape diagram, which we work a lot with clients on: the level of agreement.

So, for example, climate change is highly complex; the science of it is very, very complex. But also, the level of agreement we have around net zero has reduced recently, politically across the spectrum and probably at a societal level as well. And that increases the complexity, and generally it feels like there's a lot more fragmentation.

We're in a bit of a narrative wreckage, in terms of, back to storytelling, we are in between two different stories. And that's possibly why we're talking about growth obsessively and budget cuts obsessively: because we need a new story and this isn't enough for people. It's also my particular theory around [00:26:00] why populism, and I'm sure I'm not the only one holding this theory, is so appealing.

I wonder if there's something, in terms of the human being, about actually focusing on the technologies, and I'm thinking about storytelling technologies, like basic technology: how you bring people together, how you find common ground, how you align around a vision, how you work with difference. Those kinds of things have been neglected as a poor man's, generally a poor woman's, form of technology, but they need to be even more sophisticated now if we are to develop the kind of agreement and community that's needed to guide the AI.

Hugo: So when I envision an AI technology that allows us to make system change, what I imagine it doing is being able to model our system, so you have a map. 'Cause first of all, if you have a map, it's really helpful. And then what it can do [00:27:00] is start to fire scenarios across it, which can help us identify leverage points.

And those leverage points can then allow us to understand which group of humans needs to be having which conversations at which time to make change happen. And then it can do all of the work, the logistical work, of making those humans be in a human space, having those human conversations, with the right conversation being set up.

I absolutely think you hit the nail on the head: the role humans play in that is the critical role.

Everything else you're actually doing in those projects is the scaffolding to have those human conversations and that human contact. And I think that this comes back to this idea that the value of meaningful human connection is gonna skyrocket.

Because those agreements and disagreements and negotiating points, [00:28:00] they are meaningful human connection. They fit precisely into that as an incredibly valuable thing to be doing, to be able to make the changes that we need to happen.

And think about it: if you can actually change from having four meetings a year to make this happen to having 4,000 smaller meetings, all factoring into the right decisions being made across that system, you can make change happen, because you can spread the outcome of each meeting at a near-zero cost. That can inform the next one.

We've globalized and created immensely complex systems, the human-made systems.

Sophie: Yeah, I would add to that, I suppose, the biases we feed into the system at the beginning, you know, about which human beings are worth speaking to and worth listening to; that is gonna be really key in that. Because the government has just got an AI policy.

One of its priorities is to make consultations more efficient. That's, [00:29:00] you know, with our social contract at the moment, not the most pressing issue, I would've thought. I think it's really fascinating, the stuff they've done in Taiwan to use AI and technology to increase participative democracy.

So again, I think there's even a purpose question at that stage, about what kind of power dynamics we want to shift; this work is all inherently political. I think this is the challenge of having technologists approaching AI from one point of view, economists approaching it from one point of view, and the people specialists approaching it from another point of view, usually a bit later.

Hugo: And that's exactly why I focus on holistic approaches to AI on the training side, because it has to be everyone coming from those different points of view. The reason why we've got these kinds of documents being produced, like AI 2027, that say all jobs are gonna be gone in two years, is because it's been driven very heavily [00:30:00] by a technologist world viewpoint.

The reason why we have the technology in the state that we have it in at the moment is 'cause it's being driven by a technologist world viewpoint. You know, I say the kind of world we're living in is primarily the responsibility of Star Wars and Star Trek, right?

Like, they seem to be the two sources where you're going, yeah, this is what you set out for people and this is what's happening, and they're trying to go towards that point, particularly the billionaires that are all of that age. And actually, you know, if we think about intelligence more widely...

You know, I'm a massive tree lover slash hugger, because I suppose what I've realized is nature has way, way, way more intelligence than I do, and there's so much to learn from it, right? We've been living with intelligence that's greater than ours for a really long time; we've just ignored it. And I just imagine: what if we'd replaced Star Wars and Star Trek with some sort [00:31:00] of David Attenborough programs that really enhanced their view of nature, at that same point in time, with that same cohort of people? What world would we have created? That's one of my little thought experiments that I like to do.

Sophie: Yeah, absolutely. And I think this is really important about kind of being a bit of a trickster figure with AI subverting, diverting the narrative really intentionally when you use ai. I had a good podcast, with, a guy called Rob Chapman, who has a wonderful AI startup and one of his tips was, don't use AI when it's taking away kind of an important social connection. So for example, asking AI for gardening advice rather than his mom. And we've talked before about the James Bridal stuff around actually we think of intelligence, in our culture as, taking things apart, understanding them, in service of dominating them.

And so there's a very strong power dynamic there, whereas a lot of intelligence in nature is about connecting, weaving together. So I think there's something really, I know it sounds [00:32:00] abstract for people, but I think it's a pressing issue of our time that, you know, we are always living some story.

Becoming more aware of that, I think is really key to shifting the narrative.

Hugo: I mean, I work with boards and C-suite people and leaders and very sensible people who have big jobs, not necessarily the hippie community that you might imagine.

A few things have happened as I've had these experiences over the last few years. The first is that often the first question is, what about my kids? And these are leaders of large companies, small companies, but the first thought is about future generations. The second thought is that this breaks the work-life line.

The first person who said that to me was probably someone I'd put very [00:33:00] firmly in the "my work life is separate to my home life" camp, in a very traditional work setting. You know, you're thinking about dark blue suits and Microsoft environments and regulatory frameworks and all of those sorts of factors.

And the third thing that I see is that ethics has been on the agenda in a way that it never was before. 

Sophie: Mm-hmm. 

Hugo: And I think, you know, I have started describing AI as the trickster technology, because I actually believe it might be the technology for a mass human awakening. And from an economic point of view, which, yeah, definitely will start to blow people's minds, thinking about human awakening and economics coming together. But what we see is that if you have to create your value through being a human being rather than a human doing, [00:34:00] that is gonna be a very, very significant shift for a lot of people, one that will require the level of self-reflection and self-exploration that generally leads people to different ways of being.

And I think that is gonna be really fascinating to see play out in boardrooms and organizations where traditionally talking about human awakening would not be the thing that you are necessarily doing in that room. 

Sophie: Mm-hmm. 

Boom.

We are winding up, and I'm wondering: what would your final bit of advice be for leaders, for people wanting to start their AI journey?

Hugo: I think the key is to start on the really practical, so that you have an understanding of what's happening and a [00:35:00] confidence to go and explore without feeling that AI phobia.

And the second is get your anchors in place, because it's real levels of uncertainty that cause people to feel a lot of anxiety, a lot of pressure. And those anchors are gonna be around community. They're gonna be around understanding values. They're gonna be around understanding what contingency plans are in place, whatever they need to feel anchored.

And you know, that's why we built the community at AI Night School, for people to learn together, to go on that journey together, 'cause togetherness is gonna be really important. And I think it's why we also need to find the signal in all of that noise.

You know, the idea with the AI Optimist newsletter is a weekly pulse of what's happening and what's important. And the first section is always what's urgent. And [00:36:00] it's usually nothing. There have been two times in the last year where there's been anything urgent, and both were cybersecurity incidents that you just need to be aware of and respond to.

Sophie: Hmm. 

Hugo: Which isn't even about AI. It's about cybersecurity. And so that's the idea: relax, explore, enjoy, play, have fun.

Sophie: You've said anchors, and I think also scaffolding in organizations. Where are you connecting? Where are you teaming with AI to work on something together?

Where are you sharing your learning? Where are you doing the curating? It doesn't need to be loads of stuff, but it's like thinking about the containment of all that.

Hugo: And I think for leaders, they have to have their own level of confidence to be able to do that. So what I see is that the leaders who haven't done their own personal AI journey really don't have the same platform to go and do those other pieces from. As soon as they have that in place, the first job for them to do [00:37:00] is to tell everyone else why it's gonna be okay and how we are gonna have fun and explore this together.

Sophie: Yes. And what's the purpose? Link it to the purpose of the business, I think. These are all ways of containing anxiety and giving people those anchors. Yeah. Lovely. Um, always a really good conversation. Thank you so much for your time.

Hugo: Well, it's been a delight as always.

Thank you very much. 

Sophie: And that's all for this episode. Huge thanks to Hugo for such a rich and generous conversation. If this sparks your curiosity, we'll be continuing the exploration of what AI means for how we work, lead, and relate at Mayvin over the coming months. Thank you for listening. Take care of yourselves and have really good [00:38:00] Christmases.

We'll see you in the new year.
