Navigating Major Programmes

AI Adoption in Major Programmes with Lawrence Rowland | S2 EP 6

Episode Summary

Are LLMs stochastic parrots or a reflection of our own intelligence? In this episode of Navigating Major Programmes, Riccardo Cosentino sits down with Lawrence Rowland for an extremely candid conversation about the adoption of artificial intelligence in major programmes and beyond. AI skeptics and AI enthusiasts alike, this episode was recorded for you.

“None of us are keeping up, none of us know what the hell is going on. So, if you can kind of just relax and enjoy it happening, you will also help everyone else so much more. Enjoy it. And enjoy what [AI] is telling us about us.” –Lawrence Rowland

Episode Notes


Lawrence began as an engineer on large capital projects with WSP and Mott MacDonald, before moving on to Bechtel and Booz Allen. He spent ten years in project and portfolio management with CPC and Pcubed, before transitioning to data analytics and AI for projects, working originally for Projecting Success and now for React AI. He now helps project services firms find relevant, immediate AI applications for their business.

 

 


If you enjoyed this episode, make sure to give us a five-star rating and leave us a review on iTunes, Podcast Addict, Podchaser or Castbox.

 

The conversation doesn’t stop here—connect and converse with our LinkedIn community: 


Episode Transcription

Riccardo Cosentino  0:05  

 

You're listening to Navigating Major Programmes, a podcast that aims to elevate the conversations happening in the infrastructure industry and inspire you to have a more efficient approach within it. I'm your host, Riccardo Cosentino. I bring over 20 years of Major Programme Management experience. Most recently, I graduated from Oxford University Saïd Business School, which shook my belief when it comes to navigating major programmes. Now it's time to shake yours. Join me in each episode as I press the industry experts about the complexity of Major Programme Management, emerging digital trends and the critical leadership required to approach these multibillion-dollar projects. Let's see where the conversation takes us.

 

 

 

 

 

Riccardo Cosentino  0:53  

 

Hello, everyone. Welcome to a new episode of Navigating Major Programmes. I'm here today with Lawrence Rowland. How are you doing, Lawrence?

 

 

 

Lawrence Rowland  1:00  

 

Hey, nice to meet you again, Riccardo. Yeah. All good.

 

 

 

Riccardo Cosentino  1:03  

 

I met Lawrence online and we had a few interactions on LinkedIn, my favorite platform. And we got to know each other a little bit. And, full disclosure, we had a few offline sessions where we got to chat, and we had a really, really interesting conversation. So we felt it'd be good to share our conversation with a broader audience. But why don't you give us a bit of an introduction, Lawrence, who you are, where you're coming from?

 

 

 

Lawrence Rowland  1:37  

 

Thanks, Riccardo. Yeah, so where I am now, I'm essentially looking at AI and other technologies for project success. And where I've come from, let me go right back to the start very briefly: trained as a physicist and then went into engineering consultancy, modeling and simulation, transport, infrastructure, government. Moved through WSP and Mott MacDonald, moved into delivering implementation and EPC with Bechtel, got a broader sense of the rhythm of major projects with Bechtel, and then moved into management consultancy at Booz Allen Hamilton for five years. Again, mostly transport, infrastructure, government, still a lot of rail. And then the last twelve years, I've basically spent about eight years within project services consultancy specifically, some by myself, some with CPC Project Services, quite a lot with MI-GSO PCUBED. Of the last five years, I spent three helping a startup in the project services space called Projecting Success, which is all about data analytics for projects. And in the last six months, I've been doing it by myself.

 

 

 

Riccardo Cosentino  3:02  

 

Thank you for that comprehensive introduction. Really interesting background. As I said at the top, I came across you because you've been very active on LinkedIn, testing out and playing around, so to speak, with chatGPT and GPTs in particular. And maybe before I ask you the first question, can you give us a bit of an overview of the work you've been doing online? Because I think what you're doing is quite unique and worth a mention, and it might be a good lead into my follow-up question.

 

 

 

Lawrence Rowland  3:38  

 

Thanks, Riccardo. Yeah. So at the end of the day, all I'm doing is trying to apply some ready-to-use AI to basic project and programme use cases, because we all know that the main bottleneck is essentially when project companies and individuals feel comfortable using it and working on it together. So it's really about raising that discussion and making sure we chunk it off bit by bit. So all I do is choose the shiniest bangle, but one that works, so that we can talk about it. And at the moment, the shiniest bangles are these things called GPTs, from OpenAI, which are basically the large language model conversations from GPT-4, but bundled up into use cases with a nice user interface. So if you have a particular idea you're trying to pursue, like optimizing portfolios, you can write down what that GPT should do, and then the AI will have that conversation with a new user, trying to abide by your instructions and by the information that you give it. Now, that's a great place to start, because one of the main things we're learning about AI, and this is still an open question, is: will most of the use of AI, both on projects and more generally, be through lots of small AIs, which are partly decentralized, so you have your own AI or a number of AIs, let's call those agents, which cooperate with each other in some kind of market mechanism, and also with the platforms? Or is it big AI, corporate AI, a few big foundation models with big moats, where you've got five or ten big players max, and in projects you might only have two to five big players with their absolutely supercomputer-equivalent models, and that's where all of the value for projects resides? Or do you get a kind of spectrum? So I'm exploring GPTs because, there's no right or wrong answer in my opinion, but it's an interesting way to get a sense of how comfortable project people are with these agent frameworks, these more decentralized types of AIs, or whether people are more comfortable with great big foundation models, fine-tuned on entire corpuses of projects and held by a few big companies. So that's where I am with that.
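For readers who want to try this: a custom GPT is, at its core, standing instructions wrapped around a chat model. Here is a minimal sketch of the same idea via the OpenAI API, assuming the official openai Python package, an API key in the environment, and an illustrative instruction text (not an actual GPT from the episode):

```python
# Minimal sketch of the "custom GPT" idea: fixed instructions steering a chat model.
# Assumes the official `openai` package and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# The "GPT" is little more than standing instructions the model must abide by.
PORTFOLIO_GPT_INSTRUCTIONS = (
    "You are a portfolio optimisation assistant for major programmes. "
    "Ask the user for their portfolio objectives and constraints, then "
    "walk them through the trade-offs step by step."
)

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model works here
    messages=[
        {"role": "system", "content": PORTFOLIO_GPT_INSTRUCTIONS},
        {"role": "user", "content": "I want to optimise a portfolio of rail projects."},
    ],
)
print(response.choices[0].message.content)
```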

 

 

 

Riccardo Cosentino  6:43  

 

Interesting. So obviously I've followed your work, so I'm quite in tune with large language models and with agents. But before I ask you the follow-up question, maybe just for some of the listeners who might not be as familiar with the latest. I mean, hopefully you haven't been living under a rock, because if you haven't heard of OpenAI and GPT, you've probably been living under a rock. But what's an agent, and what's a large language model? Let's start there, so that we're all on the same page before I move on to the next, bigger question.

 

 

 

Lawrence Rowland  7:24  

 

Yeah, no, that's good. So a large language model is a type of generative AI, which is obviously a subset of many other different types of AI. Specifically, it's an AI that is autoregressive, in the sense that it predicts a sequence. In this case it's normally predicting sequences of words, but once it's set the first few words, it's autoregressive in that it uses the last few words it has created to keep generating the sequence, and it just goes on and on. And, you know, five years ago it was not at all certain that this would be one of the big and important bits of AI that would be useful, but we've now got an existence proof that these things tell you very interesting things. So that's what a large language model is. There are other types, there are drawbacks with it, and there are other AIs coming along, but that's the product category.

And then, what's an agent? Agents have obviously got a much longer history, but, well, we're agents, right? Agents are partly defined by their own internal motivations and requests, but they're also defined by the sense that they're in some sort of community or market which is sharing information, and value, and goods, et cetera. And the interesting thing at the moment is that some people are packaging up large language models and testing them with already established agent frameworks. So let me put it in a project perspective. If you think of a project review meeting, in the language of agents that is an agent forum, where a number of agents are getting together, and they all have their different incentives and motivations and roles on that project, internal and external to both the project and the company holding the review. And in a good meeting there will be some progress made, some decisions that get made, and not deterministically, not just whatever one of the agents has said; there's a process involved. So that's why agent frameworks are one of the ways to make things like large language models useful: they start to do things in the world. You can exchange information, but you can also exchange value, you can exchange money, et cetera. It's still very experimental, but there's a lot of work going on on that at the moment, and if people like it, then that might be one of the things that's 20 or 30% of what we'll end up doing in future.
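To make "autoregressive" concrete for readers: below is a toy next-word predictor built from a hand-coded bigram table, purely illustrative and not a real model. Each word it emits is appended to the context and used to predict the following word, which is the same loop a large language model runs at vastly greater scale.

```python
import random

# Toy bigram "language model": for each word, a distribution over next words.
# A purely illustrative stand-in for the billions of learned weights in an LLM.
NEXT_WORD = {
    "the":       [("project", 0.6), ("programme", 0.4)],
    "project":   [("is", 0.7), ("review", 0.3)],
    "programme": [("is", 1.0)],
    "is":        [("late", 0.5), ("on", 0.5)],
    "on":        [("budget", 1.0)],
    "review":    [("is", 1.0)],
}

def generate(prompt: str, max_words: int = 8) -> str:
    words = prompt.split()
    for _ in range(max_words):
        last = words[-1]
        if last not in NEXT_WORD:
            break  # no known continuation; a real model never runs out
        candidates, weights = zip(*NEXT_WORD[last])
        # Sample the next word, then feed it back in: this loop is the
        # "autoregression" -- each output becomes input for the next step.
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the project"))  # e.g. "the project is on budget"
```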

 

 

 

Riccardo Cosentino  10:57  

 

So I've played a little bit with GPTs and with the broader chatGPT, the GPT-4 model, and yes, you can ask chatGPT to write you a letter. But I've been using chatGPT more and more as a sounding board, as somebody I can brainstorm things with. So it's not just please write me a letter, please write my resume, please review my letter. It's also: okay, I want to do a market segmentation. So I look for a GPT that does marketing, and I want to do market segmentation on the construction industry, well, how would you go about it? And so obviously the GPT gives me an answer, and then I start probing, I go back and forth, until I come to a point where I'm satisfied with the feedback I'm getting, and I think I'm getting the answer I was looking for, or the position I was trying to establish. Is that what you're also referring to when you're talking about having interactions with agents and having agents available, or?
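That back-and-forth brainstorming is just multi-turn chat: the full conversation so far is re-sent on each call, so the model can build on its earlier answers. A rough sketch, under the same assumptions as the earlier snippet (official openai package, key in the environment; the questions are made up):

```python
# Minimal back-and-forth loop: the accumulated `messages` list IS the
# conversation state, so each follow-up is answered in full context.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "system", "content": "You are a marketing analyst."}]

for question in [
    "How would you segment the construction industry market?",
    "Which of those segments is most exposed to AI-driven disruption?",
]:
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})  # keep context
    print(answer, "\n---")
```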

 

 

 

Lawrence Rowland  12:14  

 

That's part of it. The agentic part is where it starts to take actions on the internet, or even in the real world, for you and on your behalf. So it might trade for you, it might set a contract up for you, it might exchange some information under certain conditions with another agent. If you're from a consulting background, you'll be familiar with the principal-agent problem, which, you obviously know about it, but just for any listeners: it's a long-running economic problem. You have a principal who's got an interest in something, it could be a mining company or whatever, and then you've got an agent acting on your behalf; think a hundred years ago, you'd have shipping agents or mining agents and that kind of thing. Now, when you're the principal, there's a fundamental imbalance in the information between the two parties. You don't know for sure if the agent is acting on your behalf, because you either haven't got the time or, more likely, haven't got the expertise to know exactly what a shipping agent or a mining agent should do. That's called the principal-agent problem. It just shows that agents have been with us for a long time, hence the fact that we've got the word in our language. But basically, wherever you're getting information back and then, as principal, acting on it and taking it on trust, you're dealing with an agent; likewise where the agent is, on your authority, taking an action in the real world, like expediting or setting up a contract or something like that. And you can use LLMs without that kind of agentic paradigm, just by asking questions and getting advice, a little bit like an advisor, treating it a bit like an advanced search engine. That's not really an agentic use case. But it's interesting, because there is an aspect of agency to it, in the sense that if you treat it as an open question how much agency this AI could possibly have, and obviously at the moment we assume it's very low to none, it's a bit like suspending disbelief when you read novels. You watch TV, you're watching these characters, and you're suspending disbelief that these are actors and just words on a page. And I find you actually get a lot further by suspending disbelief for a little bit and treating the thing you're engaging with as if it were an agent. But what that also makes you do is become more conscious yourself of how limited you are in understanding your own purposes and motivations, especially when you're being spontaneous. Right? You might say something, well, did I know that that was what I was going to say one sentence ago? Not really. I often only learn what I think by speaking. So you're dealing with something which has got, at the moment, pretty low agency, but it is definitely becoming more and more agentic.
And in terms of what we allow it to do, it allows us to think more broadly about what it means to have agency ourselves as individuals, and what the natural limitations on us are. That's one of the most useful things I've got out of the last eighteen months of engaging with these: it's reminded me to put a question mark over my understanding of myself and my conversations with others. So, for example, a lot of people will say these things are just stochastic parrots. Stochastic as in random, right? These things just parrot back, because they've read the Internet, and they just parrot that back: stochastic parrots. Among computer scientists, without putting too much content into this here, you'll find some who very much agree: yes, it's just spotting patterns, it's just a stochastic parrot. And then you'll find others who agree that that is true at the base layer of things, but who say that if you look at the behavior, there's maybe something more interesting going on. But that label, stochastic parrot, if you apply it, you start to think: well, how often am I behaving like a stochastic parrot myself? And so it becomes very valuable. Because, especially as we start to think about how to run projects alongside these things, you also have to think about how capable am I, and how capable is my team, at those things? We have blind spots and blind sides. And if you've had four hours of sleep three days in a row, how agentic and unbiased are you when you go out and do these things? Because we're encountering this different type of intelligence, it's an opportunity to reframe our own individual intelligences, and then our corporate intelligence. Let me give you one example that came out yesterday. There's an absolutely fascinating paper where they tried to see whether GPT-4 can do wisdom of crowds. The wisdom of crowds is this thing where you may get good answers if you ask a lot of people a common question, and it's invaluable as a corrective when you're stuck in your own epistemological bubble with your own idiosyncrasies, and it's hard to know whether you're being too optimistic or too pessimistic. So this study was asking the kinds of questions that prediction markets and competitive forecasters use about what's going to happen in the future. And it actually got some really good results. So that's an example where, already, you're finding that if you use these things correctly, just as one use case, they might be quite good at projecting the spectrum of possible global or local events that might happen. And you can use that alongside individual humans and individual experts to offset the fact that when some global supply shock occurs, if you're not a logistics professional or an economics professional, you almost certainly don't know what the implications will be.
But GPT-4, for example, is showing that it's quite good at doing that, up to the level of competitive forecasters. Because there are competitions these days, the Phil Tetlock type of competitions, where people make predictions: they'll say there's a 20% chance there'll be this sort of political problem this year, or a 30% chance of this, et cetera. And there are prediction markets that make kind of fake and real bets on these. So it's showing that if you're asking the right questions and using it the right way, our human intelligence can possibly be complemented with the special things that the AI brings, and then the things that the AI is not so good at, the things we're better at, are what we should be focusing on ourselves.
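The wisdom-of-crowds idea Lawrence mentions reduces to a simple recipe: ask the model the same forecasting question many times at a non-zero temperature, then aggregate the answers. A hedged sketch of that recipe, not the paper's actual method (openai package assumed; the question and the number parsing are illustrative):

```python
# Sketch of "wisdom of crowds" with one model: sample the same forecasting
# question repeatedly, then aggregate (here, the median) to wash out noise.
import re
import statistics
from openai import OpenAI

client = OpenAI()
QUESTION = (
    "Give only a number from 0 to 100: the percent chance of a major "
    "global shipping disruption in the next 12 months."
)

estimates = []
for _ in range(10):  # ten independent "crowd members"
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": QUESTION}],
        temperature=1.0,  # non-zero temperature so the samples actually differ
    )
    match = re.search(r"\d+(\.\d+)?", reply.choices[0].message.content)
    if match:
        estimates.append(float(match.group()))

print(f"median estimate: {statistics.median(estimates):.0f}%")
```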

 

 

 

Riccardo Cosentino  21:13  

 

So, yeah, I mean, I totally agree with you, but maybe to say it in a sentence: it's augmentation of our intelligence, right? And I've heard people a few times refer to a large language model, an LLM, as a thousand interns that help you with a certain task. And, you know, we all work with interns, and they are an amazing resource, but they require a little bit of supervision, a little bit of input, a little bit of training, in order to get the best outcome out of them. And you're typically limited to training a couple of them, but with an LLM you have almost unlimited access to, not intelligent, but knowledgeable resources that can help you carry out tasks.

 

 

 

Lawrence Rowland  22:11  

 

Yeah, and you should really move to a place where your team is using those research assistants or assistant project managers now, right, learning what the drawbacks and the strengths are. So in one sense it's like our intelligence, but in another sense it's a very alien intelligence, and this is also one of the benefits. I'm using the word thinking in quotes, okay, because that's a whole other thing, but in terms of its identification of patterns, it doesn't do it the way we do it. That's got weaknesses, and it's got great strengths. And also, it's early days. One of the ways of thinking about this is: what's happening with a large language model is it's getting a prompt from you. It might be two or three sentences, four sentences, told in a certain language, with a certain implied level of education, with a certain set of interests and nuances and idiom, and it then has to think about the most likely ways of completing and continuing that dialogue. Now, there are lots of different ways of completing that which are consistent with the way it started. And people like Geoffrey Hinton, one of the Turing Award winners in this space, and a lot of computer scientists disagree with him, but it's provoking even to think in this kind of way, will say things like: in order to complete that sentence in a useful way, there's a sense in which it's possible that this thing has a certain sort of world model. We haven't given it a special world model; it has somehow created some sort of world model itself, because it has to know all of those 30,000 different types of personalities that are consistent with the few words you've given it. Is this a fifteen-year-old speaking, is this a nine-year-old speaking? And it's doing all of that somehow in the background. We can't do that; we're far more restricted in a certain way. It's doing it on the basis of all of the human learning, all the documents it was given, so in that sense it's our culture, it's part of our collective intelligence. But the way it's doing it is something we're just not capable of doing. I can't suddenly start speaking Japanese; if someone is an Anglo-Japanese speaker and they suddenly lapse into Japanese, it can mimic that and do that. I certainly can't do that; I might not even recognize that that's what's happening. So that alien nature of it is profoundly strange and provoking. And it also gives you a great sense of caution around it. But now is the time to start exploring it, because as these things get much, much bigger and much more effective, it's only gonna get stranger. And if you're actually following it now, then we stand a chance of figuring out which are the right questions to ask these things, where they perform better than us, and which are the times when we can't trust it, or we shouldn't trust it.
And where we should trust a human, and which are the tasks where it doesn't matter whether a human or an AI does them. We just haven't got frameworks for doing this. Maybe the closest thing we need is a combination of anthropologist and ethnographer, and maybe someone that works with animals, a primatologist or something like that, who can pick up on those things. That's where we all are now: we need this mix of skills, and none of us have got more than one or two of them. And this is why talking about it is so valuable, because we slowly piece together how to survive and thrive in this new world. I'm sorry, I veered off projects.

 

 

 

Riccardo Cosentino  27:03  

 

Oh, no, I think this is a really good setup for what I'm going to ask next. But before I go there, I like the concept you're describing: these things are new, and you've been a very strong advocate, even online, of saying play with it, interact with it, try to understand how it works. Right now it's rudimentary and you can still peek behind the curtain and try to understand what's happening; eventually that will stop. It brings me back to my early days with my computer, right? Thirty years ago I got my first computer, and at the time we had DOS, right? So with DOS you had to know the cd command, and, if you remember, when Windows 3.1 came out you had to load it. You didn't turn on the computer and off you go, you get your Windows with your mouse. No, no, you started with a DOS prompt, you loaded the disk, you typed win, and then you got an interface you could play with. But I remember operating in DOS, loading up WordPerfect in DOS, and when you do that you have to learn the folder structure, you have to learn how to drill down into folders. Nowadays those DOS prompts are still there; if you look hard enough, you open a terminal and you can still get a DOS prompt on your machine. Nobody needs it, but it's there. But what that allowed me to do is: once I evolved into the more interactive version of Windows, I understood the folder structure, and I understood what was happening. If there was a problem, or I was getting an unexpected response from the machine, I kind of knew what was happening behind the scenes, behind the interface. And I think that's where we're going to be. You know, five years from now, God knows where the user interface is gonna be, and we're just gonna see the use case that we need, and probably a lot of people won't even know what's happening behind it. Anyway, I just wanted to draw that parallel.

 

 

 

Lawrence Rowland  29:22  

 

I love that. No, I think that's bang on. And there's an even older one in that tradition, the older Unix. Unix was one of the earliest operating systems, but it's still there, in the back of all your things. And there were a few instructions in Unix, the things that do read and write and that kind of thing, and if you knew those ten or twenty... some people say that each one of those primal commands spawned one of the unicorn companies, right? Each one unbundled one of those things. So there is a sense that unless you understand what's going on under the hood at the start... yeah, I love the way you put that. Exactly right.

 

 

 

Riccardo Cosentino  30:23  

 

Okay, so, you know, this podcast is about major programmes, so I think this is a perfect segue. Now, we introduced large language models, which are, by the way, just one of the forms of AI, but they're what we'll focus on right now. But this is a broader question, and it's very broad so we might take some time to unpack it: how is AI going to affect the way we deliver major programmes? Or sorry, even more generic: how does AI affect major programmes? Let's start there and see if we can unpack such a broad question.

 

 

 

Lawrence Rowland  31:05  

 

Yeah. So I think the first thing to say is, it's probably already impacting it, but we're less aware of it. If you look at the surveys of professionals in different countries, in India, for example, I think 80% of professionals are using chatGPT or other models, and over in Europe it's something like 20 or 30%, et cetera. So the fact is you've already got people using these things on your project, whether they're disclosing it or not. There's no such thing anymore, fortunately or unfortunately, as an untarnished baseline; you're already starting from that. And there may be some things that have already happened that we will want to unwind; I'm not saying all of the first forays will have been right. Then the second thing to say, and it's almost the first thing that has to happen, is that companies and projects will, in my opinion, be split into two or three groups depending on what they do about this: how much they actually become transparent, from now, and find a way of doing that that doesn't piss off their team. Because you've got a whole spectrum: some people won't touch it, some people are already using it but not telling anyone, some people are using it and disclosing that they're using it, and some people are probably using it on things they shouldn't be using it on. I don't want anyone using it on a safety case, right? So the very first thing that will impact is how companies and projects slot in: some aren't there yet, they're ignoring it; some are spending a lot of time figuring it out and becoming unproductive, because they've got into a tangle about it and are spending too much time thinking about it; and then there'll be the ones that come up with a good policy and approach. So I think that's the very first thing that has to happen, because it's going to happen at these different speeds, and I don't know what the right answer is. But until it becomes a thing... there was a point when you could start bringing smartphones in, and, you know, what do you do about your email? And there was a time when no one was discussing that. Anyway, you get the idea. So I think that's the first way in which projects will be affected, and that will depend partly on what people choose to do; that's a now question. And then, very broadly, the design-type phases will be affected first, and then the reporting and handover elements, because those are easier to control and review. And then the hard-asset, operations and commissioning type stuff will broadly be less affected initially. So that's very big-handfuls type of stuff.

 

 

 

Riccardo Cosentino  32:05  

 

And may I push on that last statement? Why is that? Why do you think the hard assets are going to lag in terms of adopting AI?

 

 

 

Lawrence Rowland  35:08  

 

Yeah, it might just be me. I think everything will be affected; I think it's just a question of which first, right? And I might be blindsiding myself. So I haven't really got a clear answer for you, which is hopefully interesting in itself; that's almost just a guess. I've been thinking about this and I haven't got a good answer for you. I guess it's just that I'm making a bit more progress at the moment on the intangible stuff, on the documentation and those kinds of things. But that's just based on me playing around, and other people might be getting different results, right? So I don't know, Riccardo, and I think the fact that we don't know is itself interesting. What do you think?

 

 

 

Riccardo Cosentino  36:19  

 

So, yeah, just picking this up: I think the work processes you describe, reports, are ripe to be disrupted, and large language models are a perfect fit for helping to manage those workflows. When it comes to hard infrastructure, those workflow processes are a little bit more complex, and in order to disrupt them with AI, and by the way, I don't have an answer either, so I'm totally speculating, maybe the AI needs to advance a bit more, or maybe our understanding of AI needs to advance a bit more, before we can really adopt it. I mean, you said it yourself about the safety case: you still need a human in the loop when it comes to safety. I don't think we're ready to delegate that, no matter how accurate the predictions from AI can be; as a human race, I don't think we're ready to delegate that. So, yeah, I think it's a question of maturity, both of the human race in adopting this, or accepting the disruption, and of the technology, which might need to go further. How is AI going to help with commissioning, right? You're commissioning a railway, how is AI going to help with that? I think we still have a little bit of ground to cover. But I think what is becoming clear is that there are a lot of sub-workflows or subtasks that are going to be disrupted even in the hard infrastructure, right? Maybe the entire commissioning process is not going to get disrupted, but there are going to be tasks within that process that are probably already being disrupted now.

 

 

 

Lawrence Rowland  38:30  

 

Yeah. And I think the more bottom-up you go, the fewer subsystems there are, the more you're talking at the component level when it comes to the hard assets and physical performance, and the more sensors there already are, the quicker that stuff will happen. And you're absolutely right that some of that may well be happening now. Those are the cases where the AI for the physical world has the data and is actually able to make great progress, but it will happen at that subsystem level. I mean, imagine you've got a behavior report for an asset: it's running self-tests and that kind of thing, you're getting twenty pages a week on this one particular asset, and you've got a thousand of them across your network. And let's say you've got a 200-page manual for how to use it and what the different ways of operating it are, and then another manual for how you're meant to operate and maintain it in this particular organization. If you have got the sensors and you have got the data, then there are probably some incredibly intelligent, incredibly hands-on engineering people already working out how to use AI to proceduralize some of the basics of that, right? And I'm just not seeing that yet, but that's because I'm not looking. So maybe what will happen is we get all this general stuff at the front, in terms of improvements in design and the early stages, and then the stuff on the right-hand side happens bottom-up, but by the time you notice it, it's already very advanced, and could even be superior to what we're doing, because it's got that really fine-grained data and context that it's using. Because already you can take photos, and soon it will be videos, of these things, and it will be able to look for cracks; you know, you ask what do I need to do to check for defects on this component, et cetera, et cetera. So it may be that we get this slow start, but lots is happening, and then suddenly it comes into the limelight and we realize it's really advanced. So that's another possible path, I suppose. But I hadn't thought about that until you asked the challenging question.
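The pattern Lawrence describes, pointing a model at asset reports and manuals, is essentially retrieval-augmented question answering: embed the document chunks once, then pull the most relevant ones into the prompt. A rough sketch under the same openai-package assumptions as earlier (the chunks and the question are invented for illustration):

```python
# Rough retrieval sketch: embed manual/report chunks, find the most relevant
# one for a maintenance question, and hand it to the model as context.
import numpy as np
from openai import OpenAI

client = OpenAI()

chunks = [  # stand-ins for chunks of an O&M manual and weekly self-test reports
    "Section 4.2: visual inspection procedure for hairline cracks on bearing housings.",
    "Weekly self-test 2024-03-01: vibration within tolerance, temperature nominal.",
    "Section 7.1: lubrication schedule and approved lubricant types.",
]

def embed(texts):
    out = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in out.data])

chunk_vecs = embed(chunks)
question = "What do I need to do to check this component for defects?"
q_vec = embed([question])[0]

# Cosine similarity ranks chunks by relevance to the question.
scores = chunk_vecs @ q_vec / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec))
context = chunks[int(np.argmax(scores))]

answer = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"}],
)
print(answer.choices[0].message.content)
```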

 

 

 

Riccardo Cosentino  41:37  

 

I mean, physical infrastructure is my area of interest, right? I do operations and maintenance, and we anecdotally understand how that sector is ready and ripe for disruption. But I think there are still some challenges, and some of the foundational blocks are just not there yet. As you said, the sensor stack is not there: a lot of what we call dumb infrastructure, you know, pavement, track, just doesn't have sensors. So you still survey it; you don't get a live feed, you don't get live updates. You do get condition assessments that you can use to monitor. And not only is there no two-way sensor communication, but some of the data, as I said, a condition assessment, can still be a PDF file, right? So then you've got the challenge of how you make that data digestible. But we understand how machine learning especially can really spot patterns in degradation curves that maybe a human can't really rationalize, and there are certainly artificial neural networks that do that. But I'd like to switch gears a little bit, because I'm going to provoke a little bit and then get your view. The talk of the town is generative AI, right? We talked about large language models; that's really generative AI. I'd like to ask you, in your view, what's the level of maturity of generative AI for the harder parts of our industry? And the holy grail is, you know, generative AI for CAD. Do you think that's possible? Do you think it's something that can be achieved?

 

 

 

Lawrence Rowland  43:49  

 

Yeah, so I haven't thought about this very much. But we know there are already startups in this space, so we know we're going to get some results very soon if it is actually possible, and my guess, and it's only a guess, is that it will become very powerful very quickly. Just flipping back a little bit to the last bit of what you were saying before we move on, because it's related, interestingly, back to the hard assets: we were talking from a frame of AI for it, but as I know you know, and as I know you're also interested in, a lot of the technology and thought patterns you want in order to boost an integrated view of those hard assets and of asset performance are actually other technical developments that you'd probably want to look at as well as AI. You're starting to veer into some new approaches that are emerging in things like systems engineering, et cetera, and I think those, compounded with some use of AI, will be great. That's the bit where there are other advances happening in parallel to AI that are just over the horizon for many of us, but there's a lot of exciting work going on. You and I were talking, for example, about this thing called applied category theory, which I think is one of the things that might, in the end, improve things like digital twins. So there's a lot of this parallel work going on that's not explicitly AI-focused, which might actually tip those things into success. But back to what you were asking about generative AI for things like design and CAD: as I say, people are already doing it. And there are a lot of design tools already being created, even outside construction, which are sure to filter into construction. If you think about something like Figma, the design tool, for example, I think they're using large language models, and nearly every SaaS product will essentially be using large language models to facilitate text-to-drawing. But there will also be other generative AI that will look for different types of options, because it will be predicting not just text but shapes and image patterns and that sort of thing. But yeah, that's not my area, so I'm guessing a bit here as well, Riccardo.

 

 

 

Riccardo Cosentino  47:23  

 

It's not my area either, but given that we're in a project environment, in the built environment, I wanted to touch on that, because it seems to be a bit of the holy grail for a lot of people. Can you imagine if we could fully automate CAD design? What sort of productivity gains could we have? That's certainly something I'm hearing more and more, but, like you, I have not looked into it; I'm still focusing on things I can touch today. Okay, so that was a good conversation we just had on generative AI. But maybe let's push it forward, because you and I have obviously talked offline, and we've explored different types of AI and also what's coming next, because this is an evolving industry sector and the advancements are quite significant. And I think it's worth mentioning that AI and machine learning are not something that happened ten years ago; I learned in my journey that we've been doing machine learning for the last sixty years. It's just that the computing power has now changed, and now we can really take it to the next level, because it's less theoretical and more practical. I'm probably butchering those words, but just to give a sense. So I'd like to explore with you what's next. We know about OpenAI, we know about the various open-source large language models, we know that Google has released Gemini; these are obviously competing products. But what do you see around the corner in the landscape of new developments? If you follow all the press releases, you probably get a sense of what's coming, but people who have a day job and a family might not be up to date with all the latest releases and the latest technology. So let's focus on that: what Meta is doing, and what Yann LeCun is doing. That's an area I'd like to explore with you, because you brought it to my attention, and I learned about it and found it fascinating. I'd like my audience to also get a preview from you, the same way I got it.

 

 

 

Lawrence Rowland  47:23  

 

Yeah, so it is really exciting. And just an obvious caveat: I'm not an AI engineer, so this is just my gleanings from it, though I do try to follow it quite closely. And there aren't enough really good AI engineers, so you're going to have to make do sometimes with people like me, and when you need detail, push harder on someone else. But yeah, there are several things. Let me lay out a few of the big ones, and then we can maybe walk into one or two of them, right? So let's start with more of what we're doing now. First, we're obviously waiting for GPT-4.5 and then GPT-5; these things will just continue to get a lot better. Then there's what's going on with Gemini: we're waiting for access to what they're calling 1.5 Pro and 1.5 Ultra, which most of us haven't actually got yet. But when you get into that, you've already got this multimodality, which is much more powerful. As you were saying, they're processing video, they're able to do video-to-text as well, and they've also got much, much longer context windows, the sort of million-token type thing, with much better recall from them. So that is going to be another step change. Then there's some other related stuff at DeepMind that we're not seeing yet. If you listen to Demis Hassabis, you can tell that one of the places he's going to go is to reach back to, and I don't know how many people were paying attention in 2016 and '17, AlphaGo and AlphaZero, et cetera, where they were basically doing Monte Carlo searches, searching the space of possible states. And you can tell that one of the things coming up with Gemini in the future is that they're going to augment these generative models with some advanced search, which has already paid huge dividends in, for example, the work they've been doing in biology and DNA, as well as AlphaGo and AlphaZero. So that's another discrete area; we're not seeing any instances of it yet, but that'll be the next few months to a year. Then there are these new things that have appeared in the last few months called selective state space models, which essentially give memory. There's a model out called Mamba, and people are saying they'll be able to combine those with the different large language models. Those things are very good at giving a better memory, compressing the things that need to be remembered through time so that they can be more efficiently recalled, and that'll be incredibly powerful. So that's another one. And then there are two more, right? There are all the rumors, which haven't gone away, by the way, about OpenAI and their Q-star. We don't know what Q-star is, it's just a rumor, but it looks like it may be a combination of things, and we do know that reinforcement learning, and Q-learning is one type of reinforcement learning, is due for a comeback.
You have to remember that OpenAI started with reinforcement learning; they started with these playgrounds for reinforcement learning, the learning-the-Atari-games type of thing (actually, they didn't do the Atari games one themselves). And then there's something called A-star search. So Q-star may be another sort of tree search. When a large language model is predicting sequences, it's actually got a lot of possible sequence completions that it's considering, and it just gives you one of the top-probability ones. These Q-learning and A-star type approaches might be more sophisticated about choosing the right path: they're not just thinking about the next best step, they're actually choosing better whole sequences. And then the last area, which is one that you touched on, is Yann LeCun and Meta, and there are others as well. They've got a sequence of models called JEPA, J-E-P-A, which are based on the strong belief of Yann LeCun, their chief scientist, that we're only at very early days of this, because these things are not as good as a three-year-old child at learning, et cetera. And that's where Yann LeCun has not just an idea of it learning a world model, but a whole number of other items. There's a fantastic paper of his where he indicates there'll be a discriminator, and a whole set of other components that cooperate, as well as the generative mechanism, which is a little bit more like the way our brain works: we have different parts. So I'm conscious that I've said a lot of different things there; we might want to go through one or another of them.
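To see why searching over whole sequences beats greedily picking the next most likely word, here is a toy beam search over a hand-coded bigram table like the one earlier (illustrative only; real systems search over learned probabilities, and the rumored Q-star/tree-search work is not public):

```python
import math

# Toy next-word probabilities. Greedy decoding from "the" commits to the
# locally best word at each step; beam search keeps several whole candidate
# sequences and can find a globally better path, which is the intuition
# behind combining search (A*, MCTS, Q-learning) with generative models.
NEXT = {
    "the":       {"project": 0.6, "programme": 0.4},
    "project":   {"slips": 0.55, "delivers": 0.45},
    "programme": {"delivers": 1.0},
    "slips":     {},
    "delivers":  {"value": 1.0},
    "value":     {},
}

def beam_search(start, width=2, steps=3):
    beams = [([start], 0.0)]  # (sequence, cumulative log-probability)
    for _ in range(steps):
        candidates = []
        for seq, score in beams:
            nexts = NEXT.get(seq[-1], {})
            if not nexts:
                candidates.append((seq, score))  # sequence ended; keep it
                continue
            for word, p in nexts.items():
                candidates.append((seq + [word], score + math.log(p)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:width]
    return beams

for seq, score in beam_search("the"):
    print(" ".join(seq), f"(log-prob {score:.2f})")
```

Greedy decoding here would commit to "project" at the first step and end up with "the project slips" (probability 0.33), while the beam finds "the programme delivers value" (probability 0.40): once a greedy decoder goes wrong, it cannot back up, which is the snowplough failure mode Lawrence describes below.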

 

 

 

Riccardo Cosentino  57:11  

 

Before you do that: the reason I'm taking you down this path is because I want my listeners to also get the sense that what you see now is just the beginning. Especially for the people who say, oh, AI will never be this, will never be that, for the skeptics on AI, I think it's important to understand that what you see right now is just the beginning. I heard a great analogy, and again, we're dating ourselves here, for the audience my age: we remember the Nokia cell phones with, you know, the T9 text prediction, right? That was the beginning. And there are people saying that the large language models we see today are the equivalent of those Nokia phones: when you wrote a text, you had to use T9 and press the number button three times to get the letter. So I felt it was important to talk not only about what we have today, but also what's in the works, because it's moving at a fast pace. Anyway, I just wanted to set that premise.

 

 

 

Lawrence Rowland  58:37  

 

No, that's brilliant. And thank you for spelling that out, because what you've just said is important to what I'm saying. I'm giving you some slightly confusing details, but even beyond them, there are advances that are, if you like, in the bag already. Even if none of these architectural and methodological advances happened, we would still get the advances due to scale and increased efficiency, the advantages of more data, and also of more synthetic data, when these things self-play. Those things will continue anyway. And then, as for which of these other things will end up being the most desired architecture, I don't think anyone knows for sure. But let's face it, probably one or two of them will, or maybe another one I've not heard of yet, and those will make it even better. So let's just very briefly recap them. There are the advances you're going to get anyway; you've got to remember that something like OpenAI's GPT-4, they'd already had it for more than a year, and we're on GPT-4 Turbo now, GPT-4.5 hasn't been released yet, so we don't know how long they've already had some of these. When they revealed Sora, the text-to-video model, which is another area, by the way, combining a diffusion model with a transformer, we hadn't even talked about that one; that's another advance from just the last few weeks, but they've had that Sora model for many months, right, and they haven't even released it yet. Because there are huge alignment and control issues with it, AI safety issues, for obvious reasons, especially in an election year, if you're creating videos. So we don't know what else they've already got; there's lots of stuff they're sitting on that they just haven't given to us yet, because no one can be sure it's safe enough, so they have to wait, quite rightly, until they've made sure it's safe. Then there are the advances Gemini is making in terms of much, much longer context windows, and also much better multimodality, which OpenAI will catch up with soon, I'm pretty sure; and there may even be some architectural advances in there that we're not seeing yet, that they haven't told us about. There's Sora, as I just introduced, these transformers combined with diffusion models, which will be huge, because when you actually see what they're doing, and the physics they have it reproduce, there's obviously a lot more going on than meets the eye for them to do that effectively. Then there are the selective state models, like Mamba, which will give some memory. Then there are the developments DeepMind and others are making around bringing back the sort of Monte Carlo Tree Search type things that were so effective five years ago, and working out how to combine those with generative AI.
And there are the developments that OpenAI and others are making around working out ways of combining reinforcement learning back into generative AI. So between all of those, we just know there are going to be a lot of advances. And the last one, again, is the world model approach of Yann LeCun and Meta. It's useful looking at these partly because it helps pull out, and helps you understand, some of the weaknesses of the generative AI we've currently got. When you understand what these computer scientists are trying to do much, much better, it helps you understand the generative AI we have now, and it helps you be more patient with it, because you can say, ah, it has to be bad at this because, et cetera, et cetera. So let's talk about the world model briefly, because it helps you understand some of the weaknesses of the current generative AI batch. I've said before that one of the weaknesses of generative AI is its autoregressive nature. It normally comes out very good at the start: the first few words have a good chance of being roughly right. But each time, it's got lots of choices for what's the best possible sequence to continue with; you know, Mary Had a Little Lamb, it's going on predicting what the rest of that poem is. And when it commits to a sequence, each time it's got a thousand possible paths it can go down, and it goes down the most likely paths, but it quite often chooses one that's not an optimal path. And because it's autoregressive, it's looking at what it last did, so once it's wrong, it just keeps on getting worse. It snowploughs its way into getting stuck. So someone like Yann LeCun quite rightly says, well, this is definitely missing a lot of the components of the intelligence that we have; for example, we rarely get so completely and profoundly stuck and unable to get back on the road. So what's going on there? If you look at what the world model type of thing is trying to do: someone like Yann LeCun disagrees with what I was saying half an hour ago, if you remember, when I was talking about stochastic parrots. I was saying there's a sense in which some computer scientists believe they're just stochastic parrots, and there's a sense in which other computer scientists, while they agree it's in one sense just a stochastic parrot, will say something more. Greg Brockman, from OpenAI, is an example. He'll say there really is a sense in which it knows who it's talking to. If it's in a conversation, it gets a real sense of, ah, this is the kind of person this is: when they speak, they often speak sarcastically, or they're genuine, et cetera.
And so in that sense, he's saying that all the pattern matching going on is equivalent to some sort of world model: the thing has to, in some sense, know that there are these types of people in the world who could be speaking to it. Now, someone like Yann LeCun will say, no, that's absolutely not the case; you need to learn a proper world model. The JEPA architecture, joint embedding predictive architecture I think it stands for, is one road to that: they've already got one for images and they've just released one for video. One of the key things going on there is this. When you think of producing a sequence, Yann LeCun says what's happening in us is not using tokens to predict the next tokens, or pixels to predict the next pixels. What we do is abstract the salient features, predict forward at that higher level of abstraction, and only then make a more detailed, fine-grained prediction based on that high-level prediction. Let me unpack that, because it's a confusing thing to say. If I suddenly turn around and see something charging at me, I am not making a fine-grained prediction about exactly what is charging at me and exactly where it sits in some 2D projection, so that it's exactly here and therefore it will land on me. What I'm making is a prediction at a high level of abstraction: there's something dangerous approaching me at high velocity. I don't even know exactly where it is, because it's not yet in focus. That high-level prediction says: there's something big charging at me, in x seconds it will be on me, it's approaching from this direction, therefore I need to move, and I'm going to move to the left or to the right. Only then does the fine-grained motor control take over, and our fingers and feet move and do the detailed stuff. So even when something is urgent, a world model is at work: we've decided it's an object, an object coming towards us, and only later do we go back down to the fine-grained prediction. It's a compression technique, but a compression based on abstractions that are useful. And we know many of our useful abstractions: we find the ideas of time and space and planning and objects, corners and edges, useful, and we've taught ourselves abstractions like intentions and desires. These are all high-level abstractions that are never presented to us directly; we've abstracted them out of the data. So what Yann LeCun is saying is that a world model operates at that higher level: in our case, you learn about objects and desires and intentions and edges and things like that.
And that's what you learn. You don't learn the details of words and pixels at a very fine grain; you take that fine-grained data, abstract it away, make a prediction with your world model that's quite coarse, at the level of objects and edges, and only later come back down into the fine-grained detail. It's also much, much more efficient, because you compress. Say we've seen a hundred or a thousand different animals in our lifetime: we use the abstraction "animal", and that's a very efficient abstraction for us because it connotes so many things. So with his joint embedding predictive architecture, Yann LeCun wants to build and train that world model; the world model then makes predictions, and then it does the fine-grained stuff. It will still predict things like sequences of words and images, but it's not just predicting forward, it's predicting up and along, and then down into the detail, if you like. And as I say, he has a much bigger architecture he's planning to build out. There's a really good picture in one of his papers, like the brain of the artificial intelligence he's trying to build, with seven or eight elements; it sounds crazy, but it's worth digging out. Maybe I'll find it and you can put it in the show notes. So that's what Yann LeCun and Meta are doing, and hopefully that helps you understand some of the weaknesses of generative AI, because at the moment generative AI hasn't got a clean answer to that challenge. And that's only one of at least six things I'm aware of, and computer scientists are probably aware of twenty, that are growing in the dark at the moment, things that will be combined with transformers or possibly replace transformers as the main architecture. So we just know these things will continue to get better and better. Hopefully it also gives you confidence that each time a powerful new one comes out, it's worth understanding a little bit about how it works, because there will always be a weakness, and there will always be a compensating strength; one type of AI will be better than another at certain things. And the more familiar you are with that, the more people can help you select the right AI for the right use case.
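(A structural sketch, in Python, of the "predict in abstraction space" idea behind a joint embedding predictive architecture. Hedged: the random linear maps below stand in for learned encoder and predictor networks, and the dimensions are arbitrary; this shows only the training target, latent-space prediction error rather than pixel error, and is not Meta's actual architecture.)

```python
import numpy as np

rng = np.random.default_rng(0)

D_PIXELS, D_LATENT = 256, 8  # fine-grained observation vs. coarse abstraction

# Random linear maps standing in for learned networks (untrained toy).
encode  = rng.normal(size=(D_LATENT, D_PIXELS)) / np.sqrt(D_PIXELS)
predict = rng.normal(size=(D_LATENT, D_LATENT)) / np.sqrt(D_LATENT)

def latent_prediction_error(frame_t, frame_t1):
    """Score a prediction of the *abstraction* of the next frame.

    The point of the joint-embedding idea: both frames are encoded
    into a small latent space, the forward prediction happens there,
    and the error is measured there -- never pixel against pixel.
    """
    z_t    = encode @ frame_t    # coarse summary: "big thing, moving fast"
    z_t1   = encode @ frame_t1   # coarse summary of what actually happened
    z_pred = predict @ z_t       # prediction made at the abstract level
    return float(np.sum((z_pred - z_t1) ** 2))

# Training would adjust encode/predict to shrink this latent error;
# decoding back down to fine-grained detail is a separate, later step.
frame_t, frame_t1 = rng.normal(size=D_PIXELS), rng.normal(size=D_PIXELS)
print("latent prediction error:", latent_prediction_error(frame_t, frame_t1))
```

Because the error is measured in the small latent space, the model is never punished for failing to predict irrelevant pixel detail, which is the efficiency of useful abstractions that Lawrence describes.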

 

 

 

Riccardo Cosentino  1:13:31  

 

That's fascinating. And I know we went to a level of technicality that's quite deep, but I wanted to go there, because that's the only way to do justice to the rate of progress and the rate of development, right? If you haven't used ChatGPT yet, please do, because there's a version of the model that is free: just go to ChatGPT and try it, to get a sense of what it can do. You'll probably get your mind blown. And I want to contextualize your mind getting blown by the free ChatGPT-3.5 by saying that that's just the beginning. Most people are so impressed by what they see today, but for me, when I started my journey, the realization was exactly that: it's just the beginning. And it's important to contextualize that as you progress in your career and as a professional. In order to keep your skill set up to date, it's important that you understand what's coming.

 

 

 

Lawrence Rowland  1:14:49  

 

Yeah, that's a brilliant summary, Riccardo. And it's just a different type of intelligence, the same way your dog is a different type of intelligence from your cat. There are different types of intelligence.

 

 

 

Riccardo Cosentino  1:14:59  

 

Oh, by the way, sorry, I have to say this: the real question is the opportunity. Sure, ChatGPT is great, it can write a letter for you, it can write your resume. But that's like using a laser to cut your cheese, right? The opportunity is: how are we going to exploit it? I think it's Andrew Ng from Stanford who, in one of his lectures, invites all the computer science students to learn about machine learning and take it to industry: every industry is ripe for the implementation of machine learning, and your job as a computer scientist is to help industry exploit the potential of AI and machine learning. That's why I use the laser-and-cheese analogy, because that's where we are. Sure, you can have fun with it, but the business opportunity, what this new technology can bring to the table, is unparalleled. A person we both follow on LinkedIn, (inaudible), posted an article from the '90s where a journalist said the Internet is just a fad. I have to reshare that, because I think it's incredible. We're in the same place now, at the turning point where we have something new and there are two camps: people who are scared of it and people who are impressed by it. For the people who are scared of it: this is not going away. This is here, and it's up to us how we exploit it and how we make the best use of it, the same way that in 2000 people made the best use of the Internet. Today we have a technological ecosystem for consumers that did not exist 25 years ago, because a foundation of technology was rolled out, and then entrepreneurs exploited it and built applications on top of it. We now have smartphones, and that would not have been possible had it not been for the Internet.

 

 

 

Lawrence Rowland  1:17:42  

 

Yeah. And if you're earlier in your career, this is such an opportunity for you, because it's all about learning how to learn, and if you can do that, there will be a completely different conversation in five or ten years with another, related or different, technology. For me, it comes down to curiosity. It's a little bit frightening and intimidating for all of us. But in my opinion, keep looking at it until you find a little bit of awe, until you go, oh, how can it do that? Keep that curiosity until you find that awe, and then I don't think you can go far wrong, because you'll be able to follow it without struggling. You won't expect to understand it all, but you'll follow it because you've become interested in it. It will always surprise you, and you get used to it surprising you, and it's humbling. That's the other thing: if it's humbling you, you're on the right track. It's like a seven-year-old asking you questions you don't actually know how to answer, and you'd forgotten you didn't know the answer, and it's annoying and inconvenient and charming all at the same time. None of us are keeping up, and none of us know what the hell is going on. So if you can just relax and enjoy it happening, you'll also help everyone else so much more, because they'll see they don't have to spend a week trying to really understand it. Just enjoy it a bit, enjoy not knowing, and enjoy what it's telling us about us, because these are all the questions we stopped asking when we were ten, coming back. And then, as Riccardo was saying, you get to watch the process of a technology revolution that you're at the start of. The people making the Internet comparison say it's actually a bit like 1992 or 1993 in Internet terms. I got onto the Internet in about '96 or '97, and I was early, but this is even earlier than that, and it's bigger than the Internet one. If you can experience that along with all your friends and colleagues and family, it also means that when the next one comes along, you'll have more of a feel for the kinds of things that happen as society changes as a whole. It really is like the early days of cars, with people walking in front of them waving flags. And those were all the right things to be worrying about at the time (inaudible); we're at that moment now with our horses and cars. And there are all sorts of pitfalls, things no one will expect. One example people give from the late '20s is that two technology revolutions came together. There were cars, and then, during and after World War I, there were automatic weapons, machine guns and that kind of thing. Those two combined created the wave of bank robberies, because robbers could do hit-and-runs: they had the Tommy guns and they had the fast cars. We wouldn't have that whole culture, which we see in all the movies, without it. And who could have predicted that? You're in a quiet bank, and now someone drives up with this new technology, carrying a Tommy gun. So none of us know what the hell is going on, just like no one predicted lots and lots of bank robberies in the '20s and early '30s. So just watch it and enjoy it. What else can you do, right?

 

 

 

Riccardo Cosentino  1:22:35  

 

I completely agree. Anyway, I think this was a terrific conversation; I truly enjoyed it. As I said at the beginning, Lawrence and I had offline conversations building up to this podcast, and I can assure you this was exactly the same kind of conversation he and I had, so while you missed some of those, you've got a real sense of the topics we covered. Lawrence, I can't thank you enough for joining me today, for giving me your time and giving us your knowledge. You may not be a computer scientist, but you know more than most, and because you're not a computer scientist, you explain it in lay terms, in a way that even I can understand. So I want to thank you for that. And hopefully we can continue our conversation, and maybe we can have other podcast episodes where you come back and talk about the latest trends. With that, I give you the last word.

 

 

 

Lawrence Rowland  1:23:49  

 

Well, now I just want to thank you, Riccardo. I picked up on your series towards the end of series one, and I've only listened to a couple of episodes so far, but I really enjoyed the digital twin one, and I enjoyed the start of your first episode of series two. I just want to thank you for the level of conversation you're bringing to our profession and the industry. I really appreciate it, and I really appreciate the chance to be part of it. So thank you. All the best.

 

 

 

Riccardo Cosentino  1:24:24  

 

Thank you very much. Until next time, bye now.  

 

 

 

Riccardo Cosentino  1:24:29  

 

That's it for this episode of Navigating Major Programmes. I hope you found today's conversation as informative and thought-provoking as I did. If you enjoyed this conversation, please consider subscribing and leaving a review. I would also like to personally invite you to continue the conversation by joining me on my personal LinkedIn at Riccardo Cosentino. In the next episode, we will continue to explore the latest trends and challenges in major programme management. Our next in-depth conversation promises to dive into topics such as leadership, risk management and the impact of emerging technology in infrastructure. It's a conversation you're not going to want to miss. Thanks for listening to Navigating Major Programmes, and I look forward to keeping the conversation going.