Navigating Major Programmes

The AI Revolution in Major Programmes with David Porter | S2 EP14

Episode Summary

In this episode of Navigating Major Programmes, Riccardo Cosentino chats with fellow Oxford MMPM alumnus David Porter about the game-changing potential of artificial intelligence in construction and project management. David shares how AI can drastically improve project forecasts and decision-making, even in an industry slow to adopt new technologies. He also discusses innovative strategies from his company, Octant AI, that tackle data management challenges and boost project performance. The AI revolution is here—will you be ready? Should AI be considered a general-purpose technology (GPT)?

Episode Notes

In this episode of Navigating Major Programmes, Riccardo Cosentino chats with fellow Oxford MMPM alumnus David Porter about the game-changing potential of artificial intelligence in construction and project management. David shares how AI can drastically improve project forecasts and decision-making, even in an industry slow to adopt new technologies. He also discusses innovative strategies from his company, Octant AI, that tackle data management challenges and boost project performance. The AI revolution is here—will you be ready? Should AI be considered a general-purpose technology (GPT)?

 

"So, you know, this is a huge, huge change that we are facing. And there is going to be massive disruption, you know, like, I mean, there just is. And so those who learn to use the tool, like those who learned how to use an internal combustion engine to put an airplane in the sky, those people are the people who are going to be our leaders." – David Porter

 

David Porter brings a wealth of experience from the construction industry, having spent his entire career in this field. As the co-founder of Octant AI, he has been at the forefront of developing AI tools that enhance project performance and decision-making. His insights into the challenges and opportunities of integrating AI into construction projects provide a compelling narrative for the future of project management.

 

Key Takeaways:

 

If you enjoyed this episode, make sure to give us a five-star rating and leave us a review on iTunes, Podcast Addict, Podchaser or Castbox.

 

The conversation doesn’t stop here—connect and converse with our LinkedIn community: 

 

 

Episode Transcription

Riccardo Cosentino  

You're listening to Navigating Major Programmes, a podcast that aims to elevate the conversations happening in the infrastructure industry and inspire you to have a more efficient approach within it. I'm your host, Riccardo Cosentino. I bring over 20 years of Major Programme Management experience. Most recently, I graduated from Oxford University Saïd Business School, which shook my belief when it comes to navigating major programmes. Now it's time to shake yours. Join me in each episode as I press the industry experts about the complexity of Major Programme Management, emerging digital trends and the critical leadership required to approach these multibillion-dollar projects. Let's see where the conversation takes us.

 

Riccardo Cosentino  

Hello, everyone, and welcome to a new episode of Navigating Major Programmes. I'm here today with my esteemed colleague and, of course, fellow Oxford alumnus, David Porter. How are you doing, David?

 

David Porter  

I'm really well. Thanks, mate.

 

Riccardo Cosentino  

And where are you joining us from today?

 

David Porter  

I'm at Noosa Heads, which is in Australia. It's a beach. So this podcast is cool, but you're cutting into my surfing time right now.

 

Riccardo Cosentino  

Good to know. Good to know. A bit of background: David and I met through the Oxford MMPM; we're both alumni of that program. We've met a few times, and David has a very, very interesting story to tell about what he learned from the program and how he applied it. So I felt he'd be a good addition to this podcast (inaudible). But maybe, David, a little bit of introduction about yourself: when you're not surfing, what do you do?

 

David Porter  

Well, thanks, Riccardo. Yeah, I guess I'm a construction and projects person from way back; I've pretty much spent my whole career in that. I'm a civil engineer. Around about the end of 2015, I sold the company that I owned, and it was a good opportunity to really find out whether what I'd been telling people to do was the right thing to do, or the best that could be done. So I looked for a place to go to explore that, which was the MMPM program at Oxford, Cohort 7. And while I was there, I met my co-founder, Kong Kuang, who had an idea that he could use artificial intelligence to essentially operationalize, put into practice, some of the theories of reference class forecasting, the outside view and the like that we learned in that course, and that was the subject of his dissertation. He achieved something really remarkable with that. But that was a proof of concept, and since that time we've returned to Australia to commercialize it. And that's a very difficult task, because it's a difficult data environment and construction is not really known for its adoption of technologies. So we probably had a task there. But, as I was just saying, Riccardo, when you're at Oxford you really believe you really can change the world. And so we've persevered, and I hope we're getting closer to that.

 

Riccardo Cosentino  

And so you mentioned that you have a startup. What's the name of the startup?

 

David Porter  

Well, it's called Octant AI, O-C-T-A-N-T AI. And in fact, we've just released a new product last week, an online AI machine that helps you to accurately predict outturn financial performance: cost, revenue, profit margin and the like. That's in beta, and it's free, so I encourage everybody to go and have a go at it. If you haven't got any data, there's preloaded data in there, and we're really just looking for feedback from people on what they like about it and what they don't, so that we can improve it.

 

Riccardo Cosentino  

And you mentioned this latest release, so maybe we can talk a little bit about the product. Obviously, you were doing AI before AI was cool. I think I met you a couple of years ago; since then there's been an explosion in the usage of AI and it's super hyped, but you embraced machine learning and AI way before that. So I think you introduced a tool which basically uses AI to apply reference class forecasting. You've described the Edge one, but maybe can you elaborate a bit more on what the base tool does?

 

David Porter  

The base tool, that's why it's taken a long time: pretty much eight years of R&D, 38,000 projects and five years of industry collaboration to get it to work effectively. But that process has led to what we call the single adaptive project performance algorithm. Behind it, it has one single, really complicated algorithm, which uses multiple different sorts of machine learning at different times; that's called ensemble and transfer learning and the like. So effectively, with the right training data, you can forecast or predict any project variable at all, or combination of variables. And the real breakthrough is that it enables you to start very simply, without very much data; then, as you develop data and as your data strategy unfolds, you're able to take new data sets and layer them on top of your existing data to make your own machine more complex. So what it really does is enable people to build their own machine learner, because that single algorithm allows people to add the things that they think are important to the outturn of their project. For example, somebody on the equator might think that rain is very important to their project, so they want to collect data on that and load it into their own machine. But somebody else who lives in a desert may think that's completely pointless, so they wouldn't bother. So what we're really doing is augmenting human expertise and human insight by using data to enhance those estimates and forecasts. And at the end of the day, the forecast is critical to anything. We all know that in projects, every month, we do a forecast; we forecast lots of different things, and we spend a lot of time and a lot of money doing that, with lots of expert people.
The reason we do that is because, in any undertaking, the only thing you can change is the future. So your expectation of what the future is going to be, if you do nothing, is what drives your interventions into projects. That's why we do plans in the first instance, right? Otherwise we wouldn't bother; we'd just start the job, and when we ran out of things to do, it'd be finished. So being able to improve forecasts in a more timely and accurate way enables people to make better decisions. And if they make better decisions, then, assuming they can implement those decisions effectively, which maybe they can, maybe they can't, I'm not sure, but optimism bias would say they'd all think they can, they'll get better outcomes. That's the bottom line.
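David's point that the forecast is the baseline for intervention can be sketched in a few lines. This is a toy illustration, not Octant AI's actual method: the uplift factor and all the numbers (borrowed from the hundred-million example later in the conversation) are assumptions.

```python
# Toy sketch (not Octant AI's actual method): forecast final cost as
# actuals to date plus a data-driven estimate of the remaining work,
# used as the baseline for deciding whether to intervene.

def forecast_final_cost(spent, remaining_budget, uplift):
    """Actual cost to date plus remaining work, adjusted by an uplift
    factor assumed to be learned from similar past projects."""
    return spent + remaining_budget * uplift

budget, spent = 100.0, 20.0  # $m, hypothetical
uplift = 1.08                # assumption: similar projects overran their remaining work by 8%

fac = forecast_final_cost(spent, budget - spent, uplift)
print(f"forecast final cost: {fac:.1f}")  # prints: forecast final cost: 106.4
if fac > budget:
    print("forecast exceeds budget: intervene while the future can still be changed")
```

The only decision-relevant quantity is the gap between the forecast and the budget; a better uplift estimate means a better-timed intervention.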

 

Riccardo Cosentino  

And what's the persona? Who's the user? Who are you targeting as the main user of the tool?

 

David Porter  

It's a really interesting question, and this has taken us a lot of time to learn. At an enterprise level, we found that the main user is usually the CFO's office, or the head of risk, or the head of innovation and so forth, at a corporate level. But we're trying to develop the technology in a way that allows it to be used at every level in an organization, all the way down to project managers and people in the field. That's been a challenge, because it's not unusual for Octant, when it assesses how a project is performing, to give you bad news: things are not going to turn out as you optimistically expect. And since that bad news is often based on the project team's own estimate of the future, that can be a little bit threatening for people, so they don't much like it. But I feel that's been the case with all the technology innovations that have ever occurred: there are some people who see them as a threat. And it's not a threat to anybody's expertise. In fact, one of the fundamental inputs into our machine is the expertise of the people who are using it. So at the moment it really seems to appeal at the corporate level, but with Octant Edge we're trying to develop a knowledge of how we can push that further down into the project, so that everybody can participate. And that just takes a bit of time.

 

Riccardo Cosentino  

And being a reference-class-like tool, I'm assuming it's a top-down view of the project rather than a bottom-up one, based on a limited number of parameters or variables. I'm just trying to make a distinction between your tool and other tools I've seen, which actually take the full construction schedule, thousands of line items, and try to reassess it. Yours is more of a top-down approach.

 

David Porter  

Yeah, I try and equate it to human expertise. There are lots of people in the world who are very smart, very good project people, very experienced, and they have a lot of tacit knowledge which they've developed over their careers. What that tacit knowledge really is, is a pattern that they can observe in the data, whether that's the report they're reading or even going out on site and looking around. And that pattern is built up from their experience; that's why we pay them more money. So they might look at a project and go, ah, something's not right here. Not really sure what it is, but something's not right. And then they go digging around and lifting up rocks. That pattern is not built from the bottom up; it's built from the top down, from your experience of those projects and all the things that you've read. So Octant does that same thing: it's a pattern builder. It looks from the top down and uses that reference class, outside view, but then attaches very large amounts of data to enable it to build new patterns dynamically as it's exposed to different project situations. But that's not the only way you can do AI, and as you say, there are many really great innovations going along at the moment. I have a colleague in Germany who is doing an AI-linked innovation with emails and documentation, so that you can better manage construction claims. And there are quite a few companies doing scheduling: there's InPlan out of the UK, there's Nodes & Links, ALICE Technologies in the US, and we have Foresight from our mutual friend, Atif. These are all good innovations, but they use different sorts of machine learning. InPlan, for instance, uses a deep-learner type of machine learning, which is very accurate.
It's very good, but you need an awful lot of data to get it to work, so they've had to get their customers to share data. It would be great if customers shared data, but since our machine is a financial machine and talks about cost and risk and profit and the like, our customers tend to say there's no way on the planet I'm sharing this data with anybody else. Which means we can't use that sort of machine learning; we have to use different sorts. But yeah, this idea of top-down and bottom-up, you can read about it in Daniel Kahneman's book, Thinking, Fast and Slow. It's kind of like a thinking-fast type of machine, I guess you'd put it like that.
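The outside-view arithmetic behind reference class forecasting can be sketched simply. The reference class below is hypothetical and the nearest-rank percentile is a deliberate simplification, not Octant AI's algorithm: you take the overrun distribution of comparable completed projects and read off the uplift at a chosen confidence level.

```python
# Minimal reference-class (outside-view) sketch with assumed numbers.
import math

def nearest_rank(sorted_vals, p):
    """Nearest-rank percentile: smallest value with at least p% of the
    reference class at or below it (0 < p <= 100)."""
    k = math.ceil(len(sorted_vals) * p / 100)
    return sorted_vals[k - 1]

# Hypothetical reference class: final cost / budget for ten similar completed projects.
overruns = sorted([0.95, 1.00, 1.02, 1.05, 1.08, 1.10, 1.15, 1.22, 1.30, 1.45])

inside_view = 100.0  # the team's own bottom-up estimate, $m
for p in (50, 80):
    print(f"P{p} outside-view forecast: {inside_view * nearest_rank(overruns, p):.1f}")
# prints:
# P50 outside-view forecast: 108.0
# P80 outside-view forecast: 122.0
```

The point of the top-down view is visible here: no schedule line items are needed, only the experience of the class, which is then refined with project-specific data.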

 

Riccardo Cosentino  

Interesting. So AI is obviously at the forefront for everybody nowadays. And there are a lot of skeptics, and one of the remarks made is that with machine learning algorithms, or deep learning, it's really difficult to understand what's actually happening. It's a bit of a black box. So how can you trust the output if you don't actually understand what's happening? And I'll leave it at that. I mean, we could go even deeper, but how can you trust the output of AI?

 

David Porter  

Well, that's the burning question, isn't it? Once again, different sorts of AI, because they function differently, have different ways of measuring the output, and each is relevant at different times to different organizations. So, for example, if you've got a driverless car, there's no way on the planet you're going to understand how that AI works, but you don't need to. You just need to be confident that when it's about to have a crash, it puts the brakes on. And the way we've tested that around the world is by having lots of driverless cars drive around and monitoring when they fail; that's been the longest part of the exercise. So that's a suitable kind of AI for driverless cars, but it's not really a suitable kind of AI for treating cancer. If somebody's got cancer and their oncologist says, oh, I'm going to use this particular treatment on you, and they say, well, how will that work, doctor? And he goes, well, I have no idea, the machine just told me to do it, that's not really going to cut the mustard. And I think any new innovation does things people don't really believe can be done until they are done. Like ChatGPT: two or three years ago, nobody believed that anything like ChatGPT could occur, and now everybody treats it as normal. That's one of those human things: if you see something often enough, you start to believe in it. But that is an issue for us, because, as I mentioned, everybody bases decisions on the forecast. So if you're coming up with a forecast that's different from what we've traditionally forecast, which is humans applying their expertise, then naturally people are going to go, well, how do I know that's going to give me the right answer? And the answer to that question is that we test it.

We use out-of-sample testing, which you can go and Google (cross-validation, if you like) and Wikipedia will explain it to you. But even that is hard for people to accept, even though it's statistically very sound. And of course our machine, like every other, is a probability machine, because no one knows the future, not even AI machines. And we don't really get this. In the normal circumstance, AI aside, every month somebody does their forecast final cost, right? We had a budget of a hundred million, we've spent 20 million, and our forecast final cost is a hundred and two million. Well, that's a probability, clearly, because until it happens, it's just not certain. But if you ask the human expert to tell you what that probability is, they're unable to do it, because that's not the way our brains work. We can, of course, apply some Monte Carlo simulation to it, and Monte Carlo is a wonderful piece of mathematics, but generally it's not giving us the accuracy we want. If it was, there would be a study somewhere where you could say, oh, here are 500 projects that had a P90 and, look at that, they regressed to within 9% of the final number, so it works. But you can't do that, because that study doesn't exist. And in fact, the studies that do exist tell you that instead of being right 90 percent of the time, it's probably right 10 percent of the time. So the way we do stuff now, we don't test. But the point is, it's the way we've been doing it, and it's the best we've been able to do, so that's normal for us. Then along comes an AI machine that says, well, I can do better than that, and we're naturally skeptical of new things, as we should be. Anybody who comes along with an AI machine and says it can do X, Y, Z, the first thing people should say is: how do you know? Show me the data.

Because if you're going to take a view of progress and base it on data, then you have to be judged by the data as well. You can't have your cake and eat it. And there are very standard methods of testing machine learning; we just follow those. I guess we follow the gold standard method of machine testing, and that's out-of-sample testing.
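The holdout discipline David describes can be sketched as follows. Both the project history and the "model" here are toy assumptions; the point is only that accuracy is measured exclusively on projects the model never saw during training.

```python
# Sketch of out-of-sample (holdout) testing with simulated data.
import random
import statistics

random.seed(0)
# Hypothetical history: true overrun ratio (final cost / budget) per project.
history = [random.gauss(1.10, 0.12) for _ in range(40)]

train, held_out = history[:30], history[30:]  # held-out projects stay untouched

# Deliberately simple "model": predict the training-set mean for every new project.
predicted = statistics.mean(train)

# Out-of-sample error: judged only on the unseen projects.
mae = statistics.mean(abs(actual - predicted) for actual in held_out)
print(f"predicted uplift {predicted:.3f}, out-of-sample MAE {mae:.3f}")
```

In practice one would repeat this over several train/test splits (k-fold cross-validation) rather than a single holdout, but the principle is the same: the test data must play no part in training.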

 

Riccardo Cosentino  

I guess it also doesn't help if your tool provides a different answer than what the internal expert had, right? There's going to be immediate skepticism. It doesn't mean the tool is wrong; it's just a different answer. But immediately, as you mentioned, people feel threatened. You've mentioned data several times, and I think anybody who's been in the construction industry, in the major programmes industry, knows that we are not very good at collecting and managing our data streams. What has been your experience? Have things improved, are data environments getting better, are companies getting better at managing their data, and how have you overcome that problem at your end?

 

David Porter  

Yeah, this is the burning question, isn't it? It's all about the data, and we probably picked the hardest industry to do this in, to be honest, but it's where we come from, so they're all the people we know. But you're right, and I've never met anyone in construction with a different view: the data environment is not good. It's a hard environment in which to get AI machines to function. Some people have overcome that, as I mentioned, by using deep-learning-type approaches and so forth, which enable you to deal with lots of messy data. But since we're doing cost and so forth, numbers, our machine needs pretty well-structured data. Of course, that's a major problem, and we've had to overcome it in different ways, but the way we've currently overcome it is to reduce the data footprint required for training to a very small number. In fact, it's such a small number that it's difficult even for us to understand that it's working as well as it is. Which must mean that the way we do things now isn't working all that well, because this is an improvement on it. And some of the data problems I've noticed in construction, I have little names for them. One of them is data hoarding: even in the same organization, you'll have different silos going, oh, well, this is my data and you can't have it. And different machines use different data structures. My favourite one is data embarrassment. You go and talk to people and say, oh, can we use your data? And they go, oh, but our data is very ugly, it's very ugly, wait, we're going to clean it up.

Come back in two years when we've cleaned it up, you know. And I think one of the things I would like the world to understand is that machines are not humans. They mimic outputs, but they don't think like humans; they don't function that way. So when people build a data strategy that might take several years to implement, they have a tendency to think about the structure of their data like a human, because they are a human. So I think it's really important, with any data strategy, to do it hand in hand with somebody in your organization, or get yourself a consultant or whatever, who has knowledge of what machine learning capabilities exist right now and which are likely to exist in the future, because that will help you to structure your data so that it's attractive and usable by machines. You're not structuring vast quantities of data to be useful for humans, because we're already at the limit of what we can do; it'll be machines that are doing it. And of course you then have to decide what sort of machine you want, what sort of outcomes, am I prepared to share data or am I not. It's really wise, I think, to have some sort of expert you can call on while you're doing this data strategy. The default position, unfortunately, in our industry is that that expertise, that knowledge of the way the data should be, is mostly linked to people doing digital twins and BIM and so forth. These systems, BIM and digital twins, require really well-structured data because they don't work otherwise. And that's really influencing the way people are improving their data sets, and people are improving their data sets. But what's important to remember, I think, is that we talk about the data asset, right? Well, the data asset is the data you have right now, today, and the data you've collected in the past.

The data asset isn't the data you're going to have in two months' or two years' time; you haven't got that data yet. But if you can work out ways to use the data you already have, in the form it's already in, to generate better outcomes, that's the low-hanging fruit. That's where you get competitive improvement, better knowledge and better decision-making support. So we shouldn't jump too quickly to the assumption that because the data is messy for us as humans, it's therefore messy for machines too. Go and get yourself some expertise and have a think about that.

 

Riccardo Cosentino  

Well, especially with the advent of large language models, the way you can manipulate and play with unstructured data has been significantly improved. I'm very curious, because I'm talking to someone who was developing an AI-based business before AI was cool; you've been at this for many, many years. So I'd be very curious to hear from you: in your mind, what's the state of AI for major programmes? You were probably one of the pioneers, but where do you see AI for major programmes today?

 

David Porter  

Hmm, that's a really good question. There's a lot of chatter about this, but in my mind the tools fall into several categories. There's convenience AI, which everybody assumes is the thing, right? If you look at the commentary going about, there's a lot of people saying, oh well, AI will help us take meeting minutes, and it'll do this, that and the other, and that'll make us more productive. And it's true, it will, and it does; you can get AI that does that right now. But there's a reason people think that, and it's not only that convenience AI is easy to do. You can use ChatGPT to say, write me a report about X, Y, Z. Although people should remember, of course, that ChatGPT has no requirement to tell you the truth. It just has a requirement to tell you stuff; its objective is to produce something that you like and that makes sense. I mean, I use it all the time, it's fantastic. I have a tendency to write too much, to give too much detail, so I give it to ChatGPT and say, summarize this to 50 percent of its current words. And it comes back with a summary that is mostly right most of the time, which is a terrific achievement. You should probably read it before you send it out, though. I think the other reason we like convenience AI is that it appeals to us as professionals; it's non-threatening. It's a very attractive story: I've spent 30 years developing this expertise, and I really hate doing meeting minutes, so there's some AI now that helps me with the meeting minutes, but it's never really going to touch my expertise, because it's just convenience AI, right? Wrong.

That's not what's going to happen, because then there are other sorts of AI. There's AI that improves people's expertise, with any form of machine learning, and I think that's where all the really good action is happening right now. I've mentioned a few firms that are doing that, and I think it has the most benefit for us, both as businesses and as professionals. But as professionals we therefore have to accept that the way we do things now isn't the only way to do them. These new tools, which use a different paradigm, are things we have to learn how to use to help our customers and our clients, right? Just saying the tool can't possibly work because my expertise is sacrosanct is not going to help you. And then, of course, there are the large language models. I think the opportunity in large language models is absolutely massive. That's a world-changing thing, and probably a bigger threat to professionals than any of the other AIs, because at the moment those large language models, like ChatGPT, are using publicly available data on which to do analysis. But think about it: say you're a lawyer. There are some very, very good data sets that are not publicly available but are very powerful, like, for example, all the law cases that have ever occurred about a particular thing. If you train your large language model on those, the opportunity to get really useful and mostly right outcomes is massive. My brother's a judge and he doesn't believe this can ever possibly happen, but I look at it and go, well, it could happen right now. The main restriction to it, in the legal field, is professional liability.

And so that's one of those barriers: how do we use this AI in the real environment in which we live?

 

Riccardo Cosentino  

Yeah, I think there was a case in the United States where a lawyer used ChatGPT. And as you said, ChatGPT doesn't have an obligation to tell you the truth. It spat out all these cases to prove this lawyer's theory, and the lawyer filed a brief. And I gather he was disbarred, because the judge looked at the cases and half of them were made up. But they sounded really convincing, right? They sounded really convincing. So yeah, I think it speaks to two of the points that you made: you've got to read what you send out, and you've got to understand how the tool works.

 

David Porter  

Exactly. And I kind of think about it like this. This is the fourth industrial revolution we're in now, right? And the third one, the advent of computers and the like, was pretty much a non-event. But the second industrial revolution really was a huge, huge change in the world: the adoption of two really important pieces of technology that changed everything, electricity and the internal combustion engine. So sometimes when a new piece of technology comes along, it changes everything. My example is that in 1895, Lord Kelvin, a very smart man who was then the president of the Royal Society, said something along the lines of: for all intents and purposes, heavier-than-air human flight is impossible. And then in 1903, eight years later, the Wright brothers flew for 12 seconds. But people didn't believe it then. They filmed it, and people still didn't believe it, right? And in 1907 they went to a French exposition, and people were not believing it, and there was a newspaper article that said they are either liars or flyers, and we will soon see. So, you know, this is a huge, huge change that we are facing. And there is going to be massive disruption, I mean, there just is. And so those who learn to use the tool, like those who learned how to use an internal combustion engine to put an aeroplane in the sky, those people are the people who are going to be our leaders.

 

Riccardo Cosentino  

Yeah, I mean, the debate right now is, is AI a GPT? Not the chatGPT GPT, but rather a general purpose technology, right? I mean, that's what electricity was. That's what the combustion engine was. And I think there is gathering consensus that AI will be like electricity and the combustion engine, and it'll change society forever.

 

David Porter  

Totally. I mean, I'm worried about that sort of thing because disruption is not so good for humans. Might be good for shareholders, but it might not be so good for the humans, right? But, you know, every time humans and technology have gone head to head, the technology wins. So I am worried for my kids and for my grandchildren, really, as to the world that they're coming into. What professions will they do, and how much will those professions change? Because prior to the 1950s, we were very much a manufacturing-based economy, so the general purpose technologies were like mechanical muscles: excavators and those sorts of things, planes and so forth. And what we have now is a mechanical brain. But our society has turned into a service society, you know, where much larger parts of the economy are about services than they were in the 1950s. So that's the section that's going to get disrupted. And that bothers me, that worries me. And the other thing that worries me greatly, and one of my sons is doing a PhD in this, in actual fact, is how that data is used and by whom. Because we run a risk in society of creating more robber barons, like what happened in the early part of the last century, where who has the data has the power. Whether or not they have a right to have the data, they have it anyway. And they're collecting vast quantities of it from us right now and making money out of it. And we don't even know it. So, you know, that bothers me as well. I don't know how we're going to get around that.

 

Riccardo Cosentino  

I mean, that's a topic for a different podcast, David. I think we're at time. This has been a great conversation. I'm thankful for you joining me on the podcast, thankful for you skipping some of your surf time in order to talk to me. And yeah, I want to thank you, and maybe we can continue this conversation in the next season of Navigating Major Programmes and see how the situation is evolving in major programmes.

 

David Porter  

Yeah, thank you so much, Riccardo. It's always a pleasure. I like catching up when we get in the same country at the same time. That's cool. So yeah, thank you very much. These are really important conversations. The stuff that you're doing here is really important, because there is so much that's changing, and so rapidly, that it's really hard for people to keep up. It's really hard for people to know what to do. And the more people can be exposed to, you know, not only the benefits, but the risks of this sort of stuff, the more we're likely to get sensible decisions. And in major projects we need that, because once you make a decision, lots of stuff happens that's very expensive and has a big impact on society as a result. So, you know, they're in no way immune from AI, machines or the like, and AI is going to infiltrate every single part of them. So people having these discussions, and disagreeing with each other even, is just hugely valuable. So thanks very much for giving us the opportunity.

 

Riccardo Cosentino  

Thank you very much. Have a good day. Bye now.

 

David Porter  

Yep. You too, mate. See yah.

 

Riccardo Cosentino  

That's it for this episode of Navigating Major Programmes. I hope you found today's conversation as informative and thought-provoking as I did. If you enjoyed this conversation, please consider subscribing and leaving a review. I would also like to personally invite you to continue the conversation by joining me on my personal LinkedIn at Riccardo Cosentino. In the next episode, we will continue to explore the latest trends and challenges in major programme management. Our next in-depth conversation promises to dive into topics such as leadership, risk management, and the impact of emerging technology on infrastructure. It's a conversation you're not going to want to miss. Thanks for listening to Navigating Major Programmes, and I look forward to keeping the conversation going.