This is a shortened version of the Making Risk Flow podcast, episode: “A Guide to Understanding the Data Challenges in Reinsurance”. In this episode, Juan is accompanied by Paolo Cuomo, the Executive Director of Gallagher Re. Over the course of the show, Juan and Paolo discuss the challenges with data in reinsurance, how cyber development is learning lessons from the property CAT space, and why large amounts of data can actually be ineffective for insurers.
Listen to the full episode here
Juan de Castro: Today I’ve got Paolo Cuomo joining me for this episode. I’m really excited about having Paolo with us for a number of different reasons. First of all, Paolo brings a really broad understanding of the insurance industry, starting from his early consulting days through plenty of years in a number of different insurers, from Brit to Charles Taylor to Beazley, before lately moving on to a reinsurance broker. The second one is, Paolo, you’ve been really involved in everything insurtech. I think you were one of the founding members of InsTech London. I would love to hear about that in a second, too. You’ve been a driving force in innovation in the insurance industry in the UK. And probably the third reason, the one I’m most excited about, is that you have really strong opinions on many different areas, which always makes the conversation really insightful. Paolo, thanks so much for joining me today.
Paolo Cuomo: Great to be speaking with you, we’ve overlapped on so many occasions and each time, whether it’s online or in person, it’s always been an interesting experience. So I’m looking forward to this discussion. As you say, it’s been almost 15 years that I’ve been in the insurance industry. I feel lucky to have been through this period of technology-driven change, and clearly, it’s far from finished. But if I think back to when I initially started at Beazley, those were the early days of Solvency II, and concepts such as data accuracy and data ownership were fundamentally new ideas to people throughout the value chain in commercial insurance. And so having watched how that has evolved, what has evolved well, and frankly, what has evolved far slower than it ought to have done in terms of usage of data around digital transformation, it has been an interesting decade and a half. Similarly, it’s been interesting to observe organisations such as Cytora and many of those other exciting companies that were very much startups back in 2015, 2016, when I created InsTech London along with Robin Merttens, and to see some of them scale to be really core parts of the insurance value chain now. So delighted to be here, look forward to our conversation.
Juan de Castro: Fantastic. And are you still involved in InsTech or what’s your role there?
Paolo Cuomo: Not hugely. Robin and Matthew Grant now run InsTech with an ever-growing team. And I think those people who’ve been members of InsTech or attendees over the last few years will have seen that the organisation, the business there, has pivoted, I think, very effectively from what was primarily an events-driven organisation back when we were trying to create a group of people who wanted to talk about digital transformation. And right at the beginning, it wasn’t even called InsTech, it was called fintech and insurance. That’s how it all started, really. What’s evolved now, and as I say, people who are members of InsTech see this, is an intelligence-led organisation that’s trying to make sure it has the best understanding of what is going on in digital insurance, be it the startups and the scale-ups, or the distinction between the insurers and brokers who really get the role of digital, the role of technology, and those who are still trying to play catch up. So I thoroughly recommend any of your listeners who aren’t aware of InsTech and the InsTech ecosystem to get more involved there if they want to be at the forefront of what digital transformation is doing for our industry.
Juan de Castro: Definitely. So you’ve recently joined Gallagher Re. Perhaps you can tell us a bit more about what Gallagher Re does and your role there.
Paolo Cuomo: Absolutely, yes. I joined in October of 2022, so a few months ago. Gallagher Re is the reinsurance broking division of Gallagher. Gallagher is a large multinational broker and risk advisory organisation. Gallagher has had specialist reinsurance broking expertise for some time, but just over a year ago it acquired the Willis Re business, which brought about a meaningful change in size. We’re now over 2,000 people globally, focused on finding the optimal reinsurance solutions for clients. And by clients we mean cedents, the primary insurers who are looking for reinsurance solutions. I’ve joined the advisory and analytics practice. What we’re doing there is pulling together people with a broad range of backgrounds and skills to complement the expertise and experience of the core brokers, allowing Gallagher Re to offer an innovative, broad set of solutions to our clients. It’s a great time to be joining, both in terms of where Gallagher Re is in its journey, but also, to the heart of what we’re going to discuss, because of the increasingly data-driven world where we’re combining genuine human experience, the many years of experience that individual brokers have, with a data-driven decision-making framework. The combination of those two makes it a fascinating time. I’m sure if you and I were to be speaking in 10 or 15 years’ time, it would be a world where an awful lot of what we do will be fully automated, but certainly, for the next decade or so, it’s that augmentation of the human that is at the heart of how we’ll be adding value to clients.
Juan de Castro: Definitely. As you said, you’ve only recently joined Gallagher Re; it’s one of the reasons I thought it was the perfect timing for you to join this episode, because it would be interesting for you to share what you’ve seen on the insurer side for the last 10-15 years and then what you’re starting to see on the broking side, specifically in reinsurance broking. Let’s start there. When you look at challenges, opportunities, priorities, focus, how does your focus back at an insurer compare with your focus now at Gallagher Re?
Paolo Cuomo: Yeah, it’s a fascinating question. Maybe before we talk about differences, we talk about similarities, which is that the entire industry absolutely understands the role that data needs to play. And so we’re no longer in a place, I think, where people are challenging the role of data. As we’ll get on to later, it’s not as simple as more data is better, absolutely not. But the need for accurate, complete, relevant data, I think, is clear in everyone’s mind. If you think about the core of an underwriting business, that is around the underwriter having the right data to make a decision about whether he or she wants to write the risk, and if so, at what price. On the broking side, you want to work out what data is required to support your clients, in the case of a reinsurance broker the cedents, but also what data is then going to be needed by the reinsurers to understand that risk. And it’s not quite as simple as whatever data was useful for the initial underwriter is what’s relevant for the reinsurer. So there is absolutely a role to be played by the broker in the middle, working out where there are gaps and helping fill those. And I think as we talk about data, it’s not just the data around the individual assets, it’s also the data around the hazard characteristics. Really, if you’re trying to understand whether or not you want to underwrite a risk, you want the data that allows you to understand that asset, and you want the data that allows you to have a view of the risk around that asset, what those hazard characteristics are. And it’s only by bringing those two sets of data together that you can do an optimal job. Where that isn’t always available, then it’s often the role of the broker to help understand what’s the best complementary source of data to try and fill those gaps.
Juan de Castro: So when you think about the challenges around data, from Gallagher Re’s perspective, is the challenge that the primary insurer doesn’t have that data available? Is it aggregated? What are the challenges in terms of data?
Paolo Cuomo: Yeah, I’d almost go back even a stage further in the value chain to answer your question, and that is: how does one make sure the primary insurer has the optimal data? There was a great little insurtech in Lloyd’s Lab a few years back called Layr, who I think were based out of Atlanta. What Layr were doing is working with small companies to actually link into their HR system, their finance system, their procurement systems, to understand what was going on in the company and therefore to make sure the right coverage was in place. This, to my mind, was a great example of where you can ensure that the underwriter knows everything they need to know about the company they’re underwriting. Now, as you get to larger organisations and more complicated covers, you find that the data that the underwriter would like to have from their client isn’t always available, maybe not in the format they want or in a timely manner. And so you’ve already got a challenge at that point. And then as you move further down the value chain towards the reinsurer, if the primary carrier is struggling to get the data they may want, then clearly there are going to be gaps as you go. And let me try to use another example to help your listeners. The marine space has been an interesting one over the last year or so, because we’ve seen a number of startups and a number of carriers do a far better job with data of understanding the position of ships and whether those ships are going into a war zone and therefore maybe need an endorsement on their cover. Now, in the old days that involved someone somewhere sending a fax to someone, and maybe that got reviewed and got stamped and it was all good. We’re now in a place where, because the data on where ships are is immediately available to anyone, you can combine that with your appetite for where ships can be operating and therefore essentially automatically place a relevant endorsement.
Now, that is a great example in a small class of business, but we’re far away from having that sort of visibility of data, that straight-through process, when it comes to most other classes.
Juan de Castro: Does it also vary in terms of the data required and the granularity of the data across facultative and treaty? Because you would imagine with treaty, perhaps, covering a portfolio, all you want to know is the shape of the portfolio. Is that the case or not?
Paolo Cuomo: There’s an excellent question there. I think ultimately a facultative policy is essentially no more than underwriting a normal risk. You’re just writing it at a higher attachment point, and therefore you want to understand the data about that specific risk in as much detail as possible in order to get comfortable: do you want to write it, and at what price? Whereas by definition treaty reinsurance, for the last however many decades it’s been written, is, exactly as you say, about underwriting a portfolio of risk. Now, over time, the better that portfolio can be understood, the easier it is for the reinsurer to make a decision. And I think what we’re seeing at the moment is reinsurers increasingly wanting to make sure that they have the level of data available to make those decisions. And that certainly becomes the case in some of the classes, such as the property CAT classes at the moment, which are having an interesting renewal period for 1.1, where it’s an opportunity for the reinsurers to push for quality and quantity of data that maybe historically they didn’t feel in a position to ask for. And a key role for the broker in these situations is trying to make sure that data is available for them if they want it. If the data isn’t directly available, then what are the ways of filling those gaps? So I think we are in a world where there is absolutely increasing demand for data. The flip side is that too much data isn’t always the right answer. There was a fascinating podcast in, I guess, the autumn of 2022, where Adrian Jones and Mark Geoghegan were talking about data. Adrian’s comment was that data is essentially friction. And this sounded odd. Where he was coming from was this: if you have a renewal and nothing has changed, then that’s completely automatable. You just do it. But the moment you’ve actually got new information in the form of new or updated data, then a new decision needs to be made.
And to our point, in 10, 15, 20 years, I’m sure machines will do all of that. But in the world we currently live in, if there’s a data-driven decision that needs to be made, that needs to be made by a human being. And so as you add more data into the process, there are more steps that require human engagement and those take time. So the sort of simple view of more data is better is, I think, not true. What you want is the right data at a timely moment being available in an effective format to the decision maker.
Juan de Castro: That is a really interesting point by Adrian, because we see those challenges, the wrong data, every day, right? Let’s take the renewal book in a typical insurer as an example. There are two scenarios under which underwriters have to touch pretty much every single risk. The first scenario is where the underwriters don’t have any information on whether the risk has evolved or not in the period. So then, with a lack of data, they feel the need to understand the risk, ask the broker whether the risk has evolved, et cetera. But there’s almost the opposite scenario, which I think is where you’re trying to get to, which is if you get too many signals that the risk has evolved and you don’t have a good way of understanding which ones are genuine and which ones are false positives, you end up touching all renewals too, right? Because you don’t understand which ones are real signs that the risk factors have evolved. So I guess the future of the industry needs to be able to have enough data so that underwriters only touch the renewals that have really changed, which hopefully are no more than 20-25% a year.
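That triage logic, only touching renewals where a risk-relevant attribute has genuinely changed, can be sketched in a few lines. The field names, the split between “signal” and “noise” attributes, and the sample policies below are all hypothetical, chosen purely to illustrate the filtering idea.

```python
# Illustrative sketch of renewal triage: flag for underwriter review only
# the renewals where a risk-relevant attribute (a signal) has changed,
# ignoring changes to attributes treated as noise. All field names and
# policies are hypothetical.

SIGNAL_FIELDS = {"roof_material", "occupancy", "sum_insured"}  # assumed to drive risk
NOISE_FIELDS = {"contact_name", "broker_reference"}            # assumed irrelevant

def needs_review(prior: dict, current: dict) -> bool:
    """True if any risk-relevant field differs between the two snapshots."""
    return any(prior.get(f) != current.get(f) for f in SIGNAL_FIELDS)

renewals = [
    ("Policy A", {"roof_material": "tile", "contact_name": "J. Smith"},
                 {"roof_material": "tile", "contact_name": "K. Jones"}),   # noise only
    ("Policy B", {"roof_material": "tile", "sum_insured": 1_000_000},
                 {"roof_material": "metal", "sum_insured": 1_000_000}),    # real signal
]

to_touch = [name for name, prior, curr in renewals if needs_review(prior, curr)]
print(to_touch)  # only the genuinely changed renewal is flagged
```

The hard part, of course, is exactly what the conversation identifies: learning from loss experience which fields belong in the signal set in the first place.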
Paolo Cuomo: Absolutely. And what we’ll increasingly learn is, as you say, which bits of data give useful signals. One of my colleagues was in Florida post-Ian, and one of the things he noticed was how the enforcement of some of the new building codes had meant that some of the buildings there had done a very good job of withstanding the wind. And so data that articulates that a building’s roof has been changed in line with the building code could be very useful, because it changes your view of the risk of that asset. There’ll be other aspects of what’s changed in a building which frankly won’t impact the risk, and those would just be noise. That ability to understand which data points are important is vital. Let’s stay on property and property CAT for a while and look at, so to speak, the more evolved insurance world, so Europe and the US. We by now have a very good understanding of what the attributes are that we want to know about an individual property to understand its risks. We want to then combine that with a good model that helps us understand what the likelihood is of something occurring. And the combination of the two is what allows for a good underwriting decision, both at a primary level and then at a reinsurance level as well. And I think what we’ve seen is, as I say, a really good understanding of the asset attributes and major perils. Obviously, wind is a primary example where we have models in place that we understand and can use for making decisions. Now, there’s an increasing number of important secondary perils where the models aren’t as advanced. And in fact, that is part of where the industry is evolving, but also a key role that a reinsurance broker such as Gallagher Re plays, which is saying, right, where there are current gaps, can we help develop models that can make a real difference to the understanding of risk and filling those gaps?
Because otherwise, all that a reinsurer can do if they have no effective model on a secondary peril is apply some kind of broad loading, which is never the optimal outcome for anyone. Whilst if we can come along and say, right, here is a hail model which will really allow, in those particular geographies, an understanding of the potential impact of hail, that allows for more sophisticated decision-making to be done. And so we’re absolutely seeing that as a core component of what a broker can bring to the mix. Now, if we look at those parts of the world that are maybe still evolving when it comes to the placement of certain insurance, maybe Latin America or parts of Asia, they are behind where we are in Europe and the US, both in terms of the data around the individual assets and the quality of the models. But they’re absolutely catching up. And something that we’ve done here at Gallagher Re is develop models for both flood and quake in MENA, so the Middle East and North Africa, which allow for a better understanding of the risk. And the benefit there is not just for the cedent and the reinsurer to understand the risk; if you can get to a place where you’re able to apply a more appropriate price to the risk, then ultimately it helps the clients, because whoever it is who’s trying to insure a property there, if there’s a better understanding of the risk, then they’re going to get a more appropriate price. So this increasing availability of data in those markets and the improvement of the models are of benefit to everyone.
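The gap between a broad loading and a peril-specific view can be illustrated with a toy expected-loss calculation. Every figure below is invented purely to show the mechanics; real catastrophe models are of course far more granular than a single frequency-severity pair.

```python
# Toy comparison of a broad CAT loading versus a minimal frequency-severity
# hail view, as discussed above. All figures are invented for illustration.

portfolio_premium = 10_000_000.0

# Option 1: with no usable hail model, apply a broad flat loading.
broad_loading_rate = 0.05  # assumed flat rate
broad_loading = portfolio_premium * broad_loading_rate

# Option 2: a minimal frequency-severity view of hail for this geography.
annual_event_frequency = 0.8      # expected hail events per year (assumed)
mean_loss_per_event = 450_000.0   # expected portfolio loss per event (assumed)
model_expected_loss = annual_event_frequency * mean_loss_per_event

print(f"Broad loading:       {broad_loading:,.0f}")
print(f"Model expected loss: {model_expected_loss:,.0f}")
# The model-based figure can be decomposed, challenged, and refined by
# geography or portfolio segment; the flat loading cannot.
```

The numbers themselves are beside the point; the design argument in the transcript is that a model-derived figure is explainable and negotiable, whereas a flat loading is opaque.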
Juan de Castro: This is also part of the evolution of the broker advisory service, on both the insurance and reinsurance side: how can you advise the client on not just better understanding the risk, but also on what the client can do to have a better risk and then ultimately a better premium?
Paolo Cuomo: One of the joys of working in the Lloyd’s market in London, of course, is that there are a lot of small players, and on the carrier side that comes with all sorts of opportunities around innovation. However, these small players cannot have big data teams, they cannot have big modeling teams, and they can’t afford to buy in some of the excellent market solutions that are out there. So they rely on the brokers as a core part of helping them do some of those analytics. And you will have seen how Cytora operates with some of these companies, where they’re having to make tradeoffs in terms of which third-party data sets they can bring to bear. If, through a relationship with the broker, they can derive additional insight, then that allows some of the insight from tools that might otherwise only be available to the larger carriers to be brought to bear for the smaller carriers as well, which is an absolute advantage for them.
Juan de Castro: The timing of this episode recording is perfect. We’re recording it before Christmas and releasing it after Christmas, and 1.1 sits nicely in between. How is 1.1 going? What are you seeing as challenges in the industry?
Paolo Cuomo: You say it’s wonderful from your point of view. From my point of view, of course, anything I say now will be proven to be wrong when you publish it, so far be it from me to give any predictions. I think what is absolutely interesting, to the point we made earlier, is that this is a market where there is absolutely an opportunity for people to be discussing the availability of data. It is also, I think, in a really important way, a market that’s emphasising the role of the broker in the value chain. If the placements of the current 1.1 reinsurance were straightforward, then we wouldn’t be having the depth of conversation, the depth of innovative thinking that we’re seeing in December. And when this is published, one hopes that there’s good, positive news about all the coverage being put into place. But there is, without doubt, still a role for the humans on the broker side, the cedent side, and the reinsurer side to be talking. We are not operating in a world where the machines can do this yet. However, we’re absolutely operating in a world where increased data allows the human beings to make the sort of data-driven decisions that they wish to.
Juan de Castro: So do you hope that by 1.1 2024 things will have changed to make it more effective, with better data flowing from insurers to reinsurers?
Paolo Cuomo: Yeah, I think talking about renewals a year ahead, in a world that is in as much flux as ours, would be a very bold thing to do. However, exactly to your point, if a certain market situation allows more relevant data to be made available to all parties, then that’s clearly going to continue to be the norm. I think, as we know, three things are required to make good use of data. One is for the data, or a proxy for the data, to exist. The second is an appetite to use that data, and the third is a mechanism to get that data to the decision-makers. Technology used to be a burden in terms of getting the data around. I think we’re in a place now where, if the data exists and if there is the appetite to use it, the technology is no longer the excuse.
Juan de Castro: If this is the case in lines like property CAT that have matured quite significantly in the last 30 years, where are we on new types of risks like cyber? Is it going to take us another 30 years to really have the right data for insurers and reinsurers to be able to evaluate this risk? Or do you think we’ve learned the lesson?
Paolo Cuomo: I think with cyber we’ll hugely benefit from what we’ve learned over the last 30 years when it comes to areas such as property CAT. It’s so critical for clients because this is a concern that they don’t necessarily fully understand, but they realise it can be a big risk. It’s an exponentially growing class, so it’s very important to the market. It’s a profitable class. It’s a class that’s driving some really exciting innovation. So it’s fantastic that it’s available as a class that we can all work in. And it is one that, as we develop the models and identify what data allows us to be comfortable that we understand the risk, will benefit, as you say, from the learning of the last 30 years. It’s also one that will benefit from how technology has evolved. In the early days of property CAT models, the technology they were run on was inherently slow. It was great having a model, but if you were literally taking days or weeks to effectively run it, then there was a limit to what you could do with it. In a world of ubiquitous cloud computing, there are no real meaningful challenges to the speed of running the models. I think there is, again, in cyber, absolutely a risk of having too much data. Sometimes too much data creates a genuine problem with a sense of spurious accuracy, where somehow you believe that if you’re collecting vast volumes of it, then that’s going to give you a degree more confidence, when in fact that may be misguided. And it’s interesting, maybe we can briefly compare the challenges of cyber to property. If we take a property risk, anyone can understand that there is a building with certain physical characteristics. If the wind blows or there’s a lot of rain, then that building can be at risk in some way, and people understand what questions need to be asked: what is the roof made of? How close is the property to the river? Anyone can visualise that.
Now, cyber doesn’t have that same option to visualise physical attributes, and so that makes it trickier. The cyber threat landscape can also change faster. So whilst there’s ample evidence of climate change impacting weather events, fundamentally we are not getting hurricanes next month in a place that’s never had hurricanes. With the cyber landscape, the threats can change far more rapidly. So there’s a need, absolutely, to have models. These models are harder to explain to many people. And what we’re seeing still is that models give different answers. Now, some people get nervous when models give different answers and say, well, hang on, how can that be a good thing? I think what happens is that if you understand how the models are working, you therefore understand why they’ll be giving different answers. That allows the decision-makers to much more clearly understand what’s going on and therefore start to get increasingly comfortable with how well we understand the nature of the risk. Now, if one wanted to be difficult, one could say that underwriting cyber is constantly looking in the rearview mirror, because what you’re doing is just responding to the latest data you received about the latest problem. But we need to be honest with ourselves: most volatile classes have a degree of rearview mirror. Terror is another class like that. If we take, frankly, some of the property CAT space, after certain hurricane events we fundamentally reassessed how our models worked. In 2005, with the Katrina-Rita-Wilma hurricanes, we realised that we needed to fundamentally re-parameterise existing models and create new ones. So I think anyone who says that the need to evolve our models when something happens is a bad thing is missing the point. The ability to rapidly evolve models when we learn new things is a positive.
And yes, cyber is absolutely benefiting from what we’ve learned in the property CAT space with property models over the last few decades.
Juan de Castro: Definitely. And this also goes back to your previous point that more data by itself only creates friction. You need to identify what data has the signal that you’re looking for. And that is the stage we are at with cyber: the industry has some hypotheses of what correlates with losses. So we are still at the stage of let’s capture a lot of data about the risk, and then over the next few years, as losses evolve, we can identify what is noise versus what is proper signal. And then I believe what you are saying is that in that phase of the models maturing, you need to be able to make sure that you can adapt and evolve those models constantly using the latest information. That is what’s missing: whereas, as you said, in property CAT you’ve got roof shape and building materials, et cetera, and you know what correlates with loss experience, in cyber we are potentially one step before that.
Paolo Cuomo: Absolutely. One step before that, but learning rapidly and realising that we can constantly benefit from the tweaking of the models, whether it’s a top-down or a bottom-up model, taking fundamentally different views of the risk and then combining those. The other thing, of course, with cyber is that its man-made nature means that as we start to get our heads around a particular area of challenge, there’s nothing stopping the bad actors identifying other potential avenues. So that ability not just to retrospectively understand what the signals and noise are telling us, but also to try and proactively see any indicators of a changing threat landscape, that’s critical.
Juan de Castro: It’s quite interesting. This is the first episode of Making Risk Flow where we’ve got somebody from a reinsurance broker, and in the end we see the challenges are probably very similar to those of the primary broker and the primary insurer. It is all about having the right data that flows through the value chain. So, if you could travel in time, five years, ten years from now, do you think the solution is getting the right data at the very beginning of the value chain?
Paolo Cuomo: So I guess there are two parts to that. One is in terms of the asset: absolutely, the more you can source that upfront, the better. The question of hazard characteristics, should we say, is something where the right use of third-party data is always, I think frankly forever, going to be a necessary complement. If you ask someone all about the characteristics of their house, they are best positioned to tell you all about that. If you want to think about whether the trees outside their property are a particular height and likely to be susceptible to wildfire, or whatever, that is not necessarily something that the asset owner is best positioned to answer. So it’s that combination of getting the asset information accurate and timely, and then having that flow through the value chain whilst combining it with the third-party data sets, if we want to call them that, around what are the potential risks that would then be brought to bear on the asset. There are interesting challenges. For example, if a primary insurer decides to buy third-party data around the trees in the area, or around the subsidence risk in a certain part of the UK, or whatever it may be, that’s not necessarily data that they can then pass on to their reinsurer. So again, one of the roles for a reinsurance broker is to understand what data has been used by the primary carrier to make their decisions, some of which, the asset data, is then passed through and made available, but some of which will not be shareable simply because the data has been procured by the primary carrier to do something with, and that primary carrier isn’t empowered to pass it on.
So I think we’ll see some data increasingly slide seamlessly through the value chain, but there will still absolutely be a role for each of the players in the value chain in working out what they want to add in terms of third-party data, or how they want to manipulate the data in terms of what models they want to run it against. And to the endless question of whether there are too many steps in the value chain, et cetera, my strong view is that there are the right steps in the value chain. We just need to make sure that each one of those is operating as efficiently as possible. And to the point we’ve made several times now, where you need an experienced human to use the data to make a decision, we shouldn’t try and kid ourselves that we’re getting rid of that human anytime soon. We should be looking at all the auxiliary processes that fit around that expert, making sure they happen as efficiently, as smoothly, and with as little friction as possible.
Juan de Castro: Love it. I think this is a beautiful wrap-up of the episode. You’ve probably summarised how to solve some of the challenges with data and the roles of the different actors in that value chain. So before wrapping up, if you could choose somebody within the insurance industry to go out for dinner with, who would you choose?
Paolo Cuomo: You talked about time travel earlier. I would probably want to go back in time, actually, and sit with those early pioneers of the initial property CAT models. What they were trying to do was push the very limits of the computing power and the data sets they had available to them. And I think it would be fascinating, maybe rather than going to dinner with them, to be a fly on the wall in their brainstorming and problem-solving sessions. Because whilst the use of catastrophe models appears so normal now that none of us could imagine a world where we don’t use them, back when they were first being introduced this was a fundamentally new, bold, different way of thinking, moving to a more numbers-based, mathematical approach. And it would be great to have been a fly on the wall at that time and to see all the naysayers who were pushing back on those ideas. But ultimately we are where we are now, which is great for the industry.
Juan de Castro: Great answer. Paolo, it’s been fantastic to have you join me in this episode.
Paolo Cuomo: Thank you for your time. I was delighted to talk to you.