This is a shortened version of the Making Risk Flow podcast, episode 3. You can listen to the full episode here.
In this episode, Juan and his guest Dan McNally explore what data insurers need to drive growth and manage the cost of data.
Juan: Today I’m joined by Dan McNally, who leads partnerships and business development at Cytora. Would you like to give a brief introduction of yourself?
Dan: Thanks, Juan. I’ve got 20 years of experience in the insurance sector: 15 years on the insurance company side of things, from running underwriting teams and operations teams, and about five years in the broker market – product development, insurer service models, running an MGA, a real mix of things.
Juan: Fantastic, thank you. Perhaps we should start with an overview of the different types of data used in the underwriting process?
Dan: Absolutely. I think it’s great that we’re starting with data, because data really is the raw material that helps drive the insurance business. It’s a logical place to start: in terms of the key types of data that we see, which ones are core to insurance companies achieving what they need to in the current market and going forward?
The first is the data they get from brokers, which in the commercial lines market is still absolutely key. The insights that brokers are able to provide are really important. We’ll talk about some of the benefits and limitations of that.
The second one would be the insurer’s own data. The data insurance companies have amassed over a period of time – they have a huge amount of data. The challenge in the past really has been how they leveraged that.
The third one – and it’s not often you use the term ‘exciting’ to describe anything in the insurance market – is third-party data. I think people are really excited about it right now. We can talk about some of the positives of third-party data. Again, it has its limitations. It’s not going to do everything for you.
And so those are probably the key data types.
Juan: Yes, we often talk about those three main sources of data: submission data, internal insurer data and external data. All three are required to really enable digital risk flows. But why do you need all three? Why isn’t it enough to use whatever data the broker has filled out in a proposal presentation?
Dan: Yes, certainly. I think some brokers would probably ask that question so let’s start with them.
As I mentioned earlier on, I’ve been on both sides of the fence on this. I’ve been on the broker side, I’ve been on the insurance side. So I can really understand the different perspectives on this specific bit. So brokers have a number of challenges in terms of providing information to insurers.
The first challenge as a broker is that you have to deal with multiple insurers, because you’re trying to find the very best insurance solution for your customer in the market. As a result, you could be sending your presentations to 20 or 25 different insurers. I’ve seen broking houses with 600 different agencies across the insurance market. They’re not all going to suit every customer, but 20 to 25 markets for a specific customer is perfectly reasonable. Each of the underwriters in each of those companies will have some key things that they’re all going to ask: what’s the customer’s name, what’s their address, what’s their trade. Those are going to be very consistent, but as you expand out of that, insurance companies have their own questions they want to ask, their own slants on the data they want to get. So for a broker that suddenly starts to build up, and they have to decide what’s meaningful and critical to put into the submission.
And they’re making sure that they cover their responsibility in terms of things like the material facts, and that in the event of a claim, the customer has got the appropriate cover and the information has been shared with the underwriter appropriately. But they can’t go to the nth degree of providing all the information for all of those underwriters. It’s not tenable or economic.
I think the other thing that we can sometimes forget is that, speaking from an underwriter’s perspective, they will look at a policy and think that all of this information is key and want the broker to be able to provide it. However, if a broker is dealing with a £2,000–3,000 premium at a 20% commission, maybe they’re making £400 commission on that. Over the course of 12 months, as any organisation knows, you can burn through £400 worth of activity servicing a customer at multiple meetings very, very quickly.
So all of that puts pressure on the time the broker has to amass that information and provide it. The other pressure on them is that quite often customers see buying insurance as a grudge purchase. It’s not something they want to spend a whole day answering questions about; what they want to spend their time on is running the business. That places a couple of pressures on the broker.
First of all, there might be other things that they’d like to ask, but they don’t feel they’ve got the time to ask them. And the second bit is that brokers are brilliant at keeping their customers. Quite often they’ve got in excess of 95% retention with their customers, and they keep those customers for years. On the one hand, that gives them an opportunity to get to know their customers really well. But on the other, as that relationship develops – I’m sure we’re all guilty of it in life with the relationships we have – the longer you’ve known somebody, the harder it sometimes is to ask the questions that just check whether things are as they were three or four years ago, or to really investigate where that business or customer is going and where that risk is going over the next three or four years.
So all of that puts pressure on what the broker can bring together. And then there’s one final pressure – well, one final driver – which people quite often forget: the broker’s primary customer is the policyholder. The broker is there first and foremost to try and present their customer as well as possible. So yes, they’ll present the material facts, but they will also be selective to some degree in how they present that information. All of that creates a situation where the broker gives some really important and valuable insight to the insurance company, but it will be limited.
It will sometimes be imperfect. Sometimes it will be outdated, and there’ll be a little bit of bias in it. So that’s why, as you said at the start of this question, Juan, that one type of data is never going to be the total answer for an insurer.
Juan: Yes, and there’s also some natural tension there. As you said, the broker needs to present the same information to 20 or 25 insurers at the same time. Each of those insurers wants to create a competitive advantage by having better information on the risk than the competitors, right? So almost by definition, the information coming from the broker gets commoditized. This is also something we see in our clients – how do you get better, more granular data on the client to create that competitive advantage and, in the end, do better risk selection? So you could think, ‘why doesn’t the carrier just get in touch with the insured and capture that information themselves?’
Dan: It’s an important question for us to ask. I think there are a couple of factors there.
So one is that a lot of the time a broker will feel quite guarded, in that they have primacy with the customer, with the policyholder, and there is a significant proportion of brokers that will feel uncomfortable with insurers being in contact with the customer, because it takes away some of their power in the situation.
For those brokers that feel more confident about their customers having engagement with insurers, it can be a really powerful way of getting insurers interested and excited about the customer and putting the very best terms and products out for them.
And there’s an economic factor. Often insurance companies would have sent an underwriter or a risk surveyor to go and have a look at the risk. Every time they do that, it’s an added cost element. That’s made it difficult for insurers to do it for anything other than the medium to larger corporate clients, or potentially some of those higher-risk clients where they might want to have a look at some specifics, because the concern isn’t necessarily about the volume of premium they’re going to get. It might be about the claims potential.
That’s why they start thinking about third-party data – are there other sources of data that give a direct insight into that customer without having to be in the same physical locality with them?
Juan: Yes, let’s talk about that data. There are a lot of questions in the industry around it. What kind of data is available? Is it reliable? What can it be used for? To what extent can some of these challenges be solved with third-party data, and how can insurers create that competitive advantage of having a better representation of the risk through external data?
Dan: There are a number of factors that we can reflect on as we think about the use of third party data. Over the last couple of years or the last few years, in some cases, insurers have become confident using third-party data as a core part of their risk review.
So it might be things for the early stage of looking at risks, like clearance. They might be looking at the location of the business, where those people are trading.
As we move into triage, one thing that we see a lot of insurers thinking about is how they can get a better understanding of the trades their customer is involved in, and the nuances of those trades. It’s not good enough to just know that it’s a manufacturer. Sometimes they want to get beyond the fact that it’s a plastics manufacturer; they want to get into the types of plastics manufacturing the customer is involved in. There are some really great forms of data to do that. You can start to utilize some of the publicly held data – things such as customer websites, company LinkedIn profiles or different publicly available streams of data. All of that can help not only in the triage process but as you develop further into the underwriting process.
We also see core data, e.g. flood data, being used early in the triage process, so quite often people will talk to direct providers like JBA. It’s interesting that what we sometimes see insurance companies do is start with a third-party data source and then layer onto that.
Then you get the more innovative stuff. When people are trying to understand the quality of risk, the culture of the customer, their approach to health and safety in their business, their attitudes towards their customers, those are different data types that people are starting to actively use now as well.
Now, they can start to take some of these data aspects and think about rating and pricing and risk appetite in a different way. Could they start applying these to really create that differential advantage you were talking about earlier? That’s a really interesting area that we’ve seen people try to do more in.
Juan: I think that’s a great explanation, because often when people think about third-party data, two obvious categories come to mind: the commercially available ones, which you mentioned, e.g. JBA flood data, or the likes of Companies House, which are the most straightforward sources of data.
I think the third area that we discuss is how you properly understand the activity of a company. It’s not something that you just pull from a third-party source. It’s something that needs to be pulled from a number of sources, analysed and then inferred.
Earlier you mentioned three characteristics of external data: it needs to be reliable, up-to-date and cost-effective. Tell me a bit more about that last bit, the cost-effectiveness of third-party data.
Dan: I think there are a couple of key aspects to it. Some data sources can be really powerful, but they can be expensive, and if you’re calling them every time you have a risk, it’s not economic for your business.
There are two ways for insurers to be able to manage that cost:
The first is to manage the application of the data – applying it to the risks where it’s most pertinent, so the data source really is giving insight into that particular product, class of business or territory. We see quite a lot of insurers in the early days consuming data but not having a lot of control around that aspect.
And then the second bit, which links back to what I was talking about a few minutes ago, there are different stages of a risk review.
So if we were looking at a risk on a piece of paper in front of us, in an old school way, we would do a two-second flick of the document and think whether this is the kind of risk we want to write, and then we’d probably spend a few more minutes thinking if it has a real potential and we would develop it.
And then you’d probably do your full underwriting process: sit down, go through the document and get into the detail. The use of data is very similar to that. You want to think about where you’re going to apply your data most effectively, both in terms of the decisions you’re making and in terms of your efficiency as a business. When do you want your underwriters to have to pay attention to that information? When is it most useful to bring it in to drive the decisions that you’re making, but also when can you bring it in at the most cost-effective time? Is it something that you absolutely need at the start of your process to make the decision, or is it something you want to consider further on?

A lot of those expensive data types can probably be driven to the back of your process, the more detailed underwriting stage. By then, you should have done your clearance, your triage and your initial underwriting job, and allocated the case to the right underwriters. You’re only applying expensive data sources when it’s a case you’re really serious about, where you’re investing the underwriter’s time and your business’s money in that data source.
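The staged approach Dan describes – near-free clearance and triage first, paid data calls only for the cases you’re serious about – can be sketched in a few lines. This is a minimal illustration only: the `Submission` shape, the appetite rules and the flood-data stub are all hypothetical, not any real insurer’s workflow or a real data provider’s API.

```python
# Sketch of a staged risk flow: cheap checks run first, and the
# expensive third-party lookup is only made for submissions that
# survive the earlier stages. All names here are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Submission:
    trade: str
    postcode: str
    premium_estimate: float
    enrichment: dict = field(default_factory=dict)


# Hypothetical appetite rule used by the clearance stage.
IN_APPETITE_TRADES = {"plastics manufacturer", "office", "retail"}


def clearance(sub: Submission) -> bool:
    """Stage 1: near-free sanity check - is this a trade we write at all?"""
    return sub.trade in IN_APPETITE_TRADES


def triage(sub: Submission) -> bool:
    """Stage 2: cheap filter - is the premium worth an underwriter's time?"""
    return sub.premium_estimate >= 1000


def fetch_flood_score(postcode: str) -> float:
    """Stage 3: stands in for a paid third-party lookup (e.g. flood data).
    Only called once a submission is worth investing in."""
    return 0.2  # stub value; a real call would cost money per request


def process(submissions: list[Submission]) -> list[Submission]:
    """Run the staged flow, spending on enrichment only for survivors."""
    quoted = []
    for sub in submissions:
        if not clearance(sub) or not triage(sub):
            continue  # declined cheaply, no paid data consumed
        sub.enrichment["flood_score"] = fetch_flood_score(sub.postcode)
        quoted.append(sub)
    return quoted
```

The ordering is the economic point: `fetch_flood_score` stands in for a per-call paid lookup, and the earlier stages make sure it only runs for submissions worth the spend.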
Juan: I think this is a great example of why we talk about digital risk flows, because it really is a workflow. It’s not just about adding as much data as you can; it’s about adding the right data at the right time.
We’ve talked quite a bit about external data, but you mentioned at the beginning that one of the key sources of data was insurers’ own data. Why is that data also required in digital risk processing?
Dan: A lot of insurance companies have been established for quite a long time, and they’ve either seen the customer they’re looking at before, or they’ve seen other customers that look like that customer. All of that is intellectual property that needs leveraging. Insurance companies have had a wealth of data for years, but what they’ve struggled to do is access it, utilize it effectively and take action as a result. If you went into an underwriting business 15 or 20 years ago, they had manual files, and quite often those manual files would be sent off to storage and nobody knew what was in them.
Move forward, and over the last five to 10 years we’ve started seeing more of that information attached as electronic files on policy systems. But again, insurance companies have really struggled to interrogate it and utilize it. Now, what we see is insurers quite often building things like a data lake in their business, where they’re starting to store some of that data. Still, there’s a gap between the information in the lake and the decisions that their underwriters are taking at the front end of their business today.
The second bit might be: let’s take that manufacturing risk, and let’s say it’s a plastics manufacturer. If they have a hundred others that they currently underwrite, that’s an opportunity to learn from claims patterns, from typical risk size, and from the various attributes you might want to look at in that specific segment of risks.
They may also want to take that knowledge and learn what gives a competitive advantage in the marketplace. You might be able to look at elements of the pricing, and think about what the rating was and where you won those risks.
Juan: Yes, in the end it all comes down to insurers being able to triage submissions as early in the process as possible, so that underwriters are not wasting time on opportunities they don’t want to quote.
Dan: I think that’s absolutely right, and there’s a different aspect to it as well. Quite often, if we put ourselves in the shoes of an underwriter – let’s say we’re an underwriter sat in the Manchester market, working for RSA or Zurich – we will have a good sense of where we’re going to be competitive, what makes a good risk and what aspects we should look at. It’s probably driven by the cases we’ve looked at in the past on a personal level. And if we only apply that to our triage or to our later underwriting, then that’s very limited in scope. You might be an expert in your niche, but you’re still probably pretty limited unless you’re able to harness the knowledge, the understanding and the performance of your organization across the UK, or potentially wider, into different territories. That’s really powerful. That’s helping your underwriters almost start to use a collective conscience in their decision-making.
Juan: Yes that’s really helpful.
So we’ve talked about submission data, external data and insurers’ internal data. How do you pull all of those together at different points of the digital risk flow? What is the value of having all this data integrated into, let’s say, a new business submission process?
Dan: There are some key things that we’re trying to achieve through this.
Things like faster customer responses are great for an insurance organisation. Most insurers I talk to would love to be able to respond quickly to their customers, but what we typically see, from a broker’s perspective, is that they send a risk to an underwriter in the local branch and then it kind of disappears.
So what’s typically happening behind the scenes is that it’s being sent to an offshore team or an administration team to start collating some of that information. They’re keying in some of that information, doing some manual checks, looking at some data, and it might get sent back a day or two later.
In some cases, we speak to insurers where it’s five, six or seven days later before the underwriters really get a chance to do the proper triage on it, decide whether they can quote it, and get back to the broker. All that time the insurer is losing the initiative. And we know that the earlier they get back, the more control of the negotiation they have, the more engaged the broker is, and the higher the perception of that insurer’s appetite will be – all of that helps drive revenue.
Additionally, they won’t have to ask the broker any of the follow-up questions because they’ve got the other data sources, which is really powerful with the broker, because it inspires trust and engagement and collaboration. The underwriters don’t have to spend their time having to collect information, having to chase, or having to re-key to a different system. That time really is spent in exercising their judgment, creating relationships, and driving the conversion with the brokers. So that changes the dynamics, the level of performance, the culture, and the tone within the business. I think that’s really powerful.
And then, the last bit for me is the quality of the decision-making that’s happening within the insurance business is getting better and it’s getting better at a couple of levels.
It’s going to get better at a transactional level because the right risks are being brought in, in terms of either driving revenue against the expense ratio or driving the longer-term sustainability of the book and the mix of the portfolio. All of those things can be driven at an individual risk level.
But those data points brought together can also be used in different ways, e.g. to proactively use data beyond a single-risk level and start thinking at more of a macro level – say, if they want to grow with a broker in PI, D&O or commercial combined business.
If a broker can give the names and a small number of data points about those customers, the insurer may be able to tell them what their appetite is, and maybe even pre-underwrite some of that. That can really change the dynamic, and I think the opportunity to do that is in itself transformational for insurance businesses.
Juan: I think that’s a fantastic summary; it’s exactly what we see in the high-performing insurers that are deploying digital risk flows. In the end, what they’re doing is having underwriters focus only on the submissions that they are going to be able to quote.
You can automatically filter out the out-of-appetite or low-value ones. It’s a much more effective and efficient process because underwriters already have all the information available when they start underwriting. They can get to the broker faster and create that engagement with the broker early on, which drives conversion and drives capacity.
As you said, having those business development conversations with brokers is what drives superior growth and lower expense ratio.
Dan: The one other fact that we probably haven’t mentioned in terms of data is that traditionally insurance companies have captured data on the risks they’ve taken forward and tried to underwrite.
But how do you build a data picture of the business that you’re not writing today? Because that might be the business that you want to write in two years’ time. By getting that data you can convert more of the business today, but you can also build a picture of the marketplace and think about where you want to be tomorrow. And that takes it to the next level in terms of future transformation.
Juan: Absolutely. Thanks Dan!