In this panel episode of Making Risk Flow, host Juan de Castro dives into the transformative power of generative AI in insurance with three industry experts: Allison Thornicroft, VP and Business Solutions Lead at Arch Insurance Group, Neeren Chauhan, Chief Innovation and AI Officer at Tokio Marine, and Yaemish Rughoo, Information Technology Program Director at Everen.
Together, they explore how AI is slashing submission clearance times from days to hours, achieving 95–97% data extraction accuracy, and optimising underwriting workflows. The conversation unpacks actionable frameworks for successful AI adoption, from email intake to underwriter workbench, and emphasises the importance of incremental implementation, human oversight, and third-party data integration. If you're navigating digital transformation in insurance, this episode delivers practical strategies, real-world examples, and lessons from the frontlines of AI-driven operational change.
Listen to the full episode here.
Allison Thornicroft: We are not going for 70, 80% accuracy. We are going for 95 to 97%, and we only put high quality, reliable results in production because we don't wanna deal with, should I trust this data? That is not something we want anyone, particularly the underwriter, wondering about. Working with Cytora has been fantastic. If you didn't remind me, Juan, some of your employees would feel like mine. So, I mean, it's really been very collaborative in that sense.
Juan de Castro: Hello. My name is Juan de Castro, and you're listening to Making Risk Flow. Every episode, I sit down with my industry-leading guests to demystify digital risk flows, share practical knowledge, and help you use them to unlock scalability in commercial insurance. For those that don't know me, I'm Juan. I'm the chief operating officer at Cytora. I'm joined by Allison, Yaemish, and Neeren. Do you wanna start with a quick introduction of yourself?
Allison Thornicroft: Yeah. Absolutely. Thanks for having us. Allison Thornicroft. I work at Arch Insurance. I lead domain product management there over our customer and intake space. So that'd be submission intake clearance, producer management, broker management systems, etcetera. I've worked at Arch for about fifteen months, but I've always been in the P&C space, prior to that at Liberty Mutual, and Hanover Insurance before that. So good to be here. I live in Austin, Texas also.
Yaemish Rughoo: Hi, everyone. My name is Yaemish Rughoo. I'm currently program director at Everen. I've been there for one month. Prior to that, I was twenty years at Beazley in various roles. The last role I had was head of technology for Beazley Digital. Very happy to be here. I look forward to the discussion.
Neeren Chauhan: My name is Neeren Chauhan. I'm with Tokio Marine. I lead innovation and AI for Tokio Marine in the US for one of the companies, and I have spent close to a decade and a half in the industry, ranging from personal to commercial to specialty lines. So I know a little bit about a lot of things, enough to be dangerous. Some of which I'll share and some of which I'm learning already from the conversation. So happy to be here.
Juan de Castro: Thank you. So let's start with, I mean, the introduction was all about Gen AI. So let's start with where each of your organisations is in terms of Gen AI adoption to set the scene, and then we'll go into, like, specific tips of what's worked well and what hasn't. Do you wanna start with this one, Neeren?
Neeren Chauhan: October 2022, it seems like a long time ago, but that's when OpenAI came out with ChatGPT. So it's only been a couple of years. And what has happened in the industry, the number one theme that I'll point out, is that there has been a lot of excitement. People are really excited about what's interesting. But the challenge at the same time, over the last couple of years, has been to figure out how you discern useful from interesting. So people get excited about different use cases, but then they realise that it might take a lot of investment. You might need to clean up a bunch of data before you start using AI. There are people challenges. So all of that is creating a spectrum on which people are all the way from having tried a few things to some people jumping in right now. In general, my sense is that over the next six months, there will be more and more adoption. The pace will continue to increase, but I feel like there will be some companies who will completely miss the generative AI wave, and they will jump on to agentic. There is more excitement about that, and those use cases, as was mentioned in the previous conversation, are somewhat easier to implement. They're smaller in scope sometimes. Now they have their own challenges that they come with, but in general, I'm very optimistic about agentic AI.
Yaemish Rughoo: We're not using any AI tools at the moment, and that's what's attracted me to go there. It really is greenfield, and there's a lot of opportunity there. So I'm very excited about that. At Beazley, though, Cytora were one of the first partners that we engaged with who were using Gen AI, and we've been partnering with you now for about three, four years. We've developed the clearance process, which is now automated, so we can now use Cytora to ingest submission data and clear all submissions that come through. We're now working on triaging; the next step after that is obviously decision-ready or quote-ready data. Other areas within Beazley where we're using a bit of AI is obviously on the claims side. There's lots of opportunity there. But also what I'm quite interested in is really using it within technology. So I think we always talk about business use cases, but I also feel like within technology, we need to be adopting some of those AI tools and showing some of our business stakeholders the benefit of those. So I'm really looking forward to doing that at Everen.
Allison Thornicroft: So we've been working with Cytora for about a year at Arch Insurance, and we've had kind of multiple tracks moving forward at the same time. So we are live with clearance intake, data extraction, and document classification. We are also live with several use cases around what we describe as full extraction, so extracting from loss runs or schedules of certain kinds. And then I think most recently, we're gonna be going live in our claims space, probably around the triage space. I oversee the underwriting side, not the claims side, so look to Juan for confirmation there.
Juan de Castro: Yeah. That is true. What are your priorities? When you're thinking about deploying technology, like, what are you trying to solve for, and how is Gen AI supporting that?
Allison Thornicroft: So many of our underwriter leaders have told us, and we've heard it as we've gone out into the market and spoken with our brokers, speed to quote matters, and we need to hurry up because we're not meeting expectations or competitors are beating us. What we know from looking at our data and analytics at Arch is that if we're first to quote across multiple markets, we have a 50% higher rate of binding that business. So time matters. And quite frankly, the clearance or intake step of the process is low value in terms of what the broker or the underwriter needs to get something out to the market. So we are looking to reduce what used to be clearance in days to clearance in hours, and Cytora is a key part of how we're driving towards that outcome.
Yaemish Rughoo: We are very similar, well, Beazley were very similar, I should say. We spent a couple of years really investing in distributing our product via APIs, and that is, like, speed. Right? That's what brokers want, price, speed. But what we've realised and what we've seen in those same two years is our email submissions haven't shrunk. They've kind of plateaued. So we're still getting them, and we've almost doubled our submission intake across APIs and emails. So what we're trying to do is similar to what you're doing, Allison, which is we need to process those email submissions as quickly as possible, get them through clearance, triage them, and get them into a position to quote. I think, ultimately, we wanna auto quote. That's ultimately where we wanna get to, but definitely we wanna just put it in front of an underwriter so that they can just review certain exceptions and just go, right, click a button, and that's going out the door within a couple of hours. I think our clearance process, just as a side note, is probably about four hours at the moment. So we're quite good on that point, but for triage and for decision-ready submissions, I think that's what we wanna get to.
Neeren Chauhan: Overall, my observation has been, in the last eighteen months or so, that insurers, especially carriers, are more focused on the use cases that lead to productivity gains. So doing things faster, cheaper, that's been the focus. The biggest excitement for me personally, when you think about AI and technology and insurance, is that you get the most accurate price for the risk in real time. Like, there's a lot of static models, models that look back and evaluate the risk. So there is a ton of opportunity there, but I don't see a lot of work happening in those domains just yet. And part of the reason is no one wants to be the first in taking some of that risk. So I see a lot of opportunities in document ingestion, document processing, underwriter and claims support, a ton of opportunities in legal, a ton of opportunities in actuarial analysis. We don't do a lot of it in our world today, but I was at Allstate, and call centers are probably gonna go out of business at some point. So there is a ton of opportunity in some of those domains, but still some gap in the core of the insurance business. And we are starting to use Cytora for some of those productivity-related things as well, but my hope is that we continue to move towards where the technology can really benefit the insured ultimately, and there is more work to be done there.
Juan de Castro: Can you describe what the architecture and the workflow look like for those use cases? I mean, you're less familiar with the claims side of things, but at least on the underwriting side, what does the workflow look like, and what are the different systems involved?
Allison Thornicroft: Yeah. Absolutely. I'm happy to walk through that, and I'm pretty sure claims is copying what we built on the underwriting side. So pretty similar. So, essentially, most of our submissions today are still email based. We haven't leaned too hard into, like, portals and such. So the vast majority of our flow is coming in that way. We're seeing north of 250,000 submissions a year. And by year end, we will have over 90% of that volume flowing through a system that includes the Cytora products. So, essentially, what we do is we set up through our email box. We assign on the back end, basically, an identifier so that we can then transmit the submission email and the ID to Cytora. They then send back their extraction schema to us as well as our document classification tags. From there, we start to kick off some parallel processes. So if needed, we're going to human-in-the-loop review. Although recently, I'll share, when we first went live with some of these models, we turned what I refer to as the governor on for 100% of submissions, regardless of how confident the Cytora model was. We've now lifted that. We're seeing 70% time savings associated with that effort, and we're seeing confidence or precision levels of, like, 95-97%. So really impressive stuff. So, yeah, we would go human in the loop, then we would complete our conflict checks that we do as part of intake. We'll also, in parallel, be auto filing all the documents into our document repository based on what the classification type was. And then we've also got a third piece kicking off where we'd have an orchestrator basically listening for the emitting of those first two workflow events and then saying, oh, okay. Here, we've got a loss run. I wanna go extract the loss run and then serve that up in an underwriter-ready workbench UI. So that's how we're building things.
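To make that orchestration a bit more concrete for readers, here is a minimal sketch of the flow Allison describes: intake, extraction, a confidence "governor", conflict checks, auto-filing, and a workbench hand-off. Every function, class, and threshold in it is a hypothetical placeholder for illustration, not Arch's or Cytora's actual API.

```python
# Minimal, illustrative sketch of the email-intake flow described above.
# Every name here is a hypothetical placeholder, not Arch's or Cytora's actual API.
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.95  # the "governor": below this, route to human review


@dataclass
class Document:
    name: str
    classification: str              # e.g. "application_form", "loss_run"


@dataclass
class ExtractionResult:
    insured_name: str
    confidence: float                # model confidence for the extracted fields
    documents: list[Document] = field(default_factory=list)


def extract(submission_id: str, raw_email: bytes) -> ExtractionResult:
    """Stand-in for the call that sends the email plus its ID to the extraction service."""
    return ExtractionResult(insured_name="Acme Corp", confidence=0.96,
                            documents=[Document("2024_losses.pdf", "loss_run")])


def process_submission(submission_id: str, raw_email: bytes) -> None:
    # 1. Transmit the submission email and its identifier; get back the
    #    extraction schema and a classification tag per document.
    result = extract(submission_id, raw_email)

    # 2. Low-confidence results go to a human-in-the-loop queue; high-confidence
    #    results flow straight through.
    if result.confidence < CONFIDENCE_THRESHOLD:
        print(f"{submission_id}: queued for human review")
        return

    # 3. Conflict checks and auto-filing of each document under its
    #    classification type (run in parallel in the real system).
    print(f"{submission_id}: running conflict checks for {result.insured_name}")
    for doc in result.documents:
        print(f"{submission_id}: filing {doc.name} as {doc.classification}")

    # 4. An orchestrator listens for the events above; when it sees a loss run,
    #    it triggers full extraction and publishes it to the underwriter workbench.
    for doc in result.documents:
        if doc.classification == "loss_run":
            print(f"{submission_id}: loss run extracted and sent to the workbench UI")


process_submission("SUB-12345", b"raw email bytes")
```

In the real system the conflict checks, auto-filing, and loss-run extraction run as parallel, event-driven steps; the sketch runs them sequentially only to keep the example readable.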
Juan de Castro: Great. And the workbench, is this an in-house built one, or is it a commercial one?
Allison Thornicroft: It's built on a commercial platform.
Yaemish Rughoo: I was gonna say that's pretty much the same process that we've built at Beazley, apart from the loss runs. I just wanted to add a little bit more, just to kinda help frame the problem. Right? So at Beazley, I think we receive our own application form only 13% of the time. So that means we're receiving competitor application forms the rest of the time. So training the model for our competitors' application forms, making sure that we can make adjustments and train the model, that's where the real value is. You'd expect training on your own application form. Sure. You can do that all day long because you know what the data is. But doing it on the competitors' application forms, I think that's where the true value comes from.
Juan de Castro: Did you wanna explain that? Because when people hear training, they immediately think it's about training templates or training forms, almost like the old way of doing this. What does training a platform like Cytora, for example, in a cyber context look like?
Yaemish Rughoo: The cyber intake form has changed completely, it has been revolutionised, over the last couple of years. The amount of data that we ask for on the cyber form is incredible. So for us, it's about looking at data from an application form and understanding what that value is. We look at accuracy and completeness of data when we're looking at an application form, so we're looking at those two scores. So when we get new application forms or versions change, it's being able to tell the model, well, this is now where you need to go look for that data source. And then the model will pick that up and retrain itself, and that's where you get the time saving.
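To ground the two scores Yaemish mentions, here is a rough sketch of how completeness and accuracy could be computed for an extracted form. The field names, the comparison against a human-reviewed record, and the scoring itself are assumptions made for illustration, not Beazley's actual method.

```python
# Illustrative sketch of completeness and accuracy scoring for extracted form data.
# Field names and the human-reviewed "gold" record are assumptions for the example.

EXPECTED_FIELDS = ["insured_name", "revenue", "employee_count", "mfa_remote_access"]


def completeness(extracted: dict) -> float:
    """Share of the expected schema fields that the model actually populated."""
    filled = [f for f in EXPECTED_FIELDS if extracted.get(f) not in (None, "")]
    return len(filled) / len(EXPECTED_FIELDS)


def accuracy(extracted: dict, reviewed: dict) -> float:
    """Share of the populated fields that agree with a human-reviewed record."""
    populated = [f for f in EXPECTED_FIELDS if extracted.get(f) not in (None, "")]
    if not populated:
        return 0.0
    correct = [f for f in populated if extracted[f] == reviewed.get(f)]
    return len(correct) / len(populated)


extracted = {"insured_name": "Acme Corp", "revenue": 25_000_000, "mfa_remote_access": True}
reviewed = {"insured_name": "Acme Corp", "revenue": 25_000_000,
            "employee_count": 140, "mfa_remote_access": True}

print(f"completeness: {completeness(extracted):.0%}")    # 75%: one expected field missing
print(f"accuracy: {accuracy(extracted, reviewed):.0%}")  # 100%: all populated fields match
```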
Juan de Castro: Yeah. I think this is quite a specific challenge to cyber, which is, it's not about, like, what data you get from each application form. It's that the risk controls differ by application form. So the typical one is, like, does the client use multi-factor authentication for remote access? Right? And then there would be another application form that just asks a slightly different question, and it does require underwriting judgment to decide: are you happy to take this version of the question as the answer to the schema point you want? So it's not about training templates. It's more about training the interpretation of the questions, because it's a very interesting challenge which is very specific to cyber, because it's still quite an emerging line of business. Neeren, you're earlier in the process, but are there any thoughts on how you're thinking about this?
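As a toy illustration of training the interpretation of questions rather than templates, the sketch below maps differently worded MFA questions onto a single canonical schema point and flags weak matches for underwriter judgment. The question variants, the lexical similarity stand-in, and the threshold are all invented for the example; a production system would rely on semantic matching rather than string comparison.

```python
# Toy illustration: mapping differently worded form questions onto one canonical
# schema point. Variants, similarity measure, and threshold are invented here;
# a production system would use semantic rather than lexical matching.
from difflib import SequenceMatcher

CANONICAL = "Does the insured require multi-factor authentication for remote access?"
REVIEW_THRESHOLD = 0.6   # below this, ask an underwriter whether the variant counts


def similarity(a: str, b: str) -> float:
    """Crude lexical similarity used here as a stand-in for semantic matching."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def map_question(variant: str) -> dict:
    score = similarity(variant, CANONICAL)
    return {
        "canonical_field": "mfa_remote_access",
        "form_question": variant,
        "match_score": round(score, 2),
        "needs_underwriter_judgment": score < REVIEW_THRESHOLD,
    }


for question in [
    "Is MFA enforced for all remote and VPN access?",
    "Do employees use two-factor authentication when working remotely?",
    "Is email scanned for malicious attachments?",   # a different risk control entirely
]:
    print(map_question(question))
```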
Neeren Chauhan: No, nothing to add there. I think the one thing that I'd say, which kinda connects to my previous point, is I'm really excited about the third-party data sources that you connect with as well. Initially, we won't be using that, but, again, I think that's where a lot of impact is also gonna come from: to give me the risk insights and help me accurately assess the risk and price it, in addition to automating everything else. And the other thing I'd say is what I find fascinating is that a lot of this change is happening because sometimes the push is coming from the brokers and agents, who are sometimes ahead in their own adoption of AI and automation tools. And that's, I think, a good thing for carriers too, because carriers tend to sometimes get lost in their own internal debates and move much slower than they should. But I think this whole broker piece, so underwriting, I think, is a very interesting domain that will continue to evolve. And my hope is claims will catch up to that as well.
Juan de Castro: Lessons learned from the work you've done so far, what would you have done differently? Did it deliver on your expectations?
Neeren Chauhan: There's no silver bullet. Everyone that I talk to in the C-suite is debating endlessly to figure out that one solution that would change the world. We should not wait for that. So my biggest learning is: incrementally start moving forward, take the first step, and then you'll figure out the next.
Yaemish Rughoo: Well, I'm a big believer in the whole value realisation framework. Having a strategy, having objectives, having key results, making sure that you've got a team set up for that kind of framework, is super important for a couple of things. Right? One, you empower that delivery team to come up with those key results. Those key results are small, like, small wins that you can start to measure and learn from and experience rather than waiting for the whole thing to be built. I think the second thing is when you build Gen AI, when you build these kinds of tools, what it does is it puts a focus on other areas. What we found at Beazley and Beazley Digital was that we then started to ask questions. Okay. We can get these submissions via APIs. We can start to automate the clearance process. But is our clearance engine right? Do we need to fine-tune that now because we seem to be getting a lot of referrals through clearance, for example? Or do we need to actually look at the product itself? Which is a big question that we always have with the underwriters, which is, well, is the product actually geared for this end of the market, which is the SME market? So a couple of takeaways for me, if we were gonna do things again, is we need to look at everything, not just the submission intake process, but all of the other functions around it as well.
Juan de Castro: And it's quite interesting on the value realisation framework. Both Arch and Beazley are two great examples of this; I'm less familiar with how you work internally at Tokio Marine. But at Arch and Beazley, you've got, like, very clear OKRs. Actually, it's not just on a piece of paper. It's like you live and breathe them. And every time you have a conversation, you bring up the OKRs there. Every initiative is aligned with your objectives, which is very cool. Allison?
Allison Thornicroft: Yeah. Absolutely. So we've been on our journey with Cytora for just under a year from contract signing. I think from when we finished the contract and kinda picked up the work to having our first release into the intake space, we did it in three months. We followed up in 2025 with several releases since then. Overall, it's been a fantastic experience. We are not going for 70, 80% accuracy. We are going for 95 to 97%, and we only put high quality, reliable results in production because we don't wanna deal with, should I trust this data? That is not something we want anyone, particularly the underwriter, wondering about. Working with Cytora has been fantastic. If you didn't remind me, Juan, some of your employees would feel like mine. So, I mean, it's really been very collaborative in that sense. In terms of anything we would have done differently, when I thought about this question, I said, maybe we would have had our future-state business process workflows more refined and planned out. But at the same time, the collaboration with the Cytora team, really helping us understand the power of the product, informed a lot of that design as well. So, unfortunately, Cytora had to sit through some of our internal sausage making, but they've been good sports about it. It's been a fantastic experience, and we're excited to expand the use cases with you in the future.
Juan de Castro: Sounds like we have to wrap up. Thank you so much to the three of you. Making Risk Flow is brought to you by Cytora. If you enjoy this podcast, consider subscribing to Making Risk Flow on Apple Podcasts, Spotify, or wherever you get your podcasts so you never miss an episode. To find out more about Cytora, visit cytora.com. Thanks for joining me. See you next time.