24.06.2025

Practitioner’s Playbook: The Blueprint for Risk Digitization POCs

In this episode, Juan de Castro is joined by Rich Lewis, Cytora's Sales Director, and Zaheer Hooda, Head of North America, for a deep dive into what makes proof-of-concept (POC) initiatives in risk digitization succeed—or fail. Drawing on firsthand experience from working with leading carriers, they break down five essential capabilities insurers need to get right when implementing digitization initiatives—from extraction accuracy and full-spectrum intake handling, to scalable deployment and human-in-the-loop exception management. They also provide a practical, inside look at how insurers structure effective proof-of-concept processes, including live workshops, data preparation, success metrics, and how to align POC design with measurable business outcomes. Whether you're a carrier planning a digitization journey or a leader seeking to optimize underwriting workflows, this episode offers tactical guidance to ensure your technology investments deliver meaningful impact.

Listen to the full episode: here

Juan de Castro: Hello, my name is Juan de Castro and you're listening to Making Risk Flow. Every episode, I sit down with my industry-leading guests to demystify digital risk flows, share practical knowledge, and help you use them to unlock scalability in commercial insurance. Today I'm joined by two of my colleagues, Richard Lewis and Zaheer Hooda. So let's start with an introduction of both of them. Rich, do you want to go first?

Richard Lewis: Yeah, thanks, Juan. My name is Richard Lewis. I lead our EMEA sales function. I've been at Cytora for around six, seven months now, but my background has been in insurance-specific applications for the last 14 years or so. I joined from Salesforce, where I was responsible for their insurance practice, focusing on their middle-office and back-office applications: underwriter workbench, policy admin, quote/rate/apply, FNOL, claims, billing, etc. And prior to that, I was working for Oracle in their marketing application space, again focusing on insurance as well as financial services. So, great to be here.

Juan de Castro: Thank you, Rich. Zaheer, do you want to introduce yourself?

Zaheer Hooda: Yeah, thanks for having us. Excited to chat today. I joined Cytora a few years ago to head North America sales and operations. My specific role is working closely with U.S. carriers to help them achieve their risk digitisation objectives, however they want to pursue them. Prior to Cytora, I was with Hiscox U.S., the commercial insurance carrier, where I led their data analytics and automation. And prior to that, I was at McKinsey for several years, focused on the intersection of technology and service-based organisations.

Juan de Castro: Thank you both. So as I said earlier, we're going to focus today on the process insurers go through when they start thinking about digitising their core workflows: how they move from objectives like improving productivity, boosting broker service, and driving consistent decision-making across the organisation, to actually delivering business impact. And obviously, one of the first steps in that process is considering what technology they need to deploy to achieve those objectives. So the first question for you, Zaheer, is: as insurers and brokers are thinking about embarking on these risk digitisation initiatives, what areas do you think are important for our clients to make sure they get right?

Zaheer Hooda: Great question. So when we're talking to clients and they're thinking about this digitisation journey, it usually starts at the level of one or two platforms. The key question underneath that is: within these platforms, what are the specific capabilities that, for them, would signify a successful implementation? And for that, we see five specific areas across deployments. The first is the most obvious in our category, at least: extraction performance. This is straightforward but key, and it often gets confused with simple extraction. The real question, if you step back and look from a carrier's perspective, is: can the platform achieve the level of automation that they're looking for in that specific use case? So data extraction is definitely one of them.

But another piece of that is the full digitisation suite. When we talk about digitisation, that includes many areas: enrichment, inference, business rules, applications. So the performance, I would say, is that first piece. Second, you move into how much of the full intake spectrum we can handle for them. Because the reality is that these operating models, internal to an organisation, are not segmented by the intake spectrum. You have everything coming in, usually through one funnel or maybe a couple of different entry points, and these operations include multiple different document types. So that's one angle of it: can you handle unstructured loss runs, unstructured SOVs, proposal forms? The other angle is transaction types. Outside of just new business, you have claims and renewals. Can you handle all of that? So the key is: do clients gain the confidence that the platform can handle this diversity across the full intake spectrum? Now we've talked about performance and the ability to handle everything that comes through the door.

The next is: how quickly can you scale across the organisation? For folks that have been in the insurance world, you know that systems implementations, policy admin systems or whatever it may be, can take months or even years to scale. So the key question here is: can clients scale quickly, let's say within months, with that power in their own hands, specifically across multiple lines of business and use cases? And I would say the key point here is without vendor dependency.

Fourth, what is their target state architecture? What does their existing tech stack look like? It's critical to understand that, and then to understand where our platform fits within that tech stack. The IT teams, the tech teams, the business teams need to feel comfortable that whatever comes from our platform can easily, quote unquote, integrate with their downstream systems, automatically mapping what comes from us to those systems. So how do we demonstrate that as well? And then finally, within our category, probably more important than in other categories, is the human-in-the-loop experience. We know exceptions are going to happen. That's just the reality of these complex insurance flows. So what do you do to manage those exceptions? There are probably two different areas. One is: can you alert the user when human engagement is needed? And the second is: once the person is engaged, the admin team, the ops team, how do we make it easy for them to find and correct the right data values? Priorities may vary by organisation, but if we can provide confidence across these five areas, that gives clients confidence that this will be a successful digitisation deployment.

Juan de Castro: That makes sense. So you've touched on those five areas. One is the digitisation performance, the accuracy of the data the underwriters will receive. Then the ability to digitise the full intake spectrum. The third one you mentioned was the ability to quickly scale across all lines of business and geographies. Then you touched on being able to support the target state architecture. And the fifth was the human-in-the-loop operational experience of day-to-day operations. So let me jump to Rich. Once clients define those five as their key priorities, the next step is typically a proof of concept or a proof of value. How do those five dimensions translate into the way they structure these processes?

Richard Lewis: Yeah, really good question, Juan. In my view, they need to be directly linked to the five points that Zaheer has just talked through. So in line with looking at your accuracy and your extraction performance, what you're trying to do within a proof of concept is create an environment that's as realistic as it possibly can be, as close to what it will look like in a production environment as possible. One of the ways that we really advocate doing that, and don't get me wrong, it puts us under a lot more pressure and gives me more gray hairs than I care to think about, is processing these risks, these documents, live within a hands-on workshop. As Zaheer mentioned, it's really important to get a good cross-section of input documents, of submissions, and to provide those to help us configure the platform. But then you also hold back a bunch of those so that you can run through them live in the workshop. And that's the ultimate measure. We've really moved away from the times where a platform would require two, three, four weeks to prepare for a proof of concept by training models. That just doesn't really exist anymore; it's very much a legacy approach. So we really advocate for these live submissions and, as I say, making the process as realistic as it can be.

The second piece is really around that comprehensive data set: making sure it's representative of the type of intake you're looking to manage. So include whatever's important for you. It might be ACORD forms, application forms, loss runs, SOVs, bordereaux. It could be claims notifications. Whatever the use case you define, make sure you test a wide array of document types, structured, unstructured, semi-structured, handwritten, in all the different formats from PowerPoint to PDF to Excel. So make sure there's a really robust, varied input that's representative of what you're going to see on a day-to-day basis.

The third is really around showing the usability of the platform and the ability for clients to scale to new use cases: adding new data points, creating brand new schemas within that session. That could be another line of business. It could be that you look at a particular new business underwriting flow and then compare that to how it might look for endorsements, MTAs, renewals. Or you could compare underwriting with claims, for example. So that's a really important point as well.

The fourth point is really around understanding the target architecture and, as importantly, what the target workflow will look like. Often, the typical entry point for our clients is integrating with an email inbox, and that's the ingestion point. Sometimes the email documentation may be surfaced from within an underwriter workbench or from within a portal. And on the output side, thinking about that target workflow means configuring the target output schema to be aligned with the downstream systems you're looking at, be that a workbench, a pricing tool, or the policy admin system. So that's really important as well. The final point is really around exception handling: making sure that you're not setting these sessions up just for the underwriters, that you're actually bringing on board the underwriting assistants and the operations team. When it comes to things like the human in the loop for exception management, make sure they're confident and comfortable leveraging the capabilities that exist within the various platforms available in the market. So those are the five elements, the way that I think about things, aligned with the business outcomes and the dimensions that Zaheer talked through earlier.

Zaheer Hooda: Yeah, I love that. Just one point to add, to build on Rich's five key areas, which of course run in lockstep with mine. Because at the end of the day, if we have five clear business objectives for how a client sees us being successful, the POC needs to resemble those as closely as possible. It is the best proxy for them. After several POCs, this is where we've landed as an organisation: anything we're doing in our POC has to be tightly coupled with one of the five areas we talked about, or we need to change it. And if clients don't see that, then we'll refine the POC. But that is the crux of all of this.

Juan de Castro: It sounds like, based on what you were saying, Rich, a key element in that POC is the live workshop session you were just describing. I guess two questions in one, which is probably not what I should do, but I'll do it anyway. One is: what does the timeline of executing a POC look like? And then, if you could also deep dive into that live POC session, what happens in that session, I think that would be truly helpful.

Richard Lewis: Yeah, so the way that I think about these things is that everything's centered around this live proof of concept: processing submission intake live within that session, not knowing how the platform is going to respond. That can be, as I said earlier, quite nerve-wracking, but it's the ultimate test of the capability of the platform. So that's the main focus, but there's obviously a lot of preparation that needs to go in from a client perspective to make sure they get the most out of that session. In terms of that preparation, there are probably four critical sets of inputs you would expect clients to bring to the table to help us prepare. One is pretty much aligned to the conversation we've just been having and the points that Zaheer made: agree exactly what the success criteria look like. If you're going to be evaluating vendors across a bunch of criteria, be really specific about what those success points are, to allow you to do a qualitative and quantitative assessment across those dimensions. So that success criteria is going to be critical.

The second thing is to provide your target output schema. So if you select a particular line of business, make sure that you're providing a comprehensive schema for it, defined in the structure your downstream systems require. So we've got the success criteria. We've got the schema.
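To make that concrete, here is a minimal sketch of what a target output schema and a validation check against it might look like. Every field name, type, and pick-list value below is hypothetical, invented purely for illustration; a real schema would mirror exactly what the downstream workbench or pricing tool expects.

```python
# Hypothetical target output schema for a property line of business.
# Field names and pick-list values are illustrative only.
target_schema = {
    "insured_name":        {"type": "string", "required": True},
    "effective_date":      {"type": "date", "format": "YYYY-MM-DD", "required": True},
    "total_insured_value": {"type": "number", "unit": "USD", "required": True},
    "occupancy":           {"type": "enum", "values": ["Office", "Retail", "Warehouse"]},
    "loss_history_years":  {"type": "integer", "required": False},
}

def validate(record: dict, schema: dict) -> list[str]:
    """Return a list of schema violations for one extracted record."""
    errors = []
    for field, spec in schema.items():
        # Required fields must be present in the extracted record.
        if spec.get("required") and field not in record:
            errors.append(f"missing required field: {field}")
        # Enum fields must hold a value from the downstream pick list.
        if spec.get("type") == "enum" and field in record \
                and record[field] not in spec["values"]:
            errors.append(f"{field}: {record[field]!r} not in pick list")
    return errors
```

The point of writing the schema down this explicitly before the POC is that "accuracy" is then measured against the shape the downstream system actually needs, not against raw extraction.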

The next piece, once you've defined your schema, is to provide a ground truth. Along with the success criteria, the ground truth is basically the expected values that you're anticipating the platform will deliver. It's how you mark the accuracy level of the extraction performance. And it's really important that you build this manually, having an underwriter go through and pick out the various data points for that ground truth. You can't do it from an extract from a policy admin system, because there's a lot that goes on between a submission being received and a risk actually being bound within a policy admin system, for example.
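A minimal sketch of how that marking could work, assuming the ground truth and the extracted output are both simple field-to-value maps. The field names and values are invented for illustration:

```python
def field_accuracy(extracted: dict, ground_truth: dict) -> dict:
    """Compare extracted values to a manually prepared ground truth,
    returning per-field matches and an overall accuracy figure."""
    def norm(v):
        # Normalise trivial formatting differences before comparing.
        return str(v).strip().lower() if v is not None else None
    results = {}
    for field, expected in ground_truth.items():
        results[field] = (norm(extracted.get(field)) == norm(expected))
    accuracy = sum(results.values()) / len(results) if results else 0.0
    return {"per_field": results, "accuracy": accuracy}

# One submission's ground truth, built manually by an underwriter.
truth = {"insured_name": "Acme Corp", "effective_date": "2025-01-01",
         "total_insured_value": "1000000"}
# What the platform extracted live in the workshop.
extracted = {"insured_name": "ACME CORP", "effective_date": "2025-01-01",
             "total_insured_value": "950000"}
score = field_accuracy(extracted, truth)
# Two of three fields match after normalisation, so accuracy is 2/3.
```

Real scoring rules would be agreed as part of the success criteria, for example whether case differences or reformatted dates count as matches; the normalisation step above is one simple choice.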

And then the fourth point is really the submissions. Provide a handful of submissions, three to five, with the ground truth associated with them, to allow the providers to configure the platform. We'd expect to receive those assets one to two weeks before, no more; no provider should require those assets earlier than that. And that sets you up for a really good, productive live session. Within the live session, obviously, we're running the extraction, looking at the performance on submissions that we haven't seen and haven't had the ground truth for. So it's as close as we can get to a real-life production environment. The other thing within those live sessions, as we touched on earlier, is how easy it is to make refinements. If we're not getting the expected output, how easy is it to augment that, to change that, to be reactive, to achieve the data points and the accuracy that you require?

There's something quite magical about going in and changing certain descriptions to change the behavior of the platform, allowing you to get the outputs that you require. The other point is around the flexibility of the platform: really seeing how easy it is to add an additional field, to create an entire brand new schema. This is what you're going to be doing throughout your digitisation journey, testing various different data points as you go. And then, because of the live environment in which we're processing those submissions, the third and final step is really about what you do after the proof of concept. It's about getting those output values, which you should be measured on, across to the client as quickly as possible, immediately after the workshop. That way there's no opportunity for any kind of human intervention, and you're getting a real, honest look into the extraction performance. So those are the three stages. Typically, in a proof of concept, we'd have one to two weeks' notice for the preparatory work. You then run a half-day workshop for the proof of concept and allow prospects and customers to get hands-on with the platform. And then the follow-up is looking at the accuracy and feeding that back to the various vendors.

Juan de Castro: Okay, a couple of follow-up questions on what you just said. In this case, I promise to go one by one through the questions. The first one: you were referring to live processing, a live session. Can you explain that a bit more? What do you mean by live? In the POC, is the platform fully integrated with the client's systems, or is it running standalone?

Richard Lewis: Yeah, really good question. So in most POCs, you're looking to assess the capability of the platform. What I mean by doing this live is that we turn up to the session, you share 10 or 15 different submissions with us, and we then process them through the platform and view the output results. The major thing we're testing here is the performance and the accuracy, going back to Zaheer's first point. What we don't often test is the integration capability. All of these platforms are fully API-enabled, so you shouldn't necessarily have too many concerns about the integration capabilities. Where we do see some challenges, I would say, is if you focus on pure extraction. I'll give you an example around occupancy. You can extract a value from a slip or an MRC or from an SOV for the occupancy of a particular property. However, that may or may not be in the same format as in your pricing tool or your underwriter workbench, where occupancy will usually be a pick list of 80 to 100 values. As I say, there's often a disconnect between what you might receive from a client or from a broker and the information that's then required to push into a downstream system. So that's a really important point: as part of your target output schema, don't just focus on the extraction, because if you do, there's a whole load of transformation that needs to go on post-extraction. And if you think about all the different variations of what an occupancy could be versus the target occupancy output you require, you're into a situation where it's a bunch of mapping tables, and that's just for one specific field. That's where we see a lot of deployments fall foul, particularly with clients using older technology.
So, no, we wouldn't typically integrate with downstream systems as part of the proof of concept. You would save that for deployment.
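The occupancy example can be sketched as a post-extraction normalisation step: mapping free-text values from submissions onto a fixed pick list. The pick list and synonym table below are invented for illustration; in a real deployment that table grows large, which is exactly the maintenance burden being described, and it covers only this one field.

```python
# Illustrative pick list, as it might appear in a pricing tool or workbench.
OCCUPANCY_PICK_LIST = ["Office", "Retail", "Warehouse", "Manufacturing"]

# Hypothetical mapping from free-text variants seen in broker submissions
# to pick-list values. In practice this table keeps growing.
OCCUPANCY_SYNONYMS = {
    "office space": "Office",
    "commercial offices": "Office",
    "shop": "Retail",
    "retail unit": "Retail",
    "storage": "Warehouse",
    "distribution centre": "Warehouse",
}

def normalise_occupancy(raw: str):
    """Map a raw extracted occupancy string to a pick-list value,
    or return None when no mapping is known (an exception to be
    routed to the human in the loop)."""
    cleaned = raw.strip().lower()
    # Direct match against the pick list itself.
    for value in OCCUPANCY_PICK_LIST:
        if cleaned == value.lower():
            return value
    # Otherwise fall back to the synonym table.
    return OCCUPANCY_SYNONYMS.get(cleaned)
```

So `normalise_occupancy("Distribution Centre")` lands on the pick-list value "Warehouse", while an unmapped value such as "data centre" returns None and becomes an exception for human review.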

Juan de Castro: That was very helpful. A different question, well, perhaps more of a comment: as you were describing the POC, it was quite obvious how you test the digitisation performance. But one of the points mentioned at the very beginning was how the platform scales across use cases, lines of business, geographies. I think you mentioned, Rich, that within that live session you should also be doing a full configuration from scratch. Is that how you test that type of capability?

Richard Lewis: Yeah, exactly. We see that clients are looking to become self-service in this capability. In the past, a lot of customers have been beholden to the vendor, and particularly as they move towards an enterprise-wide ingestion capability, versus proof-of-concept, line-of-business, or use-case-specific deployments, there's a huge desire from the insurance market to bring this capability in-house and to be responsible for their own destiny. They don't want to be beholden to a third party, which obviously makes sense in terms of speed to market and the ability to test, build experiments, and see what's working and what's not. So the ability to be agile and flexible, to add those fields, create schemas from scratch, add different languages, and tailor the platform to different geographic requirements is really, really important for sure.

Juan de Castro: And the last question on what you said: you mentioned that as part of a POC process, you would request a handful of examples in advance. Obviously, if it's a handful, it's not for actually training the platform. So what is the goal of having that handful of cases in advance?

Richard Lewis: Yeah, so it's absolutely not for training the platform; it's to refine the configuration. The handful of submissions, three to five per use case, combined with the ground truth, is to make sure that we're not overfitting the descriptions to a very narrow data set. What we want to get to is a position where we're confident, across those three to five submissions, that we've configured the platform in a way that will perform well in the live extractions. But as you say, there's no pre-training, there's no annotating documents as you would in legacy machine learning technologies. It's literally just so we can test the configuration and make sure that we're extracting the right values in line with the ground truth.

Juan de Castro: That makes sense. Back to you, Zaheer. You've obviously led dozens of these POCs in the U.S. We would love to hear your thoughts on what you enjoy most in these processes.

Zaheer Hooda: Yeah, I'm a bit biased, because all this conversation from Rich about the live POC has the blood flowing. It is the live POC. The energy and excitement around the live POC, let me see if I can capture that. Rich mentioned it, right? Nerve-wracking, magical, but it's quite thrilling. You go in, you sit with the carrier's team. You're like, all right, let's pull up the platform. Let's configure it together. Let's process real data, their submissions, just as Rich said. You get their real submissions, you process them in real time, and they see that power right there. It's massive. For folks that have been in this space, even just a few years ago, I'm sure many remember painstakingly having to go form by form: oh, here's where the effective date is. The limitations in that approach were massive. Now that world is just completely gone. And then you watch the carrier realise: oh, I could take control of this. That usually happens in the middle of the POC. At the beginning, they come into the live POC expecting to see amazing live extraction, and it may sound talked up a bit, but then they see it. It is magical. It is thrilling. It's exciting for us; it's exciting for them. But midway through the POC, when they start using the self-service platform, they start to realise that they actually have full control, that they can make the platform do exactly what they want. They see it happen, right in front of them. No smoke and mirrors. Fully transparent. Hands-on. And that first moment when they realise the technology can deliver, you can't get tired of that. That's probably the best part about a POC.

Juan de Castro: What do you think about what Zaheer was saying, Rich? Anything else to add?

Richard Lewis: Yeah, I think about it in two dimensions. One is the output, because the output is ultimately what you're interested in. That's what's going to drive the business benefit: your straight-through processing, creating capacity within your organisation, improving the speed at which you get back to brokers. So the accuracy is clearly going to be really important. But I think the magical part is maybe when it's not quite extracting the values we expect, and you go and make a tweak, or add in an additional data point, and within three or four minutes you can see that data flowing through as an output. That's the real magical point. You can see clients go on a journey of discovery in terms of the excitement. Their eyes open, and you can see in their body language that they're leaning in. They start to drive the platform themselves without too much direction from us. And that, I think, is what technology should do. That's the beauty of it.

Juan de Castro: Yeah, that's fantastic. The one thing we have not touched on much, which you mentioned, Zaheer, at the very beginning, is the operational experience: the experience that the client teams doing exception management have with the platform. Perhaps you can summarize what the typical feedback is from those teams when looking at a platform like ours?

Zaheer Hooda: So I think if we look at it again from an organisation's lens, they probably have two different user types using the platform. One is an operations view, like you said, where they can see the intake and the flow of risk coming through, and who's on point, who's working what. Because, as you can imagine, the volume of what's coming through the intake is massive, and having a tool to see what's coming in and what stage it's at in the process is key. Has it been processed? Is it being reviewed? Is it completed? Has it been passed to the downstream system? Having that visibility is key. The second is the human-in-the-loop experience, which is massive, because that's where a lot of time is lost. Even if you have amazing extraction, let's say the best extraction, a read-only view of that extraction is not really going to help, because there's a big aspect of change management and trust in the system. So when an ops team comes into this, number one, they can see the full view. But when the admin team goes into our exception management platform and asks, how did it find this value across these 200 pages? Being able to just click on it and, within seconds, see where it was found. And if you disagree, you can quickly capture something else. That right there is what drives the transformative experience, where not only do they trust the output, but the trust is enabled by the platform's ability to show where the value was found. Or in some cases: well, I found this value five times, and one of the five was a conflicting data point. Being able to see that gives confidence in the platform. And over time, you see the time spent improving too, from an efficiency perspective. And that's exactly what these teams are looking for.

Juan de Castro: Thank you, Zaheer. This has been really useful, getting both of your perspectives to understand what a POC looks like and what our clients experience. So final question back to you, Rich. If somebody listening to this episode is thinking, okay, how do I know more? What should they do?

Richard Lewis: So I guess there are a couple of answers to that. If you're interested in finding out more, there's lots of information on our website: cytora.com. You can also request a demonstration through there as well. Secondly, and more topical to this episode, if you're interested in hearing more about our POC best practices, we have a guide on how to run these proofs of concept in line with best practice. So happy to share that with anyone who might be interested.

Juan de Castro: Fantastic. Well, thank you both. I think this has been a 30-minute summary of best practices and experiences from running POCs. So thank you both for making the time.

Zaheer Hooda: Thank you.

Richard Lewis: A pleasure. Thanks for the opportunity. Bye now.