Friday, 16 March 2018

Re-Imagining an insurance company - do you really need a policy management system?



I have a fair amount of background in the insurance industry (I even hold the FLMI designation, brag brag), but I have always been puzzled by why insurance companies needed policy admin systems. True, my knowledge of technology is more modern, so I probably can't appreciate the marvels these systems performed in their day, but to me, the technology exists today to do things much more simply, cheaply, and flexibly.

Before I had the guts to write this blog (nobody wants to look totally stupid), I chatted with a few friends: a solutions architect with experience in the insurance industry, a friend whose company sells banking systems, and a couple of actuaries with experience across the industry, including in product development. But of course any errors/omissions are mine.

To me, insurance policy admin systems deal mainly with workflows. There is a specific flow for every activity in the insurance industry; let's take policy issuance as an example. The flow might be something like:

[Flowchart: the manual policy issuance workflow]

There are 5 actors in this workflow: the customer, the producer, and 3 actors internal to the insurance company (the policy admin team, the underwriting team, and underwriting management, which handles exceptions).

The producer and customer/prospect meet, data is captured, and an application is made. If the data is complete, the application reaches the underwriting team; otherwise it goes back for completion. If the case is insurable it proceeds; otherwise it is rejected.

The underwriting team decides whether it is a standard case or a non-standard case within limits. In both of these cases, a notice of quotation is sent to the customer, followed by a formal quote. If the case is non-standard and beyond limits, it is escalated to underwriting management, which either approves it (in which case the notice of quotation and quote follow) or rejects it (a rejection letter is sent).

Upon receipt of the quote, the customer can decide to accept and the policy is issued.
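
To make the routing concrete, here is a minimal sketch of that flow in Python; the field names, statuses, and the exception-handling rule are invented for illustration.

```python
# A minimal sketch of the policy issuance routing described above;
# statuses, limits, and the approval rule are illustrative stand-ins
# for real underwriting rules.
def process_application(app):
    if not app["complete"]:
        return "return to producer for completion"
    if not app["insurable"]:
        return "reject"
    if app["standard"] or app["within_limits"]:
        return "send notice of quotation, then formal quote"
    # Non-standard and beyond limits: escalate to underwriting management.
    if underwriting_management_approves(app):
        return "send notice of quotation, then formal quote"
    return "send rejection letter"

def underwriting_management_approves(app):
    return app["sum_assured"] <= 1_000_000  # hypothetical exception rule

print(process_application({"complete": True, "insurable": True,
                           "standard": False, "within_limits": False,
                           "sum_assured": 750_000}))
```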

You may be wondering why I have colour coded my flow chart in white on dark blue, dark blue on white, and turquoise on white. Basically, while humans are involved at all stages in the flow chart, the white backgrounds show steps that are ready to be performed either by automation or by the use of "Data Science"/Analytics/ML/AI.

[Flowchart: the re-imagined policy issuance workflow, with an Analytics/ML/AI actor]

In the workflow above, I have added one actor, "Analytics/ML/AI", and instead of the admin team, we now have an automated process.

In this case, application rejections and delays in issuing policies due to incomplete information disappear, since the details are checked on the spot using smart forms (with verification against existing customer records to ensure consistency).

Once the forms are received electronically (note that in the earlier workflow I skipped the scanning of paper forms, which would just have made the comparison even more drastic), each application is evaluated by the models/algorithms, first for insurability, then for whether the case is standard. The non-insurable and standard cases are dealt with instantly.
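
As a sketch of what that automated triage step could look like, here is a toy decision tree built with scikit-learn; the features, labels, and training data are all hypothetical.

```python
# A minimal sketch of automated underwriting triage, assuming a decision
# tree trained on historical underwriting outcomes. Features, labels,
# and data are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Historical applications: [age, sum_assured, bmi, smoker]
X_train = [
    [35, 100_000, 22.0, 0],
    [62, 500_000, 31.5, 1],
    [28, 250_000, 24.0, 0],
    [55, 750_000, 29.0, 1],
]
# Labels: 0 = standard, 1 = non-standard within limits, 2 = escalate
y_train = [0, 2, 0, 1]

model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

def triage(application):
    """Route a new application: standard cases are quoted instantly,
    the rest go to the underwriting team or management."""
    label = model.predict([application])[0]
    return {0: "issue quote automatically",
            1: "route to underwriting team",
            2: "escalate to underwriting management"}[label]

print(triage([40, 200_000, 23.5, 0]))
```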

The underwriting team only has to focus on non-standard cases, and underwriting management on non-standard cases beyond limits.

This eliminates individual bias from the process, since all policies are treated identically, and considerably shortens the time it takes to evaluate the straightforward cases, improving turnaround time. It also alleviates the burden on the underwriting team, allowing them to focus on fewer cases and turn those around faster too.

Of course, analytics and automation can also play a role in the other hugely time- and resource-consuming area in insurance: claims.

Let's take motor insurance claims as an example, and start with a very simplified process flow:

[Flowchart: a simplified motor insurance claims process]

When a motor accident occurs, the customer sends pictures and makes a report (with the help of the producer or company employees), and a case is created. At a later stage, if necessary, a police report and claim documents are submitted and added to the case file. An estimate of the cost involved is provided by a 3rd party, usually a loss adjuster. Then the claim, based on these submissions, is processed.

The assumption is that below a certain dollar value, an insurer does not stringently verify the claim. Above that value, however, two sorts of checks are carried out: one on the claimant, estimating the probability of fraud, and the second on whether the documents support the damage claimed. Anything that is not in line warrants an investigation. These checks increase the cost of the claim to the insurance company. Then, based on whether the all-clear is obtained, the claim is approved and paid, or rejected.

What analytics can do is perform the fraud checks and evaluate claims automatically.

[Flowchart: the re-imagined claims workflow with automated checks]

Using Analytics/ML/AI first of all allows the insurer to verify all claims if need be, since the cost of the checks is much lower and the same for every case. Hence, it is no longer possible for wannabe fraudsters to game the system by claiming just below the threshold amount they think the insurer uses.

3 different techniques are likely to be used, and combined.

For the customer check, graph databases are extremely efficient. For example, a classic fraud is the whiplash injury scam, where a group of people take on roles (driver, passenger, victim, witness...) and stage an accident in which the victim claims a whiplash injury. Then they swap roles and go again. Graph databases detect such rings very easily.
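
As a minimal sketch of the idea, here is the same detection done with the networkx library in Python rather than a full graph database; the claims data is invented. People who keep reappearing across claims in different roles form a tight, easily detected cluster.

```python
# A minimal sketch of fraud-ring detection using networkx instead of a
# full graph database; the claims data below is invented for illustration.
import networkx as nx

# Each claim links the people involved, with their role on that claim.
claims = {
    "claim_1": [("alice", "driver"), ("bob", "victim"), ("carol", "witness")],
    "claim_2": [("bob", "driver"), ("carol", "victim"), ("alice", "witness")],
    "claim_3": [("carol", "driver"), ("alice", "victim"), ("bob", "witness")],
    "claim_4": [("dave", "driver"), ("erin", "victim")],
}

G = nx.Graph()
for claim, participants in claims.items():
    for person, role in participants:
        G.add_edge(claim, person, role=role)

# A ring shows up as several people who each recur across many claims
# in the same connected component of the graph.
for component in nx.connected_components(G):
    people = {n for n in component if not n.startswith("claim_")}
    repeat = [p for p in people if G.degree(p) > 1]
    if len(repeat) >= 2:  # several people recurring together is suspicious
        print("possible fraud ring:", sorted(repeat))
```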

The damage estimation is done via AI/ML; there are quite a few providers nowadays who offer this service, even as an API. Computer vision models, fed the photos of the damaged vehicle, estimate the damage and the cost of repair.
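
A sketch of what calling such a service might look like; the endpoint, authentication scheme, and response fields below are entirely hypothetical, since each provider defines its own API.

```python
# Hypothetical example of calling a third-party damage-estimation API;
# the endpoint, auth scheme, and response fields are invented here and
# will differ per provider.
import requests

API_URL = "https://api.example-damage-estimator.com/v1/estimate"  # hypothetical

def estimate_damage(photo_paths, api_key):
    files = [("photos", open(p, "rb")) for p in photo_paths]
    resp = requests.post(API_URL, files=files,
                         headers={"Authorization": f"Bearer {api_key}"},
                         timeout=30)
    resp.raise_for_status()
    return resp.json()  # e.g. {"severity": "moderate", "repair_cost_sgd": 2400}

# estimate = estimate_damage(["front_left.jpg", "bumper.jpg"], "MY_KEY")
```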

To extract information from the accident reports, existing NLP or text mining algorithms, including Named Entity Recognition (NER), can be used. These handle the simple hygiene: extracting the weather, road conditions, speed... Then, based on this information, models can be built to estimate the damage. The results of this and the computer-vision damage estimate can be combined and cross-validated.
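
For instance, here is a minimal sketch using spaCy's pretrained pipeline for NER, plus a few keyword rules for domain facts (like weather) that generic NER does not label; the report text is invented.

```python
# A minimal sketch of extracting structured facts from a free-text
# accident report, using spaCy's pretrained pipeline for NER plus a few
# simple keyword rules; the report text is invented.
import spacy

# pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

report = ("On 12 March the insured was driving at about 70 km/h along the PIE "
          "in heavy rain when the car in front braked suddenly.")

doc = nlp(report)

# Named entities give us dates, locations, and quantities such as speed.
for ent in doc.ents:
    print(ent.label_, "->", ent.text)

# Simple keyword rules cover domain facts NER doesn't label, like weather.
WEATHER_TERMS = {"rain", "fog", "haze", "storm", "wet"}
weather = [t.text for t in doc if t.lower_ in WEATHER_TERMS]
print("weather indicators:", weather)
```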

The nice thing is that this process can be run quickly and across all claims; only those that do not clear the process would require human intervention to review for approval.
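
The routing itself can then be very simple; here is a sketch combining the outputs of the three checks, with purely illustrative thresholds:

```python
# A sketch of the final routing step, combining the outputs of the three
# checks above; the thresholds are illustrative only.
def route_claim(claimed_amount, fraud_score, cv_estimate):
    """fraud_score in [0, 1] from the graph check; cv_estimate is the
    repair cost predicted by the computer-vision service."""
    consistent = abs(cv_estimate - claimed_amount) <= 0.2 * claimed_amount
    if fraud_score < 0.1 and consistent:
        return "auto-approve and pay"
    return "refer to human reviewer"

print(route_claim(2500, fraud_score=0.02, cv_estimate=2400))  # auto-approve
print(route_claim(9000, fraud_score=0.02, cv_estimate=2400))  # refer
```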

Hence the time to process claims (and the cost) drops, as the analytics take care of most cases quickly and consistently, and the cases that do not clear the process are referred to humans, who can focus on quickly clearing the more complex cases.

Ok, so what I am saying is that Analytics/ML/AI can help add consistency to insurance processes (applications, claims...), make the processes run faster, and allow humans to really focus on the cases requiring specific attention.

No big deal, at least not enough to warrant the disclaimers and bragging at the beginning...

Actually, I will go further.

As I have shown above with 2 examples, most of the administration of insurance policies can be relatively easily converted to workflows, with some document management thrown in. In fact, I would argue that most insurance policy admin systems are just that: workflow and document management systems, with formulae (for calculating the next premium, for example) built in.

The newer insurance systems acknowledge this by highlighting that they come with workflow templates, including for their products, to let users get started quickly. But is that enough?

I would argue that these do not go far enough. 

I am not a technical guy, nor am I very familiar with many cloud providers, so I will use GCP (Google Cloud Platform) as an example.

They have this very interesting product called Dataflow, which allows you to create workflows just like the diagrams above, where each box has a specific task to accomplish. Another neat aspect of Dataflow is that the boxes can run algorithms, for example a simple decision tree to decide whether a risk is standard or not, or the application of a clustering model to decide the premium to apply in a particular case. Hence, most non-manual processes can be orchestrated with Dataflow.

When Dataflow is fed by another product, Pub/Sub, which basically acts as a giant reservoir that holds requests and releases them as and when they are ready to be processed (either by pull or push), it becomes even more powerful.
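
Here is a minimal sketch of such a pipeline using the Apache Beam Python SDK (the programming model behind Dataflow); the topic name and the triage rule are made up.

```python
# A minimal sketch of a Pub/Sub-fed Dataflow pipeline written with the
# Apache Beam Python SDK; topic name and triage rule are illustrative.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

class TriageApplication(beam.DoFn):
    def process(self, message):
        import json  # imported here so the DoFn serializes cleanly
        app = json.loads(message.decode("utf-8"))
        # Stand-in for the real model: route by sum assured.
        app["route"] = ("standard" if app.get("sum_assured", 0) < 500_000
                        else "underwriter")
        yield app

options = PipelineOptions(streaming=True)  # runner/project flags omitted

with beam.Pipeline(options=options) as p:
    (p
     | "Read applications" >> beam.io.ReadFromPubSub(
           topic="projects/my-insurer/topics/applications")  # hypothetical
     | "Triage" >> beam.ParDo(TriageApplication())
     | "Log" >> beam.Map(print))  # in reality, write to Bigtable/storage
```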

It could be argued that traditional insurance systems can also be tweaked to make the workflows more automated, but when scale comes into the picture, the winner is clear.

Most insurance systems are chunky. They come at a certain price and can manage a certain number of cases. So if you are a new company, you might be getting something too big for you. On the other hand, once you decide on a size, it can be quite hard to quickly scale up and scale back down again.

This is where GCP comes in. Pub/Sub allows insurance companies to accept huge volumes of applications that arrive in quick succession or even simultaneously, and to make sure none are lost (the order in which they are attended to may be shuffled, but no application should be lost).
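
A minimal sketch of pushing incoming applications into Pub/Sub with the google-cloud-pubsub client; the project and topic names are made up.

```python
# A minimal sketch of publishing incoming applications to Pub/Sub;
# project and topic names are hypothetical.
import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-insurer", "applications")  # hypothetical

def submit_application(application: dict):
    data = json.dumps(application).encode("utf-8")
    future = publisher.publish(topic_path, data)
    return future.result()  # message ID once Pub/Sub has durably stored it

# submit_application({"name": "A. Tan", "sum_assured": 200_000})
```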

Then, since it is a no-ops cloud product, the number of machines required to temporarily process this huge workload can be put online in minutes, process the applications, and be shut down when they are no longer needed.

The data, once processed by Dataflow, can be pushed to Bigtable (to take advantage of low latency and easy updates), with, say, Cloud Storage for pictures.
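
A minimal sketch of writing a processed application to Bigtable with the google-cloud-bigtable client; the instance, table, and column family names are made up.

```python
# A minimal sketch of writing a processed application to Bigtable;
# instance, table, and column family names are hypothetical.
from google.cloud import bigtable

client = bigtable.Client(project="my-insurer")
table = client.instance("policies-instance").table("applications")

def save_application(app_id: str, status: str, premium: float):
    row = table.direct_row(app_id.encode("utf-8"))
    row.set_cell("app", b"status", status.encode("utf-8"))
    row.set_cell("app", b"premium", str(premium).encode("utf-8"))
    row.commit()  # low-latency single-row write, easy to update later

# save_application("APP-000123", "approved", 1250.0)
```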

A simple example will illustrate the point. Let's say an insurance company decides to offer accident coverage to participants of the Standard Chartered Marathon, all 50,000 of them. The applications come in fast and furious during registration; these need to be processed and approved quickly, and then you won't need such processing power any more.

What if the next event you want to cover is the OCBC Mass Cycling event, but this time you want to insure the bikes too...

The other nice thing about Dataflow is that creating a new workflow can be done in Python, not some arcane language.

Basically, insurance companies using a cloud service with something like GCP's Pub/Sub, Dataflow, and ML, storage like Bigtable (which can include metadata about pictures) and Cloud Storage for the actual pictures/scans, together with APIs (for example, to verify that payments have been made), can substantially simplify their processes, make them much more responsive to business needs, and give themselves the flexibility to create new products and/or new processes that suit them and their customers, rather than relying on pre-packaged generic products.

Again, I'd just like to reiterate that I am using GCP as an example; I am sure other cloud providers can offer similar advantages.

However, the conclusion, to me, is still valid. If the main reason insurance companies are tied to traditional insurance management systems is legacy, new insurers are at an amazing advantage. Technology today allows them to create new, flexible workflows, customise products for micro-insurance/event insurance, process them in a timely manner by quickly scaling up and down as required, and do all of this with very generic skills (Python).

Does that mean traditional insurers are going to go extinct? No. Their legacy is a burden, but nothing stops them from using new technology to create and manage micro-insurance/event insurance and new products that serve their customers better, running in parallel with their current systems as those are sunset and the functions ported over.

The key is to find ways to efficiently serve your customers exactly how they want to be served: customised products, flexible options, true customer centricity. This is the true advantage; the technology is just an enabler.

The insurance industry is ripe for disruption, and the appropriate use of technology, the sacrifice of some sacred cows, and the re-thinking and customising of products while retaining flexibility are the way forward.