25 January 2024
How developments in AI are impacting claims fraud

Damien Rourke, partner, Clyde & Co

How can you tell when someone is lying? Is it their body language that gives them away? Their eye movements; their gestures; the way they touch their face whilst speaking? Or is it the nature of the lie itself? Not enough detail or perhaps too much detail?

The truth is, unless you’re a skilled practitioner, catching someone out in a lie can be difficult. That’s something the insurance industry has learnt the hard way. In 2022, the sector detected 72,600 fraudulent insurance claims, according to the Association of British Insurers (ABI). The actual number will be much higher, as many go undetected.

Insurers have made many attempts to root out these fraudulent claims. Voice stress analysis was much heralded in the early 2000s: technology developed in Israel listened to a claimant’s voice during a phone call to detect abnormally high levels of stress. Cognitive interviewing took a psychological approach, with skilled call handlers questioning claimants in a gentle but persistent style originally developed for interviewing child victims of sexual abuse.

Now, with the rapid development of artificial intelligence (AI), a new approach is being pioneered. The key is the inconsistency of the liar. In short, telling a detailed lie about a car accident requires imagination, a good memory, the ability to act, a degree of confidence, intelligence and flexible thinking. Telling a lie once is easy; telling it over and over again is hard. Details will change. Terminology fluctuates. Aspects become over-embellished. It’s these changes and inconsistencies that AI can take advantage of.


Take, for example, a faked car accident in which a driver and three passengers are involved. All four people must provide statements, complete claims forms and medical forms, and speak to physicians. The chances that all four alleged victims offer consistent statements and accounts over a period of several months are low. Human nature just isn’t like that. Our memories are not foolproof.

Those inconsistencies existed before AI; there’s nothing new about them. But what is new is AI’s ability to ingest large volumes of data, analyse it and then identify the inconsistencies. What would once have taken a claims fraud specialist an entire day to accomplish can now be achieved in minutes.

At Clyde & Co, we’ve been investing heavily in developing the capabilities of our Newton platform for several years. To identify fraud, the system must go through two key steps. First, it must be able to absorb and understand written materials – and this is where one of the big leaps has been made. Early on, the insurance sector focused on template-based optical character recognition (OCR): the system was taught that on form A, for example, the information contained in box B would always be a date or a name. That works for a well-structured document, but it can’t make sense of a set of free-form doctor’s notes. The newer approach we’ve adopted is holistic OCR, which pairs the OCR engine with an AI model so that it can read and understand an entire page rather than only the information in defined areas. This allows the AI to operate at a much higher level of comprehension.
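The internals of Newton are not published, but the shift from fixed boxes to whole-page comprehension can be illustrated with a minimal sketch. Here simple text patterns stand in for the AI comprehension layer, and the doctor’s note, patient name and dates are invented for illustration:

```python
import re

# A free-form doctor's note -- no fixed boxes or form fields to anchor on,
# so a template-based extractor would have nothing to latch onto.
note = (
    "Patient Jane Smith attended on 14/03/2023 complaining of neck pain "
    "following a road traffic accident on 02/03/2023. Prescribed analgesics."
)

def extract_entities(text: str) -> dict:
    """Pull a patient name and any dates out of unstructured text.

    A real holistic-OCR pipeline would pair an OCR engine with a language
    model; these regular expressions are a stand-in for that step.
    """
    dates = re.findall(r"\b\d{2}/\d{2}/\d{4}\b", text)
    name_match = re.search(r"Patient\s+([A-Z][a-z]+\s[A-Z][a-z]+)", text)
    return {
        "patient": name_match.group(1) if name_match else None,
        "dates": dates,
    }

print(extract_entities(note))
```

The point of the sketch is the contrast: the extractor is driven by the content of the whole page, not by the coordinates of a pre-defined box on a known form.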

Step two is the analysis of the data. We’ve integrated ChatGPT into our platform to do just that. Within seconds it can spot the inconsistencies and even present them in a table.
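How the ChatGPT integration works under the hood isn’t set out here, but the core idea – comparing each claimant’s extracted answers field by field and tabulating where the accounts diverge – can be sketched in plain Python. The claimants, field names and values below are all invented for illustration:

```python
# Hypothetical fields extracted from each claimant's account; in practice a
# language model would pull these out of free-form statements and forms.
statements = {
    "driver":      {"accident_time": "14:30", "speed_mph": "30", "road": "A41"},
    "passenger_1": {"accident_time": "14:30", "speed_mph": "30", "road": "A41"},
    "passenger_2": {"accident_time": "16:00", "speed_mph": "50", "road": "A41"},
    "passenger_3": {"accident_time": "14:30", "speed_mph": "30", "road": "M25"},
}

def find_inconsistencies(stmts: dict) -> list:
    """Return one table row per field on which the accounts disagree."""
    rows = []
    fields = next(iter(stmts.values())).keys()
    for field in fields:
        values = {who: s[field] for who, s in stmts.items()}
        if len(set(values.values())) > 1:  # the accounts diverge here
            rows.append({"field": field, **values})
    return rows

for row in find_inconsistencies(statements):
    print(row)
```

A production system would let the language model do both the extraction and the comparison, but the output is the same in spirit: a table of the points on which four supposedly independent accounts fail to line up.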

The importance of these developments is not lost on an insurance industry facing a potential rise in fraudulent claims due to tougher economic conditions. Every historic indicator points to the link between economic hardship and a growing volume of fraudulent insurance claims. But there are long-term wins, too.

Clearly, the holy grail would be predictive AI – the ability to identify fraudsters or fraud hot spots. Whilst an attractive idea, the ability to predict the future based on historical data and patterns of data is questionable. The reputational damage for getting such a prediction wrong would be very serious.

A more realistic development is that, as insurers and law firms tap into the benefits of AI, so too will the fraudsters. If we use AI to check for inconsistencies, fraudsters may attempt to do the same, effectively checking their documentation before submitting. We’ve seen a similar situation play out in the cyber security sector, where both attackers and defenders now use AI. Fraudsters already have access to free-to-use AI tools like ChatGPT, so we may well see criminals on the dark web offering AI services to fraudsters.


One area of concern is the growth of so-called deepfakes and shallowfakes: the manipulation of media, often using AI. Faked news footage, faked audio of a politician or a phone call to the police, faked dashcam footage, faked photographs. These techniques create realistic but fabricated audio, video or image content that can be used to manufacture evidence in fraudulent claims. AI-driven solutions are being developed to identify these fakes by analysing inconsistencies in digital fingerprints, patterns and other anomalies that are imperceptible to the human eye. It’s our hope that in the next five years, AI will evolve an increasingly sophisticated ability to detect and counter these deepfakes and shallowfakes.

Another win that could prove highly effective for the insurance industry would be AI’s ability to scan and analyse social media. Currently, human investigators spend hours trawling through social media sites like Facebook looking at claimants’ behaviour pre- and post-‘accident’ and seeing who their connections are. These investigations regularly reveal valuable clues about insurance fraud. Teaching an AI system to do the same task would not only speed up the investigation process but also reduce costs and broaden the scope of searches.

We’re on the edge of an AI-driven revolution in fraud detection. For the insurance sector, this holds out the promise of massive financial savings and potentially a longer-term reduction in attempted frauds as criminals see less prospect of success. Insurers face the challenge of keeping pace with the significant leaps forward in AI technology. But it will be just as important to consider the ethical dimension of these advances. The move from identifying fraud to predicting fraud will be one of the biggest challenges for our industry in the next 10–20 years.


Image: Damien Rourke, partner, Clyde & Co
Guest Post
This post was created just for Claims Media by a guest contributor.