By Philip White | 17 October 2024
Claims fraud and GenAI – fighting smart!

Philip White, lawyer at The Legal Director, explores how insurers are using generative AI to tackle claims fraud. He discusses the escalating tech battle with fraudsters, regulatory considerations, and the need for a balanced approach that combines AI tools with human oversight to counter deepfake risks in insurance:

Insurers’ use of generative AI (GenAI) to improve claims triage and detect deepfake submissions is developing fast, but so are the techniques being used by fraudsters. Insurers must constantly balance the advantages of settling claims faster and more efficiently (streamlining checks and balances) against the risk of inviting increased claims fraud costs and losses.

Fraud threats posed by GenAI

Research from claims automation company Sprouted shows that 65% of insurance claims handlers have noticed an increase in fraudulent claims since the cost-of-living crisis hit. Additionally, 19% of those surveyed suspect that up to one in four claims now rely on fake supporting documents created or altered using artificial intelligence (AI) and digital tools.¹

GenAI is being used to create customised emails, instant messages, and image and voice content, and to eliminate the obvious errors and spelling mistakes that have traditionally been a focus of internal compliance fraud training programs. In addition, GenAI-enabled chatbots have the potential to scale up fraudsters’ ability to contact victims at a much-reduced cost. Although voice-cloning technology is a little further behind, some commentators consider that, at the current rate of innovation, it will not be long before credible voice scamming emerges, requiring a shift in how we verify with whom we are speaking.

What does regulation have to say? 

The FCA’s approach to consumer protection around AI is based on a combination of the FCA’s Principles for Businesses, other high-level rules, and detailed rules and guidance, including the Consumer Duty.² In other words, all the existing frameworks for resilient, risk-mitigated, well-run regulated businesses continue to apply, including having appropriate systems and controls, with oversight, to manage outsourced tech provision. No change there then, except that the FCA also expects firms to focus on and think about the additional risks GenAI poses to their activities and to outcomes for their consumers and customers.

Don’t lose the arms race, broaden the thinking! 

Deployment of AI tools in claims fraud detection is a critical part of the solution, but not the full answer. There is certainly no shortage of tech providers stepping into this space, offering detection and prevention solutions pitched as a panacea for the challenge. However, unconstrained by laws, regulation and governance, fraudsters will always be positioned to evolve deepfake manipulation faster.

Instead, Jed Stone of Issured says we need to think more critically about how technology and processes can be adapted to reduce the opportunities for deepfake materials to be deployed in the first place. For instance, app technology deployed on a customer’s phone can access the phone’s camera to capture and submit claims evidence, increasing a claims manager’s trust in the veracity of the source material. With blockchain technology deployed, images captured this way can be verified more readily through the app, using metadata from the images to confirm when and where they were taken and whether they have since been manipulated. The burden of detecting and verifying submitted deepfake materials can be materially reduced if the source of capture can be trusted, as the sketch below illustrates.
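To make the trusted-capture idea concrete, here is a minimal sketch of capture-time fingerprinting. It does not describe Issured’s product or any specific blockchain: the in-memory ledger and the record_capture and verify_submission functions are hypothetical stand-ins for an app-side SDK and an append-only anchor.

```python
# Sketch: fingerprint claims-evidence images at the moment of capture so that
# later submissions can be checked against a tamper-evident record.
# (Illustrative only: "ledger" stands in for a blockchain-backed store.)
import hashlib
import json
from datetime import datetime, timezone

ledger = []  # hypothetical append-only store

def record_capture(image_bytes: bytes, device_id: str, gps: tuple) -> dict:
    """Hash the image and its capture metadata as soon as it is taken."""
    entry = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "device_id": device_id,
        "gps": gps,
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }
    # Hash the whole entry too, so the metadata itself is tamper-evident.
    entry["entry_sha256"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(entry)
    return entry

def verify_submission(image_bytes: bytes) -> bool:
    """True only if the submitted image matches a capture-time fingerprint."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return any(e["image_sha256"] == digest for e in ledger)

# Any post-capture edit (including a deepfake swap) changes the hash:
original = b"...raw camera bytes..."
record_capture(original, device_id="phone-123", gps=(51.5072, -0.1276))
assert verify_submission(original)
assert not verify_submission(original + b"tampered")
```

The design point is that verification work moves from inspecting the pixels (hard, and losing the arms race) to checking a fingerprint taken at source (cheap, and hard to fake after the event).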

In addition, identifying whether AI is behind a scam or fraud often comes down to context and to potential flaws in the scam approach that an experienced eye might spot, triggering a more in-depth claims investigation. So, whilst deployment of AI detection tools is critically important, it cannot replace or remove the need for human involvement at the appropriate point in the process.
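Purely as illustration of that “human at the appropriate point” principle, the sketch below shows a triage rule in which only claims a detection model confidently scores as genuine proceed automatically; the thresholds, and the fraud score itself, are assumptions rather than any insurer’s actual process.

```python
# Illustrative triage rule: an AI fraud score routes a claim, but ambiguous
# and high-risk cases always land with an experienced human. Thresholds are
# assumptions for the sketch, not recommendations.
def triage(fraud_score: float) -> str:
    if fraud_score < 0.2:        # model confident the claim is genuine
        return "auto-settle"
    if fraud_score < 0.7:        # ambiguous: a human questions the context
        return "human-review"
    return "fraud-investigation" # high risk: full in-depth investigation

for score in (0.05, 0.4, 0.9):
    print(f"{score:.2f} -> {triage(score)}")
```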

So what’s the solution?

Investing in the latest sophisticated GenAI fraud detection and prevention technology alone is not the answer. As with all technology-driven change, it is as much about wider-picture thinking as it ever was; the change is simply occurring at an even faster rate.

In a regulated marketplace, firms (and their outsourced providers) must be able to demonstrate robust thinking:

  • To show what risk evaluation has been done to identify the possible opportunities and threats arising from deployment of GenAI within or into their activities;
  • To consider how processes can be adapted to help sidestep the technology “arms race”, with more focus on being able to trust the source of submitted data and its capture;
  • To consider how to maintain appropriate human oversight, experience and checks in any automated processes, helping to question context and provide sanity checks;
  • To consider how reporting on the effectiveness of controls can be linked to management visibility of claims outcomes.

One thing is clear – the race is on! 
