Accuracy matters - particularly, it turns out, in the legal industry
A recent New York Times article reads:
“A man named Roberto Mata sued the airline Avianca, saying he was injured [in August 2019] when a metal serving cart struck his knee during a flight... Mr. Mata’s lawyers [rejected a call to dismiss] submitting a 10-page brief that cited more than half a dozen relevant court decisions. There was Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and, of course, Varghese v. China Southern Airlines…”
There was just one hitch: no one - not the airline’s lawyers, not even the judge himself - could find the decisions or the quotations cited and summarized in the brief.
That was because ChatGPT had invented everything.
The lawyer confessed to using the generative AI tool, explaining he used “a source that has revealed itself to be unreliable.”
He told the judge he had no intent to deceive the court or the airline, and was simply “unaware of the possibility that [ChatGPT] content could be false.”
He had even asked the program to verify that the cases were real. “It had said ‘yes’,” the NYT item continued.
Putting Intelligence back into Artificial Intelligence
This is unlikely to be the last case of ChatGPT misuse in the legal field - or any other field - not least because a key use case for generative AI is reviewing and summarizing large bodies of text.
It’s seductive in a legal case precisely because a vast amount of largely unproductive grunt-work goes into establishing precedents, analyzing statutes, and combing through text-based evidence.
But tossing out AI because of this case – or others where AI has ‘hallucinated’ facts – would be an error.
The lawyer in question made a mistake, not in using AI to expedite his research, but in expecting off-the-shelf ChatGPT to provide reliable answers.
Accuracy matters in the legal profession, and it matters just as much in the insurance industry.
First, like litigation, insurance involves a lot of paperwork in areas such as claims and underwriting, including large amounts of structured and unstructured data to process and analyze. With tight margins, rising premiums, and a shortage of human resources - especially in areas like claims management, where menial "eyeball time" tasks pile up - AI holds boatloads of promise.
Second, there are powerful AI-based systems already at work in insurance. Our Digital Coworkers, powered by InsurGPT™, rapidly process hard-to-analyze unstructured data thanks to hyper-focused training on insurance documents and data sets. This allows Digital Coworkers to deliver response accuracy and reliability in excess of 95%.
Examples of areas where Digital Coworkers - powered by InsurGPT™ - are making a big impact:
- Improve Submission-to-Quote ratio - Process large volumes of unstructured data held within Commercial submissions, then pass structured data into your raters and underwriting systems so that your Underwriters can produce quotes faster and more frequently.
- Find potential coverage gaps - Identify and surface potential gaps in coverage contained in email correspondence, to ensure you are properly pricing a policy.
- Set up claims faster - Accurately analyze and output succinct summaries to your claims adjusters and systems, to keep your customers and claimants up to date without burdening your adjusters.
- Improve response times - Identify and alert adjusters to time-limited demands contained within any large-volume correspondence, improving speed to evaluation and negotiation.
The opportunity to create workplaces where the humans do all the engaging work and Digital Coworkers focus on the menial grind is compelling.
And no amount of foolish or malicious use of ChatGPT or other AI technologies should convince us otherwise.
Learn more about InsurGPT™
To learn more about InsurGPT™, please contact us or click on the image below.
