In early 2024, McKinsey surveyed 100 companies. 63% of executives polled cited AI implementation as a “high” or “very high” priority. Yet 91% of those same executives admitted they did not feel “very prepared” to do so in a responsible manner. These figures track with the 75% of respondents to a recent Roots Automation poll who answered that they either had not yet started their AI journey or were still researching the topic.
For underwriters, whose business is risk, weighing the benefits of AI against the downsides is particularly important. The finding in a December 2023 Harvard Business Review article that “80% of all AI projects fail” (roughly double the failure rate of IT projects a decade ago) suggests this caution is well-advised.
So, how best to reduce or eliminate risk when implementing AI in underwriting?
No Magic Wand
One way to mitigate risks and increase transparency is to assign precise roles to AI-powered underwriting tools and set guardrails around what they can and cannot do.
Jennifer Krawec, the Head of Global Risk Solutions Incubation at Liberty Mutual Insurance, points to a particular insurance document processing challenge: “You get shipped boxes of legal documents that you can’t possibly read through – but now you can point these tools at them and find out what’s important,” she says. “That’s something that can provide real value immediately.”
This solution to a longstanding challenge underscores a key difference between enabling decision-making by underwriters and automating it. “Enabling” means getting at more pertinent data faster, from more sources, and with greater accuracy. Whether it’s pulling unstructured information from a vast range of document types or double-checking document compliance, these clearly defined tasks performed by AI empower underwriters without creating conditions for opaque black-box judgments.
The Human Factor
A key factor in successful AI implementation in underwriting is human oversight. Popular Large Language Models (LLMs) like ChatGPT appear increasingly competent because they are designed to “test” their own guesswork against optimal outcomes. (The model essentially asks itself, “Does my guess on this string of words look right?”) However, in highly specific domains such as underwriting, a human subject matter expert (e.g., an underwriter) typically reinforces optimal AI behavior.
“You can ask a Digital Coworker to do common tasks using natural language,” explains John Cottongim, co-founder and CTO at Roots. “Then you assess its performance, and if the process was undertaken well, you can break it down into discrete steps that can be repeatable, traceable and trackable.”
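As a rough illustration of what “repeatable, traceable and trackable” steps might look like, here is a minimal sketch of a workflow where every discrete step writes to an audit log a human can review. All names, steps, and data here are hypothetical examples, not a description of Roots’ actual implementation:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AuditedStep:
    """One discrete, repeatable step in an AI-assisted workflow."""
    name: str
    run: Callable[[dict], dict]

@dataclass
class Pipeline:
    steps: list
    audit_log: list = field(default_factory=list)

    def execute(self, record: dict) -> dict:
        for step in self.steps:
            record = step.run(record)
            # Every step leaves a trackable entry for later human review.
            self.audit_log.append({"step": step.name, "snapshot": dict(record)})
        return record

# Hypothetical steps standing in for AI extraction plus a human-review gate.
extract = AuditedStep("extract_fields", lambda r: {**r, "insured": "ACME Co."})
flag = AuditedStep("flag_for_review",
                   lambda r: {**r, "needs_review": r.get("insured") is None})

pipeline = Pipeline([extract, flag])
result = pipeline.execute({"doc_id": "policy-001"})
```

The point of the sketch is the audit log: because each step is discrete and records its output, an underwriter can inspect exactly where a value came from rather than trusting a single opaque answer.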
Underwriters' rigor and experience are crucial to ensuring AI systems adapt appropriately to industry changes, whether regulatory or based on customer needs.
Thoughtful Governance
A common objection to AI – that systems can “hallucinate” facts – has also surfaced concerns among insurance decision-makers. “We do need to hold people accountable for the outputs; we need the human teams both making the key decisions and monitoring bias that can creep into AI systems,” says Cottongim. “It’s a question of applying common sense and not being swept up in the hype too rapidly.”
For Krawec, the solution is a conscious and tailored approach to governance around AI. “Fast-moving enterprises don’t typically like to pump the brakes,” she says. “But governance is the key to managing the risks while gaining the benefits.”
Sean O’Neill, senior partner at Bain & Co., has a more positive outlook to share with underwriters considering those risks. “The bar is actually higher for validation, traceability and transparency in the generative tools we’re seeing [now],” he says. “We can’t have black box solutions where we can’t figure out how it arrived at its answers.”
Experiment to Learn
The balance to strike is explicit governance controls alongside sufficient freedom for underwriting teams to deploy tools in clearly defined ways around discrete processes. Rigorous proof-of-concept roll-outs not only identify use cases where AI-powered underwriting can yield major improvements; they also highlight steps where enhanced governance or audit trails can reassure underwriting teams and their customers.
That, ultimately, is the test: when customers and underwriters trust AI implicitly and feel comfortable that the governance around it has been well designed, the benefits are virtually limitless.
Dive deeper into how Insurance Document AI delivers greater transparency into underwriting data – and helps underwriters manage risk more effectively.
View an on-demand panel discussion, “Unlocking Value with AI-Powered Underwriting,” for more exclusive insights and knowledge on AI for insurance from Sean O’Neill, Jennifer Krawec and John Cottongim.
Click now to discover critical use cases and success stories from insurers using AI to extract more decision data, faster, delivering improved bottom-line results and better customer experiences.