Artificial intelligence technology continues to evolve, and it is affecting many areas of insurance, from claims to underwriting to customer service, according to panelists at the 2025 PLUS D&O Symposium in New York City.
But has the technology developed so much that it could replace human underwriting in the next five years?
“I think it is the biggest question out there,” said Jeffrey Chivers, CEO and co-founder of Syllo, an AI-powered litigation workspace that enables lawyers and paralegals to use language models throughout the litigation life cycle.
Another way of asking this question is whether AI can develop judgment, not just in underwriting but across all business domains in which judgment is an essential part of the job, he said.
“Is there any change here with respect to a model’s ability to exercise the kind of nuanced value judgment and other types of judgments that go into a mission-critical job?” he asked. “Thus far, the answer for me has been no. If the answer is yes at some time in the next five years, I think that’s what changes everything.”
However, Claire Davey, head of product innovation at Relm Insurance, said that major shifts are already happening in other areas of insurance, particularly those involving more administrative tasks.
“It depends on how the organization wants to deploy [AI] and utilize it,” she said. “But I think many jobs, particularly those that are administrative, are at risk of being phenomenally changed by artificial intelligence technology. It is going to be [the biggest] shift in commerce that we’ve seen in a generation, and insurance is no different.”
That said, she agreed that underwriting jobs are safe, for now.
“One of the key governance controls and duties with AI technology is that it does require human oversight, so while AI could perform some underwriting stages, you would hope that there is still a human reviewing its output and sense-checking that,” she said.
AI’s Underwriting Judgment
AI technology is having a material impact on the insurance industry in other ways, panelists agreed. To start, the litigation landscape is already seeing a transformation.
Within five years, there will be a lot more adoption of generative AI across legal and compliance functions, Chivers predicted. “And I think five years from now, a couple of things will be really prominent.”
He said debate will continue to grow around transparency and around red flags that AI surfaces within an organization.
“Do you attribute knowledge to management if you had an AI agent in the background that surfaced these various red flags or yellow flags even if nobody reviewed it?” he said. “I think the transparency that generative AI brings within a big organization is going to be a big subject of discovery litigation.”
He added that another area to watch is the degree to which companies are handing off decision-making responsibilities to AI.
“If we are in a world where companies are handing off that decision-making responsibility, it just raises a host of issues related to coverage,” he said.
This decision-making responsibility needs to be carefully considered with a human in the loop because of generative AI’s shortcomings, he said.
“It’s not a quantitative model, and it also really lacks what I would describe as judgment,” he said. “And so when I think about how do you understand these large language models and what they bring to the table in terms of artificial intelligence, I think the best way to think about it is in terms of different cognitive skills… [L]arge language models have certain cognitive skills like summarization and classification of things, translation, transcription, [but] they completely lack other cognitive skills.”
Allowing AI to participate in too much decision-making can be particularly dangerous because of one of its strongest skills so far: language and rhetoric. This means AI models can excel at masking the fact that they lack the judgment to operate as an intelligent agent, Chivers explained.
“If you allow the large language model to generate things like plans and plans of action, it literally generates these for itself. It has some objective in mind, and it writes out 10 steps for itself as to how to accomplish that objective. And it takes each of those steps and generates ideas about how to execute it. And then it goes about it, and if you give it access to other systems, it will be able to function-call against those systems and cause real-world impacts within your organization,” he said.
“At the moment, I think it would be basically insane to allow the current iteration of large language model agents to actually run wild within systems.”
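The agent loop Chivers describes, and the human oversight control Davey mentions, can be illustrated with a short sketch. This is a hypothetical example, not code from either panelist: the plan, tool names, and `approve` callback are all invented stand-ins showing how a reviewer's sign-off could gate each function call an agent wants to make against real systems.

```python
# Hypothetical sketch of an LLM agent loop with a human-in-the-loop
# approval gate: the model drafts a plan, then each step is executed
# as a function call against another system only if a human approves it.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    description: str
    tool: str    # name of the system/function the agent wants to call
    args: dict

def draft_plan(objective: str) -> list[Step]:
    # Stand-in for the model writing out its own steps toward an objective.
    return [
        Step("Look up the policy record", tool="crm_lookup", args={"id": "123"}),
        Step("Send renewal notice", tool="send_email", args={"to": "insured@example.com"}),
    ]

def run_with_human_in_loop(objective: str,
                           tools: dict[str, Callable[..., object]],
                           approve: Callable[[Step], bool]) -> list[object]:
    """Execute the plan, but require explicit human sign-off per step."""
    results = []
    for step in draft_plan(objective):
        if not approve(step):   # the oversight control: nothing runs unreviewed
            results.append(("skipped", step.description))
            continue
        results.append(tools[step.tool](**step.args))
    return results
```

In practice, `approve` would surface each proposed call to a reviewer; letting it return `True` unconditionally is the "run wild" scenario Chivers warns against.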
Underwriters’ AI Judgment
Beyond the use of generative AI within underwriting, how are insurers underwriting companies that use generative AI as part of their business model?
“I think the risk profiles of insureds who are either developing or utilizing AI are shaped by the use case of that AI,” Davey said. “So dep