Applying AI in Captive Insurance: Practical Use Cases and Considerations

April 07, 2026

At the 2026 Captive Insurance Companies Association (CICA) International Conference in Palm Desert, a session titled "Beyond the Buzz: Real AI Applications in Captive Insurance" examined how artificial intelligence (AI) is being used in practice across captive insurance programs, along with the challenges and considerations that come with adoption. 

The panel included Steve Bauman of AXA XL, Esther Becker of Becker Garland Actuarial, and Julie Bordo of PCH Mutual Insurance Company, who shared perspectives from underwriting, actuarial, and captive owner roles.  

The discussion focused on moving beyond general concepts to practical applications, with an emphasis on how AI is currently being used and how organizations are approaching implementation. 

The panel began by addressing a common barrier to adoption: hesitation around AI. Ms. Bordo noted that concern around the technology is widespread but emphasized the importance of approaching it with a practical mindset, stating that decisions should be based on "facts, not fear."  

That perspective carried through the discussion, with panelists emphasizing that AI is already embedded in many aspects of business operations and should be viewed as a tool rather than a replacement for human decision-making. Ms. Becker reinforced this point, noting that "humans are essential," particularly in evaluating outputs and guiding how AI is used.  

Mr. Bauman added that the concept of "garbage in, garbage out" still applies, emphasizing that data quality remains critical and that human oversight is necessary to interpret results and ensure accuracy.  

With that context established, the panel shifted to how AI is being used today across captive insurance operations. 

In underwriting, AI is being used to evaluate submissions and prioritize opportunities. Mr. Bauman described how AI tools help underwriters sort through large volumes of submissions by applying predefined criteria, such as existing client relationships or alignment with underwriting appetite. This allows underwriters to focus their time on submissions that are more likely to fit the program. 
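The kind of rule-based triage Mr. Bauman describes can be sketched in a few lines. This is an illustrative example only; the criteria, weights, and field names below are hypothetical assumptions, not AXA XL's actual model:

```python
# Hypothetical submission triage: score each submission against
# predefined appetite criteria so underwriters review the best fits first.
from dataclasses import dataclass

@dataclass
class Submission:
    broker: str
    industry: str
    existing_client: bool
    premium_estimate: float

# Assumed appetite rules (illustrative values).
TARGET_INDUSTRIES = {"healthcare", "transportation", "construction"}
MIN_PREMIUM = 250_000.0

def triage_score(sub: Submission) -> int:
    """Score a submission; higher means a better fit for the program."""
    score = 0
    if sub.existing_client:
        score += 2  # prior client relationship weighs heavily
    if sub.industry in TARGET_INDUSTRIES:
        score += 2  # aligns with underwriting appetite
    if sub.premium_estimate >= MIN_PREMIUM:
        score += 1  # meets minimum account size
    return score

subs = [
    Submission("Broker A", "healthcare", True, 400_000),
    Submission("Broker B", "retail", False, 100_000),
]
# Highest-scoring submissions surface first for human review.
ranked = sorted(subs, key=triage_score, reverse=True)
```

In practice the scoring would draw on many more criteria, but the principle is the same: the tool orders the queue, and the underwriter still makes the decision.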

AI is also being used to support multinational programs, particularly in translating policies and documentation across jurisdictions. Mr. Bauman noted that processes that previously took weeks can now be completed in seconds, with human review ensuring accuracy of terminology and context. 

From an operational perspective, Ms. Bordo provided several examples of how AI is being used within her organization. These include automating project management tasks, summarizing meeting transcripts, and generating regulatory tools such as state-level compliance grids. In one example, a team member used AI to build a regulatory tracking tool that became a resource for members. 

AI is also being applied to contract review and marketing functions, including generating graphics and supporting communications. These uses are focused on reducing time spent on repetitive tasks and improving efficiency across the organization. 

Claims management was identified as an area with significant potential. Ms. Bordo described the use of AI tools to summarize medical records, deposition transcripts, and adjuster notes. These summaries are reviewed and refined by staff, but they significantly reduce the time required to prepare reports. She noted that a process that previously took several hours can now be completed in a fraction of that time.  

AI is also being integrated into underwriting and risk evaluation processes. Ms. Bordo described efforts to use AI to analyze publicly available information, such as customer reviews and news reports, to identify potential risk indicators for prospective insureds. This allows the organization to identify exposures that may not be captured through traditional underwriting data. 

In addition to operational uses, the panel discussed applications in analytics and actuarial work. Ms. Becker described a project involving the development of an AI-based tool to support loss development factor selection. The goal was to create a tool that could be easily validated by actuaries rather than a complex model that would be difficult to interpret. 

She emphasized the importance of transparency, noting that the tool was designed to provide not only results but also the underlying logic used to reach those results. This allows users to evaluate whether the output is reasonable and supports governance and documentation requirements. 
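A transparent factor-selection tool of the kind Ms. Becker describes might look something like the sketch below. The loss triangle and the simple-average selection rule are illustrative assumptions, not her actual model; the point is that the tool returns its reasoning alongside the selected factor, so an actuary can validate the output:

```python
# Illustrative sketch: loss development factor (LDF) selection that
# reports the logic used, not just the answer.

# Cumulative losses by accident year (rows) and development age (columns);
# None marks ages not yet observed. Values are made up for illustration.
triangle = [
    [1000, 1500, 1800],
    [1100, 1600, None],
    [1200, None, None],
]

def age_to_age_factors(tri, col):
    """Link ratios from development age `col` to `col + 1`."""
    return [row[col + 1] / row[col]
            for row in tri
            if row[col] is not None and row[col + 1] is not None]

def select_ldf(tri, col):
    """Select an LDF and return both the factor and its rationale."""
    factors = age_to_age_factors(tri, col)
    selected = sum(factors) / len(factors)  # all-year simple average
    rounded = [round(f, 4) for f in factors]
    rationale = (f"age {col}->{col + 1}: observed factors {rounded}, "
                 f"selected all-year simple average of {selected:.4f}")
    return selected, rationale
```

Because the rationale string travels with the result, the selection can be checked for reasonableness and filed to support governance and documentation requirements.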

Ms. Becker also shared an example of using AI to generate code for an analytical tool. By describing the desired outcome, she was able to produce functional code that could be validated and incorporated into existing workflows. This demonstrated how AI can support technical tasks even for users without coding experience, while still requiring human validation. 

Another example highlighted AI's ability to connect and analyze large, disconnected datasets. Ms. Becker referenced a project involving transportation data, where AI was used to combine multiple data sources and generate insights that could be applied across a broader system. While outside the insurance industry, the example illustrated how similar approaches could be applied within captive insurance programs. 

As the discussion moved to implementation, the panel addressed several challenges associated with adopting AI. 

One of the primary challenges is user adoption. Fear, mistrust, and misunderstanding can limit how AI is used within an organization. Ms. Bordo emphasized the importance of demonstrating practical value and providing clear guidance on how AI should be used. 

She also noted the importance of having defined policies and governance structures in place, including clear rules around data usage and privacy. Organizations must ensure that AI tools are used in compliance with applicable laws and internal standards, particularly when handling sensitive information. 

Cost was also identified as a factor, particularly when considering enterprise-level AI systems. While these systems provide greater control and data security, they require investment and ongoing management. 

Another challenge is ensuring that data is integrated and usable. The panel discussed the importance of having a "single source of truth," noting that data stored across disconnected systems may limit the effectiveness of AI tools. Integrating data sources and ensuring consistency is a key step in realizing the full value of AI. 

Governance considerations were also discussed in detail. Ms. Bordo emphasized the importance of authenticity and the ability to verify outputs, particularly in light of concerns around misinformation and data reliability. Human review remains a critical component of the process, ensuring that outputs are accurate and appropriate. 

She also noted that AI should be used as a starting point rather than a final product. While it can support drafting and analysis, final outputs should reflect human judgment and refinement. 

At the board level, governance includes establishing AI policies, ensuring appropriate oversight, and aligning AI use with organizational objectives. This includes considering how AI impacts risk management, operations, and overall strategy. 

The panel also addressed broader concerns about AI's impact on the workforce. While there are concerns about job displacement, panelists emphasized that AI is more likely to change how work is performed rather than eliminate roles entirely. Ms. Becker noted that interacting with AI can enhance skills and enable individuals to focus on higher-value tasks. 

At the same time, the panel acknowledged that organizations must actively manage how AI is introduced and used. Providing training, encouraging experimentation, and starting with smaller tasks can help build confidence and support adoption. 

As the session concluded, the panel returned to a practical takeaway: organizations should begin experimenting with AI in controlled ways and build from there. As Ms. Becker summarized, the approach is to "start small" and build confidence over time. 
