
Is Your Predictive Analytical Modeling Wrong?

July 24, 2017

"All models are wrong; some models are useful." This quote, displayed years ago at the entrance to a major insurer's investment group, reminds us of how ubiquitous predictive analytical modeling has become in insurance. The original quote, "Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful," is attributed to George E.P. Box, a renowned British statistician.1

Given how pervasive predictive analytical modeling has become in all corners of insurance, the quote serves as an important reminder that models are mere approximations of the real world. While they certainly can be beneficial, models should not be used as a substitute for required critical thinking as supplied by management and the board of directors.

A casual Internet search on "predictive analytical modeling in insurance" turns up hundreds of articles. A Deloitte white paper published on the Society of Actuaries website states that "The use of advanced data mining techniques to improve decision making has already taken root in property and casualty insurance as well as in many other industries." Captive.com recently featured "What Is the Future of Individual Claims Reserving?" describing how modeling is changing the face of claims reserving. From assets to underwriting, there is no corner of the insurance industry that has not been changed as a result of this technology. 

The problem arises, however, when we allow ourselves to be lulled into thinking that the models are infallible. Long-Term Capital Management (LTCM) provides an excellent case in point. LTCM was a Greenwich, Connecticut-based hedge fund founded by John Meriwether, the former vice chairman and head of bond trading at Salomon Brothers. Its board of directors included Myron S. Scholes and Robert C. Merton, who shared the 1997 Nobel Prize in Economics for their work on derivatives pricing. LTCM collapsed in 1998 after losing $4.6 billion in less than 4 months. The firm pursued an absolute-return strategy built on arbitrage models that put the likelihood of the scenario that wiped it out at less than a hundredth of a percent.
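How a model can call a real event "less than a hundredth of a percent" likely often comes down to the tail of the assumed distribution. The sketch below is illustrative only and is not LTCM's actual model: it compares the probability that a normal (thin-tailed) model and a Student-t (fat-tailed, 3 degrees of freedom) model assign to the same 10-standard-deviation adverse move.

```python
# Illustrative only: how the distributional assumption drives the
# "impossible event" estimate. Neither model is LTCM's actual model.
import math

def normal_tail(x: float) -> float:
    """P(Z > x) for a standard normal variable."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def t3_tail(x: float) -> float:
    """P(T > x) for a Student-t with 3 degrees of freedom (closed form)."""
    u = x / math.sqrt(3)
    return 0.5 - (math.atan(u) + u / (1 + u * u)) / math.pi

move = 10.0  # a "10-sigma" adverse move
print(f"Normal model:     P = {normal_tail(move):.1e}")  # on the order of 1e-23: "can't happen"
print(f"Fat-tailed model: P = {t3_tail(move):.1e}")      # on the order of 1e-3: roughly once in a thousand
```

Both models can fit ordinary, day-to-day data almost identically; they disagree by some 20 orders of magnitude only about the extreme scenarios that matter most, which is exactly where blind trust in the model is most dangerous.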

Roger Lowenstein, author of When Genius Failed: The Rise and Fall of Long-Term Capital Management, which recounts the demise of LTCM, wrote a piece titled "Long-Term Capital Management: It's a short-term memory," published in the New York Times on September 7, 2008, as the financial crisis was unfolding. Mr. Lowenstein states the following in the article.

The Long-Term Capital fiasco momentarily shocked Wall Street out of its complacent trust in financial models, and was replete with lessons, for Washington as well as for Wall Street. But the lessons were ignored, and in this decade, the mistakes were repeated with far more harmful consequences. Instead of learning from the past, Wall Street has re-enacted it in larger form, in the mortgage debacle [and] credit crisis.

In the wake of Long-Term Capital's failure, Wall Street professed to have learned that even models designed by "geniuses" were subject to error and to the uncertainties that inevitably afflict human forecasts.... Whether this wisdom endured may be judged by events of the past year, when not only Bear Stearns but also scores of banks and financial institutions have written off hundreds of billions of dollars—a result of blithe faith in models....

Assuming the insurance industry is not looking to replicate the mistakes of the financial industry, the question becomes how wrong do the models need to be before they become useless? And, how do we guard against the implicit bias that models are correct? Alton Cogert, president and CEO of Strategic Asset Alliance, offers some useful pointers for how to test the usefulness of a model. These include the following.

  1. How long has the model been in existence? Longevity does not inherently correspond with usefulness, but it does provide some assurance that users find it helpful and that it has been vetted.
  2. How accurately does it reflect the real world? A corollary we would add: how simple or complex is the model's structure? With too many variables, it becomes difficult to ascertain what each one contributes to the output, and overly complex multivariable models can produce dramatically different results from very minor changes to a few inputs, obscuring what is actually driving the outcome.
  3. Closely aligned with question 2 above: can the users of the model easily explain its assumptions, structure, and output in layman's terms? In other words, can the users, and even the builders, of the model describe how it works and what it does without lapsing into jargon? If not, how can you rationally interpret the results to judge whether they are accurate?
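Point 2 can be made concrete with a toy example. The two hypothetical models below fit the same historical data equally well (whenever the two inputs move together, both output the same number), but the one with large offsetting coefficients swings wildly when a single input shifts by 1 percent:

```python
# Hypothetical illustration of point 2: two models that agree perfectly on
# past data but react very differently to a tiny change in one input.
# The inputs x1 and x2 are nearly collinear in the historical data, so
# wildly different coefficients can fit that history equally well.
def simple_model(x1, x2):
    return 1.0 * x1 + 1.0 * x2      # parsimonious: y = x1 + x2

def complex_model(x1, x2):
    return 501.0 * x1 - 499.0 * x2  # large offsetting coefficients, same fit

# On historical data where x1 == x2, both models give identical output.
print(simple_model(1.00, 1.00), complex_model(1.00, 1.00))  # 2.0 2.0

# A 1% change in a single input...
print(simple_model(1.01, 1.00))   # 2.01 -> output moves 0.5%
print(complex_model(1.01, 1.00))  # 7.01 -> output moves roughly 250%
```

If the model's users cannot explain why a half-percent change in one input moved the answer by 250 percent, that is precisely the confusion about "what is actually driving the outcome" described above.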

There is no denying that predictive analytical modeling and artificial intelligence are becoming more entrenched in insurance and are here to stay. The bigger question is whether this will be a net positive or negative for the industry. Pew Research Center released a report earlier this year titled Code-Dependent: Pros and Cons of the Algorithm Age. It makes for very interesting, and we would argue required, reading for all management and boards in our industry. There is no replacement for "human" common sense.


  1. For further reading, see "Your Asset Allocation Model Is Wrong!" by Alton Cogert of Strategic Asset Alliance.

