TigerRisk’s Anna Neely and Nathan Schwartz on how catastrophe models do not provide all the answers.

Perils

Measuring the risk of losses from catastrophic events has always been one of the knottiest problems for the insurance industry. 

Prior to the 1990s, insurance companies relied on traditional actuarial techniques, calculating long-term averages to smooth out the high-severity, low-frequency nature of catastrophe risks. Then came the age of computer modelling, harnessing ever-greater processing power and increasingly sophisticated simulation models developed by specialists in meteorology, seismology, physics and engineering. 

But the reliance on such models is now being put to the test. As catastrophic events pile up, from hurricanes on the Gulf Coast to wildfires in the west, many insurers are realising that while computer models are valuable tools, they may be expecting too much of them. 

There are two dimensions to this. The first is the limitations of the models themselves. Modelling firms are constantly improving and updating their models, but they must wait for peer-reviewed science to accumulate before adjusting them. Insurers that rely solely on such models to price risks can fall behind, because models take time to adapt to new data. 

The second challenge is the unique nature of each insurer. Catastrophe models are good at some things, but with today's technology no model can, in addition to estimating the probability of extreme events, also underwrite individual risks, adjust for every location and policy parameter, produce a reliable estimate of each risk's expected loss, and account for the risk of accumulations. 

Even if a perfect model could be built for the industry as a whole, it would not take into account the individual approaches of different insurers. Each company has its own methods of selecting risks and settling claims. Strategies for mitigating losses, such as rigorous insurance-to-value practices, texting policyholders to prepare for an imminent event and getting claims adjusters quickly to the scene of a catastrophe, also differ between insurers. These approaches work, they make a difference to losses, and so they should make a difference to the measurement of risk. 

While catastrophe models are an essential tool for today’s insurance industry, they can only be one part of an effective solution. 

To achieve a complete view of risk, insurers need a solution that is custom-designed for their unique situation. This view will be underpinned by a catastrophe model, but it must also include industry loss history and the insurer's own underwriting, claims and loss experience. A bespoke company view of risk can help insulate insurers from radical shifts in vendor model results. It also provides a valuable tool for analysing how their business practices and the risk profile of their policyholders deviate from industry standards. 
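One simple way to sketch this kind of blending numerically is classical credibility weighting, where an insurer's own loss experience receives a weight that grows with the volume of its data and the vendor model view fills the remainder. The square-root rule, the 100-year full-credibility standard and all the figures below are illustrative assumptions, not a description of TigerRisk's methodology.

```python
import math

def credibility_weight(n_years: float, full_credibility_years: float) -> float:
    """Square-root credibility rule: the weight on the insurer's own
    experience grows with data volume, capped at 1.0 (full credibility)."""
    return min(1.0, math.sqrt(n_years / full_credibility_years))

def blended_loss_rate(own_rate: float, model_rate: float,
                      n_years: float, full_credibility_years: float = 100) -> float:
    """Credibility-weighted blend of a company's own loss rate with a
    vendor catastrophe model's loss rate for the same exposure."""
    z = credibility_weight(n_years, full_credibility_years)
    return z * own_rate + (1.0 - z) * model_rate

# Illustrative numbers only: 25 years of own experience against a
# hypothetical 100-year full-credibility standard gives a weight of 0.5,
# so the blended rate sits midway between the two views.
print(blended_loss_rate(own_rate=0.012, model_rate=0.020, n_years=25))
```

In practice the weighting would be far more nuanced, varying by peril, region and line of business, but the principle is the same: the more relevant experience a company has, the more its own view should pull the result away from the off-the-shelf model.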

This view of risk forms the critical foundation upon which all risk management decisions should be based, from pricing and underwriting each policy, to enterprise risk management, to reinsurance buying decisions that limit the risk.

At TigerRisk Partners we work closely with all our catastrophe reinsurance clients to make sure that they use the catastrophe models for what they are good at, but do not rely on them for all the answers. We help clients use industry loss experience to show areas where the catastrophe models are missing risk and to interrogate their own loss history to identify when models may be overstating or understating their own unique exposure. 

Building a bespoke view of risk, one that combines the best catastrophe models can offer with industry data and a company's unique experience, is the sophisticated solution.