A Brave New World
InsurTech and the personalisation of insurance is a subject I’ve written about a lot. The concept is simple. Old-world methods of assessing risk are blunt instruments with a one-size-fits-all mentality. In the brave new digital world, risk can be assessed using an order-of-magnitude greater volume of data and perspective. For many insurance lines, this use of non-traditional sources of data will be unique and specific to an individual risk. It will be analysed, assessed and manipulated by sophisticated computing capabilities to create a personalised perspective on each and every individual risk.
Today, individual data from wearables, telematics and IoT sensors have put insurers onto the first rung of the personalisation ladder, albeit to improve product distribution rather than inform underwriting decisions. But nonetheless, the personalisation of insurance is the direction of travel.
In a study by researchers at the Universities of Stanford and Cambridge, a machine learning algorithm was developed to accurately predict human personality types just by analysing an individual’s ‘likes’ on Facebook. With as few as 100-150 ‘likes’, the algorithm could determine someone’s personality more accurately than their friends and family, and almost as well as their spouse!
When it comes to risk appetite, machine learning algorithms can determine the difference between an over-confident personality (riskier) versus a well organised (safer) human simply by analysing their use of “always” and “never” over “maybe” or “possibly”, or by the way they arrange events with a specified time and location rather than a generalised intent: “let’s meet at 10am at Starbucks” versus “let’s meet up Saturday”.
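To make this concrete, the word-usage signal above can be sketched as a toy scoring function. This is a minimal, purely illustrative sketch: the keyword lists, weights and neutral default are invented for this example, not taken from any insurer’s model, and a real system would be trained on labelled data rather than hand-picked words.

```python
import re

# Hypothetical word lists for this sketch only -- a production model
# would learn these signals from labelled training data.
DEFINITE_WORDS = {"always", "never"}   # associated here with over-confidence
HEDGING_WORDS = {"maybe", "possibly"}  # associated here with caution

def confidence_ratio(text: str) -> float:
    """Share of 'definite' words among all signal words found in the text.

    Returns 0.5 (neutral) when the text contains no signal words at all.
    """
    words = re.findall(r"[a-z']+", text.lower())
    definite = sum(w in DEFINITE_WORDS for w in words)
    hedging = sum(w in HEDGING_WORDS for w in words)
    total = definite + hedging
    return 0.5 if total == 0 else definite / total

posts = [
    "I always drive fast, never had an accident",   # scores 1.0 (definite)
    "Maybe we could possibly meet up on Saturday?",  # scores 0.0 (hedging)
]
for post in posts:
    print(f"{confidence_ratio(post):.2f}  {post}")
```

Even this crude ratio separates the two example posts; the point is that language style, not content, is what the algorithm is reading.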
So, it makes sense, doesn’t it, for insurers to tap into social media data (in addition to the other non-traditional sources of data) to get a better insight into an individual’s appetite for risk?
This is what Admiral wanted to do back in 2016 when they launched, then hastily withdrew, Firstcarquote, an app-based auto insurance product aimed at young drivers in the UK. Admiral planned to use a customer’s Facebook data to determine their attitude to risk and apply a discount to those deemed ‘safer’. Unfortunately for Admiral, they hadn’t fully understood Facebook’s policy on third-party use of data, and the use of social media intelligence for personality profiling was pulled. Now Firstcarquote runs without this social media intelligence.
Big data is not a game that can be played by different rules
In January this year, the Wall Street Journal reported that New York’s regulator is “going to allow life insurers to use data from social media and other non-traditional sources when setting premium rates”.
However, this comes with the caveat that insurers “will have to prove the information does not unfairly discriminate against certain customers”. The key word here is “prove”. This is the point the FCA also made in their recent thematic report on household insurance. In the covering letter from last October, the FCA specifically called out the issue of effective “control” (or lack of) by insurers over pricing practices. In particular, the FCA called out the issue of differential pricing and the risk of discrimination against consumers using rating factors based directly or indirectly on data.
The point is that the regulators recognise that the times are a-changing, but the rules haven’t! The recent FCA pricing review revealed how little insurers knew or understood about where they stood (when it came to using data in pricing strategies) or where they were heading on pricing based on personalised data.
Do insurers have “data control” over their business?
To help me answer this question and provide an expert opinion, I turned to my friend Duncan Minty, who “helps organisations achieve greater certainty on ethical issues”. His latest post is called “Ethics, data and analytics: the problem that insurers will have with identity” and it is a very thoughtful and informed read.
He told me: “You only have to go back five years and insurers were saying that rating was so complicated they didn’t understand it anymore. If they publicly said that today, they’d be in front of the regulator answering questions about [lack of] control.
“Policymakers and regulators are not going to be generous when it comes to insurers using social media data. Insurers will need to be fully transparent, non-discriminatory and openly fair to every customer. But there is a problem with this because of adverse selection. If an underwriter knows something about a customer that leads to a different price, they are duty-bound to use it, even if it means another customer pays a higher premium because of it.”
Duncan makes a great point. However, the problem is that this data genie is already out of the AI bottle, and there is no going back. The solution, therefore, is likely to be that a new ‘bottle’ is needed, one that the regulator can retrofit over the data genie.
In chatting this issue of big data and the personalisation of insurance through with Duncan, we boiled it down to three key areas that should be considered from a risk and regulatory perspective.
1. The essence of control
Do you know what you are doing? This is the fundamental question that all regulators ask the regulated. Duncan explained further, “this question of control is central to the debate about the use of social media data by insurers, whether that is in distribution, underwriting or settling claims. It is also critical to understand for the new breed of InsurTech startups who are pushing the algorithmic approach to automated personalisation of insurance.
“All insurers, traditional or InsurTech, need to know, and be able to demonstrate, that there is no bias in the data they use or the algorithms that determine such things as pricing.”
Insurers simply need to know what they are doing and be able to explain it!
2. The insurer ME versus the real ME
“Personalisation is a misnomer”, Duncan explained, “the insurer thinks they’re creating a profile of a person, but they’re not. What they’re doing is collecting lots of independent data points, such as analysing your selfie smile for signs of mental health, or looking for whether you drink regular or bottled water. They use these data points to create an insurer’s view of the customer, but it can never be as accurate as the actual and real me.”
And this is the thing about the ‘personalisation’ of insurance: can the insurer ever really rely on their version of Me? If a customer knows that their behaviour on social media is being monitored and assessed, and that as a result it will influence how much they’ll be charged for car insurance, or whether they’ll get a mortgage at a better rate, it’s entirely plausible that the consumer will game their behaviour accordingly, isn’t it? We already see this in the Instagram age of projecting a lifestyle that doesn’t match reality.
In which case, from a regulatory perspective, what does this mean for an insurer’s ability to prove that all their customers have been treated fairly and equitably?
As Duncan explained, “there is a contract between the insurer and customer and the insurer must be able to demonstrate [to the regulator] that they can meet their contractual obligations and that this is free of bias or errors and inaccuracies”.
3. The structural danger of personalisation
One of the key benefits of insurance is financial certainty, however, the inevitable consequence of personalisation is price instability. In a personalised insurance model of the future, the customer with a clean record of no-claims always enjoys a low premium. That is until they make a claim. Then they see their personalised risk profile change resulting in their premiums going up accordingly.
Duncan explained, “We buy insurance for price stability. If insurance pricing becomes unstable and we see big changes in premium, then it follows that customers might decide to not buy insurance at all. The danger is that customers feel they are being penalised just because they’ve called in their insurance cover.”
This begs the question for insurers and InsurTechs: if they take personalisation to its extreme, what does it mean for insurance?
When Duncan talks on this subject at conferences, he first shows a picture of a push bike. This represents old insurance. He then shows a picture of a modern bike with rockets attached. This represents new insurance with data-fuelled rockets strapped to the side; going much, much faster, but also with less and less stability.
“At this point, I ask the question” Duncan told me, “does insurance know where it is going? Is it onto a superhighway, or off a cliff?”
Is personalisation the beginning of the end of insurance as we know it?
In 2016, the FCA published a statement on the role of big data in consumer financial services. The FCA identified increased segmentation of the market as a risk, the consequence being that customers deemed ‘riskier’ would be priced out of affordable insurance.
In other words, the massive growth in new sources of data, combined with the predictive capabilities of AI, machine learning and their algorithms, leads to ever-smaller consumer segments. Theoretically, the personalisation of insurance leads to segments of one.
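The arithmetic behind “segments of one” is simple: each additional rating factor multiplies the number of possible segments, so a handful of personal data points quickly produces more segments than an insurer has customers. The factor names and level counts below are invented for illustration only:

```python
from math import prod

# Hypothetical rating factors and the number of distinct levels each can
# take -- illustrative values, not any real insurer's rating structure.
factors = {
    "age_band": 10,
    "postcode_area": 120,
    "annual_mileage_band": 8,
    "telematics_braking_score": 20,
    "social_media_risk_score": 50,
}

# Segments multiply: 10 * 120 * 8 * 20 * 50
segments = prod(factors.values())
print(f"{segments:,} possible segments")  # 9,600,000
```

Just five modestly granular factors already yield 9.6 million possible segments; add a few more continuous, personalised scores and each customer sits in a segment of one, which is the structural point being made here.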
The insurer’s ability to personalise the risk rating for each individual ultimately leads to a fundamental shift away from the mutual sharing of risk. This not only changes the way that insurance works, it also has a significant social dimension to consider.
Fortunately, we’re not there yet, but the warning signs are clear. And there is much to be done, by the incumbents, the InsurTechs and the regulators in better understanding the consequences of the personalisation of insurance using new tech capabilities that are still in their infancy.
The author Rick Huckstep is Chairman of The Digital Insurer and a keynote speaker, strategic advisor and investor in technology startups.