Arthur D. Little

Arthur D. Little has been at the forefront of innovation since 1886. We are an acknowledged thought leader in linking strategy, innovation and transformation in technology-intensive and converging industries. We enable our clients to build innovation capabilities and transform their organizations. ADL is present in the most important business centers around the world. We are proud to serve most of the Fortune 1000 companies, in addition to other leading firms and public sector organizations. For further information, please visit www.adlittle.com

AI’s impact on us, and the world of insurance

This article looks at three ways generative AI is going to impact our world:

– Personal productivity: this may be in the workplace, by creating reports and drawing together data pools, but also in our private lives, by learning our habits and adjusting our home environment to complement them;

– Use cases in insurance: AI will increasingly take over the time- and labour-intensive tasks that are normally performed by a human today; and

– The dangers and risks: this technology may pose risks not only to insurance, but to human society as a whole.

Artificial intelligence (AI) will touch every part of our lives, but not like the doomsday scenario suggested by the Terminator franchise or the wayward HAL in 2001: A Space Odyssey. 

In fact, in many ways, it already does. Lots of behind-the-scenes processes in all kinds of industries are being controlled, or at least assisted or augmented, by AI.

The new normal

Every day, our lives are being changed – and arguably improved – by the way AI reshapes how we perform everyday tasks. The latest form, generative AI, is the first type of AI that many of us will be aware of and perhaps even consciously interact with. But what exactly is generative AI (genAI)?

GenAI is simply a system capable of following instructions in order to generate text, images or other media. It is trained to identify the patterns and structures in its training data and uses them to generate new data with similar characteristics.

It is a new, more human way to interact with technology, because it responds to simple language inputs rather than a complex sequence of button presses to coax it into life. Because using it is more like our everyday interactions with friends, family and society at large, it will be adopted more rapidly and accepted more readily as part of normal life.
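As a rough illustration of that "plain language in, content out" interaction, here is a minimal sketch using the OpenAI Python client; the model name, system prompt and question are placeholders for whatever an insurer might actually use, not a recommendation.

# Illustrative only: a minimal "plain language in, content out" call.
# The model name and prompts are placeholders, not a specific deployment.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would do here
    messages=[
        {"role": "system", "content": "You are a helpful insurance assistant."},
        {"role": "user", "content": "Summarise the key exclusions in a typical home contents policy."},
    ],
)

print(response.choices[0].message.content)

The point is simply that the entire "interface" is the two natural-language messages; everything else is plumbing.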

New ways of learning

The way generative AI can collate, analyse and interpret data will have profound effects on human learning. It will lead to what is being referred to as the ‘democratisation’ of data or knowledge, in a way that digital assistants and the internet of things (IoT) have only scratched the surface of to date.

Our society has been based on the value of knowledge and experience. It is highly valued, because it is scarce. 

Generative AI will provide people with insights in areas of learning they have no experience of – and it will be available to everyone, at almost no cost.

This democratisation of knowledge has never been experienced in human history, and it will have profound implications for society.

Some will argue that if genAI delivers such knowledge immediately, this will change the nature of education, because students won’t have to learn or remember anything. 

There is some debate about how students are already using genAI to ‘cheat’ the system and produce their essays and projects – and whether AI should be used to identify the cheaters. But others argue this is looking at it the wrong way.

The reason people cheat is that modern universities focus not on knowledge and learning but on scores – and writing a perfect paper does not necessarily demonstrate that the supposed author has a great mind.

In any case, while machines will undoubtedly become ‘smarter’ than anyone using them, we will still need deep expertise in all areas. This is because while computers can learn and know, they cannot necessarily understand. 

Computers will always struggle with the ways humans use language and the nuances of human interaction, many of which are non-verbal. And we’re a long way from developing the Nexus-6 human-like replicants of the Blade Runner film.

This will mean that humans will simply have to learn the skills that AI cannot replicate and which people do best, in any case. 

Microsoft Copilot, the genAI assistant integrated into the company’s Microsoft 365 productivity suite, is already changing the way many users work.

Copilot works alongside the user in Word, Excel, etc, to improve productivity and, Microsoft claims, creativity. 

The addition of Business Chat this year brings together documents, emails, chats and meetings and can produce a report explaining how a project was completed. This may be a simple record of fact, but it may also highlight ways in which processes can be improved.

Data collected from software developers using GitHub Copilot, published in September 2022, showed that 88% said they were more productive, 77% said they saved time by spending less time looking for information or examples, and 74% said they could therefore focus on more satisfying work.

Insurance use cases

The use case for the insurance world – and other sectors – is that it will revolutionise voice-based interactions with technology, such as chatbots and robo staff.

It will deliver a human-like response and humanise the technology in ways that have not been possible before. 

This is where the technology will succeed first; until now, AI in insurance has been largely focused on the middle and back offices, improving operational efficiency in claims, fraud protection and so on. (See the graphic below on the top 100 AI companies; source: CB Insights.)

Healthcare is a perfect example of where AI will begin to make inroads that customers can see and appreciate, because in almost every country healthcare still relies on pen and paper, said Dr Ben Maruthappu, co-founder and CEO of Cera Care, in a recent BBC interview.

“More than half of organisations in our sector use pen and paper to manage their operations,” said Maruthappu, which is why AI has been a target for the digitisation of the back office since the company’s launch. “AI has been a key part of our DNA and we’re now using AI in more parts of Cera, because we feel it can empower our staff, our carers and nurses to do even better for patients.”

Cera now has more than 100 billion data points on its platform, and this is being used to analyse and create algorithms that are beginning to predict if patients are going to become unwell before they do.

“In 80% of cases, we can predict if someone is going to go to hospital a week before they would, and this gives us a window of opportunity to try to reduce that risk and allow them to be better in the home rather than needing to go to hospital.”

Because many of the patients are cared for over many years and have multiple conditions, Cera can draw insights such as drowsiness not being simply due to fatigue, but potentially indicating an infection in a dementia patient.

“Being able to evaluate and pick up on these patterns of symptoms, we can spot infections or worsening of health conditions early and deal with it. 

“And we’ve been able to reduce hospitalisation rates by 70%, which, given we care for and look after the oldest and most vulnerable, makes a major difference to their lives, as the people we serve typically go to hospital seven or eight times a year. We’re able to get that number down to more like three.”
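To make the idea of prediction from care data concrete, here is a deliberately simplified sketch of the kind of model that could flag rising hospitalisation risk. The features, numbers and threshold are invented for illustration and bear no relation to Cera’s actual models or data.

# Illustrative only: a toy hospitalisation-risk model on made-up features.
# Cera's real models and data are not public; everything here is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical daily observations per patient: [drowsiness score, missed meals, temperature]
X_train = np.array([
    [0.1, 0, 36.7],
    [0.8, 2, 37.9],
    [0.3, 1, 36.9],
    [0.9, 3, 38.4],
])
y_train = np.array([0, 1, 0, 1])  # 1 = admitted to hospital within the following week

model = LogisticRegression().fit(X_train, y_train)

# Flag patients whose predicted risk crosses a threshold so carers can intervene early
new_obs = np.array([[0.7, 2, 38.1]])
risk = model.predict_proba(new_obs)[0, 1]
if risk > 0.5:
    print(f"Elevated hospitalisation risk ({risk:.0%}): schedule an early check-in")

In practice a production model would be trained on far richer longitudinal data and validated clinically; the sketch only shows the shape of the idea, namely turning routine care observations into an early-warning score.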

Microsoft, with OpenAI, is shaping up to transform insurance by incorporating Chat-GPT into organisations in a number of ways, including:

– content creation and summarisation for proposals, reports and presentations, as well as summaries of internal meetings and customer conversations;

– semantic search using natural language and context – smarter and faster searching that is continuously trained as staff interact with customers (a minimal sketch of the idea follows this list); and

– code generation, where developers will spend less time writing lines of code and more time designing new statistical models and mathematical tools to give the actuaries something to think about.
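As a rough sketch of what semantic search over policy wording might look like, the example below ranks a handful of made-up document snippets against a customer question using embeddings from the OpenAI client. The snippets, model name and corpus are all hypothetical, not any insurer’s real implementation.

# Illustrative semantic search over policy snippets using embeddings.
# The documents, query and embedding model are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "Water damage from burst pipes is covered up to the policy limit.",
    "Claims for accidental damage to mobile phones require proof of purchase.",
    "Flood cover excludes properties within 200 metres of a watercourse.",
]

def embed(texts):
    """Return one embedding vector per input text."""
    result = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in result.data])

doc_vectors = embed(documents)
query_vector = embed(["Am I covered if my house floods?"])[0]

# Rank documents by cosine similarity to the query and return the closest match
scores = doc_vectors @ query_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
)
print(documents[int(np.argmax(scores))])

Unlike keyword search, the flood question matches the flood exclusion even though the wording differs, which is the "natural language and context" point made above.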

In addition to the support this will give claims, fraud and underwriting functions, genAI will revolutionise customer support by improving the service that contact centre agents can provide, empowering agents and brokers.

It will also allow for the creation of bigger and better virtual assistants – perhaps even the kind that people not only don’t mind talking to, but even prefer. 

Managing the risks

There are three key risks that people seem to be most concerned about. 

The first is that bad actors will use the technology for bad things. That has happened ever since man picked up a stick and used it as a weapon rather than a tool. It will happen, because the technology is portable, requires no specialist skills and can be stolen or shared easily. 

The other risk people discuss – if not actually worry about for now – is where funding AI researchers to create ever more capable artificial intelligence leads: who will be in control of that technology, humans or the machines?

Aidan Gomez is the co-founder of Cohere, which develops large language models for business and which recently secured $270 million of new funding from backers including Nvidia, Oracle and Salesforce Ventures, valuing Cohere at around $2 billion.

In a recent interview with the Financial Times, Gomez said a doomsday scenario was “exceptionally improbable”.

“There are real risks with this technology,” said Gomez. “There are reasons to fear this technology, and who uses it, and how. So, to spend all of our time debating whether our species is going to go extinct because of a takeover by a super-intelligent AGI is an absurd use of our time and the public’s mindspace. There’s real stuff we should be talking about.”

This real stuff includes social media accounts being flooded with bots that are indistinguishable from humans, which could be used for political interference, and the mitigation strategies required to verify that humans, not machines, are participating, said Gomez.

“There are other major risks. We shouldn’t have reckless deployment of end-to-end medical advice coming from a bot without a doctor’s oversight. That should not happen. That’s just not the right way to deploy these systems. That’s not safe yet. They’re not at that level of maturity where that’s an appropriate use of them.”

Gomez is not anti-regulation, but wants people to understand that the more fantastical stories about the risks have no foundation. 

“They’re distractions from the conversations that should be going on,” he added.

Is there more to it?

However, AI expert Yoshua Bengio in another recent FT interview said he supported the idea of a moratorium on advanced AI systems, simply because the main protagonists – OpenAI, Microsoft, Google – are divided on where AI can go. He sees this disagreement among AI experts as an important signal to the public that science doesn’t know the answers yet. 

“If we disagree, it means we don’t know . . . if it could be dangerous,” said Bengio. “And if we don’t know, it means we must act to protect ourselves.

“If you want humanity and society to survive these challenges, we can’t have the competition between people, companies, countries — and a very weak international co-ordination.”

Sam Altman, CEO of OpenAI, the creator of Chat-GPT, doesn’t see a doomsday scenario, and warns about thinking of genAI as human, rather than what it actually is: a tool.

“It’s so tempting to anthropomorphise Chat-GPT, but it’s important to talk about what it’s not as much as what it is,” Altman said in an interview with ABC News. 

“Because deep in our biology, we are programmed to respond to someone talking to us. You talk to Chat-GPT, but really you’re talking to this ‘Transformer’ somewhere in a cloud, and it’s trying to predict the next word and give it back to you.

“It’s so tempting to think this is like an entity, a sentient being that I’m talking to and it’s going to go do its own thing and have its own will and, you know, ‘plan’ with others. But it can’t, and I can’t imagine until the far future.”

Like Gomez and Bengio, Altman sees risks even in the normal use of this technology, and he wants regulators to be on board from the beginning of the journey.

“I would like to see the government come up to speed quickly on understanding what’s happening, get insight into the top efforts, where our capabilities are, what we’re doing.”

And Chat-GPT is not the only player in this space: with Cohere and the recent announcement of Meta’s Llama 2, there is already a growing competitive market. Competitive markets cannot resist regulation, and this will help regulators and legislators to police the market – assuming, as Altman warned, they get up to speed with what it is they need to regulate.

So, who benefits from genAI?

Well, pretty much everybody. Personal productivity is being revolutionised through a new, humanised interface.

Its development is much like that of the iPhone, which we had no idea would create a new interface for the way we interact with technology. The iPhone led to the proliferation of apps, and this is exactly what we’ve got with Chat-GPT, but on a completely different scale.

As for the services we will consume, those will be the ones that integrate humans with technology most quickly. Chatbots at the point of service entry will become routine, but they will provide better service, as they will deliver self-service by opening up your own records – a step on the way to ‘open insurance’.

When necessary, there will be a handoff to real human operatives, but chatbots will become so clever that it will be near impossible for most consumers to differentiate the two.
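Here is a minimal sketch of how that handoff decision might be implemented, assuming the chatbot attaches a confidence score to each draft reply; the threshold, field names and trigger words are hypothetical and would in reality be tuned per insurer.

# Illustrative only: route low-confidence or sensitive conversations to a human.
# Field names, threshold and trigger words are hypothetical.
from dataclasses import dataclass

@dataclass
class BotReply:
    text: str
    confidence: float  # 0.0 - 1.0, the bot's own estimate of answer quality

HANDOFF_THRESHOLD = 0.75

def route(reply: BotReply, customer_message: str) -> str:
    """Decide whether the bot keeps the conversation or a human takes over."""
    sensitive = any(word in customer_message.lower() for word in ("complaint", "bereavement"))
    if reply.confidence < HANDOFF_THRESHOLD or sensitive:
        return "HUMAN_AGENT"   # escalate with the full conversation context
    return "BOT"               # chatbot continues the conversation

print(route(BotReply("Your claim is being processed.", 0.62), "Where is my claim?"))
# -> HUMAN_AGENT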

It could be that the AI is so clever it can work out what personality any particular customer might prefer to deal with. So AI will step out of back-office operations like underwriting, claims and fraud, and transform customer service.

Increasingly, the first point of contact will be a microsite where the adviser’s avatar services the client and can be authorised to act on the adviser’s behalf.

When the adviser is needed, they can step in and offer added value services. This puts the customer in control while improving their overall experience.

With services becoming increasingly black box, standards will be required. In Singapore, the Monetary Authority of Singapore (MAS) has been running projects on AI standards and found that models need to be understood by consumers if consumers are to be protected against detriment.

The role the regulator plays in each jurisdiction may differ, but it is likely to reflect the existing approach to consumer protection within the financial services sector – and in most cases, that means ever increasing levels of compliance for companies operating in that field.

Not the end of the world as we know it, then

It seems that while there are risks, the Terminator nightmare is just that – a bad dream and a cliché for technological revolution, not reality. While criminals will use genAI for nefarious ends, we are a long way from being pushed down the food chain by a machine of our own making. 

There will be considerable disruption and the speed of that disruption may increase, but we should not underestimate the capability of humans to develop things to create value.

As genAI commercialises, it will create a very powerful way for customers to engage with the companies they want to deal with, at the level of service they want.

Private data will become more valuable, and some customers will retain data or choose with whom they wish to share it. Some form of open insurance would make it easier for consumers to share – and for companies to access with consumer consent.

Personalisation will increase, and perhaps regulation will require companies to indicate the degree of human intervention used. Perhaps there will even be a kitemark or standard imposed that indicates ‘100% human generated’ or ‘guided by an expert’, even if the consumer interacts with an avatar.

That may be necessary to ensure consumer trust is retained.

Look out for someone delivering a strong proposition with an expert engine that covers the entire market, to which consumers need only provide access to their policy data. Because retaining the human touch doesn’t mean not exploiting genAI.

In fact, a successful genAI business that uses humans to add to the value chain may offer a compelling mix of operational efficiency and customer service.

Wouldn’t that make a nice change?
