How many of you out there remember when testing was a purely manual task? Come on, don’t be shy. Put your hand up.
In many ways, insurance companies were the pioneers, as they were some of the first businesses to computerize. Testing in those days was always left as the last step. In many cases it was the first time users encountered a computer. Department heads were often forced by senior management to give up their best people for weeks on end to test the SYSTEM. How many of you were among those conscripts?
Yep; how I long for those days – NOT.
But then again, the stakes weren’t that high. Back then, the end user was a claims clerk, an underwriter or someone from finance – always a person from inside the insurance company. Customers were mostly insulated from system issues – all correspondence was by mail, fax, phone or in person. Long turnaround times were the norm. If the system failed, there were always legions of people to type the renewal certificates, statements or claims letters manually.
Y2K and Professional Testing
Up until the mid-’90s, testing had been considered an in-house job – too difficult and too expensive to hand to outside firms. The in-house testing, however, carried a lot of baggage:
- Testing was done in business silos – each team (underwriting, claims, agency and so on) created its own test cases, leading to duplication and complexity,
- Testing was cumbersome as testers needed exhaustive planning to ensure they created the right data to test different scenarios – at times stretching decades into the future for life insurance,
- The end-user experience was limited to how fast a green-screen could be completed,
- 100% test coverage was never achieved – in the majority of cases there was not even an agreed way to measure it,
- End-to-end integration testing was not considered part of manual testing,
- Coverage was limited with little traceability to business requirements.
However, the Y2K bug changed everything, giving rise to a whole industry focused on making software testing a full-time professional activity. To be commercially viable, the new testing companies needed to be smart, innovative and transparent. Smart in how they overcame the domain knowledge gaps. Innovative in the ways they automated the testing process. And transparent in how they presented the fruits of their work and differentiated from their competitors – after all, if testing is done well, it becomes invisible.
This testing “arms race” of the ’90s was most apparent in India, with firms pouring huge resources into people skills, industrialized processes, test case repositories, and stitching together best-of-breed tools into comprehensive quality assurance platforms. The results were stunning:
- Faster time to market due to realistic, pre-existing test cases that were being continually augmented after every engagement
- Higher level of quality through the ability to manage and reuse test case data
- Significant reductions in costs through the use of automation and management by exceptions
- Greater stability as end-to-end, complete application coverage and full traceability could be achieved
Overall, insurance products could be delivered 40% faster, while quality and stability improved significantly. One consulting firm claimed that this new industrial approach represented a 1,800% improvement in efficiency over previous methods.
However, while software testing was evolving to industrial scale, the focus of insurance software was moving from the back office to the front office – from the green screen to the web front-end.
The Dawn of the Dot-com Insurer
As the World Wide Web began to take shape in the late ’90s, insurance companies started to open up their systems. They launched insurance websites that gave customers some access to information and functions that had previously been available only to staff. And with that access, the testing stakes were immeasurably raised.
Gone were the days when problems could be covered up; failures in testing were now publicly exposed. If the system was unavailable or functionality wasn’t working correctly, it was out there for everyone to see. This gave rise to a whole new class of testing focused on the customer’s ease of use, the quality of the experience and end-to-end journeys.
And just as insurers were opening up to customers, the customers themselves were becoming far more discerning and technically savvy, with raised service expectations.
Mobility and the App Economy
Arguably, Nintendo and Apple created the perfect storm in evolving customer expectations. Nintendo, through the ’90s and ’00s, trained a generation of young adults on the Game Boy, honing their skills and expectations of handheld devices. Apple then took over with the iPod, morphing from games to music. Finally, with Apple’s initial release of the iPhone in 2007, the mobile App economy was born.
Only in the decade since have we fully realized the impact on society that Steve Jobs’ creation has wrought. The insurance industry now releases its own Apps into this sophisticated market with great trepidation, knowing customers have high expectations and the average life expectancy of an App is measured in mere months.
Customers expect Apps to function flawlessly every time. Even more critical are the user experience, ease of use and simplicity of completing transactions. And the devices and operating systems on which your insurance App is installed are often the deciding factor in whether your customers smile or frown.
Complexity and the customer experience challenge
To answer this challenge, new digital testing labs are being established. Within these, we find upwards of 500 mobile devices with a variety of operating systems installed. This is needed to mimic what consumers are really using. If you think this is excessive, consider that Apple has around 20 models of iPhone and iPad still in use. Each of these could be running one of four recent iOS versions – 80 combinations.
Within the Android world, we have an even larger number of combinations, due to the proliferation of popular brands of phones and tablets, e.g. from Samsung, Sony, HTC, LG and Google to name just a few. Each brand carries its own range of models, and each model supports a variety of customized versions of the Android operating system.
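To see how quickly these matrices grow, here is a minimal sketch of the device/OS combination count. The model names and version labels are invented placeholders, not a real lab inventory – only the rough counts (about 20 Apple models, four recent iOS releases) come from the discussion above:

```python
from itertools import product

# Hypothetical inventory for a digital testing lab.
# Model and version names are illustrative placeholders.
ios_devices = [f"Apple model {i}" for i in range(1, 21)]  # ~20 iPhone/iPad models
ios_versions = ["iOS A", "iOS B", "iOS C", "iOS D"]       # 4 recent iOS releases

# Every (device, OS version) pair is a configuration to test.
ios_matrix = list(product(ios_devices, ios_versions))
print(len(ios_matrix))  # 80 combinations for Apple alone
```

Repeat the same cross-product across Android brands, models and customized OS builds, and the need for hundreds of physical devices in a lab becomes obvious.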
To assure a high-quality experience for customers at all times, testing and monitoring must be continuous. Some testing companies have created “synthetic” users to continually sample the performance and reliability of the Apps they roll out. By analyzing this data, a standard performance “fingerprint” is created for each App. Further real-time analysis enables small deviations to be detected, triggering investigations to find the cause. This enables issues to be corrected before they escalate into problems for the wider App user community. Additionally, the contact centre can reach out to affected customers, rescuing the user experience before irreparable harm has been done. This can mean the difference between a customer defecting to a competitor or remaining with the insurer as a brand advocate.
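One simple way to implement such a fingerprint is a statistical baseline with deviation alerts. The sketch below is an illustration of the general idea only – the sample response times, the three-sigma threshold and the function names are all assumptions, not any vendor’s actual method:

```python
import statistics

def build_fingerprint(samples):
    """Baseline 'fingerprint': mean and standard deviation of
    response times (seconds) gathered by synthetic-user probes."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_deviation(fingerprint, new_sample, threshold=3.0):
    """Flag a probe result that strays more than `threshold` standard
    deviations from the baseline, triggering an investigation."""
    mean, stdev = fingerprint
    return abs(new_sample - mean) > threshold * stdev

# Illustrative baseline built from synthetic-user probes (seconds)
baseline = [0.42, 0.39, 0.41, 0.44, 0.40, 0.43, 0.38, 0.41]
fp = build_fingerprint(baseline)

print(is_deviation(fp, 0.41))  # normal response -> False
print(is_deviation(fp, 1.50))  # sharp slowdown -> True
```

In practice a production fingerprint would cover many more dimensions (error rates, crash counts, per-device timings), but the principle is the same: establish what “normal” looks like, then alert on small departures before real users notice.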
The State of the Art – IoT
As we move into 2017, insurers are gradually embracing IoT (the Internet of Things), a diverse range of new data sources to help them underwrite and proactively manage risk. These solutions move beyond the mobile phone and tablet and introduce a new set of specialized challenges. From in-vehicle, to on-wrist, to in-home, a vast array of new devices is being pressed into the service of insurers seemingly every month. Now the focus is on protecting the flow and quality of data from these sensors back to the insurer. This places a heightened emphasis on compatibility and communication standards such as Bluetooth, Wi-Fi, and the 2G, 3G and 4G mobile network standards.
The connected car is one example of this specialization, where diagnostic hardware is required to simulate all of the inputs and outputs of a passenger vehicle. New testing frameworks for managing firmware settings and device configurations become critical. For health insurance, App-to-App communication is becoming important, as smart watches and fitness trackers will usually pass data via Apple’s HealthKit, Google Fit or S-Health from Samsung, and then on to the insurance App. As insurance programs move beyond step counts, new specialized devices will be introduced. These will capture a wider variety of data, such as blood sugar levels or heart rate variability, and place greater emphasis on clinical accuracy.
Additionally, compliance with regulations and quality standards, which vary by line of business, must be built into testing platforms – e.g. HIPAA could apply to health and fitness data, but definitely not to motor telematics data.
The bottom line
Without doubt, testing is now a job for professionals using industrialized digital testing labs and quality assurance platforms. If you are still testing as though it’s the ’90s, if you are concerned about customer experience, or if you are introducing new insurtech capabilities, then it’s time to consult with experts in quality assurance. See how your whole operation can be improved and your customer satisfaction de-risked by using the latest testing techniques and platforms. Remember, your greatest test begins after your insurance App is released.
I hope you had as much fun reading this column as I did writing it. If you did, please let me know by subscribing. Once again I’d like to thank the team at Quality Kiosk who helped educate me on the latest digital testing techniques and instilled in me their passion for testing.
If you want to see more of my “off the wall” comments, you can follow me on Twitter @ITInsuranceGuy. Finally, I am keen to hear what you have to say, so please email me at email@example.com and share your views, insights and stories about technology in the insurance industry. I look forward to interacting with you.
Andrew Dart is an insurance thought leader, telematics practitioner and editor of the “Insurance Connected” monthly column for The Digital Insurer.