
How to succeed at the insurance data monetization challenge

If you have ever wondered why we talk about Big Data, just take a look at these numbers on the ever-increasing volume of data:

  • According to a report from McKinsey Global Institute (1), the volume of data continues to double every 3 years as information pours in from digital platforms, wireless sensors (Internet of Things …), virtual reality applications and billions of mobile phones.
  • A report from IBM (2) states that 90% of the data in the world today has been created in the last two years alone, at 2.5 quintillion bytes of data a day. We need to invent new units of measure to express the volume of data being created: 2.5 quintillion bytes is roughly 2,500,000 terabytes (TB), with 1 TB equal to 1,024 gigabytes (GB).

It is not surprising either that, to explore these huge amounts of data, data scientist is becoming the most sought-after profile today. Glassdoor (3) ranked data scientist as the #1 job in the United States in 2018, for the third consecutive year.

Insurance User Experience is fuelled by data

We see in the insurance industry that data is not only required for internal purposes, such as aggregated data for the regulator or the optimisation of core insurance processes, but also to fuel the user experience. A recent study from Willis Towers Watson (4) showed that, besides external credit attributes and weather information, property and vehicle characteristics are the most valuable internal and external data sources for an insurance company. Identifying and evaluating the insured amount from a postal address, or knowing all features and options of a vehicle just from its license plate, shortens and simplifies the customer buying journey drastically. Nice examples on the Belgian market are the Quick Quote app from Generali (5) and the “app-normal” car app from Belfius (6).
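To illustrate that shortened buying journey, here is a minimal sketch of enriching a quote request from a license plate, assuming some external vehicle-data provider is available; the `registry.lookup` call and the field names are hypothetical, not a real API.

```python
# Minimal sketch (hypothetical names): enrich a quote request from a license plate
# so the customer does not have to type in the vehicle characteristics.
from dataclasses import dataclass
from typing import Optional


@dataclass
class QuoteRequest:
    license_plate: str
    make: Optional[str] = None
    model: Optional[str] = None
    catalogue_value: Optional[float] = None


def enrich_from_plate(request: QuoteRequest, registry) -> QuoteRequest:
    """Fill in vehicle characteristics from an external data source (assumed provider)."""
    vehicle = registry.lookup(request.license_plate)  # hypothetical external lookup
    request.make = vehicle["make"]
    request.model = vehicle["model"]
    request.catalogue_value = vehicle["catalogue_value"]
    return request
```

The same idea applies to property insurance, where a postal address would be the key used to look up building characteristics.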

Is technology the answer to unleash the full potential?

On top of this, we have the fast-growing capabilities of new technologies such as Artificial Intelligence (AI), Machine Learning (ML) and Cloud Computing. Many insurance companies have already adopted Cloud Computing to cover their need for immediate scalability in Solvency II calculations. Another recent study from Willis Towers Watson (4) tells us that insurance companies are now considering big data platforms such as Amazon Web Services (AWS), Microsoft Azure and Hadoop. The main question is whether all these technological capabilities will unleash the full potential of the data monetization opportunities for insurance companies.

When asked what the three biggest challenges are preventing the insurance company from becoming more data-driven (4), we see that technological concerns such as “Lack of tools to analyse data” score only 12%, compared to data accessibility (41%), IT/information services bottlenecks and lack of coordination (41%), conflicting priorities/executive buy-in (33%) and data volume/quality/reliability (33%).

So, here’s what is really slowing us down in the data monetization challenge:

  1. Organizational issues: lack of expertise, buy-in and conflicting priorities
  2. Data accessibility and availability in general
  3. Data Quality and reliability

Let us take a look at how we can address those pain points.

Business outcomes require enterprise-wide continuous engagement

Data & analytics sounds very technical, and many believe you should leave it to the IT department. However, if you really want a business outcome in this domain, you need to organise the whole insurance company around its data. Many insurance companies already have some sort of data & analytics office in place, sometimes as part of the marketing division with a “customer analytics” department, or as part of the risk & finance function. A truly data-driven insurance company should first of all have a Data & Analytics Office headed by a Chief Data & Analytics Officer reporting at CEO or COO level.

As stated by Deloitte (7), the role of the Chief Data Officer in financial services should evolve from that of a marshal and steward to that of a business strategist. Unfortunately, most insurance companies are still in the early stages.

Making an insurance company data-centric is truly an ongoing, enterprise-wide endeavour that requires the continuous collaboration of every department. One of the key tasks of this new entity is to ensure data governance. First of all, the information owners need to spend time registering and describing all information items, together with their data quality rules, in a single unified central data catalog. Appointing an information steward in every department that creates and updates data is one of the key elements to consider. This single data catalog should later make it possible to use a self-service reporting tool by simply dragging and dropping the data, just as you would “buy” any item in a web shop and put it in your shopping basket.
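To make this concrete, here is a minimal sketch of what a single entry in such a central data catalog could carry; the field names and example values are illustrative assumptions, not a product schema.

```python
# Minimal sketch of one information item in a central data catalog,
# registered by its information owner and maintained by an information steward.
from dataclasses import dataclass, field
from typing import List


@dataclass
class QualityRule:
    dimension: str    # e.g. "completeness", "validity"
    expression: str   # plain-language rule the monitoring process will check


@dataclass
class CatalogEntry:
    name: str                 # business name of the information item
    definition: str           # description written by the information owner
    owner: str                # department that owns the item
    steward: str              # information steward keeping the entry up to date
    source_system: str        # system where the item is created
    quality_rules: List[QualityRule] = field(default_factory=list)


policy_start_date = CatalogEntry(
    name="policy_start_date",
    definition="Date on which the insurance cover takes effect",
    owner="Underwriting",
    steward="underwriting.data@insurer.example",   # hypothetical contact
    source_system="P&C core system",
    quality_rules=[
        QualityRule("completeness", "must not be null"),
        QualityRule("validity", "must not be before the policy signature date"),
    ],
)
```

Defining the quality rules together with the item is what later allows the same catalog to drive quality monitoring and self-service reporting.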


Putting in place and expanding this kind of organisation in an insurance company is not going to happen overnight. You need to approach it step by step, expanding the reach of the human change and, in doing so, growing the maturity of the insurance company's data culture.


Going from plateau to plateau, you manage the complexity and deliver intermediate states. Start with a single unified data catalog (some departments, especially in the risk & finance area, will most likely already have a data dictionary in place for their own needs), built with the active help of the information owners and information stewards appointed in every department. By defining the data quality rules for every information element, you can later establish processes to monitor those rules. The human change expands in reach from plateau to plateau and also addresses the fundamental question of every change management effort: what's in it for me?

The need for a robust & integrated industrial data delivery chain

Looking at the data delivery chain in an insurance company today, we see a lot of challenges, first of all in the legacy source applications:

  • You will find many technologically outdated, product-oriented core insurance systems that are often maintained by a handful of people who are too busy with maintenance tasks and rarely given the priority to handle data quality issues. This results in workaround fixes along the data delivery chain.
  • Poorly automated data integration chains that lack accuracy, integrity and speed:
    • with a regulator that is becoming more and more intrusive, with regulations such as BCBS 239, Circular NBB_2017_27 and IFRS 17, a robust and integrated data delivery chain is no longer optional;
    • many insurance companies already have ad-hoc analytics capabilities in place, such as churn analysis for car insurance or lead generation for outbound cross-selling. The challenge, however, is how to close and automate the loop and move into a real marketing automation loop, with the goal of proposing the next best action based on near real-time data.

When re-thinking or even re-building the data delivery chain, there are 5 basic rules to follow:

  1. Steer it all from the business side, with a shadowing IT organization: the business organization needs to lead, including on more technical subjects such as the logical data architecture and when to use which data modelling technique.
  2. Foster the essence of agile project delivery: these kinds of projects cannot be delivered with a pure agile methodology, in the sense of completing the design, build and test activities in a couple of sprints. Nonetheless, you should deliver incrementally and, above all, co-locate the teams. This must be considered from day 1.
  3. Build end-to-end vertical slices while keeping the big picture in mind by using a common insurance industry data model. Strangely enough, on the technology marketplace today you will find global players that offer an insurance industry data model, but because it is too generic, it takes too much time to integrate. Looking for local or regional players to take over the data model of your existing insurance package is a far better idea.
  4. Do it the “Chinese way”: design from right (“output”) to left (“data sources”). Knowing the end purpose of a group of attributes determines which choices to make when modelling that group.
  5. Approach the source data mapping as a business activity in order to save time: too often, it is left to technical profiles to figure out, by trial and error, how to map the source attributes onto the common industry data model. Getting the business involved in this important activity avoids endless iterations and gets the mapping right the first time (see the sketch after this list).
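As a sketch of that last rule, the source-to-target mapping can be kept as a simple declarative table that business analysts review, instead of being buried in ETL code; the system names, attribute names and transformation rules below are purely illustrative.

```python
# Minimal sketch: a business-reviewable source-to-target mapping
# (source system, source attribute, industry-model attribute, transformation rule).
SOURCE_TO_INDUSTRY_MODEL = [
    ("P&C core", "POL_STRT_DT",  "Policy.startDate",        "parse as YYYYMMDD"),
    ("P&C core", "PREM_AMT_CTS", "Policy.annualPremium",    "divide by 100 (stored in cents)"),
    ("CRM",      "CUST_LANG",    "Party.preferredLanguage", "map {'N': 'nl', 'F': 'fr'}"),
]


def unmapped_attributes(mapping, industry_model_attributes):
    """List industry-model attributes no source feeds yet: a gap for the business to resolve."""
    mapped = {target for _, _, target, _ in mapping}
    return sorted(set(industry_model_attributes) - mapped)
```

Keeping the mapping in this form makes the gaps and the disputed transformation rules visible early, before any integration code is written.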

Data quality and reliability are always top of mind

Once you have established a robust data delivery chain, you can tackle data quality and reliability on a structural basis. Protecting data quality with clear quality rules for every information item is key. Define those rules by taking the seven data quality dimensions into account.

[Figure: the seven data quality dimensions]
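As a small illustration, here is a minimal sketch of rule checks for two of those dimensions (completeness and validity), assuming policy records arrive as plain Python dictionaries; the attribute names and toy values are illustrative.

```python
# Minimal sketch of two data quality rule checks returning a score between 0 and 1.
from datetime import date


def completeness(records, attribute):
    """Share of records where the attribute is filled in."""
    filled = sum(1 for r in records if r.get(attribute) not in (None, ""))
    return filled / len(records) if records else 1.0


def validity_start_before_end(records):
    """Share of records where the policy start date is not after the end date."""
    valid = sum(1 for r in records if r["start_date"] <= r["end_date"])
    return valid / len(records) if records else 1.0


policies = [
    {"start_date": date(2018, 1, 1), "end_date": date(2019, 1, 1), "postal_code": "1000"},
    {"start_date": date(2018, 6, 1), "end_date": date(2018, 3, 1), "postal_code": None},
]
print(completeness(policies, "postal_code"))   # 0.5
print(validity_start_before_end(policies))     # 0.5
```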

Managing and monitoring data quality is a continuous process that involves everybody in the insurance company. Here is a basic approach on how to systematically increase your data quality:

  1. Build multi-dimensional drill-down dashboards on top of the central data integration layer. These dashboards should tell you, along multiple dimensions, what the quality level is given the defined data quality rules. Monitor not only individual data elements but also, for example, the overview per department (see the sketch after this list). Quality is not only a matter of tools and processes, but also of people, and of people managing people.
  2. Use technical data lineage tools to speed up root cause analysis: these tools visualize, starting from the data exploitation layer, how an attribute is transformed along the data delivery chain and allow you to pinpoint issues more easily.
  3. Identify Critical Data Elements (CDEs) in order to steer priorities: if this has not already been done when entering the information elements into the central data catalog, determine from the output which data elements are key for most of the reports. This will drive your efforts.
  4. Fix issues at the source instead of ad-hoc patching along the data delivery chain, where data warehouses too often have to implement workaround fixes.
  5. Use ITIL-based incident & problem management processes. Do not re-invent processes that already exist and can be leveraged easily.
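To illustrate the dashboard roll-up from point 1, here is a minimal sketch assuming the results of the quality checks land in one table; the column names and scores are illustrative.

```python
# Minimal sketch of the departmental roll-up behind a drill-down quality dashboard.
import pandas as pd

checks = pd.DataFrame([
    {"department": "Underwriting", "data_element": "policy_start_date", "dimension": "completeness", "score": 0.99},
    {"department": "Underwriting", "data_element": "postal_code",       "dimension": "completeness", "score": 0.87},
    {"department": "Claims",       "data_element": "claim_amount",      "dimension": "validity",     "score": 0.95},
])

# Overview per department, drillable down to individual data elements and dimensions.
per_department = checks.groupby("department")["score"].mean()
per_element = checks.groupby(["department", "data_element", "dimension"])["score"].mean()

print(per_department)
print(per_element)
```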

Getting there also means: iterate, iterate and iterate in a continuous improvement cycle.

Mass Personalisation

Once data is integrated and quality is under control, you will find that AI and ML are much easier to use than you thought. Many analytics tools, including cloud-based tools and vendors, offer these kinds of out-of-the-box capabilities. The basic art of analytics should not be overrated: it is about looking at customer data through the who (profiles, demographics), the how (behavioural data, channel interaction), the where (geolocation, travel, living and work location) and the when (lifetime events, periods of interaction during the day or year). Think of it as partly replacing the personal knowledge of the insured that the broker or bank employee has or had, in order not only to give the best possible service but also to close cross-sell and upsell opportunities. By capturing behavioural data and even unstructured data (think of emails) from the customer's digital footprints, via your broker or directly through your digital channels, you are both able to serve your customer even better than before, in a mass-personalised way.
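As a final illustration, here is a minimal sketch of turning that who/how/where/when view into a next-best-action propensity score; the features, the toy data and the choice of a simple logistic regression are illustrative assumptions, not a prescribed model.

```python
# Minimal sketch: score a customer's propensity to accept a cross-sell offer
# from who / how / where / when features.
from sklearn.linear_model import LogisticRegression

# One row per customer: [age, channel interactions last 90 days,
#                        lives in urban area, months since last life event]
X = [
    [34, 12, 1, 2],
    [57,  1, 0, 48],
    [41,  7, 1, 6],
]
y = [1, 0, 1]  # accepted a home-insurance cross-sell offer in the past

model = LogisticRegression().fit(X, y)

# Propensity for a new customer; the "next best action" is only proposed
# when this score clears a threshold agreed with the business.
print(model.predict_proba([[29, 9, 1, 1]])[0][1])
```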
