This article originally appeared in IT Professional.
There’s a lot of noise and nonsense about so-called big data, especially about its role in the new and exciting field of “predictive analytics.” But many industries have been using data and analytics for decades.
Consider the insurance industry. Insurance has always been about predictive analytics. What are actuarial tables, loss history analysis, and pricing/risk algorithms if not “predictive”? Moreover, these approaches have always required data analysis, along with the judgment and expertise of underwriters and actuarial experts, to accurately assess and price the risk and balance it with the organization’s risk appetite and business objectives. So what’s truly new in terms of predictive analytics, and what does this mean for the IT industry?
In a way, almost every decision we make in business is predictive. The planning process is predictive. Making resource allocations is predictive. We have an expectation about an outcome when making business plans and setting organizational objectives. Every aspect of a manager’s job is about predicting the future and anticipating problems and outcomes. The IT function has been about enabling that core capability and increasing access to information to better allocate resources (place bets) to attain expected results (gain market share, develop a product that meets customers’ needs better than the competition, engineer some type of solution, and so on).
Thanks to big data, predictive analytics now can be applied to a much wider range of processes in the enterprise, including those that have traditionally relied on human judgment and expertise. We now have more means and mechanisms to collect and measure the results of our decisions and test the collective application of business strategy and performance. Of course, every new tool that helps with this process adds to the arms race of new applications, produces more data, requires output management, and thus results in more complexity.
Take the world of customer acquisition and retention. Until recently, only a few tools needed to be mastered to manage customer processes. Staying with the insurance example, agents and call centers were the primary points of interaction, and call center and agent systems supported those processes. Then the Internet became a point of contact, starting with simple websites offering virtual brochures. Within a short time period, the website became an increasingly complex portal for customer interactions, transactions, self-service, provisioning, claims processing, and more, eventually encompassing almost the entire customer experience.
Now, hundreds of tools are available just for customer communications and marketing processes. Each tool yields data that can be mined for patterns of behavior, and responding correctly to those patterns provides enormous payoffs. Conversely, not using these tools appropriately can cause loss of brand equity and market share with astonishing speed. The key is understanding the customer’s “digital body language”: the attributes, needs, characteristics, life stage, behavior, demographics, and psychographics that the data reveals. The organization must then respond with information that causes customers to act in a way that satisfies their needs and meets the organization’s business objectives.
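As a minimal sketch of what reading digital body language might look like in code, the example below weights a handful of interaction signals into an engagement score that triggers a follow-up action. The signal names, weights, and thresholds are invented for illustration; they are not drawn from any particular product or from the study cited later in this article.

```python
# Sketch: score recent interaction signals ("digital body language") and
# map the score to a next action. All names and numbers are illustrative.

SIGNAL_WEIGHTS = {
    "quote_started": 3.0,
    "quote_abandoned": 2.0,
    "pricing_page_view": 1.5,
    "email_opened": 0.5,
}

def engagement_score(events: list[str]) -> float:
    """Sum the weights of the observed signals; unknown signals count zero."""
    return sum(SIGNAL_WEIGHTS.get(event, 0.0) for event in events)

def next_best_action(events: list[str]) -> str:
    """Pick a follow-up action from the engagement score (thresholds assumed)."""
    score = engagement_score(events)
    if score >= 5.0:
        return "route to agent for outbound call"
    if score >= 2.0:
        return "send tailored follow-up offer"
    return "continue nurture campaign"

print(next_best_action(["pricing_page_view", "quote_started", "quote_abandoned"]))
# -> route to agent for outbound call  (score 6.5)
```

A real system would learn the weights from outcome data rather than hand-coding them, but the shape of the problem is the same: turn raw interaction events into a prediction and act on it.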
This is a new way to view the work that organizations have operationalized—understanding risks and predicting outcomes—in terms of additional dimensions of customer interaction. This approach can be applied to loyalty and retention, pricing, marketing segmentation, conversion, and quality measures across the spectrum of internal process performance. Predictive data can also be applied to collaboration, knowledge processes, and even search (a search result is merely a prediction of what someone needs based on limited clues of keywords and perhaps some knowledge of the context).
Although relatively straightforward ways of understanding choices and measuring outcomes are now available, it’s easy to be overwhelmed by the range of technologies, the number of processes and data sources, the complexity and volume of the data, and the rapidity of change. One place to start is with the desired end state or target process; however, that approach must be aligned with the larger goals of the business area and the strategy of the organization. Data analytics is evolving and maturing, and tools and capabilities are available to provide a competitive advantage, but organizations must be willing to methodically understand, apply, and leverage the underlying data before adding complex and costly programs to move to the next level. Organizations have always struggled to link interventions and programs to measured outcomes in order to optimize spending; the new tools and data sources finally put that linkage within reach.
The problem with all of this data moving rapidly through the organization and from many different sources is exactly as one would expect—making sense of that information and keeping it in context. That isn’t an easy task. Most organizations can barely leverage the information that they have during day-to-day operations. A recent study by the International Customer Management Institute indicated that more than 25 percent of customer contact centers are suffering from information overload.1 Furthermore, the report noted that “an even higher percentage of service reps who have access to customer information don’t use it to the fullest to support calls.”1 The report goes on to say the following:
Nearly 48 percent of contact centers collect satisfaction data, but few use this information. About 36 percent of agents don’t collect data around customer satisfaction. Some 51 percent of call centers do not ask for customers’ [communication method] preference, while 32 percent of contact centers report collecting preferred [communication method] information from customers.
This is an interesting turn of events. The study reveals numerous issues that all boil down to the dilemma that organizations face when they have the data but don’t know how to make it actionable. Multiply this scenario by each stage of the business lifecycle, and it becomes clear that there’s tremendous potential for leveraging the various data sources and streams that are available through increasingly powerful and sophisticated tools.
The remedy to this situation is to focus on business value. Where can analytics have direct business impact? Where are the problems and pain points that, if solved, can provide unequivocal value? Here, I consider these questions using the insurance industry example.
The concept of analytics is well understood in risk analysis and pricing. However, many obstacles can get in the way of leveraging analytics for purposes such as pricing strategies.
Legacy systems and fragmented processes limit the potential for optimization through analytics. There might be data sources to analyze, but getting that data into the hands of the right people who can make decisions can be an enormous challenge. Collaboration is needed across the departments involved in the pricing function, but these departments use siloed applications and have a hard time harvesting data from test-pricing scenarios when online users reject offers. Analytics can be difficult to put into practice without the tools and processes needed to support collaboration and integration. It would be like having a Ferrari in a place with only bicycle paths instead of smooth paved roads: there would be no way to take advantage of its high-performance capabilities.
Underwriting is closely linked to pricing. Pricing models combine risk profiles and risk factors with actuarial data to develop products. Underwriting classifies and quantifies specific customers and opportunities according to attributes identified in actuarial and pricing rules and mechanisms. Everything from developing underwriting rules and applications to developing performance metrics to performing mix analysis, market comparisons, exposure management, loss analyses, and segmentation is part of typical baseline analytics. However, new information and insights are available from a variety of sources, ranging from government sources to proprietary providers to social media sites (which offer social network graph information and Web behavior data). The ability to rapidly change modeling parameters (along with more complex analytics frameworks and data sources) requires enterprises to push the envelope with regard to current capabilities.
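As a concrete, deliberately simplified illustration of the pricing side, the sketch below implements a multiplicative rating model: a base rate adjusted by a relativity for each rated attribute. The base rate, attribute names, and relativity values are invented for this example; a real rating plan would derive them from actuarial and loss-history analysis.

```python
# Sketch: multiplicative rating model. Each relativity scales the base
# premium up or down for one risk attribute. All figures are illustrative.

BASE_RATE = 500.0  # hypothetical annual base premium

# Hypothetical relativities of the kind produced by actuarial analysis.
RELATIVITIES = {
    "driver_age": {"16-25": 1.60, "26-65": 1.00, "65+": 1.15},
    "territory": {"urban": 1.25, "suburban": 1.00, "rural": 0.90},
    "prior_claims": {0: 0.95, 1: 1.10, 2: 1.40},
}

def price_policy(risk: dict) -> float:
    """Multiply the base rate by the relativity for each rated attribute."""
    premium = BASE_RATE
    for attribute, value in risk.items():
        premium *= RELATIVITIES[attribute][value]
    return round(premium, 2)

print(price_policy({"driver_age": "16-25", "territory": "urban", "prior_claims": 1}))
# -> 1100.0  (500 * 1.60 * 1.25 * 1.10)
```

The design point is that each relativity isolates one risk attribute, which is what keeps such a model auditable and easy to update when modeling parameters change, exactly the kind of rapid change the new data sources demand.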
The bottom line is that predictive analytics offers great possibilities and will be a game changer for many enterprises across all industries. The reality is that most organizations need to work on the basics—the “blocking and tackling” of core operations—and learn to leverage operational data to improve and optimize those processes before embarking on expensive and risky big data programs. Big data is getting bigger, faster, and more complex. Getting one’s house in order by linking customer metrics to performance of internal supporting functions (such as the customer service process for call centers) will be a great place to begin.
Unstructured claims information is yielding to text analytics and business process mining, opening the door to new efficiencies and effectiveness. Claims can never be completely automated; some part of the process involves humans. Whether the requirement is coding health-care procedures using ICD-10 codes or producing first notice of loss forms, human judgment, unstructured text, and manual processes are part of the picture.
However, predictive analytics and big data text analytics approaches can be used to improve these judgment-dependent, knowledge-intensive tasks. Call center and claims processing agents require access to detailed policy content and guidelines to handle claims properly, and analytics tools can help by mining online reference applications, self-help systems, and self-service systems for actionable knowledge. Furthermore, there’s the ever-expanding competition between perpetrators of fraud and fraud-detection approaches. Such approaches require advanced mechanisms for data and text analytics to reveal the complex patterns of interaction that are the telltale signatures of nefarious activities. (These kinds of approaches are similar to those used by the US National Security Agency for its data-monitoring programs.)
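Fraud screening is one place where a toy example helps show the mechanics. The sketch below, using scikit-learn (an assumption; the article names no tooling), scores free-text claim narratives for fraud risk with a simple bag-of-words classifier so that high-scoring claims can be routed to a human investigator. The narratives, labels, and threshold are fabricated for illustration; a production system would need far richer features, far more data, and careful validation.

```python
# Sketch: text analytics for fraud screening on claim narratives.
# Training data and labels below are invented (1 = confirmed fraud).

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

narratives = [
    "vehicle stolen overnight no witnesses receipts lost",
    "rear-ended at stop light police report filed",
    "total loss fire shortly after policy inception",
    "hail damage to roof documented by adjuster photos",
]
labels = [1, 0, 1, 0]

# TF-IDF turns free text into features; logistic regression scores them.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(narratives, labels)

# Route high-scoring new claims to an investigator rather than auto-paying.
new_claim = ["house fire one week after coverage increase no receipts"]
fraud_probability = model.predict_proba(new_claim)[0][1]
print(f"fraud score: {fraud_probability:.2f}")
```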
Most insurance organizations don’t consider themselves marketers first. However, automating marketing efforts, segmenting customers, and analyzing buyer attributes are just as essential to success as pricing and actuarial analytics. These capabilities need to be part of the enterprise toolkit and, if effectively deployed, can significantly improve the efficiency and effectiveness of marketing activities, leading to more competitive positioning and improved business results. The same lessons apply to any business. These days, the chief marketing officer is influencing technology choices more than the chief information officer.
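To make the segmentation point concrete, the sketch below clusters customers on a few behavioral attributes so that campaigns can be tailored per segment. The attributes and data are assumptions made up for the example, and k-means is just one common choice of clustering method.

```python
# Sketch: marketing segmentation by clustering customer attributes.
# Customer rows and attribute choices below are invented for illustration.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical customers: [annual premium, policies held, web visits/month]
customers = np.array([
    [600, 1, 2],
    [650, 1, 1],
    [2400, 3, 8],
    [2200, 4, 10],
    [1200, 2, 4],
    [1100, 2, 5],
])

# Standardize so no single attribute dominates the distance metric.
features = StandardScaler().fit_transform(customers)
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(segments)  # one segment label per customer, e.g., [0 0 1 1 2 2]
```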
The customer dynamic has changed forever. Knowledge of customer lifecycles, buying behaviors, and individual needs (even in business-to-business contexts) will allow forward-looking businesses to leverage core analytics strengths in new ways and stay ahead in an ever-shifting competitive, demographic, economic, and technological landscape.
The clock speed of these processes is continually increasing; picking your unique combination of competitive strengths and optimizing the analytic dimensions will help secure your place in the data-driven marketplace.
Reference
1. L. Sullivan, “Data Overload Stifling Customer Service Improvements,” MediaPost, 27 Nov. 2013; www.mediapost.com/publications/Articles/214310/data-overload-stifling-customerservice-improvemen.html.