This article originally appeared on ClickZ.
I have spent more than 25 years researching and implementing AI technologies – from the days of IBM’s “discovery server,” which led to “OmniFind” and then became part of its flagship AI brand Watson, to various iterations of text analytics and machine learning applications, to prototypes and deployed projects using conversational technologies.
During these cycles of new technology – all of which were marked by hype, missed expectations and eventual adoption – I have seen patterns of success and failure as the AI landscape has evolved. In The AI-Powered Enterprise, I seek to help businesses deliver AI’s promise of revolutionary change.
Achieving this goal means avoiding the most common mistakes I’ve seen companies make, including:
Executives tend to think AI technology is beyond their ability to understand. The complex programming may be out of reach for many, but the basic functioning needs to be explainable and understandable in business terms.
Businesspeople can understand the principles, and vendors can explain their solutions in an understandable way. Effort is required on both sides.
Confusing what is possible with what is practical is a big one. Every major technology change causes vendors, gurus, consultants and organizations to blur the two. SRI, the company that originally developed Siri as part of a DARPA project, created intelligent assistants 25 years ago.
Approximately $150 million was invested, and that research and development showed what was possible. These tools did not become practical until the last several years (and they still have a long way to go).
All too often, ROI is projected without a real understanding of what it takes to make things work and what achieving the functionality will cost. Startups often attempt things that have not been possible before, selling what I refer to as “aspirational functionality.”
They may believe they can deliver what they are selling, but it will not happen without a great deal of customer investment and pain. “Moon shot” projects (big, ambitious and game-changing) are the ones most likely to fail.
Consider a marketer trying to translate the work of an AI researcher or data scientist into something that the marketplace (and their salespeople) understand. Marketing’s understanding will be an approximation that is then interpreted by the salesperson.
In each translation of the message, the high points are emphasized and the challenges minimized. There is bound to be misinterpretation, miscommunication and, in some cases, misinformation.
Unfortunately, this has led to millions of dollars in wasted funding and career-limiting mistakes for the executives who took those risks.
In many cases, AI technology can be equated to a high-performance supercar. However, the necessary supporting processes are like rutted dirt roads, not the smooth, flat racetrack where the car can perform to its specifications.
Personalized experiences are an example of this. Some marketers segment customers but don’t know how to differentiate the experiences across those segments. The tools and the architecture are ready, but the supporting messaging and content processes are not.
Organizations are experimenting with lots of tools, and some functions – like marketing – have been evolving very rapidly with easy-to-deploy cloud-based tools.
In many situations, this has led to more fragmentation of data, processes and the experience. Legacy systems are difficult to upgrade or replace, and adding AI tools on top of an outdated infrastructure can make things worse.
It’s all about the data. In fact, the data is more important than the algorithm.
Some projects work well in a proof of concept (PoC), but only because the data was hand-selected, integrated, cleansed, enriched and/or curated for the AI. In a production environment, companies don’t have the same conditions or that luxury.
So, how can companies avoid these mistakes? It’s really back to the basics. Executives need to be very clear about their objectives and understand the processes that they want to improve.
AI for AI’s sake – or to check a box to say “yes, we have an AI program” – is an exercise in futility and a drain on resources.
Governance is also key – from deciding on strategic priorities to assigning accountabilities, mobilizing and allocating funding, monitoring preconditions (such as data quality), linking to process and outcome metrics, guiding working agendas, and evangelizing, educating and socializing to catalyze needed changes in work habits and culture.
AI cannot replace entire functions or jobs. Rather, it enhances specific processes that need to be narrowly defined and well understood. You can’t automate a mess, and you can’t use AI to fix a process that your people don’t understand.
It is also critically important to know what success looks like, to measure how well things are working today (as a baseline) and then to measure again after deploying an AI solution.
If you cannot measure it, how will you know it’s working? Be clear about the objective, the process and the success measures. Otherwise getting support and funding will be difficult.
Finally, executives need to understand dependencies – data, technology, process, people – and be clear on what is needed in each domain.
AI has to be integrated into the organizational infrastructure. This means everything from the technology stack to cultural readiness, decision making and governance.
Who is going to own the capability? What are the upstream and downstream impacts – both short term and longer term as capabilities evolve? How will resources be allocated? How will course corrections be made?
AI algorithms (programs) run on data. One of the reasons why AI has become more practical in recent years is that so much data is available to “train” the algorithms.
Training data comes in different forms and structures. For example, training an AI to look for fraudulent transactions requires providing multiple examples of valid transactions along with examples of fraudulent ones.
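To make that concrete, here is a minimal sketch of this kind of supervised training, assuming scikit-learn and a toy feature set; the features, amounts and labels are illustrative stand-ins, and real fraud systems draw on far richer signals.

```python
# Minimal sketch of supervised training on labeled transactions.
# Features and data here are toy stand-ins; real systems use far
# richer signals (merchant, geography, velocity, device, etc.).
from sklearn.linear_model import LogisticRegression

# Each row: [amount_usd, hour_of_day, is_foreign]; label 1 = fraudulent
X = [
    [25.00, 14, 0],   # valid
    [40.00, 10, 0],   # valid
    [980.00, 3, 1],   # fraudulent
    [15.00, 19, 0],   # valid
    [1200.00, 2, 1],  # fraudulent
]
y = [0, 0, 1, 0, 1]

# The "training": learn patterns that separate the labeled examples
model = LogisticRegression(max_iter=1000)
model.fit(X, y)

# Score a new transaction: estimated probability that it is fraudulent
print(model.predict_proba([[850.00, 4, 1]])[0][1])
```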
Alternatively, for a “cognitive” assistant (a really smart bot or virtual assistant), the training takes the form of the actual knowledge needed to answer questions that will be posed to the assistant.
In The AI-Powered Enterprise, I describe a project for the insurance company Allstate, in which thousands of pieces of information had to be broken down and ingested (imported) into the system as answers to questions.
In other words, AI needs to learn about your products, services, solutions – the knowledge architecture that defines your organization’s value in the marketplace.
This is why effective bots are difficult and costly to build. Indeed, projects like the insurance example above can cost upwards of $1 million, but they provide enormous return on investment.
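For illustration, here is a minimal sketch of the ingest-and-match pattern such assistants rely on, assuming scikit-learn’s TF-IDF similarity; the Q&A pairs are hypothetical placeholders (not actual Allstate content), and production assistants use far more sophisticated language understanding.

```python
# Minimal sketch of the Q&A ingestion pattern behind a simple
# assistant: knowledge is broken into question/answer pairs, and an
# incoming question is matched to the closest ingested one.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical ingested knowledge: Q&A pairs curated from source content
qa_pairs = [
    ("What does a homeowner policy cover?", "Dwelling, personal property and liability."),
    ("How do I file a claim?", "File online or call the claims line."),
    ("What is a deductible?", "The amount you pay before coverage applies."),
]

vectorizer = TfidfVectorizer()
question_vectors = vectorizer.fit_transform(q for q, _ in qa_pairs)

def answer(user_question: str) -> str:
    """Return the answer whose ingested question is most similar."""
    scores = cosine_similarity(vectorizer.transform([user_question]), question_vectors)
    return qa_pairs[scores.argmax()][1]

print(answer("How can I submit a claim?"))
```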
Imagine that you are trying to personalize an experience for your customers. The key is identifying their “digital body language” when they come to your site.
That is data – the data exhaust thrown off by sometimes dozens of applications that support their experience. Without those data signals, the AI has nothing to go by.
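As a rough illustration, the sketch below aggregates such signals into a simple interest profile that a personalization model could consume; the event schema and names are hypothetical, and a real stack stitches together far more sources.

```python
# Minimal sketch of turning "digital body language" (raw event signals
# from the applications behind a site visit) into a simple profile.
# Event names and fields are hypothetical placeholders.
from collections import Counter

# Hypothetical clickstream events collected across applications
events = [
    {"visitor": "v42", "type": "page_view", "topic": "pricing"},
    {"visitor": "v42", "type": "page_view", "topic": "pricing"},
    {"visitor": "v42", "type": "download", "topic": "case_study"},
    {"visitor": "v42", "type": "page_view", "topic": "support"},
]

def profile(visitor_id: str) -> dict:
    """Aggregate a visitor's events into interest signals."""
    topics = Counter(e["topic"] for e in events if e["visitor"] == visitor_id)
    return {
        "visitor": visitor_id,
        "top_interest": topics.most_common(1)[0][0],
        "signal_count": sum(topics.values()),
    }

print(profile("v42"))  # -> pricing is the strongest signal
```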
In many organizations, data is disconnected, inconsistent, and in many cases of poor quality. You cannot be successful with AI until you have your data house in order.
Born-digital companies – like the big tech vendors – are obviously amassing enormous wealth by getting their data right.
Financial services firms have been building maturity in analytics by using AI programs as an extension of advanced and predictive analytics. Some retailers are capitalizing on their knowledge of their customer needs and the data that is being harvested throughout their journeys.
Other organizations can learn from these companies by investing in a modern, cloud-based, well-integrated infrastructure. Because technologies have advanced so much in the past several years, there are actually some advantages to being a follower now.
New approaches to harmonizing, cleansing, and managing data have been made more practical by using graph data and knowledge graphs.
These approaches allow the linking of related things throughout the organization. Think of Facebook’s “friend of a friend.” Finding common elements allows people to navigate their friend networks by interests, school, employers, associations, etc.
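The sketch below shows this kind of linking over a toy graph: traversing “knows” edges for friend-of-a-friend and matching a shared attribute as a common element. The nodes and relationships are hypothetical, and enterprise knowledge graphs typically live in dedicated graph databases rather than Python dictionaries.

```python
# Minimal sketch of "friend of a friend" style linking over a graph.
# Nodes and edges are hypothetical; enterprise knowledge graphs link
# customers, products, documents and processes the same way.
graph = {
    "Ann":  {"knows": ["Bob"], "works_at": ["Acme"]},
    "Bob":  {"knows": ["Cara"], "works_at": ["Acme"]},
    "Cara": {"knows": [], "works_at": ["Globex"]},
}

def friends_of_friends(person: str) -> set:
    """People reachable through one intermediate 'knows' edge."""
    direct = set(graph[person]["knows"])
    return {fof for friend in direct for fof in graph[friend]["knows"]} - direct - {person}

def colleagues(person: str) -> set:
    """People linked by a shared 'works_at' value, i.e. a common element."""
    employers = set(graph[person]["works_at"])
    return {p for p, attrs in graph.items()
            if p != person and employers & set(attrs["works_at"])}

print(friends_of_friends("Ann"))  # {'Cara'} - reachable via Bob
print(colleagues("Ann"))          # {'Bob'}  - shared employer Acme
```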
These structures, combined with ontologies (the catalog of data, concepts, products, solutions, processes, and everything else important to the business) become the knowledge scaffolding of the enterprise.
They become the foundation for all AI tools as well as conventional tools and technologies. Indeed, the critical role of these structures and ontologies is the subject of The AI-Powered Enterprise.
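As a toy illustration of what an ontology contributes, the sketch below uses a tiny concept catalog to tag content consistently; the concepts, synonyms and relations are hypothetical placeholders for whatever actually defines a given business.

```python
# Minimal sketch of an ontology as a catalog of business concepts with
# typed relationships, used here to tag content consistently.
# Concepts and relations are hypothetical placeholders.
ontology = {
    "Auto Insurance": {"broader": "Insurance Product", "synonyms": ["car insurance"]},
    "Home Insurance": {"broader": "Insurance Product", "synonyms": ["homeowners"]},
    "Claims Process": {"broader": "Business Process", "synonyms": ["filing a claim"]},
}

def tag(text: str) -> list:
    """Tag text with any ontology concept whose name or synonym appears."""
    text = text.lower()
    return [concept for concept, attrs in ontology.items()
            if concept.lower() in text
            or any(s in text for s in attrs["synonyms"])]

print(tag("Questions about car insurance and filing a claim"))
# -> ['Auto Insurance', 'Claims Process']
```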
We can think of the flood of data employees face as “information overload,” but it is really “filter failure.” Humans have worried about information overload since the invention of the printing press.
The pace of information growth is unimaginable, but people are always filtering out what they don’t need and focusing on what they do need.
This did not happen by accident. Libraries were created to manage information in context and to help people find what they need to learn, to create and to solve problems.
That requires effort – the energy needed to categorize and organize. AI can help with this, but first it needs to be trained in what is important – the products, services, solutions, processes and more.
By properly organizing information needed for a high-value process (for example customer support), the business can make it easy for employees to get what they need without being overloaded. That was what the Allstate virtual assistant referenced above accomplished – but there is no free lunch.
Time, money, resources and energy needed to be invested to make the crucial underwriting and policy information accessible and findable. AI helped, but humans needed to teach the AI about the insurance business.
The same elements that are needed to train humans are needed to train AI. Therefore, the investments made now for people can be fully leveraged for AI. I discuss how to do this in great depth in The AI-Powered Enterprise.