While predictions are difficult to make, especially about the future, there is no doubt that technical capabilities for making computers more responsive to humans are accelerating and producing powerful new applications.
Different terms are used to describe these types of systems. AI typically refers to a system that emulates human approaches to information processing in order to provide answers or solve problems. Intelligent agents are algorithms that interpret requests and provide responses within a narrower domain or for specific tasks, producing unique results for questions that have not been pre-programmed. Cognitive Computing is a broad term that incorporates many AI capabilities and integrates a variety of mechanisms to allow for continuous learning and improvement.
As is true with many applications, we lose sight of AI capabilities once they become ubiquitous and embedded. In fact, many applications that are taken for granted these days contain AI – everything from speech recognition to machine vision used by robotic systems and, though still in development, self-driving cars. Many everyday applications were developed in AI laboratories but are no longer considered AI: the computer mouse, financial trading systems, aircraft simulators, computer-assisted design, and email spam detection were all once considered AI[1]. Stephen Gold, World Wide VP of Marketing at IBM, goes so far as to say that almost all of the core technologies that are part of Watson have been around for many years[2].
At the core of AI applications is the need to translate human needs and intent into something that the computer can provide or respond to. Think of this as the ultimate in usability. In some cases the function or capability is sifting through more data than a human can handle and making sense of that data to provide an answer. Cognitive Computing is a newer description of software that enables more powerful capabilities, including those where the system is able to:
Susan Feldman of cognitive computing consultancy Synthexis describes the following characteristics of Cognitive Computing:
These are very ambitious and comprehensive lists. These capabilities are present in a number of commercial applications, albeit with limited practical implementation. Algorithms to achieve these outcomes function best within narrow domains and contexts. General-purpose AI (like that in current science fiction) is a very long way away.
Hundreds, if not thousands, of solution providers are springing up in this space, which will make the job of the CIO and CMO even more difficult and complex. A recent blog post by investor Shivon Zilis describes a landscape of technologies and classifies them according to her own taxonomy: Core Technologies, Rethinking the Enterprise, Rethinking Industries, Rethinking Humans/HCI, and Supporting Technologies (http://www.shivonzilis.com/machineintelligence). This landscape is bound to garner as much attention as Lumascape’s or Chiefmartec’s in their respective focus areas.
Zilis analyzed over 2,500 companies (more than the 2,000 marketing technology companies in Chiefmartec’s latest landscape) in order to settle on the couple of hundred represented in her landscape. As one reads through her blog post, it is difficult to see meaningful and tangible enterprise applications. The statement that “business models tend to take a while to form, so they need more funding for longer period of time to get them there” suggests that a clear value proposition for the enterprise is not quite there at the moment. The conclusion is that “they’re coming”: “DeepMind blew people away by beating video games. Vicarious took on CAPTCHA”. Playing video games and beating the “visual Turing test” are significant achievements. For those businesses that compete on playing video games and passing Turing tests, these technologies are must-haves.
Of course there are many practical applications of AI, intelligent agents and Cognitive Computing; however, these are applications developed over a period of months and years with millions of dollars of development funding. There are very few that can be considered “out of the box”, and even that term is misleading. Many of the software packages require extensive configuration, curated content, training data sets, and ongoing tuning and evolution.
Regardless of what they are called – agents, intelligence, machine learning, AI – and despite my tongue-in-cheek statement, beating CAPTCHA and learning to play video games are significant technical achievements that are foundational to many powerful applications that will change multiple industries. For organizations trying to evaluate these tools today, there is still a good deal of arm waving and research-intensive application development. A quick review of the web sites of many Cognitive Computing and AI startups reveals a great deal of market speak and motherhood and apple pie – the types of statements and assertions that everyone wants but that are difficult to demonstrate in a meaningful way. The benefits are described as “improving decision making”, “making data more accessible”, “helping managers answer questions”, and my favorite, “automatically formulating hypotheses”. These are ambiguous claims that require a great deal of faith to get behind. Venture funds are supporting companies that are going to market with lots of promise but without the bottom-line, clear-cut, unambiguous, hard-hitting results that CIOs and CMOs need to see in order to dedicate scarce funds and organizational resources.
Two things are true: first, these are real applications (though they are not as far along as vendors claim), and second, they will change your business, no matter what that business is. Though this landscape is confusing and it is difficult to tell what is real from what is snake oil, there are practical steps that organizations can take to prepare for the inevitable market shifts that will force all organizations to embrace cognitive computing.
Cognitive computing can be considered search and retrieval on steroids. A question or request is a query that the system needs to respond to. To understand what makes these applications practical, listen to Mike Rhodin, Senior Vice President of the Watson Program.[3] He states that these tools are different. When asked “what is Watson?” he responds, “The best way to think about it is that Jeopardy is a demonstration of a new class of application. It understands natural language, it can generate hypotheses and it learns. These new systems are information based as opposed to program based.”
He goes on to say that you need to start by “thinking about the problem you are trying to solve, the information that may be necessary to solve that problem, where you are going to find that information, how you are going to curate it, how you are going to put it into a system, how you are going to train the information - once you have that done, then you write the app. So it’s a very different kind of model.”
Wow. Let me repeat that. Wow.
Identifying the problem, locating the information, curating the content, and structuring it to put it into a system are at the core of the problem that the knowledge and information management community has been trying to solve for years. Yes, Watson is a new, powerful tool in the toolkit. But it does not solve the problem out of the box. In fact, most AI and cognitive computing systems require significant levels of configuration, tuning and content processing to be effective. Another example comes from the WellPoint Watson implementation[4]:
“Watson isn't simple or inexpensive. While Bigham wouldn't disclose WellPoint's financial arrangement with IBM, the process of training Watson for use by the insurer includes reviewing the wording on every medical policy with IBM engineers, who define keywords to help Watson draw relationships between data.”
“The nursing staff together with IBM engineers must keep feeding cases to Watson until it gets it. Teaching Watson about nasal surgery, for example, means going through policies and inputting definitions specific to the nose and conditions that affect it. Test cases then need to be created with all of the variations of what could happen and fed to Watson.”
Organizations will need to build these capabilities; however, many of the fundamentals are the same ones that support basic search and retrieval. The starting point in this new realm is to develop proof-of-concept and proof-of-technology pilots that leverage the fundamentals of content curation and corpus creation. A PoC will identify the success factors and the gaps in current processes and data. One way to approach this is through the development of search driven intelligent agent technology.
At the core of an intelligent agent is the retrieval of information. Retrieval might take the form of a simple search where the result is a set of documents, or the result could be finer grained, providing a specific answer to a question. There are a variety of search driven applications that could be classified as a form of intelligent agent. What makes a search driven intelligent agent? The degree of sophistication of the algorithms used to process the user’s input, the mechanisms used to retrieve information, and the ways of surfacing that information in context to the user, including the use of text-to-speech and avatar interfaces to guide the user.
These can range from question answering systems for narrow tasks, like filling out a form, to more sophisticated and complex approaches that understand context and interpret language and ambiguous questions in order to guide the user through a task that requires judgment. There are a number of approaches to developing information access mechanisms, and they can be placed on a continuum of sophistication. Key components of these systems include:
Search drives the interaction:
Metrics driven governance:
Answer based content:
Ongoing quality management:
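Putting the components above together, a search driven intelligent agent can be sketched in a few dozen lines. The sketch below is a minimal, hypothetical illustration (the corpus entries, class name, and threshold value are all invented for this example, not taken from any vendor's product): the user's question is treated as a query, matched against a small store of curated, answer-based content using TF-IDF cosine similarity, and routed to a human when confidence is low.

```python
import math
import re
from collections import Counter

# Hypothetical curated "answer-based content": each entry pairs a
# canonical question with a vetted answer. All content is invented.
CORPUS = {
    "how do I reset my password": "Use the 'Forgot password' link on the sign-in page.",
    "what are your support hours": "Support is available weekdays, 9am to 5pm.",
    "how do I update my billing address": "Edit the address under Account > Billing.",
}

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dictionaries."""
    num = sum(w * b.get(t, 0.0) for t, w in a.items())
    den = (math.sqrt(sum(w * w for w in a.values()))
           * math.sqrt(sum(w * w for w in b.values())))
    return num / den if den else 0.0

class SearchDrivenAgent:
    def __init__(self, corpus, threshold=0.3):
        self.answers = list(corpus.values())
        docs = [tokenize(q) for q in corpus]
        self.n = len(docs)
        # Document frequency of each term, used for IDF weighting.
        self.df = Counter(t for doc in docs for t in set(doc))
        self.doc_vecs = [self._vectorize(doc) for doc in docs]
        self.threshold = threshold

    def _vectorize(self, tokens):
        tf = Counter(tokens)
        return {t: tf[t] * math.log((1 + self.n) / (1 + self.df.get(t, 0)))
                for t in tf}

    def ask(self, question):
        """Search drives the interaction: the question is just a query."""
        qv = self._vectorize(tokenize(question))
        scores = [cosine(qv, dv) for dv in self.doc_vecs]
        best = max(range(self.n), key=scores.__getitem__)
        if scores[best] < self.threshold:
            # Low confidence: hand off rather than guess.
            return "I don't have an answer for that; routing to a human."
        return self.answers[best]

agent = SearchDrivenAgent(CORPUS)
print(agent.ask("How do I reset my password?"))
```

A production system would layer on the metrics driven governance and ongoing quality management described above: logging each question, the score it received, and whether the fallback fired, so that the corpus and the threshold can be tuned against real traffic.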
There seem to be several tiers of organizations developing these capabilities in the marketplace. The first is the type of large digital enterprise that understands the importance of these technologies and has the capital and resources to invest in them. Think Google, Microsoft, Amazon and the other technology giants. They are investing in machine learning and analytics in applications that comprise their core business. At a recent Big Data Innovation Summit in Boston, the head data scientist at Uber described several fascinating products in development that leverage machine learning and predictive analytics. Amazon’s recommendation engines are based on pattern recognition through analysis of large volumes of data.
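As a toy illustration of the kind of pattern recognition behind such recommendation engines (the data, names, and scoring rule below are invented for this sketch and are not Amazon's actual algorithm), items a user does not yet own can be scored by how often they co-occur with the user's purchases in other customers' histories:

```python
from collections import Counter

# Toy purchase histories; users and items are invented for this sketch.
purchases = {
    "alice": {"keyboard", "mouse", "monitor"},
    "bob": {"keyboard", "mouse"},
    "carol": {"mouse", "monitor", "webcam"},
    "dave": {"keyboard", "webcam"},
}

def recommend(user, histories, top_n=2):
    """Score items the user lacks by how strongly other customers'
    baskets overlap with the user's own purchases."""
    owned = histories[user]
    scores = Counter()
    for other, items in histories.items():
        if other == user:
            continue
        overlap = len(owned & items)  # shared purchases = similarity weight
        if overlap:
            for item in items - owned:
                scores[item] += overlap
    return [item for item, _ in scores.most_common(top_n)]

print(recommend("bob", purchases))  # -> ['monitor', 'webcam']
```

At production scale the same idea runs over millions of baskets with similarity measures and data structures built for sparse data, but the principle is the one stated above: patterns extracted from large volumes of behavioral data drive the suggestion.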
Another tier of organization is the large enterprise for whom data is important but where many of the advanced applications of machine learning and cognitive computing are not primary (at least not yet) to their businesses. These companies do have the resources to invest in these emerging areas and understand that their businesses will benefit and remain competitive through application of these technologies. One example is the reinsurance company Swiss Re. Swiss Re’s Riccardo Baron, Big Data & Smart Analytics Lead Americas VP, revealed that the company has engaged in over 100 pilots in diverse information areas, several of which have demonstrated clear value for the company and its customers.
The third tier comprises organizations that don’t have the resources, the interest, or perhaps an understanding of how to apply these tools to their business models. There may be significant impacts on these companies as the market develops around them with more “proven” solutions.
For any of these organizations it is possible to solve today’s problems using today’s proven technology. Cognitive Computing does not have to be academic or require millions of dollars. Intelligent agents can deliver real value in a very short timeframe. A bonus is that doing so starts the organization down the path toward the more sophisticated, complex and powerful applications that will truly be game changing. Take the first step today and begin the conversation.