AI projects are increasingly running into a brick wall when it comes to getting meaningful, measurable results. The best way to measure impact is to understand what information is needed to support a specific process. One process or lifecycle that is important to all parts of the organization is the customer experience. Knowledge graphs can provide a rich source of data that improves the customer journey, as well as providing the foundation for AI initiatives that may come later as part of a digital transformation.
Customer intent throughout the journey
The customer experience can be described in terms of customer intent and the stages of the customer lifecycle, from learning about an offering or solution, to choosing products and services, through onboarding or installation, and finally to support, renewal, and word-of-mouth recommendations.
At each of these stages, the customer will require different messaging, supporting content, specific product bundles or solution packages, and even human intervention in the case of sales or support services. For that experience to be as smooth and frictionless as possible, understanding the customer's context is essential. Context includes the customer's objective, background and baseline knowledge, other products or solutions they already own, and the immediate task at the stage in which they find themselves.
No matter what channel or mechanism is being used, the correct information related to the customer’s context must be presented. This can be accomplished through more precise search suggestions, related content (anything from recommendations to installation guides, troubleshooting approaches, or specification sheets), or question-answering bots powered by AI.
Aligning customer data, product information, and knowledge/content
The key to success is to align three elements: customer data, product information, and knowledge/content. An engineer designing an HVAC system, for example, will want to begin by inputting something about their problem or the specifications required by a solution. The necessary content will likely reside in a systems engineering guide, technical specifications documents, or a technical reference manual. We know this because a customer specialist, working with a user experience designer, defined the use cases for that particular role, and those use cases describe tasks and the content they require.
How do we know what level of content is needed (basic background versus more advanced material)? A data model representing the customer includes the customer's role, tasks (from use cases), level of proficiency, and interest areas. Based on the needs of the engineer's project, product details from a product data model can be used to recommend the types of products that meet the specifications the engineer provided.
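To make this concrete, here is a minimal sketch of how a customer data model and a content model could drive that selection; the class names, fields, and matching logic are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class Customer:
    # Fields mirror the customer data model described above.
    role: str                      # e.g. "systems engineer"
    tasks: list[str]               # derived from defined use cases
    proficiency: str               # "basic" | "intermediate" | "advanced"
    interests: list[str] = field(default_factory=list)

@dataclass
class ContentAsset:
    title: str
    audience_level: str            # level the asset was written for
    topics: list[str]

def select_content(customer: Customer, assets: list[ContentAsset]) -> list[ContentAsset]:
    """Return assets that match the customer's proficiency and current tasks."""
    return [
        a for a in assets
        if a.audience_level == customer.proficiency
        and any(topic in customer.tasks or topic in customer.interests
                for topic in a.topics)
    ]

engineer = Customer(
    role="systems engineer",
    tasks=["size an HVAC system"],
    proficiency="advanced",
    interests=["technical specifications"],
)
library = [
    ContentAsset("HVAC Basics", "basic", ["size an HVAC system"]),
    ContentAsset("Systems Engineering Guide", "advanced", ["size an HVAC system"]),
]
print([a.title for a in select_content(engineer, library)])
# -> ['Systems Engineering Guide']
```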
This approach is common in Configure Price Quote (CPQ) applications. Large Language Models (LLMs) can scale the approach once a customer data model, content model, and product data model have been developed. An LLM is a program that has been trained on large amounts of content so that it can use statistical analysis to respond appropriately to natural language inquiries, among other use cases. These data models all reside in an enterprise knowledge graph, which an LLM uses as a source of truth, just as master data is the source of truth for an ERP system.
Knowledge graphs represent people, products, and content
A graph database captures the relationships among different entities, including people, places, events, and products. Customer descriptors can be captured in a graph database as data points with relationships to other data. A person can be represented in a graph data source as an object with descriptors such as name, title, and company/employer. This collection of information is called an identity graph, and it is represented in tools such as customer data platforms (CDPs).
In addition, a descriptor such as "company" (which is an attribute of the person) can be its own object with its own attributes. The company's industry, the types of products it sells, the size of the organization, its competitors, and other attributes constitute this object. One of the company's attributes, the products it sells, can in turn have characteristics of its own: product type, price, size, brand, specifications, and documentation such as installation guides.
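As a minimal sketch, these relationships can be expressed as triples using Python's rdflib library; the person, company, product, and predicate names below are hypothetical:

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()

# The person as an object with descriptors (an identity-graph node).
g.add((EX.jane, RDF.type, EX.Person))
g.add((EX.jane, EX.name, Literal("Jane Doe")))
g.add((EX.jane, EX.title, Literal("Systems Engineer")))
g.add((EX.jane, EX.worksFor, EX.acmeHVAC))

# "Company" is itself an object with its own attributes.
g.add((EX.acmeHVAC, RDF.type, EX.Company))
g.add((EX.acmeHVAC, EX.industry, Literal("Building climate control")))
g.add((EX.acmeHVAC, EX.sells, EX.heatPump9000))

# A product the company sells, with its own characteristics and documentation.
g.add((EX.heatPump9000, RDF.type, EX.Product))
g.add((EX.heatPump9000, EX.brand, Literal("Acme")))
g.add((EX.heatPump9000, EX.hasDocumentation, EX.heatPump9000InstallGuide))

print(len(g))  # 10 triples linking person -> company -> product -> document
```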
When these relationships are defined and represented visually in a knowledge graph, they appear as a web of interrelated elements. Through these connections, a user can navigate the graph and find customer information, product information, and supporting documentation, even when that information is scattered across many repositories.
Retrieving data about complex connections and relationships from a traditional relational database can be very difficult. Retrieving the same information from a graph database is faster and easier because the relationships and connections are already known and stored. The knowledge graph can return this information without recreating the connections on every query, as a relational database must.
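To illustrate, a single SPARQL graph pattern can walk the stored person-to-company-to-product-to-document chain that a relational database would reconstruct with a join per hop; the graph and property names are the same illustrative assumptions as above:

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()
for triple in [
    (EX.jane, EX.worksFor, EX.acmeHVAC),
    (EX.acmeHVAC, EX.sells, EX.heatPump9000),
    (EX.heatPump9000, EX.hasDocumentation, EX.heatPump9000InstallGuide),
]:
    g.add(triple)

# One pattern traverses the whole chain; a relational database would need
# a join per hop (person -> company -> product -> document).
query = """
SELECT ?doc WHERE {
    ?person  ex:worksFor         ?company .
    ?company ex:sells            ?product .
    ?product ex:hasDocumentation ?doc .
}
"""
for row in g.query(query, initNs={"ex": EX}):
    print(row.doc)  # http://example.org/heatPump9000InstallGuide
```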
Recommendations, contextualization, and personalization
When customer identity graphs are combined with product information graphs and content and knowledge graphs, e-commerce tools can present more precise product recommendations based on the needs of the customer, where they are in their journey, and the characteristics of the product that are most relevant. This information can be applied to display search results that are more aligned with the characteristics and needs of the user. Product listings can also be prioritized based on the products or services that customers with similar characteristics have purchased.
This approach is based not just on correlating purchase patterns ("people who bought this also bought that") but on details about the specific types of individuals and customers who bought a particular product. In other words: customers in your industry, with your role and objectives, who searched for similar products bought these products.
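A minimal sketch of that cohort-based ranking, assuming a hypothetical purchase history keyed by industry and role:

```python
from collections import Counter

# Hypothetical purchase history: (customer profile, product bought).
purchases = [
    ({"industry": "construction", "role": "engineer"}, "heat_pump_9000"),
    ({"industry": "construction", "role": "engineer"}, "zone_controller_x"),
    ({"industry": "retail",       "role": "buyer"},    "thermostat_basic"),
    ({"industry": "construction", "role": "engineer"}, "zone_controller_x"),
]

def recommend(customer: dict, top_n: int = 2) -> list[str]:
    """Rank products by how often customers in the same cohort bought them."""
    counts = Counter(
        product for profile, product in purchases
        if profile["industry"] == customer["industry"]
        and profile["role"] == customer["role"]
    )
    return [product for product, _ in counts.most_common(top_n)]

print(recommend({"industry": "construction", "role": "engineer"}))
# -> ['zone_controller_x', 'heat_pump_9000']
```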
A more nuanced mechanism uses real-time or near real-time "digital body language" (behavioral data such as searches, clicks, and navigation) to infer a customer's intent at that moment in their journey, compared with the behavior of similar customers (a cohort identified by clustering behavior), enabling true in-session personalization. This technique is particularly useful when no purchase history is available, or when individual customer purchase patterns vary widely. This information can then be included in an individual graph for the user, such as an affinity graph, as well as in nodes in the overall knowledge graph.
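One simple way to implement this, sketched below under the assumption that session behavior can be summarized as a small vector of event counts, is to compare the live session against cohort centroids with cosine similarity; the cohort names and numbers are illustrative:

```python
import math

# Hypothetical in-session behavior vectors: counts of
# [searches, spec-sheet views, troubleshooting-page views].
cohort_centroids = {
    "researching": [5.0, 4.0, 0.0],
    "troubleshooting": [1.0, 0.0, 6.0],
}

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def infer_intent(session_vector: list[float]) -> str:
    """Match the live session against cohort behavior; no purchase history needed."""
    return max(cohort_centroids,
               key=lambda c: cosine(session_vector, cohort_centroids[c]))

print(infer_intent([4.0, 3.0, 1.0]))  # -> 'researching'
```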
Taxonomy, ontology, and knowledge graphs
Taxonomies are lists of terms that make up a category, organized through parent/child and whole/part relationships; a geography taxonomy, for example, would contain terms for countries, states, and cities. A Business to Consumer (B2C) taxonomy might include appliances, computers, TVs, cell phones, etc., while a B2C automotive taxonomy might contain batteries, brakes, bearings, belts and pulleys, climate control, etc. A B2B auto supplier taxonomy would contain much finer-grained components. For example, bearings might include ball bearings, tapered roller bearings, cylindrical roller bearings, spherical roller bearings, and more nuanced components across the entire auto manufacturing segment.
Taxonomies are used to describe the various business concepts that are important to the enterprise. They are used to classify customer types, industries, roles, interests, content, etc. For example, layers or dimensions can be added to the graph for compliance or regulatory requirements unique to an industry or region, as well as for application- and use-case-specific requirements. When taxonomies show the relationships between concepts (for example, the interests or tasks associated with a role), they can be used to build an ontology. An ontology forms the knowledge scaffolding for the enterprise. When that structure is used to access data, the result is a knowledge graph. Here's an example:
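The sketch below uses rdflib's SKOS vocabulary to show the progression: taxonomy terms linked by parent/child (broader) relationships, plus cross-concept relationships such as which role a concept is relevant to. The concepts and the relevantTo/governedBy predicates are illustrative assumptions:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import SKOS

EX = Namespace("http://example.org/")
g = Graph()

# Taxonomy: parent/child terms (a slice of the bearings example above).
g.add((EX.ballBearings, SKOS.broader, EX.bearings))
g.add((EX.taperedRollerBearings, SKOS.broader, EX.bearings))
g.add((EX.cylindricalRollerBearings, SKOS.broader, EX.bearings))

# Ontology: relationships between concepts, not just hierarchy.
# (relevantTo and governedBy are illustrative predicates.)
g.add((EX.bearings, EX.relevantTo, EX.drivetrainEngineerRole))
g.add((EX.bearings, EX.governedBy, EX.bearingQualityStandard))

# Walk the structure: which concepts matter to a given role, and what
# narrower taxonomy terms sit beneath them?
for concept, _, _ in g.triples((None, EX.relevantTo, EX.drivetrainEngineerRole)):
    children = [c for c, _, _ in g.triples((None, SKOS.broader, concept))]
    print(concept, "->", children)
```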
Providing context for LLMs
Knowledge graphs can guide an LLM to present contextually relevant content to a specific audience. It is not necessary to define the audience for the LLM; relevant information can be referenced through a knowledge graph to automatically provide the correct audience information and context. We can also feed, train, and fine-tune LLMs with an organization’s specific content for additional context, and engineer prompts and responses to be more relevant. Personalization can then be based on the use case/objective, process, and individual characteristics of the employee or customer. When combined with LLMs, graph data is very powerful, and can speed digital transformations by providing new capabilities at lower cost.
To make these technologies work correctly, however, the information on which they rely needs to be structured and curated to match business objectives and support use cases. This is core information architecture: curating and tagging information for retrieval by the LLM. Metadata is applied to content as descriptors (in machine learning parlance, such descriptors act as "features," and content curated with them serves as "labeled data"). That information provides the LLM with a source of truth and enough context to answer questions accurately and precisely.
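A minimal sketch of that grounding step, reusing the illustrative triples from earlier: facts retrieved from the graph are assembled into the prompt so the model answers from curated information. Here call_llm is a placeholder, not a real API, and the product attributes are hypothetical:

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.heatPump9000, EX.brand, Literal("Acme")))
g.add((EX.heatPump9000, EX.seerRating, Literal("21")))  # illustrative attribute

def graph_context(subject) -> str:
    """Serialize a node's stored facts as grounding text for the prompt."""
    facts = [f"{p.split('/')[-1]}: {o}"
             for _, p, o in g.triples((subject, None, None))]
    return "\n".join(facts)

def build_prompt(question: str, subject) -> str:
    return (
        "Answer using only the facts below.\n"
        f"Facts:\n{graph_context(subject)}\n"
        f"Question: {question}"
    )

prompt = build_prompt("What is the SEER rating of the Heat Pump 9000?",
                      EX.heatPump9000)
# response = call_llm(prompt)   # placeholder for the actual model call
print(prompt)
```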
Get started with a Proof of Value
The best way to start using graph databases is with a Proof of Value that begins with targeted users, narrowly scoped use cases that support high-value processes, and a specific information domain. For the selected process, develop a lightweight information architecture to describe the assets, people, and, if relevant, products. Those assets, enriched with metadata, can be ingested by an LLM, and the model can be queried within the constraints of the knowledge graph.
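A sketch of the enrichment-and-retrieval step in such a Proof of Value; the metadata fields and tag values are assumptions standing in for the lightweight information architecture:

```python
# Lightweight information architecture: each asset carries curated metadata.
assets = [
    {"title": "Systems Engineering Guide", "domain": "hvac",
     "audience": "engineer", "task": "system sizing"},
    {"title": "Consumer Brochure", "domain": "hvac",
     "audience": "homeowner", "task": "product overview"},
]

def retrieve(domain: str, audience: str, task: str) -> list[dict]:
    """Constrain retrieval to assets tagged for the target user and use case,
    so only curated, in-scope content reaches the LLM."""
    return [a for a in assets
            if a["domain"] == domain
            and a["audience"] == audience
            and a["task"] == task]

for asset in retrieve("hvac", "engineer", "system sizing"):
    print(asset["title"])  # -> Systems Engineering Guide
```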
Because data is the foundation for any advanced personalization and contextualization effort, a prerequisite for using knowledge graphs is getting your data house in order. Consider the use of a knowledge graph product such as Ontotext or PoolParty to design your information structures. Without that structure, no technology initiative will work to its potential. This is especially true of ChatGPT, Generative AI, and LLM-based tools, which many business leaders believe will be of increasing importance to the enterprise in the next few years.