AI works best when humans are in the loop. Knowledge communities can provide a robust flow of information that supports and continuously refreshes the content on which AI relies. When organizational processes are identified and documented, AI can take over routine tasks, leaving the creative and more challenging problem-solving tasks to be handled by humans. Behind the scenes, content and product models need to be developed and aligned with data capture processes to make AI components work, but humans must create the knowledge flow and take charge of the content.
Knowledge communities contribute to healthy knowledge flows, which in turn become an invaluable source of accurate, up-to-date information for knowledge bases that support organizational processes.
A knowledge community is essentially a community of practice: a place where experts can share approaches, nominate best practices, and submit exemplars of solutions or deliverables. When carefully planned and managed at enterprise scale, knowledge communities can be a richer, deeper source of wisdom and expertise than a unit-level group, because they draw upon a more diverse set of information and enable more in-depth analyses.
Knowledge communities can help address several inherent challenges around knowledge capture, refactoring, curation, tagging, and retrieval. For organizations with very large volumes of content, purely manual approaches cannot keep up with the velocity of knowledge creation and downstream application, and quickly become cost prohibitive. A combination of AI/machine learning and human-in-the-loop approaches is the only cost-effective and sustainable option.
Processes must be in place for capturing, curating, tagging, and componentizing legacy content (for example, decades of documentation for long-lifecycle products) and information sourced from human interactions. This prepares information to be consumed by chatbots and other automated, AI-based components.
This approach can be used for legacy content and for ongoing knowledge creation processes by making humans more efficient and effective across the knowledge lifecycle. It requires changes in process, planning, identification of appropriate scenarios, and a reference knowledge architecture. Although these changes add to the costs and level of effort, the extra work pays off by capturing and organizing knowledge that is current and relevant to workers. Moreover, building downstream scenarios (such as consumption by marketing, sales, or customer service applications) into the knowledge lifecycle speeds the flow of knowledge, helping the organization adapt more quickly and easily to changes in customer needs, market forces, and competitive threats.
When reviewing content, breaking it into reusable pieces allows consumption by intelligent virtual assistants and chatbots. Larger documents are componentized and structured for accurate retrieval. Each content component is tagged using a content model that describes the task the content supports, along with the topic, audience, product, configuration, and other parameters.
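A tagged content component can be pictured as a small record whose fields come from the content model. The sketch below is a minimal illustration in Python; the field names and sample data are invented for the example, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class ContentComponent:
    """One reusable chunk of content plus the attributes a content model
    might require. Field names here are illustrative only."""
    component_id: str
    body: str
    task: str        # the task the content supports
    topic: str
    audience: str
    product: str
    extra_tags: dict = field(default_factory=dict)  # configuration, locale, etc.

def find_components(components, **filters):
    """Return components whose tagged attributes match every filter."""
    return [c for c in components
            if all(getattr(c, k, None) == v for k, v in filters.items())]

library = [
    ContentComponent("c1", "Password reset steps...", task="troubleshooting",
                     topic="login", audience="end-user", product="AppX"),
    ContentComponent("c2", "API authentication guide...", task="integration",
                     topic="login", audience="developer", product="AppX"),
]

# A chatbot answering an end-user question about logins would retrieve only c1.
hits = find_components(library, topic="login", audience="end-user")
```

Because retrieval filters on explicit attributes rather than full documents, the same component can serve many downstream channels without duplication.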
The content model must be aligned with the product data model, with knowledge captured upstream from engineering and development workstreams and through practitioner communities where internal subject matter experts can share their “on the ground” experience and expertise. The knowledge output of one process becomes the knowledge input to another, perhaps a machine learning process. Keeping humans in the loop when training bots is essential.
Humans are the source of the knowledge behind the component content that powers cognitive systems, but AI tools can help organize, tag, and process that information so it can be found and acted on.
Machine learning with a knowledge architecture can help to surface the most important information for roles, departments, and processes by analyzing the signals from interactions with content (shares, likes, responses) and matching patterns from similar content. This analysis can be from the explicit attributes (metadata tagging—whether human or machine assisted) or through implied or derived latent attributes (that is, patterns that algorithms identify that are less directly observable by humans). For example, the association of similar content can be through a common metadata structure or through deeper patterns that defy explicit classifications. Perhaps several groups working on different but related problems all found the content to be useful. Another group might be offered the same content without obvious metadata that would lead to the association.
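The latent association described above (content linked not by shared metadata but by the groups that found it useful) can be sketched very simply: score items by how many groups interacted with both them and a target item. This is a toy illustration with invented data, not a production recommender.

```python
from collections import defaultdict

# Interaction log as (group, content_id) pairs -- illustrative data only.
interactions = [
    ("support", "kb-1"), ("support", "kb-2"),
    ("sales", "kb-1"), ("sales", "kb-2"),
    ("eng", "kb-3"),
]

def co_usage_scores(interactions, target):
    """Score other content items by how many groups used them alongside
    the target item -- a latent, behavior-based association that needs
    no shared metadata at all."""
    used_by = defaultdict(set)
    for group, item in interactions:
        used_by[item].add(group)
    target_groups = used_by[target]
    return {item: len(groups & target_groups)
            for item, groups in used_by.items() if item != target}

# Two different groups used kb-1 and kb-2 together, so kb-2 scores highest
# and could be offered to a third group working on a related problem.
scores = co_usage_scores(interactions, "kb-1")
```

Real systems would weight signal types (shares, likes, responses) and combine these behavioral scores with explicit metadata matches, but the underlying association is the same.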
An organization runs on the collective knowledge of the people who work in it, along with the accumulated knowledge of everyone who came before them; each generation of employees contributes to applying and creating value. Scaling the organization means institutionalizing that collective knowledge so that it is embedded in designs and tools and can be passed on to future employees, who build on, synthesize, and recombine it in a (hopefully) never-ending cycle. When this information is compiled in the appropriate repositories and made findable through correct structuring and metadata, AI becomes powerful and can be applied to anything from chatbots to process automation to personalizing and contextualizing information based on customer behavior or preferences.
Building knowledge flows requires optimizing human flows individually and in the aggregate.
Many of the elements of team flow can be found in states of seamless idea interchange with like-minded colleagues. This is the state of enterprise flow that comes from having the right team members, with the right skill sets, the tools they need to succeed, and a social fabric of strong leadership, honest communications, and a common vision on which to focus. Exciting, meaningful work with a purpose is motivating and energizing. Among the necessary ingredients for good knowledge flows are a common purpose and vision, shared responsibilities and accountability, trust among team members, and healthy communication. Removing noise, friction, and distraction is essential to maintaining a good information flow.
Every technology initiative, usability project, or effort to improve the efficiency with which humans interact with technology has sought to “reduce the cognitive load” on the human: make it easier for people to make decisions, do their work, or achieve their goals. That entails building an understanding of how people go about those tasks into the tools themselves. This is a form of knowledge capture that institutionalizes that understanding.
While supporting the community with sponsorship and accountability, continually refine the foundational architecture for ingestion into AI technologies that will supercharge your knowledge communities.
When a framework for information retrieval is overlaid on the knowledge creation process, the knowledge can be accessed by humans or by AI tools designed to automate tasks. Over time, however, categories of information may change and new products may be introduced, so the taxonomy must evolve accordingly.
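One practical consequence of an evolving taxonomy is that content tagged under retired categories must stay retrievable. A minimal sketch, assuming a hypothetical mapping from old category labels to their successors:

```python
# Hypothetical taxonomy migration: when categories are renamed or moved,
# remap legacy tags so previously tagged content remains findable.
# All category names below are invented for illustration.
category_map = {
    "Widgets": "Hardware/Widgets",   # old leaf moved under a new parent
    "Gadgets": "Hardware/Gadgets",
    "HowTo": "Guides",               # renamed category
}

def migrate_tags(tags, mapping):
    """Replace retired category labels with their successors; keep the rest."""
    return [mapping.get(t, t) for t in tags]

migrated = migrate_tags(["Widgets", "FAQ"], category_map)
```

Keeping such a mapping under version control lets both human search and AI retrieval pipelines be updated in one place when the taxonomy changes.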
As the organization matures in how it leverages AI technology, new functionality will likely require changes to the foundational architecture and fine-tuning of algorithms with new or restructured data sources. This process is iterative, not static, and is driven by data and measurable results rather than opinion. Knowledge systems and tools need to be instrumented to capture performance metrics so that the impact of changes can be measured. This can be done at the detailed level of knowledge quality, completeness, usage, and changes to architecture. While some insights can be gained from usage metrics alone, more meaningful key performance indicators (KPIs) require linking specific knowledge usage to the processes being supported. Those processes in turn can be correlated with the next level of measurable business outcomes. Business outcomes and departmental mandates should be aligned with organizational objectives. In this way, interventions at the knowledge quality level can be linked all the way to organizational strategy.
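Linking knowledge usage to a process-level KPI can be as simple as rolling usage events up by the process they serve. The sketch below computes a hypothetical case-deflection rate for a support process; the event fields and figures are invented for illustration.

```python
# Illustrative KPI roll-up: each event records which knowledge article was
# used, which process it served, and whether the case was resolved without
# escalation. Data and field names are invented for this example.
usage_events = [
    {"article": "kb-1", "process": "support", "resolved_without_escalation": True},
    {"article": "kb-1", "process": "support", "resolved_without_escalation": False},
    {"article": "kb-2", "process": "support", "resolved_without_escalation": True},
]

def deflection_rate(events, process):
    """Fraction of a process's cases resolved without escalation --
    a process-level KPI that knowledge interventions can move."""
    relevant = [e for e in events if e["process"] == process]
    if not relevant:
        return 0.0
    resolved = sum(e["resolved_without_escalation"] for e in relevant)
    return resolved / len(relevant)

rate = deflection_rate(usage_events, "support")
```

Tracking this rate before and after a knowledge-quality intervention is what connects article-level work to the business outcomes the process owners are measured on.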
When a transformation program is successful, many departments and projects contribute to the outcome. The challenge becomes teasing apart those contributions to determine where additional investment is justified. By linking process performance to specific knowledge initiatives, the value of investments in knowledge communities can be more readily justified and maintained.
AI is not magic—it runs on data and knowledge. Humans are the source of knowledge throughout the enterprise, and nurturing knowledge communities and knowledge flows will prepare the organization for a future dominated by cognitive technologies. Knowledge will continue to be the competitive differentiator it is today, but transforming and embedding that knowledge into reusable components suited to a range of downstream systems, channels, and applications will be the critical differentiator—and even table stakes—in the very near future.
This article was originally published on KMWORLD.com.