It’s easy to argue that today’s information technologies accelerate the speed at which enterprises make decisions, process information, and collaborate to solve problems. But do they provide any competitive advantages?
The pace of collaboration, problem solving, innovation, and value creation has been increasing as new enabling tools emerge, resulting in a cycle in which innovation leads to new inventions. For example, the development of knowledge bases presented an innovative way to collaborate, which in turn led to inventions in many fields.(1) Some might assume that this process creates an advantage, but in reality, almost everyone else is developing these tools at the same time. New capabilities are evolving in the context of an ecosystem of competitors, all of whom are trying to do the same thing. The key is to leverage new tools and approaches faster than others in your industry, creating differentiated value for the customer.
One challenge is that there’s an increasing level of overhead involved with these tools and infrastructure. While adding capability and improving the ability to collaborate and solve problems, technologies are also adding complexity and reducing productivity in some contexts. This well-known productivity paradox isn’t really a paradox at all. Since economist Robert Solow made this observation back in 1987, referring to the lack of evidence of productivity increases from computer technology,(2) several explanations have been presented. These include issues related to how productivity is measured, how the technology is implemented and managed, the lag between technology investment and benefits realization, and the phenomenon of differentiated advantages across competitors (the “redistribution of profits”).(4)
Innovative approaches for applying new technology can have a significant impact on the enterprise or institution. If competitors can take advantage of a shift in technology use faster than you can, it puts your market share at risk. For example, a large book publisher gained a sizable share of the K-12 market after adopting component authoring.(3) The new approach helped the publisher develop textbooks that kept pace with changing curriculum standards, which varied by grade, subject matter, and school district. The company developed taxonomies and metadata for a repository containing more than one million content components, which an editor could use to begin the development process as standards were updated. This reduced the time to market by six months. By the time others realized there was a new process in use, the publisher had a three-year advantage in the marketplace, having already transformed its internal processes and developed competencies.
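To make the mechanics concrete, here is a minimal sketch, not the publisher’s actual system, of how taxonomy metadata on content components lets an editor pull candidate components when a standard changes. The field names and standard codes are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ContentComponent:
    """A reusable unit of textbook content, tagged with taxonomy metadata."""
    component_id: str
    title: str
    grade: str                                    # e.g., "5"
    subject: str                                  # e.g., "math"
    standards: set = field(default_factory=set)   # standard codes the component covers

def components_for_standard(repository, grade, subject, standard_code):
    """Return components an editor could start from when a standard is updated."""
    return [
        c for c in repository
        if c.grade == grade and c.subject == subject and standard_code in c.standards
    ]

# Example: find fifth-grade math components mapped to a (hypothetical) fractions standard.
repo = [
    ContentComponent("c-001", "Adding fractions", "5", "math", {"MATH.5.NF.1"}),
    ContentComponent("c-002", "Reading maps", "5", "social-studies", {"SS.5.G.2"}),
]
print(components_for_standard(repo, "5", "math", "MATH.5.NF.1"))
```

With metadata like this in place, a change to a single standard becomes a query against the repository rather than a manual hunt through finished books.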
So how can you exploit new technology to stay ahead of the competition? The challenge isn’t just to recognize innovative technology but also to apply it to your existing business model. In some cases, this requires breaking the business model and coming up with an entirely new way of doing business. Much of the maturity that’s required to leverage technology for a competitive advantage relates more to the people and business processes than to the complexity of the application. Organizations must transform how they do business—they can’t just use the same old approaches with new software.
Transformation requires a strong leader who has developed an achievable vision only after exploring issues and challenges at the front lines of the business and understanding how technology can create order-of-magnitude, not just incremental, improvements when solving problems. This vision is about the digital experience—regardless of whether it’s for internal or external users.
The era we live in no longer tolerates the disruption of waiting while someone in a call center hunts through a computer system to answer your question. Similarly, being transferred between departments because of siloed processes causes frustration and a loss of goodwill.
Users have higher expectations for Web experiences and internal information system capabilities. They expect intuitive access to a wide range of information sources as they go about their day-to-day tasks. Shoppers want to find products with minimal effort. They also want you to offer choices that directly support their tasks and present reviews and recommendations from their peers. The digital experience must tailor content and functionality to the user’s current need.
All of this is data driven. It requires having a richer understanding of users and their needs, modeling the data to represent those users and needs, and measuring the results using multiple data streams. Whether organizations create a tailored Web experience, a personalized interface to an intranet with content adapted based on the user’s role in the organization, or a custom search experience based on the user’s task and interest, the same mechanisms apply: modeling the user, modeling the information he or she seeks, and developing mechanisms that can serve that information depending on a particular set of circumstances. Doing so means changing how information is created, structured, organized, curated, managed, and integrated across systems, processes, and department silos. The larger the organization, the more challenging this becomes.
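A minimal sketch of those mechanisms, assuming a simplified user model, content model, and matching rule; the field names and scoring are illustrative, not drawn from any particular product.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Model of the user: role, current task, and declared interests."""
    role: str
    task: str
    interests: set = field(default_factory=set)

@dataclass
class ContentItem:
    """Model of the information being sought, described with the same vocabulary."""
    title: str
    audience_roles: set = field(default_factory=set)
    topics: set = field(default_factory=set)

def serve_content(user, catalog):
    """Rank items by overlap between the user's context and each item's descriptors."""
    def score(item):
        role_match = 1 if user.role in item.audience_roles else 0
        topic_overlap = len(({user.task} | user.interests) & item.topics)
        return role_match * 2 + topic_overlap
    return sorted((i for i in catalog if score(i) > 0), key=score, reverse=True)

# Example: an engineer researching vibration sees engineering content first.
user = UserProfile(role="engineer", task="vibration-analysis", interests={"bearings"})
catalog = [
    ContentItem("Bearing selection guide", {"engineer"}, {"bearings", "vibration-analysis"}),
    ContentItem("Quarterly sales deck", {"sales"}, {"pricing"}),
]
for item in serve_content(user, catalog):
    print(item.title)
```

The point is less the scoring formula than the discipline it forces: the user and the content must be described with a shared, managed vocabulary before anything can be matched.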
Providing a seamless, contextually relevant, digital experience requires changing deeply entrenched organizational habits. Implementing such changes entails a long-term commitment to goals consistent with the organization’s vision.
This evolution requires bringing to the surface knowledge of the customer, which has been embedded in institutional memory, so that it can be systematically exploited through multiple touch points and channels. For example, in an engineering-focused manufacturing organization that sells to other manufacturers, the expertise that the customer wants to access is mostly in the minds of company engineers. In a small firm, you might pick up the phone or have a face-to-face meeting to capture such expertise, but as the volume of knowledge grows, technology is needed to transfer the knowledge to the customer.
However, effective capture and presentation of expertise requires more than just installing Web content management software. You need a mechanism for capturing the knowledge upstream and curating it through a managed process that applies a selective filter, so you’re only presenting information as it’s needed by the customer. In a consumer context, this might require harmonizing your understanding of the customer across a customer relationship management system, a website content management system, automated marketing or email systems, and ecommerce engines.
In a system-siloed world, different departments would be behind these systems, which would come from different software providers or be a combination of custom, off-the-shelf, and home-grown tools with inconsistent architectures and different models of the customer. Each system would have a different “schema” or set of attributes and descriptors. Although some standards might be observed, each product would consider different attributes about the customer and have different datasets, interpreted differently based on the system view. Therefore, the information provided by that application would paint a different picture with different parameters about who the customer is. These different data streams must be interpreted and, oftentimes, normalized—that is, different terms describing the same concept must be translated to a preferred, common term across systems to allow for analysis.
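A sketch of that normalization step, assuming made-up source schemas from a CRM, a Web CMS, and an ecommerce engine, with variant terms mapped to a preferred common vocabulary.

```python
# Hypothetical records for the same customer from three siloed systems,
# each using its own schema and vocabulary.
crm_record  = {"cust_segment": "SMB", "status": "active"}
cms_record  = {"visitor_type": "small_business", "state": "enabled"}
shop_record = {"buyer_class": "small-biz", "account_state": "live"}

# Normalization table: variant terms mapped to a preferred, common term.
PREFERRED_TERMS = {
    "SMB": "small_business",
    "small-biz": "small_business",
    "enabled": "active",
    "live": "active",
}

# Field mapping: each system's attribute names aligned to a common schema.
FIELD_MAP = {
    "cust_segment": "segment", "visitor_type": "segment", "buyer_class": "segment",
    "status": "account_status", "state": "account_status", "account_state": "account_status",
}

def normalize(record):
    """Translate one system's record into the common schema and vocabulary."""
    return {
        FIELD_MAP[key]: PREFERRED_TERMS.get(value, value)
        for key, value in record.items()
        if key in FIELD_MAP
    }

for rec in (crm_record, cms_record, shop_record):
    print(normalize(rec))  # all three converge on the same picture of the customer
```

In practice these mappings are larger and messier, which is exactly why they need to be governed as shared assets rather than rebuilt inside each application.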
More data sources give organizations more inputs from which to develop user attributes. You can divide and subdivide users into many different categories, depending on the context. So you might start with “women between 30 and 35 who are yoga enthusiasts and professionals,” but then you could add “who work in the media and entertainment industry, drive foreign-brand luxury cars and own Apple computers, are interested in green products…” and so on. You can add descriptors to data that will allow more slicing and dicing and, in combination with other data, allow for new insights. These details will reveal opportunities to address unmet needs or to outperform a competitor. Discovering patterns in the data is itself a signal that something of value exists; once those attributes can be aligned, the organization’s value proposition increases.
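For illustration only, a segment like the one above can be expressed as a set of attribute filters that are narrowed step by step; the attribute names and values below are hypothetical.

```python
# Hypothetical user records with descriptive attributes drawn from multiple sources.
users = [
    {"age": 33, "interests": {"yoga", "green_products"}, "industry": "media",
     "car": "foreign_luxury", "computer": "apple"},
    {"age": 41, "interests": {"running"}, "industry": "finance",
     "car": "domestic", "computer": "windows"},
]

def matches(user, **criteria):
    """Check a user against simple criteria: sets mean 'must contain all', callables are predicates."""
    for attr, wanted in criteria.items():
        value = user.get(attr)
        if isinstance(wanted, set):
            if not wanted <= value:
                return False
        elif callable(wanted):
            if not wanted(value):
                return False
        elif value != wanted:
            return False
    return True

# Start broad, then keep adding descriptors to slice the segment more finely.
segment = [
    u for u in users
    if matches(
        u,
        age=lambda a: 30 <= a <= 35,
        interests={"yoga"},
        industry="media",
        car="foreign_luxury",
        computer="apple",
    )
]
print(len(segment))  # -> 1
```

Each added descriptor shrinks the segment but sharpens what can be said about it, which is where the unmet needs and competitive openings show up.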
Having more data sources means that the potential combinations expand exponentially. You can combine demographic profiles with Facebook and social media patterns. Mobile applications that leverage geofencing—the ability to track users’ proximity to physical points of interest—allow for unprecedented mining of user characteristics and attributes.
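At its core, a geofence is a proximity test against a point of interest. The sketch below uses the standard haversine great-circle distance; the coordinates and radius are made up.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi, dlam = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def inside_geofence(user_lat, user_lon, poi_lat, poi_lon, radius_m):
    """True if the user's reported position is within radius_m of the point of interest."""
    return haversine_m(user_lat, user_lon, poi_lat, poi_lon) <= radius_m

# Example: is the user within 200 meters of a (hypothetical) store location?
print(inside_geofence(42.3601, -71.0589, 42.3612, -71.0578, radius_m=200))
```

Every time such a test fires, it produces another behavioral data point that can be joined with the demographic and social attributes described above.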
Organizations must continue to get better at managing information as a strategic asset—and not just focusing on the obviously high-value content of sources such as ecommerce websites or knowledge bases for call centers. Before content is organized for external consumption, many internal processes require the ability to find and reuse high-value content that typically hasn’t been curated effectively. If we can reduce the amount of churn, friction, and asset duplication and make it easier for others in the enterprise to leverage the collective knowledge and expertise of coworkers, the enterprise can become more agile and efficient. The resulting improvement applies to structured as well as unstructured information.
Big data initiatives compound the problem by adding more sources to the mix. As more organizations deploy complex customer-facing applications, trying to stitch together internal systems and processes and interpret more streams of data, foundational capabilities and competencies in data, information, and knowledge curation become more important. With the explosion of information and the coming Internet of Things, this problem will only become more pressing and the need more urgent.
The recipe for getting ahead of the curve and leveraging technology to improve the customer experience is to begin with clear goals that support business objectives at a detailed process level—not at an abstract, theoretical level. It’s also important to get the basic housekeeping in order—consistent language and terminology, unstructured content organization, data curation at the source, and data governance processes for making decisions and allocating resources.
Big data and new customer experience technologies are game changers, to be sure. However, unless the lessons of the productivity paradox are applied, these changes will only serve as distractions. Those lessons include measuring productivity in ways that capture real value, managing implementation deliberately, and recognizing the lag between technology investment and benefits realization.
Companies that anticipate the changing needs of a rapidly evolving marketplace and successfully implement new technology put themselves in a good position to gain an edge over their competitors.
1. A.B. Markman and K.L. Wood, Tools for Innovation: The Science Behind the Practical Methods That Drive New Ideas, Oxford University Press, 2009, pp. 157–159.
2. E. Brynjolfsson, “The Productivity Paradox of Information Technology: Review and Assessment,” Center for Coordination Science, MIT Sloan School of Management, 1994; http://ccs.mit.edu/papers/CCSWP130/ccswp130.html.
3. S. Earley, C. Hogue, and M. Walch, “Taxonomies, Metadata, and Publishing,” Earley & Associates, 5 Dec. 2007; https://www.earley.com/training-webinars/taxonomies-metadata-and-publishing.
4. J. Dedrick and K.L. Kraemer, “The Productivity Paradox: Is It Resolved? Is There a New One? What Does It All Mean for Managers?” Center for Research on Information Technology and Organizations, UC Irvine, 2001; http://escholarship.org/uc/item/4gs825bg.
This article was originally published in IT Pro.