Imagine that Bill and Sue sit in separate rooms. In each room there is a table on which two keys are placed. The keys are labelled One and Two. One is yellow and the other is blue. The yellow key has teeth but the blue one is smooth. Bill and Sue each have only one attempt to insert the yellow key and open a treasure chest. It sounds easy, right? Not for Sue.
Sue is colour blind. She has everything she needs to open the box, and all her senses inform her that there are differences between the keys. And yet she cannot be certain which one to choose. Bill, by contrast, completes the task easily; even if faced with a thousand keys, he would manage. As a matter of probability, Sue is much more likely to fail.
Now imagine the same scenario in an organisational setting, with multiple data sets. Department A clearly knows what to do, how to do it, and why. Department B has no way of fully understanding the data points before Department A has integrated or processed them, because data (raw factual input such as letters, numbers, and symbols) only becomes meaningful when structured and ordered into a format people can understand. While Department A presses on, Department B will always struggle.
This situation is daily reality in the asset management industry. A limited set of key individuals can make sense of the vast array of data extracted from the organisation’s enterprise applications, while others need this data to be presented in an entirely different way for it to be informative. The data still exists and carries meaning for Department A, even if Department B cannot grasp its inherent meaning.
One can only agree that data is the new gold, but only when put into context, integrated, and ordered does it become information and knowledge across a wider spectrum. I would argue that Person A’s tacit knowledge (the knowledge needed to understand certain data) can only become Person B’s tacit knowledge if Person A shares her understanding of that data on a recurrent basis, i.e. socializes and externalizes her context so that Person B can understand the data.
In other words, I believe that the data challenge is the result of a lack of shared comprehension of the concepts used on a daily basis and of the processes which use these concepts to create value for investors. In essence, the challenge is not only of a technological nature. The digitalization imperative facing the asset management industry requires firms to take a step back and reflect on what knowledge means and how it is shared, enhanced, and expanded between humans and humans, between humans and machines, and ultimately between machines and machines.
These challenges are not insurmountable, but they require a period of reflection and a healthy interest in how other communities of professionals, especially scientists, are continuously attempting to model the physical reality that we call life.
“Data is the new gold” may be an accurate saying, but it represents a consequence of intelligent abstraction of both concepts and processes. Data is just the expression of the level of conceptualization maturity that the organization can fathom. It is sometimes accurate, meaningful, and shared but unfortunately often lonely, misleading, and potentially misunderstood.
To put an end to the data incomprehension, I believe that asset managers need to work on their proprietary syntax, semantics, taxonomy, thesauri, and ontology. In other words, they need to develop proper frameworks for sharing and internalizing knowledge that helps establish a common context or frame of reference. The benefits of this approach will be significant, enabling each asset management firm to fully reap the benefits of digitalization.
Tom Gruber of Stanford University defined an ontology as “an explicit, formal specification of a shared conceptualization” and observed that, “for computer systems, what exists is what can be represented”. This is the correct approach to underpin the interoperability of systems. Its success, however, will depend heavily on human context: personal background, training, and profession all shape how concepts are understood. People first, machines second.
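To make Gruber’s idea concrete: an ontology, at its simplest, is an explicit agreement on what the terms in a record mean, so that every department interprets the same raw data identically. The following is a minimal, illustrative sketch only; the concept names and field names (Instrument, Holding, “qty”, “ccy”) are hypothetical, not drawn from any real asset management system.

```python
from dataclasses import dataclass

# A tiny "ontology": an explicit, shared specification of the concepts
# two departments exchange. Every name below is illustrative.

@dataclass(frozen=True)
class Instrument:
    isin: str       # security identifier, agreed meaning across teams
    currency: str   # ISO 4217 currency code

@dataclass(frozen=True)
class Holding:
    instrument: Instrument
    quantity: float  # agreed to mean units held, not market value

def parse_holding(record: dict) -> Holding:
    """Interpret a raw record using the shared vocabulary.

    Without this agreed mapping, "qty" could mean units, lots, or
    notional value; the shared specification fixes one interpretation
    for everyone who consumes the data.
    """
    return Holding(
        instrument=Instrument(isin=record["isin"], currency=record["ccy"]),
        quantity=float(record["qty"]),
    )

raw = {"isin": "LU0000000001", "ccy": "EUR", "qty": "150"}
holding = parse_holding(raw)
print(holding.quantity)             # 150.0
print(holding.instrument.currency)  # EUR
```

The point is not the code itself but the discipline it encodes: once the conceptualization is explicit and shared, Department B no longer depends on Department A’s tacit knowledge to read the data correctly.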
Obviously, this will not happen overnight, and continuous effort will be required from those who deeply understand their firm’s internal value creation processes. By value creation I mean the process of transforming inputs into outputs: from investment analysis to delivering rewards to investors via the distribution of a product. Data is a concrete expression of the concepts underpinning this value creation process.
Thinking is the new gold. There is nothing fundamentally new about this. But thinking as a group, with shared concepts and deeper understanding, might just be the new black.
Nonaka, I. & Takeuchi, H.: The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation, 1995.
Gruber, T. R.: A Translation Approach to Portable Ontology Specifications, 1993.