10 Steps to Creating a Data-Driven Culture
by David Waller
Exploding quantities of data have the potential to fuel a new era of fact-
based innovation in corporations, backing up new ideas with solid
evidence. Buoyed by hopes of better satisfying customers, streamlining
operations, and clarifying strategy, firms have for the past decade
amassed data, invested in technologies, and paid handsomely for
analytical talent. Yet for many companies a strong, data-driven culture
remains elusive, and data are rarely the universal basis for decision
making.
Why is it so hard?
Consider a leading telco operator that wanted to ensure that its network
provided key customers with the best possible user experience. But it
had only gathered aggregated statistics on network performance, so it
knew little about who was receiving what and the service quality they
experienced. By creating detailed metrics on customers’ experiences,
the operator could make a quantitative analysis of the consumer impact
of network upgrades. To do this, the company just needed to have a
much tighter grip on the provenance and consumption of its data than
is typically the case — and that’s precisely the point.
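As a rough sketch of what such customer-level metrics make possible, the example below uses hypothetical per-session records (customer ID, date, throughput, dropped connections) and an assumed upgrade date; none of these fields come from the operator described above.

```python
# Illustrative only: hypothetical per-session quality records, not the
# operator's actual data. The point is that per-customer metrics, rather
# than network-wide aggregates, let you compare experience before and
# after a network upgrade.
import numpy as np
import pandas as pd

sessions = pd.DataFrame({
    "customer_id": ["a", "a", "b", "b", "c", "c"],
    "date": pd.to_datetime(["2024-01-05", "2024-02-10", "2024-01-07",
                            "2024-02-12", "2024-01-09", "2024-02-14"]),
    "throughput_mbps": [18.0, 42.0, 22.0, 40.0, 15.0, 17.0],
    "dropped": [1, 0, 0, 0, 1, 1],
})

UPGRADE_DATE = pd.Timestamp("2024-02-01")  # assumed cutover date
sessions["period"] = np.where(sessions["date"] >= UPGRADE_DATE,
                              "after", "before")

# Per-customer experience in each period, then the average change.
per_customer = (sessions
                .groupby(["customer_id", "period"])
                .agg(throughput=("throughput_mbps", "mean"),
                     drop_rate=("dropped", "mean"))
                .unstack("period"))
gain = (per_customer[("throughput", "after")]
        - per_customer[("throughput", "before")])
print(per_customer)
print("Mean per-customer throughput gain (Mbps):", gain.mean())
```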
One effective tactic is to make any boundaries between the business and the
data scientists highly porous. One leading global insurer rotates staff out
of centers of excellence and into line roles, where they scale up a proof
of concept. Then they may return to the center. A global commodities
trading firm has designed new roles in various functional areas and
lines of business to raise analytical sophistication; these roles
have dotted-line relationships to centers of excellence. Ultimately, the
particulars matter less than the principle, which is to find ways to fuse
domain knowledge and technical know-how.
Top firms use a simple strategy to break data-access logjams. Instead of grand —
but slow — programs to reorganize all their data, they grant universal
access to just a few key measures at a time. For example, a leading global
bank, which was trying to better anticipate loan refinancing needs,
constructed a standard data layer for its marketing department,
focusing on the most relevant measures. In this instance, these were
core data pertaining to loan terms, balances, and property information;
marketing channel data on how loans were originated; and data that
characterized customers’ broad banking relationship. No matter the
specific initiative, a canny choice for the first data to make accessible is
whichever metrics are on the C-suite agenda. Demanding that other
numbers eventually be tied to this data source can dramatically
encourage its use.
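Below is a minimal sketch of what a standard data layer of this kind might look like; the tables and column names (loan balances and terms, origination channel, breadth of relationship) are hypothetical stand-ins rather than the bank's actual schema.

```python
# A minimal sketch of a "standard data layer": one marketing-facing view
# assembled from a handful of key measures. All names are hypothetical.
import pandas as pd

loans = pd.DataFrame({
    "loan_id": [101, 102],
    "customer_id": ["c1", "c2"],
    "balance": [240_000, 310_000],
    "rate_pct": [5.1, 6.3],
    "property_value": [400_000, 520_000],
})
origination = pd.DataFrame({
    "loan_id": [101, 102],
    "channel": ["branch", "online"],   # how the loan was originated
})
relationship = pd.DataFrame({
    "customer_id": ["c1", "c2"],
    "products_held": [3, 1],           # breadth of the banking relationship
})

# The shared layer: a single, consistently named view that every
# marketing analysis joins against, instead of ad hoc extracts.
marketing_view = (loans
                  .merge(origination, on="loan_id")
                  .merge(relationship, on="customer_id"))
print(marketing_view)
```

The design choice that matters here is consistency: every analysis reads from the same small, well-named view rather than from one-off extracts.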
6. Make proofs of concept simple and robust, not fancy and brittle.
In analytics, promising ideas greatly outnumber practical ones. Often,
it’s not until firms try to put proofs of concept into production that the
difference becomes clear. One large insurer held an internal hackathon
and crowned its winner — an elegant improvement of an online process
— only to scrap the idea because it seemed to require costly changes to
underlying systems.
Employees will rarely get excited enough to persevere and revamp their work
when an analytics effort offers them nothing in return. But if the immediate
goals directly benefit them — by saving time, helping avoid rework, or
fetching frequently needed information —
then a chore becomes a choice. Years ago, the analytics team at a
leading insurer taught itself the fundamentals of cloud computing
simply so it could experiment with new models on large datasets
without waiting for the IT department to catch up with its needs. That
experience proved foundational when, at last, IT remade the firm’s
technical infrastructure. When the time came to sketch out the platform
requirements for advanced analytics, the team could do more than
describe an answer. They could demonstrate a working solution.
Teams should be asked to explain the alternatives they considered, what they understood the tradeoffs to be, and why they
chose one approach over another. Doing this as a matter of course gives
teams a deeper understanding of the approaches and often prompts
them to consider a wider set of alternatives or to rethink fundamental
assumptions. One global financial services company at first assumed
that a fairly conventional machine-learning model to spot fraud
couldn’t run quickly enough to be used in production. But it later
realized the model could be made blazingly fast with a few simple
tweaks. When the company put the model to use, it achieved
astonishing improvements in accurately identifying fraud.
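The article does not specify what those tweaks were. One common change of this kind is scoring transactions in vectorized batches rather than one record at a time; the sketch below illustrates it with a synthetic scikit-learn model standing in for the company's fraud model.

```python
# A sketch of batch scoring as one example of a "simple tweak" that speeds
# up a conventional model; the data and model here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(10_000, 20))      # synthetic transaction features
y_train = rng.integers(0, 2, size=10_000)    # synthetic fraud labels
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

incoming = rng.normal(size=(100_000, 20))    # transactions to score

# Slow pattern: one predict call per transaction.
# scores = [model.predict_proba(row.reshape(1, -1))[0, 1] for row in incoming]

# Fast pattern: a single vectorized call over the whole batch.
scores = model.predict_proba(incoming)[:, 1]
flagged = np.flatnonzero(scores > 0.9)       # transactions sent for review
print(f"{flagged.size} transactions flagged for review")
```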
David Waller is a partner and the head of data science and analytics for Oliver Wyman Labs.