MCK - Semiconductors - Oct2019-Full Book-V12-RGB
Semiconductors
Creating value, pursuing innovation, and
optimizing operations
Artificial-intelligence hardware: New opportunities for semiconductor companies
Artificial intelligence is opening the best opportunities for semiconductor companies in decades. How can they capture this value?

Right product, right time, right location: Quantifying the semiconductor supply chain
Problems along the semiconductor supply chain are difficult to diagnose. A new metric can help companies pinpoint performance issues.
Many semiconductor companies are already benefiting from innovative offerings, with the sector
showing strong and rising profits over the past few years. But there may be challenges ahead, since
companies that want to remain industry leaders must continue to increase their R&D investments. With
costs in labor and other areas rising, some semiconductor companies may have difficulty finding additional
funds for innovation. Moreover, some customers are already designing chips internally, and others may
follow—a trend that could decrease sales. These concerns, and possible solutions, are the focus of the first
article in this issue: “What’s next for semiconductor profits and value creation?”
Other articles discuss recent technological trends that are increasing demand for chips. In “Artificial-
intelligence hardware: New opportunities for semiconductor companies,” the authors explore how the rise
of AI could help players capture more value. Overall, semiconductor growth from AI may be five times greater
than growth from other sources. Opportunity is also the central theme of “Blockchain 2.0: What’s in store for
the two ends—semiconductors (suppliers) and industrials (consumers)?” As this article describes, blockchain’s
role in cryptocurrency and its potential growth as a business application may accelerate demand for chips.
The extent of the change, as well as the timing, is still uncertain, but semiconductor leaders that monitor
developments could have an advantage if blockchain takes off.
This issue also contains two articles that discuss a technology leap that has intrigued consumers: the rise
of autonomous vehicles. “Rethinking car software and electronics architecture” explores how sensors
and other automotive components may evolve, since these changes could influence chip demand. One
specific shift—the rise of domain control units—is described in another article: “How will changes in the
automotive-component market affect semiconductor companies?”
While all of these developments are exciting, semiconductor companies also have to deal with some
stubborn problems that have plagued their businesses for years. Companies will discover a new approach
to eliminating late shipments in “Right product, right time, right location: Quantifying the semiconductor
supply chain.” Similarly, they will learn about strategies for decreasing one of their largest costs in
“Reducing indirect labor costs at semiconductor companies.” The final article, “Taking the next leap forward
in semiconductor yield improvement,” will help companies use analytics and other strategies to optimize
production in both line and die processes.
McKinsey on Semiconductors is designed to help industry executives navigate the road ahead and achieve
continued growth. We hope that you find these articles helpful.
¹ Economic profit equals the net operating profit after tax minus the capital charge: the invested capital (excluding goodwill, the amount of a purchase that exceeds the value of the assets involved) at previous year-end, multiplied by the weighted average cost of capital.
Exhibit 1
From a value-creation perspective, 2017 was a record year for the semiconductor industry.
Economic-profit¹ value creation for semiconductor industry² (excluding goodwill), $ billion
[Bar chart: cumulative positive and negative economic profit for companies by year; net economic profit peaked at $97 billion in 2017]
When we looked at value creation within the semiconductor industry, we deliberately restricted our analysis to economic
profit, which is a periodic measure of value creation. In simplest terms, it is the amount left over after subtracting the cost of
capital from net operating profit. The formula for computing it is as follows:
Economic profit = [return on invested capital (ROIC) × invested capital] − [weighted average cost of capital (WACC) × invested capital]
We chose to focus on economic profit because this metric comprehensively captures both profitability and the opportunity cost
for the capital deployed. It also allowed us to perform reliable benchmark analyses for companies that followed many different
business models. We only considered operating assets and excluded goodwill and other M&A intangibles. This approach
allowed us to compare operating performance for different companies, regardless of whether their growth occurred organically
or arose from a merger. For the years 2013 through 2017, however, we conducted two analyses: one factored in goodwill, and
one did not (similar to the long-term analysis for 1997 through 2017). We conducted the two analyses to determine if the recent
surge in M&A activity had a significant effect on results.
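The sidebar's formula can be sketched in a few lines of code. The figures below are hypothetical, chosen only to illustrate the arithmetic, and do not come from the analysis:

```python
def economic_profit(roic: float, wacc: float, invested_capital: float) -> float:
    """Economic profit = (ROIC - WACC) x invested capital.

    Algebraically the same as ROIC x invested capital minus
    WACC x invested capital, i.e., operating profit net of the
    charge for the capital deployed.
    """
    return (roic - wacc) * invested_capital

# Hypothetical figures: 18% ROIC, 8% WACC, $50 billion invested capital
ep = economic_profit(roic=0.18, wacc=0.08, invested_capital=50.0)
print(round(ep, 2))  # 5.0, i.e., $5 billion of economic profit
```

A company earning exactly its cost of capital (ROIC equal to WACC) creates zero economic profit, which is why the metric separates value creators from value destroyers even when accounting profits are positive.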
From 2012 to 2016, the semiconductor sector ranked tenth out of 59 major industries for value creation, placing it in the top 20 percent. That represents a big jump from the period from 2002 through 2006, when it ranked 18th. While the semiconductor sector still lags far behind software, which was second only to biotechnology, it now outranks IT services, aerospace and defense, chemical, and many other major sectors for value creation.

Strong global economic growth since the 2008 recession has propelled the semiconductor industry's revenues, but an even more important factor involves the continued rise of the technology sector. Companies such as Alibaba, Amazon,

Value trends within the semiconductor industry
The rise in economic profit is not the only big shift within the semiconductor industry. As we reviewed the trends, we also found that value distribution has changed. From 1997 to 2012, the cumulative positive economic profit across segments was $161.5 billion. But some segments also lost value (Exhibit 2).

Overall, value was highly concentrated in a few areas (Exhibit 3). The microprocessor subsegment generated the most value, followed by fabless. Together, they created almost all value in the industry, with all other subsegments roughly breaking even when their results were totaled.
Exhibit 3
From 1997 through 2012, the microprocessor and fabless subsegments created the most value.
Economic-profit¹ value creation by subsegment (excluding goodwill), 1997–2012 cumulative, $ billion
Total economic profit by subsegment, $ billion:
A) Capital equipment (AMAT, ASML, KLA-Tencor): 13.8
B) Electronic-design automation (Synopsys, Cadence, Mentor Graphics): 3.8
C) IP² (ARM, Rambus, Spansion): 1.5
D) Fabless (Qualcomm, MediaTek, Xilinx): 57.1
E) Microprocessor (Intel): 74.8
F) Memory IDM³ (Samsung, SanDisk): –9.5
G) Analog IDM (Linear, Analog, Maxim): 14.2
H) Diversified IDM (TI, ON, NXP): –20.8
I) Other (Microchip, Powertech, Faraday): –3.3
J) Foundry (TSMC): 8.3
K) Packaging and assembly (Siliconware, Monolithic Power): –1.1
Exhibit 4
From 2013 through 2017, almost all subsegments generated positive economic profit.
Economic-profit¹ value creation by subsegment (excluding goodwill), 2013–17 cumulative, $ billion
Total economic profit by subsegment, $ billion:
A) Capital equipment (AMAT, ASML, Lam Research): 23.6
B) Electronic-design automation (Synopsys, Cadence, Mentor Graphics): 3.3
C) IP² (ARM, Rambus, CEVA): 1.8
D) Fabless (Broadcom, Qualcomm, Apple): 76.0
E) Microprocessor (Intel): 39.3
F) Memory IDM³ (Samsung, SK Hynix, SanDisk): 58.4
G) Analog IDM (Analog, Skyworks, Linear): 12.9
H) Diversified IDM (TI, Toshiba, NXP): 24.3
I) Other (Microchip, Nuflare, Fingerprint): 2.0
J) Foundry (TSMC): 25.9
K) Packaging and assembly (ASE, Siliconware): –0.6
Exhibit 5
Industry consolidation over the past five years has likely contributed to improved profitability.
Economic-profit value creation by number of companies, 2012–17 (all subsegments)
[Chart: counts of companies with cumulative positive and negative economic profit; selected exits shown below]
A) Mindspeed Tech: acquired by MACOM; Mtekvision Assets: became private; Transwitch Corp: filed for bankruptcy
B) International Rectifier: acquired by Infineon Tech; Supertex: acquired by Microchip Technology
C) Altera Corp: acquired by Intel; IBM Microelectronics: acquired by GlobalFoundries; Spansion: merged with Cypress
D) Actions Semiconductor: became private; Anadigics: acquired by II-VI Inc.; Fairchild Semicon: acquired by ON
E) Applied Micro Circuits: acquired by MACOM
Source: Annual reports; Semiconductor CPC database; McKinsey Corporate Performance Center
Because there’s greater scale within subsegments, companies have more resources to invest in innovation and operating improvements. Their large size also helps them rebound when downturns occur in this highly cyclical industry, since they can take advantage of economies of scale and rely on more designs than in the past for their revenues. If one customer leaves, their bottom line will not see the same hit as a smaller player with only a few accounts. Overall, down cycles have been milder and peaks have been higher within the semiconductor industry over the past few years (Exhibit 6).

Even when we factored goodwill—the amount of a purchase that exceeds the book value of the assets involved—into the calculation for the past five years, economic profit remained high. The fabless, memory, and microprocessor subsegments retained their top three ranking. Results for value creation were similar across most other subsegments, although some notable declines occurred. For instance, in the microprocessor subsegment, the positive cumulative economic profit for the period from 2013 to 2017 would be reduced from $39.7 billion to $28.3 billion if goodwill is included. Since all semiconductor subsegments have engaged in M&A to a similar extent, it is not surprising that the relative rankings remained similar.

Potential challenges: The rise of in-house chip design
After five successful years, semiconductor leaders across the industry have become a bit less optimistic about their prospects. Next to global tensions (hitting the semiconductor sector significantly, given the international value chains),
Exhibit 6
Within the semiconductor industry, down cycles have been milder and peaks have been higher in the past few years.
Economic-profit value creation by all subsegments (excluding goodwill), $ billion
[Line chart, 1997–2017: cumulative positive and negative economic profit for companies]
Source: Annual reports; Semiconductor CPC database; McKinsey Corporate Performance Center
Exhibit 7
Apple has become a large fabless semiconductor company by designing its own chips.
[Two charts, 2011–17; axis data not reproduced]
operations might surprise even industry insiders (Exhibit 7). Apple is now the third largest fabless player in the world, behind Broadcom and Qualcomm Technologies. If the company were selling chips, its revenue would be around $15 billion to $20 billion annually, in line with Qualcomm Technology’s. And based on current multiples, Apple’s semiconductor business would be worth $40 billion to $80 billion.² These numbers speak volumes about the strength of Apple’s internal chip operations.

Although shipments of iPhones and iPads appear to have peaked, Apple is still expected to expand its semiconductor footprint for iWatches and HomePods. It may also explore internal chip design for other products and components, such as those that enable power management and graphics.³ If Apple does go down this path, an important source of revenue may further shift away from stand-alone semiconductor companies.

Many technology companies with deep pockets have taken notice of Apple’s success with in-house chip design. Several, including large cloud players, are beginning to follow its example by developing AI chips.⁴ They have already had some significant wins, such as Google’s tensor-processing unit and Amazon’s Graviton and Inferentia chips, all of which facilitate cloud computing.⁵ In-house creation allows these companies to develop customized chips that offer better performance and security. Costs are also potentially lower, since companies do not have to pay a designer’s premium. In the hotly competitive cloud market, these cost savings could help companies differentiate themselves from their rivals.
² Based on a three- to fourfold revenue multiple of core Qualcomm Technology business (licensing business excluded).
³ Mark Gurman and Ian King, “Apple plans to use its own chips in Macs from 2020, replacing Intel,” Bloomberg, April 2, 2018, bloomberg.com.
⁴ Richard Waters, “Facebook joins Amazon and Google in AI chip race,” Financial Times, February 18, 2019, ft.com; Argam Atashyan, “Amazon releases machine learning chips, namely Inferentia and Graviton,” Gizchina Media, November 29, 2018, gizchina.com; Jordan Novet, “Microsoft is hiring engineers to work on A.I. chip design for its cloud,” CNBC, June 11, 2018, cnbc.com.
⁵ Jordan Novet, “Microsoft is hiring engineers to work on A.I. chip design for its cloud,” CNBC, June 11, 2018, cnbc.com; Tom Simonite, “New at Amazon: Its own chips for cloud computing,” Wired, November 27, 2018, wired.com.
Exhibit 8
Industry total returns to shareholders for the largest subsegments, index (Dec 31, 2012 = 100)
[Line chart, Q1 2013–Q1 2019: fabless, foundry, memory, and S&P 500]
Exhibit 9
[Table data not reproduced]
¹ Earnings before interest, taxes, and amortization; market data as of Dec 31, 2018.
² Integrated device manufacturer.
³ Compound annual growth rate.
⁴ Memory industry reached the peak of the valuation cycle in 2018 with extraordinarily high margins. Margin decline is imminent for the sector and is included in the
sector valuation. At 2018 margins, valuation cannot be reconciled with topline decline alone.
Source: Annual reports; Semiconductor CPC database; McKinsey Corporate Performance Center
Fabless also has the highest enterprise value and is now the largest subsegment by far (Exhibit 10). Its ability to capture the highest economic profit, strong near-term growth prospects, and potential resiliency all contribute to higher investor expectations.

Potential strategies for semiconductor companies
Given the current landscape, semiconductor companies must accelerate value creation. Four actions seem essential:

—— Creating strong road maps for leading customers. Semiconductor companies have long recognized the importance of delivering winning road maps for chip design, but the stakes are now higher than ever. In the past, customers that did not like a proposed road map might go to a competitor for their design needs. Such losses hurt, but they were often temporary because customers often came back to the original company for future designs. Now if customers are dissatisfied with a road map, they might move design capabilities in-house, resulting in a permanent loss of business.

—— Using M&A in moderation. The semiconductor industry is still fragmented in many subsegments, and industry consolidation still makes sense. The best strategy involves programmatic M&A, in which companies acquire at least one company a year, spending an average of 2 to 5 percent of their market capitalization, with no single deal accounting for more than 30 percent of their market capitalization.⁶ These deals allow players
⁶ Chris Bradley, Martin Hirt, and Sven Smit, Strategy Beyond the Hockey Stick: People, Probabilities, and Big Moves to Beat the Odds, first edition, Hoboken, NJ: John Wiley & Sons, 2018.
Exhibit 10
[Chart data partially captured: Microprocessors 17; Foundry 13; Capital equipment 12; Others² 7; Analog IDM 5]
to branch into adjacent areas to strengthen their competitive position. Deals that involve companies that only offer similar products will not produce as much value. One factor to consider when contemplating a deal is the value that it will bring to customers on measures such as price, quality, and performance. If an M&A deal could improve any of these areas, it will help the companies create a more compelling road map that positions them for future success. But companies that undertake M&A must avoid falling into the trap of paying too much for goodwill, or else they risk destroying value.
⁷ Benjamin Mayo, “Apple licenses Dialog power management tech, and hires 300 engineers, to develop more custom iPhone chips,” 9to5Mac, October 10, 2018, 9to5mac.com.
Marc de Jong is a partner in McKinsey’s Amsterdam office, and Anurag Srivastava is an alumnus of the New York office.
by Gaurav Batra, Zach Jacobson, Siddarth Madhav, Andrea Queirolo, and Nick Santhanam
Exhibit 1
The technology stack for artificial intelligence (AI) contains nine layers.
Interface — Interface systems: systems within framework that determine and facilitate communication pathways between software and underlying hardware
Hardware — Networking: switches, routers, and other equipment used to link servers in the cloud and to connect edge devices
Hardware — Head node: hardware unit that orchestrates and coordinates computations among accelerators
Hardware — Accelerator: silicon chip designed to perform highly parallel operations required by AI; also enables simultaneous computations
Exhibit 2
Companies will find many opportunities in the artificial intelligence (AI) market, with leaders already emerging.
Storage:
• Potential growth in demand for existing storage systems as more data are retained
• AI-optimized storage systems
• Emerging NVM (as storage device)
¹ Graphics-processing units.
² Field programmable gate arrays.
³ Static random access memory.
⁴ Nonvolatile memory.
Source: McKinsey analysis
For each area, we examined how hardware demand is evolving at both data centers and the edge. We also quantified the growth expected in each category except networking, where AI-related opportunities for value capture will be relatively small for semiconductor companies.

Compute
Compute performance relies on central processing units (CPUs) and accelerators—graphics-processing units (GPUs), field programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs). Since each use case has different compute requirements, the optimal AI hardware architecture will vary. For instance, route-planning applications have different needs for processing speed, hardware interfaces, and other performance features than applications for autonomous driving or financial-risk stratification (Exhibit 4).

Overall, demand for compute hardware will increase through 2025 (Exhibit 5). After analyzing more than 150 DL use cases, looking at both inference and training requirements, we were able to identify the architectures most likely to gain ground in data centers and the edge (Exhibit 6).

Data-center usage. Most compute growth will stem from higher demand for AI applications at cloud-computing data centers. At these locations, GPUs are now used for almost all training applications. We expect that they will soon begin to lose market share to ASICs, until the compute market is about evenly divided between these solutions by 2025. As ASICs enter the market, GPUs will likely become more customized to meet the demands of DL. In addition to ASICs and GPUs, FPGAs will have a small role in future AI training, mostly for specialized data-center applications that must reach the market quickly or require customization, such as those for prototyping new DL applications.

For inference, CPUs now account for about 75 percent of the market. They’ll lose ground to ASICs as DL applications gain traction. Again, we expect to see an almost equal divide in the compute market, with CPUs accounting for 50 percent of demand in 2025 and ASICs for 40 percent.

Edge applications. Most edge training now occurs on laptops and other personal computers, but more devices may begin recording data and playing a role in on-site training. For instance, drills used during
Exhibit 3
Growth for semiconductors related to artificial intelligence (AI) is expected to be five times greater than growth in the remainder of the market.
AI semiconductor total available market,¹ $ billion; AI semiconductor share of total available market, %; estimated AI semiconductor total available market CAGR,² 2017–25, %
[Chart: the AI portion grows from $17 billion (7 percent of a $240 billion market) in 2017 to about $65 billion (roughly 19 percent of a $362 billion market) by 2025; estimated CAGR of 18–19 percent for AI versus 3–4 percent for non-AI, about 5x]
¹ Total available market includes processors, memory, and storage; excludes discretes, optical, and micro-electrical-mechanical systems.
² Compound annual growth rate.
Source: Bernstein; Cisco Systems; Gartner; IC Insights; IHS Markit; Machina Research; McKinsey analysis
DL algorithms. But over time, the demand for memory at the edge will increase—for instance, connected cars may need more DRAM.

Current memory is typically optimized for CPUs, but developers are now exploring new architectures. Solutions that are attracting more interest include the following:

—— High-bandwidth memory (HBM). This technology allows AI applications to process large data sets at maximum speed while minimizing power requirements. It allows DL compute processors to access a three-dimensional stack of memory through a fast connection called through-silicon via (TSV). AI chip leaders such as Google and Nvidia have adopted HBM as the preferred memory solution, although it costs three times more than traditional DRAM per gigabyte—a move that signals their customers are willing to pay for expensive AI hardware in return for performance gains.¹

—— On-chip memory. For a DL compute processor, storing and accessing data in DRAM or other outside memory sources can take 100 times more time than memory on the same chip. When Google designed the tensor-processing unit (TPU), an ASIC specialized for AI, it included enough memory to store an entire model on the chip.² Start-ups such as Graphcore are also increasing on-chip memory capacity, taking it to a level about 1,000 times more than what is found on a typical GPU, through a novel architecture that maximizes the speed of AI calculations. The cost of on-chip memory is still prohibitive for most applications, and chip designers must address this challenge.
¹ Liam Tung, “GPU killer: Google reveals just how powerful its TPU2 chip really is,” ZDNet, December 14, 2017, zdnet.com.
² Kaz Sato, “What makes TPUs fine-tuned for deep learning?,” Google, August 30, 2018, google.com.
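The roughly 100-times gap between on-chip and off-chip access described above is why designers chase higher on-chip hit rates. A minimal sketch of the standard effective-access-time calculation (the latency figures are hypothetical, chosen only to match the article's 100x ratio):

```python
def effective_access_time(hit_rate: float, on_chip_ns: float, off_chip_ns: float) -> float:
    """Average access latency when a fraction `hit_rate` of accesses is
    served from on-chip memory and the remainder goes off chip (to DRAM)."""
    return hit_rate * on_chip_ns + (1.0 - hit_rate) * off_chip_ns

# Hypothetical latencies: 1 ns on chip vs 100 ns off chip
for hit_rate in (0.50, 0.90, 0.99):
    print(hit_rate, effective_access_time(hit_rate, 1.0, 100.0))
```

Even a 99 percent hit rate leaves average latency at roughly double the on-chip figure, which is why designs such as Google's TPU try to keep the entire model on chip.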
Exhibit 4
[Chart: leading compute-hardware options (GPU², ASIC³, CPU⁴, FPGA⁵) by use case: face detection, financial-risk stratification, dynamic pricing, and autonomous driving]
¹ Can use interfaces and data from earlier versions of the system.
² Graphics-processing unit.
³ Application-specific integrated circuit.
⁴ Central processing unit.
⁵ Field programmable gate array.
Source: McKinsey analysis
³ When exploring opportunities for semiconductor players in storage, we focused on not-AND (NAND) flash. Although demand for hard-disk drives will also increase, this growth is not driven by semiconductor advances.
Exhibit 5
At both data centers and the edge, demand for training and inference hardware is growing.
Data center, total market, $ billion; edge, total market, $ billion
[Chart: market sizes by segment range from less than $0.1 billion to $4 billion–$5 billion]
Exhibit 6
The preferred architectures for compute are shifting in data centers and the edge.
Data-center architecture, %; edge architecture, %
[Chart: architecture shares for ASIC, CPU, FPGA, GPU, and other; percentage data not fully captured]
there are now more types of AI hardware than ever, including new accelerators, players should offer simple interfaces and software-platform capabilities. For instance, Nvidia provides developers with Compute Unified Device Architecture, a parallel-computing platform and application programming interface (API) that works with multiple programming languages. It allows software developers to use Compute Unified Device Architecture–enabled GPUs for general-purpose processing. Nvidia also provides software developers with access to a collection of primitives for use in DL applications. The platform has now been deployed across thousands of applications.

Within strategically important industry sectors, Nvidia also offers customized software-development kits. To assist with the development of software for self-driving cars, for instance, Nvidia created DriveWorks, a kit with ready-to-use software tools, including object-detection libraries that can help applications interpret data from cameras and sensors in self-driving cars.

As preference for certain hardware architectures builds throughout the developer community, semiconductor companies will see their visibility soar, resulting in better brand recognition. They’ll also see higher adoption rates and greater customer loyalty, resulting in lasting value.

Only platforms that add real value to end users will be able to compete against comprehensive offerings from large high-tech players, such as Google’s TensorFlow, an open-source library of ML and DL models and algorithms.⁴ TensorFlow supports Google’s core products, such as Google Translate, and also helps the company solidify its position within the AI technology stack, since TensorFlow is compatible with multiple compute accelerators.
⁴ An open-source, machine-learning framework for everyone, available at tensorflow.org.
Gaurav Batra is a partner in McKinsey’s Washington, DC, office; Zach Jacobson and Andrea Queirolo are associate partners
in the New York office; Siddarth Madhav is a partner in the Chicago office; and Nick Santhanam is a senior partner in the
Silicon Valley office.
The authors wish to thank Sanchi Gupte, Jo Kakarwada, Teddy Lee, and Ben Byungchol Yoon for their contributions to this article.
by Gaurav Batra, Rémy Olson, Shilpi Pathak, Nick Santhanam, and Harish Soundararajan
© Imaginima/Getty Images
Blockchain 2.0: What’s in store for the two ends—semiconductors (suppliers) and industrials (consumers)?
Blockchain is best known as a sophisticated and somewhat mysterious technology that allows cryptocurrencies to change hands online without assistance from banks or other intermediaries. But in recent years, it has also been promoted as the solution to business issues ranging from fraud management to supply-chain monitoring to identity verification. Despite the hype, however, blockchain’s use in business is still largely theoretical. A few pioneers in retail and other sectors are exploring blockchain business applications related to supply-chain management and other processes, but most are reluctant to proceed further because of high costs, unclear returns, and technical difficulties.

But we may now be at a transition point between Blockchain 1.0 and Blockchain 2.0. In the new era, blockchain-enabled cryptocurrency applications will likely cede their prominence to blockchain business applications that can potentially increase efficiency and reduce costs. These applications will be in a good position to gain steam, since many large tech companies may soon begin offering blockchain as a service (BaaS). Rather than just providing the hardware layer, as they’ve traditionally done, these companies will extend their services up the technology stack to blockchain platforms and tools. As blockchain deployment becomes less complex and expensive, companies that have sat on the sidelines may now be willing to take the plunge. (See sidebar, “What advantages do blockchain business applications offer?”)

Will blockchain business applications continue to grow and finally validate their promise? Industrial companies, which were largely on the sidelines during the Blockchain 1.0 era, want an answer to this question because they could find opportunities to deploy business applications that improve their bottom line. Semiconductor companies are also interested in the growth of both blockchain business applications and blockchain-enabled cryptocurrency because this could increase demand for chips.

Both industrial and semiconductor players will need a solid understanding of specific blockchain-enabled use cases and the market landscape to succeed in the new era. To assist them, this article reviews the changing market and then focuses on specific strategies for capturing value. One caveat: all information in this article reflects data available as of December 2018. Cryptocurrency values fluctuate widely, so the numbers reported, including those for market capitalization, may not reflect the most recent data. Blockchain technology and the competitive landscape are also evolving rapidly, and there may have been changes since publication.

Blockchain 1.0: The cryptocurrency era
It is not surprising that many people conflate blockchain with Bitcoin, the first and most dominant cryptocurrency. Until recently, the vast majority of blockchain applications involved enabling cryptocurrency transactions. Around 2014, however, private companies began investigating the use of blockchain for other business applications. Since most of these players are still at the pilot stage, it is fair to say that blockchain-enabled cryptocurrency has been the focus of the Blockchain 1.0 era.

The emergence of cryptocurrencies
Bitcoin hit the market in 2009 as an open-source software application. It was first used in a commercial transaction in 2010, when two pizzas were bought for 10,000 bitcoin (under $10 then, but about $35 million as of December 2018). With no central authority or server to verify transactions, the public was initially skeptical about Bitcoin and reluctant to use it. Beginning in 2014, however, Bitcoin experienced a meteoric increase in user base, brand-name recognition, and transaction volume. Its value is extremely volatile, however, and it has declined sharply from its late 2017 peak of more than $19,000.

The past two years have seen the most growth in blockchain-enabled cryptocurrencies, with the number increasing from 69 in 2016 to more than 1,500 in 2018. Even though Bitcoin’s value has decreased this year, an influx of initial coin offerings (ICOs) has increased the market capitalization for cryptocurrencies (Exhibit 1).

Many of the additional currencies—also called “altcoins”—were created to address certain gaps or inefficiencies with Bitcoin, and they are available through various networks. Popular altcoins
Exhibit 1
The number of active cryptocurrencies and their market capitalization have soared.
Cryptocurrencies active in the market, number; cryptocurrency market capitalization,¹ $ billion
[Chart: active cryptocurrencies grew from 29 to more than 1,500; market capitalization data not fully captured]
¹ This is the market capitalization for a select bundle of cryptocurrencies. Bundle includes Bitcoin, Dash, Ethereum, Litecoin, Ripple, and several other altcoins. Figures are as of Dec 11, 2018.
In the beginning, many individuals mined Bitcoin as a hobby. But as interest in cryptocurrencies grew, the number and size of Bitcoin miners soared, necessitating more sophisticated hardware and more intense computing power. This shift has favored the rise of large mining pools. Many of these, including AntPool and BTC.COM, are based in China. The top five mining pools account for 70 to 85 percent of the overall Bitcoin network’s collective hash rate, or computing power.

Hardware for cryptocurrency players
In the early days of cryptocurrency, amateur hobbyists relied on central processing units (CPUs) to optimize compute performance. When the Bitcoin network began expanding around 2010, the graphics-processing unit (GPU) replaced the CPU as the accelerator of choice. The ascent of GPUs was short lived, however, since many companies began designing application-specific integrated circuits (ASICs) for cryptocurrency mining to improve hash rates.

About 50 to 60 percent of companies that manufacture ASICs for Bitcoin transactions are based in the Greater China region (Exhibit 2). (Some of these began creating ASICs for cryptocurrency mining before Bitcoin entered the market in 2008, since this was already viewed as a potential growth area.) BitMain Technologies, a China-based company, supplied 70 to 80 percent of the cryptocurrency ASICs in 2017. Its customers typically use “crypto rigs”—multiple ASICs working together—to optimize compute speed. By conservative estimates, BitMain Technologies has a gross margin of 65 to 75 percent and an operating margin of 55 to 65 percent—equivalent to $3 billion to $4 billion in 2017. That figure is roughly the same as the profit for Nvidia, which has been in business for 20 years longer.

Although most major cryptocurrencies now reward miners with high compute speed, some have taken steps to prevent large mining pools with crypto rigs from dominating the market. For instance, Ethash, the hashing algorithm that Ethereum uses, is designed to be ASIC resistant—and that means miners must fetch random data and compute randomly selected transactions to solve their cryptographic questions. Both activities require frequent access to memory, which ASICs alone won’t provide. Ethereum miners primarily rely on a system that utilizes a GPU in combination with memory.

Blockchain 2.0: Uncertainty about cryptocurrencies and the emergence of business applications
The Blockchain 2.0 era will likely usher in many changes. The cryptocurrency market could become more diverse if Bitcoin continues to decrease in price, since ICOs may see the situation as an opportunity to stake their claims. Consumers may also begin demonstrating more interest in other established altcoins. For instance, users may come to favor Dash or Litecoin for some transactions, since they offer faster transaction speeds than Bitcoin does. Companies and the general public are generally becoming more comfortable with cryptocurrency transactions, which could increase usage rates.¹

In tandem with these changes, the market for blockchain business applications is heating up as BaaS simplifies implementation. Demand for these applications is expected to be strong, and corporate users could soon outnumber cryptocurrency miners.

Investors are showing continued interest in blockchain, although funding levels have recently declined. Venture-capital funding peaked in 2017 at about $900 million for both cryptocurrency and business applications, and it will likely still be between $600 million and $800 million in 2018. It is unclear whether 2019 will show continued decline, a plateau, or greater investment.

Although it is difficult to make predictions about blockchain, since it is a relatively new technology, we were able to identify several trends in the cryptocurrency and business-application markets
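The memory-hard design described above can be sketched in miniature. The code below is a toy illustration of the access pattern (pseudorandom dataset reads on every hashing round), not the real Ethash, which builds a multi-gigabyte DAG; all sizes and constants here are invented for demonstration.

```python
import hashlib

# Toy illustration of a memory-hard proof of work in the spirit of Ethash.
# The real algorithm builds a multi-gigabyte DAG; the dataset size and round
# count below are deliberately tiny, invented constants.

DATASET_SIZE = 1 << 16  # entries the miner must hold in memory

def build_dataset(seed: bytes) -> list:
    """Pseudorandom dataset derived from a seed; every miner reconstructs it."""
    items, h = [], seed
    for _ in range(DATASET_SIZE):
        h = hashlib.sha256(h).digest()
        items.append(h)
    return items

def pow_hash(header: bytes, nonce: int, dataset: list) -> bytes:
    """Mix the header and nonce with 64 pseudorandomly chosen dataset entries.
    The random reads are the point: throughput is bounded by memory access,
    which blunts the advantage of a pure hashing ASIC."""
    mix = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
    for _ in range(64):
        index = int.from_bytes(mix[:4], "big") % DATASET_SIZE
        mix = hashlib.sha256(mix + dataset[index]).digest()
    return mix

def mine(header: bytes, dataset: list, difficulty: int = 1) -> int:
    """Search for a nonce whose mix hash starts with `difficulty` zero bytes."""
    nonce = 0
    while not pow_hash(header, nonce, dataset).startswith(b"\x00" * difficulty):
        nonce += 1
    return nonce

dataset = build_dataset(b"genesis seed")
nonce = mine(b"block header", dataset)
assert pow_hash(b"block header", nonce, dataset)[:1] == b"\x00"
```

Because each candidate nonce forces dozens of unpredictable memory lookups, hardware with fast, plentiful memory (such as a GPU board) handles the workload well, which is why Ethereum miners pair GPUs with memory rather than rely on fixed-function ASICs alone.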
Exhibit 2
Company INNOSILICON Technology Bitfury BitMain Technologies CoinBau GmbH Butterfly Labs
HQ Ningbo, China DC, US Beijing, China Dresden, Germany Leawood, KS, US
Recent product T2 Turbo+ 32T Bitfury Tardis Antminer S9-Hydro WolfCAVE XE Monarch
Launch Sept 2018 Oct 2018 Aug 2018 Not available Aug 2014
Hash rate 32 Th/sec 80 Th/sec 18 Th/sec 4.8 Th/sec 725 or 825 Gh/sec
Power efficiency 0.069 J/Gh 0.055 J/Gh 0.096 J/Gh 0.27 W/Gh 0.7 W/Gh
2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018
Company Ebang Communication Black Arrow Canaan Creative CoinTerra Halong Mining
HQ Hangzhou, China Guangdong, China Beijing, China CA, US Not applicable (online only)
Recent product EBIT E11+ Prospero X36 AvalonMiner 851 TerraMiner IV Dragonmint T16
Launch Oct 2018 Dec 2015 Aug 2018 Jan 2014 Mar 2018
Hash rate 37 Th/sec 2.2 Th/sec 15 Th/sec 1.6 Th/sec 16 Th/sec
Power efficiency 0.055 J/Gh 0.7 W/Gh 0.11 J/Gh 0.6 W/Gh 0.075 J/Gh
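The two efficiency units in Exhibit 2 are interchangeable: watts per gigahash-per-second of throughput is dimensionally the same as joules per gigahash, so efficiency times hash rate gives wall power. A quick sketch, using figures from the table above:

```python
# Power draw implied by the Exhibit 2 figures: an efficiency in J/Gh
# (equivalently, watts per Gh/s of throughput) multiplied by the hash rate
# in Gh/s yields watts.

def power_draw_watts(hash_rate_th_per_s: float, efficiency_j_per_gh: float) -> float:
    gh_per_s = hash_rate_th_per_s * 1000.0  # 1 Th/s = 1,000 Gh/s
    return gh_per_s * efficiency_j_per_gh

# BitMain's Antminer S9-Hydro: 18 Th/sec at 0.096 J/Gh, about 1.7 kW at the wall
print(power_draw_watts(18, 0.096))
```

At these power levels, electricity cost dominates mining economics, which is why the J/Gh column, not the raw hash rate, is the figure miners compare first.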
that could affect demand for this technology. Here is what we found.

The cryptocurrency market is evolving rapidly, but uncertainties remain
Despite the widespread press attention that cryptocurrencies receive, their practical value is still limited. Most people regard them as something of an online Swiss bank account—a haven for activities that can’t be closely tracked by authorities. In many cases, potential users hold back because they don’t believe cryptocurrencies are secure. Digital-ledger technology, the backbone of blockchain, has never been hacked, but cryptocurrencies are vulnerable in other ways. The most infamous theft occurred in 2014 when someone took 850,000 bitcoin from the Mt. Gox exchange by assuming another person’s identity. In the corporate sphere, only about 3,000 companies now accept Bitcoin transactions.

Future growth of cryptocurrencies
It is difficult to predict whether cryptocurrencies will experience strong growth in Blockchain 2.0, since corporate leaders and members of the public may have lingering doubts that are difficult to overcome. But we do expect to see greater usage rates. In addition, miners will have a greater number of options from which to choose. Although Bitcoin now represents about 40 to 50 percent of market capitalization for cryptocurrency, other altcoins are becoming more popular. Ethereum, for instance, now accounts for more than 10 percent of the market capitalization. And small ICOs—those beyond the top 20—now represent about 20 percent of market capitalization, up from 5 percent only two years ago.

Government intervention—particularly the development of laws and regulations—may strongly influence the cryptocurrency market over the next few years. If the current market provides any clues, it is unlikely that a global consensus will emerge. For instance, some governments allow individuals to use cryptocurrency but prohibit banks and securities companies from doing so. Other countries take a much stricter approach by forbidding ICOs to operate within their borders. If additional governments adopt this stance, cryptocurrency uptake could be limited.
Another big question relates to investment. Funding for ICOs usually comes from venture capitalists because pension funds and other institutional investors consider cryptocurrency too risky. (The majority of ICOs do not yet have customers, nor do they generate revenue.) Even though venture-capital investment in cryptocurrency has increased, the lack of interest from institutional investors could restrict future growth to some extent.

intermediaries and decrease administrative costs associated with record keeping. Over the longer term, blockchain might be used to improve fraud management, supply-chain monitoring, cross-border payments, identity verification, and the protection of copyrights or intellectual property. It could also help companies with smart contracts—transactions that execute automatically when certain conditions are met.
Exhibit 3

—— Applications. Enable user interface and implement business logic (ie, the portion of an enterprise system that determines how data are transformed or calculated, and how they are routed to people or software); are typically domain or industry specific.

—— Digital-ledger software and services. Ensure interoperability of systems and manage permissions, disaster recovery, and governance.

—— Data. Contain ecosystem of data, such as sales information and shipping records, pulled into blockchain application.

—— Digital-ledger fabric platform. Provides base protocol and configurable functionalities for various services, such as smart contracting.

—— Development platform and infrastructure. Provides infrastructure for hosting and developing blockchain, and for operating nodes; includes hardware.
Belief 2: Scalable use cases will involve high value, low volume, and collaborative mechanisms
The list of potential blockchain applications that industrial companies could implement is long. They could facilitate smart contracts, provide customers with a clear record of a product’s origin, enhance logistics and supply chain, improve product quality, or help satisfy regulatory requirements. But not every industrial use case with strong potential will survive past the PoC stage. Those that are most likely to gain traction share three characteristics:

—— High value. Each blockchain application must deliver significant value to the bottom line. If an information breach could cause a company to lose millions of dollars, a blockchain application might be infinitely preferable to a traditional shared database, for instance. Similarly, blockchain applications that significantly reduce cost by increasing efficiency are well worth exploring. For instance, a machinery manufacturer may have a supply chain that involves multiple intermediaries. A blockchain application that could reduce cost and complexity during shipping would deliver enormous value.

—— Low transaction volume. Blockchain technology still has limited processing power, which makes it difficult to perform many transactions simultaneously. Until the technology advances, industrial companies should apply it to use cases that involve limited transaction volume. For instance, a consumer-
What advantages do blockchain business applications offer?

Think of blockchain as a database shared across a number of participants, each with a computer. At any moment, each member of the blockchain holds an identical copy of the blockchain database, giving all participants access to the same information. All blockchains share three characteristics:

—— A cryptographically secure database. When data are read or written, users must provide the correct cryptographic keys—one public (essentially, the address) and one private. Users cannot update the blockchain unless they have the correct keys.

—— A digital log of transactions. Transactional information is available in real time through the blockchain network. Companies doing business with each other must thus store most of their transactional information in digital form to take advantage of blockchain.

—— A public or private network that enables sharing. Anyone can join or leave a public network without express permission. Admission into private networks is by invitation only.

Blockchain’s cryptographic keys provide leading-edge security that goes far beyond that found in a standard distributed ledger. The technology also eliminates the possibility that a single point of failure will emerge since the blockchain database is distributed and decentralized. If one node fails, the information will still be available elsewhere. Another advantage involves the audit trail. Users can go back through the blocks of information and easily see the information previously recorded in the database, such as the previous owner of a piece of property. And perhaps most important, blockchain maintains process integrity. The database can only be updated when two things happen. First, a user must provide the correct public and private keys. Second, a majority of participants in the network must verify those credentials. This reduces the risk that a malicious user will gain illicit access to the network and make unauthorized updates.
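The shared-ledger and audit-trail properties just described can be made concrete with a minimal hash-linked ledger. This is an illustrative sketch, not any production blockchain: real networks add digital signatures, peer-to-peer replication, and a consensus protocol on top of the structure shown.

```python
import hashlib
import json

# Minimal hash-linked ledger: each block commits to its predecessor's hash,
# so any participant can audit the full history and detect tampering.

def block_hash(block: dict) -> str:
    # Hash only the committed fields, in a canonical order
    payload = json.dumps({k: block[k] for k in ("index", "prev_hash", "data")},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain: list, data: str) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "prev_hash": prev_hash, "data": data}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify(chain: list) -> bool:
    """Audit trail: recompute every hash and check each link to its predecessor."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

ledger = []
append_block(ledger, "property owner: Alice")
append_block(ledger, "property owner: Bob")
assert verify(ledger)

ledger[0]["data"] = "property owner: Mallory"  # tampering with history...
assert not verify(ledger)                      # ...is immediately detectable
```

Because every copy of the ledger can be re-verified independently, a tampered node simply fails the check and the honest majority carries the authoritative history, which is the process-integrity property the sidebar highlights.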
in which compute power is less important. For blockchain business applications, which could represent the wave of the future, compute power is essential but not a differentiator. Instead, semiconductor companies and other players will win by enabling or providing BaaS.

Belief 2: To win in Blockchain 2.0, semiconductor companies can’t just understand their customers—they also have to understand their customers’ customers
Cryptocurrency ASICs have been in extremely high demand since 2016, because miners began getting higher rewards for adding the next block. Most orders come from the top five Bitcoin mining pools in China, and the demand could increase over the next few years. This trend will keep orders flowing into substrates, ASIC designers, foundries, outsourced assembly and testing companies, and equipment manufacturers.

With value migrating from cryptocurrencies to blockchain business applications, and with BaaS players gaining market share, semiconductor companies will need to develop new strategies that align with their customers’ priorities. To do so effectively, they must ask themselves four questions:

—— In which specific use cases and microverticals are customers likely to adopt a blockchain solution at scale?

—— Which customers or end markets have the market position and structure to ensure that all relevant companies will be willing to collaborate?

—— How do end customers plan to use blockchain and what aspects of our hardware—for instance, cost, compute capability, or power consumption—will differentiate the winners from the losers?

—— How can we work with (or without) BaaS players, including those who provide other hardware components, software integration, or go-to-market capabilities, to enable end-to-end solutions for customers?
Gaurav Batra is a partner in McKinsey’s Washington, DC, office; Rémy Olson is an alumnus of the San Francisco office; Shilpi Pathak is an alumnus of the Chicago office; Nick Santhanam is a senior partner in the Silicon Valley office; and Harish Soundararajan is an alumnus of the Boston office.
The authors wish to thank Jo Kakarwada and Celine Shan for their contributions to this article.
Rethinking car
software and
electronics architecture
As the car continues its transition from a hardware-driven machine to
a software-driven electronics device, the auto industry’s competitive
rules are being rewritten.
© Just_Super/Getty Images
However, as the importance of electronics and software has grown, so has complexity. Take the exploding number of software lines of code (SLOC) contained in modern cars as an example. In 2010, some vehicles had about ten million SLOC; by 2016, this expanded by a factor of 15, to roughly 150 million lines. Snowballing complexity is causing significant software-related quality issues, as evidenced by millions of recent vehicle recalls.

With cars positioned to offer increasing levels of autonomy, automotive players see the quality and security of vehicle software and electronics as key requirements to guarantee safety. This means the industry must rethink today’s approaches to vehicle software and electrical and electronic architecture.

Addressing an urgent industry concern
As the automotive industry transitions from hardware- to software-defined vehicles, the average software and electronics content per vehicle is rapidly increasing. Software represents 10 percent of overall vehicle content today for a D-segment (large) car (approximately $1,220), and the average share of software is expected to grow at a compound annual rate of 11 percent, to reach 30 percent of overall vehicle content (around $5,200) in 2030. Not surprisingly, companies across the digital automotive value chain are attempting to capitalize on innovations enabled through software and electronics (Exhibit 1). Software companies and other digital-technology companies are leaving their current tier-two and tier-three positions to engage automakers as tier-one suppliers. They’re expanding their participation in the automotive technology stack by moving beyond features and apps into operating systems. At the same time, traditional tier-one

One consequence of these strategic moves is that the vehicle architecture will become a service-oriented architecture (SOA) based on generalized computing platforms. Developers will add new connectivity solutions, applications, artificial-intelligence elements, advanced analytics, and operating systems. The differentiation will not be in the traditional vehicle hardware anymore but in the user-interface and experience elements powered by software and advanced electronics.

Tomorrow’s cars will shift to a platform of new brand differentiators (Exhibit 2). These will likely include infotainment innovations, autonomous-driving capabilities, and intelligent safety features based on “fail operational” behaviors (for example, a system capable of completing its key function even if part of it fails). Software will move further down the digital stack to integrate with hardware in the form of smart sensors. Stacks will become horizontally integrated and gain new layers that transition the architecture into an SOA.

Ultimately, the new software and electronic architecture will come from several game-changing trends that drive complexity and interdependencies. For example, new smart sensors and applications will create a “data explosion” in the vehicle that companies need to handle by processing and analyzing the data efficiently, if they hope to remain competitive. A modularized SOA and over-the-air (OTA) updates will become key requirements to maintain complex software in fleets and enable new function-on-demand business models. Infotainment and, to a lesser degree, advanced driver-assistance systems (ADAS) will increasingly become “appified” as more third-party app developers provide vehicle content. Digital-security requirements will shift the focus from a pure access-control strategy to an
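The compound-growth figures above can be sanity checked in a couple of lines. The dollar values and the 11 percent rate come from the text; the solve-for-years formula is the standard CAGR inversion.

```python
from math import log

# How long does it take $1,220 of per-vehicle software content to reach
# roughly $5,200 at an 11 percent compound annual growth rate?

def years_to_reach(start: float, target: float, cagr: float) -> float:
    """Invert the compound-growth formula target = start * (1 + cagr)**years."""
    return log(target / start) / log(1 + cagr)

print(round(years_to_reach(1220, 5200, 0.11), 1))  # about 14 years
```

Roughly 14 years of 11 percent growth carries the figure from $1,220 to $5,200, which is consistent with the article's mid-2010s baseline and 2030 endpoint.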
Exhibit 1
Source: Automotive Electronics Initiative; Robert N. Charette, “This car runs on code,” IEEE Spectrum, February 2009, spectrum.ieee.org; HAWK; McKinsey analysis
integrated-security concept designed to anticipate, avoid, detect, and defend against cyberattacks. The advent of highly automated driving (HAD) capabilities will require functionality convergence, superior computing power, and a high degree of integration.

developments are already under way and will hit the market in two to three years’ time. This consolidation is especially likely for stacks related to ADAS and HAD functionality, while more basic vehicle functions might keep a higher degree of decentralization.
Exhibit 2
Architecture will become service oriented, with new factors for differentiation.
[Diagram: future layered in-vehicle and back-end architecture, showing existing, modified, and new layers. Labeled elements include electronic/electrical hardware, sensors, actuators, power components, the vehicle, and closely controlled add-on apps and modules due to safety considerations.]
entrants into automotive that will likely disrupt the industry through a software-oriented approach to vehicle architecture. Increasing demand for HAD features and redundancy will also require a higher degree of consolidation of ECUs.

Several premium automakers and their suppliers are already active in ECU consolidation, making early moves to upgrade their electronic architecture, although no clear industry archetype has emerged at this point.

The industry will limit the number of stacks used
Accompanying the consolidation will be a normalization of limited stacks that will enable a separation of vehicle functions and ECU hardware that includes increased virtualization. Hardware and embedded firmware (including the operating system) will depend on key nonvehicle functional requirements instead of being allocated part of a vehicle functional domain. To allow for separation and a service-oriented architecture, the following four stacks could become the basis for upcoming generations of cars in five to ten years:

—— Time-driven stack. In this domain, the controller is directly connected to a sensor or actuator while the systems have to support hard real-time requirements and low latency times; resource scheduling is time based. This stack includes specific hardware systems that reach the highest Automotive Safety Integrity Level classes, such as the classical Automotive Open System Architecture (AUTOSAR) domain.
Exhibit 3
[Chart comparing sensor technologies on object detection, object classification, distance estimation, object-edge precision, lane tracking, range of visibility, cost, and production readiness.]
Radar and camera are the most likely combination in the next 5–8 years, although solid-state lidar and camera¹ will be dominant in the long term when proved and integrated into mass-production designs.
¹ Comparison with other technologies not yet possible due to low maturity of technology.
health monitoring of measures such as heart rate and drowsiness, as well as face recognition and iris tracking, are just a few of the potential use cases. However, as an increase or even a stable number of sensors would require a higher bill of materials, not only in the sensors themselves but also in the vehicle network, the incentive to reduce the number of sensors is high. With the arrival of highly automated or fully automated vehicles, future advanced algorithms and machine learning can enhance sensor performance and reliability. Combined with more powerful and capable sensor technologies, a decrease of redundant sensors can be expected. Sensors used today might become obsolete as their functions are overtaken by more capable sensors (for instance, a camera- or lidar-based parking assistant could replace ultrasound sensors).

Sensors will become more intelligent
System architectures will require intelligent and integrated sensors to manage the massive amounts of data needed for highly automated driving. While high-level functions such as sensor fusion and 3-D positioning will run on centralized computing platforms, preprocessing, filtering, and fast reaction cycles will most likely reside in the edge or be done directly in the sensor. One estimate puts the amount of data an autonomous car will generate every hour at four terabytes. Consequently, intelligence will move from ECUs into sensors to conduct basic preprocessing requiring low latency and low computing performance, especially when weighing costs for data processing in the sensors against costs for high-volume data transmission in the vehicle. Redundancy for driving decisions
Cars will feature updatable components that communicate bidirectionally
Onboard test systems will allow cars to check function and integration updates automatically, thus enabling life-cycle management and the enhancement or unlocking of aftersales features. All ECUs will send and receive data to and from sensors and actuators, retrieving data sets to support innovative use cases such as route calculation based on vehicle parameters.

OTA update capabilities are a prerequisite for HAD; they will also enable new features, ensure cybersecurity, and enable automakers to deploy features and software more quickly. In fact, it’s the OTA update capability that is the driver behind many of the significant changes in vehicle architecture described previously. In addition, this capability also requires an end-to-end security solution across all layers of the stack outside the vehicle to the ECUs in the vehicle. This security solution remains to be designed, and it will be interesting to see how and by whom this will be done.

Assessing the future implications of vehicle software and electronic architecture
While the trends affecting the automotive industry today are generating major hardware-related uncertainties, the future looks no less disruptive for software and electronic architecture. Many strategic moves are possible: automakers could create industry consortia to standardize vehicle architecture, digital giants could introduce onboard cloud platforms, mobility players could produce their own vehicles or develop open-source vehicle stacks and software functions, and automakers could introduce increasingly sophisticated connected and autonomous cars.

The transition from hardware-centric products to a software-oriented, service-driven world is especially challenging for traditional automotive companies. Yet, given the described trends and changes, there is no choice for anyone in the industry but to prepare. We see several major strategic pushes:
This article was developed in collaboration with the Global Semiconductor Alliance (GSA).
Ondrej Burkacky is a partner in McKinsey’s Munich office, where Georg Doll is a senior expert; Johannes Deichmann is an
associate partner in the Stuttgart office; and Christian Knochenhauer is a senior expert in the Berlin office.
The authors wish to thank Silviu Apostu, Michaela Brandl, and Virginia Herbst for their contributions to this article. They
also wish to thank executives from GSA member companies and others who participated in the interviews and survey that
contributed to this article.
Exhibit
The global market for automotive components is expected to grow about 7 percent annually through 2030.
[Chart: the automotive software and electrical/electronic market grows from $238 billion in 2020 to $469 billion in 2030, a CAGR of about 7 percent. Segments shown include other electronic components (CAGR of about 3 percent), software, and sensors, with CAGRs of roughly 8 to 10 percent. Automotive sales grow from $2,755 billion in 2020 to $3,800 billion in 2030, about 3 percent annually.]
Ondrej Burkacky is a partner in McKinsey’s Munich office, where Jan Paul Stein is a consultant. Johannes Deichmann is an
associate partner in the Stuttgart office.
© TimeStopper/Getty Images
Right product, right time, right location: Quantifying the semiconductor supply chain 51
McKinsey Semiconductors 2019
Exhibit 1
2 percent over the same period. These findings suggest that supply-chain inefficiencies are a major cause of customer churn.

What’s behind the low OTD rates? The root causes are as complex as the supply chain itself. When semiconductor companies receive an order, they have chips at every stage of the supply chain, with some undergoing front-end processing, others in die-bank or back-end processing, and the remainder sitting in warehouses as finished goods. Likewise, the lead times for orders may vary, with some customers expecting quick shipments and others requesting deliveries along a more relaxed timeline. All too often, however, semiconductor companies discover that the requested lead time is shorter than the cycle time needed to fulfill the order.

Most missteps that lead to late deliveries relate to one of three areas: forecasting, execution, and inventory (Exhibit 2). For instance, if the order lead time is shorter than the three to four weeks required for back-end processing, a semiconductor company must have sufficient finished-goods inventory to meet the target delivery date. But many players inaccurately forecast future demand and don’t have enough finished goods in stock when such requests arrive.

A comprehensive metric for assessing supply-chain performance
The three elements of the RPRTRL metric allow companies to quantify their performance in forecasting, execution, and inventory management
Exhibit 2
The ability to meet target delivery dates depends on order lead time, inventory along the supply chain, and other factors.

Order lead time longer than full cycle time
—— Fulfillment requirement: order can be produced from raw materials, if front-end planning and execution happen in line with anticipated front-end cycle times and there are no front-end capacity constraints.
—— Possible root causes of on-time-delivery gaps: inaccurate front-end planning, including forecasting; fabrication-execution delays; fabrication-capacity constraints.

Order lead time longer than back-end cycle time but less than full cycle time
—— Fulfillment requirement: order can be fulfilled from die bank, if sufficient die-bank inventory exists at order time and planning and execution happen in line with anticipated back-end cycle times.
—— Possible root causes of on-time-delivery gaps: testing-execution delays; assembly-execution delays; inaccurate back-end planning, resulting in insufficient dies planned for assembly; insufficient die-bank inventory.

Order lead time shorter than back-end cycle time
—— Fulfillment requirement: order must be fulfilled from finished-goods inventory, since there is no time to produce from die bank.
—— Possible root cause of on-time-delivery gaps: low or nonexistent inventory, often resulting from poor forecasting.
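The three lead-time scenarios in Exhibit 2 amount to a simple decision rule. The sketch below expresses it as a function; the week values in the example are illustrative, not figures from the article.

```python
# The Exhibit 2 lead-time logic as a decision function. Cycle times and
# lead times are in weeks and are specific to each product.

def fulfillment_source(order_lead_time: float,
                       back_end_cycle_time: float,
                       full_cycle_time: float) -> str:
    """Where an order must be fulfilled from, given its lead time."""
    if order_lead_time >= full_cycle_time:
        return "raw materials"   # a full front-end plus back-end run fits
    if order_lead_time >= back_end_cycle_time:
        return "die bank"        # only back-end processing fits the window
    return "finished goods"      # no time to produce; must ship from stock

# A 2-week lead time against a 3-week back end forces finished-goods fulfillment.
print(fulfillment_source(2, 3, 12))
```

Framed this way, each scenario's root causes map onto the branch that failed: forecasting errors starve the finished-goods branch, while execution delays break the raw-materials and die-bank branches.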
(Exhibit 3). Companies must evaluate these elements for every product ordered, to ensure that the overall metric reflects the most up-to-date information.

Right product
If companies can’t predict when products will be needed, it doesn’t matter whether the rest of their supply chain is efficient. They simply won’t be able to fulfill orders, or they’ll have excessive inventory because they make more products than they need. The right-product component of RPRTRL measures how companies perform in this area by calculating the extent of a company’s forecasting bias (the arithmetic mean of a forecasting error) and the magnitude of the forecasting error (the sum of mistakes on all orders).

Companies that score low on the right-product component will need to reexamine their forecasting methods to determine if they are making decisions based on insufficient or flawed information. For instance, companies may only look at past-order data to forecast demand, even if they have other information that provides important clues about future trends, such as customer financial statements, the number of web-page views for certain product parts, and data-sheet downloads for different products on their website. Some companies also encounter problems because they use the same forecasting model for all SKUs, which can lead to inaccuracies. If a product has intermittent spikes in demand, it needs a different forecasting model than does a product with low but steady demand.

Right time
The right-time component focuses on how well companies execute orders once they are received—basically, it evaluates whether a company is
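The two right-product measures can be sketched for a list of forecast and actual order quantities. The numbers below are illustrative, and reading "sum of mistakes on all orders" as the sum of absolute errors is our interpretation of the article's definition.

```python
# Right-product measures: bias is the arithmetic mean of the forecast errors
# (forecast minus actual), and magnitude is the sum of absolute errors
# across all orders. All quantities below are illustrative.

def forecast_bias(forecasts, actuals):
    errors = [f - a for f, a in zip(forecasts, actuals)]
    return sum(errors) / len(errors)

def forecast_error_magnitude(forecasts, actuals):
    return sum(abs(f - a) for f, a in zip(forecasts, actuals))

forecasts = [100, 120, 80, 90]
actuals = [110, 100, 95, 90]
print(forecast_bias(forecasts, actuals))             # -1.25
print(forecast_error_magnitude(forecasts, actuals))  # 45
```

The two numbers answer different questions: a bias near zero with a large magnitude means errors are big but cancel out, while a persistent nonzero bias signals systematic over- or underforecasting.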
Exhibit 3
The right product, right time, right location (RPRTRL) metric evaluates the three major components of supply-chain performance.
[Diagram; the right-location component asks: was inventory staged along the supply chain at the right locations?]
completing all tasks, including those related to fab operations, sorting, assembly, and testing, within the expected time frame. The right-time score is computed by determining the volume-weighted percentage of individual tasks for which the actual cycle time was shorter than or equal to the planned cycle time, in both back-end and front-end processing. This calculation of execution performance provides more insights than current measurement methods, which typically involve looking at overall cycle times and determining the percentage of orders with delays.

If companies score low on the right-time component, they should review their production-management processes, including those related to vendors. For instance, foundry and back-end-process partners may not provide daily updates on progress, so semiconductor companies don’t learn about delays until it’s too late to address them. In other cases, companies may not use all available vendor capacity or may fail to manage their priorities. As one example, companies might not accelerate production for “hot lots”—those that need to enter production quickly because timelines will be tight.

Right location
Are inventory levels sufficient at all locations along the supply chain, including die banks and warehouses for finished goods? Many companies can’t answer this question because their current inventory systems haven’t been properly tested or implemented. All too often, they just consider average supply and demand, rather than examining the factors that might change these variables.
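The right-time score described above, the volume-weighted share of tasks whose actual cycle time came in at or under plan, can be sketched as follows. The task tuples are illustrative stand-ins for per-task records from front-end and back-end processing.

```python
# Right-time score: volume-weighted percentage of individual tasks whose
# actual cycle time was shorter than or equal to the planned cycle time.
# Each task tuple is (volume, planned_weeks, actual_weeks); values invented.

def right_time_score(tasks):
    total_volume = sum(volume for volume, _, _ in tasks)
    on_time_volume = sum(volume for volume, planned, actual in tasks
                         if actual <= planned)
    return on_time_volume / total_volume

tasks = [
    (1000, 8, 7),  # fab run: on time
    (1000, 3, 4),  # assembly: late
    (2000, 2, 2),  # test: exactly on plan, counts as on time
]
print(right_time_score(tasks))  # 0.75
```

Weighting by volume, rather than counting delayed orders, keeps one small late lot from masking (or exaggerating) execution problems on high-volume products.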
Exhibit 4
The right-location analysis looks at inventory sufficiency per SKU at key points and its impact on on-time delivery.
[Chart; in the example shown, sufficient inventory improves on-time delivery by 16 percentage points.]
Exhibit 5
The right product, right time, right location (RPRTRL) calculation uses scores from three areas.
—— Right product. Forecasting accuracy: evaluates extent of forecasting bias (arithmetic mean of forecasting error) and magnitude of forecasting error (sum of mistakes on all orders).
—— Right time. Execution quality: measures "pull" part of supply chain (events that occur after order is received) and calculates volume-weighted percentage of orders for which actual cycle time was shorter than or equal to planned cycle time.
—— Right location.
Each component is scored on a scale of 0 to 1. Individual scores are multiplied to calculate the total RPRTRL score.
score. For the initial computation, companies typically use anywhere from one to two years' worth of data. To measure progress, they should recalculate RPRTRL at monthly or weekly intervals (when they have sufficient data).

The total RPRTRL score will range from zero to one. In our benchmark analysis of semiconductor companies, the best-in-class players had an RPRTRL score in the range of 0.6 to 0.7. The average semiconductor company scores 0.3. The key question for all semiconductor executives is this:

Do you know your RPRTRL score?

Calculating an RPRTRL score provides valuable insights, but it's just the first step in any supply-chain transformation. Companies must then assess the costs and benefits of addressing each problem before developing appropriate solutions. Since supply-chain issues will vary, companies must develop customized strategies for improving forecasting accuracy, execution, and inventory management. Some might get the most benefit from improved vendor management, for instance, while others gain by adopting new predictive data sets that decrease forecasting errors. But in all cases, the RPRTRL metric will provide a common view of the supply chain that helps all groups deploy a coordinated response. That alone will provide invaluable assistance.
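The scoring scheme summarized in Exhibit 5 can be sketched as follows. The forecast-bias and forecast-magnitude helpers follow the exhibit's definitions; how raw errors are normalized into a 0-to-1 component score is company-specific and omitted here, and the sample figures are illustrative, not benchmark data.

```python
# Sketch of the RPRTRL calculation: each component is scored on a
# 0-to-1 scale, and the three scores are multiplied together.

def forecast_bias(errors):
    """Arithmetic mean of forecasting error (signed: forecast minus actual)."""
    return sum(errors) / len(errors)

def forecast_magnitude(errors):
    """Sum of mistakes (absolute errors) across all orders."""
    return sum(abs(e) for e in errors)

def rprtrl(right_product, right_time, right_location):
    assert all(0.0 <= s <= 1.0 for s in (right_product, right_time, right_location))
    return right_product * right_time * right_location

errors = [120, -80, 40, -60]             # per-order forecast errors, units
print(forecast_bias(errors))             # 5.0 (small bias...)
print(forecast_magnitude(errors))        # 300 (...but sizable total error)
print(round(rprtrl(0.9, 0.8, 0.5), 2))   # 0.36
```

Because the components are multiplied rather than averaged, one weak area (here, right location at 0.5) drags the total score down sharply, which is why a company can forecast and execute well yet still land near the 0.3 industry average.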
Gaurav Batra is a partner in McKinsey’s Washington, DC, office; Kristian Nolde is an associate partner in the Vancouver office;
and Nick Santhanam is a senior partner in the Silicon Valley office, where Rutger Vrijen is a partner.
[Exhibit: Labor content, indexed, comparing a 200mm fab, a highly automated 300mm fab, and a back-end fab.]
products associated with every job description, as well as the activities that employees perform during a typical week and the time spent on each one. This activity mapping often reveals findings that surprise both managers and frontline employees. One executive of a global memory-solution company commented, "PEA is just like a magnetic-resonance-imaging scan. Now I finally understand how my engineers' time is spent." Often, a PEA analysis will show that employees spend many hours on activities that are not considered vital to their jobs or which do not contribute substantially to the creation of a desired end product.

Such analyses may not seem new to many industries, since companies across sectors already have established methods for identifying value drivers. Their analyses may not focus on the purpose, end products, or activities of employees, but their overall goal is to gain insight into different functions and reduce costs. In the semiconductor industry, however, such value analyses have been rare, particularly with respect to indirect labor.

Once companies have baseline metrics and a solid understanding of all job functions, they can identify initiatives to improve efficiency and reduce
¹ Eoin Leydon, Ernest Liu, and Bill Wiseman, "Moneyball for engineers: What the semiconductor industry can learn from sports," McKinsey on Semiconductors, March 2017, McKinsey.com.
Koen De Backer is an alumnus of McKinsey’s Singapore office, where Bo Huang is an associate partner and Matteo Mancini
is a partner. Amanda Wang is a consultant in the Shanghai office.
by Koen De Backer, RJ Huang, Mantana Lertchaitawee, Matteo Mancini, and Choon Liang Tan
¹ For more, see Koen De Backer, Matteo Mancini, and Aditi Sharma, "Optimizing back-end semiconductor manufacturing through Industry 4.0," February 2017, McKinsey.com.
² "Yield and yield management," in Cost Effective IC Manufacturing, Scottsdale, AZ: Integrated Circuit Engineering Corporation, 1997.
³ Jim Handy, "What's it like in a semiconductor fab?," Forbes, December 19, 2011, forbes.com.
Exhibit 1
The cost-of-non-quality (CONQ) calculation can be broken down into three components: volume, standard cost, and yield.

CONQ total combines actual scrap volume, average standard cost per unit, and average standard yield:
—— Actual scrap volume. Example: chips detected as defective; chips falling into bins allocated for scrap. Description: quantity of scrap attributable to die-yield loss, ie, products discarded during production; scrap quantity measured at process output, reported in the enterprise-resource-planning system.
—— Average standard cost per unit. Example: cost per chip increased by expected scrap (yield). Description: average cost per chip, including variable (material) costs, overhead (including labor) costs, and yield adjustment, ie, additional unit cost due to yield losses.
—— Average standard yield. Example: expected die-yield %. Description: the % of expected yield loss used for standard chip costing, multiplied by the average standard cost per unit, gives the "unyielded" cost, ie, the real cost at input.

Not included: utilization loss (working chips that are unsold and scrapped after 6 months); rework (defective chips thrown into bins for reprocessing to ideally produce a good chip); freight costs (cost of transportation of wafers from upstream processing).
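A minimal sketch of the CONQ arithmetic in Exhibit 1. The exhibit says the standard cost must be "unyielded" to reflect the real cost at input; the sketch assumes this means dividing the per-unit standard cost by the standard yield, and the sample numbers are invented for illustration.

```python
# Hedged sketch of the CONQ breakdown: scrap volume times the
# yield-adjusted ("unyielded") standard cost per unit.

def conq_total(scrap_volume, std_cost_per_unit, std_yield):
    """scrap_volume: units scrapped due to die-yield loss (from ERP).
    std_cost_per_unit: average cost per chip (material + overhead).
    std_yield: expected yield as a fraction, e.g. 0.92."""
    unyielded_cost = std_cost_per_unit / std_yield  # real cost at input
    return scrap_volume * unyielded_cost

# 10,000 scrapped dice at a $1.50 standard cost and 92% standard yield
print(round(conq_total(10_000, 1.50, 0.92), 2))  # 16304.35
```

Note what the exhibit excludes: utilization loss, rework, and freight are deliberately left out of this figure, so CONQ isolates the cost of die-yield loss alone.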
Develop a holistic, data-driven view of what needs to improve and where
Typically, engineers are dedicated to discrete processes, enabling them to develop deep expertise in a given area and more effectively serve on the line. However, when embarking on a yield transformation, a semiconductor company must develop a holistic view of the manufacturing process. Therefore, engineering must take a step back to see exactly what parts of the process, and specifically what reject categories, lead to the greatest amount of loss. While some companies already bring a product focus to yield losses, an overarching view of the entire manufacturing line is usually not top of mind. Thus, instead of a singular transformation, efforts are usually siloed into individual processes, products, and even pieces of equipment.

A loss matrix enables engineering to map process areas (in a heat map) and reject categories against yield performance of the manufacturing line from start to finish. One manufacturer found that across the eight major steps of its semiconductor production process, the company was losing almost $68 million due to yield losses overall, including almost $19 million during electrical testing alone (Exhibit 2). Engineers can use their technical knowledge of what happens in particular processes to determine why certain reject codes are high within those processes. By also calculating the addressable amount of loss, this heat-map view enables the organization to prioritize what to focus on and allocate resources to the process areas most likely to improve profitability.

In our experience, having this view handy is extremely useful not only to ensure that everyone has a view of what must be addressed and where but also to keep track of what areas have been covered—and which ones are still unexplored. The heat map also enables engineers to take a top-management approach toward the line as a whole, instead of focusing only on their particular process, and reinforces the view that all engineers are responsible for managing quality and yield.
Exhibit 2
An example loss matrix illustrates how manufacturers can identify major yield losses by category to help prioritize improvement efforts.
[Total loss across the eight major process steps, $ million: 18.7, 5.8, 10.5, 7.9, 7.3, 9.5, 4.9, 3.2.]
Implement systemic improvements to identify yield loss
Once the biggest loss areas are identified using the loss matrix, it is important to ensure the actions taken to improve the identification of yield loss are sustainable; this starts by isolating the products that are the biggest contributors to scrap (Exhibit 3). This per-product analysis ensures that action is taken only on items that have the biggest impact on yield.

As a result, engineers have the detailed insight they need to address the key issues that drive the particular losses identified by the loss matrix. They can also use a product Pareto analysis to identify
Exhibit 3
Product analysis helps manufacturers identify the biggest contributors to overall yield loss as well as the size of the gap.
[Chart compares weekly process yield targets and actuals (eg, target 85%, actual 81.3%), the gap to target, and each product's contribution; the top X products contribute to an X% drop of yield (overall gap to target is x.x%, partly positively affected by parts that are better than target).]
the use cases where addressing an issue will solve the most significant, far-reaching problems.

Key improvement themes are generally structured using the traditional "5 Ms" of lean manufacturing—machine, man, material, measurement, and method. While organizing loss categories along these lines, semiconductor companies should also analyze which rejects are true and which are false, as well as discuss which potential cross-functional collaborations may help solve the issue. One manufacturer completed an analysis of four of the Ms (measurement was not applicable in that case) and sorted out true from false rejects while also developing a sound foundation for improvement initiatives (Exhibit 4).
Exhibit 4
Key improvement themes are identified, evaluated, and structured in close collaboration with experts.
[Matrix maps loss categories against true rejects (external: specifications/material; internal: method (process), machine, man) and false rejects.]
One finding from the yield-loss analysis showed that the manufacturer was experiencing contamination and wrinkle issues at a particular process point. The ensuing problem-solving session identified underlying, systemic issues in the manufacturing process, resulting in four improvement initiatives relating to both true and false rejects (Exhibit 5).

Given their cross-functional nature, the machine-variability initiatives entailed both internal effort and external involvement. Internally, product, process, and test engineers, quality engineering, and R&D worked together to run the necessary tests and qualifications to ensure the activity had no negative impact on semiconductor quality. Armed with their
Exhibit 5
The idea-generation process starts by brainstorming ways to reduce both true and false rejects and focusing on addressable issues.
[Exhibit lists initiatives with estimated improvement potential: decrease false rejects by adjusting tool recipe settings (calibrations) on machines, 10%; decrease cost of non-quality at process X, reject code 1, 40%; reject code 2, match internal operational specs at process X to customer specs (no additional further tightening), 5%.]
analysis, engineers could have more meaningful discussions with external vendors about legacy patches to existing equipment and ideas to improve machine performance.

The implementation of these four initiatives reduced contamination rejects for identified products by 90 percent and wrinkle rejects by 40 percent, and in the long term gave valuable insight to engineers on both collaborating with third parties as well as ingraining an ownership mind-set.

Impact on a yield engineer's typical day, with the holistic view of yield improvements
Yield engineering resources are typically spent supporting or leading improvement activities across both product and process engineering. At one manufacturer, yield engineers' daily activities ranged across three main areas—root-cause problem solving of excursions and other critical identified yield losses, cross-functional yield-improvement activities and collaborations with other
The role of advanced analytics in semiconductor yield improvement: Converting data into actions
As noted by the CEO of advanced-analytics company Motivo Engineering, "Each fab has thousands of process steps, which, in turn, have thousands of parameters that can be used in different combinations. With so many factors in play, we see a lot of chip failures or defects."¹ Given its complexities, traditional quantitative analysis wouldn't help fabs uncover all improvement opportunities, resulting in a lengthy process of root-issue discovery—and thus massive yield losses.

For that reason, the use of advanced analytics offers a new paradigm for yield improvement in the semiconductor industry. Indeed, the nature of manufacturing complexity means there is a big difference between insights from traditional quantitative analysis and those from advanced analytics. Furthermore, semiconductor manufacturing is in a unique position compared with other industries to reap the benefits of advanced analytics, given the massive amount of data embedded in fabs' highly automated and sensor-laden environment. Fabs can benefit from yield analytics through three key levers:

—— Early defect detection and root-cause identification. Advanced-analytics tools can help uncover issues much faster and in much greater detail, leading to faster root-cause identification. This benefit is greater when we try to uncover root causes of low- and medium-frequency errors, which are difficult to detect using traditional analytics.

—— Improved value-added time for engineers. At one organization, for example, data pulling and analysis in line-maintenance activities can take up more than triple the time that would be required if the data infrastructure and interface were well designed. This situation represents an opportunity to free up engineers' time to focus instead on core issues and production design solutions.

¹ For more, see Koen De Backer, Matteo Mancini, and Aditi Sharma, "Optimizing back-end semiconductor manufacturing through Industry 4.0," February 2017, McKinsey.com.
—— Powerful tool for "past learning" and continuous improvement. Machine-learning algorithms, a well-organized data lake, and the appropriate tools allow fabs to accumulate learning from past experiences and enable continuous improvement. Whereas the traditional approach eliminates defects by adjusting multiple parameters, which helps with the current batch, it fails to offer any insight into the root cause of the problem—meaning it is likely to be repeated in future batches.²

Identify core analytics capabilities that can improve yield
Seven core analytics capabilities are important in yield-management solutions: monitoring and reporting, parametric analysis, correlation analysis, golden-flow analysis, equipment optimization, pattern recognition, and event analysis:

—— Monitoring and reporting is the most basic among the capabilities—but also one of the most important. This process refers to trend charts, histograms, Pareto analysis, proactive reporting and notification, and enhanced statistical process control, all of which enable enhanced performance management of the manufacturing process. These tools and processes enable data to be managed and reported by the engineers so it's most beneficial to their target audience, be they process engineers, managers, or third parties such as customers.

—— Parametric analysis refers to testing how product parameters are distributed at performance testing and inspections and comparing these findings to product development's specification limits. This analysis ultimately aims to enable the optimization of specifications—tight enough to ensure good quality but also reasonable enough to prevent unnecessary over- or under-rejection.

—— Correlation analysis finds correlations between test parameters at earlier stages versus final inspections. This assessment aims to maximize final product performance and help manage end-to-end yield by adjusting test parameters depending on how they correlate with testing results, either electrical or visual test parameters.

—— Golden-flow analysis is a crucial analytical capability to determine tool commonality and identify which tools are performing at optimal levels—and which are not. These data help with both tool matching and ensuring that production is as high yield and efficient as possible, maximizing throughput and optimizing manufacturing flow (see case study "Golden-flow analysis in action").

—— Equipment optimization as an analytical capability refers to how software can perform predictive analyses to determine potential issues before they occur. This ability is closely linked to predictive maintenance and aims to avoid yield loss by tackling predictable tool variation and necessary parameter tuning.

—— Pattern recognition is about looking at the distribution of parameter patterns across wafer maps and connecting the findings to equipment, manufacturing trends, and correlations with process and test parameters. With this capability, live feedback can be given to engineers

² "Yield and yield management," in Cost Effective IC Manufacturing, Scottsdale, AZ: Integrated Circuit Engineering Corporation, 1997.
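The correlation-analysis capability described above can be illustrated with a small sketch: compare a test parameter captured at an earlier stage with final-inspection results and flag strong relationships as candidates for adjusting test limits. The data and the 0.8 threshold are illustrative assumptions, not values from the article.

```python
# Illustrative sketch of correlation analysis: how strongly does an
# earlier-stage test parameter track final-inspection yield?
import numpy as np

early_param = np.array([0.8, 1.1, 0.9, 1.4, 1.0, 1.6, 0.7, 1.3])        # earlier-stage measurement
final_yield = np.array([97.0, 95.5, 96.8, 93.2, 96.1, 91.8, 97.4, 94.0])  # final inspection, %

# Pearson correlation between the early parameter and final yield
r = np.corrcoef(early_param, final_yield)[0, 1]
print(f"correlation: {r:.2f}")

if abs(r) > 0.8:  # strong relationship: candidate for test-limit adjustment
    print("early parameter is a strong predictor of final yield")
```

In practice this check would run across many parameter pairs, surfacing the early measurements that best predict end-of-line results so test limits can be tuned where they matter.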
Case study
Golden-flow analysis in action
Golden-flow analysis helps identify bad actors and golden tools in situations where trends are unclear. At one manufacturer, the analysis detected that a specific tool (XYZ-1), which was one of three tools in the same class and configuration, was experiencing an uptick in normalized defect density across different layers over a seven-day period (exhibit). The uptick had not surpassed the upper control limit (UCL), so without the analysis there would have been no indication of a problem until after it got worse. The advanced warning of increased defect density allowed the manufacturer to take down the tool for investigation, repairs, or calibration interventions.
Exhibit
Normalized litho-defect density at Tool 1–Tool 2 by lithography tool, number per cm²
[Chart plots defect density from 0 to 1.1 per cm² against days of the week (1–7), with the upper control limit marked.]
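The early-warning idea in the case study, flagging a sustained rise in defect density before it crosses the UCL, can be sketched as a simple run rule. The window length, UCL value, and daily readings below are illustrative assumptions, not the manufacturer's data.

```python
# Sketch of an early-warning run rule: flag a tool whose normalized
# defect density rises for several consecutive days, even though every
# reading is still below the upper control limit (UCL).

UCL = 1.0  # illustrative control limit, defects per cm2

def sustained_rise(readings, days=4):
    """True if the last `days` readings are strictly increasing."""
    recent = readings[-days:]
    return all(a < b for a, b in zip(recent, recent[1:]))

# Seven daily readings for a hypothetical tool; all remain below the UCL
tool_xyz1 = [0.31, 0.30, 0.42, 0.55, 0.68, 0.81, 0.93]

below_ucl = all(r < UCL for r in tool_xyz1)
print(below_ucl and sustained_rise(tool_xyz1))  # True: flag the tool early
```

A plain UCL check would stay silent on this series; the run rule surfaces the trend days earlier, which is exactly the advance warning the case study credits with letting the manufacturer take the tool down for investigation.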
so tools and process parameters can be adjusted to reduce yield loss (see case study "Using analytics to reduce losses").

—— Event analysis entails studying production events, such as maintenance and supply changes, to discover their effect on yield. Identifying root causes for quality shifts or parametric surges can be done by tying them to the occurrence of various events on the manufacturing floor.

Undertake key enablers to overcome typical challenges in implementing yield analytics
Well-organized data integration and interface. Data pull and cleaning (that is, the creation of a data lake) are important steps in deploying analytics. Despite the richness of data gathered through highly automated and sensor-laden systems in fabs, data quality is usually a challenge in implementing analytics software or using data for analysis; for example, different product families have different data formats and complex production processes. The important step is to get individuals with a strong technical knowledge of data and database optimization to create the right data infrastructure to enable scale-up of analytics solutions.

Right organization setup to take data insights to fast action and feedback loop. Converting data and insights into actions is among the most critical steps—and challenges—to capture benefits from analytics. For yield in particular, issues always cross sites and require end-to-end collaboration to get breakthrough results. The key to success is to have effective yield tracking and a platform to enable collaboration and action (see case study "Feedback loop finds cost savings").

Partnerships with technology and analytics vendors. As our colleagues have noted, many analytics and machine-learning vendors believe that semiconductor companies prefer to develop solutions in-house,³ which discourages them from building strong relationships with semiconductor companies. In reality, active partnerships with analytics vendors will help increase the speed of building analytics capabilities for fabs. Given the fast-changing environment and highly specialized capability in analytics, ongoing collaboration and partnership will help semiconductor companies stay on the cutting edge and employ solutions that enhance in-house capability.
³ Ondrej Burkacky, Mark Patel, Nicholas Sergeant, and Christopher Thomas, "Reimagining fabs: Advanced analytics in semiconductor manufacturing," March 2017, McKinsey.com.
impact of recommended improvements. Armed with end-to-end traceability of yield losses from front end to back end, yield teams benefit from a more granular view of bottom-line impact, reducing the analytical resources needed and allowing for more insights to be shared with the cross-functional team, including R&D, business-unit sales and marketing teams, and front- and back-end managers.

Teams can effectively link decisions from customer requirements (either by R&D or business units), down to bottom-line impact on front-end and back-end expected yield losses, to identify systemic root causes cutting across processes, reject categories, or products. This capability helps yield engineers be more precise in identifying which teams (product or process engineers) are needed and helps to prioritize the initiatives in which they
Case study
Using analytics to reduce losses
One manufacturer developed a false-reject estimator analytics tool for final inspection equipment to help the fab detect and estimate sizes of false rejects based on a pattern-recognition algorithm. The algorithm provides a daily automated report of false rejects at tool and part number (product) levels, enabling a focused effort to tackle problems in a timely manner compared with manual estimation and monitoring on a monthly basis. This approach reduced losses from material wastes and customer quality issues while enhancing overall capacity (for example, dice output per day).
Case study
Feedback loop finds cost savings
One semiconductor player operating across regions in Asia and America set up a cross-site yield project-management office (PMO) to facilitate end-to-end yield monitoring and speed up the feedback loop. Along with the development of four analytical tools and a performance-management dashboard, this yield PMO has delivered 10 percent yield improvement and identified and implemented a $12 million cost-savings opportunity within six months.
ought to invest most of their time. From an efficiency improvement and workload-reduction perspective, teams can better rationalize meeting participation.

Yield engineers are further empowered with data to highlight potential opportunities to implement more yield gains by aligning or relaxing internal specifications, without affecting customer demand or satisfaction. Transparency enables teams across the value chain to collaborate on more data and to push initiatives to be more fact based and prioritize resources to maximize profitability.

Yield-performance tracking and reporting
For both mature and new unreleased products, yield engineers have shifted from daily or weekly yield-percentage monitoring to more continuous monitoring thanks to the capabilities of the loss
Koen De Backer is an alumnus of McKinsey’s Singapore office, where Matteo Mancini is a partner. RJ Huang is a consultant
in the Manila office, Mantana Lertchaitawee is a consultant in the Bangkok office, and Choon Liang Tan is an alumnus of the
Kuala Lumpur office.
November 2019
Designed by Global Editorial Services
Copyright © McKinsey & Company
McKinsey.com