Very large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by combining millions of MOS transistors onto a single chip. VLSI
began in the 1970s when MOS integrated circuit chips were widely adopted, enabling
complex semiconductor and telecommunication technologies to be developed. The
microprocessor and memory chips are VLSI devices. Before the introduction of VLSI
technology, most ICs had a limited set of functions they could perform. An
electronic circuit might consist of a CPU, ROM, RAM and other glue logic. VLSI lets
IC designers add all of these into one chip.
The history of the transistor dates to the 1920s when several inventors attempted
devices that were intended to control current in solid-state diodes and convert
them into triodes. Success came after World War II, when the use of silicon and
germanium crystals as radar detectors led to improvements in fabrication and
theory. Scientists who had worked on radar returned to solid-state device
development. With the invention of the first transistor at Bell Labs in 1947, the
field of electronics shifted from vacuum tubes to solid-state devices.
With the small transistor in hand, electrical engineers of the 1950s saw the possibility of constructing far more advanced circuits. However, as the complexity of circuits grew, problems arose.[1] One problem was the size of the circuit. A complex circuit such as a computer depended on speed: if the components were large, the wires interconnecting them had to be long, and the electric signals took time to travel through the circuit, slowing the computer.[1]
The invention of the integrated circuit by Jack Kilby and Robert Noyce solved this
problem by making all the components and the chip out of the same block (monolith)
of semiconductor material. The circuits could be made smaller, and the
manufacturing process could be automated. This led to the idea of integrating all
components on a single-crystal silicon wafer, which led to small-scale integration
(SSI) in the early 1960s, and then medium-scale integration (MSI) in the late
1960s.
Very large-scale integration was made possible with the wide adoption of the MOS
transistor, originally invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs
in 1959.[2] Atalla first proposed the concept of the MOS integrated circuit chip in
1960, followed by Kahng in 1961, both noting that the MOS transistor's ease of
fabrication made it useful for integrated circuits.[3][4] General Microelectronics
introduced the first commercial MOS integrated circuit in 1964.[5] In the early
1970s, MOS integrated circuit technology allowed the integration of more than
10,000 transistors in a single chip.[6] This paved the way for VLSI in the 1970s
and 1980s, with tens of thousands of MOS transistors on a single chip (later
hundreds of thousands, then millions, and now billions).
The first semiconductor chips held two transistors each. Subsequent advances added
more transistors, and as a consequence, more individual functions or systems were
integrated over time. The first integrated circuits held only a few devices,
perhaps as many as ten diodes, transistors, resistors and capacitors, making it
possible to fabricate one or more logic gates on a single device. These early devices are now known retrospectively as small-scale integration (SSI); improvements in technique led to devices with hundreds of logic gates, known as medium-scale integration (MSI).
Further improvements led to large-scale integration (LSI), i.e. systems with at
least a thousand logic gates. Current technology has moved far past this mark and
today's microprocessors have many millions of gates and billions of individual
transistors.
At one time, there was an effort to name and calibrate various levels of large-
scale integration above VLSI. Terms like ultra-large-scale integration (ULSI) were
used. But the huge number of gates and transistors available on common devices has
rendered such fine distinctions moot. Terms suggesting greater than VLSI levels of
integration are no longer in widespread use.
In 2008, billion-transistor processors became commercially available. This became
more commonplace as semiconductor fabrication advanced from the then-current
generation of 65 nm processes. Current designs, unlike the earliest devices, use
extensive design automation and automated logic synthesis to lay out the
transistors, enabling higher levels of complexity in the resulting logic
functionality. Certain high-performance logic blocks, such as the SRAM (static random-access memory) cell, are still designed by hand to ensure the highest efficiency.
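As a loose, hypothetical illustration of what logic synthesis does (not the flow of any particular EDA tool), the Python sketch below turns a Boolean function given as a truth table into a two-level sum-of-products gate netlist and counts the resulting gates; the function names and the XOR example are assumptions made here, and real synthesis tools additionally optimize for area, timing, and power.

    # Toy "logic synthesis": truth table -> two-level AND/OR netlist (no optimization).
    # Purely illustrative; production synthesis tools optimize area, timing, and power.
    from itertools import product

    def synthesize_sop(num_inputs, truth_table):
        """truth_table maps an input tuple such as (0, 1) to 0 or 1.
        Returns a list of AND terms (minterms) feeding a single OR gate."""
        and_terms = []
        for assignment in product((0, 1), repeat=num_inputs):
            if truth_table[assignment] == 1:
                # One AND gate per minterm: each literal is (input index, complemented?)
                and_terms.append([(i, bit == 0) for i, bit in enumerate(assignment)])
        return and_terms  # the OR of all AND terms implements the function

    # Example: 2-input XOR
    xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
    terms = synthesize_sop(2, xor_table)
    print(f"{len(terms)} AND gates feeding 1 OR gate")  # 2 AND gates for XOR

The point of the sketch is only the shape of the transformation: a functional description goes in, a gate-level structure comes out, with no human laying out individual transistors.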
Structured VLSI design is a modular methodology originated by Carver Mead and Lynn
Conway for saving microchip area by minimizing the area of the interconnect fabric. This
is obtained by repetitive arrangement of rectangular macro blocks which can be
interconnected using wiring by abutment. An example is partitioning the layout of
an adder into a row of equal bit-slice cells. In complex designs this structuring
may be achieved by hierarchical nesting.[7]
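As a rough sketch of the bit-slice idea (an illustration written for this article, not taken from the cited sources), the Python fragment below models an n-bit ripple-carry adder as a row of identical full-adder slice cells placed by abutment: every slice has the same assumed width, so slice i sits at x-offset i * SLICE_WIDTH and its carry output lines up with the carry input of its neighbour. SLICE_WIDTH and all function names are hypothetical.

    # Illustrative sketch: an n-bit adder built from identical bit-slice cells.
    # SLICE_WIDTH is an arbitrary placeholder; real cell dimensions come from the process.
    SLICE_WIDTH = 10  # placeholder layout units

    def full_adder_slice(a, b, carry_in):
        """One bit slice: the logic of a full adder (sum and carry-out)."""
        s = a ^ b ^ carry_in
        carry_out = (a & b) | (carry_in & (a ^ b))
        return s, carry_out

    def ripple_carry_adder(a_bits, b_bits):
        """Abut identical slices in a row; slice i occupies x = i * SLICE_WIDTH."""
        carry = 0
        result, placements = [], []
        for i, (a, b) in enumerate(zip(a_bits, b_bits)):   # LSB first
            s, carry = full_adder_slice(a, b, carry)
            result.append(s)
            placements.append((i * SLICE_WIDTH, "bit_slice"))  # wiring by abutment
        return result, carry, placements

    # Example: 4-bit addition of 6 and 3, bits given LSB first
    sum_bits, carry_out, layout = ripple_carry_adder([0, 1, 1, 0], [1, 1, 0, 0])
    # sum_bits == [1, 0, 0, 1] (LSB first), i.e. 6 + 3 = 9

Because every slice is the same rectangle, the interconnect between neighbouring bits needs no extra routing area, which is exactly the saving the methodology aims for.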
Structured VLSI design was popular in the early 1980s, but lost its popularity later[citation needed] with the advent of placement and routing tools, which waste a great deal of area on routing; the waste is tolerated because of the progress of Moore's Law. When introducing the hardware description language KARL in the mid-1970s,
Reiner Hartenstein coined the term "structured VLSI design" (originally as
"structured LSI design"), echoing Edsger Dijkstra's structured programming approach
by procedure nesting to avoid chaotic spaghetti-structured programs.
As microprocessors become more complex due to technology scaling, microprocessor
designers have encountered several challenges which force them to think beyond the
design plane, and look ahead to post-silicon:
• Process variation – As photolithography techniques get closer to the
fundamental laws of optics, achieving high accuracy in doping concentrations and
etched wires is becoming more difficult and prone to errors due to variation.
Designers now must simulate across multiple fabrication process corners before a chip is certified ready for production (see the corner-simulation sketch after this list), or use system-level techniques for dealing with the effects of variation.[8]
• Stricter design rules – Due to lithography and etch issues with scaling,
design rules for layout have become increasingly stringent. Designers must keep in
mind an ever-increasing list of rules when laying out custom circuits. The overhead
for custom design is now reaching a tipping point, with many design houses opting
to switch to electronic design automation (EDA) tools to automate their design
process.
• Timing/design closure – As clock frequencies tend to scale up, designers are
finding it more difficult to distribute and maintain low clock skew between these
high-frequency clocks across the entire chip. This has led to a rising interest in multicore and multiprocessor architectures, since an overall speedup can be obtained even at a lower clock frequency by using the computational power of all the cores (a back-of-the-envelope comparison follows this list).
• First-pass success – As die sizes shrink (due to scaling) and wafer sizes go up (due to lower manufacturing costs), the number of dies per wafer increases (a rough dies-per-wafer estimate is sketched after this list), and the complexity of making suitable photomasks goes up rapidly. A mask set for a
modern technology can cost several million dollars. This non-recurring expense
deters the old iterative philosophy involving several "spin-cycles" to find errors
in silicon, and encourages first-pass silicon success. Several design philosophies
have been developed to aid this new design flow, including design for manufacturing
(DFM), design for test (DFT), and Design for X.
• Electromigration – At the high current densities found in scaled-down on-chip wiring, metal atoms gradually migrate along the interconnect, which can eventually cause open or short circuits; wire widths and current limits must therefore be chosen with electromigration in mind.
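The corner-simulation sketch referenced in the process-variation item above is given here: a minimal, hypothetical gate-delay model evaluated at slow, typical, and fast process corners. The nominal delay, corner multipliers, and timing budget are invented for illustration; real corner models come from the foundry's process design kit.

    # Hypothetical corner analysis: evaluate a toy gate-delay model at each process corner.
    # The nominal delay and corner scale factors below are invented for illustration only.
    NOMINAL_GATE_DELAY_PS = 20.0          # assumed nominal delay of one gate, in picoseconds
    CORNERS = {"slow": 1.3, "typical": 1.0, "fast": 0.8}   # assumed delay multipliers

    def path_delay(num_gates, corner):
        """Delay of a path of num_gates identical gates at a given process corner."""
        return num_gates * NOMINAL_GATE_DELAY_PS * CORNERS[corner]

    CLOCK_PERIOD_PS = 500.0               # assumed timing budget
    for corner in CORNERS:
        delay = path_delay(20, corner)
        status = "meets timing" if delay <= CLOCK_PERIOD_PS else "violates timing"
        print(f"{corner:8s}: {delay:6.1f} ps -> {status}")

With these made-up numbers the path passes at the typical and fast corners but fails at the slow corner, which is precisely the kind of result that forces a redesign before sign-off.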
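The back-of-the-envelope comparison promised in the timing/design-closure item: a crude Amdahl's-law style estimate of how several slower cores can outperform one faster core when most of the workload is parallel. The core counts, clock frequencies, and the 90% parallel fraction are arbitrary assumptions.

    # Back-of-the-envelope check: can 4 slower cores beat 1 faster core?
    # All numbers below are arbitrary assumptions chosen for illustration.
    def throughput(cores, freq_ghz, parallel_fraction):
        """Amdahl's-law style throughput estimate, in arbitrary 'work per second' units."""
        serial = 1.0 - parallel_fraction
        speedup = 1.0 / (serial + parallel_fraction / cores)
        return freq_ghz * speedup

    single = throughput(cores=1, freq_ghz=4.0, parallel_fraction=0.9)
    multi = throughput(cores=4, freq_ghz=3.0, parallel_fraction=0.9)
    print(f"1 core @ 4.0 GHz: {single:.2f}   4 cores @ 3.0 GHz: {multi:.2f}")
    # With 90% parallel work, the four slower cores deliver more total throughput.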
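For the first-pass-success item, a commonly used approximation for gross dies per wafer (wafer area divided by die area, minus an edge-loss term) makes the trend concrete; the wafer diameters and die areas used here are example values only, not tied to any real product.

    # Common approximation for gross dies per wafer, including an edge-loss term.
    # Wafer diameters and die areas below are example values, not tied to any product.
    import math

    def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
        wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
        edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
        return int(wafer_area / die_area_mm2 - edge_loss)

    # Shrinking the die and enlarging the wafer both raise the die count sharply.
    print(dies_per_wafer(200, 100))   # 200 mm wafer, 100 mm^2 die -> ~269 dies
    print(dies_per_wafer(300, 50))    # 300 mm wafer,  50 mm^2 die -> ~1319 dies

With several times as many candidate dies on each wafer, an error that forces a new mask set wastes far more potential product, which is why the flow now pushes so hard for first-pass silicon success.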