
Digital electronics


A digital signal has two or more distinguishable waveforms (in this example, high and low voltages), each of which can be mapped onto a digit.

An industrial digital controller

Digital electronics is a field of electronics involving the study of digital signals and the engineering of devices that use or produce them. This is in contrast to analog electronics and analog signals.

Digital electronic circuits are usually made from large assemblies of logic gates, often packaged in integrated circuits. Complex devices may have simple electronic representations of Boolean logic functions.[1]

Contents

 1 History
 2 Properties
 3 Construction
 4 Design
o 4.1 Representation
o 4.2 Synchronous systems
o 4.3 Asynchronous systems
o 4.4 Register transfer systems
o 4.5 Computer design
o 4.6 Computer architecture
o 4.7 Design issues in digital circuits
o 4.8 Automated design tools
o 4.9 Design for testability
o 4.10 Trade-offs
 4.10.1 Cost
 4.10.2 Reliability
 4.10.3 Fan-out
 4.10.4 Speed
 5 Logic families
 6 Recent developments
 7 See also
 8 Notes
 9 References
 10 Further reading
 11 External links

History
The binary number system was refined by Gottfried Wilhelm Leibniz (published in 1705), who also established that by using the binary system, the principles of arithmetic and logic could be joined. Digital logic as we know it was the brain-child of George Boole in the mid-19th century. In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits.[2] Eventually, vacuum tubes replaced relays for logic operations. Lee De Forest's 1907 modification of the Fleming valve could be used as an AND gate. Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of Tractatus Logico-Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit, shared the 1954 Nobel Prize in Physics for creating the first modern electronic AND gate in 1924.
Mechanical analog computers started appearing in the first century and were later used
in the medieval era for astronomical calculations. In World War II, mechanical analog
computers were used for specialized military applications such as calculating torpedo
aiming. During this time the first electronic digital computers were developed, with the
term digital being proposed by George Stibitz in 1942. Originally they were the size of a
large room, consuming as much power as several hundred modern PCs.[3]
The Z3 was an electromechanical computer designed by Konrad Zuse. Finished in 1941, it was the world's first working programmable, fully automatic digital computer.[4] The development of fully electronic digital computation was facilitated by the invention of the vacuum tube in 1904 by John Ambrose Fleming.
At the same time that digital calculation replaced analog, purely electronic
circuit elements soon replaced their mechanical and electromechanical
equivalents. John Bardeen and Walter Brattain invented the point-contact
transistor at Bell Labs in 1947, followed by William Shockley inventing the bipolar
junction transistor at Bell Labs in 1948.[5][6]
At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of vacuum tubes.[7] Their "transistorised computer", the first in the world, was operational by 1953, and a second version was completed there in April 1955. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors were smaller, more reliable, had indefinite lifespans, and required less power, thereby giving off less heat and allowing much denser concentrations of circuits, up to tens of thousands in a relatively compact space.
While working at Texas Instruments in July 1958, Jack Kilby recorded his initial ideas
concerning the integrated circuit (IC), then successfully demonstrated the first working
integrated circuit on 12 September 1958.[8] Kilby's chip was made of germanium. The
following year, Robert Noyce at Fairchild Semiconductor invented the silicon integrated
circuit. The basis for Noyce's silicon IC was the planar process, developed in early 1959
by Jean Hoerni, who was in turn building on Mohamed Atalla's silicon surface
passivation method developed in 1957.[9] This new technique, the integrated circuit,
allowed for quick, low-cost fabrication of complex circuits by having a set of electronic
circuits on one small plate ("chip") of semiconductor material, normally silicon.
The metal–oxide–semiconductor field-effect transistor (MOSFET), also known as the
MOS transistor, was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in
1959.[10][11][12] The MOSFET's advantages include high scalability,[13] affordability,[14] low
power consumption, and high transistor density.[15] Its rapid on–off electronic switching speed also makes it ideal for generating pulse trains,[16] the basis for electronic digital signals,[17][18] in contrast to BJTs, which generate analog signals resembling sine waves more slowly.[16] Along with MOS large-scale integration (LSI), these
factors make the MOSFET an important switching device for digital circuits.[19] The
MOSFET revolutionized the electronics industry,[20][21] and is the most
common semiconductor device.[11][22]
In the early days of integrated circuits, each chip was limited to only a few transistors,
and the low degree of integration meant the design process was relatively simple.
Manufacturing yields were also quite low by today's standards. The wide adoption of the MOSFET transistor by the early 1970s led to the first large-scale integration (LSI) chips with more than 10,000 transistors on a single chip.[23] Following the wide adoption of CMOS, a type of MOSFET logic, by the 1980s, millions and then billions of MOSFETs could be placed on one chip as the technology progressed,[24] and good designs required thorough planning, giving rise to new design methods. The transistor count of devices and total production rose to unprecedented heights. The total number of transistors produced up to 2018 has been estimated at 1.3×10²² (13 sextillion).[25]
The wireless revolution (the introduction and proliferation of wireless networks) began in
the 1990s and was enabled by the wide adoption of MOSFET-based RF power
amplifiers (power MOSFET and LDMOS) and RF circuits (RF CMOS).[26][27][28] Wireless
networks allowed for public digital transmission without the need for cables, leading
to digital television, GPS, satellite radio, wireless Internet and mobile phones through
the 1990s–2000s.

Properties
An advantage of digital circuits when compared to analog circuits is that signals
represented digitally can be transmitted without degradation caused by noise.[29] For
example, a continuous audio signal transmitted as a sequence of 1s and 0s can be reconstructed without error, provided the noise picked up in transmission is not enough to prevent identification of the 1s and 0s.
In a digital system, a more precise representation of a signal can be obtained by using
more binary digits to represent it. While this requires more digital circuits to process the
signals, each digit is handled by the same kind of hardware, resulting in an
easily scalable system. In an analog system, additional resolution requires fundamental
improvements in the linearity and noise characteristics of each step of the signal chain.
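As a rough numerical sketch of this scaling (assuming, purely for illustration, a converter with a 1.0 V full-scale range), each added bit halves the quantization step while the hardware that handles each digit stays uniform:

```python
# Illustrative sketch: resolution of a digital representation vs. digit count.
# Assumption (not from the article): a 1.0 V full-scale range.
FULL_SCALE = 1.0  # volts

for bits in (8, 12, 16, 24):
    levels = 2 ** bits
    step = FULL_SCALE / levels          # smallest distinguishable change
    print(f"{bits:2d} bits -> {levels:>10,} levels, step = {step:.3e} V")
```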
With computer-controlled digital systems, new functions can be added through software
revision and no hardware changes are needed. Often this can be done outside of the
factory by updating the product's software. This way, the product's design errors can be
corrected even after the product is in a customer's hands.
Information storage can be easier in digital systems than in analog ones. The noise
immunity of digital systems permits data to be stored and retrieved without degradation.
In an analog system, noise from aging and wear degrades the stored information. In a
digital system, as long as the total noise is below a certain level, the information can be
recovered perfectly. Even when more significant noise is present, the use
of redundancy permits the recovery of the original data provided too many errors do not
occur.
In some cases, digital circuits use more energy than analog circuits to accomplish the same tasks, thus producing more heat, which adds complexity to the circuits, such as the inclusion of heat sinks. In portable or battery-powered systems this can limit
the use of digital systems. For example, battery-powered cellular phones often use a
low-power analog front-end to amplify and tune the radio signals from the base station.
However, a base station has grid power and can use power-hungry, but very
flexible software radios. Such base stations can easily be reprogrammed to process the
signals used in new cellular standards.
Many useful digital systems must translate from continuous analog signals to discrete
digital signals. This causes quantization errors. Quantization error can be reduced if the
system stores enough digital data to represent the signal to the desired degree
of fidelity. The Nyquist–Shannon sampling theorem provides an important guideline as
to how much digital data is needed to accurately portray a given analog signal.
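A minimal sketch of that guideline (the 8 kHz sample rate and the tone frequencies are assumed for illustration): a tone above half the sample rate folds down to a false alias frequency, while tones below it are captured faithfully.

```python
import math  # not strictly needed here, but typical for signal sketches

# Per Nyquist-Shannon, the sample rate fs must exceed twice the signal
# frequency; otherwise the samples match a lower-frequency alias.
def alias_frequency(f_signal: float, fs: float) -> float:
    """Apparent frequency of a tone of f_signal Hz sampled at fs Hz."""
    f = f_signal % fs
    return min(f, fs - f)

fs = 8000.0                          # assumed sample rate in Hz
for f in (1000.0, 3000.0, 5000.0):   # 5 kHz exceeds fs/2 = 4 kHz
    print(f"{f:6.0f} Hz sampled at {fs:.0f} Hz appears as "
          f"{alias_frequency(f, fs):.0f} Hz")
```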
In some systems, if a single piece of digital data is lost or misinterpreted, the meaning of
large blocks of related data can completely change. For example, a single-bit error in
audio data stored directly as linear pulse-code modulation causes, at worst, a single
click. Nevertheless, many people use audio compression to save storage space and
download time, even though a single bit error may cause a large disruption.
Because of the cliff effect, it can be difficult for users to tell if a particular system is right
on the edge of failure, or if it can tolerate much more noise before failing. Digital fragility
can be reduced by designing a digital system for robustness. For example, a parity
bit or other error management method can be inserted into the signal path. These
schemes help the system detect errors, and then either correct the errors, or request
retransmission of the data.
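A minimal sketch of the parity-bit scheme just mentioned (even parity over an illustrative 7-bit word): any single flipped bit changes the parity and is detected, though not located or corrected.

```python
# Even-parity sketch: append a bit so every valid word has an even
# number of 1s; a single-bit error flips the parity and is detected.
def add_even_parity(bits: list[int]) -> list[int]:
    return bits + [sum(bits) % 2]

def parity_ok(word: list[int]) -> bool:
    return sum(word) % 2 == 0

word = add_even_parity([1, 0, 1, 1, 0, 0, 1])
assert parity_ok(word)
word[3] ^= 1                                   # inject a single-bit error
print("error detected:", not parity_ok(word)) # -> True
```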

Construction

A binary clock, hand-wired on breadboards

A digital circuit is typically constructed from small electronic circuits called logic gates that can be used to create combinational logic. Each logic gate is designed to perform a function of Boolean logic when acting on logic signals. A logic gate is generally created from one or more electrically controlled switches, usually transistors, though thermionic valves have seen historic use. The output of a logic gate can, in turn, control or feed into more logic gates.
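As an illustration of gates feeding into gates, the sketch below treats NAND as the primitive switch circuit and derives the other standard functions from it (NAND is functionally complete; modelling it in Python rather than hardware is of course only a behavioural toy):

```python
# NAND as the universal primitive; other gates are "wired" from it,
# exactly as combinational logic is built from gate outputs.
def NAND(a: int, b: int) -> int:
    return 1 - (a & b)

def NOT(a):     return NAND(a, a)
def AND(a, b):  return NOT(NAND(a, b))
def OR(a, b):   return NAND(NOT(a), NOT(b))
def XOR(a, b):  return AND(OR(a, b), NAND(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), XOR(a, b))
```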
Another form of digital circuit is constructed from lookup tables (many are sold as "programmable logic devices", though other kinds of PLDs exist). Lookup tables can
perform the same functions as machines based on logic gates, but can be easily
reprogrammed without changing the wiring. This means that a designer can often repair
design errors without changing the arrangement of wires. Therefore, in small volume
products, programmable logic devices are often the preferred solution. They are usually
designed by engineers using electronic design automation software.
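A minimal sketch of the lookup-table idea (a hypothetical 3-input LUT; real PLD and FPGA tooling differs): the 8-entry table is the program, and reprogramming means loading a different table rather than rewiring.

```python
# A 3-input lookup table: the inputs form an address into the table,
# and the stored entry is the output.
def make_lut(table: list[int]):
    def lut(a: int, b: int, c: int) -> int:
        return table[(a << 2) | (b << 1) | c]
    return lut

# "Program" for a majority function: output 1 when two or more inputs are 1.
majority = make_lut([0, 0, 0, 1, 0, 1, 1, 1])
print(majority(1, 1, 0))   # -> 1
print(majority(0, 0, 1))   # -> 0
```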
Integrated circuits consist of multiple transistors on one silicon chip, and are the least expensive way to make large numbers of interconnected logic gates. Integrated circuits are usually interconnected on a printed circuit board, which holds the electrical components and connects them together with copper traces.

Design
Engineers use many methods to minimize logic redundancy in order to reduce the
circuit complexity. Reduced complexity reduces component count and potential errors
and therefore typically reduces cost. Logic redundancy can be removed by several well-
known techniques, such as binary decision diagrams, Boolean algebra, Karnaugh
maps, the Quine–McCluskey algorithm, and the heuristic computer method. These
operations are typically performed within a computer-aided design system.
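For illustration, the brute-force core of such a minimization check fits in a few lines: the unreduced expression (a AND b) OR (a AND NOT b) OR (a AND c) collapses, Karnaugh-map style, to just a, and exhaustive truth-table comparison confirms the equivalence (the expression itself is invented for the example):

```python
from itertools import product

# Verify a Boolean reduction by exhaustive truth table; this is the
# brute-force idea behind tools such as Quine-McCluskey minimizers.
def original(a, b, c):  return (a and b) or (a and not b) or (a and c)
def minimized(a, b, c): return a

equivalent = all(
    bool(original(*v)) == bool(minimized(*v))
    for v in product([False, True], repeat=3)
)
print("reduction is valid:", equivalent)   # -> True
```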
Embedded systems with microcontrollers and programmable logic controllers are often
used to implement digital logic for complex systems that don't require optimal
performance. These systems are usually programmed by software engineers or by
electricians, using ladder logic.
Representation
Representations are crucial to an engineer's design of digital circuits. To choose
representations, engineers consider different types of digital systems.
The classical way to represent a digital circuit is with an equivalent set of logic gates.
Each logic symbol is represented by a different shape. The actual set of shapes was
introduced in 1984 under IEEE/ANSI standard 91-1984 and is now in common use by
integrated circuit manufacturers.[30] Another way is to construct an equivalent system of
electronic switches (usually transistors). This can be represented as a truth table.
Most digital systems divide into combinational and sequential systems. A combinational
system always presents the same output when given the same inputs. A sequential
system is a combinational system with some of the outputs fed back as inputs. This
makes the digital machine perform a sequence of operations. The simplest sequential
system is probably a flip flop, a mechanism that represents a binary digit or "bit".
Sequential systems are often designed as state machines. In this way, engineers can
design a system's gross behavior, and even test it in a simulation, without considering
all the details of the logic functions.
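A minimal sketch of that simplest sequential element, a D flip-flop modelled as one bit of stored state (a behavioural toy, not a circuit-level model): its output depends on what was previously stored, not only on the present input.

```python
# A D flip-flop as one bit of state: on each clock edge it captures
# its input, making the circuit sequential rather than combinational.
class DFlipFlop:
    def __init__(self) -> None:
        self.q = 0                      # the stored bit

    def clock(self, d: int) -> int:
        """On a clock edge, capture input d and return the new output."""
        self.q = d
        return self.q

ff = DFlipFlop()
for d in [1, 1, 0, 1]:
    print(ff.clock(d), end=" ")        # -> 1 1 0 1
```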
Sequential systems divide into two further subcategories. "Synchronous" sequential
systems change state all at once when a clock signal changes state. "Asynchronous"
sequential systems propagate changes whenever inputs change. Synchronous
sequential systems are made of well-characterized asynchronous circuits such as flip-
flops, that change only when the clock changes, and which have carefully designed
timing margins.
For logic simulation, digital circuit representations have digital file formats that can be
processed by computer programs.
Synchronous systems

A 4-bit ring counter using D-type flip flops is an example of synchronous logic. Each device is connected to the clock signal, and all update together.

Main article: synchronous logic


The usual way to implement a synchronous sequential state machine is to divide it into
a piece of combinational logic and a set of flip flops called a state register. The state
register represents the state as a binary number. The combinational logic produces the
binary representation for the next state. On each clock cycle, the state register captures
the feedback generated from the previous state of the combinational logic and feeds it
back as an unchanging input to the combinational part of the state machine. The clock
rate is limited by the most time-consuming logic calculation in the combinational logic.
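A minimal behavioural sketch of that structure (the state coding and the detected pattern, two consecutive 1s, are invented for illustration): a pure next-state function plays the combinational logic, and a single variable plays the state register.

```python
# Synchronous state machine sketch: combinational next-state logic
# plus a state register updated once per "clock edge".
def next_state(state: int, inp: int) -> int:
    """Combinational logic: purely a function of present state and input."""
    if inp == 0:
        return 0                   # saw a 0: restart the count
    return min(state + 1, 2)       # count consecutive 1s, saturating at 2

state = 0                          # the state register
for bit in [1, 0, 1, 1, 1]:
    state = next_state(state, bit) # clock edge: register captures new state
    print("detected" if state == 2 else ".", end=" ")
# -> . . . detected detected
```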
Asynchronous systems
Most digital logic is synchronous because it is easier to create and verify a synchronous
design. However, asynchronous logic has the advantage of its speed not being
constrained by an arbitrary clock; instead, it runs at the maximum speed of its logic
gates.[a] Building an asynchronous system using faster parts makes the circuit faster.
Nevertheless, most systems need to accept external unsynchronized signals into their
synchronous logic circuits. This interface is inherently asynchronous and must be
analyzed as such. Examples of widely used asynchronous circuits include synchronizer
flip-flops, switch debouncers and arbiters.
Asynchronous logic components can be hard to design because all possible states, in
all possible timings must be considered. The usual method is to construct a table of the
minimum and maximum time that each such state can exist and then adjust the circuit
to minimize the number of such states. The designer must force the circuit to
periodically wait for all of its parts to enter a compatible state (this is called "self-
resynchronization"). Without careful design, it is easy to accidentally produce
asynchronous logic that is unstable—that is—real electronics will have unpredictable
results because of the cumulative delays caused by small variations in the values of the
electronic components.
Register transfer systems

Example of a simple circuit with a toggling output. The inverter forms the combinational logic in this circuit, and
the register holds the state.

Many digital systems are data flow machines. These are usually designed using
synchronous register transfer logic and written with hardware description
languages such as VHDL or Verilog.
In register transfer logic, binary numbers are stored in groups of flip flops
called registers. A sequential state machine controls when each register accepts new
data from its input. The outputs of each register are a bundle of wires called a bus that
carries that number to other calculations. A calculation is simply a piece of
combinational logic. Each calculation also has an output bus, and these may be
connected to the inputs of several registers. Sometimes a register will have a multiplexer on its input so that it can store a number from any one of several buses.[b]
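A minimal sketch of one register-transfer step (the register names, the 8-bit bus width, and the adder are all illustrative; a real design would express this in VHDL or Verilog):

```python
# Register-transfer sketch: registers hold numbers, buses carry them to
# a piece of combinational logic, and a control step decides which
# register captures the result on the clock edge.
registers = {"A": 5, "B": 7, "SUM": 0}

def adder(x: int, y: int) -> int:      # the combinational "calculation"
    return (x + y) & 0xFF              # 8-bit bus width assumed

# One control step: drive A and B onto the adder's input buses, then
# load the adder's output bus into SUM.
bus_out = adder(registers["A"], registers["B"])
registers["SUM"] = bus_out
print(registers)   # -> {'A': 5, 'B': 7, 'SUM': 12}
```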
Asynchronous register-transfer systems (such as computers) have a general solution. In
the 1980s, some researchers discovered that almost all synchronous register-transfer
machines could be converted to asynchronous designs by using first-in-first-out
synchronization logic. In this scheme, the digital machine is characterized as a set of
data flows. In each step of the flow, a synchronization circuit determines when the
outputs of that step are valid and instructs the next stage when to use these outputs.[citation needed]

Computer design

Intel 80486DX2 microprocessor

The most general-purpose register-transfer logic machine is a computer. This is basically an automatic binary abacus. The control unit of a computer is usually designed as a microprogram run by a microsequencer. A microprogram is much like a player-piano roll. Each table entry of the microprogram commands the state of every bit that controls the computer. The sequencer then counts, and the count addresses the memory or combinational logic machine that contains the microprogram. The bits from the microprogram control the arithmetic logic unit, memory and other parts of the computer, including the microsequencer itself. In this way, the complex task of designing the controls of a computer is reduced to a simpler task of programming a collection of much simpler logic machines.
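A toy sketch of that arrangement (the three control fields below are invented; real microwords control every latch in the machine): a counter steps through a microprogram table whose entries set the control bits for each step.

```python
# Microsequencer sketch: the sequencer just counts, the count addresses
# the microprogram table, and each microword sets control bits.
microprogram = [
    {"load_a": 1, "load_b": 0, "add": 0},   # step 0: A <- input
    {"load_a": 0, "load_b": 1, "add": 0},   # step 1: B <- input
    {"load_a": 0, "load_b": 0, "add": 1},   # step 2: SUM <- A + B
]

a = b = total = 0
inputs = iter([3, 4])
for upc in range(len(microprogram)):        # the microprogram counter
    word = microprogram[upc]
    if word["load_a"]: a = next(inputs)
    if word["load_b"]: b = next(inputs)
    if word["add"]:    total = a + b
print(total)   # -> 7
```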
Almost all computers are synchronous. However, asynchronous computers have also
been built. One example is the ASPIDA DLX core.[32] Another was offered by ARM
Holdings.[33] They don't, however, have any speed advantages because modern
computer designs already run at the speed of their slowest component, usually memory.
They do use somewhat less power because a clock distribution network is not needed.
An unexpected advantage is that asynchronous computers do not produce spectrally-
pure radio noise. They are used in some radio-sensitive mobile-phone base-station
controllers. They may be more secure in cryptographic applications because their electrical and radio emissions can be more difficult to decode.[33]
Computer architecture
Computer architecture is a specialized engineering activity that tries to arrange the
registers, calculation logic, buses and other parts of the computer in the best way
possible for a specific purpose. Computer architects have put a lot of work into reducing
the cost and increasing the speed of computers in addition to boosting their immunity to
programming errors. An increasingly common goal of computer architects is to reduce
the power used in battery-powered computer systems, such as smartphones.
Design issues in digital circuits

Digital circuits are made from analog components. The design must assure that the
analog nature of the components doesn't dominate the desired digital behavior. Digital
systems must manage noise and timing margins, parasitic inductances and
capacitances.
Bad designs have intermittent problems such as glitches (vanishingly fast pulses that may trigger some logic but not others) and runt pulses that do not reach valid threshold voltages.
Additionally, where clocked digital systems interface to analog systems or systems that
are driven from a different clock, the digital system can be subject to metastability where
a change to the input violates the setup time for a digital input latch.
Since digital circuits are made from analog components, digital circuits calculate more
slowly than low-precision analog circuits that use a similar amount of space and power.
However, the digital circuit will calculate more repeatably, because of its high noise
immunity.
Automated design tools

Much of the effort of designing large logic machines has been automated through the
application of electronic design automation (EDA).
Simple truth-table-style descriptions of logic are often optimized with EDA tools that automatically produce reduced systems of logic gates or smaller lookup tables that still produce the desired outputs. The most common example of this kind of software is
the Espresso heuristic logic minimizer. Optimizing large logic systems may be done
using the Quine–McCluskey algorithm or binary decision diagrams. There are promising
experiments with genetic algorithms and annealing optimizations.
To automate costly engineering processes, some EDA can take state tables that
describe state machines and automatically produce a truth table or a function table for
the combinational logic of a state machine. The state table is a piece of text that lists
each state, together with the conditions controlling the transitions between them and
their associated output signals.
Often, real logic systems are designed as a series of sub-projects, which are combined
using a tool flow. The tool flow is usually controlled with the help of a scripting language,
a simplified computer language that can invoke the software design tools in the right
order. Tool flows for large logic systems such as microprocessors can be thousands of
commands long, and combine the work of hundreds of engineers. Writing and
debugging tool flows is an established engineering specialty in companies that produce
digital designs. The tool flow usually terminates in a detailed computer file or set of files
that describe how to physically construct the logic. Often it consists of instructions on
how to draw the transistors and wires on an integrated circuit or a printed circuit board.
Parts of tool flows are debugged by verifying the outputs of simulated logic against
expected inputs. The test tools take computer files with sets of inputs and outputs and
highlight discrepancies between the simulated behavior and the expected behavior.
Once the input data is believed to be correct, the design itself must still be verified for
correctness. Some tool flows verify designs by first producing a design, then scanning
the design to produce compatible input data for the tool flow. If the scanned data
matches the input data, then the tool flow has probably not introduced errors.
The functional verification data are usually called test vectors. The functional test
vectors may be preserved and used in the factory to test whether newly constructed
logic works correctly. However, functional test patterns don't discover all fabrication
faults. Production tests are often designed by automatic test pattern generation software
tools. These generate test vectors by examining the structure of the logic and
systematically generating tests targeting particular potential faults. This way the fault
coverage can closely approach 100%, provided the design is properly made testable
(see next section).
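As a brute-force illustration of what such tools accomplish (the tiny circuit and the stuck-at-0 fault are invented; real ATPG works structurally rather than exhaustively), the sketch below finds the input vector that distinguishes a good circuit from a faulty one:

```python
from itertools import product

# Fault-test sketch for y = (a AND b) OR c: find input vectors whose
# output differs when the AND gate's output is stuck at 0, i.e. vectors
# that detect that particular fault.
def good(a, b, c):
    return (a & b) | c

def faulty(a, b, c):           # same circuit with the AND output stuck-at-0
    return 0 | c

tests = [v for v in product((0, 1), repeat=3) if good(*v) != faulty(*v)]
print("vectors detecting the fault:", tests)   # -> [(1, 1, 0)]
```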
Once a design exists, and is verified and testable, it often needs to be processed to be
manufacturable as well. Modern integrated circuits have features smaller than the
wavelength of the light used to expose the photoresist. Software designed for manufacturability adds interference patterns to the exposure masks to eliminate open circuits and to enhance the masks' contrast.
Design for testability
There are several reasons for testing a logic circuit. When the circuit is first developed, it is necessary to verify that the designed circuit meets the required functional and timing specifications. When multiple copies of a correctly designed circuit are being manufactured, it is essential to test each copy to ensure that the manufacturing process has not introduced any flaws.[34]
A large logic machine (say, with more than a hundred logical variables) can have an
astronomical number of possible states. Factory testing every state of such a machine is infeasible: even if testing each state took only a microsecond, there are more possible states than there are microseconds since the universe began.
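The arithmetic behind that claim is easy to check (taking roughly 13.8 billion years as the age of the universe):

```python
# 100 binary state variables give 2**100 states; the universe's age in
# microseconds is orders of magnitude smaller.
states = 2 ** 100
microseconds = 13.8e9 * 365.25 * 24 * 3600 * 1e6
print(f"{states:.2e} states vs {microseconds:.2e} microseconds")
# -> 1.27e+30 states vs 4.35e+23 microseconds
```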
Large logic machines are almost always designed as assemblies of smaller logic
machines. To save time, the smaller sub-machines are isolated by permanently
installed design for test circuitry, and are tested independently. One common testing
scheme provides a test mode that forces some part of the logic machine to enter a test
cycle. The test cycle usually exercises large independent parts of the machine.
Boundary scan is a common test scheme that uses serial communication with external
test equipment through one or more shift registers known as scan chains. Serial scans
have only one or two wires to carry the data, and minimize the physical size and
expense of the infrequently used test logic. After all the test data bits are in place, the
design is reconfigured to be in normal mode and one or more clock pulses are applied,
to test for faults (e.g. stuck-at low or stuck-at high) and capture the test result into flip-
flops or latches in the scan shift register(s). Finally, the result of the test is shifted out to
the block boundary and compared against the predicted good machine result.
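A behavioural sketch of that serial scan mechanism (four flip-flops and an invented bit pattern; the normal-mode capture step is elided, so the stimulus simply returns):

```python
# Scan-chain sketch: flip-flops reconfigured into one shift register so
# test data can be shifted in serially and results shifted back out.
def shift(chain, bits_in):
    """Shift bits_in into the chain one per clock; return (chain, bits_out)."""
    out = []
    for b in bits_in:
        out.append(chain[-1])          # last flip-flop drives the output pin
        chain = [b] + chain[:-1]
    return chain, out

chain = [0, 0, 0, 0]                   # four scan flip-flops
chain, _ = shift(chain, [1, 0, 1, 1])  # serially load the test stimulus
# ...switch to normal mode, pulse the clock, capture the response...
_, result = shift(chain, [0, 0, 0, 0]) # shift the captured bits back out
print(result)  # -> [1, 0, 1, 1] (no capture modelled, so the stimulus returns)
```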
In a board-test environment, serial to parallel testing has been formalized as
the JTAG standard.
Trade-offs
Cost
Since a digital system may use many logic gates, the overall cost of building a computer
correlates strongly with the cost of a logic gate. In the 1930s, the earliest digital logic
systems were constructed from telephone relays because these were inexpensive and
relatively reliable.
The earliest integrated circuits were constructed to save weight and permit the Apollo
Guidance Computer to control an inertial guidance system for a spacecraft. The first
integrated circuit logic gates cost nearly US$50, which in 2021 would be equivalent to
$458. Mass-produced gates on integrated circuits became the least-expensive method
to construct digital logic.
With the rise of integrated circuits, reducing the absolute number of chips used
represented another way to save costs. The goal of a designer is not just to make the
simplest circuit, but to keep the component count down. Sometimes this results in more
complicated designs with respect to the underlying digital logic but nevertheless
reduces the number of components, board size, and even power consumption.
Reliability
Another major motive for reducing component count on printed circuit boards is to
reduce the manufacturing defect rate due to failed soldered connections and increase
reliability. Defect and failure rates tend to increase along with the total number of
component pins.
The failure of a single logic gate may cause a digital machine to fail. Where additional
reliability is required, redundant logic can be provided. Redundancy adds cost and
power consumption over a non-redundant system.
The reliability of a logic gate can be described by its mean time between failure (MTBF).
Digital machines first became useful when the MTBF for a switch increased above a few
hundred hours. Even so, many of these machines had complex, well-rehearsed repair procedures, and would be nonfunctional for hours because a tube burned out or a moth got stuck in a relay. Modern transistorized integrated circuit logic gates have MTBFs greater than 82 billion hours (8.2×10¹⁰ h).[35] This level of reliability is required because integrated circuits have so many logic gates.
Fan-out
Fan-out describes how many logic inputs can be controlled by a single logic output without exceeding the electrical current ratings of the gate outputs.[36] The minimum practical fan-out is about five.[citation needed] Modern electronic logic gates using CMOS transistors for switches have higher fan-outs.
Speed
The switching speed describes how long it takes a logic output to change from true to false, or vice versa. Faster logic can accomplish more operations in less time. Modern electronic digital logic routinely switches at 5 GHz, and some laboratory systems switch at more than 1 THz.[citation needed]

Logic families
Main article: Logic family
Digital design started with relay logic, which was relatively inexpensive and reliable but slow. Occasionally a mechanical failure would occur. Fan-outs were typically about 10, limited by the resistance of the coils and arcing on the contacts from high voltages.
Later, vacuum tubes were used. These were very fast, but generated heat, and were
unreliable because the filaments would burn out. Fan-outs were typically 5 to 7, limited
by the heating from the tubes' current. In the 1950s, special computer tubes were
developed with filaments that omitted volatile elements like silicon. These ran for
hundreds of thousands of hours.
The first semiconductor logic family was resistor–transistor logic. This was a thousand
times more reliable than tubes, ran cooler, and used less power, but had a very low fan-
out of 3. Diode–transistor logic improved the fan-out up to about 7, and reduced the
power. Some DTL designs used two power-supplies with alternating layers of NPN and
PNP transistors to increase the fan-out.
Transistor–transistor logic (TTL) was a great improvement over these. In early devices,
fan-out improved to 10, and later variations reliably achieved 20. TTL was also fast, with
some variations achieving switching times as low as 20 ns. TTL is still used in some
designs.
Emitter coupled logic is very fast but uses a lot of power. It was extensively used for
high-performance computers, such as the Illiac IV, made up of many medium-scale
components.
By far, the most common digital integrated circuits built today use CMOS logic, which is
fast, offers high circuit density and low power per gate. This is used even in large, fast
computers, such as the IBM System z.

Recent developments
In 2009, researchers discovered that memristors can implement Boolean state storage and provide a complete logic family in very small amounts of space and power, using familiar CMOS semiconductor processes.[37]
The discovery of superconductivity has enabled the development of rapid single flux
quantum (RSFQ) circuit technology, which uses Josephson junctions instead of
transistors. Most recently, attempts are being made to construct purely optical
computing systems capable of processing digital information using nonlinear optical
elements.

See also
 De Morgan's laws
 Logical effort
 Logic optimization
 Microelectronics
 Unconventional computing

Notes
1. ^ An example of an early asynchronous digital computer was
the Jaincomp-B1 manufactured by the Jacobs Instrument
Company in 1951.[31]
2. ^ Alternatively, the outputs of several items may be
connected to a bus through buffers that can turn off the output
of all of the devices except one.

References
1. ^ Null, Linda; Lobur, Julia (2006). The Essentials of Computer Organization and Architecture. Jones & Bartlett Publishers. p. 121. ISBN 978-0-7637-3769-6. "We can build logic diagrams (which in turn lead to digital circuits) for any Boolean expression..."
2. ^ Peirce, C. S., "Letter, Peirce to A. Marquand", dated 1886, Writings of Charles S. Peirce, v. 5, 1993, pp. 541–3. Google Preview. See Burks, Arthur W., "Review: Charles S. Peirce, The new elements of mathematics", Bulletin of the American Mathematical Society v. 84, n. 5 (1978), pp. 913–18, see 917. PDF Eprint.
3. ^ In 1946, ENIAC required an estimated 174 kW. By comparison, a modern laptop computer may use around 30 W, nearly six thousand times less. "Approximate Desktop & Notebook Power Usage". University of Pennsylvania. Archived from the original on 3 June 2009. Retrieved 20 June 2009.
4. ^ "A Computer Pioneer Rediscovered, 50 Years On". The New York Times. April 20, 1994.
5. ^ Lee, Thomas H. (2003). The Design of CMOS Radio-Frequency Integrated Circuits (PDF). Cambridge University Press. ISBN 9781139643771. Archived (PDF) from the original on 2022-10-09.
6. ^ Puers, Robert; Baldi, Livio; Voorde, Marcel Van de; Nooten, Sebastiaan E. van (2017). Nanoelectronics: Materials, Devices, Applications, 2 Volumes. John Wiley & Sons. p. 14. ISBN 9783527340538.
7. ^ Lavington, Simon (1998). A History of Manchester Computers (2nd ed.). Swindon: The British Computer Society. pp. 34–35.
8. ^ "The Chip that Jack Built". Texas Instruments. 2008. Retrieved 29 May 2008.
9. ^ Bassett, Ross Knox (2007). To the Digital Age: Research Labs, Start-up Companies, and the Rise of MOS Technology. Johns Hopkins University Press. p. 46. ISBN 9780801886393.
10. ^ "1960 - Metal Oxide Semiconductor (MOS) Transistor Demonstrated". The Silicon Engine. Computer History Museum.
11. ^ a b "Who Invented the Transistor?". Computer History Museum. 4 December 2013. Retrieved 20 July 2019.
12. ^ "Triumph of the MOS Transistor". YouTube. Computer History Museum. 6 August 2010. Archived from the original on 2021-12-11. Retrieved 21 July 2019.
13. ^ Motoyoshi, M. (2009). "Through-Silicon Via (TSV)". Proceedings of the IEEE. 97 (1): 43–48. doi:10.1109/JPROC.2008.2007462. ISSN 0018-9219. S2CID 29105721.
14. ^ "Tortoise of Transistors Wins the Race - CHM Revolution". Computer History Museum. Retrieved 22 July 2019.
15. ^ "Transistors Keep Moore's Law Alive". EETimes. 12 December 2018. Retrieved 18 July 2019.
16. ^ a b "Applying MOSFETs to Today's Power-Switching Designs". Electronic Design. 23 May 2016. Retrieved 10 August 2019.
17. ^ B. Somanathan Nair (2002). Digital Electronics and Logic Design. PHI Learning Pvt. Ltd. p. 289. ISBN 9788120319561. "Digital signals are fixed-width pulses, which occupy only one of two levels of amplitude."
18. ^ Joseph Migga Kizza (2005). Computer Network Security. Springer Science & Business Media. ISBN 9780387204734.
19. ^ 2000 Solved Problems in Digital Electronics. Tata McGraw-Hill Education. 2005. p. 151. ISBN 978-0-07-058831-8.
20. ^ Chan, Yi-Jen (1992). Studies of InAlAs/InGaAs and GaInP/GaAs Heterostructure FETs for High Speed Applications. University of Michigan. p. 1. "The Si MOSFET has revolutionized the electronics industry and as a result impacts our daily lives in almost every conceivable way."
21. ^ Grant, Duncan Andrew; Gowar, John (1989). Power MOSFETs: Theory and Applications. Wiley. p. 1. ISBN 9780471828679. "The metal-oxide-semiconductor field-effect transistor (MOSFET) is the most commonly used active device in the very large-scale integration of digital integrated circuits (VLSI). During the 1970s these components revolutionized electronic signal processing, control systems and computers."
22. ^ Golio, Mike; Golio, Janet (2018). RF and Microwave Passive and Active Technologies. CRC Press. pp. 18–2. ISBN 9781420006728.
23. ^ Hittinger, William C. (1973). "Metal-Oxide-Semiconductor Technology". Scientific American. 229 (2): 48–59. Bibcode:1973SciAm.229b..48H. doi:10.1038/scientificamerican0873-48. ISSN 0036-8733. JSTOR 24923169.
24. ^ Peter Clarke (14 October 2005). "Intel enters billion-transistor processor era". EE Times.
25. ^ "13 Sextillion & Counting: The Long & Winding Road to the Most Frequently Manufactured Human Artifact in History". Computer History Museum. April 2, 2018. Retrieved 12 October 2020.
26. ^ Golio, Mike; Golio, Janet (2018). RF and Microwave Passive and Active Technologies. CRC Press. pp. ix, I-1, 18–2. ISBN 9781420006728.
27. ^ Rappaport, T. S. (November 1991). "The wireless revolution". IEEE Communications Magazine. 29 (11): 52–71. doi:10.1109/35.109666. S2CID 46573735.
28. ^ "The wireless revolution". The Economist. January 21, 1999. Retrieved 12 September 2019.
29. ^ Paul Horowitz and Winfield Hill, The Art of Electronics (2nd ed.). Cambridge University Press, Cambridge, 1989. ISBN 0-521-37095-7. p. 471.
30. ^ Maini, A. K. (2007). Digital Electronics: Principles, Devices and Applications. Chichester, England: John Wiley & Sons Ltd.
31. ^ Pentagon symposium: Commercially Available General Purpose Electronic Digital Computers of Moderate Price, Washington, D.C., 14 May 1952.
32. ^ "ASODA sync/async DLX Core". OpenCores.org. Retrieved September 5, 2014.
33. ^ a b Clarke, Peter. "ARM Offers First Clockless Processor Core". eetimes.com. UBM Tech (Universal Business Media). Retrieved 5 September 2014.
34. ^ Brown, S.; Vranesic, Z. (2009). Fundamentals of Digital Logic with VHDL Design (3rd ed.). New York, N.Y.: McGraw Hill.
35. ^ MIL-HDBK-217F notice 2, section 5.3, for 100,000-gate 0.8-micrometre CMOS commercial ICs at 40 °C; failure rates in 2010 are better, because line sizes have decreased to 0.045 micrometres and fewer off-chip connections are needed per gate.
36. ^ Kleitz, William (2002). Digital and Microprocessor Fundamentals: Theory and Application (4th ed.). Upper Saddle River, NJ: Pearson/Prentice Hall.
37. ^ Lehtonen, Eero; Laiho, Mika (2009). Stateful Implication Logic with Memristors. 2009 IEEE/ACM International Symposium on Nanoscale Architectures. pp. 33–36. doi:10.1109/NANOARCH.2009.5226356. ISBN 978-1-4244-4957-6.

Further reading
 Douglas Lewin, Logical Design of Switching Circuits, Nelson, 1974.
 R. H. Katz, Contemporary Logic Design, The Benjamin/Cummings Publishing Company, 1994.
 P. K. Lala, Practical Digital Logic Design and Testing, Prentice Hall, 1996.
 Y. K. Chan and S. Y. Lim, "Synthetic Aperture Radar (SAR) Signal Generation", Progress In Electromagnetics Research B, Vol. 1, 269–290, 2008. Faculty of Engineering & Technology, Multimedia University, Jalan Ayer Keroh Lama, Bukit Beruang, Melaka 75450, Malaysia.

External links

Wikimedia Commons has media related to Digital electronics.

 Digital Circuit Projects: An Overview of Digital Circuits Through Implementing Integrated Circuits (2014)
 Lessons in Electric Circuits - Volume IV (Digital) at the Wayback Machine (archived 2012-11-27)
 MIT OpenCourseWare introduction to digital design class materials ("6.004: Computation Structures")