Embedded System Notes: Unit 2
Embedded systems were originally designed to work as single, standalone devices. In the current
scenario, however, the addition of different networking options has improved the overall performance of
embedded systems in both economic and technical terms.
The most commonly used network types in embedded systems are bus networks and Ethernet networks.
A bus is used to connect different network devices and to transfer data between them, for example the
serial bus, I2C bus, and CAN bus.
Ethernet-type networks work with the TCP/IP protocol suite.
Examples of embedded networking include CAN, I2C, component, sensor, and serial bus networking.
Telecommunication systems make use of numerous embedded systems, ranging from telephone switches
in the network to mobile phones at the end user.
Computer networking uses dedicated routers and network bridges to route data.
Advanced HVAC systems use networked thermostats for more accurate and efficient control of
temperature, which may change during a day or season.
Home automation systems use wired and wireless networking to control lights, climate, security,
audio, and so on.
Types of Networks
There are different types of networks used in embedded systems, depending on the requirements of the
application. Some of the commonly used networks in embedded systems are:
1. Local Area Network (LAN) - This type of network is used for devices that are located in
close proximity to each other, typically within a single building or campus. LANs are used
for communication between devices like computers, printers, and servers.
2. Wireless Sensor Network (WSN) - WSNs are used for applications that require
communication between a large number of small devices over a wireless medium. These
networks are commonly used in applications like home automation, industrial automation,
and smart cities.
3. Industrial Ethernet - This type of network is used in industrial applications where high-
speed communication is required between devices like programmable logic controllers
(PLCs), human-machine interfaces (HMIs), and sensors. Industrial Ethernet provides high-
speed communication and can operate in harsh environments.
4. Cellular Networks - Cellular networks are used in embedded systems that require
communication over long distances. These networks are commonly used in applications like
fleet management, remote monitoring, and surveillance.
Networking Protocols
Networking protocols are used to establish communication between devices in a network. There are
different types of protocols used in embedded systems, depending on the network type and the
application requirements. Some of the commonly used protocols are:
1. Transmission Control Protocol/Internet Protocol (TCP/IP) - TCP/IP is the most widely used
protocol in the world and is used for communication between devices on the internet. TCP/IP
provides reliable and secure communication between devices.
2. User Datagram Protocol (UDP) - UDP is a lightweight protocol that is used for applications
that require low latency and do not require reliable communication. UDP is commonly used
in applications like streaming media and gaming (a short sending sketch follows this list).
3. Hypertext Transfer Protocol (HTTP) - HTTP is used for communication between web
servers and web clients. HTTP is used to transfer data between the server and the client and
is used in applications like web browsing and e-commerce.
4. Message Queue Telemetry Transport (MQTT) - MQTT is a lightweight protocol that is used
for communication between IoT devices. MQTT is commonly used in applications like home
automation and smart cities.
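As a concrete illustration of UDP, the short sketch below sends one datagram from an embedded Linux device using the POSIX sockets API; the destination address 192.168.1.10, port 5000, and the message string are placeholders chosen for this example.

/* Minimal UDP sender using the POSIX sockets API (illustrative sketch).
 * The destination address, port, and payload below are placeholders. */
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);        /* UDP socket */
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in dest = {0};
    dest.sin_family = AF_INET;
    dest.sin_port   = htons(5000);                     /* destination port */
    inet_pton(AF_INET, "192.168.1.10", &dest.sin_addr);

    const char *msg = "sensor:23.5C";
    /* UDP is connectionless: sendto() transmits one datagram with
     * no handshake, no delivery guarantee, and no retransmission. */
    if (sendto(sock, msg, strlen(msg), 0,
               (struct sockaddr *)&dest, sizeof(dest)) < 0)
        perror("sendto");

    close(sock);
    return 0;
}

Because there is no connection setup and no retransmission, a single sendto() call is enough, which is exactly why UDP suits low-latency streaming and telemetry traffic.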
Design Considerations
2. Security - Embedded systems are often used in applications where security is critical, like
medical devices and industrial automation. Designers need to ensure that the networking
hardware and protocols used provide adequate security.
3. Real-time requirements - Some applications, like industrial automation and control, have
strict real-time requirements. Designers need to ensure that the networking hardware and
protocols used can meet these requirements.
The OSI Model
Our discussion of embedded networking begins with an overview of computer networking systems and
how they function. The earliest conceptual model of computer networks was developed by the
International Organization for Standardization (ISO) in 1984 and is known as the Open System
Interconnection (OSI) model.
The OSI model itself is conceptual in nature - it does not include any actual specifications for network
implementation. However, the OSI model does provide a framework for understanding the components
of a complete network communication system. As we will see, many of today's most commonly
implemented networking technologies use features and protocols that reflect parts of the OSI model.
The OSI model defines a seven-layer architecture for a complete communication system:
1. Application Layer
The application layer is the top-most layer of the OSI model. Data transmissions frequently originate in
the application layer of the origin device and terminate in the application layer of the target device. This
layer deals with the identification of services and communication partners, user authentication, and data
syntax. Some common application layer protocols include hypertext transfer protocol (HTTP), Telnet and
file transfer protocol (FTP).
2. Presentation Layer
The presentation layer is a software layer that formats and encrypts data that will be sent across a
network, ensuring compatibility between the transmitting device and the receiving device. The
presentation layer includes protocols such as ASCII, JPEG, MPEG.
3. Session Layer
For data transfer to occur between applications on separate devices, a session must be created. The
purpose of the session layer is to manage, synchronize, and terminate connectivity between applications,
ensuring coordinated data exchange while minimizing packet loss. The session layer can provide for full-
duplex, half-duplex, or simplex communications.
4. Transport Layer
In the OSI model, the transport layer receives messages from the session layer and divides them into
smaller units that can be efficiently handled by the network layer. In protocols such as TCP/IP, the transport layer
adds a header to each data segment which includes the port of origin and the destination port address -
this is called service point addressing. Service point addressing ensures that a message from the
transmitting computer goes to the correct port once it arrives at the destination computer. The
Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are popular transport layer
protocols for devices that connect to the internet.
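To illustrate service point addressing, the hedged sketch below opens a TCP connection to port 80 of a server using the POSIX sockets API; the IP address shown is a placeholder. The destination port carried in each TCP segment header is what steers the data to the web-server process on the remote machine.

/* Minimal TCP client sketch (POSIX sockets). The server address is a placeholder. */
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);             /* TCP socket */
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_port   = htons(80);                          /* destination port = service point */
    inet_pton(AF_INET, "192.0.2.10", &server.sin_addr);     /* placeholder address */

    /* The port number in the segment header delivers this stream to the
     * web-server process listening on port 80 at the destination host. */
    if (connect(sock, (struct sockaddr *)&server, sizeof(server)) == 0) {
        const char *req = "GET / HTTP/1.0\r\n\r\n";
        send(sock, req, strlen(req), 0);
    } else {
        perror("connect");
    }

    close(sock);
    return 0;
}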
5. Network Layer
The network layer provides the features and functions that transfer data sequences from the host device to
a destination device. Along with routing network traffic and reporting delivery errors, the network layer
divides outgoing messages into packets and assembles incoming packets into messages. Network layer
devices use protocols such as IP, ICMP, and IPX.
6. Data Link Layer
Data packets are encoded and decoded into bits in the data link layer, which may be divided into two sub-
layers: media access control (MAC) and logical link control (LLC). Hardware network interface
controllers are typically assigned a MAC address by the manufacturer that acts as a unique device
identifier and network address within a network segment. While the MAC layer supports physical
addressing, the LLC layer deals with data synchronization, error checking, and flow control. Protocols
for the data link layer include IEEE 802.5/ 802.2, IEEE 802.3/802.2, and the Point-to-point protocol
(PPP).
7. Physical Layer
The physical layer defines the electrical and physical requirements for networked devices with control
over the transmission and reception of unstructured raw data over the network. The physical layer also
manages data encoding and the conversion of digital bits into electrical signals. Devices that operate at
the physical layer include network interface cards (NICs), repeaters, and hubs.
Ports
A connection point that acts as an interface between the computer and external devices like a mouse, printer,
modem, etc. is called a port. Ports are of two types −
Internal port − It connects the motherboard to internal devices like hard disk drive, CD drive,
internal modem, etc.
External port − It connects the motherboard to external devices like modem, mouse, printer,
flash drives, etc.
Serial Port
Serial ports transmit data sequentially, one bit at a time, so they need only one wire to transmit 8 bits;
however, this also makes them slower. Serial ports are usually 9-pin or 25-pin male connectors. They are
also known as COM (communication) ports or RS-232C ports. Devices commonly connected through serial ports include:
External modems
Serial mouse or pointing devices such as trackballs or touchpads
Plotters
Label printers
Serial printers
PDA docking stations
Digital cameras
PC-to-PC connections used by file transfer programs such as Direct Cable Connection, LapLink,
and Interlink
The parameters are set initially, before the peripheral interface is used, by setting appropriate bit
patterns in one or more control registers associated with the serial port:
Baud (bit) rate.
Number of bits per character: usually 8 data bits, although 5, 6 and 7 are allowed.
START bit: always low; it forces a transition from line idle to indicate a new data byte.
Parity/no parity: optional; indicates ODD or EVEN bit parity in the data byte.
Length of the STOP bit (1, 1.5, or 2 bits): the STOP bit is always high and forces the line idle state
at the end of the transmission.
Today, data rate can range as high as 115.2 kbits/s. Typical data rates are 1200, 2400, 4800, 9600,
19,200, 38,400, and 115,200 bits/s.
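On an embedded Linux system these parameters map onto the POSIX termios interface. The sketch below is a minimal example that configures a port for 115,200 baud, 8 data bits, no parity, and 1 stop bit (8N1); the device name /dev/ttyS0 used in the usage line is a placeholder.

/* Configure a serial port for 115200 baud, 8N1, using POSIX termios. */
#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int open_uart_8n1(const char *dev)
{
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return -1; }

    struct termios tio;
    tcgetattr(fd, &tio);

    cfsetispeed(&tio, B115200);            /* baud (bit) rate */
    cfsetospeed(&tio, B115200);

    tio.c_cflag &= ~PARENB;                /* no parity bit */
    tio.c_cflag &= ~CSTOPB;                /* 1 stop bit */
    tio.c_cflag &= ~CSIZE;
    tio.c_cflag |= CS8;                    /* 8 data bits per character */
    tio.c_cflag |= CLOCAL | CREAD;         /* ignore modem lines, enable receiver */
    tio.c_lflag &= ~(ICANON | ECHO | ISIG);/* raw input: no line editing or echo */
    tio.c_iflag &= ~(IXON | IXOFF | ICRNL);/* no software flow control or CR mapping */
    tio.c_oflag &= ~OPOST;                 /* raw output */

    tcsetattr(fd, TCSANOW, &tio);
    return fd;
}

A call such as int fd = open_uart_8n1("/dev/ttyS0"); then returns a file descriptor that ordinary read() and write() calls can use for serial I/O.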
RS-485
The RS-485 standard specifies differential signaling on two lines rather than single-ended signaling with a
voltage referenced to ground. Logic 1 is a differential level below -200 mV, and logic 0 is a level above
+200 mV. Typical line voltage levels from the line drivers range from a minimum of ±1.5 V to a maximum
of about ±6 V.
The differential format produces effective common-mode noise cancellation.
The standard transmission medium is twisted-pair wire. Data rates up to 10 Mbit/s and
distances up to 1,200 m are possible. RS-485 is not a protocol; it is simply an electrical interface.
The RS-422 interface is a multi-drop interface, giving unidirectional communication over a pair of
wires from one transmitter to several receivers, up to 10 unit loads (UL). If the devices receiving the
data wish to communicate back to the transmitter, the designer must use a separate, dedicated bus
between each receiver and the transmitter. (Using this return bus will allow full-duplex
transmissions.) For that reason, RS-422 is seldom used between more than two nodes.
The RS-485 interface, on the other hand, is a bidirectional communication over one pair of wires
between several transceivers. The specification states that the bus can include up to 32 UL worth of
transceivers. Many manufacturers produce fractional-UL transceivers, thereby increasing the
maximum number of devices to well over 100.
The RS-422 and RS-485 interfaces often use the same start bit/data/stop bit format of RS-232. In
fact, several converters exist to go from RS-232 to RS-485 and back. Do keep in mind, however, that
RS-232 is a full-duplex interface, while RS-485 is half-duplex.
Several microcontroller manufacturers provide built-in UARTs that boast special RS-485 abilities.
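Because RS-485 is half-duplex, firmware must enable the transceiver's driver before sending and release it afterwards. The sketch below shows the typical sequence; gpio_write(), uart_send_blocking(), and uart_wait_tx_complete() are hypothetical board-support functions, and the DE/RE pin number is a placeholder.

/* Half-duplex RS-485 transmit sequence (illustrative sketch).
 * gpio_write(), uart_send_blocking() and uart_wait_tx_complete() are
 * hypothetical board-support helpers; the DE/RE pin number is a placeholder. */
#define RS485_DE_RE_PIN 7          /* driver-enable / receiver-enable pin (placeholder) */

extern void gpio_write(int pin, int level);
extern void uart_send_blocking(const unsigned char *buf, int len);
extern void uart_wait_tx_complete(void);

void rs485_send(const unsigned char *buf, int len)
{
    gpio_write(RS485_DE_RE_PIN, 1);   /* enable the line driver, disable the receiver */
    uart_send_blocking(buf, len);     /* shift the bytes out over the UART */
    uart_wait_tx_complete();          /* wait until the last stop bit has left the wire */
    gpio_write(RS485_DE_RE_PIN, 0);   /* release the bus and return to receive mode */
}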
Parallel Port
Parallel ports can send or receive 8 bits (1 byte) at a time. Parallel ports come in the form of 25-pin female
connectors and are used to connect printers, scanners, external hard disk drives, etc.
1. SERIAL DATA COMMUNICATION - BASICS
Within a microcomputer system, data transfer is in parallel, because that is the fastest method.
For transferring data over long distances, however, parallel transmission requires too many wires
and becomes complicated and expensive. Therefore, data to be sent over long distances is converted into
serial form so that it can be sent over a single wire. At the destination, the received serial data is converted
back into parallel form so that it can easily be transferred on the microcomputer buses.
(i) Simplex :
In this mode, the data is transmitted only in one direction over a single communication channel.
(ii) Half-duplex:
In this mode, the data is transmitted in both directions, but only one direction at a time.
i.e., simultaneous data transfer is not possible
(iii) Full-duplex:
In this mode, the data transmission takes place in both directions simultaneously.
It requires two channels.
Synchronous vs Asynchronous Serial Communication
Synchronous:
SYNC pulses are required.
A group of characters can be transmitted after sending the SYNC pulses.
It is used in high-speed data transmission.
Generally used between the CPU and other devices on the same PCB, as the same power supply and clock are used.
Asynchronous:
START and STOP bits are required.
For each character, the START and STOP bits are required.
It is used in low-speed data transmission.
It is used to exchange data with other equipment such as a PC. Example: UART
Common serial interface signal lines:
TXD - transmit data (UART)
RXD - receive data (UART)
SCLK - serial clock (SPI)
SS - slave select (SPI)
SDA - serial data (I2C)
SCL - serial clock (I2C)
CAN BUS
CAN was developed by Robert Bosch GmbH, Germany, in 1986 as a communication system between
three ECUs (electronic control units) in vehicles being designed by Mercedes. The UART, which had
long been in use, was unsuitable in that situation because of its point-to-point communication
methodology; a multi-master communication system had become a stringent requirement. Intel
fabricated the first CAN controller in 1987. The Controller Area Network (CAN) is a very reliable,
message-oriented serial network that was originally designed for the automotive industry but has
become a sought-after bus in industrial automation as well as other applications. The CAN bus is
primarily used in embedded systems and is in effect a network established among microcontrollers.
Its main features are a two-wire, half-duplex, high-speed network system mainly suited for high-speed
applications using short messages. Its robustness, reliability, and compatibility with design practice in
the semiconductor industry are some of the remarkable aspects of CAN technology.
Main Features
1. CAN can link up to 2032 devices (assuming one node with one identifier) on a
single network. Owing to the practical limitations of the hardware (transceivers),
however, it may only link up to 110 nodes (with the Philips 82C250) on a single
network.
2. It offers a communication rate of up to 1 Mbit/s, facilitating real-time control.
3. It embodies unique error confinement and error detection features, making it
more trustworthy and adaptable in noise-critical environments.
CAN Versions
The main aspect of these Versions is the formats of the MESSAGE FRAME; the main
difference being the IDENTIFIER LENGTH.
CAN Standards
There are two ISO standards for CAN. The two differ in their physical layer descriptions.
The basic topology for the CAN Controller has been shown in figure 2 below.
The basic controller involves FIFOs for message transfers and it has an enhanced
counterpart in Full-CAN controller, which uses message BUFFERS instead.
At each node, the CAN bus line connects to a CAN controller sitting between the line and the host.
The controller provides the input and output path between the physical and data link layers at the
host node. A CAN controller has a BIU (bus interface unit, consisting of buffer and driver), a protocol
controller, status and control registers, a receive buffer, and message objects. These units connect
to the host node through the host interface circuit.
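On a Linux host with SocketCAN support, transmitting a CAN frame looks like the sketch below; the interface name can0, the 11-bit identifier 0x123, and the data bytes are placeholders. On a bare-metal node the same job is done by loading the identifier and data into the controller's transmit buffer or message object.

/* Send one CAN frame through Linux SocketCAN (illustrative sketch).
 * Interface name, identifier, and data bytes are placeholders. */
#include <linux/can.h>
#include <linux/can/raw.h>
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);        /* raw CAN socket */
    if (s < 0) { perror("socket"); return 1; }

    struct ifreq ifr;
    strcpy(ifr.ifr_name, "can0");                     /* placeholder interface name */
    ioctl(s, SIOCGIFINDEX, &ifr);                     /* look up the interface index */

    struct sockaddr_can addr = {0};
    addr.can_family  = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    struct can_frame frame = {0};
    frame.can_id  = 0x123;                            /* 11-bit message identifier (placeholder) */
    frame.can_dlc = 2;                                /* two data bytes in this frame */
    frame.data[0] = 0xAB;
    frame.data[1] = 0xCD;

    if (write(s, &frame, sizeof(frame)) != sizeof(frame))
        perror("write");

    close(s);
    return 0;
}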
Hardware for I²C
The electronic interface to the I²C bus is shown in Figure 10.11 for a master and two slaves.
A full-featured slave has the same hardware as a master, but most are simpler and cannot
drive the clock line SCL. On the other hand, slaves must always be able to drive SDA.
Digital outputs are normally driven actively for both of their binary values, either to
VSS for logic 0 or to VCC for logic 1.
Pull-up resistors Rp keep the lines at VCC when none of the drivers is active. The devices
must therefore have open-drain (or open-collector) outputs. This means that there is only
an n-channel MOSFET between the output and ground, as in Figure 10.11.
The pull-up resistor Rp holds the line at VCC when there is no activity, so both the
clock and data lines idle high. If a single n-MOSFET is turned on, its line is pulled down
to VSS to give a logic 0.
USCI - I2C Mode
In I2C mode, the USCI module provides an interface between the MSP430 and I2C-compatible
devices connected by way of the two-wire I2C serial bus. External components attached to the
I2C bus serially transmit and/or receive serial data to/from the USCI module through the
two-wire I2C interface.
The I2C mode features include:
1. 7-bit and 10-bit device addressing modes
2. General call
3. START/RESTART/STOP
4. Multi-master transmitter/receiver mode
5. Slave receiver/transmitter mode
6. Standard mode up to 100 kbps and fast mode up to 400 kbps support
7. Programmable UCxCLK frequency in master mode
8. Designed for low power
9. Slave receiver START detection for auto wake-up from LPMx modes
10. Slave operation in LPM4
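As an illustration only, the sketch below brings up USCI_B0 as an I2C master and writes a single byte to a slave; register and bit names follow TI's msp430.h headers (for example on the MSP430G2553), while the clock divider and the slave address 0x48 are placeholders, and port pin multiplexing is omitted.

/* Illustrative USCI_B0 I2C master initialization and single-byte write.
 * Register/bit names as in TI's msp430.h (e.g. MSP430G2553); clock divider
 * and slave address are placeholders; pin multiplexing is omitted. */
#include <msp430.h>

void i2c_master_init(void)
{
    UCB0CTL1 |= UCSWRST;                   /* hold the USCI in reset while configuring */
    UCB0CTL0  = UCMST | UCMODE_3 | UCSYNC; /* master, I2C mode, synchronous */
    UCB0CTL1  = UCSSEL_2 | UCSWRST;        /* clock the module from SMCLK, stay in reset */
    UCB0BR0   = 10;                        /* divider placeholder: e.g. 1 MHz / 10 = 100 kHz */
    UCB0BR1   = 0;
    UCB0I2CSA = 0x48;                      /* 7-bit slave address (placeholder) */
    UCB0CTL1 &= ~UCSWRST;                  /* release the USCI for operation */
}

void i2c_write_byte(unsigned char data)
{
    UCB0CTL1 |= UCTR | UCTXSTT;            /* transmitter mode, generate START + address */
    while (!(IFG2 & UCB0TXIFG));           /* wait until the transmit buffer is ready */
    UCB0TXBUF = data;                      /* queue the data byte */
    while (!(IFG2 & UCB0TXIFG));           /* wait until the byte has moved to the shifter */
    UCB0CTL1 |= UCTXSTP;                   /* generate STOP after the byte */
    while (UCB0CTL1 & UCTXSTP);            /* wait for the STOP condition to complete */
}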
SPI Bus
Data Register: Communication between the devices starts when the CS (chip select) pin goes low
(CS is an active-low pin). SPI uses 8-bit shift registers; after eight clock pulses have passed, the
contents of the two shift registers (master and slave) have been exchanged. SPI is a full-duplex
communication.
In the SPI protocol, both master and slaves use the same clock for communication. When CPOL = 0
the idle value of the clock is zero, while with CPOL = 1 the idle value of the clock is one.
CPHA = 0 means data is sampled on the leading (first) clock edge, while CPHA = 1 means data is
sampled on the trailing (second) clock edge. If the idle value of the clock is zero, the leading clock
edge is a rising edge; if the idle value of the clock is one, the leading clock edge is a falling edge.
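To relate CPOL and CPHA to the actual signals, here is a bit-banged transfer for SPI mode 0 (CPOL = 0, CPHA = 0); gpio_write(), gpio_read(), and delay_half_period() are hypothetical board-support helpers and the pin numbers are placeholders.

/* Bit-banged SPI mode 0 (CPOL = 0, CPHA = 0) full-duplex byte transfer.
 * gpio_write(), gpio_read() and delay_half_period() are hypothetical
 * board-support helpers; the pin numbers are placeholders. */
#define PIN_SCLK 1
#define PIN_MOSI 2
#define PIN_MISO 3
#define PIN_CS   4

extern void gpio_write(int pin, int level);
extern int  gpio_read(int pin);
extern void delay_half_period(void);

unsigned char spi_transfer_mode0(unsigned char out)
{
    unsigned char in = 0;
    gpio_write(PIN_CS, 0);                       /* CS is active low: start the transfer */
    for (int bit = 7; bit >= 0; bit--) {
        gpio_write(PIN_MOSI, (out >> bit) & 1);  /* data valid while the clock idles low */
        delay_half_period();
        gpio_write(PIN_SCLK, 1);                 /* leading (rising) edge: both sides sample */
        in = (unsigned char)((in << 1) | gpio_read(PIN_MISO));
        delay_half_period();
        gpio_write(PIN_SCLK, 0);                 /* trailing edge: shift out the next bit */
    }
    gpio_write(PIN_CS, 1);                       /* deselect the slave */
    return in;                                   /* after 8 clocks the two bytes are exchanged */
}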
SPI in Tiva Microcontroller
The TM4C123 microcontroller includes four Synchronous Serial Interface (SSI) modules.
The TM4C123GH6PM SSI modules have the following features:
Programmable interface operation for Freescale SPI, MICROWIRE, or Texas Instruments
synchronous serial interfaces
Master or slave operation
Programmable clock bit rate and prescaler
Separate transmit and receive FIFOs, each 16 bits wide and 8 locations deep
Programmable data frame size from 4 to 16 bits
Internal loopback test mode for diagnostic/debug testing
Standard FIFO-based interrupts and End-of-Transmission interrupt
Efficient transfers using Micro Direct Memory Access Controller (μDMA)
Separate channels for transmit and receive
Receive single request asserted when data is in the FIFO; burst request asserted when the FIFO
contains four entries
Transmit single request asserted when there is space in the FIFO; burst request asserted when
four or more entries are free to be written in the FIFO
SPI data Transmission
To perform SPI data transmission, follow the steps given below:
Enable the clock to SPI module in system control register RCGCSSI.
Before initialization, disable the SSI via bit 1 of SSICR1 register.
Set the Bit Rate with the SSICPSR prescaler and SSICR0 control registers.
Select the SPI mode, phase, polarity, and data width in SSICR0 control register.
Set master mode in the SSICR1 register. Enable the SSI using the SSICR1 register.
Assert the slave select signal.
Wait until the TNF flag in SSISR goes high, then load a byte of data into SSIDR.
Wait until the transmission is complete, that is, the transmit FIFO is empty and the SSI is not busy.
Register description:
Clock to SSI: the RCGCSSI register is used to enable the clock to the SSI modules. Writing
RCGCSSI = 0x0F enables the clock to all four SSI modules.
The SSIDR register is used as both the transmit and the receive buffer. When handling 8-bit data
over SPI, the data is placed into the lower 8 bits of the register and the rest of the register is
unused. In receive mode, the lower 8 bits hold the received data.
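Putting the steps together, the sketch below initializes SSI0 as an SPI master and exchanges one byte; the register names are those defined in TI's tm4c123gh6pm.h header, while the prescaler value is a placeholder and GPIO pin multiplexing and slave-select handling are omitted for brevity.

/* Minimal SSI0 master-mode SPI setup and one-byte transfer (illustrative).
 * Register names follow TI's tm4c123gh6pm.h header; GPIO pin multiplexing
 * and slave-select control are omitted; divider values are placeholders. */
#include <stdint.h>
#include "tm4c123gh6pm.h"

void ssi0_init_master(void)
{
    SYSCTL_RCGCSSI_R |= 0x01;        /* enable the clock to SSI0 */
    SSI0_CR1_R &= ~0x02;             /* disable SSI (clear SSE) before configuring */
    SSI0_CPSR_R = 16;                /* clock prescaler (placeholder value) */
    SSI0_CR0_R  = 0x0007;            /* SCR=0, SPH=0, SPO=0, Freescale SPI, 8-bit data */
    SSI0_CR1_R &= ~0x04;             /* master mode (MS bit = 0) */
    SSI0_CR1_R |= 0x02;              /* re-enable SSI (set SSE) */
}

uint8_t ssi0_transfer(uint8_t data)
{
    while ((SSI0_SR_R & 0x02) == 0); /* wait for TNF: transmit FIFO not full */
    SSI0_DR_R = data;                /* load the byte into the data register */
    while ((SSI0_SR_R & 0x04) == 0); /* wait for RNE: receive FIFO not empty */
    return (uint8_t)SSI0_DR_R;       /* read back the byte clocked in from the slave */
}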
Device Drivers
A device driver, in computing, refers to a special kind of software program that controls a specific
hardware device and enables different hardware devices to communicate with the computer's
operating system. A device driver communicates with the computer hardware through the computer
subsystem or bus to which the hardware is connected.
Device drivers are essential for a computer system to work properly, because without a device
driver the particular hardware fails to work as intended, meaning it fails to perform the function it
was created for. Most people use the term driver, but some say hardware driver, which also refers to
the device driver.
Working of Device Driver:
Device drivers depend upon the operating system's instructions to access the device and perform
any particular action. After the action, they report back by delivering output or a status message
from the hardware device to the operating system. For example, a printer driver tells the printer in
which format to print after getting an instruction from the OS; similarly, a sound card driver converts
the 1s and 0s of an MP3 file into audio signals so you can enjoy the music. Card readers, controllers,
modems, network cards, sound cards, printers, video cards, USB devices, RAM, speakers, etc. need
device drivers to operate.
The following figure illustrates the interaction between the user, OS, Device driver, and the
devices:
Types of Device Driver:
For almost every device associated with the computer system there exists a device driver for the
particular hardware. Device drivers can be broadly classified into two types:
1. Kernel-mode Device Driver -
Kernel-mode device drivers cover generic hardware that loads with the operating system as part
of the OS, such as the BIOS, motherboard, processor, and other hardware that is part of the kernel
software. These include the minimum-system-requirement device drivers for each operating system.
2. User-mode Device Driver -
Devices that the user adds to the system, beyond those the kernel brings up itself, also need drivers
to function; those drivers fall under user-mode device drivers. For example, any plug-and-play
device the user attaches comes under this category.
Virtual Device Driver:
There are also virtual device drivers (VxD), which manage virtual devices. When the same hardware
is used virtually by several users, the virtual device driver controls and manages the data flow from
the different applications used by different users to the same hardware.
It is essential for a computer to have the required device drivers for all its parts to keep the
system running efficiently. Many device drivers are provided by the manufacturer from the
beginning, and we can also install any additional required device driver later.
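To make the kernel-mode idea concrete, below is a minimal sketch of a Linux character-device driver skeleton; the device name "demo" and the do-nothing read handler are placeholders, and a real driver would access actual hardware in these handlers.

/* Minimal Linux character-device driver skeleton (illustrative only).
 * The device name "demo" is a placeholder; a real driver would talk to
 * actual hardware in the open/read handlers. */
#include <linux/fs.h>
#include <linux/init.h>
#include <linux/module.h>

static int major;

static int demo_open(struct inode *inode, struct file *file)
{
    return 0;                                        /* nothing to set up */
}

static ssize_t demo_read(struct file *file, char __user *buf,
                         size_t len, loff_t *off)
{
    return 0;                                        /* EOF: no data to hand back */
}

static const struct file_operations demo_fops = {
    .owner = THIS_MODULE,
    .open  = demo_open,
    .read  = demo_read,
};

static int __init demo_init(void)
{
    major = register_chrdev(0, "demo", &demo_fops);  /* request a dynamic major number */
    return (major < 0) ? major : 0;
}

static void __exit demo_exit(void)
{
    unregister_chrdev(major, "demo");                /* release the major number */
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");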
Advanced RISC Machine (ARM)
Advanced RISC Machine (ARM) Processor is considered to be a family of Central Processing
Units that are used in music players, smart phones, wearables, tablets, and other consumer
electronic devices.
Advanced RISC Machines created the ARM processor architecture, hence the name ARM. It needs
very few instruction sets and transistors and is very small in size. This is the reason it is a perfect
fit for small devices. It has low power consumption along with reduced complexity in its circuits.
ARM processors are a family of central processing units (CPUs) based on a reduced instruction set
computer (RISC) architecture. ARM stands for Advanced RISC Machine. ARM architectures
represent a different approach to how the hardware for a system is designed when compared to
more familiar server architectures like x86.
ARM came into existence in 1983 and was developed by Acorn Computers. It achieved tremendous
popularity as the first commercial RISC implementation. In 1990 the design was spun out into
Advanced RISC Machines Ltd, a joint venture of Acorn, Apple, and VLSI Technology. ARM processors
are considered rigid in structure but are very much performance-oriented.
The ARM processor is a 32-bit processor that offers single-cycle execution of most instructions at
high clock speeds. Due to these features it is considered a fundamental component of embedded
systems. The major reason behind ARM's success is its simple and powerful design, which has
undergone constant technical upgrades since its invention. Due to the simplicity of the processors,
they are widely used in portable devices such as smart phones, tablets, networking modules,
advanced music players, etc.
ARM Architecture
The ARM architecture processor is an advanced reduced instruction set computing (RISC) machine:
a 32-bit RISC microcontroller core. It was introduced by Acorn Computers in 1987. ARM is a family
of microcontroller cores used by makers such as STMicroelectronics, Motorola, and others. The ARM
architecture comes in different versions such as ARMv1, ARMv2, etc., and each one has its own
advantages and disadvantages.
ARM Block Diagram
The ALU has two 32-bit inputs. The first comes from the register file, whereas the other comes from
the shifter. The status register flags are modified by the ALU outputs: the V-bit output goes to the
V flag, the carry output goes to the C flag, the most significant bit of the result supplies the N flag,
and the Z flag is obtained by NORing together all the bits of the ALU output. The ALU has a 4-bit
function bus that permits up to 16 opcodes to be implemented.
The final thing that must be explained is how the ARM is used and how the chip appears to the
outside. The various signals that interface with the processor are input, output, or supervisory
signals used to control the ARM's operation.
Functional Block Diagram.
The ARM is a load/store reduced instruction set computer architecture, meaning that the core cannot
operate directly on memory. Data operations must be done in the registers, and the information is
stored in, or loaded from, memory at an address. The classic ARM register set consists of 37 registers,
of which 31 are general-purpose registers and 6 are status registers. The ARM uses seven processing
modes to run the user task.
USER Mode
FIQ Mode
IRQ Mode
SVC Mode
UNDEFINED Mode
ABORT Mode
Monitor Mode
USER Mode: The user mode is the normal mode; it has the least number of registers, has no SPSR,
and has limited access to the CPSR.
FIQ and IRQ: FIQ and IRQ are the two interrupt-driven modes of the CPU. FIQ is the fast interrupt
mode and IRQ is the standard interrupt mode. The FIQ mode has five additional banked registers
to provide more flexibility and higher performance when critical interrupts are handled.
SVC Mode: The supervisor mode is entered on reset or when a software interrupt instruction is
executed.
UNDEFINED Mode: The undefined mode traps when illegal instructions are executed. The ARM core
has a 32-bit data bus for fast data flow.
THUMB Mode: In Thumb state, instructions are encoded in 16 bits rather than 32, which improves
code density and can increase processing speed on narrow memory systems.
We have already discussed in the beginning that ARM processors are based on RISC
architecture. Thus, it includes key features of the RISC architecture, which are as follows:
1. Instructions: Each instruction is 32 bits long. This allows every instruction to be fetched in a single
cycle, keeping operations simple. Because each instruction has a fixed length, future instructions can
be fetched while previous ones are being executed. This facility is not provided by CISC architectures,
where instructions have variable sizes and require multiple cycles to execute. ARM processors
therefore offer simple instruction decoding.
2. Registers: RISC machines contain large, uniform register files. ARM has 37 registers of 32 bits
each, of which only 16 are visible at a time. Unlike CISC processors, where registers are dedicated
to specific purposes, in RISC any register can contain either data or an address. This speeds up
execution.
3. Pipelining: ARM uses a 3-stage pipeline, which provides maximum throughput. While the first
instruction is executing, the next one is being decoded, and the one after that is being fetched, so
fetch, decode, and execute occur simultaneously. On each cycle the pipeline advances one step,
which saves time; unlike in CISC processors, no microcode is required for instruction execution.
4. Load/Store Model: In this architecture, all data-processing operations take place on registers.
Through load/store operations, data from memory is loaded into registers, the operation is performed
on that data, and once the operation is done the result is stored back to memory. This means that,
unlike CISC, which supports data processing directly on memory, ARM does not provide
memory-based operations (see the sketch after this list).
7. A large number of registers: A large number of registers are used in ARM processors to reduce
the number of memory interactions. Registers contain data and addresses and act as a local memory
store for all operations.
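The load/store model can be seen in the small C function below; on an ARM core the compiler must load both operands into registers, add them there, and store the result back, roughly the LDR/ADD/STR sequence shown in the comments (the exact instructions depend on the compiler and target).

/* Load/store model: memory operands must be moved into registers,
 * operated on there, and written back. Typical ARM code (illustrative):
 *     LDR r0, [a]      ; load a from memory into r0
 *     LDR r1, [b]      ; load b from memory into r1
 *     ADD r0, r0, r1   ; the ALU operates only on registers
 *     STR r0, [sum]    ; store the result back to memory
 */
int a = 2, b = 3, sum;

void add_in_registers(void)
{
    sum = a + b;   /* compiles to the load/add/store sequence sketched above */
}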
Difference between ARM and x86
The differences between ARM and x86 are described below.
ARM | x86
ARM uses Reduced Instruction Set Computing (RISC) architecture. | x86 uses Complex Instruction Set Computing (CISC) architecture.
ARM processors require fewer registers, but they require more memory. | x86 processors require less memory, but more registers.
ARM processors use the memory which is already available to them. | x86 processors require some extra memory for calculations.
It has various commercial applications such as in modern mobile phones, digital television, set-top
boxes, hard drives, inkjet printers, GPS navigation systems, etc. Not only these, it is useful in
portable gaming units, camcorders, AirPods, routers, etc.
ASIC stands for Application Specific Integrated Circuit. It is a type of integrated circuit (IC) that is
designed for a specific purpose or function. Unlike general-purpose ICs, such as microprocessors or
memory chips, ASICs are optimized for a particular task or application. For example, an ASIC can
be designed to perform encryption, image processing, audio processing, or any other specialized
function.
Early Concepts and Inception: The concept of specialized circuits for specific tasks dates
back to the early days of electronics. Early inventions like the analog computer designed by
Vannevar Bush in the 1930s and the development of digital logic gates in the 1940s laid the
foundation for specialized circuitry. However, it wasn’t until the 1960s and 1970s that the
term “ASIC” started to take shape.
Early ASIC Developments: The late 1960s and early 1970s saw the emergence of
programmable logic devices (PLDs) like Programmable Array Logic (PAL) and
Programmable Logic Array (PLA). These devices allowed designers to create custom logic
circuits without the need for full custom chip fabrication. Although not true ASICs, these
developments were crucial precursors.
MOS and VLSI Era: The invention of Metal-Oxide-Semiconductor (MOS) technology
and the advent of Very Large Scale Integration (VLSI) in the 1970s paved the way for the
creation of more complex and integrated circuits. Companies like Intel, Texas Instruments,
and IBM played pivotal roles in advancing these technologies.
Gate Arrays and Semi-Custom Design: In the 1980s, gate array technology gained
prominence. Gate arrays provided a compromise between fully custom chips and off-the-
shelf components. These arrays contained predefined transistors that could be
interconnected to create custom logic circuits. This approach reduced design time and cost
compared to full custom design.
Structured ASICs: The late 1990s and early 2000s saw the rise of structured ASICs. These
devices allowed designers to work with a pre-built structure of logic elements
interconnected through metal layers. This approach struck a balance between full custom
ASICs and FPGAs (Field-Programmable Gate Arrays).
System-on-Chip (SoC) and Advanced Process Nodes: As semiconductor technology
continued to advance, the early 2000s saw the rise of System-on-Chip (SoC) designs, where
entire systems were integrated onto a single chip. ASICs became essential for devices such
as smartphones, where performance, power efficiency, and form factor were critical. The
development of advanced process nodes allowed for greater transistor density, enabling
more complex and powerful ASICs.
Modern ASIC Applications: Modern ASICs find applications in a wide range of
industries, including telecommunications, automotive, aerospace, consumer electronics, and
more. They power devices such as network equipment, automotive control systems, medical
devices, and specialized hardware accelerators for AI and machine learning.
Challenges and Future Trends: Designing and fabricating ASICs can be complex and
costly, often requiring specialized expertise and resources. However, advancements in
design tools, simulation techniques, and the availability of IP cores (pre-designed functional
blocks) have helped mitigate these challenges. Future trends in ASIC development include
increased focus on energy efficiency, integration of heterogeneous components, and the
adoption of advanced packaging technologies.
Key Features of ASICs
1. Customized functionality: the design is tailored to a specific task, ensuring optimal performance
for the targeted application.
2. Optimized performance: extraneous components are eliminated, only the necessary functions are
implemented, and the circuitry is optimized for speed and efficiency.
3. Power efficiency: ASICs are engineered to operate within designated power envelopes, conserving
energy and extending battery life.
4. Compact design: intricate circuitry is packed onto a single chip, leading to smaller device
footprints and sleeker, space-efficient end products.
5. Cost-effectiveness: ASICs balance upfront development costs with long-term savings, excelling
in high-volume production scenarios.
6. Enhanced security: proprietary functions and algorithms can be embedded in the chip,
safeguarding sensitive data and protecting against external threats.
7. Reduced electromagnetic interference (EMI): ASICs can be designed to minimize EMI, ensuring
robust performance and reducing susceptibility to external noise.
8. Real-time responsiveness: optimized, dedicated circuitry lets ASICs execute critical tasks and
responses rapidly in real-time applications.
9. High integration: multiple functions are combined into a single chip, reducing component count
and simplifying designs.
10. Design freedom: designers get tailored solutions that align with specific requirements.
11. Long-term availability: the specialized nature of ASICs ensures stable availability over extended
periods, mitigating concerns about component obsolescence.
12. Precision interfaces: interfaces and protocols are designed to integrate with existing systems
and ensure compatibility.
The architecture of an Application-Specific Integrated Circuit (ASIC) refers to its internal structure,
organization of components, and how they work together to fulfill a specific task. Here’s an
overview of the typical architecture of an ASIC:
Logic Blocks: ASICs consist of various logic blocks that perform specific functions. These
blocks can include standard cells, complex logic elements, memory arrays, arithmetic units,
and more. Logic blocks are interconnected to achieve the desired circuit behavior.
Input/Output (I/O) Interfaces: ASICs have input and output interfaces to communicate
with external devices. These interfaces can include various types of pins or pads for
connecting to other circuits or systems. They handle data transfer, control signals, and
power supply connections.
Clock Distribution: Clock signals are crucial for synchronizing the operation of different
parts of the ASIC. The architecture includes a clock distribution network that ensures that
all components of the chip operate in harmony based on the same clock reference.
Power Distribution: Power distribution networks provide the necessary supply voltage to
different parts of the ASIC. These networks are designed to minimize voltage drops and
ensure that each block receives the required power for correct operation.
Memory Elements: ASICs may include different types of memory elements, such as
registers, flip-flops, and memory arrays. These elements store data temporarily or
permanently, enabling the chip to retain and process information.
Datapath and Control Logic: The datapath includes the functional units responsible for
performing computations, such as arithmetic operations and data manipulations. The control
logic manages the flow of data and operations within the datapath, ensuring that tasks are
executed in the correct sequence.
Configuration Memory (Optional): Some programmable devices in this family, notably FPGAs
(Field-Programmable Gate Arrays) and structured ASICs, include configuration memory. This
memory stores the configuration information that defines the behavior of the logic blocks; during
startup, the device loads this configuration to set up its functionality.
Clock Management Units: Modern ASICs often incorporate clock management units that
allow for clock scaling, gating, and distribution. These units enhance power efficiency and
allow different parts of the chip to operate at different frequencies.
Test and Debug Circuitry: ASICs include built-in test circuitry that facilitates
manufacturing testing, functional testing, and debugging. This circuitry helps ensure the
chip’s quality and aids in identifying and fixing issues.
Specialized Accelerators (Optional): Depending on the application, ASICs might feature
specialized hardware accelerators optimized for specific tasks, such as cryptography, signal
processing, or machine learning.
Package and Pins: The package houses the ASIC and provides physical protection and
connections to the external world. The pins or pads on the package connect the internal
circuitry to external components or systems.
ASIC Design Flow
Specification: The initial phase of the ASIC design flow revolves around establishing the
chip’s prerequisites. This entails comprehending the purpose behind the chip’s creation, its
performance benchmarks, power usage, dimensions, and additional specifications.
Architecture Design: Once the requirements have been outlined, the subsequent stage
entails crafting a high-level blueprint for the chip’s structure. This process encompasses
pinpointing the key functional segments within the chip, determining their interconnections,
and mapping out the path through which data will flow.
RTL Design: RTL (Register Transfer Level) design entails crafting an intricate design of
the chip using a hardware description language (HDL) like Verilog or VHDL. This RTL
design precisely defines the operation of every block within the chip and outlines their
interconnections and interactions.
Verification: Subsequently, the RTL design undergoes a sequence of tests aimed at
confirming its alignment with the prerequisites and specifications set forth in the initial
phase. This verification process encompasses simulation, formal verification, and hardware
emulation.
Synthesis: After the RTL design has been successfully verified, the next step involves
synthesis, wherein it is transformed into a gate-level netlist. This synthesis process entails
translating the RTL code into a representation at the gate level, which can then be translated
into hardware implementation.
Place and Route: During the place and route phase, the gate-level netlist is matched to the
actual physical layout of the chip. This stage encompasses the arrangement of gates on the
chip’s surface and the establishment of pathways for interconnections linking them.
Physical Verification: Following the completion of the place and route stage, a meticulous
examination of the physical design ensues to detect any instances of design rule breaches,
timing discrepancies, or other potential errors. This comprehensive evaluation involves a
range of assessments, including the Design Rule Check (DRC), Layout vs. Schematic
(LVS) verification, and Electrical Rule Check (ERC).
Tape out: Upon successful verification of the physical design, the ultimate design data is
forwarded to the foundry for the purpose of manufacturing. This particular stage is referred
to as “tape out,” encompassing the creation of essential photomasks that are integral to the
fabrication procedure.
Testing and Packaging: After the chip is manufactured, it undergoes several testing
procedures to ensure that it meets the specifications. Once it is tested, the chip is packaged
and made ready for deployment in the final application.
There are different types of ASICs based on the level of customization and flexibility. Some of the
common types are:
Full-custom ASIC: This is the most customized type of ASIC, where the designer has full
control over every aspect of the circuit design, including the logic cells, interconnections,
mask layers, and physical layout. This type of ASIC can achieve the highest performance
and efficiency, but it is also the most expensive and risky to design.
Standard-cell ASIC: This is a type of ASIC where the designer uses predefined logic cells
from a library to create the circuit. The logic cells are optimized for speed, power, or area,
and they can be placed and routed automatically by a tool. This type of ASIC can reduce the
design time and cost compared to full-custom ASICs, but it still has a high NRE cost and
low flexibility.
Gate-array ASIC: This is a type of ASIC where the designer uses predefined arrays of
unconnected transistors on a chip and customizes only the metal layers that connect them.
This type of ASIC can reduce the NRE cost and manufacturing time compared to standard-
cell ASICs, but it has lower performance and higher power consumption.
Programmable ASIC: This is a type of ASIC where the designer can modify some aspects
of the circuit after fabrication by using programmable elements such as fuses or antifuses.
This type of ASIC can offer some flexibility and adaptability to changing specifications or
market demands, but it has lower performance and higher power consumption than non-
programmable ASICs.
Applications of Application-Specific Integrated Circuits (ASICs)
1. Telecommunications: ASICs power networking equipment, enabling efficient data transmission,
signal processing, and handling of complex protocols.
2. Consumer Electronics: from smartphones to smart home devices, ASICs optimize performance,
power efficiency, and specialized functionality.
3. Automotive Industry: ASICs manage safety systems, engine control, infotainment, and advanced
driver-assistance systems (ADAS).
4. Healthcare and Medical Devices: ASICs power imaging equipment, patient monitoring systems,
and implantable medical devices with precision and reliability.
5. Industrial Automation: ASICs support process control, data acquisition, and communication in
manufacturing and factory automation.
6. Aerospace and Defense: ASICs ensure the reliability and performance of avionics, radar systems,
navigation, and secure communication.
7. IoT and Wearables: ASICs enable small, low-power devices to connect and interact, giving rise to
smart wearables and efficient IoT ecosystems.
8. Digital Signal Processing: ASICs accelerate audio and video processing, image recognition, and
other real-time signal-processing tasks.
9. Cryptocurrency Mining: ASICs designed for specific hashing algorithms dramatically improve the
efficiency and speed of mining.
10. Custom Accelerators: ASICs act as custom accelerators for artificial intelligence (AI), machine
learning, and scientific simulations, boosting performance and efficiency.
11. Sensors and Sensing Systems: ASICs enable accurate and responsive sensing systems for
environmental monitoring, industrial sensing, and more.
12. Emerging Technologies: ASICs are being applied to emerging areas such as quantum computing,
neuromorphic computing, and advanced AI hardware.