ESD Notes Min
UNIT-1
INTRODUCTION TO EMBEDDED SYSTEMS
Contents: Introduction to ESs, History of embedded systems, Classification of embedded
systems based on generation and complexity, Purpose of embedded systems, The embedded
system design process-requirements, specification, architecture design, designing hardware
and software components, system integration, Applications of embedded systems, and
characteristics of embedded systems.
INTRODUCTION TO EMBEDDED SYSTEMS
Definition:
An Embedded System is a system that has software embedded in computer hardware. It is
designed to perform a dedicated function or, sometimes, a small set of dedicated tasks.
An embedded system mainly consists of hardware, OS, memory, peripherals (I/O devices)
and interfaces for performing the set of dedicated tasks. The embedded software is called
“firmware”.
Embedded systems (ESs) existed even before the IT revolution. In the olden days, ESs
were built using vacuum-tube and transistor technology, and embedded algorithms were
developed in low-level languages.
With the advent of semiconductor technology, nano-technology and IT revolution,
miniature ESs came into picture.
The first modern, real-time ES was “Apollo Guidance Computer (AGC)”. It was
developed in 1960 by Dr. Charles Stark Draper at the MIT Instrumentation Labs.
In 1965, Autonetics developed the D-17B, the computer used in the
Minuteman missile guidance system. It is widely recognized as the first mass-
produced ES.
In 1968, the first ES for a vehicle was released.
In 1971, Texas Instruments developed the first microcontroller.
In 1987, the first embedded OS, VxWorks, was released by Wind River.
In 1996, Microsoft released its embedded OS called “Windows Embedded Compact”.
By the late 1990s, the first embedded Linux system appeared.
First Generation ESs:
They were built using 8-bit microprocessors like the 8085 and Z80, and 4-bit
microcontrollers.
They used simple hardware.
Second Generation ESs:
They were built around 16-bit microprocessors and 8- or 16-bit microcontrollers.
They used complex and powerful instruction sets.
Some of them contained embedded OSs for their operation.
Examples: Data Acquisition Systems and SCADA (Supervisory Control and Data
Acquisition) systems used in industrial plants.
Third Generation ESs:
Fig. 1
(i) Small-Scale Embedded Systems: They are simple systems.
PURPOSE OF EMBEDDED SYSTEMS
Each ES is designed to serve one or more of the following purposes:
1. Data Collection/Storage/Representation
2. Data communication
3. Data (signal) processing
4. Monitoring
5. Control
6. Application Specific User Interface
1. Data Collection/Storage/Representation:
Here data means information like text, audio, video, electrical signals and other measurable
quantities. Data is collected from the external world. Based on the purpose of the ES, the
collected data may be used in one or more of the following ways: -
Example: A digital camera (Images can be captured, stored and presented to user through a
graphic LCD unit).
2. Data communication:
ESs are used in data communication applications ranging from simple home
networking systems to complex satellite communication systems.
The collected data can be transmitted to a remote system using wire-line medium
(e.g., RS-232C, USB) or wireless medium (e.g., Bluetooth, ZigBee and Wi-Fi) by
analog or digital means.
4. Monitoring:
Almost all embedded products used in the medical domain are designed for monitoring
purposes.
They determine the status of some variables using sensors.
They cannot control the variables.
5. Control:
o An ES can also control some output variables when input variables change.
o It uses sensors at the input port and actuators at the output port to control the
output variables and keep them in a specified range.
Example: ATM
6. Application specific user interface:
ESs use application-specific user interfaces like buttons, switches, keypad, lights, bells and
display units.
Consider a mobile phone. In this, user interface is provided through the keyboard, graphic
LCD module, system speaker, etc.
THE EMBEDDED SYSTEM DESIGN PROCESS
1. Requirements: First, the requirements of the product are gathered. They include:
Functions of the embedded product (e.g., withdrawal, deposit and balance enquiry for an
ATM)
Unit cost, size and power consumption
Response time and throughput
Product deadlines
Reliability and safety measures
6. System Integration: Usually, the system is divided into various modules called units.
Each unit is designed, developed and tested independently. After unit testing, all the
modules are integrated to get the final product.
Components and modules may work fine independently, but when integrated, many bugs can
surface. So, appropriate integration testing is performed to ensure good performance and
quality.
3. Architecture Design: The architecture is a plan of the overall structure of the system that will
be used later in design and development. The specification only says what the system does;
the architecture describes how the system does it. The architecture of an ES involves: -
Hardware architecture:
Processor
Memory and I/O units
Sensors and Actuators
ADC and DAC
User Interfaces, etc.
Software architecture:
Operating System
Control algorithms for various functions
Programming language
Software tools
Device drivers, etc.
Each ES is designed to perform one or a few unique functions. It cannot be used for any other
purpose.
For example, the embedded control system of a washing machine cannot be used to control an
air conditioner.
Example 2: An Automatic Teller Machine (ATM) contains a card reader for reading and
validating the ATM card, a cash dispenser, a receipt printer and an LCD display unit.
All these units are individual ESs, but they work together to achieve a common goal.
5. Small size and weight:
Generally, a user gives much importance to product aesthetics (size, weight, shape, style,
etc.) in choosing a product. (e.g., Mobile phone)
Many users prefer small devices, and not just as a matter of taste: a small, lightweight and
compact (occupying less space) device is always more convenient to handle than a bulky one.
The same holds in the embedded domain.
6. Power Concern:
Power consumption: Power consumption is the amount of energy used per unit time. ESs
should have low power consumption. This can be achieved by using the
ultra-low-power components available in the market.
Also, battery life of portable systems (e.g., Cell phones and laptop computers) is limited by
power consumption.
Power dissipation: An ES with large power dissipation needs cooling fans, which occupy
additional space and make the system bulky. So ESs should have low power dissipation.
To achieve this, ESs are designed using regulators with low dropout and processors /
controllers with power saving mode.
Note: “Low dropout” means a small difference between the supply voltage and the load voltage.
***** OVER*****
UNIT-2
TYPICAL EMBEDDED SYSTEM
Contents: Core of the embedded system-general purpose and domain specific processors,
ASICs, PLDs, COTS; Memory-ROM, RAM, Memory according to the type of interface,
Memory shadowing, Memory selection for embedded systems, Sensors, Actuators, I/O
components: Seven Segment LED, Relay, Piezo buzzer, Push button switch, Other sub-
systems: Reset circuit, Brownout Protection circuit, Oscillator Circuit, Real time clock,
Watchdog timer.
Based on the domain and application, the processor may be a microprocessor, microcontroller
or a digital signal processor (DSP). Most of the applications use microprocessors or
microcontrollers. Some speech and video signal processing applications use DSPs.
(i) Microprocessors: A Microprocessor is a central processing unit (CPU) fabricated on a
single silicon chip. It mainly contains Arithmetic Logic Unit (ALU), Control Unit (CU) and
working registers. It needs other hardware like memory, timer unit, interrupt controller, etc.,
for proper functioning. An n-bit (n = 8, 16, 32 or 64) microprocessor is capable of handling n-
bit data and program memory.
Examples: Zilog Z80 (8-bit), Intel 8086 (16-bit), Motorola 68020 (32-bit)
(ii) Microcontrollers:
A Microcontroller basically contains CPU, memory and I/O devices on a single silicon chip.
Microcontrollers are more powerful and more preferred than microprocessors.
They may be either general-purpose microcontrollers (e.g., Intel 8051) or domain-specific
microcontrollers (e.g., Atmel’s AVR). The TMS1000 (from Texas Instruments) was the world’s
first microcontroller.
Units of a microcontroller:
CPU.
Scratchpad RAM (SPRAM), On-chip ROM/FLASH memory
General-purpose and special registers
Timer and interrupt control units
Dedicated I/O ports
Note: Scratchpad RAM is a high-speed internal RAM directly connected to the CPU. It
is used to hold very small items of data for rapid retrieval.
(iii) Digital Signal Processors (DSPs):
They are powerful, special-purpose, 8-, 16- or 32-bit microprocessors. They are used
in signal processing applications (e.g., speech processing, image processing, video
processing, video games, digital cameras, HDTV, radar, sonar, etc.).
They are 2 or 3 times faster than the general-purpose microprocessors.
They involve large number of real-time calculations.
They implement algorithms in hardware which speeds up the execution.
Note: General purpose processors implement the algorithms in firmware while DSPs
implement algorithms in hardware.
Units of a DSP:
Program memory
Data memory
Computational Engine: It performs the signal processing as per the program stored in
memory.
Classification of Processors/Controllers:
Processors/controllers are classified according to the following criteria:
A. Instruction set
B. Architecture
C. Order of storing bits
B. Based on architecture:
(i) Von-Neumann architecture microprocessors/microcontrollers
It shares a single bus for program memory and data memory.
Processor first fetches instruction and then fetches data needed. Two separate fetches
slow down processor operation.
Program memory and data memory are stored on same chip. So, there is a chance of
accidental corruption of program memory.
Cheaper and no memory alignment problems.
(ii) Harvard architecture microprocessors/microcontrollers
It contains separate buses for program memory and data memory. So, it has higher speed.
Program memory and data memory are stored at different locations. So, there is no chance of
accidental corruption of program memory.
It is costly and has memory alignment problems.
2. APPLICATION SPECIFIC INTEGRATED CIRCUITS (ASICs)
Most ASICs are proprietary products, so their developers keep the internal
details confidential.
Non-Recurring Engineering (NRE) Cost: It is a one-time cost to research, design and
develop a new product. Once invested, it need not be spent again for each
additional unit produced.
If the NRE cost is spent by a third party, the ASIC is made openly available in the
market. Then it is called Application Specific Standard Product (ASSP).
Applications of ASICs: Chips used for toys, hand held computers, cell phones and voice
recorders.
3. PROGRAMMABLE LOGIC DEVICES (PLDs)
There are two types of logic devices-fixed logic devices and programmable logic devices.
Fixed logic devices contain permanent digital circuits. They perform one or a few functions.
Once manufactured, they cannot be changed.
Programmable logic devices (PLDs) have reprogrammable digital circuits like PALs, PLAs
and PROMs. Thus, they allow frequent changes in design. This reduces development time.
Major Classification of PLDs: FPGAs and CPLDs.
(i) Field Programmable Gate Arrays (FPGAs):
An FPGA is an IC configured by the customer after manufacturing. FPGAs are quickly
programmed using a Hardware Description Language (HDL) like VHDL or Verilog. This
reduces development time. The basic building block of an FPGA is called a
“Configurable Logic Block (CLB)”.
FPGAs have CLBs, programmable interconnects and other components. They offer high logic
density and performance with many features (e.g., on-chip RAM, DSP blocks, etc.)
Examples: The Xilinx Virtex series, which provides up to 8 million gates.
Applications:
(ii) Complex Programmable Logic Devices (CPLDs):
The macrocell is the basic building block of a CPLD; it can perform sophisticated
logic functions.
CPLDs contain architectural features of both PALs and FPGAs.
Complexity of CPLDs lies between that of PALs and FPGAs.
Even though CPLDs have low logic density, they offer very predictable characteristics.
So, they are ideal for critical control applications.
Disadvantages of COTS:
Types of RAMs: RAMs are of 3 types - Static RAM, Dynamic RAM and Non-Volatile RAM.
Static RAM (SRAM):
SRAM is a type of RAM that retains data bits as long as power is supplied. The basic cells for
storing information in SRAM are flip-flops. Conventionally, each flip-flop has 6 transistors, as
shown in Fig. 1(a). Once a flip-flop stores a bit, it keeps that value until the opposite value is
stored in it.
Applications of SRAMs:
As cache memory.
In digital Cameras.
In cell Phones.
In LCD screens and printers.
(iii) Erasable Programmable Read Only Memory (EPROM):
Drawbacks of EPROM:
All the contents of the chip are erased together; partial erasing is not possible.
For erasing, the chip has to be removed from the circuit board.
Because of these drawbacks, EPROM chips have now been replaced by EEPROMs and FLASH.
Applications: EPROMS were earlier used to store computer BIOS, electronic gadgets, etc.
(iv) Electrically Erasable Programmable Read Only Memory (EEPROM):
EEPROM can be erased electrically in a few milliseconds and then reprogrammed. Erasing is
possible at the byte level. Erasing and reprogramming can be done at board level, i.e., we need
not remove the chip from the circuit board.
EEPROMs are faster and more convenient to use, but they have low capacity (a few kilobytes)
and are expensive compared to PROMs and EPROMs.
Applications: Used as BIOS and in embedded systems.
(v) FLASH memory:
It is the latest and most popular ROM technology used in present ESs. It combines the re-
programmability feature of the EEPROM and the high capacity of standard ROMs. Flash
memory is organized as blocks. The erasing of information can be done at block level without
affecting the other blocks.
Example: W27C512
Applications of FLASH: Web applications, digital cameras, mobile phones, etc.
MEMORY ACCORDING TO THE TYPE OF INTERFACE
Interfaces connecting the memory and processor/controller may be serial, parallel or serial
peripheral interfaces. Parallel interfaces were used in earlier computers for connecting
peripherals. Serial interface is commonly used for data memory like EEPROM.
These interfaces may be onboard interfaces or external interfaces.
The memory density of a serial memory interface is expressed in kilobits, whereas that of a
parallel memory interface is expressed in kilobytes.
Note: Communication interfaces are discussed separately in Unit-3.
MEMORY SHADOWING
A RAM access is about 3 times as fast as a ROM access. So, if the program is stored in RAM,
it gives high execution speed. But RAM is volatile, i.e., its contents are lost when the power
supply is switched OFF.
On the other hand, a ROM is non-volatile memory. Its stored contents are permanent. But it
gives low execution speed. Memory shadowing technique resolves this execution speed
problem.
In computer systems (e.g., PCs) and video systems, the configuration information called the
BIOS (Basic Input Output System) is stored in ROM. This information is required during boot-
up, and since it is read from ROM, access is time-consuming.
Now, manufacturers use memory shadowing. They put a RAM behind the logical layer of the
BIOS; this RAM acts as a shadow to the BIOS. While booting, the BIOS is copied to the shadow
RAM, the RAM is write-protected, and further reading from the ROM BIOS is disabled.
Thus, information is accessed from RAM instead of from ROM.
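The boot-time copy-and-protect sequence above can be sketched in C as follows. This is a minimal model, not real BIOS code: the array names, the 8-byte image size and the write-protect flag are all illustrative assumptions.

```c
#include <string.h>
#include <stdbool.h>

#define BIOS_SIZE 8   /* illustrative size; a real BIOS image is much larger */

/* Simulated ROM holding the BIOS image (slow to read in a real system). */
static const unsigned char bios_rom[BIOS_SIZE] = {
    0xEA, 0x5B, 0xE0, 0x00, 0xF0, 0x31, 0xC0, 0x8E
};

/* Shadow RAM region and a flag modelling the write-protect latch. */
static unsigned char shadow_ram[BIOS_SIZE];
static bool shadow_write_protected = false;

/* At boot: copy the BIOS from ROM into shadow RAM, then write-protect
   the RAM copy so all later BIOS reads come from the faster RAM. */
void shadow_bios(void)
{
    memcpy(shadow_ram, bios_rom, BIOS_SIZE);
    shadow_write_protected = true;
}
```

After shadow_bios() runs, the RAM copy is byte-identical to the ROM image and is marked write-protected, which is exactly the state the text describes at the end of boot-up.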
MEMORY SELECTION FOR EMBEDDED SYSTEM
Selection of RAM and ROM depends on the type of ES and the application. Important factors
while selecting the RAM and ROM are: -
(i) Need for external memory: We need to check whether the on-chip memory is sufficient
or external memory is required. Then we should estimate the memory required.
Examples:
(i) A Windows mobile device needs 64 MB RAM and 128 MB ROM
(ii) In small applications (toys), a microcontroller with less data memory, i.e., a few bytes of
internal RAM, Flash and EEPROM (if necessary), are needed. We don’t need external memory.
(ii) Memory size: Memory chips come in standard sizes like 512 bytes, 1024 bytes (1 KB),
2048 bytes (2 KB), 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, 1024 KB (1 MB),
etc. If an ES needs 20 KB, we have to go for a 32 KB chip, not 16 KB.
(iii) Address range supported by the processor: A processor with a 16-bit address bus can
address a maximum of 2^16 (= 64 K) memory locations. Hence it is meaningless to select a
128 KB memory chip for it.
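The two selection rules above (round the required size up to the next standard power-of-two chip size, and never exceed what the address bus can reach) can be written as small helper functions; the function names are illustrative.

```c
/* Round a required memory size up to the next standard chip size.
   Standard sizes are powers of two, starting at 512 bytes here. */
unsigned long select_chip_size(unsigned long required_bytes)
{
    unsigned long size = 512;
    while (size < required_bytes)
        size *= 2;
    return size;
}

/* Maximum number of locations addressable by an n-bit address bus. */
unsigned long max_addressable(unsigned address_bits)
{
    return 1UL << address_bits;
}
```

For the text's example: select_chip_size(20 * 1024) yields 32 KB, and max_addressable(16) yields 65536, confirming that a 128 KB chip is wasted on a 16-bit address bus.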
(iv) Sharing of memory: The memory may also be shared by I/O devices and other IC chips.
(v) Word size of the memory: Word size is the number of bits that can be read/written together
at a time. 4, 8, 12, 16, 24 and 32 are typical word sizes supported by memory chips. The word
size of the memory chip should match the width of the data bus of the
processor/controller.
SENSORS AND ACTUATORS
An ES is in constant interaction with real world. It monitors the changes in the input variables
and controls the output variables. For this purpose, it uses sensors and actuators.
Sensors:
A sensor is a device that detects (or measures) a physical variable in its environment and
responds to it. For example, a thermometer is a temperature sensor. In ESs, sensors are used at
the input port.
Actuators:
To “actuate” means to put something into mechanical action or motion (movement). An
actuator is a component responsible for moving and controlling a mechanism or system.
For example, a windmill actuates a pump. In ESs, actuators are used at the output port.
Other examples for actuators:
Electric motor
Stepper motor
Solenoid
Piezoelectric actuator
Hydraulic cylinder
I/O COMPONENTS
1. SEVEN SEGMENT LED DISPLAY
Examples:
For displaying ‘4’, the segments B, C, F and G are lit.
For displaying ‘3.’, the segments A, B, C, D, G and DP are lit.
For displaying the letter ‘d’, the segments B, C, D, E and G are lit.
Fig. 2 (b) Common Anode Display
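The three example patterns above can be encoded as one bit per segment. The A-to-DP bit ordering below is an assumed convention (it varies between boards), and the patterns are written for a common-cathode display; a common-anode display (Fig. 2(b)) needs the complement of each pattern.

```c
/* One bit per segment; the A..G, DP ordering is an assumed convention. */
enum {
    SEG_A = 1 << 0, SEG_B = 1 << 1, SEG_C = 1 << 2, SEG_D = 1 << 3,
    SEG_E = 1 << 4, SEG_F = 1 << 5, SEG_G = 1 << 6, SEG_DP = 1 << 7
};

/* Patterns for the examples above: '4', '3.' and 'd'. */
unsigned char pattern_4(void)     { return SEG_B | SEG_C | SEG_F | SEG_G; }
unsigned char pattern_3_dot(void) { return SEG_A | SEG_B | SEG_C | SEG_D | SEG_G | SEG_DP; }
unsigned char pattern_d(void)     { return SEG_B | SEG_C | SEG_D | SEG_E | SEG_G; }
```

Writing such a pattern byte to the port driving the display lights exactly the listed segments.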
2. RELAY
Relay is an actuator. In embedded applications, it is used to select a signal/power path
dynamically. The relay unit contains a relay coil made up of insulated wire wound on a metal
core, and a metal armature with one or more contacts.
A relay works on the electromagnetic principle. When a voltage is applied to the relay coil,
current flows through the coil and generates a magnetic field. This magnetic field attracts the
armature and moves the contact point. The movement of the contact point changes the
power/signal path.
Relay Configurations:
Fig. 3 illustrates the widely used relay configurations for embedded applications.
Normally open SPST relay: The circuit is normally open and it becomes closed when
the relay is energised.
Normally closed SPST relay: The circuit is normally closed and it becomes open when
the relay is energised.
SPDT (Single pole Double throw) configuration: There are two paths for information flow
and they are selected by energizing or de-energizing the relay.
The relay is normally controlled using a relay driver circuit connected to the port pin of the
processor/controller.
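Driving a relay from a port pin, as described above, can be modelled in C. The port register here is a plain variable standing in for a memory-mapped GPIO register, and the pin number is a hypothetical choice; the contact function models the normally-open SPST configuration.

```c
/* Simulated output port register; on a real controller this would be a
   memory-mapped GPIO register feeding the relay driver circuit. */
static unsigned char port_reg = 0;
#define RELAY_PIN (1u << 3)   /* hypothetical pin driving the relay coil */

void relay_energize(void)    { port_reg |=  RELAY_PIN; }
void relay_de_energize(void) { port_reg &= (unsigned char)~RELAY_PIN; }

/* Contact state of a normally-open SPST relay:
   the circuit is closed only while the coil is energized. */
int normally_open_contact_closed(void)
{
    return (port_reg & RELAY_PIN) != 0;
}
```

A normally-closed SPST relay would simply invert this function: closed while de-energized, open while energized.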
Advantages of relay:
Less power consumption.
Fast operation
Used for both ac and dc systems
Simple, compact and reliable.
Applications:
3. PIEZO BUZZER
A ‘piezoelectric buzzer’ or ‘piezo buzzer’ is an electronic device used
to produce a tone, alarm or sound. It is a low-cost and lightweight device with a simple
construction.
A piezo buzzer contains a piezoelectric diaphragm. When you apply voltage, the diaphragm
vibrates and produces sound. A piezo buzzer can be directly interfaced to the port pin of the
processor /controller.
Active and Passive Piezo Buzzers: Active piezo buzzers operate on DC voltage. They are
often called transducers. Passive piezo buzzers operate on AC voltage.
Self-driving and External-driving piezo buzzers:
The Self-driving type generates a fixed single tone, when a voltage is applied.
External driving type can generate different tones. The tone can be varied by applying
a variable pulse train to the piezoelectric buzzer.
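For the external-driving type, the tone frequency is set entirely by the pulse train: the port pin must be toggled once every half period of the desired tone. A small sketch of that timing arithmetic (function names are illustrative):

```c
/* Delay between pin toggles for a desired tone: half the tone period.
   1 second = 1,000,000 microseconds. */
unsigned long half_period_us(unsigned long tone_hz)
{
    return 1000000UL / (2UL * tone_hz);
}

/* Number of pin toggles needed to sound a tone for a given duration. */
unsigned long toggles_for(unsigned long tone_hz, unsigned long duration_ms)
{
    return 2UL * tone_hz * duration_ms / 1000UL;
}
```

For a 2 kHz tone, the pin is toggled every 250 microseconds; a driver loop would alternate the pin level with that delay for the required number of toggles.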
Applications:
Fire alarms
Microwave oven
Automobile alerts
In pest repelling devices (Pest is a destructive insect or animal that attacks crops and
food).
4. PUSH BUTTON SWITCH
A push button switch is a momentary switch that makes contact only while it is kept pressed.
Examples:
Doorbell switches
Calculator buttons
Keys on a keyboard
In embedded applications, push button is generally used as a ‘reset and start switch’ and pulse
generator. The push button is normally connected to the port pin of the host processor/
controller.
Depending on the way in which the push button is interfaced to the controller, it can generate
either a ‘LOW’ pulse or a ‘HIGH’ pulse, as shown in Figs. 5 (a) and 5 (b) respectively.
Fig. 5 (a): ‘LOW’ Pulse Generator Fig. 5 (b): ‘HIGH’ Pulse Generator
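The LOW-pulse wiring of Fig. 5(a) can be modelled in C: with a pull-up resistor the pin idles HIGH and a press pulls it LOW, so the read is active-low. The pin is a plain variable here standing in for a GPIO input register.

```c
/* Simulated input pin. With a pull-up resistor the pin reads 1 (HIGH)
   while the button is released; pressing it pulls the pin to 0 (LOW),
   so the press appears as a LOW pulse. */
static int button_pin = 1;

/* Active-low read for the pull-up wiring of Fig. 5(a). */
int button_pressed(void)
{
    return button_pin == 0;
}
```

The HIGH-pulse wiring of Fig. 5(b) is the mirror image: a pull-down resistor holds the pin LOW and the press drives it HIGH, so the read becomes active-high.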
OTHER SUB-SYSTEMS
The ‘other subsystems’ refer to the components/circuits/ICs necessary for proper functioning
of the ES.
1. Reset Circuit
2. Brown-out Protection Circuit
3. Oscillator Unit
4. Real-Time Clock
5. Watchdog Timer
1. RESET CIRCUIT
‘To reset’ means ‘to restart’ a device or system in a good condition. A Reset circuit helps a
microprocessor re-initialize itself and resume its normal operation, whenever an undesirable
error occurs. It ensures:
3. OSCILLATOR UNIT
The oscillator unit generates the clock signal for the processor/controller. Important
considerations are:
Clock frequency: The execution speed is directly proportional to the clock frequency. But
we should not blindly increase the clock frequency to increase execution speed. There
is an upper threshold of clock frequency for the logic devices on the chip; beyond that,
the system becomes unstable and non-functional.
Power consumption: Power consumption of system is directly proportional to clock
frequency. If we increase the clock frequency, power consumption will also increase.
Accuracy: The accuracy of program execution depends on the accuracy of clock signal.
The accuracy of quartz oscillator or ceramic resonator is expressed in +/- ppm (parts
per million).
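The ppm tolerance translates directly into timing drift: a tolerance of p ppm means up to p microseconds of error per second, i.e. p * 86400 / 1,000,000 seconds per day. As a worked sketch:

```c
/* Worst-case timing drift per day for an oscillator with a given
   tolerance in parts per million (ppm).
   1 ppm = 1 microsecond of error per second of elapsed time. */
double drift_seconds_per_day(double tolerance_ppm)
{
    const double seconds_per_day = 24.0 * 60.0 * 60.0;  /* 86400 s */
    return tolerance_ppm * seconds_per_day / 1e6;
}
```

For a typical +/- 20 ppm crystal, this gives up to about 1.73 seconds of drift per day, which is why RTCs (next section) often use calibrated low-ppm crystals.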
4. REAL-TIME CLOCK
Real-Time Clock is an IC used in PCs, ESs, servers and other electronic systems, to keep track
of time and date. It counts hours, minutes, seconds. It also counts days, months and years. Thus,
it acts like a clock as well as calendar.
Example: DS12885 (From Maxim)
Important features of RTC:
The RTC runs even when the system is off or in a low-power state.
The RTC is more accurate than the system clock.
Advantages of RTC:
UNIT-3
COMMUNICATION INTERFACE
Contents: Onboard Communication Interfaces-I2C, SPI, CAN, Parallel interface; External
communication interfaces-RS232 and RS485, USB, Infrared, Bluetooth, Wi-Fi, ZigBee,
GPRS, GSM.
INTRODUCTION
A communication interface (CI) is essential for an ES to communicate with its subsystems or
with external world. There are two types of CIs for an ES: Onboard CIs and External CIs.
Onboard Communication Interfaces:
Communication channels or buses used to connect various ICs and peripherals within the ES
are called Onboard Communication Interfaces. They are also called “Device/Board Level
Communication Interfaces”.
Examples:
1. First, the master pulls the SDA line low while SCL is high. This is the ‘START’
condition for data transfer.
2. Master sends the 7-bit or 10-bit address of the slave on the SDA line, along with clock
pulses on the SCL line for synchronization.
3. Master sends the Read bit ‘1’ or Write bit ‘0’ as per the need, following the address.
4. Slave receives the address and sends the acknowledge bit ‘0’ on the SDA line.
5. Master receives the acknowledge bit.
6. If a write operation is requested, the master sends 8-bit data to the slave on the SDA
line, MSB first; each bit is valid while the clock pulse is ‘HIGH’. The slave sends an
acknowledge bit to the master.
7. If a read operation is requested, the slave sends data to the master on the SDA line,
and the master sends the acknowledgement.
8. Master terminates the transfer by releasing the SDA line high while SCL is high. This
is the ‘STOP’ condition.
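The bus conditions in the steps above can be sketched as a bit-banged driver. This is a simulation for illustration only: the SCL/SDA variables stand in for open-drain GPIO lines, real code would add timing delays and read back the slave's acknowledge, and the bit log exists only so the waveform can be inspected.

```c
/* Simulated open-drain bus lines, idle HIGH. */
static int scl = 1, sda = 1;

/* Log of data bits clocked out, so the transmission can be inspected. */
static int sent_bits[64];
static int bit_count = 0;

/* START: pull SDA low while SCL is high. */
void i2c_start(void) { scl = 1; sda = 0; }

/* STOP: release SDA high while SCL is high. */
void i2c_stop(void)  { sda = 0; scl = 1; sda = 1; }

/* One data bit: change SDA only while SCL is low, then pulse SCL high
   (the bit is valid while the clock is HIGH). */
void i2c_write_bit(int bit)
{
    scl = 0;
    sda = bit;
    scl = 1;
    sent_bits[bit_count++] = bit;
    scl = 0;
}

/* One byte, MSB first, as in the data-transfer steps above. */
void i2c_write_byte(unsigned char byte)
{
    for (int i = 7; i >= 0; i--)
        i2c_write_bit((byte >> i) & 1);
}
```

Note the asymmetry that defines I2C: data bits may change only while SCL is low, whereas START and STOP are the two deliberate SDA transitions made while SCL is high.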
Applications of I2C
Reading some memory ICs.
Accessing DACs and ADCs.
Transmitting and controlling user-directed actions.
Reading hardware sensors.
Communicating with multiple microcontrollers.
2. SERIAL PERIPHERAL INTERFACE (SPI) BUS
The SPI bus is a synchronous, bi-directional, full-duplex, four-wire interface used for serial
communication.
The concept of SPI was introduced by Motorola. SPI is a single master multi-slave system. It
is possible to have more than one master. But only one master device should be active at any
given point of time. SPI requires 4 signal lines for communication. They are:
Master Out Slave In (MOSI): It carries the data from master to slave device.
It is also known as Slave Input/Slave Data In (SI/SDI).
Serial Clock (SCLK): It carries the clock signals.
Master In Slave Out (MISO): It carries the data from slave to master device.
It is also known as Slave Output/Slave Data Out (SO/SDO).
Slave Select (SS): It is an active low signal used to select the slave device.
The bus interface diagram shown in Fig. 2 illustrates the connection of master and slave devices
on the SPI bus.
A status register holds the status of various conditions of transmission and reception.
Working Principle:
SPI works on the principle of ‘Shift Register’ (SR). The master and slave devices
contain a special SR for transmitting and receiving data. The size of the shift register is
device dependent. Normally it is a multiple of 8-bits.
During transmission from the master to the slave, the data in the SR of the master is shifted
out on the MOSI line into the SR of the slave. At the same time, the bit shifted out of the
slave’s SR enters the SR of the master through the MISO line.
In summary the SRs of master and slave devices form a circular buffer. Some devices
are configurable to select LSB/MSB as first bit to send.
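The circular-buffer behaviour described above can be demonstrated in C: on each clock the master's MSB goes out on MOSI while the slave's MSB arrives on MISO, so after 8 clocks the two shift registers have swapped contents. This simulates the MSB-first case; the function name is illustrative.

```c
/* One full-duplex SPI byte exchange between the master's and slave's
   shift registers (MSB first). After 8 clocks the registers have
   swapped their contents - the circular buffer described above. */
void spi_exchange(unsigned char *master_sr, unsigned char *slave_sr)
{
    for (int i = 0; i < 8; i++) {
        int mosi = (*master_sr >> 7) & 1;   /* master's MSB -> slave  */
        int miso = (*slave_sr  >> 7) & 1;   /* slave's MSB  -> master */
        *master_sr = (unsigned char)((*master_sr << 1) | miso);
        *slave_sr  = (unsigned char)((*slave_sr  << 1) | mosi);
    }
}
```

This symmetry is why every SPI "read" is also a "write": the master must clock a byte (often a dummy byte) out in order to clock a byte in.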
Compared to I2C, the SPI bus is better suited for applications requiring transfer of
data in streams. Its only limitation is that SPI doesn’t support an acknowledgement
mechanism.
3. CONTROLLER AREA NETWORK (CAN)
CAN is a serial communication protocol used over a pair of wires. It is used in many real-
time applications. It was developed by Robert Bosch to provide communication among various
electronic components of vehicles.
Later it was extended to automation and industrial applications. The CAN specification doesn’t
specify the layout and structure of the physical bus, but any device connected to the CAN bus
is able to transmit on it.
The CAN protocol defines the data packet format and transmission rules to:
Prioritize messages
Guarantee latency times
Handle transmission errors
Retransmit corrupted messages
Distinguish between a permanent failure of a node and temporary errors
o Real-time support.
o Offers medium speed (up to 125 Kbps) and high speed (up to 1 Mbps) data transfer.
Applications of CAN:
Used in cars, buses, trucks and aircraft in (i) safety systems like airbag control, (ii)
Antilock Brake Systems (ABS) and (iii) navigational systems like GPS.
Elevator Controllers
Photo Copiers
Medical instruments
Production line control systems
4. PARALLEL INTERFACE
Parallel Interface is used for communicating with peripheral devices which are memory
mapped to the host processor. Fig. 3 illustrates the interfacing of devices using parallel
interface.
The host processor has a parallel bus and control over the read/write signals.
Communication is controlled by a ‘control signal interface’ between the host and the device.
The ‘Read’ and ‘Write’ signals control the direction of data transfer.
Each device connected to the processor is assigned a range of addresses.
When the address selected is in this range, a decoder circuit activates the chip select
line. Thus, the device becomes active.
Here, processor initiates the parallel communication. If device wants to initiate
communication, it sends an interrupt signal to the processor.
Parallel data communication offers the highest speed of data transfer.
The width of the parallel interface can be 4, 8, 16, 32 or 64 bits. It must match
the width of the data bus of the processor.
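The address-decoding step above (a device is active only when the bus address falls in its assigned range) can be sketched as a function. The active-low convention and the base/size values used in the usage note are illustrative.

```c
/* Address decoder for a memory-mapped device: the active-low chip
   select is asserted (0) only when the address on the bus falls
   inside the device's assigned range [base, base + size). */
int chip_select_n(unsigned int addr, unsigned int base, unsigned int size)
{
    return (addr >= base && addr < base + size) ? 0 : 1;
}
```

For a device assigned the hypothetical range 0x8000 to 0x80FF, an access to 0x8004 asserts chip select, while accesses to 0x7FFF or 0x8100 leave the device inactive.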
EXTERNAL INTERFACES
1. RS-232 C & RS-485
RS-232 C stands for Recommended Standard number 232, revision C. RS-232 and RS-232C
are almost same. The names are used interchangeably.
Important features of RS-232:
Applications of RS-232C:
Most PCs had RS232 compatible serial ports. They were used to connect peripherals
such as keyboards, mice, and printers to the computers.
It was most popular before the advent of Bluetooth, USB, etc. It is still used in some
legacy applications.
RS-485:
Applications of RS-485:
2. USB
USB stands for Universal Serial Bus. It is a wired, high-speed bus used for serial data
communication. It was released in 1995 by a core group of member organizations.
“USB.ORG” is the standards body defining and controlling the standards for USB
communication.
Important features of USB:
(i) Topology and hubs:
It follows a tiered-star topology with a USB host at the center and one or more peripheral
devices connected to it. In place of any peripheral device, we can connect a hub, which
supports some more peripherals.
Host and peripherals are connected by using cables.
USB cable supports a distance up to 5 meters.
USB transmits data in packets with standard data format.
Control transfer: Used to query, configure and issue commands to the USB device.
Bulk transfer: Used to send a block of data to a device (e.g., transferring data to a
printer). It supports error checking and correction.
Isochronous data transfer: Used for real-time data communication, where data is
transmitted as streams (e.g., frames in TV). It doesn’t support error checking and
retransmission of data.
Interrupt transfer: Used for transferring small amounts of data (e.g.: data from mouse
and keyboard).
3. INFRARED (IrDA)
IrDA stands for Infrared Data Association. It provides specifications for a set of
protocols for wireless infrared (IR) communications.
Infrared technology is a serial, half duplex, wireless technology for data communication
between devices.
Infrared waves are used for transmitting the data.
IrDA standards have been used to install several low-cost, short-range communication
systems in laptops, printers, handheld PCs, and PDAs.
The IrDA protocol contains Physical Layer, Media Access Control (MAC) and Logical
Link Control (LLC). The physical layer defines the physical characteristics of
communication like range, data rate, power, etc.
It supports point-to-point and multipoint communications using line of sight
propagation.
Its typical communication range is 10 cm to 1 m. The range can be increased by
increasing the transmitting power of the IR device.
It supports data rates ranging from 9600 bits/second to 16 Mbps.
An Infrared LED is used as transmitting source, while a photodiode acts as receiver.
Sometimes the same device can be used as both transmitter and receiver (e.g., a mobile
phone). Such a device is called a transceiver.
Certain devices require only unidirectional communication, so they have a separate
transmitter and receiver. For example, in a TV remote-control system, the remote control
contains the transmitter unit and the TV contains the receiver unit.
Advantages of IrDA:
Disadvantages:
Applications of IrDA:
IrDA has been in use since the early days of wireless communication. The remote control
of a TV or VCD player uses IrDA.
It was the transmission channel in mobile phones before Bluetooth’s existence. Even
now, most of the mobile phone devices support IrDA.
Popularly used in low-cost, short-range communication systems in laptops, printers,
handheld PCs, and PDAs.
4. BLUETOOTH
Advantages of Bluetooth:
Disadvantages:
5. WI-FI
Wi-Fi stands for wireless fidelity. It is a wireless technology used to connect computers,
laptops, smartphones and other devices to the internet. Wi-Fi follows the IEEE 802.11
standard.
Important Features of Wi-Fi:
It is intended for network communication and it supports Internet Protocol (IP) based
communication, where each device is identified by a unique network address called ‘IP
address’.
Wi-Fi based communications require an intermediate agent called Wi-Fi
router/Wireless access point (WAP) to manage the communications.
Wi-Fi router is responsible for
Restricting the access to a network
Assigning IP address to a device on the network
Routing data packets to the intended devices on the network.
Wi-Fi enabled devices contain a wireless adaptor for transmitting and receiving data in
the form of radio signals through an antenna. The hardware part of it is known as Wi-
Fi Radio.
Wi-Fi operates at 2.4 GHz or 5 GHz of radio spectrum and co-exists with other ISM
band devices like Bluetooth.
Wi-Fi supports data rates ranging from 1 Mbps to 150 Mbps, depending on the standard
and the access/modulation method used.
Depending on the type of antenna and usage location (indoor/outdoor), Wi-Fi offers a
range of 100 to 300 feet.
When its Wi-Fi radio is turned on, a Wi-Fi device searches for Wi-Fi networks in its
vicinity and lists the Service Set Identifiers (SSIDs) of the available networks. If a
network is security enabled, a password may be required to connect to its SSID.
Wi-Fi employs different security mechanisms like Wired Equivalent Privacy (WEP),
Wi-Fi Protected Access (WPA), etc., for securing the data communication.
Applications of Wi-Fi:
Business applications
Mobile applications
Computer applications
Automotive applications
Video conference
6. ZIGBEE
i. A ZigBee Coordinator (ZC) Which acts as the root of the ZigBee Network.
ii. A ZigBee Router (ZR) for passing information from a device to another device or
another ‘ZC’.
iii. ZigBee End Device (ZED) that contains ZigBee functionality for data communication.
It can talk only with a ZR or ZC.
Advantages of ZigBee
Low-cost.
Low power consumption.
High efficiency (due to long battery life).
Simple installation, monitoring and control
Disadvantages of ZigBee:
Applications of ZigBee:
7. GSM
The concept of GSM was conceived in Bell Laboratories in early 1970s. GSM stands
for Global System for Mobile Communication. GSM is a digital cellular 2G technology used
for transmitting voice and data over mobile network. It also provides roaming service,
i.e., your GSM phone number can be used in another GSM network. GSM is also the name
of a standardization group to create a common European mobile telephone standard.
Frequencies: It operates in the mobile communication bands 900 MHz and 1800
MHz.
GSM devices: Mobile telephones, PCMCIA cards, embedded radio modules, and
external radio modems.
GSM Services:
Teleservices: Telephony, Videotext, Facsimile and SMS
Advantages of GSM:
GPRS technology is faster than GSM (Data rate of GSM is about 9.6 Kbps, while that
of GPRS is about 14.4 to 115.2 Kbps).
GPRS provides 124 channels with 200 KHz spacing.
It provides an uninterrupted connectivity to the internet for mobile phones and
computers. Moreover, services will be easy and quick to access.
It provides voice and data services globally.
Up to eight users can share the same uplink and downlink channels; each channel is
divided into eight time slots. These time slots are dynamically allocated to the users
as and when needed.
GPRS is also known as GSM-IP (Global-System Mobile Communications-Internet
Protocol). It supports Internet Protocol (IP), Point to Point Protocol (PPP) and X.25
protocols for communication.
It uses one or more frequency-bands the radio supports (850, 900, 1800, 1900 MHz).
The GPRS specifications are written by the European Telecommunications Standard
Institute (ETSI).
GPRS is an old technology and it is being replaced by new techniques like EDGE
(Enhanced data rates for GSM evolution), HSDPA (High Speed Downlink Packet
Access), etc. which offer higher bandwidths for communication.
GPRS services:
Advantages of GPRS:
Provides a higher data rate than GSM.
Uninterrupted internet access.
Fast connection setup.
Supports bursty applications like email, broadcasting, web browsing, etc.
High bandwidth.
Used for point to point and multipoint communications.
Users can send or receive voice calls while browsing the internet or downloading data.
Thus, users can have both a voice call and a data call together.
Disadvantages of GPRS:
Networks can be affected when many users use GPRS from the same location at the same
time. This leads to traffic congestion and slows down the data connection.
If issues occur, it is difficult to troubleshoot.
UNIT-4
EMBEDDED FIRMWARE DESIGN AND DEVELOPMENT
The only way to come out of the loop is either by using a hardware reset or issuing an
interrupt.
A Hardware reset re-initializes the hardware components and restarts the program.
An Interrupt suspends the current task execution temporarily and executes a routine called
interrupt service routine (ISR). Then, control goes back to the interrupted program.
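The super-loop execution model can be sketched as follows. This is an illustrative Python model (real firmware would be written in C with an unbounded loop), and the task names are invented for the example.

```python
# Illustrative model of a super-loop: tasks run one after another, forever.
# A bounded iteration count stands in for the infinite loop so the sketch
# can be demonstrated; real firmware would loop unconditionally.

def read_keypad(state):
    # Hypothetical task 1: scan the keypad.
    state["keys_scanned"] += 1

def update_display(state):
    # Hypothetical task 2: refresh the display.
    state["display_updates"] += 1

def super_loop(iterations):
    state = {"keys_scanned": 0, "display_updates": 0}
    for _ in range(iterations):   # 'while True:' in real firmware
        read_keypad(state)        # non-preemptive: each task must return
        update_display(state)     # before the next one can run
    return state
```

If `read_keypad` ever hangs, `update_display` never runs again, which is exactly the single-point-of-failure drawback noted below.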
Advantages and Applications of Super loop-based design:
(i) It is simple and straightforward, and has no OS-related overheads.
(ii) It is more suitable for applications which are not time critical and where the
response time is not important.
Example: An electronic video game toy with keypad and display unit.
Here, the program running inside this product is designed such that: -
Drawbacks:
(i) If one task fails, the total system is affected:
Suppose, while executing a task, the program hangs at some point. Then it will remain there
forever and the product stops functioning.
To overcome this, watchdog timers (WDTs) are used. If a preset time is exceeded because the
processor hangs or fails unexpectedly, the WDT times out and resets the system.
But this adds hardware cost and firmware overhead.
(ii) Lack of real-timeliness:
Suppose there are many tasks in an application, and each task is repeated many times.
Then some events are likely to be missed, because the loop cannot respond to them immediately.
2. OPERATING SYSTEM-BASED APPROACH
This is also called “Task scheduling” approach. Here, functions to be executed are split into
tasks. Then, the tasks are run using a scheduler which is a part of OS kernel. In this approach,
a General-purpose operating system (GPOS) or a Real-time operating system (RTOS) is used
to host the user-written application firmware.
(i) GPOS based design:
Either Assembly Language (AL) or a High-Level Language (HLL) can be used to write firmware.
AL is processor/controller specific, whereas HLL is processor/controller independent.
ASSEMBLY LANGUAGE BASED DEVELOPMENT
Important features:
Some OS dependent tasks require low-level languages. For instance, ALP is used for
device driver programming.
The general format of an AL instruction is an opcode followed by one or more
operands. The opcode tells the processor/controller what to do, and the operands provide
the required data. In some instructions, the opcode implicitly contains the operand and no
explicit operand is required.
A program called an “assembler” is used to convert AL to machine language (ML).
A library allows you to reuse the code of important functions without compiling it each
time (e.g., functions for performing multiplication, floating-point arithmetic, etc.).
When the linker processes a library, it uses only those object modules in the library
that are necessary to create the program.
Library files are generated with extension “.lib”.
If you are using a commercial version of the assembler/compiler, its vendor provides
the required library files.
Checks for object modules (.obj files) of the program in the library.
Extracts them.
Assigns absolute address to each module.
Combines them into a single object file.
Important features:
The hex file is created from the final ‘Absolute Object File’ using a utility program
called an “Object-to-Hex File Converter”.
The hex file representation is processor/controller specific.
For Intel processors/controllers, the target hex file format will be ‘Intel Hex’ and for
Motorola, it is ‘Motorola Hex’ format.
‘OH51’ from Keil Software is an example of an Object-to-Hex File Converter utility for
the A51 Assembler/C51 Compiler for 8051-specific controllers.
The various steps involved in HLL based firmware development are similar to the steps in
AL based development, with the following differences.
Any text editor like ‘Notepad’ or ‘WordPad’ from Microsoft or text editor provided
by the IDE can be used for writing the program. Most of the HLLs support and use
modular programming approach. Here program is divided into multiple modules. i.e.,
multiple source files.
The program written in any HLL is saved with the corresponding language extension
(.c for C, .cpp for C++, etc.)
The cross-compiler in the IDE environment converts the HLL program into a hex file,
which contains the machine code in hexadecimal format.
Example of a cross-compiler: The IDE “Keil μVision3” is used for the 8051 family of
microcontrollers. It contains the Keil C51 C compiler, which is the most popular ‘C’
cross-compiler available for the 8051 family of microcontrollers.
(iii) Portability: Code written in an HLL is highly portable. It works with any
processor/controller with little modification. All we have to do is recompile the program
in the new processor’s IDE, after placing the required “include files” for that processor.
(v) Ease of debugging: If the source code contains necessary comments and document lines,
it is very easy to understand and debug.
(vi) Best choice for beginners: For a beginner, the best choice is writing the source code in an HLL.
(vii) It provides scope for refinement.
(ii) The investment required for HLL-based firmware development tools (an IDE containing
a cross-compiler) is higher than for the AL counterpart.
1. Mixing AL to HLL
2. Mixing HLL to AL
3. In-line Assembly Programming
Let ‘C’ be the HLL used. The above types of mixing are discussed below.
1. Mixing AL to HLL:
Program is written in HLL ‘C’. Some essential assembly routines are added to it.
Consider the following possible situations: -
Then the required routine is written in AL and invoked from C. Here, the programmer must
be aware of the following: -
These functions are cross-compiler dependent. Different cross compilers implement these
functions in different ways depending on the general-purpose registers and the memory used.
There is no universal rule for this. So, you must get the information from documentation of
the cross compiler in use.
2. Mixing HLL with AL:
The source code is written in AL, and C routines are included in that assembly code.
The entire source code is planned in AL for various reasons, such as:
Some portions of the code may be very difficult and tedious to code in AL.
Example: 16-bit multiplication and division in 8051 AL.
We may have to include built-in C library functions provided by the cross-compiler.
Example: built-in graphics library functions and string operations supported by C.
3. Inline Assembly:
It is a technique used to insert processor-specific assembly instructions at any location in
source code written in C. This avoids the overhead of calling an assembly routine from C code.
Special keywords are used to indicate the starting and ending of the assembly instructions.
The keywords are cross-compiler specific. C51 uses #pragma asm and #pragma endasm to
indicate a block of code written in assembly language.
e.g.:
#pragma asm
MOV A, #13H
#pragma endasm
UNIT-5
RTOS BASED EMBEDDED SYSTEM DESIGN
Contents: Operating system basics, Types of operating systems, Tasks, Process and Threads,
Multiprocessing and Multitasking, Task scheduling: Non-pre-emptive and Pre-emptive
scheduling; Task communication-Shared memory, Message passing, Remote Procedure Call
and Sockets, Task Synchronization: Task Communication/Synchronization Issues, Task
Synchronization Techniques.
OPERATING SYSTEM BASICS
INTRODUCTION:
Operating System (OS):
An OS acts as a bridge between the user applications (or tasks) and the system resources. It is
a system software responsible for managing system memory, resources and application
programs. This program is loaded into the computer by a boot program.
Application Program Interface (API):
API acts as an interface between application program and OS. In addition, users can interact
directly with the OS through a user interface (UI).
User Interfaces (UIs):
A UI is the point of human-computer interaction and communication. (e.g.: display screen,
keyboard, mouse, etc.).
Types of UIs: CLI, GUI, VUI and GBI
The kernel is the core of the OS. It is responsible for managing the hardware and software
resources and the communication among them. The kernel acts as an abstraction layer between
the system resources and the user applications. The kernel mainly contains a set of system
libraries and services.
OS Scheduler:
The OS scheduler is system software and is part of the kernel. In a multitasking
environment, it chooses the order of execution of processes, i.e., it decides which process
is to be executed first and which one next. It also takes an active role in interrupt
handling and exception handling.
KERNEL SERVICES OF OS
1. Process Management
2. Memory Management
3. File Management
4. I/O Device Management
5. Inter-Process Communication (IPC)
6. Protection
7. Error Handling
8. Job accounting
9. Interrupt Handling
10. Synchronization
11. Managing time-critical requirements (RTOS)
2. Memory Management:
(a) Primary (Main) Memory Management:
Before a process is executed, the process and its required data are stored in main memory,
which is basically RAM. It has high speed and can be accessed directly by the CPU. The
kernel provides the following primary memory management services:
It provides a service called ‘Device manager’ to handle I/O operations using device
drivers. Device Manager loads and unloads device drivers and sends data and control
signals to and from the I/O devices.
6. Protection:
‘Protection’ means protecting system resources from unauthorized access and manipulation.
The system must also be protected from external attacks like viruses and worms. Protection
services of kernel provide different levels of permissions (e.g.: no access, read only access,
etc.). It provides authentication mechanism for each user by means of passwords.
7. Error/Exception Handling: It deals with registering and handling the errors occurred and
exceptions raised during the execution of processes/tasks. It constantly checks for possible
errors and takes appropriate action. It ensures correct and consistent processing. It also
ensures reliable data transmission across vulnerable networks.
Examples of errors/exceptions: programming errors (syntax errors, logical errors, and
errors at compile time and run time), resource errors (e.g., bus error), interface errors,
communication errors, and divide-by-zero.
8. Job Accounting: OS keeps track of ‘time and resources’ used by various tasks and users.
This information can be used to track resource usage for a particular user or group of users.
9. Interrupt Handling Service:
An interrupt is an event or signal that alters the sequence in which the processor executes
instructions. It causes the processor to stop executing the current program temporarily and
execute an Interrupt Service Routine (ISR) to handle that interrupt. Then control is passed
back to the main program to resume its execution. Handling of interrupts is based on priorities.
10. Other important services provided by RTOS:
(i) Task Synchronization: Task/Process synchronization means efficient sharing of system
resources by concurrent processes without any conflicts.
Types of RTOSs:
There are three types of RTOSs: those for hard real-time, soft real-time and firm real-time
systems.
Hard real-time system: It has a set of strict deadlines. Missing even a single deadline is
considered a system failure. (e.g.: flight control system, missile guidance system)
Soft real-time systems: Here, one or more failures to meet deadlines are not considered a
complete system failure, but the performance is considered degraded (e.g.: ATM,
audio/video systems).
Firm real-time system: In this, a few missed deadlines will not lead to total failure, but
missing more than a few may lead to complete or catastrophic system failure. (e.g.: satellite-
based surveillance applications, financial forecast systems).
‘Create’ state: In this state, the process is about to be created but has not yet been
created. The OS recognises it, but no resources are allocated to the process. The create
state is also called the ‘New’ state or ‘Dormant’ state.
‘Ready’ state: The state where the process is loaded into memory and placed in the ready
queue maintained by the OS. The ready queue is a queue of processes waiting for CPU
allocation. After creation, a process enters the ready state.
‘Running’ state: The state in which the instructions of the process are executed.
‘Blocked’ state: It is also called ‘waiting’ state or ‘interrupted’ state. It is the state where a
process is waiting for
Some event to occur
For a resource or
For the completion of an I/O operation.
After the event occurs or resource is available, the process again goes to ready state.
‘Completed’ state: A state where process completes its execution. It is also called
‘Terminated' state, ‘finished’ state, ‘exit’ state or ‘deleted’ state. Here process is removed from
main memory. OS deletes PCB and releases the associated resources.
Process structure:
Process structure means data, objects and resources related to process. Process structure
mainly refers Stack, Program Counter, Process status and CPU registers used by the process.
A process structure is available in PCB.
Process Control Block (PCB):
PCB is a data structure used by OS to store process structure and all information about a
process. Each process has a PCB. It is created when process is created and deleted when
process is terminated.
Process state
Process ID
Process Priority
Program Counter
Registers
Pointer
Miscellaneous Information
Fig. 2 shows the main contents of a PCB. They are described below.
Process State: It is the current state of the process i.e., create, ready, running, blocked or
completed state.
Process ID: Identification number of the particular process.
Process Priority: Priority assigned to the process.
Program Counter: Register which contains the address of the next instruction to be
executed in the process.
Registers: This specifies the other registers used by the process. (Accumulator, index
registers, stack pointer, general purpose registers, etc.)
Pointer: It points to the address of the next PCB in ready state.
Miscellaneous Information:
Memory Management Information: page tables, segment tables, etc., in memory.
I/O Status Information: List of files and I/O devices used by the process.
Accounting information: Amount of CPU used, time constraints, etc.
Process Memory:
It is the memory occupied by a process. It contains three regions, namely, stack memory,
data memory and code memory. Stack memory holds temporary data such as variables local to
the process. Data memory holds all global data of the process. Code memory contains the
program code (instructions) corresponding to the process.
Steps in Process Management:
THREADS
A thread is a segment of a process. It is the smallest code sequence that can be managed
independently by the scheduler. It is also called a lightweight process. A process can have
one or more threads. Different threads in a process share the data memory, code memory and
heap memory areas in the address space. Threads maintain their own thread status (stack and
CPU register values) and program counter.
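As a rough illustration of threads sharing a process's data memory while keeping their own execution state, here is a sketch using Python's threading module; the worker function and values are invented for the example.

```python
import threading

results = []               # data memory shared by all threads of this process
lock = threading.Lock()    # guards the shared data against conflicts

def worker(value):
    # Each thread has its own stack and program counter, but all of them
    # write into the same shared list.
    with lock:
        results.append(value)

def run_threads():
    threads = [threading.Thread(target=worker, args=(i,)) for i in (1, 2, 3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()           # wait for every thread to finish
    return sorted(results)
```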
MULTIPROCESSING AND MULTITASKING
Multiprocessing:
It is the ability of an OS to execute multiple processes simultaneously. Multiprocessor
systems have multiple CPUs and execute multiple processes simultaneously.
In single-processor systems, multitasking involves switching the CPU from executing one
task to another. This creates the illusion of multiple tasks executing in parallel.
A process is considered as a virtual processor, because it has its own CPU registers, stack
and program counter like a physical processor. When CPU switches from one process to
another, the properties of process (virtual processor) are converted to those of physical
processor. OS scheduler controls this switching.
When CPU is interrupted, it temporarily stops the execution of the current process, and
switches to interrupting process. This situation is called context switching. Before doing so,
it saves the context of current process. Context means the state of a process that includes the
details of CPU registers, memory, system resource usage, execution details, etc.
After executing the new process, control comes to the first process. Then CPU retrieves the
saved context, uses it and runs the process.
Example for multiple tasks in Automatic chocolate vending machine (ACVM):
Multithreading:
A process/task in an application consists of many sub-operations.
If all sub-operations are executed in sequence, it takes more time and CPU utilization is
not efficient. For example, if the process is waiting for user input, the CPU has to stay idle.
So the process is split into different threads.
Each thread occupies a portion of the process. It can be independently managed by OS
scheduler. Each thread is meant for one suboperation.
Advantages of multithreading:
Better memory utilization: because multiple threads within a process share address
space for data memory.
Since variables can be shared across threads, IPC becomes easier.
Better CPU utilization: When one thread enters wait state, other threads can utilize
CPU. This speeds up the execution of process.
Example for multiple threads in a process: Display-Process in mobile phone has the
following threads:
A thread to display clock time and date.
A thread to display battery power.
A thread to display silent or active mode
A thread to display unread messages in the inbox
A thread to display call status: whether dialing or call waiting
A thread to display menu.
Process vs Thread:
S.No. | Process | Thread
2 | Processes don't share memory with other processes. | Threads share memory with other threads of the same process.
TASK SCHEDULING
Scheduling and Kernel:
Scheduling Policies/Algorithms:
Scheduling policies form the guidelines for scheduling. A scheduling policy is implemented
as an algorithm, which is run by the kernel as a service.
The selection of scheduling algorithm depends on the following factors:
Turnaround Time (TAT): Total amount of time spent by a process in the system.
(TAT = Waiting time + Burst time)
Waiting Time: Time for which a process waits in the ready queue for getting the CPU.
Burst time: Execution time of a process.
Response Time: Amount of time after which a process gets the CPU for the first time
after entering the ready queue.
A good scheduling algorithm has high CPU utilization, minimum turnaround time, maximum
throughput and least response time.
Queues maintained by OS in CPU Scheduling:
Job Queue: This queue contains all the processes in the system.
Ready Queue: Contains all the processes residing in main memory that are ready to
run.
Device Queue: Contains the processes which are waiting for an I/O device.
A process migrates through these queues during its life cycle. In the OS context, a queue
acts as a buffer.
SCHEDULING MECHANISMS:
Based on the scheduling algorithm used, the scheduling mechanisms are classified as Non-
pre-emptive and Pre-emptive scheduling mechanisms.
In non-pre-emptive scheduling, currently executing process is allowed to run until it
terminates or enters to ‘wait’ state. In other words, once CPU is allocated to a process, it
holds the CPU till it terminates or enters ‘wait’ state.
Pre-emptive scheduling is priority-based scheduling. Processes with higher priorities are
honoured first. If a higher-priority task becomes ready to run, the OS pre-empts a
lower-priority task that is already running. The lower-priority task is suspended and the
higher-priority task runs on the CPU.
NON-PREEMPTIVE SCHEDULING ALGORITHMS
1. First-Come-First-Served (FCFS) Scheduling policy:
It is also called the First-In-First-Out (FIFO) algorithm. It executes the processes in the
ready queue in the order of their arrival into the queue, i.e., it allocates the first job
that arrived in the queue to the CPU first, then the second one, and so on.
2. Shortest Job First (SJF) Scheduling policy:
It is also called “Shortest Job Next (SJN) Algorithm”. It executes the process with the
shortest estimated run time first, followed by the next shortest process, and so on. Here, the
OS needs to know (or guess) the execution time of each process on CPU. It is not suited for
interactive jobs, where execution time is not known.
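The waiting-time and turnaround-time definitions above can be applied to FCFS and SJF with a small simulation. This is an illustrative sketch that assumes all processes arrive at time 0.

```python
def schedule_metrics(burst_times):
    """Given burst times in the order the scheduler serves them (all
    arriving at t=0), return (waiting time, turnaround time) per process."""
    clock, metrics = 0, []
    for burst in burst_times:
        waiting = clock                   # time already spent in the ready queue
        clock += burst                    # non-preemptive: runs to completion
        metrics.append((waiting, clock))  # TAT = waiting time + burst time
    return metrics

bursts = [8, 4, 2]
fcfs = schedule_metrics(bursts)          # FCFS: serve in arrival order
sjf = schedule_metrics(sorted(bursts))   # SJF: serve shortest job first
```

For these bursts, FCFS gives waiting times 0, 8 and 12 (average 20/3), while SJF gives 0, 2 and 6 (average 8/3), which shows why serving short jobs first reduces the average waiting time.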
3. Last-Come-First-Served (LCFS) Scheduling:
It is also called Last-In-First-Out (LIFO) scheduling. Here the last job that entered the
queue is served first, followed by the last but one, and so on.
4. Priority Based Non-preemptive Scheduling:
It ensures that a process in the ready queue with high priority is serviced first. The
priority of a process can be indicated through various mechanisms; one way is to assign a
priority while creating the process. For example, priorities 0 to 255 are assigned in
Windows CE, where ‘0’ indicates the highest priority and ‘255’ the lowest.
Example: In SJF Algorithm, the lower the time required to complete a process the higher is
its priority.
PRE-EMPTIVE SCHEDULING ALGORITHMS
1. Shortest Remaining Time (SRT) Scheduling:
Normally, the OS assigns the time slice, which varies from a few microseconds to
milliseconds. Some OS kernels permit the user to assign the time slice.
Note: Round-robin algorithm is a pre-emptive algorithm as the scheduler forces the process
out of the CPU once the time quota expires.
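The round-robin behaviour just described can be sketched with a minimal simulation, assuming all processes are ready at time 0 and context-switch time is negligible.

```python
from collections import deque

def round_robin(bursts, time_slice):
    """Return the execution timeline as (pid, start, end) slices."""
    ready = deque(enumerate(bursts))      # (pid, remaining burst time)
    clock, timeline = 0, []
    while ready:
        pid, remaining = ready.popleft()
        run = min(time_slice, remaining)  # run for at most one time slice
        timeline.append((pid, clock, clock + run))
        clock += run
        if remaining > run:               # pre-empted: rejoin the ready queue
            ready.append((pid, remaining - run))
    return timeline
```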
TASK COMMUNICATION
In a multitasking system, multiple tasks/processes run concurrently. There may be
interaction/communication between tasks. Based on the degree of interaction, the processes
running on OS are classified as:
(i) Co-operating Processes: One process requires inputs from other process to run.
These processes exchange data through some shared resources. They also communicate for
synchronization.
(ii) Competing Processes: The competing processes do not share anything among
themselves but they share the system resources like files, display devices, etc.
Processes communicate with each other through Inter-Process Communication (IPC). IPC is
essential for process coordination. Different IPC mechanisms are described below.
SHARED MEMORY:
Processes share a part of the memory to communicate with each other. The implementation
of shared memory concept is Kernel dependent. Important IPC mechanisms used in this
context are pipes and memory mapped objects.
PIPES
The kernel creates a parent process. A parent process creates a child process using the
system call “fork()”.
A parent process can have one or more child processes.
A child process is like a copy of its parent and inherits many attributes from the parent.
A child process and its parent process run independently and asynchronously.
Each child process has a unique ‘Process ID (PID)’, but all the child processes of a
parent have the same ‘Parent Process ID (PPID)’.
A parent process will not terminate until all its child processes are terminated.
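The parent/child behaviour described above can be demonstrated with a POSIX fork(). This sketch uses Python's os module; it is POSIX-only (os.fork is unavailable on Windows), and the exit code 7 is an arbitrary value chosen for the example.

```python
import os

def spawn_child():
    """Fork a child process; the parent waits for it and returns its exit code."""
    pid = os.fork()          # returns 0 in the child, the child's PID in the parent
    if pid == 0:
        # Child: runs independently of the parent, then terminates
        # with a distinctive exit code.
        os._exit(7)
    # Parent: does not proceed until the child has terminated.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)
```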
The OS provides functions for creating, opening and closing a pipe device, connecting a
thread or task to a pipe, and reading from and writing to it. These functions are described below.
OS Functions for a pipe:
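The exact pipe function names are OS dependent, so here is a generic POSIX-style sketch of create/write/read/close on an anonymous pipe, using Python's os module.

```python
import os

def pipe_demo(message):
    """Send a message through an anonymous pipe and read it back."""
    read_fd, write_fd = os.pipe()            # create the pipe device
    os.write(write_fd, message)              # writer task posts the data
    data = os.read(read_fd, len(message))    # reader task gets it in FIFO order
    os.close(write_fd)                       # close both ends when done
    os.close(read_fd)
    return data
```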
If Process 1 wants to send a message to Process 2, Process 1 first sends the message to a
First-In-First-Out queue called the message queue. The queue stores the message temporarily
in a system-defined memory object and passes it to the desired process. The messages are
thus exchanged through a message queue. This is shown in Fig. 4.
1. OSQCreate: This queue function creates a queue and initializes the Q message blocks
with front and back pointers called *QFRONT and *QBACK respectively.
2. OSQPost: It posts (sends) a message to the message block.
3. OSQPend: To wait for a queue message at the queue. It reads the message and deletes
it when received.
4. OSQAccept: Reads the message at *QFRONT after checking for its presence. After the
read, *QFRONT is incremented.
5. OSQFlush: It reads the queue from front to back and deletes all the messages in the
queue. After the flush, the *QFRONT and *QBACK point to the beginning of queue.
6. OSQQuery: It queries the queue message block when read, but the messages are not
deleted. This function returns a pointer to the *QFRONT, number of queued
messages, size of the queue and table of tasks waiting for the messages from the
queue.
7. OSQPostFront: Sends a message to the front of the queue (at *QFRONT). It is used when a
message is urgent or has a higher priority than all the messages previously posted into the queue.
Note: These functions are offered by OS and used by ISRs and processes/tasks.
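To make the semantics of these queue functions concrete, here is a toy Python model of a message queue. The method names mirror the OSQ* calls above, but this is only an illustrative sketch, not real RTOS code (a real RTOS queue would also block pending tasks and track waiting-task tables).

```python
from collections import deque

class MessageQueue:
    """Toy model of the RTOS queue calls described above."""
    def __init__(self):             # OSQCreate: create and initialize the queue
        self._q = deque()
    def post(self, msg):            # OSQPost: append a message at the back
        self._q.append(msg)
    def post_front(self, msg):      # OSQPostFront: urgent message jumps the queue
        self._q.appendleft(msg)
    def pend(self):                 # OSQPend: read and delete the front message
        return self._q.popleft()
    def flush(self):                # OSQFlush: delete all queued messages
        self._q.clear()
    def query(self):                # OSQQuery: inspect without deleting
        return len(self._q)
```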
Mailbox:
Mailbox is an alternate form of ‘Message Queue’ and it is used as an IPC mechanism in
certain RTOSs. It is used for one way messaging.
Example: Mailbox in a mobile phone.
Important Features:
The thread which creates the mailbox is known as the ‘mailbox server’ and the threads
which subscribe to the mailbox are known as ‘mailbox clients’.
The mailbox server posts the messages to the mailbox and notifies it to the clients
which are subscribed to the mailbox.
The clients read the message from the mailbox on receiving the notification.
The implementation of mailbox is OS kernel dependent.
Each mailbox has an ID. OS provides the IPC functions such as create, post (send
message), pend (wait for a mailbox-message), and query (queries the mailbox and
obtains the information about a mailbox).
Each mailbox for a message is initialized with a NULL pointer before posting any
message into the box.
4. OSMboxAccept: Checks the mailbox to see if a message is available. This function does
not block (suspend) the calling task if a message is not available. If available, it returns the
pointer.
5. OSMboxQuery: Queries the mailbox and obtains information about a mailbox.
SIGNALLING
Signalling is an IPC mechanism used by the OS. A signal is a notification to a ‘process’ or
a ‘thread within the same process’ to indicate the occurrence of an event for which the
other process/thread is waiting. Since its occurrence cannot be predicted, it is called an
asynchronous notification.
Important Features:
Signals are not queued and they do not carry any data.
Signals are used for interrupt and exception-handling processes. They are also used
for OS or user-defined processes.
A signal handler is associated with each signal. Whenever a specific signal occurs, its
signal handler handles it. Signal handler is a function defined in the program code and
registered with the kernel.
A signal can be specified with a number or name. Usually, a signal name starts with
SIG.
A signal is generated by a system, or through a program. When a signal is raised,
there are three situations:
Signal performs a default action.
Signal is handled to perform other actions.
Signal is ignored.
signal() function: It sets a signal handler based on the arguments passed to it.
os_send_signal kernel call in ‘RTX51 Tiny’ OS sends a signal from one task to
another specified task.
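Registering a handler and raising a signal can be sketched with Python's signal module. This is POSIX-only (SIGUSR1 is not defined on Windows), and the handler simply records the notification; note that, as described above, the signal carries no data.

```python
import signal

received = []  # record of signals handled so far

def handler(signum, frame):
    # Signal handler registered with the kernel; it records the notification.
    received.append(signum)

def signal_demo():
    """Register a handler for SIGUSR1, then raise that signal in this process."""
    signal.signal(signal.SIGUSR1, handler)   # associate the handler with the signal
    signal.raise_signal(signal.SIGUSR1)      # the asynchronous notification occurs
    return bool(received) and received[-1] == signal.SIGUSR1
```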
REMOTE PROCEDURE CALL (RPC)
The OS provides RPCs for distributed environments like client-server systems. Here the
client and server are often on remote systems connected by a network, but they may also
be on the same machine.
Interface Definition Language (IDL) defines the interface for RPC.
In order to make the communication platform-independent, certain standard formats
are essential.
The RPC communication can be either synchronous (blocking) or asynchronous
(non-blocking).
In the former case, the calling process is blocked until it receives a response from the
remote process. In the latter case, the calling process continues its execution while the
remote process executes the procedure; the result from the remote procedure is received
through call-back functions.
Mechanisms like IDs and cryptographic algorithms (e.g., DES, 3DES) are used by the
client for authentication.
Sockets are also used for RPC communication.
Advantages of RPC:
Disadvantages of RPCs:
RPC is a concept that can be implemented in different ways; it is not a standard.
There is no flexibility in RPC for hardware architecture; it is only interaction based.
It involves more cost.
SOCKETS
A socket is an IPC mechanism which allows users to exchange information between processes
on the same machine or on two different machines. Each socket has a source address and a
destination address.
Important Features of Sockets:
Sockets are available on every OS. They are designed to run over a standard transport
layer protocol like TCP or UDP.
Sockets provide point-to-point, two-way communication.
Each socket on internet has a port number, source address and destination address.
IP addresses of source and destination may be on the same computer or different
computers on the network. A protocol is used for communication between source and
destination.
Functions are available for clients and servers that run on the same CPU or on distinct
CPUs on the internet.
(A server is a machine or system that serves requests; a client is a machine or system
that requests those services.)
Client and server may use different protocols (e.g., one socket may use TCP
(Transmission Control Protocol) and another socket may use UDP (User Datagram
Protocol)).
An advantage of the socket design is that the client and server can run on the same machine.
Applications of Sockets:
close(): It is used for closing the device; it can be used again, from the beginning of its
allocated buffer, only after opening it again.
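The point-to-point, two-way exchange described above can be sketched with a TCP loopback connection, both endpoints on the same machine (binding to port 0 lets the OS pick a free port; the "ping"/"pong" messages are illustrative):

```python
import socket

# Server side: bind to loopback and listen for one connection.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: the OS chooses a free port
server.listen(1)
host, port = server.getsockname()

# Client side: connect to the server's address and port.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
conn, _ = server.accept()

# Two-way, point-to-point communication over the connection.
client.sendall(b"ping")
request = conn.recv(1024)
conn.sendall(b"pong")
reply = client.recv(1024)

# close() releases each socket descriptor.
for s in (client, conn, server):
    s.close()

print(request, reply)
```

Here the client and server happen to run in the same process, which illustrates the point that sockets work identically whether the endpoints are on the same machine or on different machines.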
TASK SYNCHRONIZATION
‘Task synchronization’ means efficient sharing of system resources by concurrent tasks
without any conflicts. It is most essential in a multitasking environment.
Example of a conflict: Suppose two processes try to access a shared memory area and perform
updates on a shared memory location.
To address such conflicts, we should ensure that each process is aware of the other
processes accessing the shared resources.
TASK COMMUNICATION/SYNCHRONIZATION ISSUES
1. Racing
2. Deadlock
3. The Dining Philosopher’s Problem
4. Producer-Consumer/Bounded Buffer Problem
5. Readers-Writers Problem
6. Priority Inversion
1. Racing:
Racing occurs in an OS when two or more concurrent processes or threads try to access and
change the same shared resource (such as a variable or file). The outcome of their execution
then depends on the order in which they are executed. If the processes/threads are not
correctly synchronized, one thread can overwrite another's changes. This is an undesirable
situation that can lead to incorrect results, system crashes, deadlocks or security problems.
A real-world example: Two employees performing updates on a shared document at the same
time.
Detection of racing: Debugging tools (e.g.: race condition detectors, code analysis tools, etc)
can be used to detect and debug race conditions. Thorough testing and code reviews also help
to predict race conditions.
Ways to prevent racing:
(i) Using atomic operations: Atomic means indivisible. An atomic operation is a set of
instructions that is guaranteed to complete without interruption, so any update to a shared
variable is performed by only one thread at a time.
(ii) Using synchronization mechanisms: Use locks or semaphores to ensure that only one
process or thread can access a shared resource at any given time.
(iii) Avoid global variables.
(iv) Using message passing: Use message passing instead of shared memory to communicate
between processes or threads.
(v) Proper design process: When designing your application, make sure that your design can
handle multiple processes or threads accessing shared resources.
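A minimal sketch of prevention method (ii): four threads increment a shared counter, and a lock makes each read-modify-write atomic so no update is overwritten (the function name increment is illustrative):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # Without the lock, two threads could read the same old value
        # and one increment would be lost (a race condition).
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)
```

With the lock in place the final count is always 4 x 100,000; removing the lock can yield a smaller, run-dependent value.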
2. Deadlock:
We know that a race condition produces incorrect results.
A deadlock is a situation where none of the processes can progress in their execution. i.e., all
processes stop running. It occurs when a group of processes are blocked in a state where
each process is waiting for a resource from some other process.
Example: Consider two processes A and B. Suppose A is holding a resource ‘x’ and wants a
resource ‘y’ held by process B. Suppose B, while holding ‘y’, wants resource ‘x’ held by A.
So, both A and B compete for the resource held by the other process. The result of this
competition is deadlock, where both processes stop execution.
Possible conditions for deadlock: E. G. Coffman described four conditions that must hold for
a deadlock to occur.
(i) Mutual exclusion: Only one process can hold a shared resource at a time.
Example: A printer can be accessed by only one process.
(ii) Hold and Wait: A process holds a shared resource by locking and waits for additional
resources held by other processes.
(iii) No resource pre-emption: A resource can be released only voluntarily by the process
holding it; i.e., the resource cannot be taken away until the process holding it releases it.
(iv) Circular Wait: Let there be n processes P1, P2, P3, …, Pn. Suppose P1 is waiting for a
resource held by P2, P2 is waiting for a resource held by P3, …, and Pn is waiting for a
resource held by P1. This forms a circular wait chain, which results in deadlock.
Deadlock Handling Methods:
Using a protocol which ensures that the system will never enter a deadlock state.
Detecting and removing deadlocks, when they occur.
Ignoring the problem altogether, pretending that deadlocks never occur in the system.
This solution is used by most OSs. (Linux, Windows, etc.)
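One simple protocol that ensures the system never enters a deadlock state is to break the circular-wait condition by acquiring locks in a fixed global order. A sketch (ordering by id() is an arbitrary but consistent choice; names are illustrative):

```python
import threading

lock_x = threading.Lock()
lock_y = threading.Lock()

def transfer(first, second, results, name):
    # Always acquire locks in one global order (here: by object id),
    # regardless of the order the caller asked for. No circular wait
    # can form, so no deadlock is possible.
    a, b = sorted((first, second), key=id)
    with a:
        with b:
            results.append(name)

results = []
# The two threads request the locks in opposite orders, the classic
# deadlock setup; the ordering rule makes it safe anyway.
t1 = threading.Thread(target=transfer, args=(lock_x, lock_y, results, "A"))
t2 = threading.Thread(target=transfer, args=(lock_y, lock_x, results, "B"))
t1.start(); t2.start()
t1.join(); t2.join()

print(sorted(results))
```

Without the sorted() step, this exact workload could deadlock: each thread would hold one lock while waiting for the other (hold-and-wait plus circular wait).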
3. The Dining Philosophers’ Problem:
Starvation: Any philosopher may have to wait for a long time and never get both
sticks. This is called starvation.
Assume that philosophers are processes and sticks are shared resources. Then OS faces the
same situation.
Solutions:
(i) Using Round Robin allocation
(ii) Using FIFO allocation
(iii) Imposing rules on accessing the forks: A philosopher should put down the fork in
his hand (say, the left fork) after waiting for a fixed duration for the second fork (the
right fork). Also, he should wait for a fixed time before making the next attempt. This
solution works to some extent, but if all philosophers act in lockstep, the cycle can repeat
indefinitely.
(iv) Using Semaphores: Each philosopher acquires a semaphore (mutex) before picking up
any fork. When a philosopher wants to eat, he checks whether his left and right philosophers
are using the fork. It is done by checking the state of associated semaphores. If forks are in
use, he waits until they are available. When a philosopher finishes eating, he puts the
forks down and informs his left and right philosophers, by signalling the semaphores
associated with the forks.
Note: This solution gives maximum concurrency.
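A well-known semaphore-based variant (not the exact neighbour-signalling scheme above, but in the same spirit) admits at most N-1 philosophers to the table at once, so the circular wait can never close:

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]
# A counting semaphore admitting at most N-1 philosophers: with only
# four competing for five forks, someone can always get both forks.
room = threading.Semaphore(N - 1)
meals = [0] * N

def philosopher(i, rounds=10):
    left, right = forks[i], forks[(i + 1) % N]
    for _ in range(rounds):
        with room:              # enter the room (blocks if N-1 already in)
            with left:
                with right:
                    meals[i] += 1   # "eating"

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(meals)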
4. Producer-Consumer Problem (or Bounded Buffer Problem):
Suppose two processes concurrently access a shared buffer, with fixed size. A thread/process
which produces data is called ‘Producer thread/process’. A thread/process which consumes
data is called ‘Consumer thread/process’. Suppose both of them are continuously working
using the buffer.
If the producer produces data at a faster rate than the consumer consumes it, a ‘buffer
overrun’ occurs; i.e., the producer tries to put data into a full buffer.
If the consumer works at a faster rate than the producer, a ‘buffer underrun’ occurs; i.e.,
after some time the consumer tries to read data from an empty buffer.
Both of these conditions lead to inaccurate data and data loss. This problem can be
rectified in many ways; one simple solution is mutual exclusion through the ‘sleep and
wake up’ technique.
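The ‘sleep and wake up’ solution can be sketched with a bounded queue.Queue, whose put() sleeps (blocks) on a full buffer and get() sleeps on an empty one; the None sentinel marking end-of-stream is an illustrative convention:

```python
import queue
import threading

buffer = queue.Queue(maxsize=4)   # bounded buffer of fixed size 4
consumed = []

def producer(n):
    for i in range(n):
        buffer.put(i)             # sleeps on a full buffer: no overrun
    buffer.put(None)              # sentinel: nothing more to produce

def consumer():
    while True:
        item = buffer.get()       # sleeps on an empty buffer: no underrun
        if item is None:
            break
        consumed.append(item)

p = threading.Thread(target=producer, args=(20,))
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()

print(consumed)
```

Whichever side runs faster simply sleeps until the other side wakes it, so neither overrun nor underrun can occur regardless of the two rates.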
5. Readers-Writers Problem:
If many processes try to read a shared data concurrently, there is no problem. But, when
many processes try to write and read concurrently, it will definitely create inconsistent
results.
Example: Suppose one process in a bank system tries to read available balance in an account
and other process tries to update the available balance in that account. This leads to
inconsistent results.
Proper synchronization techniques are applied to avoid the Readers-Writers problem.
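One classic synchronization technique for this problem is a readers-writers lock: the first reader locks out writers and the last reader readmits them, while a writer holds exclusive access. A sketch (class and method names are illustrative):

```python
import threading

class ReadWriteLock:
    """Many concurrent readers, or one exclusive writer (sketch)."""
    def __init__(self):
        self._readers = 0
        self._read_guard = threading.Lock()  # protects the reader count
        self._write = threading.Lock()       # held by writers, or by readers as a group

    def acquire_read(self):
        with self._read_guard:
            self._readers += 1
            if self._readers == 1:
                self._write.acquire()        # first reader blocks writers

    def release_read(self):
        with self._read_guard:
            self._readers -= 1
            if self._readers == 0:
                self._write.release()        # last reader lets writers in

    def acquire_write(self):
        self._write.acquire()

    def release_write(self):
        self._write.release()

# Bank-balance example: writers update, readers only observe.
balance = 100
rw = ReadWriteLock()

def deposit(amount):
    global balance
    rw.acquire_write()
    balance += amount                        # exclusive update
    rw.release_write()

threads = [threading.Thread(target=deposit, args=(10,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

rw.acquire_read()
print(balance)
rw.release_read()
```

Because every update runs under the write lock, concurrent deposits cannot produce the inconsistent balance described above.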
6. Priority Inversion Problem:
‘Priority inversion’ means inverting the priority of a high priority task with that of a low
priority task.
It results in a situation, where a high priority task needs to wait for a low priority task to
release a resource which is shared among the high priority task, low priority task, and a
medium priority task. It is illustrated by the following example.
Let A, B and C be 3 processes with high, medium and low priorities respectively.
Let A and C share a variable ‘x’ whose access is synchronized by a binary semaphore.
Let the OS scheduler pick C to execute. Suppose C needs ‘x’; then C acquires the
semaphore over ‘x’.
Suppose B starts executing, and A enters the ready state at this stage.
Since A has higher priority than B, it pre-empts B. So, A is scheduled for execution.
A needs ‘x’. Since C acquired semaphore, A is put into blocked state.
Since B has a higher priority than C, B now runs while A stays blocked; C can resume,
and release the semaphore, only after B finishes.
So A has to wait until B completes and C executes and releases the semaphore.
This produces an unwanted delay in the execution of the higher-priority task A, and may
also lead to missed deadlines for A.
Problems due to Priority Inversion:
A system malfunction may occur if a high priority process is delayed and if deadlines
are not met. It is quite undesirable when requirements are time-critical.
Reduces the performance of the system.
Work around mechanisms used to handle the priority inversion problem:
(i) Priority Inheritance:
Suppose two tasks P and Q share a resource, where P has low priority and Q has high
priority. If P accesses the shared resource and holds a lock on it, it inherits the priority
of Q; i.e., the priority of P is boosted to the priority of the high-priority task Q. P
continues to execute and holds the shared resource till it completes the execution.
When P releases the shared resource, its priority is brought to its original value. Q gets a lock
over it and starts execution.
This method checks the priorities of all tasks which try to access shared resources and
adjusts their priorities dynamically. This is a run-time overhead. It is only a workaround;
it does not eliminate the waiting delay of Q.
(ii) Priority Ceiling:
Here, a priority is associated with each shared resource. The priority associated with each
resource is the priority of the highest-priority task which uses that particular resource.
This priority level is called the ‘ceiling priority’. Whenever a task accesses a shared
resource, the scheduler elevates its priority to the ceiling priority of the resource.
If a low-priority task accesses the shared resource, its priority is temporarily boosted to
the priority of the highest-priority task that may share that resource.
This eliminates the pre-emption of the task by other medium priority tasks. Once the task
completes its execution, its priority is brought back to its original level.
Advantages:
Drawback:
It may produce hidden priority inversion because the priority of a task is always elevated
irrespective of other higher priority tasks that want the shared resources. This always gives
the low priority task the luxury of running at high priority when it accesses shared resources.
1. MUTUAL EXCLUSION THROUGH BUSY WAITING (SPIN LOCK)
The advantage of a spin lock is that it has no overhead due to context switching. It is only
held for a short time, and it is useful for multiprocessor systems.
This method basically needs: (i) reading, (ii) testing, and (iii) setting the lock variable.
Without hardware support there is no single (atomic) instruction that combines these
operations; they would have to be implemented using multiple low-level instructions, which
depend on the instruction set of the processor.
This can be achieved with the combined support of hardware and software. Most processors
support a ‘TSL’ (Test and Set Lock) instruction for this purpose. This instruction is
processor-specific.
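A spin lock built on test-and-set can be sketched as follows; since Python exposes no TSL instruction, a non-blocking acquire on a threading.Lock stands in for the atomic test-and-set (the class and function names are illustrative):

```python
import threading

class SpinLock:
    """Busy-waiting lock sketch over an atomic test-and-set primitive."""
    def __init__(self):
        self._flag = threading.Lock()

    def acquire(self):
        # Busy-wait: repeatedly "test and set" until the set succeeds.
        # acquire(blocking=False) atomically tests the flag and sets it
        # if free, returning whether it succeeded.
        while not self._flag.acquire(blocking=False):
            pass                      # spin: the CPU stays busy here

    def release(self):
        self._flag.release()

counter = 0
lock = SpinLock()

def work():
    global counter
    for _ in range(10_000):
        lock.acquire()
        counter += 1                  # critical section
        lock.release()

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)
```

The spin in acquire() is exactly the busy waiting discussed next: the waiting thread burns CPU cycles checking the lock instead of sleeping.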
Drawbacks:
The ‘busy waiting’ technique keeps the CPU always busy, because it involves continuous
checking for a lock. This wastes CPU time and leads to high power consumption, so it is not
suited for a battery-powered ES. An alternative to this technique is the ‘sleep & wakeup’
mechanism.
2. MUTUAL EXCLUSION THROUGH SLEEP & WAKEUP
Suppose a process is holding a lock on a critical section and another process tries to access
the section. Then the second process is made to sleep, i.e., it enters the wait state. When
the first process leaves the critical section, it sends a wakeup message to the sleeping
process and wakes it up. The implementation of this policy is OS-dependent.
SEMAPHORE
A semaphore is a technique for mutual exclusion through the sleep & wakeup mechanism.
Important features of semaphore:
Advantages of semaphores:
A binary semaphore can have only two integer values: 0 or 1. It’s simpler to implement and
provides mutual exclusion. We can use a binary semaphore to solve the critical section
problem.
Example applications of binary semaphores:
Two trains are at nearby stations and there is only one track, but they have to move in
opposite directions. Then, at a time, one train is given the signal and the other has to wait.
Counting Semaphore:
A binary semaphore limits a shared resource to one process. A counting semaphore allows
a fixed number of processes to access a shared resource.
A counting semaphore maintains a count between zero and a maximum value and limits
access to the resource to this count.
We can use counting semaphores to resolve synchronization problems like resource
allocation. Here, the semaphore count is the number of available resources. If new
resources are added, the semaphore count is automatically incremented, and if existing
resources are removed, the count is decremented.
States of Counting semaphore:
The state of a counting semaphore is said to be ‘signalled’ when its count is > 0. The
count associated with a semaphore object is decremented by one when a process/thread
acquires it, and incremented by one when a process/thread releases it.
The state of a counting semaphore is said to be ‘non-signalled’ when the semaphore is
acquired by the maximum number of processes/threads that it can support (i.e., when the
count becomes zero).
Example:
We use them in the dining philosophers’ problem.
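A counting semaphore guarding a pool of three identical resources can be sketched as follows (MAX_USERS and the peak-concurrency tracking are illustrative):

```python
import threading

MAX_USERS = 3
pool = threading.Semaphore(MAX_USERS)   # count = number of available resources
active = 0
peak = 0
guard = threading.Lock()                # protects the bookkeeping counters

def use_resource():
    global active, peak
    with pool:              # count decremented on acquire, incremented on release
        with guard:
            active += 1
            peak = max(peak, active)
        # ... use one of the MAX_USERS resources here ...
        with guard:
            active -= 1

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)
```

Even with ten threads competing, the semaphore never admits more than MAX_USERS at once; the eleventh acquire would find the count at zero (non-signalled) and sleep.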
Critical section Objects:
In Windows CE, a critical section object is the same as a mutex object, except that a
critical section object can only be used by the threads of a single process.
Illustrating binary and counting semaphores with real-world examples:
Real-world example for a binary semaphore: hotel rooms
Any user who pays and follows the norms of the hotel can avail the rooms for
accommodation.
A person who wants to avail of the hotel room facility contacts the receptionist. If a room
is available, the receptionist hands over the room key to the user. If no room is currently
available, the user can register his name in the advance-booking register.
When a person gets a room, he/she is granted exclusive access to room facilities
like TV, telephone, toilet, etc. (not shared, unlike in a dormitory).
When a user vacates the room, he gives the keys back to receptionist. The receptionist
informs the next user who booked room in advance.
*****