
Unit-1

Introduction to Embedded systems

A system is a set of interrelated parts/components designed and developed to perform a common task or the specific work for which it was created.
An embedded system is an integrated system combining computer hardware and software for a specific function. It can be described as a dedicated computer system developed for a particular purpose. It is not a traditional, general-purpose computer: embedded systems may work independently or be attached to a larger system to perform a few specific functions, and they can work without human intervention or with little human intervention.
Three main components of embedded systems are:
1. Hardware
2. Software
3. Firmware

Some examples of embedded systems:

 Digital watches
 Washing Machine
 Toys
 Televisions
 Digital phones
 Laser Printer
 Cameras
 Industrial machines
 Electronic Calculators
 Automobiles
 Medical Equipment

Block Structure Diagram of Embedded System:


Embedded systems vs general computing systems
Embedded systems and general computing systems differ significantly in their design, purpose,
and capabilities:

1. Purpose and Functionality:


- Embedded Systems: Designed for specific tasks or functions within a larger system or
device. They are often dedicated to performing a particular function reliably and efficiently,
such as controlling a microwave oven, managing the engine in a car, or monitoring sensors in
industrial equipment.
- General Computing Systems: Designed for versatility and general-purpose computing tasks.
They are capable of running a wide range of applications and tasks, from word processing and
web browsing to complex simulations and data analysis.

2. Hardware and Software Integration:


- Embedded Systems: Integrate specific hardware components (such as microcontrollers or
Application-Specific Integrated Circuits - ASICs) with tailored software. The software is often
low-level (e.g., C, assembly language) and optimized for the particular hardware to maximize
performance and efficiency.
- General Computing Systems: Typically consist of a combination of standard hardware
components (like CPUs, GPUs, RAM) and run operating systems (such as Windows, macOS,
Linux) that support a variety of applications. The software development is often higher-level
and abstracted from the hardware details.

3. Real-Time Requirements:
- Embedded Systems: Many embedded systems require real-time operation, meaning they
must respond to external events within strict timing constraints. This is critical in applications
like automotive control systems, medical devices, and industrial automation where timing and
reliability are paramount.
- General Computing Systems: Real-time operation is not a primary concern for general-
purpose computing systems. They prioritize tasks based on scheduling algorithms but do not
typically require deterministic timing guarantees.

4. Power and Size Constraints:


- Embedded Systems: Often designed to operate under power constraints, such as low power
consumption or limited battery life. They are also typically compact in size to fit within the
physical constraints of the host device or system.
- General Computing Systems: Less constrained by power and size limitations compared to
embedded systems, as they are designed to operate in environments with stable power sources
and ample physical space.

5. Examples:
- Embedded Systems: Examples include automotive control units, industrial PLCs
(Programmable Logic Controllers), medical devices (like pacemakers and insulin pumps),
consumer electronics (like smart watches and IoT devices), and more.
- General Computing Systems: Examples include desktop computers, laptops, servers, and
smartphones, which are capable of running a wide range of applications and tasks.

So while both embedded systems and general computing systems involve computing
technology, they are tailored for different purposes, environments, and constraints. Embedded
systems prioritize specific functionality, real-time performance, and efficiency within a
dedicated application context, whereas general computing systems offer versatility,
multitasking capabilities, and support for a broad range of applications in diverse computing
environments.

History of Embedded systems

The history of embedded systems traces back to the mid-20th century and has evolved
significantly alongside advancements in electronics, computing, and technology. Here are key
milestones and developments in the history of embedded systems:
1940s - 1950s: Early Developments:
The roots of embedded systems can be traced to the era of early computers and electronic
control systems. One notable example is the Harvard Mark I, an electromechanical computer
developed during World War II.
During this period, industrial automation and early electronic control systems began to emerge,
employing simple embedded systems for tasks such as process control and monitoring.
1960s - 1970s: Rise of Microprocessors:
The introduction of microprocessors in the early 1970s revolutionized embedded systems. The
Intel 4004 microprocessor, released in 1971, marked a significant milestone by integrating all
essential computing functions on a single chip.
This era saw the development of early embedded systems for applications like industrial
control, automotive electronics (e.g., engine control units), and early consumer electronics.
1980s - 1990s: Expansion and Diversification:
The 1980s witnessed rapid advancements in microcontroller technology, which integrated
microprocessors with additional peripherals (e.g., memory, I/O ports) on a single chip. This
made embedded systems more powerful and cost-effective.
Embedded systems found widespread adoption in various industries, including
telecommunications (e.g., modems), automotive (e.g., anti-lock braking systems), aerospace
(e.g., flight control systems), and consumer electronics (e.g., video game consoles, handheld
devices).
2000s - Present: Proliferation and Connectivity:
The 21st century saw the proliferation of embedded systems enabled by advancements in
semiconductor technology, miniaturization, and connectivity (IoT - Internet of Things).
Embedded systems became increasingly interconnected, forming the backbone of IoT
applications. This led to the development of smart devices, wearable technology, home
automation systems, and industrial IoT (IIoT) solutions.
Modern embedded systems continue to evolve with advancements in real-time operating
systems (RTOS), low-power design, wireless communication protocols (e.g., Wi-Fi, Bluetooth,
LoRa), and embedded software development tools.
Emerging Trends:
Current trends in embedded systems include the integration of artificial intelligence (AI) and
machine learning (ML) algorithms for enhanced decision-making and automation.
Security and reliability remain critical concerns, especially with the rise of connected devices
and potential vulnerabilities in IoT ecosystems.
The demand for embedded systems continues to grow across diverse sectors, driven by
advancements in autonomous vehicles, robotics, healthcare devices, and smart infrastructure.
Classification of Embedded systems

Embedded systems can be classified into various categories based on different criteria such as
performance, complexity, real-time requirements, and application domains. Here are common
classifications of embedded systems:

Based on Performance and Complexity:


- Small-scale embedded systems: These systems typically have limited processing power and
memory, designed for simple control functions. Examples include microcontrollers used in
household appliances, toys, and simple industrial controls.
- Medium-scale embedded systems: These systems are more capable than small-scale
systems and may include microprocessors with additional peripherals. They are used in
applications like automotive electronics (e.g., engine control units), consumer electronics (e.g.,
digital cameras), and medical devices.
- Large-scale embedded systems: These are complex systems with significant computational
power, often featuring high-performance processors and advanced software architectures.
Examples include embedded systems in aerospace and defense (e.g., avionics systems),
telecommunications (e.g., base stations), and industrial automation (e.g., robotic systems).

Based on Real-Time Requirements:


- Real-time embedded systems: These systems are designed to respond to external events
within strict timing constraints. They can be further classified into:
- **Hard real-time systems**: Critical tasks must be completed within a guaranteed time
frame to ensure safety or system integrity (e.g., airbag deployment system in cars).
- **Soft real-time systems**: Timing constraints are important but not as strict as in hard
real-time systems (e.g., multimedia streaming applications).
- **Non-real-time embedded systems**: These systems do not have strict timing
requirements and can operate without time constraints, focusing more on functionality (e.g.,
some consumer electronics).
**Based on Network Connectivity**:
- **Connected embedded systems**: Include IoT devices that are interconnected via
networks (e.g., Wi-Fi, Bluetooth, Zigbee) to exchange data and perform collaborative tasks.
- **Stand-alone embedded systems**: Operate independently without network connectivity,
often performing localized control functions.

**Based on Power Consumption**:


- **Low-power embedded systems**: Designed to operate efficiently on limited power
sources, suitable for battery-operated devices and energy-efficient applications.
- **High-performance embedded systems**: Focus on processing power and capability,
often requiring higher power consumption for intensive computational tasks.

Major applications of embedded systems


Embedded systems find application in a wide range of industries and domains due to their
ability to perform specific functions efficiently and reliably. Some major applications of
embedded systems include:


1. **Automotive**:
- **Engine Control**: Embedded systems manage fuel injection, ignition timing, and other
engine parameters for optimal performance and efficiency.
- **Safety Systems**: Anti-lock braking systems (ABS), electronic stability control (ESC),
airbag deployment systems, and collision avoidance systems rely on embedded systems for
real-time decision-making and response.

2. **Consumer Electronics**:
- **Smartphones and Tablets**: Embedded systems handle user interfaces, multimedia
playback, connectivity (Wi-Fi, Bluetooth), and sensor integration (gyroscopes,
accelerometers).
- **Digital Cameras**: Embedded systems control image processing, autofocus, exposure,
and other camera functionalities.

3. **Industrial Automation**:
- **PLCs (Programmable Logic Controllers)**: Embedded systems monitor and control
machinery and processes in manufacturing environments, ensuring precise operation and
coordination.
- **SCADA Systems**: Embedded systems are used in supervisory control and data
acquisition systems for real-time monitoring and control of industrial processes.

4. **Medical Devices**:
- **Patient Monitoring Systems**: Embedded systems track vital signs, administer
medication (e.g., infusion pumps), and provide alarms/alerts for healthcare professionals.
- **Implantable Medical Devices**: Pacemakers, insulin pumps, neurostimulators, and other
implants rely on embedded systems for controlling therapeutic interventions and monitoring
patient conditions.

5. **Aerospace and Defense**:


- **Avionics**: Embedded systems are critical in aircraft for flight control, navigation,
communication, and monitoring of aircraft systems.
- **Missile Guidance Systems**: Embedded systems ensure precision guidance and
targeting in defense applications.

6. **Telecommunications**:
- **Base Stations**: Embedded systems manage signal processing, network protocols, and
data routing in cellular and wireless communication networks.
- **Networking Equipment**: Routers, switches, and modems use embedded systems for
network management and data transmission.

7. **Smart Home and IoT (Internet of Things)**:


- **Home Automation**: Embedded systems control smart thermostats, lighting systems,
security cameras, and appliances, offering remote monitoring and control capabilities.
- **IoT Devices**: Sensors, actuators, and smart devices in various applications (e.g.,
agriculture, environmental monitoring) use embedded systems to collect and process data for
automated decision-making.

8. **Embedded Vision and Robotics**:


- **Machine Vision Systems**: Embedded systems analyze images and video feeds for
applications in quality control, object recognition, and surveillance.
- **Robotics**: Embedded systems provide control and feedback mechanisms for
autonomous robots in industries such as manufacturing, logistics, and healthcare.

9. **Energy Management**:
- **Smart Grids**: Embedded systems help manage electricity generation, distribution, and
consumption efficiently.
- **Renewable Energy Systems**: Embedded systems optimize the operation of solar
panels, wind turbines, and energy storage systems.

10. **Environmental Monitoring**:


- **Weather Stations**: Embedded systems collect and analyze weather data for forecasting
and monitoring environmental conditions.
- **Air Quality Sensors**: Embedded systems monitor air pollution levels in urban areas
and industrial sites.

11. **Retail and Point-of-Sale (POS) Systems**:


- **Barcode Scanners**: Embedded systems decode barcode information for inventory
management and sales transactions.
- **Cash Registers**: Embedded systems process payments and track sales data in retail
environments.

12. **Transportation and Logistics**:


- **GPS Navigation Systems**: Embedded systems provide real-time navigation and route
guidance for vehicles and ships.
- **Fleet Management**: Embedded systems monitor vehicle location, fuel consumption,
and maintenance schedules for logistics operations.

13. **Entertainment and Multimedia**:


- **Digital Signage**: Embedded systems display multimedia content in public spaces,
stores, and entertainment venues.
- **Audio Equipment**: Embedded systems control audio processing and playback in
concert halls, theaters, and recording studios.

14. **Education and Learning**:


- **Interactive Whiteboards**: Embedded systems enhance classroom interaction with
touch-based interfaces and multimedia capabilities.
- **Educational Toys**: Embedded systems integrate learning activities and interactive
features for children.

15. **Home Healthcare**:


- **Medical Monitoring Devices**: Embedded systems assist with remote monitoring of
patients' health conditions and medication adherence.
- **Assistive Technologies**: Embedded systems support individuals with disabilities
through smart devices and wearable technologies.

16. **Security and Surveillance**:


- **CCTV Systems**: Embedded systems manage video surveillance cameras for
monitoring public spaces, buildings, and sensitive areas.
- **Access Control Systems**: Embedded systems regulate entry to secure locations using
biometric sensors and electronic locks.
These applications highlight the versatility and importance of embedded systems in enabling
advanced functionality, automation, and connectivity across diverse industries, improving
efficiency, enhancing safety, and enabling innovative solutions tailored to specific needs and
environments in modern society.

Purpose of embedded systems


The primary purpose of embedded systems is to perform specific tasks or functions within a
larger device, system, or machine. These systems are designed with specialized hardware and
software to meet the unique requirements of their intended applications. Here are the key
purposes and objectives of embedded systems:

1. **Dedicated Functionality**: Embedded systems are tailored to perform predefined tasks


efficiently and reliably. They are optimized for specific applications, ensuring that they can
handle tasks such as control, monitoring, or processing data within their designated
environments.

2. **Real-Time Operation**: Many embedded systems operate in real-time, meaning they


must respond to external stimuli or inputs within strict timing constraints. This is critical in
applications where timing accuracy is crucial for safety, performance, or functionality.

3. **Low Power Consumption**: Efficiency in power usage is a fundamental consideration in


embedded system design. Many embedded systems are designed to operate on minimal power,
making them suitable for battery-powered devices or applications where energy efficiency is
essential.

4. **Compact Size and Integration**: Embedded systems are often compact and integrated
into the device or system they control. This integration reduces physical footprint and
complexity, which is particularly advantageous in applications with space constraints or where
multiple functions need to be consolidated.

5. **Reliability and Stability**: Embedded systems are engineered for high reliability and
stability, minimizing the risk of failure or malfunction. This is crucial in critical applications
such as medical devices, automotive systems, and industrial automation where system
downtime or errors can have significant consequences.

6. **Customization and Optimization**: Embedded systems allow for customization and


optimization of hardware and software components to meet specific performance, cost, and
functionality requirements of the application. This flexibility enables manufacturers to tailor
solutions that best fit their needs.

7. **Security and Safety**: Embedded systems often include security features to protect
against unauthorized access and ensure data integrity. In safety-critical applications like
medical devices or automotive systems, embedded systems play a vital role in ensuring
operational safety and compliance with regulatory standards.
8. **Cost-Effectiveness**: By focusing on specific functions and optimizing resources,
embedded systems can be cost-effective solutions compared to general-purpose computing
platforms. This is particularly beneficial in mass-produced consumer electronics, industrial
automation, and other high-volume applications.

Elements of an embedded system

Embedded systems are basically designed to regulate a physical variable (such as the
temperature in an air conditioner) or to manipulate the state of some device (such as a
microwave oven) by sending signals to the actuators or devices connected to the output ports,
in response to input signals provided by the end users or by the sensors connected to the
input ports.

Hence, an embedded system can be viewed as a reactive system.


Examples of common user-interface input devices are keyboards, push buttons,
switches, etc.
The memory of the system is responsible for holding the code (the control algorithm and
other important configuration details).
An embedded system whose memory contains no implemented code (i.e. no control
algorithm) has all the peripherals but is not capable of making decisions in response to
situational and real-world changes.
The memory for the code may be present on the processor itself or implemented as a
separate chip interfaced to the processor. In a controller-based embedded system, the
controller may contain internal memory for storing code; such controllers are called
microcontrollers with on-chip ROM, e.g. the Atmel AT89C51.

Core of the embedded system


Basic block diagram of an embedded system

Embedded systems typically consist of several key elements that work together to perform
specific functions within a larger system or device. These elements include:

1. **Microcontroller or Microprocessor**: This is the central processing unit (CPU) of the


embedded system, responsible for executing instructions and performing computations. It may
include additional components such as memory (RAM and ROM), timers, and I/O ports
integrated into a single chip.

2. **Memory**: Embedded systems require memory to store program instructions (code) and
data. This includes:
- **RAM (Random Access Memory)**: Used for temporary data storage during program
execution.
- **ROM (Read-Only Memory)**: Stores permanent data and program instructions that are
essential for the system's operation, typically including the bootloader and firmware.

3. **Input/Output (I/O) Interfaces**: Embedded systems interact with the external


environment through various I/O interfaces, which may include:
- **Analog-to-Digital Converters (ADC)**: Convert analog signals from sensors (such as
temperature or pressure sensors) into digital data that the system can process.
- **Digital-to-Analog Converters (DAC)**: Convert digital signals into analog outputs (such
as controlling motor speeds or generating sound).
- **Serial Communication Interfaces**: Such as UART, SPI, I2C, used for communication
with other devices, sensors, or networks.

4. **Real-Time Clock (RTC)**: Provides accurate timekeeping and timestamping capabilities


for scheduling tasks, logging events, and managing time-sensitive operations.

5. **Power Management**: Embedded systems often include power management components


to regulate voltage levels, manage power consumption, and ensure efficient use of energy
resources. This is crucial for battery-operated devices and systems requiring low power
consumption.

6. **Operating System (OS) or Real-Time Operating System (RTOS)**: Depending on the


complexity and requirements of the embedded system, it may run on a lightweight operating
system or RTOS. These provide task scheduling, memory management, and device drivers to
simplify application development and ensure system reliability.

7. **Sensors and Actuators**: Embedded systems interface with the physical world through
sensors (e.g., temperature sensors, motion detectors) to gather input data, and actuators (e.g.,
motors, valves) to perform physical actions based on processed data.

8. **User Interface**: In many embedded systems, a user interface is provided to interact with
the device or system. This may include displays (LCD, LED), buttons, touch screens, or
communication interfaces (USB, Ethernet) for configuration, monitoring, and control.

9. **Networking and Communication**: For embedded systems that require connectivity,


components such as Ethernet controllers, Wi-Fi modules, Bluetooth, or cellular modems enable
communication with other devices, networks, or cloud services.

10. **Security Features**: Depending on the application, embedded systems may include
security mechanisms such as encryption, secure boot, and authentication to protect against
unauthorized access, data breaches, or tampering.

These elements collectively define the architecture and functionality of embedded systems,
enabling them to perform specific tasks efficiently and reliably in diverse applications ranging
from consumer electronics and automotive systems to industrial automation and medical
devices.

Unit-2
Communication buses in embedded systems:
In embedded systems, communication buses play a crucial role in enabling various components
and peripherals to exchange data and control signals efficiently. Here are some common
communication buses used in embedded systems:
Onboard communication interfaces:
Inter-Integrated Circuit (I2C):
Inter-Integrated Circuit (I2C) is a serial communication protocol. It is effective for
communicating with sensors and modules on the same board, but not for longer-distance
communication. The bus transmits information bidirectionally between connected devices
using only two wires. With 7-bit addressing, up to 128 devices can share the bus while
maintaining a clear communication pathway between them. I2C is ideal for projects that
require many different parts to work together (e.g., sensors, pin expanders, and drivers). The
achievable I2C speed depends on the data rate, the quality of the wires, and the amount of
external noise. Its two-wire interface is used for connecting low-speed devices such as ADCs,
microcontrollers, I/O expanders, etc.
Working Principle of I2C
I2C has two lines: SCL, the clock line, and SDA, the serial data line. The clock line SCL
synchronizes the data transmission; data bits are sent and received over the SDA line.
The master device drives the transfer and generates the clock signal; any addressed device is
considered a slave.
On the bus, the master/slave and transmitter/receiver roles are not fixed: the direction of data
transfer depends on the moment. If the master wants to send data to a slave, it must first
address that slave, then transmit the data, and finally terminate the transfer. Likewise, the
master must address the slave when it wants to receive data from it, and the receiver
terminates the exchange after accepting the data sent by the slave.
In addition, the master generates the timing clock that paces every data transfer. Both lines
must be connected to the supply through pull-up resistors, so both lines sit at a high level
when the bus is idle.

I2C Working Protocol

Data Transmission Method


The master begins by sending a start condition, seen by every connected slave: the SDA line
is switched from high to low while SCL is held high.
The master then sends the 7- or 10-bit address of the slave it wants to communicate with,
together with the read/write bit.
Each slave compares this address with its own. If it matches, the slave returns an ACK
acknowledgment bit by pulling the SDA line low for one bit; otherwise the SDA line is left
high (NACK).

The master then transmits or receives data frames, and the receiving device acknowledges
each successful transfer by sending an ACK bit. To stop the transmission, the master sends a
stop condition: SCL switches high before SDA switches high.

Clock Synchronisation

When multiple masters are present, their clock signals on the SCL line must be synchronized
for data transmission. Data in I2C transmissions remains valid only during the high period of
the clock.
Mode of Transmission

Beyond the 100 kbit/s standard mode, fast mode and high-speed mode are the two faster data
transmission modes.

Fast Mode

Devices transmit and receive data at up to 400 kbit/s. Devices that cannot keep up with this
rate can slow the transmission down by extending the SCL signal's low period (clock
stretching).

High-speed Mode

In this mode, information is transmitted at up to 3.4 Mbit/s, while remaining backward
compatible with the slower modes.

Pros and Cons of I2C

Pros

 It supports various master devices.


 It offers multi-slave and multi-master communication.
 This protocol is flexible and adaptable too.

Cons

 I2C is a somewhat slower protocol, partly because its open-drain bus with pull-up resistors
limits the achievable clock rate.
 It takes up more board space (the pull-up resistors).
 The architecture becomes more complex as the number of devices increases.
 The protocol is half-duplex, so data can flow in only one direction at a time.
I2C in Microcontroller

Raspberry Pi 4-channel 16-bit ADC

This is a Seeed product compatible with the Raspberry Pi. This 16-bit ADC is used when a
more precise ADC is required in the circuit.

I2C Driver/Adapter

It is an open-source tool that is easy to use, typically used for controlling I2C devices. It is
compatible with all operating systems. It offers a built-in color screen that provides a live
dashboard of I2C activity: when the I2C Driver is connected to an I2C bus, it displays the
traffic on the screen. It can also help to debug and troubleshoot I2C issues.
I2C Arduino

I2C communication between two Arduino boards is also possible. It is used as a
short-distance communication interface and relies on a synchronized clock pulse. I2C on
Arduino is used when communicating with other sensors and devices that need to send
information to the master.

PCF 8574

It provides general-purpose remote I/O expansion over the two-wire bidirectional I2C bus.

What is a Serial Peripheral Interface (SPI)?

Serial Peripheral Interface (SPI) is a synchronous serial communication interface. In
embedded systems it is used for short-distance communication. It is a full-duplex protocol,
allowing simultaneous data exchange: transmission and reception at the same time. PIC,
ARM, and AVR controllers are some of the controllers that provide an SPI interface.

The master-slave architecture of SPI has a single master device, typically a microcontroller,
while the slaves are peripherals like a GSM modem, sensors, GPS, etc.

SPI uses four wires: MISO, MOSI, SS, and CLK. These wires carry the communication
between the master and slave devices; the master device both reads and writes the data. The
SPI bus allows multiple slaves to interface with one master device. SPI's major benefit is its
speed, so it is used where speed is crucial. Applications of the SPI protocol include SD cards,
display modules, etc.

SPI supports two communication modes: point-to-point and standard mode. In point-to-point
mode, a single master talks to a single slave, while in standard mode a single master
controller can communicate with multiple slave devices by enabling their chip select lines.

SPI Working Principle

Two methods define the SPI working:

 The first method selects each device using CS, the chip select line. Each device needs its
own unique chip select line.
 The second method is daisy chaining: each device is connected to the next, with the data
output of one feeding the data input of the next.
In principle an SPI master can connect to any number of devices, but the available chip select
lines limit this in practice. An SPI interface provides efficient, simple, point-to-point
communication without addressing operations, allowing full-duplex communication.

SPI Working Protocol

 SPI has four communication ports:
o MISO – master data input, slave data output.
o MOSI – master data output, slave data input.
o SCLK – clock signal generated by the master device.
o NSS/CS – slave-enable signal, controlled by the master device; also called the chip select
line.
 A single-slave system needs only one enable signal, while a multi-slave system needs
multiple enable signals.
 Internally, the SPI interface consists of two shift registers.
 Data is transmitted in 8-bit units.
 While the slave is enabled, data is transmitted bit by bit under the shift (clock) pulses, with
the high-order bits in front and the low-order bits at the back (MSB first).
 CPUs and peripheral devices communicate synchronously over the SPI interface; the
master device transmits data bit by bit under the shift pulses.
 SPI is a faster communication medium than I2C and offers data transfer speeds of a few
Mbps.
Pros and Cons of SPI

Pros

 Supports full-duplex communication.
 Simple and fast data transmission.
 Simple software implementation.
 No start and stop bits, allowing continuous data transmission without interruptions.
 MOSI and MISO allow simultaneous send and receive operations.
 Because the master supplies the clock, slaves don't require precise oscillators.

Cons

 Using many slave devices complicates the wiring.

 It has a single master device.
 The number of slave devices that can connect to the master is limited by the available chip-select lines.
 No error-check mechanism.
 No acknowledgment signal to confirm that data has been received.
SPI in Microcontrollers

SPI Driver/Adapter

It is one of the easiest tools for controlling SPI devices. It is compatible with all major operating systems. A live logic analyzer displays the SPI traffic on screen. The operating voltage of the SPI driver is 3.3 V–5 V.

MCP 3008

It is a 10-bit ADC with 8 channels. It connects to the Raspberry Pi over an SPI serial connection.

SPI Seeeduino V 4.2

A master Arduino and a slave Arduino can communicate using the SPI serial communication interface. The main aim is to communicate over a short distance at a higher speed.

Comparison between I2C and SPI Communication Protocols

Feature                      I2C Communication Protocol            SPI Communication Protocol
Number of wires              2 (SDA and SCL)                       4 (MOSI, MISO, SCK, and SS)
Communication type           Half-duplex                           Full-duplex
Maximum number of devices    Limited by addressing scheme          Limited by number of chip-select (SS) lines
Data transfer speed          Slower                                Faster
Error-handling feature       Improved due to ACK/NACK              Not as robust
Cost                         Cost-efficient due to fewer wires     More expensive due to additional wires
Complexity                   Simpler due to fewer wires            More complex due to additional wires
Multi-master configuration   Yes                                   No
Synchronous communication    Yes                                   Yes
Clock stretching             Yes                                   No
Arbitration                  Yes                                   No

One-Wire interface:
In embedded systems, the One-Wire (1-Wire) bus is a communication protocol and bus system that allows devices to communicate and receive power over a single wire. It was developed by Dallas Semiconductor, which is now part of Maxim Integrated. Here's an overview of the One-Wire bus system and its characteristics:

### Characteristics of One-Wire Bus:

1. **Single Wire Communication**: As the name suggests, the One-Wire bus requires only one
signal wire for communication. This significantly simplifies the wiring required for connecting
multiple devices in an embedded system.
2. **Power over Data Line**: One notable feature of the One-Wire protocol is that it can
provide power to connected devices over the same single wire used for communication. This is
achieved using a technique called "parasite power", where devices can draw power during
communication intervals.

3. **Master-Slave Architecture**: Similar to other communication protocols, the One-Wire bus


typically operates with a master-slave architecture. The master device initiates communication
and controls the timing of data transfers with one or more slave devices on the bus.

Basic 1wire bus communication interface

4. **Low-Speed, Low-Power Design**: One-Wire is designed for low-speed communication,


typically operating at rates up to 16.3 kbps (though higher speeds are possible with newer
implementations). It is also optimized for low-power consumption, making it suitable for battery-
powered and low-power embedded applications.

5. **CRC Error Detection**: To ensure reliable communication in potentially noisy

environments (such as long wires or electrically noisy surroundings), the One-Wire protocol
incorporates cyclic redundancy check (CRC) error detection. This helps in detecting errors in
data transmission so that corrupted transfers can be retried.

### Advantages of One-Wire Bus in Embedded Systems:

- **Simplicity**: Reduced wiring complexity due to the single wire requirement.

- **Cost-Effectiveness**: Lower component costs and reduced PCB footprint.

- **Flexibility**: Devices can be easily added or removed from the bus without complex
addressing schemes.

- **Reliability**: Error detection mechanisms ensure data integrity even in challenging


environments.

### Applications of One-Wire Bus:

- **Temperature Sensing**: One-Wire temperature sensors are widely used in embedded


systems for monitoring and control applications.

- **Identification and Authentication**: Devices such as RFID tags and memory devices use the
One-Wire protocol for identification and authentication purposes.

- **Data Logging**: One-Wire devices like EEPROMs can store and retrieve data from
embedded systems.

### Example Devices:


- **DS18B20 Temperature Sensor**: A popular One-Wire digital thermometer that provides
temperature readings with ±0.5°C accuracy over a range of -10°C to +85°C.

- **DS2401 Silicon Serial Number**: Provides a unique 64-bit serial number to identify each
device on the One-Wire bus.

### Implementation Considerations:

- **Timing Requirements**: Proper timing and synchronization between master and slave
devices are crucial for reliable communication.

- **Parasitic Powering**: Devices must support or manage parasitic powering if they are to be
powered solely from the One-Wire bus.

A 1-wire IC can extract operating power from a serial-data signal by means of an internal power-
supply circuit consisting of a diode and a capacitor. When the data line is logic high, some extra
current is used to charge the capacitor, and then the diode prevents the capacitor from
discharging when the data line is logic low.

1 Wire IC in parasitic powering

Parallel interface:
In embedded systems, a parallel interface refers to a method of communication where data is
transferred simultaneously over multiple wires (or lines) between devices. This contrasts with
serial communication, where data is transmitted sequentially over a single wire or pair of wires.
Here's an overview of parallel interfaces in embedded systems, including their characteristics,
applications, and considerations:

### Characteristics of Parallel Interfaces:

1. **Multiple Data Lines**: Parallel interfaces typically use a set of data lines (e.g., 8, 16, 32, or
more) to transfer data simultaneously. Each line carries a different bit of the data word being
transmitted.

2. **Synchronous Operation**: Data transfer in parallel interfaces is often synchronous,


meaning that data is transferred based on a clock signal that synchronizes the timing of data
transmission between the sender (transmitter) and receiver.

3. **Higher Data Rates**: Parallel interfaces can achieve higher data transfer rates compared to
serial interfaces because they transmit multiple bits of data in parallel. This makes them suitable
for applications requiring high-speed data communication.
4. **Wider Bus Width**: The number of data lines (bus width) determines the size of the data
word that can be transferred in one cycle. For example, an 8-bit parallel interface transfers 8 bits
of data simultaneously.

5. **Address and Control Lines**: In addition to data lines, parallel interfaces often include lines
for addressing (selecting devices on the bus) and control signals (e.g., read/write signals,
handshaking signals).
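The one-bit-per-line idea behind a parallel bus can be sketched in Python (purely illustrative): a word is split onto data lines D0..D7 in a single cycle on the transmitting side and reassembled from the sampled lines on the receiving side.

```python
def drive_lines(word, width=8):
    """Transmitter side: place each bit of the word on its own data
    line (D0 .. D7), all within the same clock cycle."""
    return [(word >> i) & 1 for i in range(width)]

def latch_lines(lines):
    """Receiver side: reassemble the word from the sampled data lines."""
    return sum(bit << i for i, bit in enumerate(lines))

lines = drive_lines(0xB7)   # one cycle moves the whole byte
word = latch_lines(lines)   # receiver recovers 0xB7
```

A serial link would need eight clock cycles to move the same byte, which is the basic reason parallel interfaces achieve higher throughput at a given clock rate.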

### Applications of Parallel Interfaces in Embedded Systems:

- **Memory Interfaces**: Parallel interfaces are commonly used for interfacing with memory
devices such as RAM (Random Access Memory) and ROM (Read-Only Memory) in embedded
systems. This allows for fast access and retrieval of data.

- **Display Interfaces**: Many embedded systems use parallel interfaces to drive LCD (Liquid
Crystal Display) panels or other types of graphical displays. This requires transferring a large
amount of data (pixel information) quickly to update the display.

- **Peripheral Interfaces**: Some peripherals, such as high-speed ADCs (Analog-to-Digital


Converters) and DACs (Digital-to-Analog Converters), may utilize parallel interfaces to transfer
data efficiently.

- **Communication with External Devices**: Parallel interfaces can be used to communicate


with external devices that require high-speed data transfer, such as image sensors, cameras, and
FPGA (Field-Programmable Gate Array) boards.

### Advantages of Parallel Interfaces:


- **High Speed**: Parallel interfaces can achieve faster data transfer rates compared to serial
interfaces, making them suitable for applications requiring real-time processing and high
bandwidth.

- **Simultaneous Transfer**: Data bits are transferred simultaneously, reducing latency and
improving overall system performance for tasks like data acquisition or video processing.

- **Direct Memory Access (DMA)**: Parallel interfaces often support DMA, allowing
peripherals to transfer data directly to and from memory without CPU intervention, further
enhancing system efficiency.

### Challenges and Considerations:

- **Complexity**: Designing and implementing parallel interfaces can be more complex


compared to serial interfaces, especially in terms of PCB layout, signal integrity, and timing
considerations.

- **Power Consumption**: Parallel interfaces may consume more power compared to serial
interfaces due to the larger number of data lines actively transferring data simultaneously.

- **Crosstalk and Signal Integrity**: Maintaining signal integrity is crucial in parallel interfaces
to prevent crosstalk and ensure reliable data transmission, especially at high speeds.

External Communication interfaces:

RS-232
In RS232, ‘RS’ stands for Recommended Standard. It defines the serial communication using
DTE and DCE signals. Here, DTE refers to Data Terminal Equipment and DCE refers to the
Data Communication Equipment. Example of DTE device is a computer and DCE is a modem.
Formally, it is specified as the interface between DTE equipment and DCE equipment using
serial binary data exchange.
Communication between DTE and DCE

The DTE (computer) transmits the information serially to the other end equipment DCE
(modem). In this case, DTE sends binary data “11011101” to DCE and DCE sends binary data
“11010101” to the DTE device.

RS232 describes the common voltage levels, electrical standards, operation mode and number of
bits to be transferred from DTE to DCE. This standard is used for transmission of information
exchange over the telephone lines.

RS232 male and female connector


How RS232 Communication Works?

The working of RS-232 can be understood from the protocol format. RS-232 is a point-to-point asynchronous communication protocol: each line carries data in a single direction, and separate transmit and receive lines allow two-way operation. No clock is required for synchronizing the transmitter and receiver. The data format begins with a start bit followed by 7 data bits, a parity bit and a stop bit, sent one after another.

Protocol Format

RS232 Framing

The transmission begins by sending a Start bit ‘0’. This is followed by 7 bits of ASCII data.
The parity bit is appended to this data for receiver-side validation; the data received must
match the data sent. Finally, the transmission is terminated with a stop bit, represented by
binary ‘1’. Generally, 1 or 2 stop bits can be sent.
In the above diagram, the ASCII character ‘A’ is sent using a serial binary stream of ‘1’s and
‘0’s. Between characters there may be a delay; this is considered inactive time, during which
the RS232 line idles at the negative logic state (-12V).
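The framing just described can be sketched in Python (illustrative only: 7 data bits, even parity, bits sent LSB first on the wire, as is conventional for RS-232):

```python
def rs232_frame(ch, data_bits=7, parity="even"):
    """Build the bit sequence for one RS-232 character:
    start bit (0), data bits LSB first, parity bit, stop bit (1)."""
    value = ord(ch)
    data = [(value >> i) & 1 for i in range(data_bits)]  # LSB goes first
    ones = sum(data)
    # Even parity: the parity bit makes the total count of 1s even.
    parity_bit = ones & 1 if parity == "even" else 1 - (ones & 1)
    return [0] + data + [parity_bit] + [1]

frame = rs232_frame("A")   # 'A' = 0b1000001
```

For ‘A’ this produces the 10-bit frame [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]: start bit, seven data bits LSB first, even-parity bit 0 (the data already contains two 1s), and the stop bit.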

Applications of RS-232 in Embedded Systems:

 Console Interfaces: Many embedded systems use RS-232 for console communication,
allowing developers to monitor and control the device using a terminal or computer.
 Peripheral Communication: RS-232 interfaces are used to connect various peripherals
such as modems, printers, barcode scanners, and serial devices to embedded systems.
 Programming and Debugging: RS-232 is often used for programming and debugging
embedded systems during development, enabling firmware updates and diagnostic
outputs.
 Industrial Automation: RS-232 interfaces are prevalent in industrial automation
systems for connecting PLCs (Programmable Logic Controllers), HMI (Human-Machine
Interface) devices, and sensors.

Advantages of RS-232:

 Widespread Compatibility: RS-232 has been a long-standing standard and is widely


supported by a variety of devices and equipment.
 Simple Implementation: Implementing RS-232 communication in embedded systems is
straightforward, requiring minimal hardware and software resources.
 Long Cable Lengths: RS-232 supports relatively long cable lengths (up to 50 feet or
more) without significant signal degradation, making it suitable for industrial and
commercial applications.

Challenges and Considerations:

 Voltage Levels: The wide range of voltage levels used in traditional RS-232
implementations can pose compatibility issues with modern devices that operate at lower
voltage levels.
 Signal Integrity: Maintaining signal integrity over long distances or in noisy
environments may require additional measures such as shielding and proper grounding.
 Speed Limitations: RS-232 has limitations in data transfer speed compared to more
modern serial communication standards like USB and Ethernet, typically operating up to
speeds of 115,200 bps (bits per second) in standard implementations.

RS485

RS-485 is a popular communication standard in embedded systems, particularly well-


suited for applications requiring robust, long-distance data transmission and noise
immunity. Here’s an overview of RS-485 in embedded systems, including its main features,
applications, advantages, and considerations:
RS-485 communication: main features

Despite the wide variety of modern alternative solutions, today RS-485 technology remains the
basis of many communication networks. The major advantages of RS-485 interface are:

 Two-way data exchange via one twisted pair of wires;


 support for several transceivers connected to the same line, i.e., the ability to create a
network;
 long length of the communication line;
 high transmission speed.

### Characteristics of RS-485:

1. **Differential Signaling**: RS-485 uses differential signaling: each signal is carried

over a pair of wires (commonly labelled A and B) driven with opposite polarity, and the
receiver reads the voltage difference between them. This differential nature provides better
noise immunity and allows for longer cable runs compared to the single-ended signaling
used in RS-232.
2. **Half-Duplex or Full-Duplex**: RS-485 supports both half-duplex (one-way at a time)
and full-duplex (simultaneous two-way) communication modes. In half-duplex mode,
devices take turns transmitting and receiving on the same pair of wires.

3. **Multi-Drop Configuration**: Multiple RS-485 devices can be connected to the same


bus, allowing communication between a master device and several slave devices in a multi-
drop configuration. Each device on the bus has a unique address to facilitate
communication.

4. **Speed and Distance**: RS-485 supports higher data rates than RS-232, typically
ranging from 100 kbps to 10 Mbps, depending on cable length and environment. It can
transmit data over distances up to 1.2 kilometers (4,000 feet) at lower speeds, making it
suitable for industrial and commercial applications.

5. **Common Mode Voltage Range**: RS-485 has a wide common mode voltage range,
typically from -7V to +12V, allowing it to tolerate ground potential differences and noise
levels commonly found in industrial environments.

### Applications of RS-485 in Embedded Systems:

- **Industrial Automation**: RS-485 is extensively used in industrial automation for


communication between PLCs (Programmable Logic Controllers), sensors, actuators, and
other control devices due to its robustness and noise immunity.

- **Building Automation**: It is used in building management systems for HVAC


(Heating, Ventilation, and Air Conditioning), lighting control, and access control systems.

- **Data Acquisition Systems**: RS-485 interfaces are used in data acquisition systems
where multiple sensors or measurement devices are connected to a central data acquisition
unit.
- **Telecommunication**: RS-485 is used in telecommunications equipment for
communication between network devices, such as routers, switches, and modems.

- **Traffic Control Systems**: It is employed in traffic signal control systems for


communication between traffic lights and central control units.

### Advantages of RS-485 in Embedded Systems:

- **Noise Immunity**: Differential signaling and balanced lines provide high noise
immunity, making RS-485 suitable for operation in electrically noisy environments.

- **Longer Cable Runs**: RS-485 supports longer cable runs compared to other serial
communication standards like RS-232, making it ideal for applications spread over large
areas.

- **Multi-Drop Capability**: Multiple devices can be connected to the same bus, reducing
wiring complexity and cost in systems with multiple sensors or control points.

- **High Data Rates**: RS-485 supports higher data rates than RS-232, enabling faster
communication speeds in applications requiring real-time data exchange.

### Considerations and Challenges:

- **Termination and Biasing**: Proper termination and biasing of the RS-485 bus are
critical to ensure signal integrity and reliable communication, especially over long cable
lengths.
- **Protocol Implementation**: While RS-485 defines the physical layer, protocols for data
framing, error checking, and addressing must be implemented at the application level.

- **Power Consumption**: RS-485 transceivers typically consume more power than RS-
232 transceivers, especially when driving long cables or operating at high speeds.

- **Compatibility**: Ensuring compatibility between RS-485 devices from different


manufacturers and adhering to the standard’s electrical specifications is important for
reliable operation.

Differences between RS232 and RS485

Feature                   RS232                                RS485
Protocol type             Full-duplex                          Half-duplex
Signal type               Unbalanced (single-ended)            Balanced (differential)
Number of devices         1 transmitter and 1 receiver         Up to 32 transmitters and 32 receivers
Maximum data transfer     19.2 kbps for 15 meters              10 Mbps for 15 meters
Maximum cable length      Approx. 15.25 meters at 19.2 kbps    Approx. 1220 meters at 100 kbps
Output current            500 mA                               250 mA
Minimum input voltage     +/- 3 V                              0.2 V differential

USB

General USB block diagram


The USB protocol, or Universal Serial Bus, was first developed and launched by Ajay V. Bhatt's team at Intel in 1996. USB replaced various kinds of serial & parallel ports for transferring data between a computer and different peripheral devices like scanners, printers, keyboards, gamepads, digital cameras, joysticks, etc. This article discusses an overview of the USB protocol, its working, and its applications.

What is USB Protocol?

A common interface that is used to allow communication between different peripheral devices
like mice, digital cameras, printers, keyboards, media devices, scanners, flash drives & external
hard drives as well as a host controller like a smartphone or PC is known as USB protocol.

A universal serial bus is intended to allow hot swapping & enhance plug-and-play. Plug-and-play allows the OS to discover and configure a new peripheral device spontaneously without restarting the computer, whereas hot swapping allows a peripheral device to be removed and replaced without rebooting.
There are different types of USB connectors available in the market where Type A and Type B
are the most frequently used ones. At present, older connectors are replaced by Mini-USB,
Micro-USB & USB-C cables.
Pin Configuration
The typical Type-A USB connector is used in various applications and includes the 4 pins given below. This type of USB is seen mostly connecting various devices to a PC because it is the standard four-pin USB connector, with the 4 pins arranged in a row within the connector shell.

Type-A USB Connector Pin Configuration

The pins of Type A USB are indicated with color wires to perform a particular function.

 Pin1 (VBUS): It is a red color wire, used for providing power supply.
 Pin2 (D-): It is a differential pair pin available in white color, used for connectivity of USB.
 Pin3 (D+): It is a differential pair pin available in green color, used for connectivity of USB.
 Pin4 (GND): It is a Ground pin, available in black color.
Among the above pins, D+ & D- together carry the data as a differential pair. When a ‘1’ is sent across the wires, the D+ line is driven high relative to D-, and when a ‘0’ is sent the polarity reverses.

USB Protocol Architecture


The architecture of the USB protocol is shown below. Once various I/O devices are connected
through USB to the computer then they all are structured like a tree. In this USB structure, every
I/O device will make a point-to-point connection to transmit data through the serial transmission
format.

In this architecture, I/O devices are connected to the computer through USB which is called as a
hub. The Hub within the architecture is the connecting point between both the I/O devices as
well as the computer. The root hub in this architecture is used to connect the whole structure to
the hosting computer. The I/O devices in this architecture are a keyboard, mouse, speaker,
camera, etc.

USB Protocol Architecture

How Does The USB Protocol Work?

The USB protocol works on the polling principle: the processor continuously checks whether the input/output device is ready to transmit data or not. Thus, the I/O devices do not have to notify the processor of their status, because checking continuously is the processor's responsibility. This makes USB low-cost & simple.

Whenever a new device is attached to the hub, it is initially addressed as ‘0’. At regular intervals, the host computer polls the hubs to obtain their status, which lets the host know when I/O devices are attached to or detached from the system.
Once the host becomes aware of the new device, it learns the device's capabilities by reading the data held in the device's USB-interface memory, so that the host can load a suitable driver to communicate with the device. After that, the host allocates an address to the new device, which is written to the device's register. In this way, USB provides plug-and-play features.
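The polling and address-assignment sequence described above can be sketched in Python. This is a toy model, not real host-controller code, and the class and device names are hypothetical: any device still answering at the default address 0 is treated as newly attached and given the next free address in the 1..127 range.

```python
class Device:
    def __init__(self, name):
        self.name = name
        self.address = 0          # a new device answers at default address 0

class Host:
    def __init__(self):
        self.next_address = 1
        self.devices = {}

    def poll(self, attached):
        """Scan the attached devices: any device still at address 0 is
        new, so assign it the next free address (7-bit, max 127)."""
        for dev in attached:
            if dev.address == 0 and self.next_address <= 127:
                dev.address = self.next_address
                self.devices[self.next_address] = dev
                self.next_address += 1

host = Host()
bus = [Device("keyboard"), Device("mouse")]
host.poll(bus)   # keyboard becomes address 1, mouse becomes address 2
```

Repeating `host.poll(...)` on every polling interval also detects later attachments, which is the essence of USB's plug-and-play behavior.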

This feature allows the host to identify a newly attached I/O device automatically once the device is connected. The I/O capabilities of the devices are determined by host software.

Another feature of the USB protocol is “hot-pluggable”, which means an I/O device can be connected to or removed from the host system without any shutdown or restart. So the system keeps running while the I/O device is attached or detached.

The USB protocol can also support isochronous traffic, where data is transmitted at a preset interval of time. Isochronous transfer guarantees delivery timing, which makes it well suited to streaming data such as audio and video.

To carry isochronous traffic, the root hub transmits a series of bits over the USB that marks the start of isochronous data, and the actual data is transmitted after this bit sequence.

USB Protocol Features


The features of USB include the following.
 The maximum speed of USB 2.0 is up to 480 Mbps.
 A USB connection can reach about 30 meters by chaining hubs, and up to five meters without a hub.
 USB is a plug & play interface.
 A device can draw power from the computer or through its own supply.
 Using a single USB host controller, up to 127 peripherals can be connected.
 A USB device operates at 5 V & can draw up to 500 mA.
 When a computer changes into power-saving mode, some types of USB devices automatically switch into sleep mode.
 A USB cable includes two wire pairs; one pair supplies power & the other carries the data.
 At 5 V, the computer can provide power up to 500 mA on the power wires.
 Low-power devices can draw their power from the USB directly.
 Two-way communication between the computer & peripheral devices is possible using USB.
USB Standards and Specifications
The specifications of USB will change based on USB standards that include the following.
USB supports three speeds: low speed (1.5 Mbps), full speed (12 Mbps) & high speed (480 Mbps).

USB 2.0 Standard

 It is a high-speed USB with 480 Mbps of maximum data transfer speed. This USB supports all connectors.
 The maximum length of the cable is 5 meters.
 Its max charging power is up to 15 W.
USB 3.2 Standard

 USB 3.2 (Generation 1) is a SuperSpeed USB with 5 Gbps of maximum data transfer speed.
 It supports different connectors like USB 3 USB-A, USB 3 USB-B & USB-C.
 The maximum length of cable for this USB is 3 meters.
 Its max charging power is up to 15 W.
USB 3.2 (Generation 2)

 USB 3.2 (Generation 2) is also a SuperSpeed USB with 10 Gbps of maximum data transfer speed.
 The maximum length of cable for this USB is 1 meter.
 It also supports different connectors like USB 3 USB-A, USB 3 USB-B & USB-C.
 Its max charging power is up to 100 W.
USB 3.2 Generation 2×2

 USB 3.2 Generation 2×2 is a SuperSpeed USB with 20 Gbps of maximum data transfer speed.
 The maximum length of cable for this USB is 1 meter.
 It uses the USB-C connector.
 Its max charging power is up to 100 W.
Thunderbolt 3 Standard

 This standard, called Thunderbolt, offers up to 40 Gbps of maximum data transfer speed.
 The maximum length of cable for this USB is 2 meters for active and 0.8 meters for passive cables.
 It uses the USB-C connector.
 Its max charging power is up to 100 W.
USB 4 Standard

 This USB, built on the Thunderbolt protocol, supports up to 40 Gbps of maximum data transfer speed.
 The maximum length of cable for this USB is 2 m for active & 0.8 m for passive cables.
 It uses the USB-C connector.
 Its max charging power is up to 100 W.
USB Protocol Timing Diagram

The timing diagram of the USB protocol is shown below which is mainly used in the engineering
field to explain the ON/OFF values of USB wires along a timeline.

In NRZI encoding, a ‘1’ is represented by no change in line level and a ‘0’ by a transition. As time grows you can observe the on/off progression. The system below uses Non-Return to Zero Invert (NRZI) encoding, which is an efficient method of transmitting data.
USB Timing Diagram

In the above diagram, bit stuffing is happening: after six consecutive logic 1s, an extra 0 is inserted to force a transition and allow synchronization. Without it, data containing long runs of 1s would produce no transitions and the USB receiver could not stay synchronized. The receiving hardware notices the additional bit & ignores it. Bit stuffing adds overhead to the USB but ensures consistent transfer.
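These two mechanisms can be sketched in Python (an illustrative model of the USB convention: a 0 is stuffed after six consecutive 1s to guarantee transitions, and NRZI encodes a 0 as a level change and a 1 as no change):

```python
def bit_stuff(bits):
    """Insert a 0 after every run of six consecutive 1s so the
    NRZI-encoded stream always contains transitions."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b else 0
        if run == 6:
            out.append(0)   # forced transition for receiver clock recovery
            run = 0
    return out

def nrzi_encode(bits, level=1):
    """NRZI: a 0 toggles the line level, a 1 leaves it unchanged."""
    out = []
    for b in bits:
        if b == 0:
            level ^= 1
        out.append(level)
    return out

stuffed = bit_stuff([1, 1, 1, 1, 1, 1, 1])   # seven 1s in a row
encoded = nrzi_encode(stuffed)
```

Seven raw 1s become [1, 1, 1, 1, 1, 1, 0, 1] after stuffing; the inserted 0 is what produces a line transition in the NRZI stream, and the receiver discards it after decoding.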

USB Data Format


In the USB protocol, the master device is known as the USB host, and it initiates all the communication that happens over the USB bus. A computer or other controller is usually the master device; the other devices only respond when the host requests information. The slave device, or peripheral, is connected to the host device and is programmed to provide the host with the information it requires to operate. In general, slave or peripheral devices include keyboards, computer mice, USB flash drives, cameras, etc.

It is essential for the host and its devices to communicate effectively with each other. Once a peripheral device is connected to the computer through USB, the computer detects what type of device it is & automatically loads a driver that permits the device to function.

The small units of data transmitted between the two devices are called ‘packets’, and a unit of digital information is transferred with every packet. The data transfers that can occur within the USB protocol are discussed below.

Message Format

The data of the USB protocol is transmitted within packets, LSB first. There are mainly four types of USB packets: Token, Data, Handshake & Start of Frame. Every packet is built from various field types, which are shown in the following message format diagram.

Message Format Diagram of USB


SYNC
In the USB protocol, every USB packet begins with a SYNC field, which is used to synchronize the transmitter & the receiver so that the data can be recovered precisely. In a low- or full-speed USB system, the SYNC field includes 3 KJ pairs followed by 2 K's to form 8 bits of data.

In a Hi-Speed USB system, synchronization needs 15 KJ pairs followed by 2 K's to form 32 bits of data. The field is therefore 8 bits long at low & full speed and 32 bits long at high speed, and it is used to synchronize the clocks of the transmitter & receiver. The final 2 bits indicate where the PID field begins.

Packet Identifier Field or PID


The packet identifier field within the USB protocol is mainly used to identify the type of packet being transmitted and thus the format of the packet's data. The field is 8 bits long: 4 bits identify the kind of packet & the other 4 bits are the bit-wise complement of those, serving as a check.
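The PID's built-in integrity check can be sketched in Python: the check nibble must be the bitwise complement of the type nibble, so the two nibbles XOR to 0b1111 (the example byte values below are illustrative, not specific spec PIDs):

```python
def pid_valid(pid_byte):
    """A PID byte is valid only if one nibble is the bitwise
    complement of the other, i.e. their XOR equals 0b1111."""
    return ((pid_byte >> 4) ^ (pid_byte & 0x0F)) == 0x0F

assert pid_valid(0b11100001)        # nibble 0x1 with check nibble 0xE
assert not pid_valid(0b11110001)    # corrupted check nibble: rejected
```

A receiver performs exactly this comparison before acting on a packet, so a single-bit error anywhere in the PID byte causes the packet to be ignored.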

Address Field
The address field of the USB protocol indicates which device the packet is designated for. Its 7-bit length allows support for 127 devices. Address zero is reserved: any device which has not yet been allocated an address must respond to packets sent to address zero.

Endpoint Field
The endpoint field within the USB protocol is 4 bits long & allows extra flexibility in addressing. Usually, endpoints are separated for data moving IN and OUT. Endpoint ‘0’ is a special case called the CONTROL endpoint, & each device includes an endpoint 0.

Data Field
The length of the data field is not fixed; it ranges from 0 to 8192 bits and is always an integral number of bytes.

CRC Field
Cyclic Redundancy Checks (CRC) are performed on the data in the packet payload: all token packets include a 5-bit CRC & data packets include a 16-bit CRC. The CRC-5 is five bits long & is used by token packets as well as the start-of-frame packet.

EOP Field
Every packet is terminated by an EOP (End of Packet) field, which consists of a single-ended zero (SE0) for 2 bit times followed by the J state for 1 bit time.

Synchronized Issues

When developing USB devices, developers commonly face many synchronization issues, which are also called USB communication errors. Some of these errors can cause system failures. The following are examples of issues that can happen on the USB bus:

 Improper packet data & data sequencing of USB.

 Transmission or retransmission failures of USB.
 Power or VBUS-based issues.
 Troubles during enumeration.
 High-speed negotiation problems.
Advantages
The advantages of USB include the following.
 Easy to use.
 For multiple devices, a single interface is used.
 Its size is compact.
 Its connector system is robust.
 These are not expensive.
 These are available in different sizes with different connectors.
 Auto configuration.
 Its expanding is easy.
 High speed.
 Reliable and low cost.
 Power consumption is low.
 Compatible and durable.
Disadvantages
The disadvantages of USB include the following.
 Some manufacturers design low-quality, low-cost USB devices.
 Its capacity is limited.
 Compared to some other systems, its data transfer is not as fast.
 USB does not provide a broadcast feature, so messages can only be communicated individually between the host & a peripheral.
Applications
The applications of USB protocol include the following.
 At present, most peripheral devices are connected to the system through USB, such as mice,
printers, scanners, joysticks, modems, webcams, keyboards, digital cameras, storage devices,
flight yokes, network adapters, and data acquisition devices in the scientific field.
 USB is mainly used in computers on hubs & host controllers
 USB Type-B is mostly used to connect compact devices such as mobile phones and USB
peripheral devices like printers.
 It is used most frequently on PCs, video game consoles & smartphones.

IEEE 1394 firewire bus


(Figure: IEEE 1394 FireWire cable and port)

 High-Speed Data Transfer: IEEE 1394 FireWire provides high-speed data transfer rates,
which is advantageous in embedded systems where large amounts of data need to be transferred
quickly. It supports data rates of 100, 200, 400, and even 800 Mbps, depending on the version.

 Hot-Plugging Capabilities: FireWire supports hot-plugging, meaning devices can be
connected and disconnected without powering down the system. This flexibility is crucial in
embedded systems where components may need to be replaced or upgraded without interrupting
operations.

 Peer-to-Peer Communication: FireWire allows peer-to-peer communication between
devices without needing a central controller, which can simplify the design of embedded systems
where direct device-to-device communication is required.

 Isochronous Data Transfer: FireWire supports isochronous data transfer, which is essential
for real-time applications in embedded systems such as audio and video streaming. This ensures
consistent timing and delivery of data packets, crucial for maintaining quality in multimedia
applications.

 Bus-Powered Devices: FireWire supports bus-powered devices, allowing certain embedded
devices to operate without an external power source, which can simplify the system design and
reduce overall power requirements.

 Embedded Operating Systems Support: Many embedded operating systems support
FireWire, making it easier to integrate into embedded applications. This includes real-time
operating systems (RTOS) that may require deterministic and reliable communication channels.

 Availability of Controllers: FireWire controllers and chipsets are available from various
vendors, making it feasible to integrate FireWire into custom embedded designs with off-the-
shelf components.

How it works
FireWire, as defined in IEEE 1394, uses 64-bit device addresses. FireWire cables use two
twisted-pair wires for data transmission and two wires for power.
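The 64-bit address space mentioned above is split, in IEEE 1394, into a 10-bit bus ID, a 6-bit physical (node) ID, and a 48-bit memory offset. A small sketch of packing and unpacking such an address — the helper names are illustrative, not a driver API:

```python
def pack_1394_address(bus_id, node_id, offset):
    """Pack the IEEE 1394 64-bit address: 10-bit bus ID (bits 63..54),
    6-bit node/physical ID (bits 53..48), 48-bit memory offset."""
    assert 0 <= bus_id < 1 << 10
    assert 0 <= node_id < 1 << 6
    assert 0 <= offset < 1 << 48
    return (bus_id << 54) | (node_id << 48) | offset

def unpack_1394_address(addr):
    """Inverse of pack_1394_address: recover the three fields."""
    return addr >> 54, (addr >> 48) & 0x3F, addr & ((1 << 48) - 1)
```

The 10/6 split is why a 1394 network can, in principle, address up to 1023 buses of 63 nodes each (the all-ones values are reserved for broadcast).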

FireWire includes two different serial interfaces:

 A backplane interface: Runs at speeds between 12.5 and 50 Mbps for bus connections
within a computer system.
 A point-to-point interface: Runs at speeds of 98.304 Mbps (S100 specification),
196.608 Mbps (S200), and 393.216 Mbps (S400) for connecting devices to computers
using serial cables.

The topology of a typical FireWire implementation can be complex, but it is typically a
hierarchical or tree topology consisting of various IEEE 1394 components. More complex
topologies, including several computers sharing portions of the peripheral network, are also
possible. The four types of components you can use in a FireWire implementation are:

 Devices: Typically have 3 ports but can have up to 27 ports and can be daisy-chained up
to 16 devices.
 Splitters: Provide extra IEEE 1394 ports if needed to accommodate the number and
arrangement of devices used.
 Repeaters: Overcome distance limitations in IEEE 1394 cables.
 Bridges: Isolate traffic within a specific portion of an IEEE 1394 bus.

FireWire connections have a maximum cable length of 4.5 meters per hop, but up to 16
components can be daisy-chained for a maximum total distance of 72 meters without using
repeaters.

Applications

1. Digital Cameras and Camcorders: Used for fast transfer of high-resolution photos and
videos to computers.
2. External Hard Drives: Provides quick data transfer speeds for storing and accessing
large files.
3. Audio Interfaces: Used in recording studios for high-quality audio input and output.
4. Industrial Automation: Connects sensors and controllers for real-time monitoring and
control.
5. Medical Imaging: Transfers high-resolution medical images swiftly between devices.
6. Aerospace Systems: Sends critical data and video feeds onboard aircraft.
7. Consumer Electronics: Previously used in DVRs and high-definition TVs for peripheral
connections.
8. Networking: Connects computers in a peer-to-peer or daisy-chain configuration.

1. TPA: Twisted Pair A
2. TPB: Twisted Pair B

These differential pairs carry signals between FireWire devices, ensuring reliable data
transmission over the bus. The TPA and TPB lines are part of the physical layer of the FireWire
protocol and are essential for maintaining signal integrity and minimizing electromagnetic
interference.

IrDA

Introduction:
IrDA (Infrared Data Association) is a type of personal area network standard that uses
infrared light for short-range data transfer.
IrDA Applications:
 Data transfer takes place between a laptop (computer) and a mobile phone when both come
into the vicinity and line of sight of the IR receivers and detectors in each of them.
 Sending a document from a notebook computer to a printer.
 Exchanging business cards between handheld PCs.
 It provides the flexibility to coordinate schedules and telephone books with desktop
and notebook computers.
 Point-and-shoot, peer-to-peer communication is a main characteristic of this protocol.
IrDA Protocol Layers:
The IrDA protocol stack includes the following layers:
 Application Layer
 Session Layer
 IrLMIAS
 IrTinyTP
 IrLMP
 Physical Layer

IrDA Protocol Layers

Application Layer:
 In the application layer, protocol security plays a vital role.
 Sync(PIM), Object Push(PIM) or Binary File Transfer are the functions provided by this
layer.
Session Layer:
IrOBEX, IrLAN, IrBus, IrMC, IrTran, and IrComm are present in this layer to perform
different tasks.
IrTinyTP:
 Segmentation and reassembly take place in this layer.
 It provides connection to IrLMP.
IrLMP:
 It multiplexes multiple applications data as well as exclusive link access.
 It provides an Ad-hoc connection between peers.
Physical Layer:
 This layer supports half-duplex (alternating-direction) access.
 It provides a range of about 1 m, or 10 cm for low-power LEDs.
 Different modes: Synchronous PPM, Synchronous Serial, Asynchronous Serial.
Session and Transport IrDA Protocol:
 IrLAN is used for infrared LAN access.
 IrBus is used for accessing the serial bus by joysticks, keyboards, mice, and game ports.
 IrMC provides mobile communication and telephony protocol support.
 IrTran is a transport protocol for image file transfers.
 IrComm is used for emulating a serial (e.g., RS-232 COM) or parallel port.
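Below the session and transport layers, IrDA's slow-infrared (SIR) mode frames IrLAP packets with an HDLC-style asynchronous wrapper: a begin-of-frame byte 0xC0, an end-of-frame byte 0xC1, and an escape byte 0x7D that XORs the following byte with 0x20 so payload bytes never collide with the markers. A minimal byte-stuffing sketch (illustrative only; the frame CRC is omitted):

```python
BOF, EOF, ESC = 0xC0, 0xC1, 0x7D   # SIR framing bytes

def sir_wrap(payload):
    """Byte-stuff a payload for the IrDA SIR async wrapper: bytes that
    collide with the framing markers are escaped and XORed with 0x20."""
    out = [BOF]
    for b in payload:
        if b in (BOF, EOF, ESC):
            out.extend((ESC, b ^ 0x20))   # escape the conflicting byte
        else:
            out.append(b)
    out.append(EOF)
    return bytes(out)
```

The receiver reverses the process: it discards the markers and XORs any byte following 0x7D with 0x20 to recover the original payload.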

Bluetooth

(Figure: Bluetooth in an embedded system)

Bluetooth technology is a wireless communication standard that enables short-range
communication between devices. It's widely used in various applications, including smartphones,
audio devices, medical devices, and IoT (Internet of Things) devices. In embedded systems,
Bluetooth is often implemented using specialized Bluetooth modules or integrated into
microcontrollers.

Here's a breakdown of the key technical aspects of a Bluetooth embedded system:

Bluetooth Protocol Stack:

1. Physical Layer (PHY):
o Bluetooth operates in the 2.4 GHz ISM (Industrial, Scientific, and Medical) band.
It uses frequency-hopping spread spectrum (FHSS) to avoid interference.
o The physical layer defines the modulation, data rate, and other aspects of radio
communication.
2. Link Layer:
o Manages the communication between devices, handling tasks like connection
establishment, data packet formatting, and error checking.
o Implements the Bluetooth Low Energy (BLE) or Classic Bluetooth protocol,
depending on the version and application requirements.
3. Logical Link Control and Adaptation Protocol (L2CAP):
o Sits above the link layer and provides multiplexing for higher-layer protocols.
o It can support multiple logical channels for various types of data, including
control information and user data.
4. Security Manager:
o Responsible for pairing and authentication between devices.
o Implements security features like encryption to protect data during transmission.
5. Attribute Protocol (ATT) and Generic Attribute Profile (GATT):
o These are used in Bluetooth Low Energy (BLE) applications.
o GATT defines a hierarchical data structure that organizes data into services,
characteristics, and descriptors.
o ATT is the protocol used to transfer attribute data between devices.
6. Generic Access Profile (GAP):
o Defines the roles and procedures for device discovery, connection establishment,
and link management.
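The GATT hierarchy described above can be pictured as nested data: a service contains characteristics, and each characteristic carries a value plus optional descriptors. The sketch below uses the Bluetooth SIG-assigned Heart Rate service UUIDs (0x180D service, 0x2A37 measurement characteristic, 0x2902 client configuration descriptor) for illustration; the dictionary layout itself is just an illustrative model, not a real stack API:

```python
# Illustrative model of a GATT server's attribute hierarchy.
heart_rate_service = {
    "uuid": "180D",                   # Heart Rate service (SIG-assigned)
    "characteristics": [
        {
            "uuid": "2A37",           # Heart Rate Measurement characteristic
            "value": b"\x00\x48",     # flags byte + heart rate of 72 bpm
            "descriptors": ["2902"],  # Client Characteristic Configuration
        }
    ],
}

def find_characteristic(service, uuid):
    """Walk a service and return the characteristic with the given UUID."""
    for ch in service["characteristics"]:
        if ch["uuid"] == uuid:
            return ch
    return None
```

A BLE client performs essentially this lookup over the air: it discovers services, then reads or subscribes to characteristics by UUID via the ATT protocol.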

Bluetooth Modes:

1. Classic Bluetooth:
o Used for data-intensive applications like audio streaming.
o Supports various profiles such as A2DP (Advanced Audio Distribution Profile)
for audio streaming and HFP (Hands-Free Profile) for hands-free communication.
2. Bluetooth Low Energy (BLE):
o Designed for low-power, short-range communication in applications like fitness
trackers, smartwatches, and IoT devices.
o Operates in connection-oriented and connectionless modes.

Embedded System Integration:

1. Bluetooth Modules:
o Many embedded systems use pre-built Bluetooth modules that encapsulate the
Bluetooth functionality.
o These modules often come with their own firmware, making integration into an
embedded system more straightforward.
2. Microcontroller Integration:
o Some microcontrollers have built-in Bluetooth capabilities.
o Bluetooth functionality may be implemented through software libraries and APIs
provided by the microcontroller manufacturer.
3. Software Stacks:
o Implementing Bluetooth requires a software stack that manages the protocol
layers.
o Stack implementations may be provided by Bluetooth SIG (Special Interest
Group) or customized for specific applications.
4. Power Management:
o Bluetooth Low Energy is designed to be power-efficient, allowing devices to
operate on battery power for extended periods.
5. Application Development:
o Application developers interface with the Bluetooth stack to enable specific
functionalities.
o APIs provided by the Bluetooth stack allow developers to interact with lower-
level Bluetooth features.

Challenges:

1. Interference:
o The 2.4 GHz band is shared with other wireless technologies, leading to potential
interference.
2. Security Concerns:
o Implementing robust security measures is crucial to prevent unauthorized access
and data interception.
3. Compatibility:
o Ensuring compatibility with various Bluetooth versions and profiles can be
challenging.

A Bluetooth embedded system involves the integration of hardware components, firmware, and
software stacks to enable wireless communication between devices. The choice of Bluetooth
version (Classic or BLE) depends on the specific requirements of the application, balancing
factors such as data throughput, power consumption, and range.
Piconet

A piconet is a network created by connecting multiple wireless devices using Bluetooth
technology. In a piconet, one device acts as the master, and this master can connect to up
to 7 slave devices.

Including the master, the number of devices that can be connected is limited to 8. Because
only a small number of devices are active at a time, channel bandwidth usage is limited.
A piconet is applicable for devices within a small area.

Given below is the diagram of piconet −

Scatternet

A scatternet is a network that connects multiple piconets using Bluetooth. A device that is
the master of one piconet can act as a slave in another piconet, bridging the two. A
scatternet can therefore support more than 8 devices.

Multiple devices can be active at once, so the channel bandwidth is used more effectively.

Because it is a connection of multiple piconets, it is applicable for devices spread over a
large area.

Given below is the diagram of scatternet −

Differences

The major differences between piconet and scatternet are as follows −


Piconet:
 A piconet is the type of connection formed between 2 or more Bluetooth-enabled devices.
 It supports a maximum of 8 nodes, i.e., 1 master and 7 slaves.
 It allows less efficient use of the Bluetooth channel bandwidth.
 It has a smaller coverage area.

Scatternet:
 A scatternet is a connection between 2 or more Bluetooth-enabled piconets; it is a type of
ad-hoc network consisting of 2 or more piconets.
 It supports more than 8 nodes.
 It allows more efficient use of the Bluetooth channel bandwidth.
 It has a larger coverage area.

The figure given below depicts the piconet and scatternet together −

Zigbee

Introduction of ZigBee




ZigBee is a low-rate wireless personal area network technology based on the IEEE 802.15.4
standard, which was developed by task group 4 of the IEEE 802.15 working group. It is a
home-networking technology and a technological standard created for control and sensing
networks. The upper layers of the protocol are defined by the Zigbee Alliance.
ZigBee is an open, global, packet-based protocol designed to provide an easy-to-use
architecture for secure, reliable, low power wireless networks. Flow or process control
equipment can be placed anywhere and still communicate with the rest of the system. It can also
be moved, since the network doesn’t care about the physical location of a sensor, pump or valve.
IEEE 802.15.4 defines the PHY and MAC layers, whereas ZigBee takes care of the upper
layers.
ZigBee is a standard that addresses the need for very low-cost implementation of Low power
devices with Low data rates for short-range wireless communications.
IEEE 802.15.4 supports star and peer-to-peer topologies. The ZigBee specification supports star
and two kinds of peer-to-peer topologies, mesh and cluster tree. ZigBee-compliant devices are
sometimes specified as supporting point-to-point and point-to-multipoint topologies.

Types of ZigBee Devices:

 Zigbee Coordinator Device: It communicates with routers. This device is used for
connecting the devices.
 Zigbee Router: It is used for passing the data between devices.
 Zigbee End Device: It is the device that is going to be controlled.
General Characteristics of Zigbee Standard:

 Low Power Consumption


 Low Data Rate (20- 250 kbps)
 Short-Range (75-100 meters)
 Network Join Time (~ 30 msec)
 Support Small and Large Networks (up to 65000 devices (Theory); 240 devices (Practically))
 Low Cost of Products and Cheap Implementation (Open Source Protocol)
 Extremely low-duty cycle.
 3 frequency bands with 27 channels.
Operating Frequency Bands (Only one channel will be selected for use in a network):
1. Channel 0: 868 MHz (Europe)
2. Channel 1-10: 915 MHz (the US and Australia)
3. Channel 11-26: 2.4 GHz (Across the World)
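The channel-to-frequency mapping follows directly from IEEE 802.15.4: channel 0 sits at 868.3 MHz, channels 1–10 are spaced 2 MHz apart starting at 906 MHz, and channels 11–26 are spaced 5 MHz apart starting at 2405 MHz. A small helper to make the band plan concrete:

```python
def channel_center_mhz(k):
    """Center frequency (MHz) of an IEEE 802.15.4 channel, per the
    band plan used by Zigbee."""
    if k == 0:
        return 868.3                    # single European channel
    if 1 <= k <= 10:
        return 906.0 + 2.0 * (k - 1)    # 915 MHz band, 2 MHz spacing
    if 11 <= k <= 26:
        return 2405.0 + 5.0 * (k - 11)  # 2.4 GHz band, 5 MHz spacing
    raise ValueError("invalid 802.15.4 channel number")
```

For example, the 2.4 GHz band runs from 2405 MHz (channel 11) up to 2480 MHz (channel 26), which is why Zigbee and Wi-Fi deployments must plan around each other in that band.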

Features of Zigbee:

1. Stochastic addressing: A device is assigned a random address, which is then announced. A
mechanism exists for address conflict resolution. Parent nodes do not need to maintain an
assigned-address table.
2. Link Management: Each node maintains quality of links to neighbors. Link quality is used as
link cost in routing.
3. Frequency Agility: Nodes experiencing interference report to the channel manager, which
then selects another channel.
4. Asymmetric Link: Each node has different transmit power and sensitivity. Paths may be
asymmetric.
5. Power Management: Routers and Coordinators use main power. End Devices use batteries.
Advantages of Zigbee:
1. Designed for low power consumption.
2. Provides network security and application support services operating on the top of IEEE.
3. Zigbee makes completely networked homes possible, where all devices are able to
communicate and be controlled from a single unit.
4. Used in smart homes.
5. Easy implementation
6. Adequate security features.
7. Low cost: Zigbee chips and modules are relatively inexpensive, which makes it a cost-
effective solution for IoT applications.
8. Mesh networking: Zigbee uses a mesh network topology, which allows for devices to
communicate with each other without the need for a central hub or router. This makes it ideal
for use in smart home applications where devices need to communicate with each other and
with a central control hub.
9. Reliability: Zigbee protocol is designed to be highly reliable, with robust mechanisms in
place to ensure that data is delivered reliably even in adverse conditions.
Disadvantages of Zigbee :
1. Limited range: Zigbee has a relatively short range compared to other wireless
communications protocols, which can make it less suitable for certain types of applications or
for use in large buildings.
2. Limited data rate: Zigbee is designed for low-data-rate applications, which can make it less
suitable for applications that require high-speed data transfer.
3. Interoperability: Zigbee is not as widely adopted as other IoT protocols, which can make it
difficult to find devices that are compatible with each other.
4. Security: Zigbee’s security features are not as robust as other IoT protocols, making it more
vulnerable to hacking and other security threats.

Zigbee Network Topologies:

 Star Topology (ZigBee Smart Energy): Consists of a coordinator and several end devices,
end devices communicate only with the coordinator.
 Mesh Topology (Self Healing Process): Mesh topology consists of one coordinator, several
routers, and end devices.
 Tree Topology: In this topology, the network consists of a central node (the coordinator),
several routers, and end devices. The function of the routers is to extend the network
coverage.

Architecture of Zigbee:

Zigbee architecture is a combination of 6 layers.


1. Application Layer
2. Application Interface Layer
3. Security Layer
4. Network Layer
5. Medium Access Control Layer
6. Physical Layer
 Physical layer: The lowest two layers i.e the physical and the MAC (Medium Access
Control) Layer are defined by the IEEE 802.15.4 specifications. The Physical layer is closest
to the hardware and directly controls and communicates with the Zigbee radio. The physical
layer translates the data packets in the over-the-air bits for transmission and vice-versa during
the reception.
 Medium Access Control layer (MAC layer): The layer is responsible for the interface
between the physical and network layer. The MAC layer is also responsible for providing
PAN ID and also network discovery through beacon requests.
 Network layer: This layer acts as an interface between the MAC layer and the application
layer. It is responsible for mesh networking.
 Application layer: The application layer in the Zigbee stack is the highest protocol layer and
it consists of the application support sub-layer and Zigbee device object. It contains
manufacturer-defined applications.
Channel Access:
1. Contention Based Method (Carrier-Sense Multiple Access With Collision Avoidance
Mechanism)
2. Contention Free Method (Coordinator dedicates a specific time slot to each device
(Guaranteed Time Slot (GTS)))

Zigbee Applications:

1. Home Automation
2. Medical Data Collection
3. Industrial Control Systems
4. Meter reading systems
5. Light control systems
6. Commercial
7. Government Markets Worldwide
8. Home Networking

Unit-3

Software development Tools:

Software development tools encompass a wide range of software applications and utilities
designed to aid developers in creating, maintaining, and debugging software. These tools serve
different purposes throughout the software development lifecycle, from initial design and coding
to testing, deployment, and maintenance. Here's an overview of the categories and types of
software development tools commonly used:

1. **Integrated Development Environment (IDE)**:

- An IDE is a software application that provides comprehensive facilities to programmers


for software development. It typically includes a source code editor with features like
syntax highlighting, code completion, and refactoring tools. IDEs also integrate compilers,
debuggers, build automation tools, and version control systems into a unified user
interface, streamlining the development process. They support multiple programming
languages and frameworks, enabling developers to write, test, debug, and deploy code
efficiently within a single environment.

2. **Assembler**:

- An assembler is a software tool that converts assembly language code into machine code
or object code. Assembly language consists of mnemonic instructions that are specific to a
particular processor architecture. The assembler translates these human-readable
instructions into binary machine code that can be directly executed by the computer's
CPU. It handles tasks such as assigning memory addresses, managing labels and symbols,
and generating executable code from assembly source files.
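The label and address management described above can be made concrete with a toy two-pass assembler. The mnemonics and 16-bit encoding here are invented purely for illustration — they do not correspond to any real processor:

```python
# Hypothetical instruction set: opcode byte + 8-bit operand.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "JMP": 0x03, "HALT": 0xFF}

def assemble(lines):
    """Two passes: (1) assign an address to every label,
    (2) emit 16-bit words with the opcode in the high byte."""
    labels, addr = {}, 0
    for line in lines:                     # pass 1: collect label addresses
        if line.endswith(":"):
            labels[line[:-1]] = addr
        else:
            addr += 1                      # one word per instruction
    words = []
    for line in lines:                     # pass 2: translate mnemonics
        if line.endswith(":"):
            continue
        mnemonic, *operand = line.split()
        arg = 0
        if operand:
            # operand is either a known label or a literal number
            arg = labels.get(operand[0])
            if arg is None:
                arg = int(operand[0])
        words.append((OPCODES[mnemonic] << 8) | arg)
    return words
```

The two-pass structure is the key idea: a forward jump can reference a label whose address is only known once pass 1 has scanned the whole file.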

3. **Compiler**:

- A compiler is a software tool that translates high-level programming languages (such as


C, C++, Java) into machine code or intermediate code. The compilation process involves
several stages: lexical analysis, syntax analysis, semantic analysis, optimization, and code
generation. The compiler checks for syntax errors, type mismatches, and other language-
specific rules, optimizing the code for efficiency and generating executable binaries or
bytecode that can be executed by a computer's processor or a virtual machine.

4. **Linker**:

- A linker is a utility that combines object code generated by a compiler with libraries,
modules, and runtime components to produce an executable program or shared library.
During compilation, source code is translated into object code, which contains references to
external functions and variables. The linker resolves these references, ensuring that all
necessary components are linked together to create a coherent executable file. It also
performs address binding, symbol resolution, and generates the final machine code ready
for execution.
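The symbol-resolution and relocation steps can be illustrated with toy object "modules" — a dict per module holding code words, defined symbols, and unresolved references. This invented format is far simpler than a real object format like ELF, but the merge/resolve/patch flow is the same:

```python
def link(objects):
    """Toy linker: concatenate modules, compute each symbol's final
    address, then patch every unresolved reference (relocation)."""
    symbols, image, fixups = {}, [], []
    for obj in objects:
        base = len(image)                      # load address of this module
        for name, off in obj["defs"].items():
            symbols[name] = base + off         # build the symbol table
        for off, name in obj["refs"]:
            fixups.append((base + off, name))  # remember holes to patch
        image.extend(obj["code"])
    for off, name in fixups:
        image[off] = symbols[name]             # resolve and relocate
    return image
```

Here the word at offset 1 of the first module is a placeholder for `helper`, which the linker fills with `helper`'s final address once the second module's load address is known.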

5. **Simulator**:

- A simulator is a software tool that models the behavior of hardware systems or software
components without using the actual physical hardware. It allows developers to test and
debug applications in a controlled environment, simulating different scenarios and inputs.
Simulators are commonly used in embedded systems development, where testing on real
hardware may be impractical or costly. They provide insights into system performance,
timing behavior, and interaction with peripherals, helping developers identify and fix
issues before deployment.

6. **Debugger**:

- A debugger is a software tool that allows developers to monitor, control, and analyze the
execution of a program. It helps identify and resolve bugs (errors) by providing features
such as breakpoints, stepping through code, inspecting variables and memory, and
evaluating expressions. Debuggers enable developers to track the flow of program
execution, understand runtime behavior, and diagnose issues effectively. They are essential
for software development, ensuring code correctness and optimizing performance.

7. **In-Circuit Emulator (ICE)**:

- An In-Circuit Emulator is a hardware device used for debugging embedded systems


and microcontroller-based applications. It connects directly to the target hardware,
allowing developers to monitor and control the execution of code in real-time. ICEs provide
visibility into the system's operation at the hardware level, enabling debugging of low-level
issues such as timing constraints, interrupt handling, and register contents. They are
indispensable tools for embedded software development, ensuring reliability and
performance on actual hardware platforms.

8. **Target Hardware Debugging**:

- Target hardware debugging refers to debugging software directly on the physical


hardware platform where it will run. It involves connecting a debugger to the target
hardware, monitoring its behavior, and diagnosing issues that may arise due to hardware-
specific interactions or environmental factors. Target hardware debugging ensures that
software behaves correctly in its intended deployment environment, addressing issues that
may not be apparent in simulation or emulation. It is critical for validating system
integration, optimizing performance, and ensuring reliability in real-world applications.

These software development tools collectively support different stages of the software
development lifecycle, from writing and testing code to debugging and deploying
applications. They empower developers to create robust, efficient, and reliable software
solutions by providing the necessary tools and environments for development, testing, and
optimization. Each tool plays a crucial role in ensuring code quality, performance, and
compatibility across various platforms and hardware configurations.

The need for hardware-software partitioning and co-design can be summarized as follows:

### Hardware-Software Partitioning:

1. **Performance Optimization**: Assigning tasks to hardware (FPGA, ASIC) or software


(CPU) based on their computational intensity and timing requirements optimizes overall
system performance.

2. **Resource Efficiency**: Efficient use of hardware resources like power and processing
capability by offloading compute-intensive tasks to dedicated hardware accelerators.
3. **Real-Time Constraints**: Ensuring timely and predictable execution of critical tasks
by implementing them in hardware, avoiding software overheads.

4. **Security and Safety**: Enhancing security by isolating critical functions in hardware,


protecting against software vulnerabilities and unauthorized access.

### Hardware-Software Co-Design:

1. **Optimized System Architecture**: Designing integrated hardware and software


architectures that maximize performance, minimize power consumption, and reduce costs.

2. **Early Validation and Debugging**: Simultaneous development and testing of


hardware and software components to identify and resolve integration issues early in the
design process.

3. **Design Space Exploration**: Evaluating different partitioning strategies and design


alternatives to achieve the best balance between hardware and software implementations.

4. **Flexibility and Scalability**: Supporting flexible system architectures that can adapt
to changing requirements with minimal redesign, ensuring long-term scalability and
adaptability.

5. **Domain-Specific Optimization**: Tailoring hardware and software solutions to meet


the specific needs and constraints of different applications or environments, optimizing
efficiency and performance.

These practices are crucial for developing efficient, reliable, and cost-effective solutions
across a wide range of applications, from embedded systems to high-performance
computing and IoT devices.
Unified Modeling Language (UML) Diagrams



Unified Modeling Language (UML) is a general-purpose modeling language. The main aim of
UML is to define a standard way to visualize the way a system has been designed. It is quite
similar to blueprints used in other fields of engineering. UML is not a programming language,
it is rather a visual language.

1. What is UML?
Unified Modeling Language (UML) is a standardized visual modeling language used in the
field of software engineering to provide a general-purpose, developmental, and intuitive way to
visualize the design of a system. UML helps in specifying, visualizing, constructing, and
documenting the artifacts of software systems.
 We use UML diagrams to portray the behavior and structure of a system.
 UML helps software engineers, businessmen, and system architects with modeling, design,
and analysis.
 The Object Management Group (OMG) adopted Unified Modelling Language as a standard
in 1997. It’s been managed by OMG ever since.
 The International Organization for Standardization (ISO) published UML as an approved
standard in 2005. UML has been revised over the years and is reviewed periodically.
2. Why do we need UML?
 Complex applications need collaboration and planning from multiple teams and hence
require a clear and concise way to communicate amongst them.
 Businessmen do not understand code. So UML becomes essential to communicate with non-
programmers about essential requirements, functionalities, and processes of the system.
 A lot of time is saved down the line when teams can visualize processes, user interactions,
and the static structure of the system.
Unified Modeling Language (UML) has a broad scope and is widely used in software
engineering for modeling and designing software systems. Here are key aspects of the scope
of UML modeling:

1. **Visual Modeling Language**: UML provides a standardized and visual language for
expressing and communicating the design of software systems. It uses diagrams to depict
different aspects of a system, such as structure, behavior, interactions, and architecture.

2. **System Analysis and Design**: UML supports both system analysis and design phases
of software development. During analysis, UML diagrams help capture requirements, define
use cases, and model business processes. In the design phase, UML diagrams facilitate the
specification of system structure, components, and their interactions.

3. **Modeling Software Architecture**: UML allows architects to model the architecture of


software systems using various diagrams such as class diagrams, component diagrams, and
deployment diagrams. These diagrams help visualize the structural relationships between
components, their organization, and deployment across hardware platforms.

4. **Behavioral Modeling**: UML supports behavioral modeling by describing how


components interact and behave over time. Diagrams like sequence diagrams, state machine
diagrams, and activity diagrams depict the flow of control, object interactions, and state
transitions within a system.

5. **Design Patterns and Reuse**: UML enables the representation and application of
design patterns, which are proven solutions to common design problems. Design patterns
can be expressed in UML diagrams, making it easier to reuse successful design strategies
across projects and domains.

6. **Communication and Collaboration**: UML diagrams serve as a communication tool


among stakeholders, including developers, designers, architects, and clients. They provide a
clear and concise way to convey system requirements, design decisions, and architectural
concepts, fostering collaboration and consensus among team members.

7. **Tool Integration**: UML models can be integrated with various software development
tools and environments, including IDEs (Integrated Development Environments), CASE
(Computer-Aided Software Engineering) tools, and version control systems. This
integration supports automated code generation, model validation, and synchronization
between models and implementation.

8. **Support for Agile and Iterative Development**: UML is adaptable to agile and
iterative development methodologies by allowing incremental refinement of models based
on evolving requirements and feedback. It supports iterative modeling and design, ensuring
that models remain aligned with changing project needs.

9. **Industry Standard**: UML is an industry-standard modeling language adopted by


organizations and communities worldwide. Its standardized notation and semantics promote
consistency and interoperability across different software development projects and teams.

In conclusion, the scope of UML modeling encompasses a comprehensive set of techniques


and diagrams for capturing, communicating, and refining the design of software systems
throughout the development lifecycle. Its flexibility, standardization, and visual
representation make it a powerful tool for software engineers, architects, and stakeholders
involved in software development projects of varying scale and complexity.

Conceptual model of UML

UML is a standard visual language for describing and modelling software blueprints. The UML is
more than just a graphical language. Stated formally, the UML is for visualizing, specifying,
constructing, and documenting the artifacts of a software-intensive system (particularly
systems built using the object-oriented style).
Three Aspects of UML:
Figure – Three Aspects of UML
Note – Language, Model, and Unified are the important aspects of UML, as described in the figure above.
1. Language:
 It enables us to communicate about a subject which includes the requirements and the
system.
 It is difficult to communicate and collaborate for a team to successfully develop a system
without a language.
2. Model:
 It is a representation of a subject.
 It captures a set of ideas (known as abstractions) about its subject.
3. Unified:
 It is to bring together the information systems and technology industry’s best engineering
practices.
 These practices involve applying techniques that allow us to successfully develop systems.
A Conceptual Model:
A conceptual model of the language underlines the three major elements:
• The Building Blocks
• The Rules
• Some Common Mechanisms

Once you understand these elements, you will be able to read and recognize the models as well
as create some of them.
Figure – A Conceptual Model of the UML
Building Blocks:
The vocabulary of the UML encompasses three kinds of building blocks:
1. Things: Things are the abstractions that are first-class citizens in a model; relationships tie
these things together; diagrams group interesting collections of things. There are 4 kinds of
things in the UML:
1. Structural things
2. Behavioral things
3. Grouping things
4. Annotational things
These things are the basic object-oriented building blocks of the UML. You use them to
write well-formed models.
2. Relationships: There are 4 kinds of relationships in the UML:
1. Dependency
2. Association
3. Generalization
4. Realization
These relationships are the basic relational building blocks of the UML.
3. Diagrams: A diagram is the graphical presentation of a set of elements, rendered as a
connected graph of vertices (things) and arcs (relationships). The UML includes nine kinds of diagrams:
1. Class diagram
2. Object diagram
3. Use case diagram
4. Sequence diagram
5. Collaboration diagram
6. Statechart diagram
7. Activity diagram
8. Component diagram
9. Deployment diagram
Rules:
The UML has a number of rules that specify what a well-formed model should look like. A
well-formed model is one that is semantically self-consistent and in harmony with all its
related models. The UML has semantic rules for:
1. Names – What you can call things, relationships, and diagrams.
2. Scope – The context that gives specific meaning to a name.
3. Visibility – How those names can be seen and used by others.
4. Integrity – How things properly and consistently relate to one another.
5. Execution – What it means to run or simulate a dynamic model.
Common Mechanisms:
The UML is made simpler by the four common mechanisms. They are as follows:
1. Specifications
2. Adornments
3. Common divisions
4. Extensibility mechanisms

UML- Architecture

Software architecture is all about how a software system is built at its highest level. It is needed
to think big from multiple perspectives with quality and design in mind. The software team is
tied to many practical concerns, such as:

o The structure of the development team.


o The needs of the business.
o Development cycle.
o The intent of the structure itself.

Software architecture provides a basic design of a complete software system. It defines the
elements included in the system, the functions each element has, and how each element relates to
one another. In short, it is a big picture or overall structure of the whole system, how everything
works together.

To form an architecture, the software architect will take several factors into consideration:

o What will the system be used for?


o Who will be using the system?
o What quality matters to them?
o Where will the system run?
The architect plans the structure of the system to meet the needs like these. It is essential to have
proper software architecture, mainly for a large software system. Having a clear design of a
complete system as a starting point provides a solid basis for developers to follow.

Each developer will know what needs to be implemented and how things relate to meet the
desired needs efficiently. One of the main advantages of software architecture is that it provides
high productivity to the software team. The software development becomes more effective as it
comes up with an explained structure in place to coordinate work, implement individual features,
or ground discussions on potential issues. With a lucid architecture, it is easier to know where the
key responsibilities reside in the system and where to make changes to add new
requirements or simply fix failures.

In addition, a clear architecture will help to achieve quality in the software with a well-designed
structure using principles like separation of concerns; the system becomes easier to maintain,
reuse, and adapt. The software architecture is useful to people such as software developers, the
project manager, the client, and the end-user. Each one will have different perspectives to view
the system and will bring different agendas to a project. Also, it provides a collection of several
views. It can be best understood as a collection of five views:

1. Use case view


2. Design view
3. Implementation view
4. Process view
5. Deployment view
Use case view

o It is a view that shows the functionality of the system as perceived by external actors.
o It reveals the requirements of the system.
o With UML, it is easy to capture the static aspects of this view in use case diagrams,
whereas its dynamic aspects are captured in interaction diagrams, state chart diagrams,
and activity diagrams.

Design View

o It is a view that shows how the functionality is designed inside the system in terms of
static structure and dynamic behavior.
o It captures the vocabulary of the problem space and solution space.
o With UML, it represents the static aspects of this view in class and object diagrams,
whereas its dynamic aspects are captured in interaction diagrams, state chart diagrams,
and activity diagrams.

Implementation View

o It is the view that represents the organization of the core components and files.
o It primarily addresses the configuration management of the system's releases.
o With UML, its static aspects are expressed in component diagrams, and the dynamic
aspects are captured in interaction diagrams, state chart diagrams, and activity diagrams.

Process View

o It is the view that demonstrates the concurrency of the system.


o It incorporates the threads and processes that make up the concurrent system and its
synchronization mechanisms.
o It primarily addresses the system's scalability, throughput, and performance.
o Its static and dynamic aspects are expressed the same way as the design view but focus
more on the active classes that represent these threads and processes.

Deployment View

o It is the view that shows the deployment of the system in terms of physical architecture.
o It includes the nodes, which form the system hardware topology where the system will be
executed.
o It primarily addresses the distribution, delivery, and installation of the parts that build the
physical system.
3. Different Types of UML Diagrams
UML is linked with object-oriented design and analysis. UML makes use of elements and
forms associations between them to form diagrams. Diagrams in UML can be broadly
classified as:

4. Structural UML Diagrams


4.1. Class Diagram
The most widely used UML diagram is the class diagram. It is the building block of all
object-oriented software systems. We use class diagrams to depict the static structure of a
system by showing its classes, their methods, and attributes. Class diagrams also help us
identify relationships between different classes or objects.
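To make the mapping between a class diagram and code concrete, here is a hedged sketch in Python: the `Customer` and `Order` classes and their one-to-many association are hypothetical names chosen for illustration, not part of any standard.

```python
# Hypothetical classes illustrating what a small class diagram captures:
# each class box lists attributes and methods, and an association line
# connects Customer "1" to "*" Order.

class Customer:
    """Class box: a name attribute plus a place_order() operation."""
    def __init__(self, name: str):
        self.name = name          # attribute compartment
        self.orders = []          # one-to-many association end

    def place_order(self, item: str) -> "Order":
        order = Order(item, customer=self)
        self.orders.append(order)
        return order

class Order:
    """Class box: an item attribute and a back-reference to its Customer."""
    def __init__(self, item: str, customer: Customer):
        self.item = item
        self.customer = customer  # navigable association end

alice = Customer("Alice")
alice.place_order("keyboard")
print(len(alice.orders))          # -> 1
```

A class diagram would show only the two boxes and the association line; the bodies of the methods are implementation detail the diagram deliberately omits.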
4.2. Composite Structure Diagram
We use composite structure diagrams to represent the internal structure of a class and its
interaction points with other parts of the system.
 A composite structure diagram represents the relationship between parts and their
configuration, which determines how the classifier (a class, a component, or a deployment node) behaves.
 They represent the internal structure of a structured classifier using parts, ports, and
connectors.
 We can also model collaborations using composite structure diagrams.
 They are similar to class diagrams except they represent individual parts in detail as
compared to the entire class.
4.3. Object Diagram
An Object Diagram can be referred to as a snapshot of the instances in a system and the
relationships that exist between them. Since object diagrams depict the system when objects
have been instantiated, we are able to study its behaviour at a particular instant.
 An object diagram is similar to a class diagram except that it shows instances of classes in
the system.
 We depict actual classifiers and their relationships using class diagrams.
 An Object Diagram, on the other hand, represents specific instances of classes and the
relationships between them at a point in time.
4.4. Component Diagram
Component diagrams are used to represent how the physical components in a system have been
organized. We use them for modelling implementation details.
 Component Diagrams depict the structural relationship between software system elements
and help us in understanding if functional requirements have been covered by planned
development.
 Component Diagrams become essential to use when we design and build complex systems.
 Interfaces are used by components of the system to communicate with each other.
4.5. Deployment Diagram
Deployment Diagrams are used to represent the system hardware and its software. They tell us
what hardware components exist and what software components run on them.
 We illustrate system architecture as distribution of software artifacts over distributed
targets.
 An artifact is a physical piece of information that is used or produced by the software development process, such as an executable file.
 They are primarily used when a software is being used, distributed or deployed over
multiple machines with different configurations.
4.6. Package Diagram
We use Package Diagrams to depict how packages and their elements have been organized. A
package diagram simply shows us the dependencies between different packages and internal
composition of packages.
 Packages help us to organise UML diagrams into meaningful groups and make the diagram
easy to understand.
 They are primarily used to organise class and use case diagrams.
5. Behavioral UML Diagrams
5.1. State Machine Diagrams
A state diagram is used to represent the condition of the system or part of the system at finite
instances of time. It’s a behavioral diagram and it represents the behavior using finite state
transitions.
 State diagrams are also referred to as State machines and State-chart Diagrams
 These terms are often used interchangeably. So simply, a state diagram is used to model the
dynamic behavior of a class in response to time and changing external stimuli.
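Since a state machine diagram models states, events, and transitions, the idea can be sketched in a few lines of Python. The `Door` class, its state names, and its events below are hypothetical examples, not drawn from the text.

```python
# Minimal sketch of what a state machine diagram models: a set of states,
# the events a class responds to, and the transition arcs between states.

TRANSITIONS = {
    ("Closed", "open"):   "Open",
    ("Open",   "close"):  "Closed",
    ("Closed", "lock"):   "Locked",
    ("Locked", "unlock"): "Closed",
}

class Door:
    def __init__(self):
        self.state = "Closed"     # initial state (filled circle in the diagram)

    def handle(self, event: str) -> str:
        # Follow a transition only if (current state, event) is a defined arc;
        # otherwise the stimulus is ignored and the state is unchanged.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

d = Door()
print(d.handle("lock"))   # Closed --lock--> Locked, prints "Locked"
print(d.handle("open"))   # no arc from Locked on "open": stays "Locked"
```

The transition table plays the role of the diagram's arcs: each key is a (state, event) pair and each value is the target state.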
5.2. Activity Diagrams
We use Activity Diagrams to illustrate the flow of control in a system. We can also use an
activity diagram to refer to the steps involved in the execution of a use case.
 We model sequential and concurrent activities using activity diagrams. So, we basically
depict workflows visually using an activity diagram.
 An activity diagram focuses on the conditions of flow and the sequence in which activities happen.
 We can describe or depict what causes a particular event using an activity diagram.
5.3. Use Case Diagrams
Use Case Diagrams are used to depict the functionality of a system or a part of a system. They
are widely used to illustrate the functional requirements of the system and its interaction with
external agents(actors).
 A use case is basically a diagram representing different scenarios where the system can be
used.
 A use case diagram gives us a high level view of what the system or a part of the system
does without going into implementation details.
5.4. Sequence Diagram
A sequence diagram simply depicts interaction between objects in a sequential order i.e. the
order in which these interactions take place.
 We can also use the terms event diagrams or event scenarios to refer to a sequence diagram.
 Sequence diagrams describe how and in what order the objects in a system function.
 These diagrams are widely used by business analysts and software developers to document and
understand requirements for new and existing systems.
5.5. Communication Diagram
A Communication Diagram (known as Collaboration Diagram in UML 1.x) is used to show
sequenced messages exchanged between objects.
 A communication diagram focuses primarily on objects and their relationships.
 We can represent similar information using Sequence diagrams, however communication
diagrams represent objects and links in a free form.
5.6. Timing Diagram
Timing Diagrams are a special form of sequence diagram used to depict the behavior of
objects over a time frame. We use them to show the time and duration constraints that
govern changes in the states and behavior of objects.
5.7. Interaction Overview Diagram
An Interaction Overview Diagram models a sequence of actions and helps us simplify complex
interactions into simpler occurrences. It is a mixture of activity and sequence diagrams.
6. Object-Oriented Concepts Used in UML Diagrams
1. Class: A class defines the blueprint, i.e., the structure and functions of an object.
2. Objects: Objects help us to decompose large systems and help us to modularize our system.
Modularity helps to divide our system into understandable components so that we can build
our system piece by piece.
3. Inheritance: Inheritance is a mechanism by which child classes inherit the properties of
their parent classes.
4. Abstraction: Abstraction in UML refers to the process of emphasizing the essential aspects
of a system or object while disregarding irrelevant details. By abstracting away unnecessary
complexities, abstraction facilitates a clearer understanding and communication among
stakeholders.
5. Encapsulation: Binding data together and protecting it from the outer world is referred to
as encapsulation.
6. Polymorphism: Mechanism by which functions or entities are able to exist in different
forms.
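The concepts above can be made concrete with a short sketch. The `Shape`, `Circle`, and `Square` classes below are hypothetical examples chosen to show abstraction, inheritance, encapsulation, and polymorphism together.

```python
# Illustrative sketch of the OO concepts listed above (hypothetical classes).

class Shape:                       # abstraction: only the essential interface
    def area(self) -> float:
        raise NotImplementedError  # subclasses must supply the details

class Circle(Shape):               # inheritance: Circle is-a Shape
    def __init__(self, radius: float):
        self._radius = radius      # encapsulation: kept non-public by convention

    def area(self) -> float:       # polymorphism: overrides Shape.area
        return 3.14159 * self._radius ** 2

class Square(Shape):
    def __init__(self, side: float):
        self._side = side

    def area(self) -> float:
        return self._side ** 2

# One call site works for every subtype (polymorphism in action):
for shape in (Circle(1.0), Square(2.0)):
    print(round(shape.area(), 2))  # prints 3.14, then 4.0
```

In class-diagram terms, `Circle` and `Square` would be linked to `Shape` by generalization arrows, and the leading underscore marks the attribute as private (the `-` visibility marker).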
6.1. Additions in UML 2.0
 Software development methodologies like agile have been incorporated and scope of
original UML specification has been broadened.
 Originally, UML specified 9 diagrams. UML 2.x increased the number of diagrams from
9 to 13. The four diagrams that were added are: timing diagram, communication diagram,
interaction overview diagram, and composite structure diagram. UML 2.x also renamed
statechart diagrams to state machine diagrams.
 UML 2.x added the ability to decompose software system into components and sub-
components.
7. Tools for creating UML Diagrams
There are several tools available for creating Unified Modeling Language (UML) diagrams,
which are commonly used in software development to visually represent system architecture,
design, and implementation. Here are some popular UML diagram creating tools:
 Lucidchart: Lucidchart is a web-based diagramming tool that supports UML diagrams. It’s
user-friendly and collaborative, allowing multiple users to work on diagrams in real-time.
 Draw.io: Draw.io is a free, web-based diagramming tool that supports various diagram
types, including UML. It integrates with various cloud storage services and can be used
offline.
 Visual Paradigm: Visual Paradigm provides a comprehensive suite of tools for software
development, including UML diagramming. It offers both online and desktop versions and
supports a wide range of UML diagrams.
 StarUML: StarUML is an open-source UML modeling tool with a user-friendly interface.
It supports the standard UML 2.x diagrams and allows users to customize and extend its
functionality through plugins.
 Papyrus: Papyrus is an open-source UML modeling tool that is part of the Eclipse
Modeling Project. It provides a customizable environment for creating, editing, and
visualizing UML diagrams.
 PlantUML: PlantUML is a text-based tool that allows you to create UML diagrams using a
simple and human-readable syntax. It’s often used in conjunction with other tools and
supports a variety of diagram types.
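Because PlantUML sources are plain text, a small class diagram can be written directly. The sketch below is a minimal, hypothetical example of the syntax (the `Customer` and `Order` names are illustrative, not from any standard library of diagrams):

```plantuml
@startuml
class Customer {
  +name : String
  +placeOrder()
}
class Order {
  +item : String
}
' A one-to-many association with multiplicities and a label
Customer "1" --> "*" Order : places
@enduml
```

Rendering this text with the PlantUML tool produces the corresponding class diagram, which makes the diagrams easy to version-control alongside source code.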
8. Steps to create UML Diagrams

Creating Unified Modeling Language (UML) diagrams involves a systematic process that
typically includes the following steps:
 Step 1: Identify the Purpose:
o Determine the purpose of creating the UML diagram. Different types of UML
diagrams serve various purposes, such as capturing requirements, designing
system architecture, or documenting class relationships.
 Step 2: Identify Elements and Relationships:
o Identify the key elements (classes, objects, use cases, etc.) and their relationships
that need to be represented in the diagram. This step involves understanding the
structure and behavior of the system you are modeling.
 Step 3: Select the Appropriate UML Diagram Type:
o Choose the UML diagram type that best fits your modeling needs. Common types
include Class Diagrams, Use Case Diagrams, Sequence Diagrams, Activity
Diagrams, and more.
 Step 4: Create a Rough Sketch:
o Before using a UML modeling tool, it can be helpful to create a rough sketch on
paper or a whiteboard. This can help you visualize the layout and connections
between elements.
 Step 5: Choose a UML Modeling Tool:
o Select a UML modeling tool that suits your preferences and requirements. There
are various tools available, both online and offline, that offer features for creating
and editing UML diagrams.
 Step 6: Create the Diagram:
o Open the selected UML modeling tool and create a new project or diagram. Begin
adding elements (e.g., classes, use cases, actors) to the diagram and connect them
with appropriate relationships (e.g., associations, dependencies).
 Step 7: Define Element Properties:
o For each element in the diagram, specify relevant properties and attributes. This
might include class attributes and methods, use case details, or any other
information specific to the diagram type.
 Step 8: Add Annotations and Comments:
o Enhance the clarity of your diagram by adding annotations, comments, and
explanatory notes. This helps anyone reviewing the diagram to understand the
design decisions and logic behind it.
 Step 9: Validate and Review:
o Review the diagram for accuracy and completeness. Ensure that the relationships,
constraints, and elements accurately represent the intended system or process.
Validate your diagram against the requirements and make necessary adjustments.
 Step 10: Refine and Iterate:
o Refine the diagram based on feedback and additional insights. UML diagrams are
often created iteratively as the understanding of the system evolves.
 Step 11: Generate Documentation:
o Some UML tools allow you to generate documentation directly from your
diagrams. This can include class documentation, use case descriptions, and other
relevant information.
Note: Remember that the specific steps may vary based on the UML diagram type and the tool
you are using.
9. UML diagrams best practices
Unified Modeling Language (UML) is a powerful tool for visualizing and documenting the
design of a system. To create effective and meaningful UML diagrams, it’s essential to follow
best practices. Here are some UML best practices:
1. Understand the Audience: Consider your audience when creating UML diagrams. Tailor
the level of detail and the choice of diagrams to match the understanding and needs of your
audience, whether they are developers, architects, or stakeholders.
2. Keep Diagrams Simple and Focused: Aim for simplicity in your diagrams. Each diagram
should focus on a specific aspect of the system or a particular set of relationships. Avoid
clutter and unnecessary details that can distract from the main message.
3. Use Consistent Naming Conventions: Adopt consistent and meaningful names for classes,
objects, attributes, methods, and other UML elements. Clear and well-thought-out naming
conventions enhance the understandability of your diagrams.
4. Follow Standard UML Notations: Adhere to standard UML notations and symbols.
Consistency in using UML conventions ensures that your diagrams are easily understood by
others who are familiar with UML.
5. Keep Relationships Explicit: Clearly define and label relationships between elements. Use
appropriate arrows, multiplicity notations, and association names to communicate the nature
of connections between classes, objects, or use cases.
10. UML and Agile Development
Unified Modeling Language (UML) and Agile development are two different approaches to
software development, and they can be effectively integrated to enhance the overall
development process. Here are some key points about the relationship between UML and Agile
development:
10.1. UML in Agile Development
 Visualization and Communication: UML diagrams provide a visual way to represent
system architecture, design, and behavior. In Agile development, where communication is
crucial, UML diagrams can serve as effective communication tools between team members,
stakeholders, and even non-technical audiences.
 User Stories and Use Cases: UML use case diagrams can be used to capture and model
user stories in Agile development. Use cases help in understanding the system from an end-
user perspective and contribute to the creation of user stories.
 Iterative Modeling: Agile methodologies emphasize iterative development, and UML can
be adapted to support this approach. UML models can be created and refined incrementally
as the understanding of the system evolves during each iteration.
 Agile Modeling Techniques: Agile modeling techniques, such as user story mapping and
impact mapping, complement UML by providing lightweight ways to visualize and
communicate requirements and design. These techniques align with the Agile principle of
valuing working software over comprehensive documentation.
10.2. Balancing Agility and Modeling
 Adaptive Modeling: Adopt an adaptive modeling approach where UML is used to the
extent necessary for effective communication and understanding. The focus should be on
delivering value through working software rather than exhaustive documentation.
 Team Empowerment: Empower the development team to choose the right level of
modeling based on the project’s needs. Team members should feel comfortable using UML
as a communication tool without feeling burdened by excessive modeling requirements.
11. Common Challenges in UML Modeling
1. Time-Intensive: UML modeling can be perceived as time-consuming, especially in fast-
paced Agile environments where rapid development is emphasized. Teams may struggle to
keep up with the need for frequent updates to UML diagrams.
2. Over-Documentation: Agile principles value working software over comprehensive
documentation. There’s a risk of over-documentation when using UML, as teams may
spend too much time on detailed diagrams that do not directly contribute to delivering
value.
3. Changing Requirements: Agile projects often face changing requirements, and UML
diagrams may become quickly outdated. Keeping up with these changes and ensuring that
UML models reflect the current system state can be challenging.
4. Collaboration Issues: Agile emphasizes collaboration among team members, and
sometimes UML diagrams are seen as artifacts that only certain team members understand.
Ensuring that everyone can contribute to and benefit from UML models can be a challenge.
12. Benefits of Using UML Diagrams
1. Standardization: UML provides a standardized way of representing system models,
ensuring that developers and stakeholders can communicate using a common visual
language.
2. Communication: UML diagrams serve as a powerful communication tool between
stakeholders, including developers, designers, testers, and business users. They help in
conveying complex ideas in a more understandable manner.
3. Visualization: UML diagrams facilitate the visualization of system components,
relationships, and processes. This visual representation aids in understanding and designing
complex systems.
4. Documentation: UML diagrams can be used as effective documentation tools. They
provide a structured and organized way to document various aspects of a system, such as
architecture, design, and behavior.
5. Analysis and Design: UML supports both analysis and design phases of software
development. It helps in modeling the requirements of a system and then transforming them
into a design that can be implemented.

Unit-3

Real Time Operating System (RTOS)



Real-time operating systems (RTOS) are used in environments where a large number of events,
mostly external to the computer system, must be accepted and processed in a short time or within
certain deadlines. Such applications include industrial control, telephone switching equipment,
flight control, and real-time simulations. With an RTOS, the processing time is measured in
tenths of a second or less. This system is time-bound and has fixed deadlines; processing must
occur within the specified constraints, otherwise the system will fail.
Examples of real-time operating systems are airline traffic control systems, command control
systems, airline reservation systems, heart pacemakers, network multimedia systems, robots,
etc.
Real-time operating systems can be of the following types:

Figure – Types of RTOS

1. Hard Real-Time Operating System: These operating systems guarantee that critical tasks
are completed within a strict window of time.
For example, if a robot is hired to weld a car body and it welds too early or too late, the
car cannot be sold, so this is a hard real-time system: the robot must complete the welding
exactly on time. Other examples include scientific experiments, medical imaging systems,
industrial control systems, weapon systems, robots, and air traffic control systems.

2. Soft Real-Time Operating System: This operating system provides some relaxation in the time
limit.
For example: multimedia systems, digital audio systems, etc. Explicit, programmer-defined,
and controlled processes are encountered in real-time systems. A separate process is charged
with handling a single external event; the process is activated upon the occurrence of the
related event, which is signaled by an interrupt.
Multitasking operation is accomplished by scheduling processes for execution independently
of each other. Each process is assigned a level of priority corresponding to the relative
importance of the event it services, and the processor is allocated to the highest-priority
process. This type of scheduling, called priority-based preemptive scheduling, is used by
real-time systems.

3. Firm Real-Time Operating System: An RTOS of this type also has to follow deadlines.
Missing a deadline may have only a small impact, but it can still have unintended
consequences, including a reduction in the quality of the product. Example: multimedia
applications.
4. Deterministic Real-Time Operating System: Consistency is the key in this type of real-
time operating system. It ensures that all tasks and processes execute with predictable
timing all the time, which makes it suitable for applications in which timing accuracy is
very important. Examples: INTEGRITY, PikeOS.
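The difference between the hard, firm, and soft policies above is what happens when a deadline is missed. Here is a hedged sketch of that distinction; it is illustrative only (a real RTOS enforces deadlines in the kernel, not in application code), and the `run_with_deadline` helper and the `weld` task are hypothetical names.

```python
import time

# Hedged sketch: how a missed deadline might be treated under the three
# real-time policies described above.

def run_with_deadline(task, deadline_s, policy):
    start = time.monotonic()
    result = task()                      # run the task to completion
    elapsed = time.monotonic() - start
    if elapsed > deadline_s:             # the deadline was missed
        if policy == "hard":
            raise RuntimeError("deadline missed: treated as system failure")
        if policy == "firm":
            return None                  # a late result has zero value: drop it
        # "soft": a late result still has some (degraded) value, so keep it
    return result

weld = lambda: "welded"                  # hypothetical fast task
print(run_with_deadline(weld, deadline_s=1.0, policy="hard"))  # -> welded
```

Under "hard" a miss is a failure, under "firm" the late result is discarded, and under "soft" the late result is still delivered, mirroring the three categories in the list above.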
History of Operating System

An operating system is a type of software that acts as an interface between the user and the
hardware. It is responsible for handling various critical functions of the computer and utilizing
resources very efficiently so the operating system is also known as a resource manager. The
operating system also acts like a government because just as the government has authority over
everything, similarly the operating system has authority over all resources. Various tasks that are
handled by OS are file management, task management, garbage management, memory
management, process management, disk management, I/O management, peripherals
management, etc.
Generation of Operating System
Below are four generations of operating systems.
 The First Generation
 The Second Generation
 The Third Generation
 The Fourth Generation
1. The First Generation (1940 to early 1950s)
In 1940, an operating system was not included in the creation of the first electrical computer.
Early computer users had complete control over the device and wrote programs in pure machine
language for every task. During the computer generation, a programmer can merely execute and
solve basic mathematical calculations. an operating system is not needed for these computations.
2. The Second Generation (1955 – 1965)
GM-NAA I/O, widely regarded as the first operating system, was developed in the mid-1950s by
General Motors for an IBM computer. The second-generation operating system was built on a
single-stream batch processing system: it gathered all related jobs into groups, or batches,
and submitted them to the computer on punched cards to finish them one after another.
3. The Third Generation (1965 – 1980)
Under batch processing, control is transferred to the operating system upon each job's
completion, whether routine or unexpected; the operating system cleans up after each job
finishes before reading and starting the next job from a punched card. In this period, large,
professionally operated machines known as mainframes were introduced. In the late 1960s,
operating system designers were able to create a new kind of operating system capable of
multiprogramming: the simultaneous execution of several tasks on a single computer.
Multiprogramming was introduced so that the CPU could be kept active at all times by carrying
out multiple jobs at once. With the release of the DEC PDP-1, the minicomputer saw a new
phase of growth and development.
4. The Fourth Generation (1980 – Present Day)
The fourth generation of operating systems is tied to the evolution of the personal computer,
which grew out of the earlier minicomputers (PDPs). Nonetheless, third-generation
minicomputers and the personal computer had many similarities; at the time, minicomputers
were only slightly more expensive than personal computers, which were themselves highly
expensive.
The development of Microsoft and the Windows operating system was a significant influence on
the evolution of personal computers. Microsoft was founded in 1975 by Bill Gates and Paul
Allen, who set out to advance personal computing. MS-DOS was released in 1981, but users
found its commands extremely challenging to decipher; the first version of Microsoft Windows
followed in 1985. Windows is now among the most widely used operating systems available,
through a succession of releases including Windows 95, Windows 98, Windows XP, Windows 7,
and Windows 10, which the majority of Windows users currently run. Apple's macOS is another
well-known operating system in addition to Windows.

Defining an RTOS

Real-time operating systems (RTOS) are specialized software systems designed to manage and
execute applications that process data in real time. Unlike general-purpose operating systems
(such as Windows or Linux), which prioritize overall system efficiency and user interaction,
an RTOS prioritizes predictable timing and fast response times for specific tasks or
processes. Key components of an RTOS are described below.

The Scheduler

The scheduler is the part of the kernel responsible for deciding which task should be executing
at any particular time. The kernel can suspend and later resume a task many times during the
task's lifetime.

The scheduling policy is the algorithm used by the scheduler to decide which task to execute at
any point in time. The policy of a (non-real-time) multi-user system will most likely allow each
task a "fair" proportion of processor time. The policy used in real-time / embedded systems is
described later.

In addition to being suspended involuntarily by the kernel, a task can choose to suspend itself. It
will do this if it either wants to delay (sleep) for a fixed period, or wait (block) for a resource to
become available (e.g. a serial port) or an event to occur (e.g. a key press). A blocked or sleeping
task is not able to execute, and will not be allocated any processing time.
Referring to the numbers in the diagram above:

 At (1) task 1 is executing.


 At (2) the kernel suspends (swaps out) task 1 ...
 ... and at (3) resumes task 2.
 While task 2 is executing (4), it locks a processor peripheral for its own exclusive access.
 At (5) the kernel suspends task 2 ...
 ... and at (6) resumes task 3.
 Task 3 tries to access the same processor peripheral; finding it locked, task 3 cannot
continue, so it suspends itself at (7).
 At (8) the kernel resumes task 1.
 Etc.
 The next time task 2 is executing (9) it finishes with the processor peripheral and unlocks
it.
 The next time task 3 is executing (10) it finds it can now access the processor peripheral
and this time executes until suspended by the kernel.

A real-time operating system (RTOS) serves real-time applications that process data without
buffering delay. In an RTOS, processing-time requirements are measured in tenths of seconds or
shorter increments of time. It is a time-bound system with defined, fixed time constraints:
processing must be done inside the specified constraints, otherwise the system will fail.

Real-time tasks are tasks associated with a quantitative expression of time, which describes the
required behavior of the task. A real-time task must be scheduled so that all of the computation
it involves completes within its timing constraint. The timing constraint attached to a real-time
task is its deadline: every real-time task needs to be completed before its deadline. Examples
include input/output interaction with devices, web browsing, etc.

Types of Tasks in Real-Time Systems

There are the following types of tasks in real-time systems, such as:
1. Periodic Task

In periodic tasks, jobs are released at regular intervals. A periodic task repeats itself after a fixed
time interval and is denoted by a four-tuple: Ti = <Φi, Pi, ei, Di>

Where,

o Φi: It is the phase of the task, and phase is the release time of the first job in the task. If
the phase is not mentioned, then the release time of the first job is assumed to be zero.
o Pi: It is the period of the task, i.e., the time interval between the release times of two
consecutive jobs.
o ei: It is the execution time of the task.
o Di: It is the relative deadline of the task.

For example: Consider the task Ti with period = 5 and execution time = 3

Phase is not given, so assume the release time of the first job is zero. The first job of this task is
released at t = 0 and executes for 3 s; the next job is released at t = 5 and executes for 3 s; the
next is released at t = 10, and so on. Jobs are thus released at t = 5k, where k = 0, 1, ..., n.
Hyper period of a set of periodic tasks is the least common multiple of all the tasks in that set.
For example, two tasks T1 and T2 having period 4 and 5 respectively will have a hyper period, H
= lcm(p1, p2) = lcm(4, 5) = 20. The hyper period is the time after which the pattern of job release
times starts to repeat.
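Both the release times and the hyper period above are easy to compute mechanically. A small illustrative sketch (Python is used here purely for demonstration):

```python
from math import gcd
from functools import reduce

def hyper_period(periods):
    """Least common multiple of all task periods."""
    return reduce(lambda a, b: a * b // gcd(a, b), periods)

# Hyper period of the two tasks with periods 4 and 5 from the text,
# and the release times of the period-5 task within one hyper period.
H = hyper_period([4, 5])
releases = [5 * k for k in range(H // 5)]
print(H, releases)   # 20 [0, 5, 10, 15]
```

After one hyper period (here 20 time units), the pattern of job releases repeats exactly.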

2. Dynamic Tasks

It is a sequential program that is invoked by the occurrence of an event. An event may be
generated by processes external to the system or by processes internal to the system.
Dynamically arriving tasks can be categorized based on their criticality and on what is known
about their occurrence times.

1. Aperiodic Tasks: In this type of task, jobs are released at arbitrary time intervals.
Aperiodic tasks have soft deadlines or no deadlines.
2. Sporadic Tasks: They are similar to aperiodic tasks, i.e., they repeat at random instants.
The only difference is that sporadic tasks have hard deadlines. A sporadic task is denoted
by a three-tuple: Ti = (ei, gi, Di)
o Where
o ei: It is the execution time of the task.
o gi: It is the minimum separation between the occurrence of two consecutive
instances of the task.
o Di: It is the relative deadline of the task.

3. Critical Tasks

Critical tasks are those whose timely executions are critical. If deadlines are missed, catastrophes
occur.

For example, life-support systems and the stability control of aircraft. It may therefore be
necessary to execute critical tasks at a higher frequency than is strictly required.

4. Non-critical Tasks

Non-critical tasks are real-time tasks that, as the name implies, are not critical to the
application. However, they deal with time-varying data and hence are useless if not
completed within a deadline. The goal of scheduling these tasks is to maximize the percentage of
jobs successfully executed within their deadlines.

Task Scheduling

Real-time task scheduling essentially refers to determining how the various tasks are picked for
execution by the operating system. Every operating system relies on one or more task schedulers
to prepare the schedule of execution of the various tasks it needs to run. Each task scheduler is
characterized by the scheduling algorithm it employs. A large number of algorithms for
scheduling real-time tasks have so far been developed.

Classification of Task Scheduling

Here are the following types of task scheduling in a real-time system, such as:

1. Valid Schedule: A valid schedule for a set of tasks is one where at most one task is
assigned to a processor at a time, no task is scheduled before its arrival time, and the
precedence and resource constraints of all tasks are satisfied.
2. Feasible Schedule: A valid schedule is called a feasible schedule only if all tasks meet
their respective time constraints in the schedule.
3. Proficient Scheduler: A task scheduler S1 is more proficient than another scheduler S2
if S1 can feasibly schedule all task sets that S2 can feasibly schedule, but not vice versa;
that is, there is at least one task set that S2 cannot feasibly schedule whereas S1 can. If
S1 and S2 can each feasibly schedule exactly the same task sets, then S1 and S2 are
called equally proficient schedulers.
4. Optimal Scheduler: A real-time task scheduler is called optimal if it can feasibly
schedule any task set that any other scheduler can feasibly schedule. In other words, it
would not be possible to find a more proficient scheduling algorithm than an optimal
scheduler. If an optimal scheduler cannot schedule some task set, then no other scheduler
should produce a feasible schedule for that task set.
5. Scheduling Points: The scheduling points of a scheduler are the points on a timeline at
which the scheduler makes decisions regarding which task is to be run next. It is
important to note that a task scheduler does not need to run continuously, and the
operating system activates it only at the scheduling points to decide which task to run
next. The scheduling points are defined as instants marked by interrupts generated by a
periodic timer in a clock-driven scheduler. The occurrence of certain events determines
the scheduling points in an event-driven scheduler.
6. Preemptive Scheduler: A preemptive scheduler is one that, when a higher priority task
arrives, suspends any lower priority task that may be executing and takes up the higher
priority task for execution. Thus, in a preemptive scheduler, it cannot be the case that a
higher priority task is ready and waiting for execution, and the lower priority task is
executing. A preempted lower priority task can resume its execution only when no higher
priority task is ready.
7. Utilization: The processor utilization (or simply utilization) of a task is the average time
for which it executes per unit time interval. In notations:
for a periodic task Ti, the utilization ui = ei/pi, where
o ei is the execution time and
o pi is the period of Ti.

For a set of periodic tasks {Ti}: the total utilization due to all tasks is U = Σ (i = 1 to n) ei/pi.
Any good scheduling algorithm's objective is to feasibly schedule even those task sets
with very high utilization, i.e., utilization approaching 1. Of course, on a uniprocessor, it
is not possible to schedule task sets having utilization of more than 1.
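The utilization formula above can be sketched in a few lines. This is purely illustrative; the three tasks below are invented for the example:

```python
def utilization(tasks):
    """Total utilization U = sum of e_i / p_i over periodic tasks (e_i, p_i)."""
    return sum(e / p for e, p in tasks)

# Three hypothetical periodic tasks as (execution time, period) pairs.
task_set = [(1, 4), (2, 5), (1, 10)]
U = utilization(task_set)
print(U)          # 0.25 + 0.40 + 0.10 = 0.75
print(U <= 1.0)   # necessary condition for feasibility on a uniprocessor
```

A task set with U > 1 can never be feasibly scheduled on a single processor, since the tasks together demand more than one full processor of work per unit time.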

8. Jitter
Jitter is the deviation of a periodic task from its strict periodic behavior. Arrival-time
jitter is the deviation of the task's arrival from the precise periodic time of arrival. It may
be caused by imprecise clocks or other factors such as network congestion. Similarly,
completion-time jitter is the deviation of the completion of a task from precise periodic
points.
Completion-time jitter may be caused by the specific scheduling algorithm employed,
which takes up a task for scheduling as per convenience and the load at an instant, rather
than scheduling at strict time instants. Jitter is undesirable for some applications.
Sometimes the actual release time of a job is not known; it is only known that ri lies in a
range [ri-, ri+]. This range is known as the release-time jitter. Here
o ri- is how early a job can be released and,
o ri+ is how late a job can be released.

Only the range [ei-, ei+] of the execution time of a job is known. Here

o ei- is the minimum amount of time required by a job to complete its execution
and,
o ei+ is the maximum amount of time required by a job to complete its execution.

Precedence Constraint of Jobs


Jobs in a task are independent if they can be executed in any order. If there is a specific order in
which jobs must be executed, then jobs are said to have precedence constraints. For representing
precedence constraints of jobs, a partial order relation < is used, and this is called precedence
relation. A job Ji is a predecessor of job Jj if Ji < Jj, i.e., Jj cannot begin its execution until
Ji completes. Ji is an immediate predecessor of Jj if Ji < Jj, and there is no other job Jk such that
Ji < Jk < Jj. Ji and Jj are independent if neither Ji < Jj nor Jj < Ji is true.

An efficient way to represent precedence constraints is by using a directed graph G = (J, <)
where J is the set of jobs. This graph is known as the precedence graph. Vertices of the graph
represent jobs, and precedence constraints are represented using directed edges. If there is a
directed edge from Ji to Jj, it means that Ji is the immediate predecessor of Jj.

For example: Consider a task T having 5 jobs J 1, J2, J3, J4, and J5, such that J2 and J5 cannot begin
their execution until J1 completes and there are no other constraints. The precedence constraints
for this example are:

J1 < J2 and J1 < J5


Set representation of precedence graph:

1. < (1) = { }
2. < (2) = {1}
3. < (3) = { }
4. < (4) = { }
5. < (5) = {1}
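The set representation above can be built mechanically from the list of precedence edges. A small sketch (the edge list mirrors J1 < J2 and J1 < J5 from the example):

```python
def predecessor_sets(n_jobs, edges):
    """Build <(j), the set of predecessors of each job, from directed
    edges (i, j) meaning Ji < Jj in the precedence graph."""
    preds = {j: set() for j in range(1, n_jobs + 1)}
    for i, j in edges:
        preds[j].add(i)
    return preds

print(predecessor_sets(5, [(1, 2), (1, 5)]))
# {1: set(), 2: {1}, 3: set(), 4: set(), 5: {1}}
```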

Consider another example where a precedence graph is given, and you have to find precedence
constraints.
From the above graph, we derive the following precedence constraints:

1. J1< J2
2. J2< J3
3. J2< J4
4. J3< J4

1. **Introduction to Tasks:**

- Tasks in an RTOS represent independent units of work or processes that perform
specific functions or computations.

- Each task has its own execution context, including a program counter, stack pointer,
and local variables.

- Tasks are created during system initialization or dynamically during runtime using
RTOS-specific APIs.

- Tasks may have different priorities assigned to them, influencing their order of
execution by the RTOS scheduler.

2. **Task States:**

- Tasks in an RTOS typically transition between different states based on their
execution status and interactions with the system:

- **Running:** The task is currently executing on the CPU.

- **Ready:** The task is ready to execute but waiting for CPU time. It is in the queue
of tasks eligible to run.

- **Blocked:** The task is waiting for an event or resource (e.g., I/O completion,
semaphore release). It cannot proceed until the condition is satisfied.
- **Suspended:** The task has been temporarily halted or paused by the system or
another task.
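The four states and their legal transitions can be modeled as a tiny state machine. This is a hypothetical sketch: the transition set below is illustrative and is not taken from any particular RTOS API.

```python
# Hypothetical task-state machine; which transitions are legal varies by RTOS.
VALID_TRANSITIONS = {
    ("ready", "running"),      # scheduler dispatches the task
    ("running", "ready"),      # preempted by a higher-priority task
    ("running", "blocked"),    # waits for an event or a resource
    ("blocked", "ready"),      # the event occurs / the resource is released
    ("running", "suspended"),  # halted by the system or another task
    ("ready", "suspended"),
    ("suspended", "ready"),    # resumed
}

def transition(state, new_state):
    """Return the new state, rejecting transitions this model does not allow."""
    if (state, new_state) not in VALID_TRANSITIONS:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

s = "ready"
s = transition(s, "running")   # dispatched
s = transition(s, "blocked")   # waits for I/O
s = transition(s, "ready")     # I/O completes
print(s)                       # ready
```

Note that a blocked task cannot go straight to running: it must first become ready and then be dispatched by the scheduler.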

3. **Scheduling:**

- Scheduling in an RTOS refers to the mechanism by which the RTOS decides which
task should execute next on the CPU.

- The scheduler ensures that tasks are executed in a manner that meets their priority
requirements and real-time constraints.

- Common scheduling algorithms used in RTOS include:

- **Priority-Based Scheduling:** Tasks are assigned priorities, and the scheduler
selects the highest-priority task that is ready to run. This ensures that higher-priority tasks
receive CPU time over lower-priority ones.

- **Rate-Monotonic Scheduling (RMS):** Assigns priorities based on task periods
(inverse of task frequency). Shorter-period tasks (higher frequency) have higher
priorities.

- **Earliest Deadline First (EDF):** Tasks are scheduled based on their deadlines.
The scheduler always selects the task with the earliest deadline that is ready to run.

4. **Relationship:**

- Tasks, task states, and scheduling are closely intertwined in an RTOS environment:

- **Task Creation and States:** Tasks are created with specific priorities and enter the
system in a ready state. They may transition to running, blocked, or suspended states
based on events or resource availability.

- **Scheduling and Task Execution:** The scheduler determines the order in which
tasks transition between states and execute on the CPU. It ensures that higher-priority
tasks preempt lower-priority tasks when necessary, maintaining responsiveness and
meeting real-time requirements.

- **Real-Time Guarantees:** The combination of task states and scheduling policies
ensures that tasks execute predictably and meet their deadlines, which is crucial for
applications requiring deterministic behavior such as industrial control systems, medical
devices, and automotive systems.
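The three policies above choose the next task by comparing different keys. A minimal, purely illustrative sketch (the task names, priorities, periods, and deadlines below are invented for the example):

```python
# Each ready task: (name, fixed priority, period, absolute deadline).
# Higher priority number = more important in this sketch.
ready = [
    ("sensor", 2, 10,  6),
    ("logger", 3, 100, 90),
    ("motor",  1, 5,   8),
]

# Priority-based: the highest fixed priority wins.
by_priority = max(ready, key=lambda t: t[1])

# Rate-monotonic: the shortest period (highest rate) wins.
by_rms = min(ready, key=lambda t: t[2])

# Earliest deadline first: the earliest absolute deadline wins.
by_edf = min(ready, key=lambda t: t[3])

print(by_priority[0], by_rms[0], by_edf[0])   # logger motor sensor
```

The same ready list yields three different winners, which is exactly the difference between the policies: RMS is a fixed-priority rule derived from periods, while EDF re-evaluates priorities dynamically from deadlines.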
Types of Scheduling Algorithms
There are many scheduling algorithms that can be used for scheduling task execution on a
CPU. They can be classified into two main types: preemptive scheduling
algorithms and non-preemptive scheduling algorithms.

Preemptive Scheduling

Preemptive scheduling allows the interruption of a currently running task, so another one
with more “urgent” status can be run. The interrupted task is involuntarily moved by the
scheduler from running state to ready state. This dynamic switching between tasks that this
algorithm employs is, in fact, a form of multitasking. It requires assigning a priority level
for each task. A running task can be interrupted if a task with a higher priority enters the
queue.

Fig. 1 Preemptive scheduling

As an example let’s have three tasks called Task 1, Task 2 and Task 3. Task 1 has the
lowest priority and Task 3 has the highest priority. Their arrival times and execute times
are listed in the table below.

Task Name    Arrival Time [μs]    Execute Time [μs]
Task 1       10                   50
Task 2       40                   50
Task 3       60                   40
In Fig. 1 we can see that Task 1 is the first to start executing, as it is the first one to arrive
(at t = 10 μs ). Task 2 arrives at t = 40μs and since it has a higher priority, the scheduler
interrupts the execution of Task 1 and puts Task 2 into running state. Task 3 which has the
highest priority arrives at t = 60 μs. At this moment Task 2 is interrupted and Task 3 is put
into running state. As it is the highest priority task it runs until it completes at t = 100 μs.
Then Task 2 resumes its operation as the current highest-priority task. Task 1 is the last to
complete its operation.

Non-preemptive Scheduling (a.k.a Co-Operative Scheduling)

In non-preemptive scheduling, the scheduler has more restricted control over the tasks. It
can only start a task and then it has to wait for the task to finish or for the task to
voluntarily return the control. A running task can’t be stopped by the scheduler.

Fig.2 Non-preemptive scheduling

If we take the three tasks specified in the table from the previous chapter and schedule
them using a non-preemptive algorithm we get the behavior shown in Fig. 2. Once started,
each task completes its operation and then the next one starts.

The non-preemptive scheduling can simplify the synchronization of the tasks, but that is at
the cost of increased response times to events. This reduces its practical use in complex
real-time systems.
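The behavior shown in Fig. 1 and Fig. 2 can be reproduced with a small tick-by-tick simulation. This is an illustrative sketch only: time advances in 1 μs steps, and the non-preemptive variant is assumed to pick ready tasks in arrival order, matching the figures.

```python
def simulate(tasks, preemptive, horizon=200):
    """tasks: {name: (arrival, exec_time, priority)}, higher number = higher
    priority. Returns {name: completion_time} in 1 us ticks."""
    remaining = {n: e for n, (_, e, _) in tasks.items()}
    done, current = {}, None
    for t in range(horizon):
        ready = [n for n, (a, _, _) in tasks.items()
                 if a <= t and remaining[n] > 0]
        if not ready:
            current = None
            continue
        if preemptive:
            # the highest-priority ready task always wins the CPU
            current = max(ready, key=lambda n: tasks[n][2])
        elif current is None or remaining[current] == 0:
            # non-preemptive: pick a new task only when the old one finished;
            # ready tasks are taken in arrival order here
            current = min(ready, key=lambda n: tasks[n][0])
        remaining[current] -= 1
        if remaining[current] == 0:
            done[current] = t + 1
    return done

tasks = {"Task 1": (10, 50, 1), "Task 2": (40, 50, 2), "Task 3": (60, 40, 3)}
print(simulate(tasks, preemptive=True))   # Task 3 at 100, Task 2 at 130, Task 1 at 150
print(simulate(tasks, preemptive=False))  # Task 1 at 60, Task 2 at 110, Task 3 at 150
```

Under preemption the highest-priority Task 3 finishes first (at t = 100 μs, as in Fig. 1); without preemption each task runs to completion in turn, so Task 3 finishes last.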

Popular Scheduling Algorithms


We will now introduce some of the most popular scheduling algorithms that are used in
CPU scheduling. Not all of them are suitable for use in real-time embedded systems.
Currently, the most used algorithms in practical RTOS are non-preemptive
scheduling, round-robin scheduling, and preemptive priority scheduling.

Round robin scheduling

Round robin scheduling is a computer algorithm used in multitasking and operating systems. It's
a pre-emptive algorithm where each process is assigned a fixed time slice or quantum. Here’s
how it works:
Important Abbreviations
1. CPU - - - > Central Processing Unit
2. AT - - - > Arrival Time
3. BT - - - > Burst Time
4. WT - - - > Waiting Time
5. TAT - - - > Turn Around Time
6. CT - - - > Completion Time
7. FIFO - - - > First In First Out
8. TQ - - - > Time Quantum

Round Robin CPU Scheduling

Round Robin CPU Scheduling is one of the most widely used CPU scheduling algorithms. It
relies on a Time Quantum (TQ): in each turn, up to one time quantum is deducted from the
process's remaining burst time, letting a chunk of the process be completed before the CPU
moves on to the next process.

 Process Queue: All processes in the system are placed in a queue. The order typically doesn't
change unless a new process arrives.

 Time Slicing: Each process in the queue is given a small unit of CPU time, called a time slice
or quantum. For example, if the time slice is 10 milliseconds and there are three processes, each
process gets 10 milliseconds of CPU time in turn.

 Execution: The operating system cycles through the process queue, allocating the CPU to
each process for its time slice. If a process doesn't finish within its time slice, it's preempted
(paused) and placed back at the end of the queue to wait for its next turn.

 Completion: This cycle continues until all processes have completed their tasks.

Time Sharing is the main emphasis of the algorithm. Each step of this algorithm is carried out
cyclically. The system defines a specific time slice, known as a time quantum.

First, the processes that are eligible enter the ready queue. The process at the front of the ready
queue is then executed for one time quantum's worth of time. After this execution, the process is
removed from the front of the ready queue; if it still requires more time to complete, it is added
back to the end of the ready queue.

The ready queue holds each process at most once: a process already present in the queue is
never queued a second time, since holding duplicate entries would only add redundancy.

Once a process has completed its execution, it is not returned to the ready queue.
Advantages

The Advantages of Round Robin CPU Scheduling are:

1. A fair amount of CPU time is allocated to each job.


2. Because it does not depend on knowing burst times in advance, it can genuinely be
implemented in a real system.
3. It does not suffer from the convoy effect or the starvation problem that occur in the First
Come First Serve CPU scheduling algorithm.

Disadvantages

The Disadvantages of Round Robin CPU Scheduling are:

1. A small time quantum leads to frequent context switches and decreased CPU throughput.
2. The Round Robin approach spends extra time swapping contexts.
3. The time quantum has a significant impact on its performance.
4. Processes cannot be assigned priorities.

Examples:

S. No    Process ID    Arrival Time    Burst Time
1        P1            0               7
2        P2            1               4
3        P3            2               15
4        P4            3               11
5        P5            4               20
6        P6            4               9

Assume Time Quantum TQ = 5

Ready Queue:

1. P1, P2, P3, P4, P5, P6, P1, P3, P4, P5, P6, P3, P4, P5

Gantt chart (process, start–end):

| P1 0–5 | P2 5–9 | P3 9–14 | P4 14–19 | P5 19–24 | P6 24–29 | P1 29–31 | P3 31–36 | P4 36–41 | P5 41–46 | P6 46–50 | P3 50–55 | P4 55–56 | P5 56–61 | P5 61–66 |

Average Completion Time

1. Average Completion Time = ( 31 +9 + 55 +56 +66 + 50 ) / 6


2. Average Completion Time = 267 / 6
3. Average Completion Time = 44.5

Average Waiting Time

Waiting time is turnaround time minus burst time (WT = TAT − BT), giving 24, 4, 38, 42, 42,
and 37 for P1–P6:

1. Average Waiting Time = ( 24 + 4 + 38 + 42 + 42 + 37 ) / 6


2. Average Waiting Time = 187 / 6
3. Average Waiting Time = 31.16667

Average Turn Around Time

1. Average Turn Around Time = ( 31 + 8 + 53 + 53 + 62 + 46 ) / 6


2. Average Turn Around Time = 253 / 6
3. Average Turn Around Time = 42.16667
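The worked example above can be checked with a short simulation of the round-robin rule (arrivals that occur during a time slice join the queue before the preempted process is re-queued):

```python
from collections import deque

def round_robin(procs, tq):
    """procs: list of (name, arrival, burst). Returns {name: completion_time}."""
    procs = sorted(procs, key=lambda p: p[1])
    remaining = {name: burst for name, _, burst in procs}
    queue, done = deque(), {}
    t, i = 0, 0                       # current time; next process not yet admitted
    while len(done) < len(procs):
        while i < len(procs) and procs[i][1] <= t:    # admit new arrivals
            queue.append(procs[i][0]); i += 1
        if not queue:                 # CPU idle until the next arrival
            t = procs[i][1]
            continue
        name = queue.popleft()
        run = min(tq, remaining[name])
        t += run
        remaining[name] -= run
        while i < len(procs) and procs[i][1] <= t:    # arrivals during this slice
            queue.append(procs[i][0]); i += 1
        if remaining[name] == 0:
            done[name] = t
        else:
            queue.append(name)        # preempted: back to the end of the queue
    return done

procs = [("P1", 0, 7), ("P2", 1, 4), ("P3", 2, 15),
         ("P4", 3, 11), ("P5", 4, 20), ("P6", 4, 9)]
ct = round_robin(procs, 5)
avg_ct = sum(ct.values()) / len(ct)                       # 44.5
avg_tat = sum(ct[n] - a for n, a, _ in procs) / len(ct)   # 42.16667
print(ct, avg_ct, avg_tat)
```

The simulation reproduces the completion times from the example (P1 = 31, P2 = 9, P3 = 55, P4 = 56, P5 = 66, P6 = 50) and the average completion and turnaround times of 44.5 and 42.17.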
Cooperative scheduling

Cooperative scheduling is a type of scheduling where processes voluntarily yield control of the
CPU to other processes at specified points during their execution. Unlike preemptive scheduling,
where the operating system forcibly interrupts a process and allocates CPU time to another
process according to a scheduling algorithm, cooperative scheduling relies on processes
cooperating by giving up control.

loop forever
Read Queue
Process Data/Update Outputs
end loop
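The super-loop above can be generalized. In Python, cooperative tasks can be modeled as generators that voluntarily yield control back to a simple round-robin dispatcher (an illustrative sketch, not a real RTOS API; the task names are invented):

```python
def blinker(log):
    for i in range(2):
        log.append(f"blink {i}")
        yield                       # voluntarily give up the CPU

def sensor(log):
    for i in range(3):
        log.append(f"read {i}")
        yield

def run_cooperative(task_fns):
    """Run tasks round-robin; each runs until it yields or finishes."""
    log = []
    active = [fn(log) for fn in task_fns]
    while active:
        for task in list(active):
            try:
                next(task)          # resume until the next voluntary yield
            except StopIteration:   # task finished: drop it from the loop
                active.remove(task)
    return log

print(run_cooperative([blinker, sensor]))
```

Because the dispatcher never interrupts a task, a task that loops without yielding would stall every other task, which is exactly the "risk of unresponsiveness" discussed below.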

### Key Characteristics of Cooperative Scheduling:

1. **Voluntary Yielding:** Processes explicitly relinquish CPU control. This can happen when a
process reaches a certain point in its execution (e.g., after completing a task or during a wait
operation).

2. **No Preemption:** Once a process starts executing, it continues until it voluntarily yields
control. The operating system does not forcefully interrupt the process.

3. **Simplicity:** Cooperative scheduling is relatively simple to implement because it relies on
the cooperation of processes rather than complex scheduling algorithms.

4. **Potential for Deadlock:** If a process fails to yield control when required, it can lead to
deadlock or starvation of other processes waiting to execute.

### Advantages of Cooperative Scheduling:

- **Efficiency:** Since processes yield CPU control voluntarily, there is minimal overhead
associated with context switching.

- **Predictability:** The order of process execution can be more predictable because it depends
on the processes' cooperation rather than a scheduler's decisions.

### Disadvantages of Cooperative Scheduling:

- **Risk of Unresponsiveness:** If a process does not yield control properly, other processes
may be blocked indefinitely, leading to system unresponsiveness.

- **Fairness Concerns:** It heavily relies on processes behaving cooperatively. Non-cooperative
processes can monopolize the CPU, impacting fairness.

### Use Cases:


- **Legacy Systems:** Some older operating systems and embedded systems use cooperative
scheduling due to its simplicity and lower overhead.

- **Real-Time Systems:** In certain real-time applications where strict timing requirements are
essential, cooperative scheduling can be employed to ensure deterministic behavior.

### Examples:

- **Classic Mac OS:** Versions of the Macintosh operating system before Mac OS X used
cooperative scheduling.

- **DOS (Disk Operating System):** Early versions of DOS relied on cooperative scheduling
where applications had to yield control to the system when performing I/O operations or waiting
for user input.

Introduction to Semaphores

In embedded systems, semaphores play a crucial role in managing shared resources and
synchronizing tasks or threads that operate concurrently. Embedded systems are typically
constrained by limited resources such as memory, processing power, and often operate in real-
time environments where responsiveness and determinism are critical. Here’s an introduction to
how semaphores are used in embedded systems:

### Purpose of Semaphores in Embedded Systems:

1. **Resource Protection:**

- Embedded systems often have multiple tasks or threads that need to access shared
resources like peripherals (e.g., sensors, actuators), memory buffers, communication
interfaces (e.g., UART, SPI), or system-wide data structures.

- Semaphores are used to enforce mutual exclusion, ensuring that only one task or
thread accesses a shared resource at any given time. This prevents data corruption and
ensures data integrity.

2. **Task Synchronization:**

- In real-time embedded systems, tasks (or threads) often need to synchronize their
execution based on specific conditions or events.

- Semaphores provide a mechanism for tasks to wait for signals or events before
proceeding, thereby coordinating their execution and ensuring tasks complete their
operations in a synchronized manner.
3. **Interrupt Handling:**

- Embedded systems frequently handle interrupts from external devices or timers.

- Semaphores can be used in interrupt service routines (ISRs) to communicate with
tasks or synchronize access to shared data between ISRs and main application tasks.

### Types of Semaphores Used in Embedded Systems:

- **Binary Semaphore:** Often used to implement mutual exclusion where only one task
can access a resource at a time (e.g., controlling access to a shared hardware peripheral).

- **Counting Semaphore:** Useful for managing multiple instances of a shared resource
(e.g., multiple buffers or instances of a hardware driver).

### Implementation Considerations:

- **Efficiency:** Since embedded systems often operate with limited resources,
semaphore implementations should be efficient in terms of memory usage and processing
overhead.

- **Priority Inversion:** Careful consideration is required to avoid priority inversion
issues where a higher-priority task is blocked by a lower-priority task holding a
semaphore.

- **Real-Time Requirements:** Semaphores must be implemented in a way that meets
real-time constraints, ensuring timely response to events and minimal latency.

### Example Use Case:

Consider an embedded system controlling a robotic arm with multiple motors:

- Each motor control task needs to access shared memory containing motor positions.

- Semaphores can be used to ensure that only one task accesses the shared memory at a
time to update motor positions, preventing conflicts and ensuring accurate control of the
robotic arm.
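The robotic-arm scenario can be sketched with a binary semaphore guarding the shared position data. Python's `threading` module stands in for an RTOS here purely for illustration; the joint and task names are hypothetical.

```python
import threading

# Shared motor-position data, as in the robotic-arm example above.
positions = {"shoulder": 0}
position_sem = threading.Semaphore(1)    # binary semaphore: one holder at a time

def move(joint, steps):
    for _ in range(steps):
        position_sem.acquire()           # wait / P operation
        try:
            positions[joint] += 1        # critical section: update shared data
        finally:
            position_sem.release()       # signal / V operation

# Two motor-control "tasks" updating the same shared position concurrently.
threads = [threading.Thread(target=move, args=("shoulder", 1000)) for _ in range(2)]
for th in threads: th.start()
for th in threads: th.join()
print(positions["shoulder"])             # 2000: no updates were lost
```

Because every read-modify-write of the shared position happens while holding the semaphore, the two tasks cannot interleave inside the critical section and no increments are lost.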
Unit-5

Embedded systems application development:

Embedded systems application development involves creating software that runs on
embedded devices, which are specialized computer systems designed to perform dedicated
functions within larger systems or standalone products. Here's an overview of the process
and considerations involved in developing applications for embedded systems:

### 1. **Understanding Requirements:**

- **Functionality:** Define the specific tasks and functions the embedded system needs to
perform. This could range from controlling hardware peripherals (sensors, actuators) to
processing data, managing communication interfaces, and interacting with users.

- **Constraints:** Consider the limitations of embedded systems such as memory size,
processing power, real-time requirements, power consumption, and environmental factors
(temperature, humidity).

### 2. **Choosing Hardware and Software Platform:**

- **Microcontroller/Microprocessor Selection:** Select hardware components based on the
system requirements, including CPU performance, peripheral support, power efficiency,
and availability of development tools.

- **Operating System (OS) Selection:** Decide whether to use a real-time operating system
(RTOS) or develop the application without an OS (bare-metal programming). RTOSs like
FreeRTOS, uC/OS-II, or Linux-based systems provide scheduling, task management, and
device drivers, while bare-metal programming offers greater control over resources but
requires more effort in managing tasks and interrupts.

### 3. **Development Environment Setup:**

- **Toolchain:** Set up the development toolchain including compilers, debuggers, and
IDEs (Integrated Development Environments) suitable for the selected hardware and
software platform.

- **Hardware Interface:** Understand and interface with hardware peripherals such as
GPIOs, UART, SPI, I2C, ADC, DAC, timers, and interrupts using appropriate APIs or
drivers provided by the hardware manufacturer.

### 4. **Application Design and Implementation:**


- **Modular Design:** Break down the application into manageable modules or tasks
based on functionality and dependencies.

- **RTOS Tasks/Threads:** If using an RTOS, define tasks and manage their execution,
ensuring proper task prioritization and scheduling to meet real-time requirements.

- **Interrupt Handling:** Implement interrupt service routines (ISRs) to respond to
hardware events promptly and efficiently.

- **Power Management:** Optimize power consumption by implementing low-power
modes and efficient use of peripherals.

### 5. **Testing and Debugging:**

- **Unit Testing:** Verify the functionality of each module independently.

- **Integration Testing:** Test the integrated system to ensure all modules work together
as expected.

- **Hardware-in-the-Loop (HIL) Testing:** Validate the application on the target
hardware to ensure compatibility and performance.

- **Debugging Tools:** Use debuggers, emulators, and logging mechanisms to diagnose
and fix software issues.

### 6. **Deployment and Maintenance:**

- **Deployment:** Flash the compiled application onto the embedded device and ensure
proper initialization and startup sequences.

- **Field Updates:** Plan for software updates and patches, considering mechanisms for
firmware updates in the field (e.g., over-the-air updates or via physical access).

- **Documentation:** Maintain comprehensive documentation including hardware
schematics, software architecture, APIs, and user manuals for future reference and
support.

Objectives:

The Embedded Product Life Cycle (EDLC) encompasses various phases from concept to
retirement of an embedded system. Each phase has specific objectives aimed at ensuring the
successful development, deployment, and maintenance of the embedded product. Here’s an
overview of objectives in different phases and modeling of the Embedded Product Life Cycle:

### 1. **Concept Phase:**


- **Objective:** Define the purpose, feasibility, and initial requirements of the embedded
system.

### 2. **Requirements Phase:**

- **Objective:** Gather and document detailed functional and non-functional requirements
that guide the system design and development.

### 3. **Design Phase:**

- **Objective:** Develop a detailed design specification that meets the requirements and
constraints identified in the previous phases.

### 4. **Implementation Phase:**

- **Objective:** Translate the design into executable software and hardware components,
ensuring adherence to design specifications.

### 5. **Testing and Verification Phase:**

- **Objective:** Validate that the embedded system meets specified requirements and
functions correctly in its intended environment.

### 6. **Deployment Phase:**

- **Objective:** Deploy the embedded system into the target environment for operational
use.

### 7. **Maintenance and Support Phase:**

- **Objective:** Provide ongoing support, updates, and enhancements throughout the
system's operational lifespan.

### Modeling of Embedded Product Life Cycle (EDLC):

- **Waterfall Model:** Sequential phases from concept to deployment, each phase feeding
into the next.

- **Iterative Model:** Iterative development with cycles of requirements, design,
implementation, and testing.
- **V-Model:** Corresponds each phase of development (requirements, design, etc.) with a
testing phase, emphasizing validation and verification.

Each phase in the Embedded Product Life Cycle serves distinct objectives crucial for ensuring
the successful development, deployment, and maintenance of embedded systems. Effective
management of these objectives and activities throughout the EDLC is essential for delivering
reliable, efficient, and cost-effective embedded products.

Different Phases and Modeling of the Embedded Development Life Cycle (EDLC)


Today, almost every modern application comes with an embedded system. From the latest
smartphone, smartwatches, and automobiles to sophisticated home security alarms, medical
equipment, IoT products, and more, embedded systems exist within millions of simple and
complex products around us, enhancing our quality of life.

Embedded systems impact our daily life activities, interactions, and tasks—the way we spend
our time off, the way we travel, and the way we do business. With diverse applications in
communications, transportation, manufacturing, retail, consumer electronics, healthcare, and
energy, embedded systems have transformed how we interact with technology in our everyday
lives.

So, what is an embedded system? An embedded system/solution is a computer system—a combination of hardware and software—designed to perform a specific function within a larger mechanical or electrical system. Most embedded systems are based on programmable microcontrollers or processors. Typically, embedded systems have three main components: the hardware, the software, and the real-time operating system.

Launching a new embedded product can be exciting and challenging at the same time. In this
article, we’ll explain the general outline of the four different development life cycle phases of an
embedded system.

What is Embedded Development Life Cycle (or EDLC)?

An embedded (or IoT) product development life cycle is similar to the typical product
development life cycle for all software.

For building and developing a successful embedded product, following a well-defined embedded
system design and development life cycle is critical. It ensures high-quality products for end-
users, defects prevention within the product development phase, and maximized productivity for
a better ROI.

Here are the four stages of the development process of embedded systems:

1. Planning and Analysis
2. Design
3. Implementation
4. Deployment
Other than the fact that IoT solutions need network connectivity, the development life cycle
phases for embedded and IoT products are almost similar.

Please note that this is an optimal approach. Each of these 4 steps comprises sub-steps that may
vary or require some adjustments as per the project.

4 Different Steps of Embedded Development Life Cycle

Step #1 – Planning & Analysis


Define your business idea and demonstrate the concept feasibility

The first step in the product development life cycle is to clearly define your product idea that will
fulfill a market niche and address a problem. You must then perform the analysis to see if the
idea can transform into a viable concept before development is started.

Identify ‘the need’

The development life cycle of an embedded product should initiate as a response to a need. The
need may come from an individual, public, or company. Based on the need, a statement or
“concept proposal” is prepared, which should get approval from the senior management as well
as the funding agency.

The need can be visualized in any one of three types:

 New/custom product development: the need for a product that does not exist in the market or will
act as a competitor to an existing product.
 Product re-engineering: the need to reengineer a product already available by adding new
features or functionality.
 Product maintenance: the need to launch a new version of a product followed by a failure due to
non-functioning or to provide technical support for an existing product.
Define your target audience

A crucial yet often overlooked component of the embedded product development process is to
identify and define the target market of the product. When analyzing the potential target
audience, ask some of the following questions to yourself:

 Who will be the end-user of this product?
 What are the end user’s demographics such as gender, age, profession, etc.?
 When is the product used by the end-user? And how often?
 Why will the end-user need the product?

Only after carefully answering these questions can you determine your target audience and identify your market.

Work out the requirements

Before moving on to the development stage, you should use the data collected during the research of the target audience to define the product’s purpose, its functional model, and the required hardware and software.

Discover competitors or collaborators

Once you carefully define your market, start identifying who your potential competitors may be.
Acquaint yourself with their experience going through the development life cycle to learn and
choose a better approach for your project. Analyze your competitors’ products to anticipate end
users’ reactions to your final product.

Take time to carry out further market research to connect with potential collaborators and realize
how well your business idea will be received in the market.

Step #2 – Design
Choose a development approach for implementation of your idea

Before designing the prototype, you need to decide on the development approach so that your
idea can turn into a reality within budget.

Create product’s architecture

The designing process starts with developing the architecture of the product based on the specific
requirements gathered in the planning & analysis phase. The architecture should reflect software
and hardware components that will ensure the performance of target functions.
Identify the right tools & technologies

Carefully identify the technical resources needed to build a proof-of-concept that can be used for
market research, concept refinement, and investment pitches before moving to the
implementation stage.

A proof-of-concept is a small model that has the MVP (minimum viable product) features based
on development kits.

Types of Development Kits (for developing a proof-of-concept):


 Microcontroller kits
 Application processor kits
 Processor modules
 Breakout boards

These out-of-the-box, pre-built hardware platforms usually come with integrated software to kick
start a project.

It is important to pay attention to several features when deciding on a development board for
embedded processor, including the available peripherals, connectors, other communication
peripherals, and onboard sensors, as well as the mechanical form factor and mounting style in a
prototype enclosure.

Choose Embedded Software Development Tools

 coding languages (C, C++, JavaScript, Python, etc.)
 operating systems (Linux based or RTOS based)
 IDEs (PyCharm, WebStorm, Qt Creator), SDKs, compilers, debuggers, and more.
Pick a development team

At a minimum, an embedded product development team will need one each of the following:

 Embedded software engineer
 Hardware engineer
 Mechanical engineer
 PCB layout engineer

Depending on the project complexity and budget, you can decide if you need more than one of
each of these engineers. Additionally, you may also need experts with knowledge about security
management, cloud-based software development, and team management for embedded or IoT
product development.

Getting these technical resources to work under one roof for your project can be costly.
However, you can also hire embedded design & development engineers and outsource it to a
reliable service provider that specializes in embedded product development.
Step #3 – Implementation
Create a prototype realizing the design; test and improve the quality of the embedded solution

In the implementation stage, a prototype of the embedded product is created. Embedded software engineers also add new features and improve the quality of the product.

When your product’s hardware components like sensors and processors are integrated on a PCB
for the first time, we call those PCBs alpha prototypes. Generally, small functional issues may
occur which can be fixed with some tuning and adjustments.
As new features are added to the product, you finally get a production-intent design, which we
refer to as a beta prototype.

The product is then tested in the field to check if your solution is working as expected and
enhanced for quality based on end-user feedback.

Software developers will also consider the marketing feedback and further check if the solution
meets regulatory requirements, and ensure the embedded solution is secure, scalable, and
maintainable.

Step #4 – Deployment (Product Launch)


Launch a real-life product ready for mass production

Launching a new, fully functional model of the embedded product can be an exciting and
challenging time. In this stage of the product development life cycle, you need to procure the
hardware components and set up a manufacturing line where they will be placed on the PCB.
Procuring the parts and setting up the manufacturing facility typically require about 90 days’ notice, so start communicating with your manufacturer and order components up to three months in advance.

Make sure the first batch of boards is tested post-manufacturing to figure out any defects or
faults in the production process. Once tested, assemble them into their enclosure, do final testing
before boxing and then send them to the end-users. Don’t forget about post-production support
and maintenance as this is an important aspect of the embedded product development life cycle.

The above-discussed four-step process can help you successfully launch your own new
embedded product.

As technological advancements and movements like the IoT, Industry 4.0, and “smart” cities continue to gain ground, embedded system development will become a major area of innovation, expected to grow rapidly year over year. However, with the increased adoption of embedded systems, the complexity of embedded software has also increased, driving up the overall cost of embedded software development.

Outsourcing embedded software projects can be a great idea for SMEs to build their product
comprehensively while cutting costs and improving time to market. If you are looking to take the
embedded development off your plate, what you need is a reputed offshore embedded
development company that understands the challenges of embedded product development. When
you hire embedded design & development engineers with knowledge and expertise in all aspects
of embedded system design and development life cycle from such an offshore company, you
take your business to the next level.

2. Case study: Smart Card

The smart card is one of the most widely used embedded systems today. It is used as a credit card, debit bank card, e-wallet card, identification card, medical card (for history and diagnosis details), and in new, innovative applications. A smart card improves the convenience and security of any transaction, and it provides tamper-proof storage of user and account identity. Smart card systems have proven more reliable than other machine-readable cards, such as magnetic stripe and barcode cards. They also provide vital components of system security for the exchange of data across virtually any type of network, and they are cost-effective. Smart cards are used today in various applications, including healthcare, banking, entertainment, and transportation, with added security features that benefit all of these applications.

2.1 Smart Cards

Smart cards are plastic cards embedded with a microprocessor/microcontroller or memory chip that stores and transacts data. Based on application, smart cards are differentiated into two types:

● Identification and process based smart cards

● Identification based smart cards

Identification and process based smart cards are in turn of two types:

● Contact based smart cards: the chip is attached to the card material itself and makes physical contact with the reader (see figure: contact based smart card).

● Contactless smart cards: the card is not attached directly to the system; it communicates over a short-range radio link. Examples are RFID tags and USB smart cards (see figure: contactless smart card).

2.2 Smart Card Specifications


2.2.1 Embedded hardware components

The hardware components needed for a smart card are: a microcontroller or ASIP; RAM for storing temporary variables and the stack; OTP ROM for the application code and the RTOS code that schedules the tasks; flash memory for storing user data, user address, user identification codes, the card number, and the expiry date; a timer and interrupt controller; a carrier-frequency generating circuit and an ASK modulator for modulating the carrier according to the data; an interfacing circuit for the I/Os; and charge pumps for delivering power to the antenna for transmission and to the system circuits.

2.2.2 Embedded software components

The software components needed for the smart card system are boot-up, system initialization, and the embedded system features. A secure three-layered file system, the smart card secure file system, is needed for storing the files. Connection establishment and termination are provided over a TCP/IP port connection. A cryptographic algorithm is used for added features such as the host connection. The OS is stored in a protected part of the ROM. Host and card authentication are also needed. An optimum code size and restricted use of multidimensional arrays are needed to save memory.
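The host and card authentication mentioned above commonly follows a challenge-response pattern: the card proves knowledge of a shared secret without transmitting it. The sketch below shows only the protocol shape; the XOR-and-rotate "MAC" is a toy stand-in, since real cards use standardized ciphers (e.g., 3DES or AES), and all names here are illustrative assumptions.

```c
#include <stdint.h>
#include <assert.h>

/* TOY function -- not cryptography. Real cards use 3DES/AES per standards. */
static uint32_t toy_mac(uint32_t key, uint32_t challenge)
{
    uint32_t x = key ^ challenge;
    return (x << 7) | (x >> 25);   /* rotate left by 7 bits */
}

/* Card side: derive a response from the shared key and the host's nonce. */
uint32_t card_respond(uint32_t key, uint32_t challenge)
{
    return toy_mac(key, challenge);
}

/* Host side: recompute and compare; the key itself never crosses the link. */
int host_verify(uint32_t key, uint32_t challenge, uint32_t response)
{
    return toy_mac(key, challenge) == response;
}
```

A mismatched key or a tampered response fails verification, which is the property the real cryptographic exchange provides with far stronger guarantees.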

2.2.3 Smart Card system requirements

Purpose: It enables authentication and verification of the card and cardholder by a host, and enables a GUI at the host machine to interact with the cardholder/user for the required transactions, for example financial transactions with a bank or credit card transactions.

Inputs: An I/O port is required to receive headers and messages for the smart card system.

Internal signals, events and notifications: On power-up, radiation from the host supplies power to the smart card (activating the card). On activation, a reset_Task is initiated, which initializes the necessary timers and creates the tasks (task_ReadPort, task_PW, task_Appl) necessary to perform the other functions. task_ReadPort is responsible for sending and receiving messages and for starting and closing applications. task_PW is responsible for handling the passwords. task_Appl runs the actual application.

Output: Headers and messages are transmitted through the antenna at Port_IO.

Control panel: There is no control panel on the card. The control panel and GUIs are at the host machine (for example, at an ATM credit card reader).

Function of the system: First the card is inserted at a host machine. Radiation from the host then activates a charge pump at the card, and the charge pump powers the SoC (System on Chip). On power-up, the system reset signals reset_Task to start. All transactions between the cardholder/user and the host then take place through GUIs at the host control panel (screen, touch screen, or LCD display panel).

Design metrics: The following design metrics are used for the smart card case study.

● Power dissipation: The amount of heat generated while working must stay below a maximum tolerance.

● Code size: The system memory needed should not exceed 64 KB.

● Limited use of data types: Use of multidimensional arrays, long 64-bit integers, and floating-point types is restricted. The smart card should also support only limited use of error handlers, exceptions, signals, serialization, debugging, and profiling mechanisms.
● File management: There is either fixed-length or variable-length file management, with each file at a predefined offset. The file system stores the data using a three-layer mechanism (explained below).

● Microcontroller hardware: It generates distinct coded physical addresses for the program and logical addresses for the data, in a protected, once-writable memory space.

● Validity: The system is embedded with an expiry date, after which card authorization through the hosts is disabled.

● Extendability: The system expiry date is extendable by transactions and authorization of the master control unit.

● User interfaces: At the host machine, the user interface requirements are graphics on an LCD or touch-screen display and commands for cardholder (card user) transactions (within 1 s). Apart from these metrics, the manufacturing and engineering cost is also considered.

Test conditions and validations: The card must be tested with different host machine versions for fail-proof card-host communication.
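The password handling assigned to task_PW above typically maintains a retry counter that blocks the card after repeated failures (an unblocking P.I.N., mentioned in the hardware section below, can recover it). A minimal sketch of that logic follows; the limit of three attempts and the field names are illustrative assumptions, not values from the specification.

```c
#include <stdbool.h>
#include <assert.h>

#define MAX_RETRIES 3   /* assumed limit; real cards may differ */

typedef struct {
    int stored_pin;     /* held in EEPROM/flash on a real card */
    int retries_left;   /* decremented on each failed attempt */
    bool blocked;       /* card invalidated after too many failures */
} pin_state_t;

/* Returns true on success; blocks the card when retries are exhausted. */
bool verify_pin(pin_state_t *s, int entered_pin)
{
    if (s->blocked)
        return false;                   /* blocked card rejects everything */
    if (entered_pin == s->stored_pin) {
        s->retries_left = MAX_RETRIES;  /* reset counter on success */
        return true;
    }
    if (--s->retries_left == 0)
        s->blocked = true;              /* only the unblocking PIN recovers */
    return false;
}
```

After three consecutive wrong entries the card stays blocked even for the correct PIN, which is the behavior that makes brute-force guessing impractical.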

2.2.4 Smart Card hardware

The smart card hardware system (Figure 6: Smart Card Hardware System) consists of a plastic card built to the ISO standard containing an embedded SoC. The CPU in the hardware locks certain sections of memory, protecting 1 KB or more of data from modification or access by any external source or by instructions outside that memory. Another way of protecting the data is to have the CPU access it through physical addresses that differ from the logical addresses used in the program. The
EEPROM or Flash memory is needed to store P.I.N. (Personal Identification Number). It is also
used to store the unblocking P.I.N., access condition for the data files, card-user data, application
generated data, applications non-volatile data, and invalidation lock to invalidate the card after
the expiry date or server instruction. The ROM in smart card contains a Fabrication key (unique
security key for each card), Personalization key (this key is inserted after the testing phase and it
preserves the fabrication key; after that the RTOS only uses logical address), RTOS codes,
application codes, and a utilization lock. RAM is needed to store run-time temporary variables. The chip supply voltage is extracted by a charge-pump I/O system: it extracts charge from the signals sent by the host and generates a regulated voltage for the card chip, memory, and I/O system. The I/O systems of the chip and the host interact through an asynchronous UART at 9.6, 106, or 115.2 kbaud.
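The separation of logical and physical addresses described above can be sketched as a simple translation step that also fences off the protected region. The region size, the fixed offset, and the "access denied" sentinel below are illustrative assumptions, not details of any real card's memory map.

```c
#include <stdint.h>
#include <assert.h>

#define PROTECTED_BASE  0x0000u
#define PROTECTED_SIZE  0x0400u   /* 1 KB the CPU locks from external access */
#define PHYS_OFFSET     0x8000u   /* logical and physical spaces differ */

/* Map a logical address to its physical address.
 * Returns 0xFFFFFFFF when an unprivileged (external) access is refused. */
uint32_t translate(uint32_t logical, int privileged)
{
    if (!privileged &&
        logical >= PROTECTED_BASE &&
        logical <  PROTECTED_BASE + PROTECTED_SIZE)
        return 0xFFFFFFFFu;       /* external source: access refused */
    return logical + PHYS_OFFSET; /* program never sees physical addresses */
}
```

Because external code only ever works with logical addresses, it cannot name, let alone reach, the protected physical locations.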

2.2.5 Smart Card Software

Smart cards are among the most widely used secure SoC systems today, and they need cryptographic software. The embedded system in the card needs special OS features over and above those of MS-DOS or UNIX. It needs a protected environment in which the OS is stored, i.e., a protected part of the ROM, and a restricted run-time environment in which every method, class, and run-time library is scalable. It requires an optimum code size and limited use of data types and multidimensional arrays. It needs a three-layered file system for the data: a master file containing the header (the file status, access conditions, and file lock); dedicated files, each holding a file grouping and the headers of its immediate successors; and elementary files, each holding a file header together with its file data. File management may be fixed- or variable-length, with each file at a predefined offset. The OS should also have classes for networks, sockets, connections, datagrams, etc.
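The three-layered file structure just described (master file, dedicated file, elementary file) can be sketched in C as nested structures with a lookup by predefined offset. The field and type names are illustrative assumptions; real cards follow the layouts fixed by the applicable standards.

```c
#include <string.h>
#include <assert.h>

/* Header carried by every file, as described above. */
typedef struct {
    unsigned char status;        /* file status */
    unsigned char access_cond;   /* access conditions */
    unsigned char locked;        /* file lock flag */
} file_header_t;

/* Elementary file (EF): header plus the actual data. */
typedef struct {
    file_header_t header;
    unsigned short offset;       /* predefined offset of the file */
    unsigned short length;
    const char *data;
} elementary_file_t;

/* Dedicated file (DF): groups EFs and their headers. */
typedef struct {
    file_header_t header;
    elementary_file_t efs[4];
    int ef_count;
} dedicated_file_t;

/* Look up an EF by offset within a DF; NULL if absent or locked. */
const elementary_file_t *find_ef(const dedicated_file_t *df,
                                 unsigned short offset)
{
    for (int i = 0; i < df->ef_count; i++) {
        if (df->efs[i].offset == offset && !df->efs[i].header.locked)
            return &df->efs[i];
    }
    return 0;
}
```

The lock flag in the header is what lets the OS invalidate a single file (or, at the master-file level, the whole card) without erasing anything.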

3. Case study: Adaptive Cruise Control (ACC) in a car

### Introduction

Adaptive Cruise Control (ACC) is an advanced driver-assistance system (ADAS) that enhances
driving comfort and safety by automatically adjusting the vehicle's speed to maintain a safe
distance from vehicles ahead. It relies on sensors, actuators, and sophisticated control algorithms
to operate effectively.

### System Components

1. **Sensors**:

- **Radar Sensors**: These are essential for detecting vehicles in the car's vicinity. They emit
radio waves and measure the time it takes for them to bounce back, calculating distance and
relative speed.

- **Camera Systems**: Some ACC systems use cameras to detect lane markings and identify
vehicles ahead. These cameras provide additional data for the control algorithms to make
decisions.
2. **Control Algorithms**:

- **Distance Control**: Determines the safe following distance based on sensor inputs and
user settings. It calculates the desired acceleration or deceleration to maintain this distance.

- **Speed Regulation**: Adjusts the vehicle's speed smoothly by controlling throttle and
braking systems.

- **Integration with Braking System**: Interfaces with the car's braking system to apply
brakes when necessary, ensuring safe distance maintenance.

3. **Actuators**:

- **Throttle Control**: Adjusts the throttle opening to control acceleration or deceleration based on the calculated inputs from the control algorithms.

- **Brake Actuators**: Applies brakes as needed to slow down or maintain safe distances from
vehicles ahead.

4. **Embedded System Unit**:

- **Microcontroller/Microprocessor**: Controls the overall operation of the ACC system, processing sensor data, executing control algorithms, and sending commands to actuators.

- **Memory**: Stores control algorithms, sensor calibration data, and system parameters.

- **Interfaces**: Interfaces with sensors (radar, cameras), actuators (throttle, brakes), and the
vehicle's CAN bus for communication with other vehicle systems.
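The radar ranging and distance-control components above can be sketched in a few lines: range follows from the round-trip time of the radio wave, and a proportional law turns the gap error and closing speed into an acceleration command. The gains and acceleration limits are illustrative assumptions, not values from any production ACC.

```c
#include <assert.h>

#define SPEED_OF_LIGHT 299792458.0   /* m/s */

/* Range from radar round-trip time: d = c * t / 2
 * (halved because the wave travels out and back). */
double radar_range_m(double round_trip_s)
{
    return SPEED_OF_LIGHT * round_trip_s / 2.0;
}

/* Desired acceleration (m/s^2): proportional to the gap error, damped by
 * the closing speed, and clamped to comfortable limits. */
double acc_command(double gap_m, double desired_gap_m,
                   double closing_speed_mps)
{
    const double kp = 0.2, kv = 0.5;   /* illustrative gains */
    double a = kp * (gap_m - desired_gap_m) - kv * closing_speed_mps;
    if (a >  2.0) a =  2.0;            /* limit acceleration */
    if (a < -3.0) a = -3.0;            /* limit braking */
    return a;
}
```

With the gap at the set-point and no closing speed the command is zero; a large gap saturates at the acceleration limit, and a fast-closing lead vehicle saturates at the braking limit.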

### Operation Scenario

1. **Sensor Data Acquisition**:

- Radar sensors continuously emit and receive signals to detect nearby vehicles' positions and
speeds.

- Camera systems capture video frames to identify lane markings and vehicles.
2. **Data Processing**:

- Sensor data is processed in real-time by the embedded system to calculate distances, relative
speeds, and potential collision risks.

- Control algorithms determine the appropriate acceleration or braking commands based on the
desired speed set by the driver and the detected traffic conditions.

3. **Actuation**:

- Commands generated by the control algorithms are sent to actuators.

- Throttle control adjusts engine power to accelerate or decelerate smoothly.

- Brake actuators apply gradual braking if necessary to maintain a safe following distance.

4. **Driver Interaction**:

- The driver sets the desired speed and following distance using controls (e.g., steering wheel
buttons, touchscreen interface).

- The ACC system operates autonomously within these parameters, reducing the need for
constant manual speed adjustments.
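The smooth throttle adjustment described in the actuation step is often implemented as a rate limiter: the command sent to the actuator may only change by a bounded step per control cycle. This is a hedged sketch; the step size is an illustrative assumption.

```c
#include <assert.h>

/* Move `current` toward `target`, changing by at most `max_step`
 * per call (i.e., per control cycle). */
double rate_limit(double current, double target, double max_step)
{
    double delta = target - current;
    if (delta >  max_step) delta =  max_step;
    if (delta < -max_step) delta = -max_step;
    return current + delta;
}
```

Called once per cycle, this turns an abrupt set-point change into a gradual ramp, which is what keeps acceleration and braking comfortable for the occupants.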

### Safety Considerations

- **Redundancy**: The system incorporates redundancy in sensors and actuators to ensure reliability and safety in diverse driving conditions (e.g., adverse weather, low visibility).

- **Fail-safe Mechanisms**: In case of sensor failure or critical system malfunction, the ACC
system is designed to revert control back to the driver or activate emergency braking to prevent
collisions.
4. Case study: Mobile phone software for key inputs

### Introduction

Embedded systems play a crucial role in mobile phones by managing key inputs from users,
including touchscreens, physical buttons, and other sensors. These systems ensure seamless
interaction between users and the device, translating physical inputs into digital commands that
drive various functionalities of the phone.

### System Components

1. **Touchscreen Interface**:

- **Capacitive Touch Sensors**: Embedded systems interpret touch gestures (tap, swipe,
pinch) using capacitive sensors integrated into the touchscreen.

- **Touch Controller**: A microcontroller or ASIC manages touch data processing, converting analog signals from touch sensors into digital coordinates and recognizing multi-touch inputs.
2. **Physical Buttons and Controls**:

- **Power Button**: An embedded controller interprets presses to turn the device on/off and
manage sleep/wake functions.

- **Volume Buttons**: Embedded systems process button presses to adjust audio volume
levels.

- **Navigation Buttons**: On some devices, embedded systems interpret physical buttons for
navigation and control purposes.

3. **Proximity and Ambient Light Sensors**:

- **Proximity Sensor**: Used for detecting when the phone is held to the ear during calls to
turn off the screen and save power.

- **Ambient Light Sensor**: Adjusts screen brightness based on environmental lighting conditions to optimize visibility and conserve battery life.

4. **System-on-Chip (SoC)**:

- **Processor**: Executes firmware and software responsible for interpreting inputs and
controlling device operations.

- **Integrated Circuits**: Manage power, communication, and data processing tasks within the
phone.

5. **Embedded Software and Firmware**:

- **Device Drivers**: Interface with hardware components to translate inputs into commands
that applications and the operating system can understand.

- **Input Handling Algorithms**: Determine how inputs from various sensors and buttons are
interpreted and processed to trigger specific actions or events.

- **Low-Level System Management**: Control power states, manage interrupts, and handle
resource allocation to optimize performance and battery life.
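The low-level input handling described above typically includes debouncing of physical buttons, so that the electrical bounce of a mechanical switch is not reported as multiple presses. A minimal counting debouncer follows; the stability threshold of three samples is an illustrative assumption.

```c
#include <assert.h>

#define DEBOUNCE_SAMPLES 3   /* assumed stability threshold */

typedef struct {
    int stable_state;   /* last debounced level (0 = released, 1 = pressed) */
    int counter;        /* consecutive samples differing from stable_state */
} debounce_t;

/* Feed one raw sample; returns 1 exactly when a new press is registered. */
int debounce_step(debounce_t *d, int raw)
{
    if (raw == d->stable_state) {
        d->counter = 0;                /* any agreement resets the count */
        return 0;
    }
    if (++d->counter >= DEBOUNCE_SAMPLES) {
        d->stable_state = raw;         /* signal has been stable: accept it */
        d->counter = 0;
        return d->stable_state == 1;   /* report rising edge only */
    }
    return 0;
}
```

A single bouncing sample resets the counter, so only a level held for three consecutive samples changes the reported state.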
### Operation Scenario

1. **Touchscreen Interaction**:

- User touches or swipes the screen.

- Capacitive sensors detect touch locations and movements.

- Embedded software processes touch events, determining the type of gesture (tap, swipe,
pinch) and triggering corresponding actions (opening apps, scrolling content).

2. **Physical Button Inputs**:

- User presses physical buttons (e.g., power, volume).

- Embedded controllers interpret button presses and send signals to the operating system or
applications to perform functions such as turning the device on/off, adjusting volume levels, or
taking screenshots.

3. **Sensor-Based Inputs**:

- **Proximity Sensor**: Detects when the phone is near the user's face during calls, turning off
the display to prevent accidental touches.

- **Ambient Light Sensor**: Adjusts screen brightness based on the surrounding light
conditions, enhancing user experience and saving battery life.
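The gesture determination step above (tap vs. swipe) can be sketched as a classifier over a completed stroke's total displacement and duration. The pixel and millisecond thresholds are illustrative assumptions, not values from any phone's firmware.

```c
#include <assert.h>

typedef enum { GESTURE_TAP, GESTURE_SWIPE, GESTURE_NONE } gesture_t;

/* Classify a finished stroke from its start/end coordinates (pixels)
 * and its duration. Squared distance avoids floating point and sqrt. */
gesture_t classify(int x0, int y0, int x1, int y1, int duration_ms)
{
    long dx = x1 - x0, dy = y1 - y0;
    long dist2 = dx * dx + dy * dy;      /* squared displacement */
    if (dist2 < 10L * 10L && duration_ms < 300)
        return GESTURE_TAP;              /* short, nearly stationary contact */
    if (dist2 >= 50L * 50L)
        return GESTURE_SWIPE;            /* large displacement, any direction */
    return GESTURE_NONE;                 /* ambiguous: ignore */
}
```

Real touch stacks track the full sequence of sampled points (enabling pinch and multi-touch recognition), but the same threshold idea underlies the simple cases.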

### Safety and Reliability Considerations

- **Input Validation**: Embedded systems ensure that inputs are validated to prevent
unintended actions or errors caused by accidental touches or button presses.

- **Hardware Redundancy**: Some critical inputs, like power buttons, may have redundancy to
ensure the device can be powered on or off even if one mechanism fails.
- **Error Handling**: Robust firmware and software handle errors gracefully, preventing
crashes or system instability due to unexpected inputs or sensor malfunctions.
