embedded system.docs
Digital watches
Washing machines
Toys
Televisions
Digital phones
Laser printers
Cameras
Industrial machines
Electronic calculators
Automobiles
Medical equipment
3. Real-Time Requirements:
- Embedded Systems: Many embedded systems require real-time operation, meaning they
must respond to external events within strict timing constraints. This is critical in applications
like automotive control systems, medical devices, and industrial automation where timing and
reliability are paramount.
- General Computing Systems: Real-time operation is not a primary concern for general-
purpose computing systems. They prioritize tasks based on scheduling algorithms but do not
typically require deterministic timing guarantees.
5. Examples:
- Embedded Systems: Examples include automotive control units, industrial PLCs
(Programmable Logic Controllers), medical devices (like pacemakers and insulin pumps),
consumer electronics (like smart watches and IoT devices), and more.
- General Computing Systems: Examples include desktop computers, laptops, servers, and
smart phones, which are capable of running a wide range of applications and tasks.
So while both embedded systems and general computing systems involve computing
technology, they are tailored for different purposes, environments, and constraints. Embedded
systems prioritize specific functionality, real-time performance, and efficiency within a
dedicated application context, whereas general computing systems offer versatility,
multitasking capabilities, and support for a broad range of applications in diverse computing
environments.
The history of embedded systems traces back to the mid-20th century and has evolved
significantly alongside advancements in electronics, computing, and technology. Here are key
milestones and developments in the history of embedded systems:
1940s - 1950s: Early Developments:
The roots of embedded systems can be traced to the era of early computers and electronic
control systems. One notable example is the Harvard Mark I, an electromechanical computer
developed during World War II.
During this period, industrial automation and early electronic control systems began to emerge,
employing simple embedded systems for tasks such as process control and monitoring.
1960s - 1970s: Rise of Microprocessors:
The introduction of microprocessors in the early 1970s revolutionized embedded systems. The
Intel 4004 microprocessor, released in 1971, marked a significant milestone by integrating all
essential computing functions on a single chip.
This era saw the development of early embedded systems for applications like industrial
control, automotive electronics (e.g., engine control units), and early consumer electronics.
1980s - 1990s: Expansion and Diversification:
The 1980s witnessed rapid advancements in microcontroller technology, which integrated
microprocessors with additional peripherals (e.g., memory, I/O ports) on a single chip. This
made embedded systems more powerful and cost-effective.
Embedded systems found widespread adoption in various industries, including
telecommunications (e.g., modems), automotive (e.g., anti-lock braking systems), aerospace
(e.g., flight control systems), and consumer electronics (e.g., video game consoles, handheld
devices).
2000s - Present: Proliferation and Connectivity:
The 21st century saw the proliferation of embedded systems enabled by advancements in
semiconductor technology, miniaturization, and connectivity (IoT - Internet of Things).
Embedded systems became increasingly interconnected, forming the backbone of IoT
applications. This led to the development of smart devices, wearable technology, home
automation systems, and industrial IoT (IIoT) solutions.
Modern embedded systems continue to evolve with advancements in real-time operating
systems (RTOS), low-power design, wireless communication protocols (e.g., Wi-Fi, Bluetooth,
LoRa), and embedded software development tools.
Emerging Trends:
Current trends in embedded systems include the integration of artificial intelligence (AI) and
machine learning (ML) algorithms for enhanced decision-making and automation.
Security and reliability remain critical concerns, especially with the rise of connected devices
and potential vulnerabilities in IoT ecosystems.
The demand for embedded systems continues to grow across diverse sectors, driven by
advancements in autonomous vehicles, robotics, healthcare devices, and smart infrastructure.
Classification of Embedded systems
Embedded systems can be classified into various categories based on different criteria such as
performance, complexity, real-time requirements, and application domains. Here are common
classifications of embedded systems:
- Small-scale embedded systems: These systems typically have limited processing power and
memory, designed for simple control functions. Examples include microcontrollers used in
household appliances, toys, and simple industrial controls.
- Medium-scale embedded systems: These systems are more capable than small-scale
systems and may include microprocessors with additional peripherals. They are used in
applications like automotive electronics (e.g., engine control units), consumer electronics (e.g.,
digital cameras), and medical devices.
- Large-scale embedded systems: These are complex systems with significant computational
power, often featuring high-performance processors and advanced software architectures.
Examples include embedded systems in aerospace and defense (e.g., avionics systems),
telecommunications (e.g., base stations), and industrial automation (e.g., robotic systems).
1. **Automotive**:
- **Engine Control**: Embedded systems manage fuel injection, ignition timing, and other
engine parameters for optimal performance and efficiency.
- **Safety Systems**: Anti-lock braking systems (ABS), electronic stability control (ESC),
airbag deployment systems, and collision avoidance systems rely on embedded systems for
real-time decision-making and response.
2. **Consumer Electronics**:
- **Smartphones and Tablets**: Embedded systems handle user interfaces, multimedia
playback, connectivity (Wi-Fi, Bluetooth), and sensor integration (gyroscopes,
accelerometers).
- **Digital Cameras**: Embedded systems control image processing, autofocus, exposure,
and other camera functionalities.
3. **Industrial Automation**:
- **PLCs (Programmable Logic Controllers)**: Embedded systems monitor and control
machinery and processes in manufacturing environments, ensuring precise operation and
coordination.
- **SCADA Systems**: Embedded systems are used in supervisory control and data
acquisition systems for real-time monitoring and control of industrial processes.
4. **Medical Devices**:
- **Patient Monitoring Systems**: Embedded systems track vital signs, administer
medication (e.g., infusion pumps), and provide alarms/alerts for healthcare professionals.
- **Implantable Medical Devices**: Pacemakers, insulin pumps, neurostimulators, and other
implants rely on embedded systems for controlling therapeutic interventions and monitoring
patient conditions.
6. **Telecommunications**:
- **Base Stations**: Embedded systems manage signal processing, network protocols, and
data routing in cellular and wireless communication networks.
- **Networking Equipment**: Routers, switches, and modems use embedded systems for
network management and data transmission.
9. **Energy Management**:
- **Smart Grids**: Embedded systems help manage electricity generation, distribution, and
consumption efficiently.
- **Renewable Energy Systems**: Embedded systems optimize the operation of solar
panels, wind turbines, and energy storage systems.
4. **Compact Size and Integration**: Embedded systems are often compact and integrated
into the device or system they control. This integration reduces physical footprint and
complexity, which is particularly advantageous in applications with space constraints or where
multiple functions need to be consolidated.
5. **Reliability and Stability**: Embedded systems are engineered for high reliability and
stability, minimizing the risk of failure or malfunction. This is crucial in critical applications
such as medical devices, automotive systems, and industrial automation where system
downtime or errors can have significant consequences.
7. **Security and Safety**: Embedded systems often include security features to protect
against unauthorized access and ensure data integrity. In safety-critical applications like
medical devices or automotive systems, embedded systems play a vital role in ensuring
operational safety and compliance with regulatory standards.
8. **Cost-Effectiveness**: By focusing on specific functions and optimizing resources,
embedded systems can be cost-effective solutions compared to general-purpose computing
platforms. This is particularly beneficial in mass-produced consumer electronics, industrial
automation, and other high-volume applications.
Embedded systems are basically designed to regulate a physical variable (such as the
temperature in an air conditioner) or to manipulate the state of some device by sending signals
to the actuators or devices connected to the output ports (such as the magnetron in a
microwave oven), in response to the input signals provided by the end users or by sensors
connected to the input ports.
Embedded systems typically consist of several key elements that work together to perform
specific functions within a larger system or device. These elements include:
2. **Memory**: Embedded systems require memory to store program instructions (code) and
data. This includes:
- **RAM (Random Access Memory)**: Used for temporary data storage during program
execution.
- **ROM (Read-Only Memory)**: Stores permanent data and program instructions that are
essential for the system's operation, typically including the bootloader and firmware.
7. **Sensors and Actuators**: Embedded systems interface with the physical world through
sensors (e.g., temperature sensors, motion detectors) to gather input data, and through actuators
(e.g., motors, valves) to perform physical actions based on processed data.
8. **User Interface**: In many embedded systems, a user interface is provided to interact with
the device or system. This may include displays (LCD, LED), buttons, touch screens, or
communication interfaces (USB, Ethernet) for configuration, monitoring, and control.
10. **Security Features**: Depending on the application, embedded systems may include
security mechanisms such as encryption, secure boot, and authentication to protect against
unauthorized access, data breaches, or tampering.
These elements collectively define the architecture and functionality of embedded systems,
enabling them to perform specific tasks efficiently and reliably in diverse applications ranging
from consumer electronics and automotive systems to industrial automation and medical
devices.
Unit-2
Communication buses in embedded systems:
In embedded systems, communication buses play a crucial role in enabling various components
and peripherals to exchange data and control signals efficiently. Here are some common
communication buses used in embedded systems:
Onboard communication interfaces:
Inter-Integrated Circuit (I2C):
Inter-Integrated Circuit (I2C) is a serial communication protocol. It is effective for on-board
communication with sensors and modules, but not for longer, board-to-board links. The bus
transmits information bidirectionally between connected devices using only two wires. With
them, you can address up to 128 devices on the bus (7-bit addressing, with some addresses
reserved) while maintaining a clear communication pathway between them. I2C is ideal for
projects that require many different parts to work together (e.g., sensors, port expanders, and
drivers). Achievable I2C speed depends on the data rate, the quality of the wires, and the
amount of external noise. The two-wire interface is used to connect low-speed devices such as
ADC converters, microcontrollers, I/O interfaces, etc.
Working Principle of I2C
I2C has two lines: SCL, the serial clock line, and SDA, the serial data line. The clock line
SCL synchronizes the data transmission; data bits are sent and received on the SDA line.
The master device initiates the transfer and generates the clock signal; the devices it
addresses are the slaves.
On the bus, the master/slave and transmitter/receiver roles do not always coincide: the
direction of data transfer depends on the transaction. If the master wants to send data to a
slave, it first addresses that slave and then transmits. Likewise, if the master wants to receive
data, it addresses the slave, the slave transmits, and the master terminates the process after
receiving the data.
Additionally, the master generates the timing clock that paces each transfer. Both lines are
open-drain and must be connected to the power supply through pull-up resistors, so both lines
are at a high level when the bus is idle.
After each transmitted byte, the receiving device acknowledges successful transmission by
sending an ACK bit. The master device sends a stop condition to end the transmission: SCL
switches high before SDA switches to high.
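To make the addressing step concrete, here is a minimal, hardware-independent sketch (the function name and the example address 0x48 are illustrative, not from the text) of how the master forms the first byte sent after the start condition: the 7-bit slave address combined with the R/W bit.

```python
def i2c_address_byte(addr7, read):
    """Build the first byte a master sends after the START condition:
    the 7-bit slave address in bits 7..1, R/W flag in bit 0
    (1 = master reads from the slave, 0 = master writes to it)."""
    if not 0 <= addr7 <= 0x7F:
        raise ValueError("I2C addresses are 7 bits")
    return (addr7 << 1) | (1 if read else 0)

# Hypothetical sensor at address 0x48:
write_byte = i2c_address_byte(0x48, read=False)  # 0x90: master will transmit
read_byte = i2c_address_byte(0x48, read=True)    # 0x91: slave will transmit
```

The addressed slave pulls SDA low during the ninth clock (the ACK bit) to confirm it recognized this byte.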
Clock Synchronisation
Each master generates its own clock signal on the SCL line, and these clocks must be
synchronized when multiple masters are present. Data on the SDA line must remain stable
while the clock is high, so it is valid only during the high period of the clock.
Mode of Transmission
In addition to the 100 kbit/s standard mode, I2C defines two faster transmission modes: fast
mode and high-speed mode.
Fast Mode
Devices transfer data at rates up to 400 kbit/s. A slower device can stretch the low period of
the SCL signal to slow the transmission down and stay synchronized.
High-speed Mode
In this mode, information is transmitted at up to 3.4 Mbit/s. It is backward compatible with
the slower modes.
Pros
Cons
I2C is a comparatively slow protocol, in part because the open-drain bus with pull-up
resistors limits signal rise times.
The pull-up resistors also take up board space.
The architecture becomes more complex as the number of devices increases.
The protocol is half-duplex, so data can flow in only one direction at a time.
I2C in Microcontrollers
One example is a Seeed 16-bit ADC module compatible with the Raspberry Pi, used when a
more precise ADC is required in the circuit.
I2C Driver/Adapter
It is an easy-to-use, open-source tool for controlling I2C devices, compatible with all
operating systems. It offers a built-in color screen that provides a live dashboard of I2C
activity: when an I2C driver is connected to the I2C bus, it displays the traffic on the screen.
It can also help to debug and troubleshoot I2C issues.
I2C Arduino
I2C communication between two Arduino boards is also possible. It is used for short-distance
communication and relies on a synchronized clock pulse. It is typically used when other
sensors and devices need to send information to the master.
PCF8574
It provides general-purpose remote I/O expansion via the two-wire bidirectional I2C bus.
Serial Peripheral Interface (SPI):
Serial Peripheral Interface (SPI) is a synchronous serial communication interface. In
embedded systems, it is used for short-distance communication. SPI is a full-duplex protocol:
it allows simultaneous data exchange, both transmission and reception. PIC, ARM, and AVR
controllers are among those that provide an SPI interface.
The master-slave architecture of SPI has a single master device, usually a microcontroller,
while the slaves are peripherals such as a GSM modem, sensors, a GPS receiver, etc.
SPI uses four wires: MISO, MOSI, SS, and CLK. These wires carry the communication
between the master and slave devices; the master device both reads and writes data.
The SPI serial bus allows multiple slaves to interface with the master device. The SPI
protocol's major benefit is its speed, so it is used where speed is crucial. The protocol's
applications include SD cards, display modules, etc.
SPI supports two communication modes: point-to-point and standard. In point-to-point mode,
a single controller talks to a single slave, while in standard mode a single master controller
can communicate with multiple slave devices by enabling their chip-select lines.
The first method selects each device using CS, the chip-select line. Each device needs its own
unique chip-select line.
The second method is daisy chaining. In this method, the devices are connected in series: the
data-out line of one device feeds the data-in line of the next.
In principle an SPI bus can connect to any number of devices, but the available chip-select
lines limit the connection in hardware. An SPI interface provides efficient, simple, point-to-
point communication without addressing operations, allowing full-duplex communication.
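The full-duplex exchange described above can be modeled in software. The sketch below (a simulation, not driver code for any particular controller) shows how, over eight clocks, the master and slave shift registers trade bits MSB first, so each side sends and receives a byte simultaneously:

```python
def spi_transfer(master_byte, slave_byte):
    """Simulate one 8-clock SPI exchange: on every clock the master
    shifts a bit out on MOSI while the slave shifts a bit out on MISO,
    so both sides send and receive a full byte at the same time."""
    master_reg, slave_reg = master_byte, slave_byte
    for _ in range(8):
        mosi = (master_reg >> 7) & 1  # master's MSB goes out on MOSI
        miso = (slave_reg >> 7) & 1   # slave's MSB goes out on MISO
        master_reg = ((master_reg << 1) | miso) & 0xFF
        slave_reg = ((slave_reg << 1) | mosi) & 0xFF
    return master_reg, slave_reg      # the bytes each side received

# After the exchange, each side holds the byte the other side sent:
assert spi_transfer(0xA5, 0x3C) == (0x3C, 0xA5)
```

This is why a master that only wants to read must still clock out dummy bits: reception happens only while the clock is running.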
Pros
Cons
SPI Driver/Adapter
It is one of the easiest tools for controlling SPI devices. It is compatible with all operating
systems. A live logic analyzer displays the SPI traffic on the screen. The operating voltage of
the SPI driver is 3.3 V–5 V.
MCP3008
It is a 10-bit ADC with 8 channels. It connects to the Raspberry Pi over an SPI serial
connection.
A master Arduino and a slave Arduino can communicate using the SPI serial communication
interface. The main aim is to communicate over a short distance at high speed.
| Feature | I2C Communication Protocol | SPI Communication Protocol |
| --- | --- | --- |
| Number of wires | 2 (SDA and SCL) | 4 (MOSI, MISO, SCK, and SS) |
| Multi-master configuration | Yes | No |
| Synchronous communication | Yes | Yes |
| Arbitration | Yes | No |
In embedded systems, the One-Wire (1-Wire) bus is a communication protocol and bus system
that allows devices to communicate and receive power over a single wire. It was developed by
Dallas Semiconductor, which is now part of Maxim Integrated. Here’s an overview of the One-
Wire bus system and its characteristics:
1. **Single Wire Communication**: As the name suggests, the One-Wire bus requires only one
signal wire for communication. This significantly simplifies the wiring required for connecting
multiple devices in an embedded system.
2. **Power over Data Line**: One notable feature of the One-Wire protocol is that it can
provide power to connected devices over the same single wire used for communication. This is
achieved using a technique called "parasite power", where devices can draw power during
communication intervals.
- **Flexibility**: Devices can be easily added or removed from the bus without complex
addressing schemes.
- **Identification and Authentication**: Devices such as RFID tags and memory devices use the
One-Wire protocol for identification and authentication purposes.
- **Data Logging**: One-Wire devices like EEPROMs can store and retrieve data from
embedded systems.
- **DS2401 Silicon Serial Number**: Provides a unique 64-bit serial number to identify each
device on the One-Wire bus.
- **Timing Requirements**: Proper timing and synchronization between master and slave
devices are crucial for reliable communication.
- **Parasitic Powering**: Devices must support or manage parasitic powering if they are to be
powered solely from the One-Wire bus.
A 1-wire IC can extract operating power from a serial-data signal by means of an internal power-
supply circuit consisting of a diode and a capacitor. When the data line is logic high, some extra
current is used to charge the capacitor, and then the diode prevents the capacitor from
discharging when the data line is logic low.
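ROM codes such as the DS2401's 64-bit serial number end in a CRC-8 byte computed with the Dallas/Maxim polynomial x^8 + x^5 + x^4 + 1, which the master can recompute to verify that the ID was read correctly. A minimal sketch of that check (the example ROM bytes here are made up):

```python
def onewire_crc8(data):
    """Dallas/Maxim CRC-8 over an iterable of bytes, LSB first
    (polynomial x^8 + x^5 + x^4 + 1, reflected form 0x8C)."""
    crc = 0
    for byte in data:
        for _ in range(8):
            mix = (crc ^ byte) & 0x01
            crc >>= 1
            if mix:
                crc ^= 0x8C
            byte >>= 1
    return crc

# Made-up 56-bit body of a ROM code: family code + serial number
rom_body = [0x01, 0xB2, 0x0E, 0x6A, 0x14, 0x00, 0x00]
rom_code = rom_body + [onewire_crc8(rom_body)]  # device appends the CRC byte
assert onewire_crc8(rom_code) == 0  # a valid 8-byte ROM code checks to zero
```

The handy property used in the last line is that running the CRC over the data plus its own CRC byte yields zero, so the master needs only one comparison.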
Parallel interface:
In embedded systems, a parallel interface refers to a method of communication where data is
transferred simultaneously over multiple wires (or lines) between devices. This contrasts with
serial communication, where data is transmitted sequentially over a single wire or pair of wires.
Here's an overview of parallel interfaces in embedded systems, including their characteristics,
applications, and considerations:
1. **Multiple Data Lines**: Parallel interfaces typically use a set of data lines (e.g., 8, 16, 32, or
more) to transfer data simultaneously. Each line carries a different bit of the data word being
transmitted.
3. **Higher Data Rates**: Parallel interfaces can achieve higher data transfer rates compared to
serial interfaces because they transmit multiple bits of data in parallel. This makes them suitable
for applications requiring high-speed data communication.
4. **Wider Bus Width**: The number of data lines (bus width) determines the size of the data
word that can be transferred in one cycle. For example, an 8-bit parallel interface transfers 8 bits
of data simultaneously.
5. **Address and Control Lines**: In addition to data lines, parallel interfaces often include lines
for addressing (selecting devices on the bus) and control signals (e.g., read/write signals,
handshaking signals).
- **Memory Interfaces**: Parallel interfaces are commonly used for interfacing with memory
devices such as RAM (Random Access Memory) and ROM (Read-Only Memory) in embedded
systems. This allows for fast access and retrieval of data.
- **Display Interfaces**: Many embedded systems use parallel interfaces to drive LCD (Liquid
Crystal Display) panels or other types of graphical displays. This requires transferring a large
amount of data (pixel information) quickly to update the display.
- **Simultaneous Transfer**: Data bits are transferred simultaneously, reducing latency and
improving overall system performance for tasks like data acquisition or video processing.
- **Direct Memory Access (DMA)**: Parallel interfaces often support DMA, allowing
peripherals to transfer data directly to and from memory without CPU intervention, further
enhancing system efficiency.
- **Power Consumption**: Parallel interfaces may consume more power compared to serial
interfaces due to the larger number of data lines actively transferring data simultaneously.
- **Crosstalk and Signal Integrity**: Maintaining signal integrity is crucial in parallel interfaces
to prevent crosstalk and ensure reliable data transmission, especially at high speeds.
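The throughput advantage of a wider bus can be made concrete with a simplified model that ignores protocol overhead and assumes equal clock rates on both links:

```python
def cycles_needed(total_bits, lines):
    """Clock cycles to move total_bits over a bus carrying `lines`
    bits per cycle (a serial link has lines = 1)."""
    return -(-total_bits // lines)  # ceiling division

# Moving one 32-bit word at the same clock rate:
assert cycles_needed(32, 8) == 4   # 8-bit parallel bus: 4 cycles
assert cycles_needed(32, 1) == 32  # single serial line: 32 cycles
```

In practice the gap is smaller than this model suggests, because skew, crosstalk, and signal-integrity limits cap parallel clock rates while serial links can be clocked much faster.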
RS-232
In RS-232, 'RS' stands for Recommended Standard. It defines serial communication using
DTE and DCE signals, where DTE refers to Data Terminal Equipment and DCE refers to
Data Communication Equipment. A typical DTE device is a computer, and a typical DCE
device is a modem. Formally, RS-232 is specified as the interface between DTE equipment
and DCE equipment using serial binary data exchange.
Communication between DTE and DCE
The DTE (computer) transmits the information serially to the other end equipment DCE
(modem). In this case, DTE sends binary data “11011101” to DCE and DCE sends binary data
“11010101” to the DTE device.
RS232 describes the common voltage levels, electrical standards, operation mode and number of
bits to be transferred from DTE to DCE. This standard is used for transmission of information
exchange over the telephone lines.
The working of RS-232 can be understood from the protocol format. RS-232 is a point-to-point
asynchronous communication protocol, so each data line carries data in a single direction and
no clock line is required to synchronize the transmitter and receiver. The data format begins
with a start bit, followed by 7 data bits, a parity bit, and a stop bit, sent one after another.
Protocol Format
RS232 Framing
The transmission begins by sending a start bit '0'. This is followed by 7 bits of ASCII data. A
parity bit is appended to this data for receiver-side validation: the parity recomputed at the
receiver should match the one sent by the transmitter. Finally, the transmission is ended with a
stop bit, represented by binary '1'. Generally, 1 or 2 stop bits can be sent.
In the diagram above, the ASCII character 'A' is sent as a serial binary stream of '1's and '0's.
Between characters there may be a delay; this is considered inactive time, during which the
RS-232 line idles in the negative logic state (around −12 V).
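The framing just described can be sketched as a small function that turns one character into the bit sequence placed on the line. The 7-data-bit, even-parity, one-stop-bit configuration used here is just one common choice:

```python
def rs232_frame(ch):
    """Frame one ASCII character for RS-232: start bit (0),
    7 data bits sent LSB first, even parity bit, one stop bit (1)."""
    code = ord(ch) & 0x7F
    data_bits = [(code >> i) & 1 for i in range(7)]  # LSB first on the line
    parity = sum(data_bits) % 2                      # even parity
    return [0] + data_bits + [parity, 1]

# 'A' is 0x41 = 1000001 in binary, so the LSB-first data bits are 1,0,0,0,0,0,1
assert rs232_frame('A') == [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
```

Both ends must agree on this configuration (baud rate, data bits, parity, stop bits) in advance, since no clock is transmitted.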
Console Interfaces: Many embedded systems use RS-232 for console communication,
allowing developers to monitor and control the device using a terminal or computer.
Peripheral Communication: RS-232 interfaces are used to connect various peripherals
such as modems, printers, barcode scanners, and serial devices to embedded systems.
Programming and Debugging: RS-232 is often used for programming and debugging
embedded systems during development, enabling firmware updates and diagnostic
outputs.
Industrial Automation: RS-232 interfaces are prevalent in industrial automation
systems for connecting PLCs (Programmable Logic Controllers), HMI (Human-Machine
Interface) devices, and sensors.
Limitations of RS-232:
Voltage Levels: The wide range of voltage levels used in traditional RS-232
implementations can pose compatibility issues with modern devices that operate at lower
voltage levels.
Signal Integrity: Maintaining signal integrity over long distances or in noisy
environments may require additional measures such as shielding and proper grounding.
Speed Limitations: RS-232 has limitations in data transfer speed compared to more
modern serial communication standards like USB and Ethernet, typically operating up to
speeds of 115,200 bps (bits per second) in standard implementations.
RS485
Despite the wide variety of modern alternative solutions, today RS-485 technology remains the
basis of many communication networks. The major advantages of RS-485 interface are:
4. **Speed and Distance**: RS-485 supports higher data rates than RS-232, typically
ranging from 100 kbps to 10 Mbps, depending on cable length and environment. It can
transmit data over distances up to 1.2 kilometers (4,000 feet) at lower speeds, making it
suitable for industrial and commercial applications.
5. **Common Mode Voltage Range**: RS-485 has a wide common mode voltage range,
typically from -7V to +12V, allowing it to tolerate ground potential differences and noise
levels commonly found in industrial environments.
- **Data Acquisition Systems**: RS-485 interfaces are used in data acquisition systems
where multiple sensors or measurement devices are connected to a central data acquisition
unit.
- **Telecommunication**: RS-485 is used in telecommunications equipment for
communication between network devices, such as routers, switches, and modems.
- **Noise Immunity**: Differential signaling and balanced lines provide high noise
immunity, making RS-485 suitable for operation in electrically noisy environments.
- **Longer Cable Runs**: RS-485 supports longer cable runs compared to other serial
communication standards like RS-232, making it ideal for applications spread over large
areas.
- **Multi-Drop Capability**: Multiple devices can be connected to the same bus, reducing
wiring complexity and cost in systems with multiple sensors or control points.
- **High Data Rates**: RS-485 supports higher data rates than RS-232, enabling faster
communication speeds in applications requiring real-time data exchange.
- **Termination and Biasing**: Proper termination and biasing of the RS-485 bus are
critical to ensure signal integrity and reliable communication, especially over long cable
lengths.
- **Protocol Implementation**: While RS-485 defines the physical layer, protocols for data
framing, error checking, and addressing must be implemented at the application level.
- **Power Consumption**: RS-485 transceivers typically consume more power than RS-
232 transceivers, especially when driving long cables or operating at high speeds.
USB
The USB protocol is a common interface that allows communication between a host controller,
such as a PC or smartphone, and different peripheral devices like mice, digital cameras,
printers, keyboards, media devices, scanners, flash drives, and external hard drives.
The Universal Serial Bus is intended to allow hot swapping and enhance plug-and-play. Plug-
and-play allows the OS to discover and configure a new peripheral device spontaneously
without restarting the computer, whereas hot swapping means a peripheral device can be
removed and replaced without rebooting.
There are different types of USB connectors available on the market, of which Type A and
Type B are the most frequently used. More recently, Mini-USB, Micro-USB, and USB-C
connectors have largely replaced the older types.
Pin Configuration
The typical Type-A USB connector is used in various applications and includes the 4 pins
given below. This type of connector is seen mostly when connecting devices to a PC, because
it is the standard four-pin USB connector, with the 4 pins arranged in a row inside the shell.
The pins of Type A USB are indicated with color wires to perform a particular function.
Pin1 (VBUS): It is a red color wire, used for providing power supply.
Pin2 (D-): It is a differential pair pin available in white color, used for connectivity of USB.
Pin3 (D+): It is a differential pair pin available in green color, used for connectivity of USB.
Pin4 (GND): It is a Ground pin, available in black color.
Of the above pins, D+ and D− together carry the data as a differential pair: for one logic level
D+ is driven high while D− is low, and for the other level the polarity is reversed.
In this architecture, I/O devices are connected to the computer through devices called hubs. A
hub is the connecting point between the I/O devices and the computer. The root hub connects
the whole structure to the host computer. The I/O devices in this architecture are a keyboard,
mouse, speaker, camera, etc.
The USB protocol works on the polling principle: the processor continuously checks whether
an input/output device is ready to transmit data. Thus, the I/O devices do not have to notify
the processor of their condition, because it is the processor's responsibility to check
continuously. This keeps USB low-cost and simple.
Whenever a new device is attached to a hub, it is initially addressed as '0'. At regular
intervals, the host computer polls the hubs to obtain their status, which lets the host know
when I/O devices are attached to or detached from the system.
Once the host becomes aware of the new device, it learns the device's capabilities by reading
the data held in a specific memory area of the device's USB interface, so that the host can use
a suitable driver to communicate with the device. After that, the host allocates an address to
the new device, which is written to a device register. With this mechanism, USB provides
plug-and-play features.
This feature simply allows the host to identify the new available I/O device automatically once
the device is connected. The I/O capacities of the devices will be determined by host software.
Another feature of the USB protocol is “hot-pluggable” which means, the I/O device is
connected or removed from the host system without doing any shutdown or restart. So your
system runs continuously when the I/O device is connected or detached.
The USB protocol can also support isochronous traffic, where data is transmitted at preset
intervals of time. Unlike ordinary asynchronous transfers, isochronous transfers guarantee
delivery timing, which makes them suitable for streaming data.
To carry isochronous traffic, the root hub transmits a sequence of bits over the USB that
marks the start of the isochronous data, and the actual data is transmitted after this sequence.
USB 2.0 Standard
USB 2.0 is a hi-speed USB with a maximum data transfer speed of 480 Mbps. It supports all
connector types.
The maximum length of the cable is 5 meters.
Its maximum charging power is up to 15 W.
USB 3.2 Standard
USB 3.2 (Generation 1) is a SuperSpeed USB with a maximum data transfer speed of 5 Gbps.
It supports different connectors: USB 3 USB-A, USB 3 USB-B, and USB-C.
The maximum cable length for this USB is 3 meters.
Its maximum charging power is up to 15 W.
USB 3.2 (Generation 2)
USB 3.2 (Generation 2) is also a SuperSpeed USB standard, with a maximum data transfer speed of 10 Gbps.
The maximum cable length for this USB is 1 meter.
It also supports different connectors: USB 3 USB-A, USB 3 USB-B, and USB-C.
Its maximum charging power is up to 100 W.
USB 3.2 Generation 2×2
USB 3.2 Generation 2×2 is a SuperSpeed USB standard with a maximum data transfer speed of 20 Gbps.
The maximum cable length for this USB is 1 meter.
It uses the USB-C connector.
Its maximum charging power is up to 100 W.
Thunderbolt 3 Standard
Thunderbolt 3 offers up to 40 Gbps of maximum data transfer speed.
The maximum cable length is 2 meters for active cables and 0.8 meters for passive cables.
It uses the USB-C connector.
Its maximum charging power is up to 100 W.
USB 4 Standard
USB4 is based on the Thunderbolt 3 protocol and offers up to 40 Gbps of maximum data transfer speed.
The maximum cable length is 2 m for active cables and 0.8 m for passive cables.
It uses the USB-C connector.
Its maximum charging power is up to 100 W.
USB Protocol Timing Diagram
The timing diagram of the USB protocol, shown below, is mainly used in the engineering field to illustrate the logic levels on the USB wires along a timeline. USB uses Non-Return-to-Zero Inverted (NRZI) encoding, an efficient transmission method: a logic 0 produces a transition on the line, while a logic 1 produces no transition. As time progresses you can observe the resulting on/off sequence.
USB Timing Diagram
The diagram also shows bit stuffing. Because a long run of logic 1s produces no transitions under NRZI, the transmitter inserts an extra 0 after every six consecutive 1s so that the receiver can stay synchronized; without it, the USB receiver could not synchronize on data containing many 1s. The receiving hardware detects the stuffed bit and discards it. Bit stuffing adds overhead to USB, but it ensures reliable transfer.
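Bit stuffing and NRZI encoding can be sketched in a few lines. This is an illustrative model, not a production USB implementation:

```python
# Sketch of USB-style bit stuffing and NRZI encoding (illustrative only).

def bit_stuff(bits):
    """Insert a 0 after every six consecutive 1s, as a USB transmitter does."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 6:          # six 1s in a row: stuff a 0 to force a transition
            out.append(0)
            run = 0
    return out

def nrzi_encode(bits, level=1):
    """NRZI: a logic 0 toggles the line level, a logic 1 leaves it unchanged."""
    out = []
    for b in bits:
        if b == 0:
            level ^= 1        # transition on logic 0
        out.append(level)     # no transition on logic 1
    return out

data = [1, 1, 1, 1, 1, 1, 1, 0]    # seven 1s: needs one stuffed bit
stuffed = bit_stuff(data)
print(stuffed)                      # [1, 1, 1, 1, 1, 1, 0, 1, 0]
print(nrzi_encode(stuffed))
```

The stuffed 0 guarantees a line transition at least once every seven bit times, which is what keeps the receiver's clock locked.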
Effective communication between the host and its devices is essential. When a peripheral device is connected to the computer through USB, the computer detects what type of device it is and automatically loads a driver that allows the device to function.
Data is transmitted between the two devices in small units called packets, each of which carries a unit of digital information. The data transfers that can occur within the USB protocol are discussed below.
Message Format
The data of the USB protocol is transmitted within packets, least-significant bit (LSB) first. There are four main types of USB packets: Token, Data, Handshake, and Start of Frame. Every packet is built from several field types, which are shown in the following message format diagram.
Sync Field
Every packet begins with a sync field, which is used to synchronize the clocks of the transmitter and receiver. This field is 8 bits long at low and full speed and 32 bits long at high speed; in a Hi-Speed USB system, synchronization requires 15 KJ pairs followed by 2 Ks to make up the 32 bits. The final 2 bits indicate where the PID field begins.
Address Field
The address field of the USB protocol indicates which device a packet is destined for. Its 7-bit length allows up to 127 devices to be supported. Address 0 is never assigned to a configured device, because any device that has not yet been allocated an address must respond to packets sent to address 0.
Endpoint Field
The endpoint field within the USB protocol is 4-bits long & allows for extra flexibility within
addressing. Usually, these are divided for the data moving IN/OUT. Endpoint ‘0’ is a special
case called as the CONTROL endpoint & each device includes an endpoint 0.
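The 7-bit address and 4-bit endpoint fields above can be illustrated by packing them into the 11-bit payload of a token packet. The helper functions are illustrative, not taken from any USB stack:

```python
# Pack/unpack the 11-bit address+endpoint payload of a USB token packet
# (helper names are illustrative).

def pack_token_fields(address, endpoint):
    assert 0 <= address <= 127, "address field is 7 bits (0-127)"
    assert 0 <= endpoint <= 15, "endpoint field is 4 bits (0-15)"
    return address | (endpoint << 7)    # 7 address bits, then 4 endpoint bits

def unpack_token_fields(payload):
    return payload & 0x7F, (payload >> 7) & 0x0F

payload = pack_token_fields(address=9, endpoint=1)
print(unpack_token_fields(payload))     # (9, 1)
```

The 7-bit mask `0x7F` is exactly why the address space tops out at 127 usable devices.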
Data Field
The length of the data field is not fixed; it ranges from 0 to 8192 bits and is always an integral number of bytes.
CRC Field
Cyclic Redundancy Checks (CRCs) are performed on the data in the packet payload: token packets include a 5-bit CRC, and data packets include a 16-bit CRC. The CRC-5 is five bits long and is used by token packets as well as the Start-of-Frame packet.
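A bitwise sketch of the token-packet CRC-5 (polynomial x^5 + x^2 + 1, initial remainder all ones, complemented output, as USB specifies). This is an illustrative implementation, not code from any real USB stack:

```python
# Bitwise CRC-5 as used for USB token packets: polynomial x^5 + x^2 + 1
# (0b00101), initial remainder all ones, output complemented.

def crc5(bits):
    """Compute a USB-style CRC-5 over a sequence of payload bits."""
    poly, crc = 0b00101, 0b11111
    for bit in bits:
        # XOR the incoming bit with the current top bit of the remainder.
        if bit != ((crc >> 4) & 1):
            crc = ((crc << 1) ^ poly) & 0b11111
        else:
            crc = (crc << 1) & 0b11111
    return crc ^ 0b11111    # USB transmits the complemented remainder

token_bits = [1, 0, 0, 1, 0, 0, 0] + [1, 0, 0, 0]  # 7 address + 4 endpoint bits
print(crc5(token_bits))
```

Because the polynomial has a nonzero constant term, flipping any single one of the 11 input bits always changes the CRC, which is how the receiver detects a corrupted token.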
EOP Field
Every packet is terminated by an End-of-Packet (EOP) field, which consists of a single-ended zero (SE0) for 2 bit times followed by the J state for 1 bit time.
Synchronization Issues
Developers commonly face a number of synchronization issues while building USB devices; these are also called USB communication errors, and some of them can cause system failures.
IEEE 1394 FireWire
High-Speed Data Transfer: IEEE 1394 FireWire provides high-speed data transfer rates,
which is advantageous in embedded systems where large amounts of data need to be transferred
quickly. It supports data rates of 100, 200, 400, and even 800 Mbps, depending on the version.
Isochronous Data Transfer: FireWire supports isochronous data transfer, which is essential
for real-time applications in embedded systems such as audio and video streaming. This ensures
consistent timing and delivery of data packets, crucial for maintaining quality in multimedia
applications.
Availability of Controllers: FireWire controllers and chipsets are available from various
vendors, making it feasible to integrate FireWire into custom embedded designs with off-the-
shelf components.
How it works
FireWire, as defined in IEEE 1394, uses 64-bit device addresses. FireWire cables use two
twisted-pair wires for data transmission and two wires for power.
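The 64-bit addressing mentioned above combines a 10-bit bus ID and a 6-bit node (physical) ID into a 16-bit node identifier, followed by a 48-bit offset into that node's address space. A small sketch of packing and unpacking such an address (the helper names are illustrative):

```python
# Pack/unpack an IEEE 1394 64-bit address: 10-bit bus ID, 6-bit node ID,
# 48-bit offset (helper names are illustrative).

def pack_1394_address(bus_id, node_id, offset):
    assert 0 <= bus_id < 1024 and 0 <= node_id < 64 and 0 <= offset < 2**48
    return (bus_id << 54) | (node_id << 48) | offset

def unpack_1394_address(addr):
    return (addr >> 54) & 0x3FF, (addr >> 48) & 0x3F, addr & (2**48 - 1)

addr = pack_1394_address(bus_id=1, node_id=2, offset=0x1000)
print(unpack_1394_address(addr))    # (1, 2, 4096)
```

The 6-bit node ID is what limits a single FireWire bus to 63 addressable devices plus a broadcast address.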
A backplane interface: Runs at speeds between 12.5 and 50 Mbps for bus connections
within a computer system.
A point-to-point interface: Runs at speeds of 98.304 Mbps (S100 specification),
196.608 Mbps (S200), and 393.216 Mbps (S400) for connecting devices to computers
using serial cables.
Devices: Typically have 3 ports but can have up to 27 ports and can be daisy-chained up
to 16 devices.
Splitters: Provide extra IEEE 1394 ports if needed to accommodate the number and
arrangement of devices used.
Repeaters: Overcome distance limitations in IEEE 1394 cables.
Bridges: Isolate traffic within a specific portion of an IEEE 1394 bus.
FireWire
FireWire connections have a maximum distance of 4.5 meters, but up to 16 components can be
daisy-chained to a maximum distance of 72 meters without using repeaters.
Applications
1. Digital Cameras and Camcorders: Used for fast transfer of high-resolution photos and
videos to computers.
2. External Hard Drives: Provides quick data transfer speeds for storing and accessing
large files.
3. Audio Interfaces: Used in recording studios for high-quality audio input and output.
4. Industrial Automation: Connects sensors and controllers for real-time monitoring and
control.
5. Medical Imaging: Transfers high-resolution medical images swiftly between devices.
6. Aerospace Systems: Sends critical data and video feeds onboard aircraft.
7. Consumer Electronics: Previously used in DVRs and high-definition TVs for peripheral
connections.
8. Networking: Connects computers in a peer-to-peer or daisy-chain configuration.
The cable's two twisted pairs, TPA and TPB, carry differential signals between FireWire devices, ensuring reliable data transmission over the bus. The TPA and TPB lines are part of the physical layer of the FireWire protocol and are essential for maintaining signal integrity and minimizing electromagnetic interference.
IrDA
Introduction:
IrDA (Infrared Data Association) is a type of personal area network communication that uses infrared light.
IrDA Applications:
Data transfer between a laptop and a mobile phone when the two come into the vicinity and line of sight of each other's IR receivers and detectors.
Sending a document from a notebook computer to a printer.
Exchanging business cards between handheld PCs.
Synchronizing schedules and telephone books between desktop and notebook computers.
Point-and-shoot, peer-to-peer communication is a main characteristic of this protocol.
IrDA Protocol Layers:
The IrDA protocol stack comprises several layers:
Application Layer
Session Layer
IrLMP IAS (Information Access Service)
IrTinyTP
IrLMP
Physical Layer
Application Layer:
In the application layer, protocol security plays a vital role.
Sync (PIM), Object Push (PIM), and binary file transfer are the functions provided by this layer.
Session Layer:
IrOBEX, IrLAN, IrBus, IrMC, IrTran, and IrComm are present in this layer to perform different tasks.
IrTinyTP:
Segmentation and reassembly take place in this layer.
It provides connections to IrLMP.
IrLMP:
It multiplexes data from multiple applications and provides exclusive link access.
It provides ad-hoc connections between peers.
Physical Layer:
This layer provides half-duplex (alternating-direction) access.
Its range is about 1 m, or 10 cm for low-power LEDs.
Different modes: Synchronous PPM, Synchronous Serial, Asynchronous Serial.
Session and Transport IrDA Protocols:
IrLAN is used for infrared LAN access.
IrBus is used for accessing the serial bus by joysticks, keyboards, mice, and game ports.
IrMC provides mobile communication and telephony protocols.
IrTran is a transport protocol for image file transfers.
IrComm is used to emulate serial (e.g., RS-232C COM) or parallel ports.
Bluetooth
Bluetooth Modes:
1. Classic Bluetooth:
o Used for data-intensive applications like audio streaming.
o Supports various profiles such as A2DP (Advanced Audio Distribution Profile)
for audio streaming and HFP (Hands-Free Profile) for hands-free communication.
2. Bluetooth Low Energy (BLE):
o Designed for low-power, short-range communication in applications like fitness
trackers, smartwatches, and IoT devices.
o Operates in connection-oriented and connectionless modes.
1. Bluetooth Modules:
o Many embedded systems use pre-built Bluetooth modules that encapsulate the
Bluetooth functionality.
o These modules often come with their own firmware, making integration into an
embedded system more straightforward.
2. Microcontroller Integration:
o Some microcontrollers have built-in Bluetooth capabilities.
o Bluetooth functionality may be implemented through software libraries and APIs
provided by the microcontroller manufacturer.
3. Software Stacks:
o Implementing Bluetooth requires a software stack that manages the protocol
layers.
o Stack implementations may be provided by Bluetooth SIG (Special Interest
Group) or customized for specific applications.
4. Power Management:
o Bluetooth Low Energy is designed to be power-efficient, allowing devices to
operate on battery power for extended periods.
5. Application Development:
o Application developers interface with the Bluetooth stack to enable specific
functionalities.
o APIs provided by the Bluetooth stack allow developers to interact with lower-
level Bluetooth features.
Challenges:
1. Interference:
o The 2.4 GHz band is shared with other wireless technologies, leading to potential
interference.
2. Security Concerns:
o Implementing robust security measures is crucial to prevent unauthorized access
and data interception.
3. Compatibility:
o Ensuring compatibility with various Bluetooth versions and profiles can be
challenging.
A Bluetooth embedded system involves the integration of hardware components, firmware, and
software stacks to enable wireless communication between devices. The choice of Bluetooth
version (Classic or BLE) depends on the specific requirements of the application, balancing
factors such as data throughput, power consumption, and range.
Piconet
A piconet includes one master device, and the number of connected devices is limited to 8 (one master and up to 7 active slaves). Because only a few devices are active at a time, channel bandwidth usage stays low. A piconet is suitable for devices within a small area.
Scatternet
A scatternet is a network that connects multiple piconets using Bluetooth: a device acting as master in one piconet can participate as a slave in another. This allows more devices to be interconnected than a single piconet supports.
Differences
The figure given below depicts the piconet and scatternet together −
Zigbee
Introduction of ZigBee
ZigBee is a low-rate wireless personal area network technology based on IEEE 802.15.4, the standard produced by IEEE 802.15 Task Group 4. It is a home networking technology: a technological standard created for control and sensor networks. The ZigBee specification itself was created by the ZigBee Alliance.
ZigBee is an open, global, packet-based protocol designed to provide an easy-to-use architecture for secure, reliable, low-power wireless networks. Flow or process control equipment can be placed anywhere and still communicate with the rest of the system. It can also be moved, since the network does not care about the physical location of a sensor, pump, or valve.
IEEE 802.15.4 defines the PHY and MAC layers, whereas ZigBee takes care of the upper layers.
ZigBee is a standard that addresses the need for very low-cost implementation of low-power devices with low data rates for short-range wireless communication.
IEEE 802.15.4 supports star and peer-to-peer topologies. The ZigBee specification supports star
and two kinds of peer-to-peer topologies, mesh and cluster tree. ZigBee-compliant devices are
sometimes specified as supporting point-to-point and point-to-multipoint topologies.
Zigbee Device Types:
Zigbee Coordinator Device: It communicates with routers. This device is used for
connecting the devices.
Zigbee Router: It is used for passing the data between devices.
Zigbee End Device: It is the device that is going to be controlled.
Zigbee Network Topologies:
Star Topology (ZigBee Smart Energy): Consists of a coordinator and several end devices; end devices communicate only with the coordinator.
Mesh Topology (Self-Healing): Consists of one coordinator, several routers, and end devices.
Tree Topology: In this topology, the network consists of a central node (the coordinator), several routers, and end devices. The function of the routers is to extend network coverage.
Architecture of Zigbee:
Zigbee Applications:
1. Home Automation
2. Medical Data Collection
3. Industrial Control Systems
4. Meter Reading Systems
5. Light Control Systems
6. Commercial
7. Government Markets Worldwide
8. Home Networking
Unit-3
Software development tools encompass a wide range of software applications and utilities
designed to aid developers in creating, maintaining, and debugging software. These tools serve
different purposes throughout the software development lifecycle, from initial design and coding
to testing, deployment, and maintenance. Here's an overview of the categories and types of
software development tools commonly used:
2. **Assembler**:
- An assembler is a software tool that converts assembly language code into machine code
or object code. Assembly language consists of mnemonic instructions that are specific to a
particular processor architecture. The assembler translates these human-readable
instructions into binary machine code that can be directly executed by the computer's
CPU. It handles tasks such as assigning memory addresses, managing labels and symbols,
and generating executable code from assembly source files.
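As a toy illustration of the label handling and address assignment described above, here is a minimal two-pass assembler for a made-up instruction set. The mnemonics and encoding are invented for the example, not any real ISA:

```python
# Minimal two-pass assembler for an invented 3-instruction ISA:
# LOAD n, JMP label, HALT. Each instruction occupies one word.

OPCODES = {"LOAD": 0x10, "JMP": 0x20, "HALT": 0x30}

def assemble(lines):
    # Pass 1: assign an address to each instruction; record label addresses.
    symbols, instructions, addr = {}, [], 0
    for line in lines:
        line = line.strip()
        if line.endswith(":"):           # a label definition, e.g. "loop:"
            symbols[line[:-1]] = addr
        elif line:
            instructions.append(line)
            addr += 1
    # Pass 2: emit machine words, resolving label references via the table.
    code = []
    for inst in instructions:
        parts = inst.split()
        op, operand = parts[0], parts[1] if len(parts) > 1 else None
        if operand in symbols:
            operand = symbols[operand]   # resolve label to its address
        word = (OPCODES[op] << 8) | (int(operand) if operand is not None else 0)
        code.append(word)
    return code

program = ["loop:", "LOAD 5", "JMP loop", "HALT"]
print([hex(w) for w in assemble(program)])   # ['0x1005', '0x2000', '0x3000']
```

Two passes are needed because a jump may reference a label that is only defined further down in the source.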
3. **Compiler**:
 - A compiler is a software tool that translates source code written in a high-level
 programming language (such as C or C++) into object code or machine code for a target
 processor. It typically performs lexical analysis, parsing, semantic checks, optimization,
 and code generation, reporting errors in the source program along the way.
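A compiler translates high-level source into lower-level instructions. As a toy illustration, the following sketch compiles arithmetic expression trees into instructions for a small stack machine; the node format and instruction names are invented for the example:

```python
# Toy compiler: translate expression trees into stack-machine instructions
# (PUSH/ADD/MUL are invented for this illustration).

def compile_expr(node):
    """node is either a number or a tuple (op, left, right)."""
    if isinstance(node, (int, float)):
        return [("PUSH", node)]
    op, left, right = node
    code = compile_expr(left) + compile_expr(right)
    code.append(("ADD",) if op == "+" else ("MUL",))
    return code

def run(code):
    """A matching stack-machine interpreter, to check the generated code."""
    stack = []
    for inst in code:
        if inst[0] == "PUSH":
            stack.append(inst[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if inst[0] == "ADD" else a * b)
    return stack[0]

expr = ("+", 2, ("*", 3, 4))   # represents 2 + 3 * 4
code = compile_expr(expr)
print(code)   # [('PUSH', 2), ('PUSH', 3), ('PUSH', 4), ('MUL',), ('ADD',)]
print(run(code))   # 14
```

Real compilers do far more (parsing text, type checking, optimizing), but the translate-then-execute split shown here is the essence of the tool.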
4. **Linker**:
- A linker is a utility that combines object code generated by a compiler with libraries,
modules, and runtime components to produce an executable program or shared library.
During compilation, source code is translated into object code, which contains references to
external functions and variables. The linker resolves these references, ensuring that all
necessary components are linked together to create a coherent executable file. It also
performs address binding, symbol resolution, and generates the final machine code ready
for execution.
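The symbol resolution a linker performs can be sketched with invented "object module" dictionaries rather than any real object-file format:

```python
# Toy linker: merge object modules, assign final addresses, and patch
# unresolved symbol references (the module format is invented for illustration).

def link(modules):
    # Lay modules out back to back and build a global symbol table.
    symbols, bases, base = {}, [], 0
    for mod in modules:
        bases.append(base)
        for name, offset in mod["exports"].items():
            symbols[name] = base + offset    # symbol's final absolute address
        base += len(mod["code"])
    # Patch each unresolved reference with the symbol's final address.
    image = []
    for mod, mod_base in zip(modules, bases):
        code = list(mod["code"])
        for offset, name in mod["relocs"]:
            if name not in symbols:
                raise NameError(f"undefined symbol: {name}")
            code[offset] = symbols[name]
        image.extend(code)
    return image

main_o = {"code": [0x10, None], "exports": {"main": 0}, "relocs": [(1, "helper")]}
lib_o  = {"code": [0x30],       "exports": {"helper": 0}, "relocs": []}
print(link([main_o, lib_o]))    # [16, 2, 48] -> the None slot patched with addr 2
```

The "undefined symbol" error raised here is the toy counterpart of the familiar `undefined reference` message a real linker prints.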
5. **Simulator**:
- A simulator is a software tool that models the behavior of hardware systems or software
components without using the actual physical hardware. It allows developers to test and
debug applications in a controlled environment, simulating different scenarios and inputs.
Simulators are commonly used in embedded systems development, where testing on real
hardware may be impractical or costly. They provide insights into system performance,
timing behavior, and interaction with peripherals, helping developers identify and fix
issues before deployment.
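A minimal instruction-set simulator in this spirit, for an invented two-register machine (everything here is illustrative):

```python
# Tiny instruction-set simulator for an invented machine with registers A and B.
# Instructions: ("LDA", n) loads A, ("ADD", n) adds to A, ("STA",) copies A
# into B, ("HLT",) stops execution.

def simulate(program):
    regs = {"A": 0, "B": 0}
    pc = 0                       # program counter, advanced instruction by instruction
    while pc < len(program):
        inst = program[pc]
        pc += 1
        if inst[0] == "LDA":
            regs["A"] = inst[1]
        elif inst[0] == "ADD":
            regs["A"] += inst[1]
        elif inst[0] == "STA":
            regs["B"] = regs["A"]
        elif inst[0] == "HLT":
            break
    return regs

print(simulate([("LDA", 2), ("ADD", 3), ("STA",), ("HLT",)]))  # {'A': 5, 'B': 5}
```

Because the machine state is just a dictionary, it can be inspected after any instruction, which is exactly the visibility a simulator gives developers when real hardware is unavailable.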
6. **Debugger**:
- A debugger is a software tool that allows developers to monitor, control, and analyze the
execution of a program. It helps identify and resolve bugs (errors) by providing features
such as breakpoints, stepping through code, inspecting variables and memory, and
evaluating expressions. Debuggers enable developers to track the flow of program
execution, understand runtime behavior, and diagnose issues effectively. They are essential
for software development, ensuring code correctness and optimizing performance.
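The mechanism behind breakpoints and single-stepping can be illustrated with Python's own tracing hook, `sys.settrace`, which is the facility the standard `pdb` debugger is built on (the traced function below is just an example):

```python
# Demonstrate the tracing hook that debuggers like pdb are built on:
# sys.settrace delivers an event for each function call, executed line,
# and return.
import sys

events = []

def tracer(frame, event, arg):
    # Record which function and line number is executing, like a
    # debugger's single-step view.
    events.append((event, frame.f_code.co_name, frame.f_lineno))
    return tracer              # keep tracing inside this frame

def demo():                    # an example function to step through
    x = 1
    x += 2
    return x

sys.settrace(tracer)
result = demo()
sys.settrace(None)             # detach the "debugger"

print(result)                  # 3
print([e[0] for e in events])  # the call/line/return events observed
```

A real debugger does the same thing, but instead of appending to a list it pauses at each event, shows the source line, and lets you inspect variables in `frame.f_locals`.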
These software development tools collectively support different stages of the software
development lifecycle, from writing and testing code to debugging and deploying
applications. They empower developers to create robust, efficient, and reliable software
solutions by providing the necessary tools and environments for development, testing, and
optimization. Each tool plays a crucial role in ensuring code quality, performance, and
compatibility across various platforms and hardware configurations.
The need for hardware-software partitioning and co-design can be summarized as follows:
2. **Resource Efficiency**: Efficient use of hardware resources like power and processing
capability by offloading compute-intensive tasks to dedicated hardware accelerators.
3. **Real-Time Constraints**: Ensuring timely and predictable execution of critical tasks
by implementing them in hardware, avoiding software overheads.
4. **Flexibility and Scalability**: Supporting flexible system architectures that can adapt
to changing requirements with minimal redesign, ensuring long-term scalability and
adaptability.
These practices are crucial for developing efficient, reliable, and cost-effective solutions
across a wide range of applications, from embedded systems to high-performance
computing and IoT devices.
Unified Modeling Language (UML) Diagrams
Unified Modeling Language (UML) is a general-purpose modeling language. The main aim of
UML is to define a standard way to visualize the way a system has been designed. It is quite
similar to blueprints used in other fields of engineering. UML is not a programming language,
it is rather a visual language.
1. What is UML?
Unified Modeling Language (UML) is a standardized visual modeling language used in the
field of software engineering to provide a general-purpose, developmental, and intuitive way to
visualize the design of a system. UML helps in specifying, visualizing, constructing, and
documenting the artifacts of software systems.
We use UML diagrams to portray the behavior and structure of a system.
UML helps software engineers, businessmen, and system architects with modeling, design,
and analysis.
The Object Management Group (OMG) adopted Unified Modelling Language as a standard
in 1997. It’s been managed by OMG ever since.
The International Organization for Standardization (ISO) published UML as an approved
standard in 2005. UML has been revised over the years and is reviewed periodically.
2. Why do we need UML?
Complex applications need collaboration and planning from multiple teams and hence
require a clear and concise way to communicate amongst them.
Businessmen do not understand code. So UML becomes essential to communicate with non-
programmers about essential requirements, functionalities, and processes of the system.
A lot of time is saved down the line when teams can visualize processes, user interactions,
and the static structure of the system.
Unified Modeling Language (UML) has a broad scope and is widely used in software
engineering for modeling and designing software systems. Here are key aspects of the scope
of UML modeling:
1. **Visual Modeling Language**: UML provides a standardized and visual language for
expressing and communicating the design of software systems. It uses diagrams to depict
different aspects of a system, such as structure, behavior, interactions, and architecture.
2. **System Analysis and Design**: UML supports both system analysis and design phases
of software development. During analysis, UML diagrams help capture requirements, define
use cases, and model business processes. In the design phase, UML diagrams facilitate the
specification of system structure, components, and their interactions.
5. **Design Patterns and Reuse**: UML enables the representation and application of
design patterns, which are proven solutions to common design problems. Design patterns
can be expressed in UML diagrams, making it easier to reuse successful design strategies
across projects and domains.
7. **Tool Integration**: UML models can be integrated with various software development
tools and environments, including IDEs (Integrated Development Environments), CASE
(Computer-Aided Software Engineering) tools, and version control systems. This
integration supports automated code generation, model validation, and synchronization
between models and implementation.
8. **Support for Agile and Iterative Development**: UML is adaptable to agile and
iterative development methodologies by allowing incremental refinement of models based
on evolving requirements and feedback. It supports iterative modeling and design, ensuring
that models remain aligned with changing project needs.
Once you understand these elements, you will be able to read and recognize the models as well
as create some of them.
Figure –
A Conceptual Model of the UML
Building Blocks:
The vocabulary of the UML encompasses three kinds of building blocks:
1. Things: Things are the abstractions that are first-class citizens in a model; relationships tie
these things together; diagrams group interesting collections of things. There are 4 kinds of
things in the UML:
1. Structural things
2. Behavioral things
3. Grouping things
4. Annotational things
These things are the basic object-oriented building blocks of the UML. You use them to
write well-formed models.
2. Relationships: There are 4 kinds of relationships in the UML:
1. Dependency
2. Association
3. Generalization
4. Realization
These relationships are the basic relational building blocks of the UML.
3. Diagrams: A diagram is the graphical presentation of a set of elements, rendered as a
connected graph of vertices (things) and arcs (relationships).
1. Class diagram
2. Object diagram
3. Use case diagram
4. Sequence diagram
5. Collaboration diagram
6. Statechart diagram
7. Activity diagram
8. Component diagram
9. Deployment diagram
Rules:
The UML has a number of rules that specify what a well-formed model should look like. A
well-formed model is one that is semantically self-consistent and in harmony with all its
related models. The UML has semantic rules for:
1. Names – What you can call things, relationships, and diagrams.
2. Scope – The context that gives specific meaning to a name.
3. Visibility – How those names can be seen and used by others.
4. Integrity – How things properly and consistently relate to one another.
5. Execution – What it means to run or simulate a dynamic model.
Common Mechanisms:
The UML is made simpler by the four common mechanisms. They are as follows:
1. Specifications
2. Adornments
3. Common divisions
4. Extensibility mechanisms
UML- Architecture
Software architecture is all about how a software system is built at its highest level. It requires
thinking big, from multiple perspectives, with quality and design in mind, while the software
team weighs many practical concerns.
Software architecture provides a basic design of a complete software system. It defines the
elements included in the system, the functions each element has, and how each element relates to
one another. In short, it is a big picture or overall structure of the whole system, how everything
works together.
To form an architecture, the software architect will take several factors into consideration.
Each developer will know what needs to be implemented and how things relate, so the desired
needs are met efficiently. One of the main advantages of software architecture is that it provides
high productivity to the software team. Software development becomes more effective with an
explicit structure in place to coordinate work, implement individual features, or ground
discussions on potential issues. With a lucid architecture, it is easier to know where the key
responsibilities reside in the system and where to make changes to add new requirements or
simply fix failures.
In addition, a clear architecture will help to achieve quality in the software with a well-designed
structure using principles like separation of concerns; the system becomes easier to maintain,
reuse, and adapt. The software architecture is useful to people such as software developers, the
project manager, the client, and the end-user. Each one will have different perspectives to view
the system and will bring different agendas to a project. Also, it provides a collection of several
views. It can be best understood as a collection of five views:
Use Case View
o It is a view that shows the functionality of the system as perceived by external actors.
o It reveals the requirements of the system.
o With UML, it is easy to capture the static aspects of this view in use case diagrams,
whereas its dynamic aspects are captured in interaction diagrams, state chart diagrams,
and activity diagrams.
Design View
o It is a view that shows how the functionality is designed inside the system in terms of
static structure and dynamic behavior.
o It captures the vocabulary of the problem space and solution space.
o With UML, it represents the static aspects of this view in class and object diagrams,
whereas its dynamic aspects are captured in interaction diagrams, state chart diagrams,
and activity diagrams.
Implementation View
o It is the view that represents the organization of the core components and files.
o It primarily addresses the configuration management of the system?s releases.
o With UML, its static aspects are expressed in component diagrams, and the dynamic
aspects are captured in interaction diagrams, state chart diagrams, and activity diagrams.
Process View
o It is the view that shows the threads and processes that form the system's concurrency
and synchronization mechanisms.
o It primarily addresses the performance, scalability, and throughput of the system.
Deployment View
o It is the view that shows the deployment of the system in terms of physical architecture.
o It includes the nodes, which form the system hardware topology where the system will be
executed.
o It primarily addresses the distribution, delivery, and installation of the parts that build the
physical system.
3. Different Types of UML Diagrams
UML is linked with object-oriented design and analysis. UML makes use of elements and
forms associations between them to form diagrams. Diagrams in UML can be broadly
classified as:
Creating Unified Modeling Language (UML) diagrams involves a systematic process that
typically includes the following steps:
Step 1: Identify the Purpose:
o Determine the purpose of creating the UML diagram. Different types of UML
diagrams serve various purposes, such as capturing requirements, designing
system architecture, or documenting class relationships.
Step 2: Identify Elements and Relationships:
o Identify the key elements (classes, objects, use cases, etc.) and their relationships
that need to be represented in the diagram. This step involves understanding the
structure and behavior of the system you are modeling.
Step 3: Select the Appropriate UML Diagram Type:
o Choose the UML diagram type that best fits your modeling needs. Common types
include Class Diagrams, Use Case Diagrams, Sequence Diagrams, Activity
Diagrams, and more.
Step 4: Create a Rough Sketch:
o Before using a UML modeling tool, it can be helpful to create a rough sketch on
paper or a whiteboard. This can help you visualize the layout and connections
between elements.
Step 5: Choose a UML Modeling Tool:
o Select a UML modeling tool that suits your preferences and requirements. There
are various tools available, both online and offline, that offer features for creating
and editing UML diagrams.
Step 6: Create the Diagram:
o Open the selected UML modeling tool and create a new project or diagram. Begin
adding elements (e.g., classes, use cases, actors) to the diagram and connect them
with appropriate relationships (e.g., associations, dependencies).
Step 7: Define Element Properties:
o For each element in the diagram, specify relevant properties and attributes. This
might include class attributes and methods, use case details, or any other
information specific to the diagram type.
Step 8: Add Annotations and Comments:
o Enhance the clarity of your diagram by adding annotations, comments, and
explanatory notes. This helps anyone reviewing the diagram to understand the
design decisions and logic behind it.
Step 9: Validate and Review:
o Review the diagram for accuracy and completeness. Ensure that the relationships,
constraints, and elements accurately represent the intended system or process.
Validate your diagram against the requirements and make necessary adjustments.
Step 10: Refine and Iterate:
o Refine the diagram based on feedback and additional insights. UML diagrams are
often created iteratively as the understanding of the system evolves.
Step 11: Generate Documentation:
o Some UML tools allow you to generate documentation directly from your
diagrams. This can include class documentation, use case descriptions, and other
relevant information.
Note: Remember that the specific steps may vary based on the UML diagram type and the tool
you are using.
9. UML diagrams best practices
Unified Modeling Language (UML) is a powerful tool for visualizing and documenting the
design of a system. To create effective and meaningful UML diagrams, it’s essential to follow
best practices. Here are some UML best practices:
1. Understand the Audience: Consider your audience when creating UML diagrams. Tailor
the level of detail and the choice of diagrams to match the understanding and needs of your
audience, whether they are developers, architects, or stakeholders.
2. Keep Diagrams Simple and Focused: Aim for simplicity in your diagrams. Each diagram
should focus on a specific aspect of the system or a particular set of relationships. Avoid
clutter and unnecessary details that can distract from the main message.
3. Use Consistent Naming Conventions: Adopt consistent and meaningful names for classes,
objects, attributes, methods, and other UML elements. Clear and well-thought-out naming
conventions enhance the understandability of your diagrams.
4. Follow Standard UML Notations: Adhere to standard UML notations and symbols.
Consistency in using UML conventions ensures that your diagrams are easily understood by
others who are familiar with UML.
5. Keep Relationships Explicit: Clearly define and label relationships between elements. Use
appropriate arrows, multiplicity notations, and association names to communicate the nature
of connections between classes, objects, or use cases.
10. UML and Agile Development
Unified Modeling Language (UML) and Agile development are two different approaches to
software development, and they can be effectively integrated to enhance the overall
development process. Here are some key points about the relationship between UML and Agile
development:
10.1. UML in Agile Development
Visualization and Communication: UML diagrams provide a visual way to represent
system architecture, design, and behavior. In Agile development, where communication is
crucial, UML diagrams can serve as effective communication tools between team members,
stakeholders, and even non-technical audiences.
User Stories and Use Cases: UML use case diagrams can be used to capture and model
user stories in Agile development. Use cases help in understanding the system from an end-
user perspective and contribute to the creation of user stories.
Iterative Modeling: Agile methodologies emphasize iterative development, and UML can
be adapted to support this approach. UML models can be created and refined incrementally
as the understanding of the system evolves during each iteration.
Agile Modeling Techniques: Agile modeling techniques, such as user story mapping and
impact mapping, complement UML by providing lightweight ways to visualize and
communicate requirements and design. These techniques align with the Agile principle of
valuing working software over comprehensive documentation.
10.2. Balancing Agility and Modeling
Adaptive Modeling: Adopt an adaptive modeling approach where UML is used to the
extent necessary for effective communication and understanding. The focus should be on
delivering value through working software rather than exhaustive documentation.
Team Empowerment: Empower the development team to choose the right level of
modeling based on the project’s needs. Team members should feel comfortable using UML
as a communication tool without feeling burdened by excessive modeling requirements.
11. Common Challenges in UML Modeling
1. Time-Intensive: UML modeling can be perceived as time-consuming, especially in fast-
paced Agile environments where rapid development is emphasized. Teams may struggle to
keep up with the need for frequent updates to UML diagrams.
2. Over-Documentation: Agile principles value working software over comprehensive
documentation. There’s a risk of over-documentation when using UML, as teams may
spend too much time on detailed diagrams that do not directly contribute to delivering
value.
3. Changing Requirements: Agile projects often face changing requirements, and UML
diagrams may become quickly outdated. Keeping up with these changes and ensuring that
UML models reflect the current system state can be challenging.
4. Collaboration Issues: Agile emphasizes collaboration among team members, and
sometimes UML diagrams are seen as artifacts that only certain team members understand.
Ensuring that everyone can contribute to and benefit from UML models can be a challenge.
12. Benefits of Using UML Diagrams
1. Standardization: UML provides a standardized way of representing system models,
ensuring that developers and stakeholders can communicate using a common visual
language.
2. Communication: UML diagrams serve as a powerful communication tool between
stakeholders, including developers, designers, testers, and business users. They help in
conveying complex ideas in a more understandable manner.
3. Visualization: UML diagrams facilitate the visualization of system components,
relationships, and processes. This visual representation aids in understanding and designing
complex systems.
4. Documentation: UML diagrams can be used as effective documentation tools. They
provide a structured and organized way to document various aspects of a system, such as
architecture, design, and behavior.
5. Analysis and Design: UML supports both analysis and design phases of software
development. It helps in modeling the requirements of a system and then transforming them
into a design that can be implemented.
Unit-3
Real-time operating systems (RTOS) are used in environments where a large number of events,
mostly external to the computer system, must be accepted and processed in a short time or within
certain deadlines. Such applications include industrial control, telephone switching equipment,
flight control, and real-time simulations. With an RTOS, processing time is measured in small
fractions of a second. The system is time-bound and has fixed deadlines: processing must occur
within the specified constraints, otherwise the system fails. Examples of real-time operating
systems are airline traffic control systems, command and control systems, airline reservation
systems, heart pacemakers, network multimedia systems, robots, etc.
Real-time operating systems can be of the following types:
1. Hard Real-Time Operating System: These operating systems guarantee that critical tasks
are completed within a strict time bound.
For example, a robot hired to weld a car body must weld neither too early nor too late, or the
car cannot be sold; because the welding must be completed exactly on time, this is a hard
real-time system. Other examples include scientific experiments, medical imaging systems,
industrial control systems, weapon systems, robots, air traffic control systems, etc.
2. Soft Real-Time Operating System: This operating system provides some relaxation in the
time limit.
For example: multimedia systems, digital audio systems, etc. Real-time systems contain
explicit, programmer-defined, and controlled processes. A separate process is charged with
handling a single external event; the process is activated upon the occurrence of the related
event, signaled by an interrupt.
Multitasking operation is accomplished by scheduling processes for execution independently
of each other. Each process is assigned a priority level that corresponds to the relative
importance of the event it services, and the processor is always allocated to the highest-
priority ready process. This type of schedule, called priority-based preemptive scheduling, is
used by real-time systems.
3. Firm Real-Time Operating System: RTOS of this type also have to follow deadlines, but
missing a deadline does not cause total failure; it can still have unintended consequences,
such as a reduction in the quality of the result. Example: multimedia applications.
4. Deterministic Real-Time Operating System: Consistency is the main key in this type of real-
time operating system. It ensures that all tasks and processes execute with predictable
timing all the time, which makes it suitable for applications in which timing accuracy is
very important. Examples: INTEGRITY, PikeOS.
History of Operating System
An operating system is a type of software that acts as an interface between the user and the
hardware. It is responsible for handling various critical functions of the computer and for
utilizing resources efficiently, which is why the operating system is also known as a resource
manager. Like a government, the operating system has authority over all resources of the
system. Tasks handled by an OS include file management, task management, memory
management, process management, disk management, I/O management, peripheral
management, etc.
Generation of Operating System
Below are four generations of operating systems.
The First Generation
The Second Generation
The Third Generation
The Fourth Generation
1. The First Generation (1940 to early 1950s)
The first electronic computers of the 1940s were built without an operating system. Early
computer users had complete control over the machine and wrote programs in pure machine
language for every task. In this generation, programmers could merely execute and solve basic
mathematical calculations, and an operating system was not needed for these computations.
2. The Second Generation (1955 – 1965)
The first operating system, GM-NAA I/O, was developed in the mid-1950s by General Motors
for an IBM mainframe. The second-generation operating system was built on a single-stream
batch processing system: it gathers all related jobs into groups or batches, which are then
submitted to the operating system on punch cards to be completed one after another.
3. The Third Generation (1965 – 1980)
In second-generation batch systems, control is transferred to the operating system upon each
job's completion, whether routine or unexpected. The operating system cleans up after each job
finishes before reading and starting the subsequent job from its punch cards. Large,
professionally operated machines known as mainframes were introduced in this period. In the
late 1960s, operating system designers were able to create a new kind of operating system
capable of multiprogramming, the concurrent execution of several tasks on a single computer.
Multiprogramming was introduced so that the CPU could be kept active at all times by carrying
out multiple jobs on a computer at once. With the release of the DEC PDP-1 in 1961,
minicomputers entered a new phase of growth and development.
4. The Fourth Generation (1980 – Present Day)
The fourth generation of operating systems is linked to the evolution of the personal computer,
which grew out of the third-generation minicomputers and shares many similarities with them;
minicomputers, however, were far more expensive than personal computers.
The development of Microsoft and the Windows operating system was a significant influence
on the personal computer. Microsoft was founded in 1975 by Bill Gates and Paul Allen, who
set out to advance personal computing. MS-DOS was released in 1981, but users found its
commands challenging to decipher. Microsoft's first graphical operating system, Windows 1.0,
followed in 1985, and Windows went on to become the most widely used desktop operating
system. Later releases included Windows 95, Windows 98, Windows XP, Windows 7, and
Windows 10, which the majority of Windows users currently run. Apple's macOS is another
well-known operating system in addition to Windows.
Defining an RTOS
Real-time operating systems (RTOS) are specialized software systems designed to manage and
execute applications that process data in real-time. Unlike general-purpose operating systems
(such as Windows or Linux), which prioritize overall system efficiency and user interaction,
RTOSs prioritize predictable timing and fast response times for specific tasks or processes. Key
characteristics of an RTOS include determinism (predictable timing behavior), priority-based
preemptive scheduling, fast context switching, and minimal interrupt latency.
The Scheduler
The scheduler is the part of the kernel responsible for deciding which task should be executing
at any particular time. The kernel can suspend and later resume a task many times during the task
lifetime.
The scheduling policy is the algorithm used by the scheduler to decide which task to execute at
any point in time. The policy of a (non real time) multi user system will most likely allow each
task a "fair" proportion of processor time. The policy used in real time / embedded systems is
described later.
In addition to being suspended involuntarily by the kernel a task can choose to suspend itself. It
will do this if it either wants to delay (sleep) for a fixed period, or wait (block) for a resource to
become available (eg a serial port) or an event to occur (eg a key press). A blocked or sleeping
task is not able to execute, and will not be allocated any processing time.
A real-time operating system (RTOS) serves real-time applications that process data without
buffering delay. In an RTOS, processing-time requirements are measured in small increments of
time. It is a time-bound system with fixed time constraints: processing must be done within the
specified constraints, or the system will fail.
Real-time tasks are tasks associated with a quantitative expression of time, which describes the
required behavior of the task. A real-time task must finish all of its computation within a timing
constraint; this timing constraint is the deadline, and all real-time tasks need to be completed
before their deadlines. Examples include input-output interaction with devices, web browsing,
etc.
There are the following types of tasks in real-time systems, such as:
1. Periodic Task
In periodic tasks, jobs are released at regular intervals: a periodic task repeats itself after a fixed
time interval. A periodic task is denoted by a four-tuple: Ti = < Φi, Pi, ei, Di >
Where,
o Φi: It is the phase of the task, and phase is the release time of the first job in the task. If
the phase is not mentioned, then the release time of the first job is assumed to be zero.
o Pi: It is the period of the task, i.e., the time interval between the release times of two
consecutive jobs.
o ei: It is the execution time of the task.
o Di: It is the relative deadline of the task.
For example: consider the task Ti with period = 5 and execution time = 3.
The phase is not given, so assume the release time of the first job is zero. The first job of this
task is released at t = 0 and executes for 3 s; the next job is released at t = 5 and also executes
for 3 s; the next is released at t = 10; and so on. Jobs are therefore released at t = 5k for
k = 0, 1, ..., n.
Hyper period of a set of periodic tasks is the least common multiple of all the tasks in that set.
For example, two tasks T1 and T2 having period 4 and 5 respectively will have a hyper period, H
= lcm(p1, p2) = lcm(4, 5) = 20. The hyper period is the time after which the pattern of job release
times starts to repeat.
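These definitions can be made concrete with a short Python sketch that computes job release times and the hyper period for the two tasks mentioned above (the function names are illustrative, not from any standard library):

```python
from functools import reduce
from math import lcm


def release_times(phase, period, hyper):
    """Job release times of a periodic task within one hyper period."""
    return list(range(phase, hyper, period))


def hyper_period(periods):
    """Hyper period = least common multiple of all task periods."""
    return reduce(lcm, periods)


# Tasks T1 and T2 from the text, with periods 4 and 5:
H = hyper_period([4, 5])                # lcm(4, 5) = 20
t1_releases = release_times(0, 4, H)    # [0, 4, 8, 12, 16]
t2_releases = release_times(0, 5, H)    # [0, 5, 10, 15]
```

After t = 20 the pattern of release times repeats, which is exactly what the hyper period expresses.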
2. Dynamic Tasks
1. Aperiodic Tasks: In this type of task, jobs are released at arbitrary time intervals.
Aperiodic tasks have soft deadlines or no deadlines.
2. Sporadic Tasks: They are similar to aperiodic tasks, i.e., they repeat at random instants.
The only difference is that sporadic tasks have hard deadlines. A sporadic task is denoted
by a three-tuple: Ti = (ei, gi, Di), where
o ei: the execution time of the task,
o gi: the minimum separation between the occurrence of two consecutive
instances of the task, and
o Di: the relative deadline of the task.
3. Critical Tasks
Critical tasks are those whose timely execution is critical; if deadlines are missed, catastrophes
occur. Examples are life-support systems and the stability control of aircraft. Critical tasks are
sometimes executed at a higher frequency than is strictly necessary, as a safety margin.
4. Non-critical Tasks
Non-critical tasks are real-time tasks that, as the name implies, are not critical to the
application. However, they deal with time-varying data and hence are useless if not completed
within their deadlines. The goal in scheduling these tasks is to maximize the percentage of jobs
successfully executed within their deadlines.
Task Scheduling
Real-time task scheduling essentially refers to determining how the various tasks are picked for
execution by the operating system. Every operating system relies on one or more task
schedulers to prepare the schedule of execution of the various tasks it needs to run. Each task
scheduler is characterized by the scheduling algorithm it employs; a large number of algorithms
for scheduling real-time tasks have been developed.
Here are the following types of task scheduling in a real-time system, such as:
1. Valid Schedule: A valid schedule for a set of tasks is one where at most one task is
assigned to a processor at a time, no task is scheduled before its arrival time, and the
precedence and resource constraints of all tasks are satisfied.
2. Feasible Schedule: A valid schedule is called a feasible schedule only if all tasks meet
their respective time constraints in the schedule.
3. Proficient Scheduler: A task scheduler S1 is more proficient than another scheduler S2
if S1 can feasibly schedule every task set that S2 can, and there is at least one task set
that S1 can feasibly schedule but S2 cannot. If S1 and S2 can each feasibly schedule all
task sets that the other can, they are called equally proficient schedulers.
4. Optimal Scheduler: A real-time task scheduler is called optimal if it can feasibly
schedule any task set that any other scheduler can feasibly schedule. In other words, it
would not be possible to find a more proficient scheduling algorithm than an optimal
scheduler. If an optimal scheduler cannot schedule some task set, then no other scheduler
should produce a feasible schedule for that task set.
5. Scheduling Points: The scheduling points of a scheduler are the points on a timeline at
which the scheduler makes decisions regarding which task is to be run next. It is
important to note that a task scheduler does not need to run continuously, and the
operating system activates it only at the scheduling points to decide which task to run
next. The scheduling points are defined as instants marked by interrupts generated by a
periodic timer in a clock-driven scheduler. The occurrence of certain events determines
the scheduling points in an event-driven scheduler.
6. Preemptive Scheduler: A preemptive scheduler is one that, when a higher priority task
arrives, suspends any lower priority task that may be executing and takes up the higher
priority task for execution. Thus, in a preemptive scheduler, it cannot be the case that a
higher priority task is ready and waiting for execution, and the lower priority task is
executing. A preempted lower priority task can resume its execution only when no higher
priority task is ready.
7. Utilization: The processor utilization (or simply utilization) of a task is the average time
for which it executes per unit time interval. In notation, for a periodic task Ti the
utilization is ui = ei/pi, where
o ei is the execution time and
o pi is the period of Ti.
For a set of periodic tasks {Ti}, the total utilization due to all tasks is U = Σ(i=1 to n) ei/pi.
Any good scheduling algorithm's objective is to feasibly schedule even those task sets
with very high utilization, i.e., utilization approaching 1. Of course, on a uniprocessor it
is not possible to schedule task sets having utilization of more than 1.
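The utilization formula can be computed directly; a minimal Python sketch using an invented task set of (execution time, period) pairs:

```python
def utilization(tasks):
    """Total utilization U = sum(e_i / p_i) over periodic tasks given as (e, p) pairs."""
    return sum(e / p for e, p in tasks)


# Illustrative task set: (execution time, period)
tasks = [(1, 4), (2, 5), (1, 10)]
U = utilization(tasks)                  # 0.25 + 0.4 + 0.1 = 0.75
fits_on_uniprocessor = U <= 1.0         # necessary condition on a uniprocessor
```

With U = 0.75 the necessary uniprocessor condition U ≤ 1 is satisfied; whether the set is actually schedulable also depends on the scheduling algorithm.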
8. Jitter
Jitter is the deviation of a periodic task from its strict periodic behavior. Arrival-time
jitter is the deviation of a task's arrival from the precise periodic time of arrival; it may
be caused by imprecise clocks or other factors such as network congestion. Similarly,
completion-time jitter is the deviation of the completion of a task from precise periodic
points.
Completion-time jitter may be caused by the specific scheduling algorithm employed,
which takes up a task for scheduling according to convenience and the load at an instant
rather than at strict time instants. Jitter is undesirable for some applications.
Sometimes the actual release time ri of a job is not known; only a range [ri-, ri+] is
known. This range is called the release-time jitter, where
o ri- is how early the job can be released and
o ri+ is how late the job can be released.
Similarly, only the range [ei-, ei+] of the execution time of a job may be known, where
o ei- is the minimum amount of time required by the job to complete its execution
and
o ei+ is the maximum amount of time required by the job to complete its execution.
An efficient way to represent precedence constraints is by using a directed graph G = (J, <)
where J is the set of jobs. This graph is known as the precedence graph. Vertices of the graph
represent jobs, and precedence constraints are represented using directed edges. If there is a
directed edge from Ji to Jj, it means that Ji is the immediate predecessor of Jj.
For example: consider a task T having 5 jobs J1, J2, J3, J4, and J5, such that J2 and J5 cannot
begin their execution until J1 completes, and there are no other constraints. The precedence
constraints for this example are:
1. < (1) = { }
2. < (2) = {1}
3. < (3) = { }
4. < (4) = { }
5. < (5) = {1}
Consider another example where a precedence graph is given, and you have to find precedence
constraints.
From the above graph, we derive the following precedence constraints:
1. J1< J2
2. J2< J3
3. J2< J4
4. J3< J4
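The precedence graph of the first example (J1 precedes J2 and J5) can be represented directly as an adjacency mapping; a small Python sketch in which the helper name is illustrative:

```python
# Directed precedence graph: job -> set of immediate successors.
# Edges J1 -> J2 and J1 -> J5 encode that J2 and J5 wait for J1.
graph = {1: {2, 5}, 2: set(), 3: set(), 4: set(), 5: set()}


def predecessors(graph, job):
    """Set of immediate predecessors of a job, i.e. <(job) in the text's notation."""
    return {u for u, successors in graph.items() if job in successors}


# Matches the constraints above: <(2) = {1}, <(5) = {1},
# and <(1), <(3), <(4) are all empty.
```

A scheduler may only release a job once every member of `predecessors(graph, job)` has completed.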
Tasks, Task States, and Scheduling
1. **Introduction to Tasks:**
- Each task has its own execution context, including a program counter, stack pointer,
and local variables.
- Tasks are created during system initialization or dynamically during runtime using
RTOS-specific APIs.
- Tasks may have different priorities assigned to them, influencing their order of
execution by the RTOS scheduler.
2. **Task States:**
- **Ready:** The task is ready to execute but waiting for CPU time. It is in the queue
of tasks eligible to run.
- **Blocked:** The task is waiting for an event or resource (e.g., I/O completion,
semaphore release). It cannot proceed until the condition is satisfied.
- **Suspended:** The task has been temporarily halted or paused by the system or
another task.
3. **Scheduling:**
- Scheduling in an RTOS refers to the mechanism by which the RTOS decides which
task should execute next on the CPU.
- The scheduler ensures that tasks are executed in a manner that meets their priority
requirements and real-time constraints.
- **Earliest Deadline First (EDF):** Tasks are scheduled based on their deadlines.
The scheduler always selects the task with the earliest deadline that is ready to run.
4. **Relationship:**
- Tasks, task states, and scheduling are closely intertwined in an RTOS environment:
- **Task Creation and States:** Tasks are created with specific priorities and enter the
system in a ready state. They may transition to running, blocked, or suspended states
based on events or resource availability.
- **Scheduling and Task Execution:** The scheduler determines the order in which
tasks transition between states and execute on the CPU. It ensures that higher-priority
tasks preempt lower-priority tasks when necessary, maintaining responsiveness and
meeting real-time requirements.
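As a minimal illustration of the EDF rule described above, the following Python sketch picks, from a ready queue, the task whose absolute deadline is earliest (the task names and deadlines are hypothetical):

```python
def edf_pick(ready):
    """EDF: given ready tasks as (name, absolute_deadline) pairs,
    return the name of the task with the earliest deadline."""
    return min(ready, key=lambda task: task[1])[0]


# Hypothetical ready queue: task B's deadline (t = 12) is the earliest.
ready = [("A", 20), ("B", 12), ("C", 15)]
next_task = edf_pick(ready)   # "B" runs next
```

In a real RTOS this selection is re-evaluated at every scheduling point, e.g. whenever a task is released or completes.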
Preemptive Scheduling
Preemptive scheduling allows the interruption of a currently running task, so another one
with more “urgent” status can be run. The interrupted task is involuntarily moved by the
scheduler from running state to ready state. This dynamic switching between tasks that this
algorithm employs is, in fact, a form of multitasking. It requires assigning a priority level
for each task. A running task can be interrupted if a task with a higher priority enters the
queue.
Fig. 1 Preemptive Scheduling
As an example, let's take three tasks called Task 1, Task 2 and Task 3. Task 1 has the
lowest priority and Task 3 has the highest priority. Their arrival times and execution times
are listed in the table below.

Task      Arrival time (μs)   Execution time (μs)
Task 1    10                  50
Task 2    40                  50
Task 3    60                  40
In Fig. 1 we can see that Task 1 is the first to start executing, as it is the first one to arrive
(at t = 10 μs ). Task 2 arrives at t = 40μs and since it has a higher priority, the scheduler
interrupts the execution of Task 1 and puts Task 2 into running state. Task 3 which has the
highest priority arrives at t = 60 μs. At this moment Task 2 is interrupted and Task 3 is put
into running state. As it is the highest priority task it runs until it completes at t = 100 μs.
Then Task 2 resumes its operation as the current highest-priority task, and Task 1 is the last
to complete its operation.
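The timeline above can be checked with a small simulation; here is a minimal Python sketch, assuming 1 μs time steps and the priorities and times from the table:

```python
def preemptive_schedule(tasks):
    """Fixed-priority preemptive scheduling simulated in 1 us steps.

    tasks maps name -> (priority, arrival, execution_time);
    a larger priority number wins. Returns name -> completion time."""
    remaining = {name: exe for name, (_, _, exe) in tasks.items()}
    completion, t = {}, 0
    while remaining:
        # Tasks that have arrived and are not yet finished are ready.
        ready = [n for n in remaining if tasks[n][1] <= t]
        if ready:
            running = max(ready, key=lambda n: tasks[n][0])  # highest priority runs
            remaining[running] -= 1
            if remaining[running] == 0:
                completion[running] = t + 1
                del remaining[running]
        t += 1
    return completion


# (priority, arrival in us, execution time in us), as in the table:
tasks = {"Task 1": (1, 10, 50), "Task 2": (2, 40, 50), "Task 3": (3, 60, 40)}
completion = preemptive_schedule(tasks)
# Task 3 completes at t = 100 us, Task 2 at t = 130 us, Task 1 at t = 150 us.
```

The simulated completion times match the narrative: Task 3 finishes at 100 μs, then Task 2 at 130 μs, then Task 1 at 150 μs.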
Non-Preemptive Scheduling
In non-preemptive scheduling, the scheduler has more restricted control over the tasks. It
can only start a task and then has to wait for the task to finish or to voluntarily return
control. A running task cannot be stopped by the scheduler.
If we take the three tasks specified in the table above and schedule them using a non-
preemptive algorithm, we get the behavior shown in Fig. 2: once started, each task
completes its operation before the next one starts.
The non-preemptive scheduling can simplify the synchronization of the tasks, but that is at
the cost of increased response times to events. This reduces its practical use in complex
real-time systems.
Round Robin Scheduling
Round robin scheduling is a computer algorithm used in multitasking and operating systems. It
is a preemptive algorithm where each process is assigned a fixed time slice or quantum. Here's
how it works:
Important Abbreviations
1. CPU - - - > Central Processing Unit
2. AT - - - > Arrival Time
3. BT - - - > Burst Time
4. WT - - - > Waiting Time
5. TAT - - - > Turn Around Time
6. CT - - - > Completion Time
7. FIFO - - - > First In First Out
8. TQ - - - > Time Quantum
Round Robin is one of the most widely used CPU scheduling algorithms. It relies on a Time
Quantum (TQ): on each turn, up to one quantum is subtracted from a process's remaining burst
time, so every process completes its work in chunks.
Process Queue: All processes in the system are placed in a queue. The order typically doesn't
change unless a new process arrives.
Time Slicing: Each process in the queue is given a small unit of CPU time, called a time slice
or quantum. For example, if the time slice is 10 milliseconds and there are three processes, each
process gets 10 milliseconds of CPU time in turn.
Execution: The operating system cycles through the process queue, allocating the CPU to
each process for its time slice. If a process doesn't finish within its time slice, it's preempted
(paused) and placed back at the end of the queue to wait for its next turn.
Completion: This cycle continues until all processes have completed their tasks.
Time Sharing is the main emphasis of the algorithm. Each step of this algorithm is carried out
cyclically. The system defines a specific time slice, known as a time quantum.
First, the processes that are eligible enter the ready queue. The process at the head of the ready
queue is then executed for one time-quantum chunk of time. If its execution completes within
that quantum, the process is removed from the ready queue; if it still requires more time, it is
added back to the end of the ready queue.
The ready queue holds only unique processes: it never holds a process that is already present in
it, since holding the same process twice would introduce redundancy. Once a process's
execution is complete, the ready queue no longer holds it.
Advantages
1. Every process gets a fair share of the CPU, so no process starves.
2. Response times are reasonable, which suits time-sharing systems.
Disadvantages
1. A very small time quantum decreases CPU throughput because of frequent context switches.
2. The round robin approach spends extra time swapping contexts.
3. The time quantum has a significant impact on its performance.
4. Processes cannot be assigned priorities.
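The round robin mechanics described above (fixed quantum, preemption, FIFO re-queueing of unfinished processes) can be sketched in Python; the process names, burst times, and quantum below are illustrative:

```python
from collections import deque


def round_robin(burst_times, quantum):
    """Simulate round robin for processes that all arrive at t = 0.

    burst_times maps process -> burst time.
    Returns (execution order, process -> completion time)."""
    queue = deque(burst_times)          # FIFO ready queue, no duplicates
    remaining = dict(burst_times)
    t, order, completion = 0, [], {}
    while queue:
        p = queue.popleft()
        order.append(p)
        run = min(quantum, remaining[p])  # run one quantum (or less, if finishing)
        t += run
        remaining[p] -= run
        if remaining[p] > 0:
            queue.append(p)             # preempted: back of the queue
        else:
            completion[p] = t           # finished: not re-queued
    return order, completion


order, completion = round_robin({"P1": 5, "P2": 3, "P3": 8}, quantum=4)
# order: P1, P2, P3, P1, P3; P2 completes at t = 7, P1 at 12, P3 at 16
```

From the completion times, waiting and turnaround times (WT, TAT) follow directly since all processes arrive at t = 0.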
Cooperative Scheduling
Cooperative scheduling is a type of scheduling where processes voluntarily yield control of the
CPU to other processes at specified points during their execution. Unlike preemptive scheduling,
where the operating system forcibly interrupts a process and allocates CPU time to another
process according to a scheduling algorithm, cooperative scheduling relies on processes
cooperating by giving up control. A typical cooperative task is structured as an endless loop
that does a unit of work and then returns control:

loop forever
    Read Queue
    Process Data/Update Outputs
end loop
1. **Voluntary Yielding:** Processes explicitly relinquish CPU control. This can happen when a
process reaches a certain point in its execution (e.g., after completing a task or during a wait
operation).
2. **No Preemption:** Once a process starts executing, it continues until it voluntarily yields
control. The operating system does not forcefully interrupt the process.
3. **Potential for Deadlock:** If a process fails to yield control when required, it can lead to
deadlock or starvation of other processes waiting to execute.
- **Efficiency:** Since processes yield CPU control voluntarily, there is minimal overhead
associated with context switching.
- **Predictability:** The order of process execution can be more predictable because it depends
on the processes' cooperation rather than a scheduler's decisions.
- **Risk of Unresponsiveness:** If a process does not yield control properly, other processes
may be blocked indefinitely, leading to system unresponsiveness.
- **Real-Time Systems:** In certain real-time applications where strict timing requirements are
essential, cooperative scheduling can be employed to ensure deterministic behavior.
### Examples:
- **Classic Mac OS:** Versions of the Macintosh operating system before Mac OS X used
cooperative scheduling.
- **DOS (Disk Operating System):** Early versions of DOS relied on cooperative scheduling
where applications had to yield control to the system when performing I/O operations or waiting
for user input.
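The voluntary-yielding model described above can be sketched with Python generators, where `yield` plays the role of a task handing control back to a simple scheduler (task names and step counts are illustrative):

```python
def task(name, steps):
    """A cooperative task: does one unit of work, then voluntarily yields."""
    for i in range(steps):
        yield f"{name} step {i}"   # yield = give control back to the scheduler


def cooperative_scheduler(tasks):
    """Run tasks in turn; each runs only until it yields or finishes."""
    log = []
    while tasks:
        t = tasks.pop(0)
        try:
            log.append(next(t))    # resume the task until its next yield
            tasks.append(t)        # it yielded: back into the queue
        except StopIteration:
            pass                   # task finished: drop it
    return log


log = cooperative_scheduler([task("A", 2), task("B", 1)])
# ["A step 0", "B step 0", "A step 1"]
```

Note that a task which never yields would monopolize this scheduler forever, which is exactly the unresponsiveness risk noted above.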
Introduction to Semaphores
In embedded systems, semaphores play a crucial role in managing shared resources and
synchronizing tasks or threads that operate concurrently. Embedded systems are typically
constrained by limited resources such as memory, processing power, and often operate in real-
time environments where responsiveness and determinism are critical. Here’s an introduction to
how semaphores are used in embedded systems:
1. **Resource Protection:**
- Embedded systems often have multiple tasks or threads that need to access shared
resources like peripherals (e.g., sensors, actuators), memory buffers, communication
interfaces (e.g., UART, SPI), or system-wide data structures.
- Semaphores are used to enforce mutual exclusion, ensuring that only one task or
thread accesses a shared resource at any given time. This prevents data corruption and
ensures data integrity.
2. **Task Synchronization:**
- In real-time embedded systems, tasks (or threads) often need to synchronize their
execution based on specific conditions or events.
- Semaphores provide a mechanism for tasks to wait for signals or events before
proceeding, thereby coordinating their execution and ensuring tasks complete their
operations in a synchronized manner.
3. **Interrupt Handling:**
- An interrupt service routine (ISR) can "give" a semaphore to signal a waiting task,
deferring lengthy processing from the ISR into task context.
- **Binary Semaphore:** Often used to implement mutual exclusion or signaling, where
only one task can access a resource at a time (e.g., controlling access to a shared
hardware peripheral).
- Example: in a robotic arm, each motor control task needs to access shared memory
containing motor positions. A semaphore ensures that only one task accesses the shared
memory at a time to update motor positions, preventing conflicts and ensuring accurate
control of the robotic arm.
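As an illustration of mutual exclusion with a binary semaphore, here is a small Python sketch of the motor-position example; the shared data structure, thread counts, and step sizes are invented for the demo:

```python
import threading

position = {"joint": 0}                 # shared motor-position data (illustrative)
sem = threading.Semaphore(1)            # binary semaphore: one holder at a time


def move_joint(delta, times):
    """Worker task: repeatedly update the shared position under the semaphore."""
    for _ in range(times):
        sem.acquire()                   # wait until the resource is free
        try:
            position["joint"] += delta  # critical section: exclusive access
        finally:
            sem.release()               # signal: resource available again


threads = [threading.Thread(target=move_joint, args=(1, 1000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the semaphore protecting each update, the final position is 4 * 1000 = 4000.
```

Without the acquire/release pair, the read-modify-write of the shared value could interleave between threads and lose updates; the semaphore serializes the critical section.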
Unit-5
- **Functionality:** Define the specific tasks and functions the embedded system needs to
perform. This could range from controlling hardware peripherals (sensors, actuators) to
processing data, managing communication interfaces, and interacting with users.
- **Operating System (OS) Selection:** Decide whether to use a real-time operating system
(RTOS) or develop the application without an OS (bare-metal programming). RTOSs like
FreeRTOS, uC/OS-II, or Linux-based systems provide scheduling, task management, and
device drivers, while bare-metal programming offers greater control over resources but
requires more effort in managing tasks and interrupts.
- **RTOS Tasks/Threads:** If using an RTOS, define tasks and manage their execution,
ensuring proper task prioritization and scheduling to meet real-time requirements.
- **Integration Testing:** Test the integrated system to ensure all modules work together
as expected.
- **Deployment:** Flash the compiled application onto the embedded device and ensure
proper initialization and startup sequences.
- **Field Updates:** Plan for software updates and patches, considering mechanisms for
firmware updates in the field (e.g., over-the-air updates or via physical access).
Objectives:
The Embedded Product Development Life Cycle (EDLC) encompasses various phases from
concept to retirement of an embedded system. Each phase has specific objectives aimed at
ensuring the successful development, deployment, and maintenance of the embedded product.
Here's an overview of the objectives in the different phases and of modeling approaches for the
EDLC:
- **Objective:** Develop a detailed design specification that meets the requirements and
constraints identified in the previous phases.
- **Objective:** Translate the design into executable software and hardware components,
ensuring adherence to design specifications.
- **Objective:** Validate that the embedded system meets specified requirements and
functions correctly in its intended environment.
- **Objective:** Deploy the embedded system into the target environment for operational
use.
- **Waterfall Model:** Sequential phases from concept to deployment, each phase feeding
into the next.
Each phase in the Embedded Product Life Cycle serves distinct objectives crucial for ensuring
the successful development, deployment, and maintenance of embedded systems. Effective
management of these objectives and activities throughout the EDLC is essential for delivering
reliable, efficient, and cost-effective embedded products.
Embedded systems impact our daily life activities, interactions, and tasks—the way we spend
our time off, the way we travel, and the way we do business. With diverse applications in
communications, transportation, manufacturing, retail, consumer electronics, healthcare, and
energy, embedded systems have transformed how we interact with technology in our everyday
lives.
Launching a new embedded product can be exciting and challenging at the same time. In this
article, we’ll explain the general outline of the four different development life cycle phases of an
embedded system.
An embedded (or IoT) product development life cycle is similar to the typical product
development life cycle for all software.
For building and developing a successful embedded product, following a well-defined embedded
system design and development life cycle is critical. It ensures high-quality products for end-
users, defects prevention within the product development phase, and maximized productivity for
a better ROI.
Here are the four stages of the development process of embedded systems:
Please note that this is an optimal approach. Each of these four steps comprises sub-steps that may vary or require adjustments depending on the project.
Step #1 – Planning & Analysis
The first step in the product development life cycle is to clearly define your product idea that will
fulfill a market niche and address a problem. You must then perform the analysis to see if the
idea can transform into a viable concept before development is started.
The development life cycle of an embedded product should initiate as a response to a need. The
need may come from an individual, public, or company. Based on the need, a statement or
“concept proposal” is prepared, which should get approval from the senior management as well
as the funding agency.
New/custom product development: the need for a product that does not exist in the market or will
act as a competitor to an existing product.
Product re-engineering: the need to reengineer a product already available by adding new
features or functionality.
Product maintenance: the need to launch a new version of a product after a failure (non-functioning), or to provide technical support for an existing product.
Define your target audience
A crucial yet often overlooked component of the embedded product development process is to
identify and define the target market of the product. When analyzing the potential target
audience, ask some of the following questions to yourself:
Only after carefully answering these questions can you determine your target audience and
identify your market.
Before moving onto the development stage, you should use the data collected during the research
of the target audience to define the product purpose, its functional model, and the required
hardware & software.
Once you carefully define your market, start identifying who your potential competitors may be.
Acquaint yourself with their experience going through the development life cycle to learn and
choose a better approach for your project. Analyze your competitors’ products to anticipate end
users’ reactions to your final product.
Take time to carry out further market research to connect with potential collaborators and realize
how well your business idea will be received in the market.
Step #2 – Design
Choose a development approach for implementation of your idea
Before designing the prototype, you need to decide on the development approach so that your
idea can turn into a reality within budget.
The designing process starts with developing the architecture of the product based on the specific
requirements gathered in the planning & analysis phase. The architecture should reflect software
and hardware components that will ensure the performance of target functions.
Identify the right tools & technologies
Carefully identify the technical resources needed to build a proof-of-concept that can be used for
market research, concept refinement, and investment pitches before moving to the
implementation stage.
A proof-of-concept is a small model that has the MVP (minimum viable product) features and is built on development kits. These out-of-the-box, pre-built hardware platforms usually come with integrated software to kick-start a project.
When deciding on a development board for an embedded processor, it is important to pay attention to several features, including the available peripherals and connectors, other communication interfaces, and onboard sensors, as well as the mechanical form factor and mounting style in a prototype enclosure.
At a minimum, an embedded product development team will need one each of the following:
Depending on the project complexity and budget, you can decide if you need more than one of
each of these engineers. Additionally, you may also need experts with knowledge about security
management, cloud-based software development, and team management for embedded or IoT
product development.
Getting these technical resources to work under one roof for your project can be costly. Alternatively, you can outsource the work to a reliable service provider that specializes in embedded product development and hire its embedded design & development engineers.
Step #3 – Implementation
Create a prototype realizing the design; test and improve quality of embedded solution
In the implementation stage, a prototype of the embedded product is created. It also includes
adding new features and improving the quality of product by embedded software engineers.
When your product’s hardware components like sensors and processors are integrated on a PCB
for the first time, we call those PCBs alpha prototypes. Generally, small functional issues may
occur which can be fixed with some tuning and adjustments.
As new features are added to the product, you finally get a production-intent design, which we
refer to as a beta prototype.
The product is then tested in the field to check if your solution is working as expected and
enhanced for quality based on end-user feedback.
Software developers will also consider the marketing feedback and further check if the solution
meets regulatory requirements, and ensure the embedded solution is secure, scalable, and
maintainable.
Launching a new, fully functional model of the embedded product can be an exciting and
challenging time. In this stage of the product development life cycle, you need to procure the
hardware components and set up a manufacturing line where they will be placed on the PCB.
Procuring the parts and setting up the manufacturing facility can require around 90 days of lead time. So, you should start communicating with your manufacturer and order components up to three months in advance.
Make sure the first batch of boards is tested post-manufacturing to figure out any defects or
faults in the production process. Once tested, assemble them into their enclosure, do final testing
before boxing and then send them to the end-users. Don’t forget about post-production support
and maintenance as this is an important aspect of the embedded product development life cycle.
The above-discussed four-step process can help you successfully launch your own new
embedded product.
As technological advancements and movements like the IoT, Industry 4.0, and “smart” cities continue to gain ground, embedded system development will remain a major area of innovation, with strong year-over-year growth expected. However, with the increased adoption of embedded systems, the complexity of embedded software has also grown, driving up the overall cost of embedded software development.
Outsourcing embedded software projects can be a great idea for SMEs to build their product
comprehensively while cutting costs and improving time to market. If you are looking to take the
embedded development off your plate, what you need is a reputed offshore embedded
development company that understands the challenges of embedded product development. When
you hire embedded design & development engineers with knowledge and expertise in all aspects
of embedded system design and development life cycle from such an offshore company, you
take your business to the next level.
2. Case study : Smart Card The smart card is one of the most widely used embedded systems today. It is used as a credit card, debit bank card, e-wallet card, identification card, medical card (for history and diagnosis details), and as a card for new, innovative applications. A smart card improves the convenience and security of any transaction and provides tamper-proof storage of user and account identity. Smart card systems have proven to be more reliable than other machine-readable cards, such as magnetic stripe and barcode cards. A smart card also provides vital components of system security for the exchange of data throughout virtually any type of network, and it is a cost-effective method. Smart cards are used today in various applications, including healthcare, banking, entertainment, and transportation, and the added security features of the smart card benefit all these applications.
2.1 Smart Cards Smart cards are plastic cards embedded with a microprocessor/microcontroller or a memory chip that stores and transacts data. Based on application, smart cards include identification-based and process-based cards. Based on the interface to the reader, they are of two types:
● Contact-based smart cards: the chip is attached to the card material itself, as shown in the figure, and must make physical contact with the reader.
● Contactless smart cards: the card is not attached directly to the reader system; it communicates over an air interface. Examples are RFID tags and USB smart cards, as shown in the figure above.
2.2.2 Embedded software components The software components needed for the smart card system are boot-up, system initialization, and the embedded system features. The card needs a secure, three-layered file system, called the smart card secure file system, for storing its files. Connection establishment and termination are provided over a TCP/IP port connection. A cryptographic algorithm is then used for added features such as the host connection. The OS is stored in the protected part of the ROM. Host and card authentication are also needed for the smart card. An optimum code size and restricted use of multidimensional arrays are needed to save memory.
2.2.3 Smart Card system requirements
Purpose: To enable authentication and verification of the card and the cardholder by a host, and to enable a GUI at the host machine to interact with the cardholder/user for the required transactions, for example, financial transactions with a bank or credit card transactions.
Inputs: An IO port is required to receive headers and messages for the smart card system.
Internal Signals, Events and Notifications: On power-up, radiation from the host supplies power to the smart card (activating it). On activation, a reset_Task is initiated, which initialises the necessary timers and creates the tasks (task_ReadPort, task_PW, task_Appl) needed to perform the other functions. task_ReadPort is responsible for sending and receiving messages and for starting and closing applications. task_PW handles the passwords. task_Appl runs the actual application.
Output: Headers and messages are transmitted through the antenna at Port_IO.
Control panel: There is no control panel on the card. The control panel and GUIs are at the host machine (for example, at an ATM credit card reader).
Function of the system: First, the card is inserted at a host machine. The radiation from the host then activates a charge pump on the card, and the charge pump powers the SoC (System on Chip). On power-up, the system reset signals reset_Task to start. All transactions between the cardholder/user and the host now take place through GUIs at the host control panel (screen, touch screen, or LCD display panel).
Design metrics: The following are the design metrics used for the smart card case study.
● Power Dissipation: The maximum (tolerance) amount of heat the card can generate while working must be low.
● Code size: The system memory needed should not exceed 64 KB.
● Limited use of data types: The use of multidimensional arrays, 64-bit long integers, and floating-point types is restricted in the smart card. The smart card should also support only a limited use of error handlers, exceptions, signals, serialization, debugging, and profiling mechanisms.
● File management: There is either a fixed length file management or a variable length file
management with each file with a predefined offset. The file system stores the data using a three
layer mechanism (explained below) .
● Microcontroller hardware: It generates distinct coded physical addresses for the program and
logical addresses for the data. It is a protected, once-writable memory space.
● Validity: System is embedded with expiry date, after which the card authorization through the
hosts is disabled.
● User Interfaces: At the host machine, graphics on an LCD or touch-screen display and commands for the cardholder (card user) transactions (completed within 1 s) are the user-interface requirements. Apart from these metrics, the manufacturing and engineering cost is also considered among the design metrics.
Test conditions and validations: The card must be tested on different host machine versions for fail-proof card-host communication.
2.2.4 Smart Card hardware A smart card hardware system is shown in Figure 6 below. It consists of a plastic card of ISO standard dimensions with an embedded SoC. Figure 6. Smart Card Hardware System The CPU in the hardware locks certain sections of memory, protecting 1 kB or more of data from modification and access by any external source or by instructions outside that memory (protecting the data). Another form of protection is to let the CPU access memory only through physical addresses, which differ from the logical addresses used in the program. EEPROM or flash memory is needed to store the P.I.N. (Personal Identification Number). It also stores the unblocking P.I.N., the access conditions for the data files, card-user data, application-generated data, the application's non-volatile data, and an invalidation lock that invalidates the card after the expiry date or on a server instruction. The ROM in the smart card contains a fabrication key (a unique security key for each card), a personalization key (inserted after the testing phase; it preserves the fabrication key, after which the RTOS uses only logical addresses), the RTOS code, application code, and a utilization lock. RAM is needed to store run-time temporary variables. The chip's supply voltage is extracted by a charge-pump I/O system: it extracts charge from the signals sent by the host and generates a regulated voltage for the card's chip, memory, and I/O system. The I/O system of the chip and the host interact through an asynchronous UART at 9.6 k, 106 k, or 115.2 kbaud.
2.2.5 Smart Card Software Smart cards are among the most used systems today in the area of secure SoC systems. A smart card needs cryptographic software, and the embedded system in the card needs special features in its OS over and above MS-DOS- or UNIX-like system features. The special features needed are: a protected environment where the OS is stored, i.e., a protected part of the ROM; a restricted run-time environment in which every method, class, and run-time library of the OS is scalable; an optimum code size with limited use of data types and multidimensional arrays; and a three-layered file system for the data. The first layer is the master file, containing the header (a header means the file status, access conditions, and the file lock). The second is a dedicated file, used to hold a file grouping and the headers of its immediate successors. The third (called the elementary file) holds the file header and its file data. There may be either fixed- or variable-length file management, with each file predefined with an offset. The OS should also have classes for networks, sockets, connections, datagrams, etc.
3. Case study : Adaptive Cruise Control (ACC) in a Car
### Introduction
Adaptive Cruise Control (ACC) is an advanced driver-assistance system (ADAS) that enhances
driving comfort and safety by automatically adjusting the vehicle's speed to maintain a safe
distance from vehicles ahead. It relies on sensors, actuators, and sophisticated control algorithms
to operate effectively.
1. **Sensors**:
- **Radar Sensors**: These are essential for detecting vehicles in the car's vicinity. They emit
radio waves and measure the time it takes for them to bounce back, calculating distance and
relative speed.
- **Camera Systems**: Some ACC systems use cameras to detect lane markings and identify
vehicles ahead. These cameras provide additional data for the control algorithms to make
decisions.
2. **Control Algorithms**:
- **Distance Control**: Determines the safe following distance based on sensor inputs and
user settings. It calculates the desired acceleration or deceleration to maintain this distance.
- **Speed Regulation**: Adjusts the vehicle's speed smoothly by controlling throttle and
braking systems.
- **Integration with Braking System**: Interfaces with the car's braking system to apply
brakes when necessary, ensuring safe distance maintenance.
3. **Actuators**:
- **Brake Actuators**: Applies brakes as needed to slow down or maintain safe distances from
vehicles ahead.
- **Memory**: Stores control algorithms, sensor calibration data, and system parameters.
- **Interfaces**: Interfaces with sensors (radar, cameras), actuators (throttle, brakes), and the
vehicle's CAN bus for communication with other vehicle systems.
### Operation Scenario
1. **Sensing**:
- Radar sensors continuously emit and receive signals to detect nearby vehicles' positions and
speeds.
- Camera systems capture video frames to identify lane markings and vehicles.
2. **Data Processing**:
- Sensor data is processed in real-time by the embedded system to calculate distances, relative
speeds, and potential collision risks.
- Control algorithms determine the appropriate acceleration or braking commands based on the
desired speed set by the driver and the detected traffic conditions.
3. **Actuation**:
- Brake actuators apply gradual braking if necessary to maintain a safe following distance.
4. **Driver Interaction**:
- The driver sets the desired speed and following distance using controls (e.g., steering wheel
buttons, touchscreen interface).
- The ACC system operates autonomously within these parameters, reducing the need for
constant manual speed adjustments.
- **Fail-safe Mechanisms**: In case of sensor failure or critical system malfunction, the ACC
system is designed to revert control back to the driver or activate emergency braking to prevent
collisions.
4. Case study : Mobile Phone Software for Key Inputs
### Introduction
Embedded systems play a crucial role in mobile phones by managing key inputs from users,
including touchscreens, physical buttons, and other sensors. These systems ensure seamless
interaction between users and the device, translating physical inputs into digital commands that
drive various functionalities of the phone.
1. **Touchscreen Interface**:
- **Capacitive Touch Sensors**: Embedded systems interpret touch gestures (tap, swipe,
pinch) using capacitive sensors integrated into the touchscreen.
2. **Physical Buttons**:
- **Power Button**: An embedded controller interprets presses to turn the device on/off and
manage sleep/wake functions.
- **Volume Buttons**: Embedded systems process button presses to adjust audio volume
levels.
- **Navigation Buttons**: On some devices, embedded systems interpret physical buttons for
navigation and control purposes.
3. **Sensors**:
- **Proximity Sensor**: Used for detecting when the phone is held to the ear during calls to
turn off the screen and save power.
4. **System-on-Chip (SoC)**:
- **Processor**: Executes firmware and software responsible for interpreting inputs and
controlling device operations.
- **Integrated Circuits**: Manage power, communication, and data processing tasks within the
phone.
- **Device Drivers**: Interface with hardware components to translate inputs into commands
that applications and the operating system can understand.
- **Input Handling Algorithms**: Determine how inputs from various sensors and buttons are
interpreted and processed to trigger specific actions or events.
- **Low-Level System Management**: Control power states, manage interrupts, and handle
resource allocation to optimize performance and battery life.
### Operation Scenario
1. **Touchscreen Interaction**:
- Embedded software processes touch events, determining the type of gesture (tap, swipe,
pinch) and triggering corresponding actions (opening apps, scrolling content).
- Embedded controllers interpret button presses and send signals to the operating system or
applications to perform functions such as turning the device on/off, adjusting volume levels, or
taking screenshots.
3. **Sensor-Based Inputs**:
- **Proximity Sensor**: Detects when the phone is near the user's face during calls, turning off
the display to prevent accidental touches.
- **Ambient Light Sensor**: Adjusts screen brightness based on the surrounding light
conditions, enhancing user experience and saving battery life.
- **Input Validation**: Embedded systems ensure that inputs are validated to prevent
unintended actions or errors caused by accidental touches or button presses.
- **Hardware Redundancy**: Some critical inputs, like power buttons, may have redundancy to
ensure the device can be powered on or off even if one mechanism fails.
- **Error Handling**: Robust firmware and software handle errors gracefully, preventing
crashes or system instability due to unexpected inputs or sensor malfunctions.