
Department of Computer Science & Engineering
School of Engineering and Technology
HEMWATI NANDAN BAHUGUNA GARHWAL UNIVERSITY, SRINAGAR, UTTARAKHAND, 246174
SESSION: 2023-24

Assignment of
EMBEDDED SYSTEM

SUBMITTED TO:                      SUBMITTED BY:
ASHISH SEMWAL SIR                  BAL KISHORE BHARTI
Dept. of CSE                       B.TECH (CSE), SEM – 7th
                                   Roll No. 20134501020
UNIT-1
Introduction to Embedded Systems
Embedded systems are omnipresent. We find them everywhere: at our homes,
in our offices, in shopping malls, and so on.
What is an Embedded System?
“An Embedded System is a special-purpose computer.”
An embedded system can be defined as a computing device that does a
specific, focused job. It is a combination of computer hardware and
software designed for a specific function. Embedded systems may also
function within a larger system, and they can be programmable or have
fixed functionality.
Examples of embedded system –
Air-conditioner, VCD player, DVD player, printer, fax machine, mobile
phone etc.
Characteristics of Embedded System: -
1. Performs specific tasks: Embedded systems do a very specific task;
they cannot be programmed to do different things.
2. Low Cost: Embedded systems have very limited resources, particularly
memory. Generally, they do not have secondary storage devices such as a
CD-ROM drive or floppy disk.
3. Time Specific: Embedded systems have to work against deadlines; a
specific job has to be completed within a specific time. In some
embedded systems, called real-time systems, the deadlines are stringent:
missing a deadline may cause a catastrophe, such as loss of life or
damage to property.
4. Low Power: Embedded systems are constrained for power. As many
embedded systems operate through a battery, the power consumption
has to be very low.
5. Minimal User Interface: Many embedded systems do not require a
complex user interface. They are often designed to operate
autonomously or with minimal user intervention.
6. High Efficiency: Embedded systems are designed to be highly
efficient in terms of processing power, memory usage, and energy
consumption. This ensures that they can perform their specific task
with maximum efficiency and reliability.
7. Highly Stable: Embedded systems are typically designed to be stable
and reliable. They are often used in applications where failure is not
an option, such as in medical devices or aviation.
8. High Reliability: Embedded systems are designed to operate reliably
and consistently over long periods. This is important in applications
where downtime can be costly or dangerous.

Classification of Embedded System: -


Embedded systems are classified on the basis of two factors:
1. Performance and functional requirements
2. Performance of the micro-controller

Based on performance and functional requirements, embedded systems are
divided into four types as follows:

Real-Time Embedded Systems:


A real-time embedded system is strictly time-specific: it must provide
output within a particular, defined time interval. These systems give
highest priority to time-based task performance and respond quickly in
critical situations. That is why real-time embedded systems are used in
the defence sector, the medical and health-care sector, and other
industrial applications where output at the right time is given the
most importance.
Further this Real-Time Embedded System is divided into two types
i.e.
Soft Real-Time Embedded Systems –
In these systems the time/deadline is not strictly enforced. If the
deadline of a task is missed (the system did not produce its result in
the defined time), the late result or output is still accepted.
Hard Real-Time Embedded Systems –
In these systems the time/deadline of a task is strictly enforced. The
task must be completed within the defined time interval, otherwise the
result/output may not be accepted.
Examples:

Traffic control system


Military usage in defence sector
Medical usage in health sector

Stand Alone Embedded Systems:


Stand-alone embedded systems are independent systems that can work by
themselves; they do not depend on a host system. They take input in
digital or analog form and provide the output.
Examples:

MP3 players
Microwave ovens
calculator

Networked Embedded Systems:


Networked embedded systems are connected to a network, which may be
wired or wireless, to provide output to the attached devices. They
communicate with an embedded web server through the network.
Examples:

Home security systems


ATM machine
Card swipe machine

Mobile Embedded Systems:


Mobile embedded systems are small, easy to use, and require fewer
resources. They are among the most widely used embedded systems, and
from a portability point of view they are also the best.
Examples:

MP3 player
Mobile phones
Digital Camera
Based on the performance of the micro-controller, embedded systems are
divided into three types as follows:

Small Scale Embedded Systems:


Small-scale embedded systems are designed using an 8-bit or 16-bit
micro-controller and can be powered by a battery. The processor uses
very limited memory and processing speed. These systems usually do not
act as independent systems; they act as a component of a larger
computer system and are dedicated to a specific task.

Medium Scale Embedded Systems:


Medium-scale embedded systems are designed using a 16-bit or 32-bit
micro-controller. They are faster than small-scale embedded systems,
and the integration of hardware and software in these systems is more
complex. Programming languages such as C, C++, and Java are used to
develop them, along with software tools like compilers, debuggers, and
simulators.

Sophisticated or Complex Embedded Systems:


Sophisticated or complex embedded systems are designed using multiple
32-bit or 64-bit micro-controllers. They are developed to perform
large-scale, complex functions and have high hardware and software
complexity. Both custom hardware and software components are used to
design the final system or hardware product.
Components of Embedded System: -

[Block diagram: Input Port → System Core (microcontroller or
microprocessor) → Output Port, with Memory and other devices connected
to the core.]

Microcontroller or Microprocessor: This is the heart of the embedded
system, responsible for executing instructions and controlling the
system's operation. Microcontrollers are often used in simpler systems,
while more complex systems may use microprocessors.
Memory: Embedded systems include various types of memory. Program
memory (Flash or ROM) stores the program or firmware that the embedded
system runs; data memory (RAM) stores data and variables used by the
system during its operation.
Input Devices: These components allow the embedded system to gather
data or receive input. Common input devices include sensors, switches,
keypads, and communication interfaces like UART or USB.
Output Devices: These components allow the embedded system to provide
feedback or control external elements. Examples of output devices include
LEDs, displays, motors, and communication interfaces like Ethernet or Wi-
Fi.
Peripherals: Embedded systems may include various peripherals to
support their operation, such as timers, counters, analog-to-digital
converters (ADC), and digital-to-analog converters (DAC).
Power Supply: This provides the necessary electrical power to operate the
embedded system. It can be a battery, external power source, or power
management circuitry.
Software/Firmware: This includes the software code or firmware that the
microcontroller or microprocessor executes. It defines the embedded
system's behaviour and functionality.
User Interface: Some embedded systems have user interfaces for
configuration or interaction. This can include buttons, touchscreens, or
graphical displays.
Sensors: Sensors are input devices that allow the embedded system to
gather data from the environment. These sensors can include temperature
sensors, motion sensors, light sensors, and more.
Real-Time Operating System (RTOS): In more complex embedded
systems, an RTOS may be used to manage tasks, scheduling, and resource
allocation, ensuring real-time responsiveness.
Communication Interfaces: Embedded systems often need to
communicate with other devices or systems. Common communication
interfaces include UART, SPI, I2C, CAN, Ethernet, and wireless
interfaces like Wi-Fi, Bluetooth, or Zigbee.
Clock Source: A clock source provides a reference clock signal to
synchronize the operation of the microcontroller or microprocessor and
other components.
Actuators: Actuators are output devices that allow the embedded system to
control or manipulate the environment. Examples include motors,
solenoids, and relays.
Power Management Circuitry: This component manages power
distribution, battery charging, and power-saving features to optimize energy
efficiency.
Enclosure and Housing: In many cases, embedded systems are enclosed
in a physical housing to protect them from environmental factors, dust, and
physical damage.
Architecture of Embedded System: -
Von Neumann Architecture
The Von Neumann architecture was first proposed by the mathematician
and computer scientist John von Neumann. In this architecture, a single
data path, or bus, exists for both instructions and data. As a result,
the CPU performs one operation at a time: it either fetches an
instruction from memory or performs a read/write operation on data. An
instruction fetch and a data operation therefore cannot occur
simultaneously, since they share a common bus.

The Von Neumann architecture supports simple hardware and allows the
use of a single, sequential memory. Because today's processing speeds
vastly outpace memory access times, a very fast but small amount of
memory (a cache) is placed local to the processor.

Harvard Architecture
The Harvard architecture offers separate storage and signal buses for
instructions and data. Computers with this architecture have separate
memory areas for program instructions and data, with separate internal
buses, allowing simultaneous access to both instructions and data. In a
pure Harvard architecture there is no access to instruction storage as
data, and there is no need for the two memories to share properties
such as word width or timing. (In the earliest Harvard machines,
programs had to be loaded by an operator; the processor could not boot
itself.)
EMBEDDED SYSTEM / SOFTWARE ARCHITECTURE

[Layered diagram: Application Software at the top, using an Application
Programming Interface; below it the Operating System, comprising the
kernel, file system, device manager, libraries, and communication
software.]

The software in an embedded system consists of the operating system and
the application software. The operating system is optional; if it is
not present, you need to write your own software routines to access the
hardware.
As embedded systems are constrained for memory, we cannot use an
operating system such as Windows or Unix on them. But still, we need the
services provided by an operating system.
There are two main types of processors used in embedded systems:
1. Ordinary microprocessors – they use separate integrated circuits for
memory and peripherals.
2. Microcontrollers – they have on-chip peripherals, and are therefore
smaller and consume less power.

Types of embedded software architectures:


1- Simple control loop:
In this design, the software simply has a loop that monitors the input
devices. The loop calls subroutines, each of which manages a part of
the hardware or software. Hence it is called a simple control loop, or
programmed input-output.
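The loop-calling-subroutines structure can be sketched in a few lines of C. This is only an illustrative sketch: the "devices" below (read_sensor, update_display, stop_requested) are hypothetical stubs standing in for real hardware-access routines, and a real control loop would run forever rather than return.

```c
#include <stdbool.h>

/* Hypothetical hardware-access stubs; on real hardware these would
   read and write device registers. Here the "sensor" just counts. */
static int sensor_calls = 0;
static int  read_sensor(void)     { return ++sensor_calls; }
static int  last_shown = 0;
static void update_display(int v) { last_shown = v; }
static bool stop_requested(void)  { return sensor_calls >= 5; }

/* The "simple control loop" (programmed I/O): poll each device in
   turn, forever. Each subroutine manages one piece of hardware. */
int control_loop(void)
{
    for (;;) {
        int value = read_sensor();  /* poll the input device         */
        update_display(value);      /* drive the output device       */
        if (stop_requested())       /* a real loop would never exit; */
            return last_shown;      /* we stop here only for testing */
    }
}
```

The whole program is just this one loop; timing is implicit in how long one pass through all the subroutines takes.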
2- Interrupt-controlled system:
This is a method to resolve situations in which the system is
interrupted by multiple events at the same time. The microprocessor
handles this by processing high-priority tasks first, and then
processing the other tasks.
Some embedded systems are predominantly controlled by interrupts. This
means that tasks performed by the system are triggered by different kinds
of events; an interrupt could be generated, for example, by a timer at a
predefined interval, or by a serial port controller receiving data.
This architecture is used if event handlers need low latency, and the event
handlers are short and simple. These systems run a simple task in a main
loop also, but this task is not very sensitive to unexpected delays.
Sometimes the interrupt handler will add longer tasks to a queue structure.
Later, after the interrupt handler has finished, these tasks are executed by
the main loop. This method brings the system close to a multitasking kernel
with discrete processes.
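The pattern of a short interrupt handler deferring longer work to a queue drained by the main loop can be sketched in portable C. Note the assumptions: timer_isr stands in for a real interrupt service routine (on actual hardware it would be installed in the vector table, not called by hand), and the single-producer queue below is deliberately minimal.

```c
#define QUEUE_SIZE 8

/* Work items deferred by the ISR, executed later by the main loop.
   volatile: the ISR and main loop see the same up-to-date values. */
static volatile int queue[QUEUE_SIZE];
static volatile int head = 0, tail = 0;

/* Stand-in for a real interrupt handler. It does the minimum work
   (enqueue the event) so the handler stays short and low-latency. */
void timer_isr(int event)
{
    int next = (head + 1) % QUEUE_SIZE;
    if (next != tail) {           /* drop the event if queue is full */
        queue[head] = event;
        head = next;
    }
}

/* The main loop later drains the queue and runs the longer tasks.
   Returns how many deferred events were handled. */
int process_pending(void)
{
    int handled = 0;
    while (tail != head) {
        int event = queue[tail];
        tail = (tail + 1) % QUEUE_SIZE;
        (void)event;              /* ...do the long task here... */
        handled++;
    }
    return handled;
}
```

This is exactly the structure described above: the interrupt adds work, the main loop executes it, which brings the system close to a multitasking kernel with discrete processes.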
3- Cooperative multitasking
It is also known as non-preemptive multitasking, basically it "is a style of
computer multitasking in which the operating system never initiates a
context switch from a running process to another process."
Cooperative multitasking is very similar to the simple control loop
scheme, except that the loop is hidden in an API. The programmer
defines a series of tasks, and each task gets its own environment to
run in. When a task is idle, it calls an idle routine which passes
control to another task.
The advantages and disadvantages are similar to that of the control loop,
except that adding new software is easier, by simply writing a new task, or
adding to the queue.
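A cooperative scheduler can be sketched as a table of task functions that the hidden loop calls in round-robin order; in this simplified model a task "yields" simply by returning. The task names and the fixed round count are illustrative only.

```c
/* Each task is a function that does a small amount of work and then
   returns, which is how it cooperatively yields the CPU. */
typedef void (*task_fn)(void);

static int blink_count = 0, poll_count = 0;
static void blink_task(void) { blink_count++; }  /* e.g. toggle an LED */
static void poll_task(void)  { poll_count++; }   /* e.g. scan a keypad */

/* Adding new software = adding one more entry to this table. */
static task_fn tasks[] = { blink_task, poll_task };
#define NUM_TASKS (sizeof tasks / sizeof tasks[0])

/* The control loop hidden behind the scheduler API: call every task
   in turn. A real system would loop forever; we run a fixed number
   of rounds so the sketch terminates. */
void run_scheduler(int rounds)
{
    for (int r = 0; r < rounds; r++)
        for (unsigned i = 0; i < NUM_TASKS; i++)
            tasks[i]();           /* task returns = task yields */
}
```

The key property (and limitation) is visible here: a task that never returns starves every other task, because the scheduler never preempts anyone.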
4- Preemptive multitasking / multi-threading
In this type of system, a low-level piece of code switches between
tasks or threads based on a timer invoking an interrupt. This is the
level at which the system is generally considered to have an
"operating system" kernel. Depending on how much functionality is
required, it introduces more or less of the complexity of managing
multiple tasks running conceptually in parallel.
As any code can potentially damage the data of another task (except in
systems using a memory management unit) programs must be carefully
designed and tested, and access to shared data must be controlled by some
synchronization strategy such as message queues, semaphores or a non-
blocking synchronization scheme. Because of these complexities, it is
common for organizations to use an off-the-shelf RTOS, allowing the
application programmers to concentrate on device functionality rather than
operating system services. The choice to include an RTOS brings its
own issues, however, as the selection must be made prior to starting
the application development process. This timing forces developers to
choose the embedded operating system for their device based on current
requirements, and so restricts future options to a large extent.
The level of complexity in embedded systems is continuously growing as
devices are required to manage peripherals and tasks such as serial, USB,
TCP/IP, Bluetooth, Wireless LAN, trunk radio, multiple channels, data and
voice, enhanced graphics, multiple states, multiple threads, numerous wait
states and so on. These trends are leading to the uptake of embedded
middleware in addition to an RTOS.
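The need to guard shared data with a synchronization primitive, mentioned above, can be illustrated in portable C using POSIX threads as a stand-in for RTOS tasks and semaphores (a real RTOS would offer analogous calls, e.g. semaphore take/give; the names and counts here are illustrative):

```c
#include <pthread.h>

/* Shared data that two preemptively scheduled "tasks" both update.
   Without the mutex, increments could interleave and be lost. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* take the "semaphore"  */
        counter++;                    /* critical section      */
        pthread_mutex_unlock(&lock);  /* give it back          */
    }
    return 0;
}

/* Run two concurrent tasks to completion and return the final count.
   With correct locking this is always 200000. */
long run_two_tasks(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, 0, worker, 0);
    pthread_create(&t2, 0, worker, 0);
    pthread_join(t1, 0);
    pthread_join(t2, 0);
    return counter;
}
```

Remove the lock/unlock pair and the final count becomes nondeterministic, which is precisely why preemptive systems must control access to shared data.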
5- Microkernels and exokernels
A microkernel allocates memory and switches the CPU to different threads
of execution. User-mode processes implement major functions such as file
systems, network interfaces, etc.
Exokernels communicate efficiently by normal subroutine calls. The
hardware and all the software in the system are available to and extensible
by application programmers.
6- Monolithic kernels
A monolithic kernel is a relatively large kernel with sophisticated
capabilities adapted to suit an embedded environment. This gives
programmers an environment similar to a desktop operating system like
Linux or Microsoft Windows, and is therefore very productive for
development. On the downside, it requires considerably more hardware
resources, is often more expensive, and, because of the complexity of these
kernels, can be less predictable and reliable.
Common examples of embedded monolithic kernels are embedded Linux,
VxWorks and Windows CE.
Despite the increased cost in hardware, this type of embedded system is
increasing in popularity, especially on the more powerful embedded devices
such as wireless routers and GPS navigation systems.
Exotic Custom Operating System
A small fraction of embedded systems requires safe, timely, reliable or
efficient behaviour unobtainable with any of the above architectures. In this
case an organization builds a system to suit. In some cases, the system may
be partitioned into a "mechanism controller" using special techniques, and
a "display controller" with a conventional operating system. A
communication system passes data between the two.
Creating an exotic custom operating system for an embedded system is a
complex task that requires a deep understanding of embedded systems,
operating system design, and low-level programming. Such a project would
typically involve the following steps and considerations:

Define Requirements:
Start by clearly defining the requirements of your embedded system,
including hardware constraints, real-time requirements, and specific
features you need in your custom OS.
Hardware Compatibility:
Ensure that your custom OS is compatible with the hardware of the
embedded system. This may involve writing or porting device drivers for
the specific hardware components.
Kernel Design:
Design the core of your operating system, known as the kernel. Decide on
the architecture (monolithic, microkernel, etc.) and data structures that will
be used.
Real-Time Considerations:
If real-time capabilities are required, design and implement a real-time
scheduler to meet the system's timing constraints.
Memory Management:
Implement memory management for the embedded system, taking into
account limited memory resources. This may involve designing memory
protection mechanisms.
File System (if needed):
Develop a file system or adapt an existing one to suit the storage
requirements of your embedded system.
Communication and Interprocess Communication (IPC):
Implement communication mechanisms to facilitate interprocess
communication (IPC) between different components or tasks running on
the system.
Security:
Consider security measures, especially if the embedded system will be used
in critical applications. Implement security features to protect against
potential threats.
Development Tools:
Set up development tools and environment for building and testing the
custom OS. This includes cross-compilers, debugging tools, and emulators.
Testing and Debugging:
Rigorously test the operating system on the target hardware, using various
test cases to ensure its stability, reliability, and real-time behavior.
Documentation:
Document the design, implementation, and usage of the custom operating
system for future reference and maintenance.
Maintenance and Updates:
Plan for future maintenance and updates, as embedded systems often have
long lifecycles. Consider how you will patch and improve the OS as needed.
Licensing:
If you plan to distribute your custom OS, consider the licensing and legal
aspects. Be aware of any open-source components or licenses you use.
Optimization:
Optimize the OS for performance and resource usage to ensure that it
runs efficiently on the embedded system.

Deployment:
Deploy the custom OS to the embedded system and ensure that it meets the
functional and performance requirements of the target application.
UNIT 2
EMBEDDED HARDWARE ARCHITECTURE –
32-BIT MICROCONTROLLERS
You probably already know that a microcontroller is a semiconductor
chip that performs arithmetic processing and controls a circuit through
its I/O and peripheral interfaces. The name "32-bit microcontroller"
implies that the microcontroller can handle arithmetic operations on
32-bit values. Compared to an 8-bit microcontroller, a 32-bit
microcontroller takes fewer instruction cycles to execute a function,
owing to its wider data bus.
With its superior performance, a 32-bit microcontroller is often built with
more peripherals and memory. For example, the NXP LPC1700 series
features four 32-bit timers, SD/MMC, USB, Ethernet MAC, CAN, and other
peripherals, which is not possible with an 8-bit MCU.
While they boast powerful performance and are rich in peripherals,
32-bit microcontrollers are power-hungry components. They operate at
higher frequencies, ranging from tens to hundreds of MHz.

When Should You Use a 32-Bit Microcontroller?

First of all, a 32-bit microcontroller is more expensive than an 8-bit
MCU. In some designs, using a 32-bit microcontroller is overkill and
introduces unnecessary cost.

32-bit microcontrollers are also a bad fit for battery-operated circuits, such
as wireless IoT sensors. They would quickly drain the battery even when
operating at the slowest clock rate.
32-bit microcontrollers are a great choice in these circumstances:
When you need a microcontroller capable of handling intense data
processing. For example, a biometric controller that compares a fingerprint
with tens of thousands of records and responds in a split second.
When you have a complex circuit in need of a microcontroller that can
handle multiple peripherals. In such cases, it is more economical to use a
32-bit microcontroller than using a few logic ICs in the circuit.

When the code size of the program is too large for an 8-bit microcontroller.
32-bit microcontrollers are built with larger flash memory.

ARM7TDMI CORE-BASED 32-BIT MICROCONTROLLERS

The term "ARM2 TDM1" is almost certainly a mis-typing of ARM7TDMI, the
classic 32-bit processor core from ARM around which a large family of
microcontrollers has been built. A few points about this family:
ARM Architecture: ARM (Advanced RISC Machine) is a well-known
architecture for designing microprocessors and microcontrollers. ARM
processors are known for their energy efficiency, which makes them a
popular choice for a wide range of embedded applications.
32-Bit Microcontrollers: ARM7TDMI-based microcontrollers use a 32-bit
architecture, which offers advantages in performance and memory
addressing when compared to 8-bit or 16-bit microcontrollers.
ARM7TDMI Core: The letters in the name encode the core's features: T
for the Thumb compressed 16-bit instruction set, D for on-chip Debug
support, M for an enhanced Multiplier, and I for the EmbeddedICE
in-circuit emulation module. Well-known microcontroller families based
on this core include the NXP LPC2000 and Atmel AT91SAM7 series.
When dealing with microcontrollers, it is essential to consult the
datasheets, technical documentation, or official website of the
microcontroller's manufacturer to understand the specific features,
capabilities, and characteristics of the device in question. This
information is crucial for effective development and programming of
embedded systems using these microcontrollers.
FAMILY OF PROCESSOR REGISTER, MEMORY AND DATA TRANSFER,
ARITHMETIC AND LOGIC INSTRUCTIONS
In embedded systems, processor register instructions, memory and data
transfer instructions, arithmetic instructions, and logic instructions
are essential parts of the instruction set architecture of the
microcontroller or processor. These instructions play a crucial role in
controlling the microcontroller, managing data, and performing
computations. An overview of each category follows:
Processor Register Instructions:
Processor register instructions operate on the processor's internal
registers. These instructions allow data manipulation and control flow
within the CPU. Some common register-related instructions include:
MOV: Move data from one register to another.
LDR/STR: Load and store instructions that move data between registers
and memory locations.
PUSH/POP: Push and pop instructions to save and restore registers on the
stack.
Memory and Data Transfer Instructions:
Memory and data transfer instructions are used to read from and write to
memory and perform data transfers between registers and memory. Typical
instructions include:
LDR/STR: These instructions load and store data from/to memory
addresses.
LDM/STM: Load and store multiple registers to/from memory.
MOV: On some architectures (such as x86), MOV can also move data
between registers and memory, in addition to register-to-register
moves.
SWP: Swap instruction for exchanging data between a register and a
memory location atomically.
Arithmetic Instructions:
Arithmetic instructions perform mathematical operations on data, typically
using the processor's ALU (Arithmetic Logic Unit). Common arithmetic
instructions include:
ADD: Addition instruction for adding two values.
SUB: Subtraction instruction for subtracting one value from another.
MUL: Multiplication instruction.
DIV: Division instruction.
INC/DEC: Increment and decrement instructions.
Logic Instructions:
Logic instructions perform bitwise logical operations and bit-shifting.
These are essential for tasks like bit manipulation and boolean operations.
Common logic instructions include:
AND: Bitwise AND operation.
OR: Bitwise OR operation.
XOR: Bitwise XOR operation.
NOT: Bitwise NOT operation.
LSL/LSR: Logical shift left and logical shift right.
The specific instructions available will depend on the architecture of the
microcontroller or processor you are working with. Different processor
families may have variations in their instruction sets, so it's crucial to refer
to the processor's technical documentation, datasheets, and reference
manuals to understand the precise set of instructions and their syntax.
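Each arithmetic and logic instruction above has a direct C operator equivalent, which compilers translate into the corresponding machine instruction. The sketch below mirrors the lists above; the helper names are illustrative, not a standard API.

```c
#include <stdint.h>

/* Each helper is one C operation that typically compiles down to one
   of the instructions listed above. */
uint32_t add32(uint32_t a, uint32_t b) { return a + b;  }  /* ADD */
uint32_t sub32(uint32_t a, uint32_t b) { return a - b;  }  /* SUB */
uint32_t mul32(uint32_t a, uint32_t b) { return a * b;  }  /* MUL */
uint32_t and32(uint32_t a, uint32_t b) { return a & b;  }  /* AND */
uint32_t or32 (uint32_t a, uint32_t b) { return a | b;  }  /* OR  */
uint32_t xor32(uint32_t a, uint32_t b) { return a ^ b;  }  /* XOR */
uint32_t not32(uint32_t a)             { return ~a;     }  /* NOT */
uint32_t lsl32(uint32_t a, unsigned n) { return a << n; }  /* LSL */
uint32_t lsr32(uint32_t a, unsigned n) { return a >> n; }  /* LSR */
```

For instance, masking off the low nibble of a register value (a common bit-manipulation task) is and32(value, 0x0F), which a compiler emits as a single AND instruction.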

Assembly Language
Assembly language was introduced to provide mnemonics, or symbols, for
machine-level code instructions. An assembly language program consists
of mnemonics that are translated into machine code; the program used
for this conversion is known as an assembler.
Assembly language is also called a low-level language because it works
directly with the internal structure of the CPU. To program in assembly
language, a programmer must know all the registers of the CPU.
Programming languages like C, C++, and Java are called high-level
languages because they do not deal with the internal details of the
CPU.
Structure of Assembly Language
An assembly language program is a series of statements, which are either
assembly language instructions such as ADD and MOV, or statements
called directives.
An instruction tells the CPU what to do, while a directive (also called
a pseudo-instruction) gives instructions to the assembler. For example,
ADD and MOV are instructions which the CPU runs, while ORG and END are
assembler directives. The ORG directive tells the assembler where in
memory to place the following code (location 0 in the example below),
while END indicates the end of the source code. An assembly language
instruction consists of the following four fields −
[ label: ] mnemonics [ operands ] [;comment ]
A square bracket ( [ ] ) indicates that the field is optional.

The label field allows the program to refer to a line of code by name. The
label fields cannot exceed a certain number of characters.
The mnemonics and operands fields together perform the real work of the
program and accomplish the tasks. In statements like "ADD A, C" and
"MOV C, #68", ADD and MOV are the mnemonics, which produce the opcodes,
while "A, C" and "C, #68" are the operands. These two fields could also
contain directives.
Directives do not generate machine code and are used only by the
assembler, whereas instructions are translated into machine code for the
CPU to execute.
1. 0000      ORG 0H          ;start (origin) at location 0
2. 0000 7D25 MOV R5,#25H     ;load 25H into R5
3. 0002 7F34 MOV R7,#34H     ;load 34H into R7
4. 0004 7400 MOV A,#0        ;load 0 into A
5. 0006 2D   ADD A,R5        ;add contents of R5 to A
6. 0007 2F   ADD A,R7        ;add contents of R7 to A
7. 0008 2412 ADD A,#12H      ;add to A the value 12H
8. 000A 80FE HERE: SJMP HERE ;stay in this loop
9. 000C      END             ;end of asm source file
The comment field begins with a semicolon which is a comment indicator.
Notice the Label "HERE" in the program. Any label which refers to an
instruction should be followed by a colon.

Assembling and running an 8051 Program


The steps to create, assemble, and run an assembly language program are
as follows-
First, we use an editor to type in a program similar to the one above.
Editors like the MS-DOS EDIT program that comes with Microsoft
operating systems can be used to create or edit the program. The editor
must be able to produce an ASCII file. In the next step, the assembler
expects the source file to have the "asm" extension.
The "asm" source file contains the program code created in Step 1, and
is fed to an 8051 assembler. The assembler converts the assembly
language instructions into machine code and produces an .obj file
(object file) and a .lst file (list file). Because the "asm" file is
also called the source file, some assemblers require it to have the
"src" extension instead. The "lst" file is optional; it is very useful
to the programmer because it lists all the opcodes and addresses, as
well as any errors the assembler detected.
The third step is called linking. The link program takes one or more
object files and produces an absolute object file with the extension
"abs".
Next, the "abs" file is fed to a program called "OH" (object-to-hex
converter), which creates a file with the extension "hex" that is ready
to be burned into the ROM.
Data Type
The 8051 microcontroller has a single data type of 8 bits, and each
register is also 8 bits in size. The programmer has to break down data
larger than 8 bits (00 to FFH, or 0 to 255 in decimal) so that it can
be processed by the CPU.
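For example, a 16-bit value must be handled one byte at a time on an 8-bit CPU. The split and re-assembly can be sketched in C (an 8051 C compiler would generate exactly this kind of byte-at-a-time code; the helper names are illustrative):

```c
#include <stdint.h>

/* Split a 16-bit value into the two 8-bit halves that an 8051-class
   CPU would actually move through its 8-bit registers. */
uint8_t low_byte (uint16_t v) { return (uint8_t)(v & 0xFF); }
uint8_t high_byte(uint16_t v) { return (uint8_t)(v >> 8);   }

/* Re-assemble the original 16-bit value from the two bytes. */
uint16_t combine(uint8_t hi, uint8_t lo)
{
    return (uint16_t)(((uint16_t)hi << 8) | lo);
}
```

A 16-bit addition on the 8051 works the same way: add the low bytes first, then add the high bytes plus the carry.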

DB (Define Byte)
The DB directive is the most widely used data directive in the
assembler. It is used to define 8-bit data, and it can define data in
decimal, binary, hex, or ASCII format. For decimal, the "D" after the
number is optional, but "B" (binary) and "H" (hexadecimal) are
required.

To indicate ASCII, simply place the characters in quotation marks
('like this'). The assembler generates the ASCII codes for the
numbers/characters automatically. The DB directive is the only
directive that can be used to define ASCII strings larger than two
characters; therefore, it should be used for all ASCII data
definitions. Some examples of DB are given below −
ORG 500H
DATA1: DB 28 ; DECIMAL (1C in hex)
DATA2: DB 00110101B ; BINARY (35 in hex)
DATA3: DB 39H ; HEX
ORG 510H
DATA4: DB "2591" ; ASCII NUMBERS
ORG 520H
DATA6: DB "MY NAME IS Michael" ; ASCII CHARACTERS
Either single or double quotes can be used around ASCII strings. DB is also
used to allocate memory in byte-sized chunks.
Assembler Directives
Some of the directives of 8051 are as follows −
ORG (origin) − The origin directive is used to indicate the beginning of the
address. It takes the numbers in hexa or decimal format. If H is provided
after the number, the number is treated as hexa, otherwise decimal. The
assembler converts the decimal number to hexa.
EQU (equate) − It is used to define a constant without occupying a memory
location. EQU associates a constant value with a data label so that when the
label appears in the program, its constant value is substituted for the label.
While executing an instruction such as "MOV R3, #COUNT", the register
R3 will be loaded with the value 25 (notice the # sign). The advantage of
using EQU is that the programmer can change the constant once and the
assembler will change all of its occurrences; the programmer does not have
to search the entire program.
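A short 8051 assembly sketch of the idea, consistent with the MOV R3, #COUNT example above (the value 25 comes from the text):

```
COUNT EQU 25          ; associate the constant 25 with the label COUNT
      ORG  0H
      MOV  R3, #COUNT ; assembled as MOV R3, #25; COUNT occupies no memory
      END
```

Changing the single EQU line updates every occurrence of COUNT when the program is reassembled.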
END directive − It indicates the end of the source (asm) file. The END
directive is the last line of the program; anything after the END directive is
ignored by the assembler.
Labels in Assembly Language
All the labels in assembly language must follow the rules given below −
Each label name must be unique. The names used for labels in assembly
language programming consist of alphabetic letters in both uppercase and
lowercase, number 0 through 9, and special characters such as question
mark (?), period (.), at the rate @, underscore (_), and dollar($).
The first character must be an alphabetic character; it cannot be a number.
Reserved words cannot be used as labels in the program. For example,
ADD and MOV are reserved words, since they are instruction mnemonics.
I/O OPERATIONS AND INTERRUPT STRUCTURE
An interrupt is a signal to the processor emitted by hardware or software
indicating an event that needs immediate attention. Whenever an interrupt
occurs, the controller completes the execution of the current instruction and
starts the execution of an Interrupt Service Routine (ISR) or Interrupt
Handler. ISR tells the processor or controller what to do when the interrupt
occurs. The interrupts can be either hardware interrupts or software
interrupts.
Hardware Interrupt
A hardware interrupt is an electronic alerting signal sent to the processor
from an external device, like a disk controller or an external peripheral. For
example, when we press a key on the keyboard or move the mouse, they
trigger hardware interrupts which cause the processor to read the keystroke
or mouse position.
Software Interrupt
A software interrupt is caused either by an exceptional condition or a special
instruction in the instruction set which causes an interrupt when it is
executed by the processor. For example, if the processor's arithmetic logic
unit executes an instruction that divides a number by zero, it raises a
divide-by-zero exception, causing the computer to abandon the calculation
or display an error message. Software interrupt instructions work similarly
to subroutine calls.
What is Polling?
The state of continuous monitoring is known as polling. The
microcontroller keeps checking the status of other devices; and while doing
so, it does no other operation and consumes all its processing time for
monitoring. This problem can be addressed by using interrupts.
In the interrupt method, the controller responds only when an interruption
occurs. Thus, the controller is not required to regularly monitor the status
(flags, signals etc.) of interfaced and inbuilt devices.
Interrupt Service Routine
For every interrupt, there must be an interrupt service routine (ISR), or
interrupt handler. When an interrupt occurs, the microcontroller runs the
interrupt service routine. For every interrupt, there is a fixed location in
memory that holds the address of its interrupt service routine, ISR. The
table of memory locations set aside to hold the addresses of ISRs is called
the Interrupt Vector Table.
Interrupt Vector Table
There are six interrupts, including RESET, in the 8051.

Interrupt                          ROM Location (Hex)   Pin
Reset                              0000                 9
External HW interrupt 0 (INT0)     0003                 P3.2 (12)
Timer 0 (TF0)                      000B                 -
External HW interrupt 1 (INT1)     0013                 P3.3 (13)
Timer 1 (TF1)                      001B                 -
Serial COM (RI and TI)             0023                 -
When the reset pin is activated, the 8051 jumps to the address location 0000.
This is power-up reset. Two interrupts are set aside for the timers: one for
timer 0 and one for timer 1. Memory locations are 000BH and 001BH
respectively in the interrupt vector table.
Two interrupts are set aside for hardware external interrupts. Pin no. 12 and
Pin no. 13 in Port 3 are for the external hardware interrupts INT0 and INT1,
respectively. Memory locations are 0003H and 0013H respectively in the
interrupt vector table.
Serial communication has a single interrupt that belongs to both receive and
transmit. Memory location 0023H belongs to this interrupt.
Steps to Execute an Interrupt
When an interrupt gets active, the microcontroller goes through the
following steps −
• The microcontroller closes the currently executing instruction and saves
the address of the next instruction (PC) on the stack.
• It also saves the current status of all the interrupts internally (i.e., not on
the stack).
• It jumps to the memory location of the interrupt vector table that holds
the address of the interrupts service routine.
• The microcontroller gets the address of the ISR from the interrupt vector
table and jumps to it. It executes the interrupt service routine until it
reaches the last instruction of the subroutine, which is RETI (return from
interrupt).
• Upon executing the RETI instruction, the microcontroller returns to the
location where it was interrupted. First, it gets the program counter (PC)
address from the stack by popping the top bytes of the stack into the PC.
Then, it starts to execute from that address.
Edge Triggering vs. Level Triggering
Interrupt modules are of two types − level-triggered or edge-triggered.
Level Triggered:
A level-triggered interrupt module generates an interrupt whenever the
level of the interrupt source is asserted. If the interrupt source is still
asserted when the firmware interrupt handler handles the interrupt, the
interrupt module will regenerate the interrupt, causing the interrupt handler
to be invoked again. Level-triggered interrupts are cumbersome for
firmware.

Edge Triggered:
An edge-triggered interrupt module generates an interrupt only when it
detects an asserting edge of the interrupt source. The edge is detected when
the interrupt source level actually changes; it can also be detected by
periodic sampling that finds an asserted level where the previous sample
was de-asserted. Edge-triggered interrupts can be acted upon immediately,
no matter how the interrupt source behaves. They keep the firmware's code
complexity low, reduce the number of conditions for firmware, and provide
more flexibility when interrupts are handled.
Enabling and Disabling an Interrupt
Upon Reset, all the interrupts are disabled even if they are activated. The
interrupts must be enabled using software in order for the microcontroller
to respond to those interrupts.
IE (interrupt enable) register is responsible for enabling and disabling the
interrupt. IE is a bit addressable register.
Interrupt Enable Register
EA  -  ET2  ES  ET1  EX1  ET0  EX0

• EA − Global interrupt enable/disable.
• −  − Undefined (reserved).
• ET2 − Enable Timer 2 interrupt.
• ES − Enable serial port interrupt.
• ET1 − Enable Timer 1 interrupt.
• EX1 − Enable External 1 interrupt.
• ET0 − Enable Timer 0 interrupt.
• EX0 − Enable External 0 interrupt.

To enable an interrupt, we take the following steps −
Bit D7 of the IE register (EA) must be high to allow the rest of the register
to take effect.
If EA = 1, interrupts will be enabled and will be responded to, if their
corresponding bits in IE are high. If EA = 0, no interrupts will respond, even
if their associated pins in the IE register are high.
Interrupt Priority in 8051
We can alter the interrupt priority by assigning the higher priority to any
one of the interrupts. This is accomplished by programming a register called
IP (interrupt priority).
The following figure shows the bits of IP register. Upon reset, the IP register
contains all 0's. To give a higher priority to any of the interrupts, we make
the corresponding bit in the IP register high.
-  -  -  -  PT1  PX1  PT0  PX0

-   IP.7  Not implemented
-   IP.6  Not implemented
-   IP.5  Not implemented
-   IP.4  Not implemented
PT1 IP.3  Defines the Timer 1 interrupt priority level
PX1 IP.2  Defines the External 1 interrupt priority level
PT0 IP.1  Defines the Timer 0 interrupt priority level
PX0 IP.0  Defines the External 0 interrupt priority level

Interrupt inside Interrupt
What happens if the 8051 is executing an ISR that belongs to an interrupt
and another one gets active? In such cases, a high-priority interrupt can
interrupt a low-priority interrupt. This is known as interrupt inside interrupt.
In the 8051, a low-priority interrupt can be interrupted by a high-priority
interrupt, but not by another low-priority interrupt.
Triggering an Interrupt by Software
There are times when we need to test an ISR by way of simulation. This can
be done with simple instructions that set the interrupt flag high and thereby
cause the 8051 to jump to the interrupt vector table. For example, with the
Timer 1 interrupt enabled in the IE register, the instruction SETB TF1 will
interrupt the 8051 in whatever it is doing and force it to jump to the
interrupt vector table.
NETWORKS FOR EMBEDDED SYSTEMS
In digital communication, there are two types of data transfer:
Serial Communication
Parallel Communication
Serial Communication: In serial communication, the data bits are
transmitted one at a time in a sequential manner over the data bus or
communication channel. In order to understand this properly, let us consider
this situation:
Imagine you are shooting at a target with a bow and arrow. How do the
arrows fly from the bow? One at a time, right? This is exactly the case with
serial communication; the data bits travel from one embedded device to
another one at a time, serially.
Now that we’ve covered the basics of serial communication in embedded
systems, let’s move ahead and discuss the various types of serial
communication protocols.
CAN Protocol
The CAN or Controller Area Network protocol was developed by Robert
Bosch GmbH in the 1980s. Earlier, during the late 1970s,
manufacturers started using advanced features in their automobiles, such as
anti-lock braking systems, air conditioners, central door locks, airbags, gear
control, engine management systems and so on.
Even though drivers (consumers) loved these new features, they came with
some downsides too. These advancements required the addition of heavy
and bulky wires, expensive mechanical parts and complex designs, which
led to a rise in both the costs and complexity of the in-vehicle electrical and
mechanical systems. Fortunately, Robert Bosch made life easier for the
engineers by introducing the CAN protocol. The CAN protocol changed
the management of electronic sub-systems and the communication between
intelligent sensors, providing a simpler, cheaper method that did all of that
over a single cable.
The widespread popularity of the CAN protocol led to its standardization
as the ISO 11898 in 1993. Today, the application of CAN protocol spans
the embedded systems spectrum from industrial automation to commercial
restaurant fryers and beyond.
The development of these CAN applications ranges from fairly simple to
extremely complex, and the number of devices that rely on this protocol is
substantial. If they are not designed, developed, and tested properly, they
can cause severe damage, so it is very important to make sure development
is well monitored and tested. One easy and important development and test
tool for CAN applications is the protocol analyser.
Uses of the CAN Protocol:
The CAN protocol is often used for in-vehicle networking of electronic
components.
It is also used in aerospace applications for in-flight analysis and
networking of components such as fuel systems, pumps and more.
Manufacturers of medical equipment often use CAN for creating an
embedded network within medical devices.
USB Protocol
It isn’t a secret that USB, the Universal Serial Bus protocol, is by far the
most common protocol in use. You can probably find a dozen USB cables
and connectors lying around in your home. Originally developed in the
1990s, it was intended to standardize the connection of a number of
peripheral devices to a computer. Today, you can connect almost anything
from external hard drives to printers to your laptop/computer through USB
cables.
The USB protocol was designed for two specific purposes:
Communicate with peripheral devices
Supply power to the connected devices if applicable
There are many variations of USB connectors - the standard USB that you
find on keyboards, mice, and printers. Micro USB and USB Type-C are
used mostly with cell phones - however, their popularity in other devices is
growing.
When a device communicates with another device through USB protocol,
data travels in the form of packets. All the data packets are composed of 8-
bit bytes (or multiples of 8-bit bytes, depending on the supported bitrate),
where the LSB or Least Significant Bit is transmitted first. If you are
building an embedded system that involves USB, make sure you use a USB
protocol analyser to monitor the data on the bus.
Uses of the USB Protocol:
Connect peripheral devices such as keyboards, mice, and printers to a
computer
Supply power to the peripheral devices
Charge accessories such as power banks and devices like cell phones and
Bluetooth speakers directly from the power outlet or from computers.
Parallel Communication
Parallel communication in embedded systems refers to the simultaneous
transmission of multiple bits or data lines between various components or
devices within the system. It is an essential concept in embedded systems
where high-speed and real-time data exchange is required. Parallel
communication is in contrast to serial communication, where data is
transmitted one bit at a time.
Here are some key aspects of parallel communication in embedded systems:
Data Width: Parallel communication involves multiple data lines or
channels that transmit data simultaneously. The width of the data bus
determines how many bits can be transmitted in parallel. Common data bus
widths include 8-bit, 16-bit, and 32-bit, though wider buses are also used in
more advanced systems.
Synchronous and Asynchronous: Parallel communication can be either
synchronous or asynchronous. In synchronous communication, data is
transferred in sync with a clock signal, ensuring precise timing.
Asynchronous communication does not rely on a clock signal and instead
uses start and stop bits to frame data.
Speed and Throughput: Parallel communication is typically faster than
serial communication due to the simultaneous transmission of multiple bits.
This higher data throughput makes parallel communication suitable for
applications where real-time data exchange is crucial, such as in high-
performance embedded systems, data buses, or memory interfaces.
Crosstalk and Signal Integrity: As the number of parallel lines increases,
the potential for crosstalk (interference between adjacent lines) and signal
integrity issues also increases. Proper design and shielding techniques are
essential to mitigate these issues.
Wiring Complexity: Parallel communication requires more wires or traces
on a printed circuit board (PCB) than serial communication. This can lead
to increased complexity, board space, and cost.
Applications: Parallel communication is commonly used in embedded
systems for various purposes, such as connecting microcontrollers to
memory, interfacing with sensors, and communicating with peripheral
devices like displays and high-speed data interfaces (e.g., PCI, PCIe).
Examples of Interfaces: Common examples of parallel communication
interfaces in embedded systems include the Parallel Peripheral Interface
(PPI), Parallel Advanced Technology Attachment (PATA) for connecting
hard drives, and memory buses like the Synchronous Dynamic Random-
Access Memory (SDRAM) interface.
Challenges: Ensuring data synchronization, managing data bus width, and
dealing with power consumption are some challenges associated with
parallel communication in embedded systems.
When choosing between parallel and serial communication in an embedded
system, it's essential to consider factors such as data transfer speed, the
number of devices to connect, available pins, and the specific requirements
of your application. Parallel communication is often favoured when high-
speed data transfer and real-time processing are necessary, but it comes with
design complexities that need to be addressed effectively.
Parallel Bus Protocols and the PCI and GPIB Buses in Embedded
Systems
Parallel bus protocols and specific bus standards like PCI (Peripheral
Component Interconnect) and GPIB (General Purpose Interface Bus) are
important in embedded systems for various data transfer and
communication tasks. Let's take a closer look at these two parallel bus
protocols and their use in embedded systems:
Parallel Bus Protocol: A parallel bus protocol is a set of rules and standards
that govern how data is transmitted and received on a parallel data bus. It
defines the format of data, timing, voltage levels, and other parameters
required for proper communication. Parallel bus protocols can vary based
on the application and the specific embedded system's requirements.
Common parallel bus protocols include Address-Data Multiplexing
(ADM), Split Transaction Bus, and Demand Transfer Protocol.
PCI (Peripheral Component Interconnect):
PCI is a widely used high-speed parallel bus standard that provides a
standardized interface for connecting various hardware components in
computers and embedded systems.
PCI supports 32-bit and 64-bit data buses, and it offers high-speed data
transfer with defined bus protocols and features like bus mastering and
plug-and-play capabilities.
In embedded systems, PCI can be used to connect devices such as graphics
cards, network adapters, and other peripherals. It's especially prevalent in
applications that require high bandwidth and low latency.
GPIB (General Purpose Interface Bus):
GPIB, also known as IEEE-488, is a parallel bus standard designed for
instrument control and data transfer in laboratory and industrial
environments.
GPIB is relatively slower than PCI but is still widely used in embedded
systems that involve test and measurement equipment, data acquisition, and
instrumentation.
It uses a multi-master, parallel communication protocol, making it suitable
for connecting multiple instruments and devices in a daisy-chain fashion.
In embedded systems, both PCI and GPIB can serve specific purposes:
PCI in Embedded Systems: PCI provides a high-speed, low-latency
communication interface that is useful in applications requiring fast data
transfer, such as data-intensive signal processing or high-performance
computing.
Embedded systems may use PCI or its variants (e.g., PCIe) for connecting
peripherals like high-speed data acquisition cards, graphics cards, and
storage controllers.
GPIB in Embedded Systems:
GPIB is commonly used in embedded systems that involve scientific
instruments, measurement devices, and other equipment found in
laboratory and industrial settings.
Embedded systems can use GPIB to control and gather data from multiple
instruments simultaneously, making it suitable for automated testing, data
logging, and process control.
The choice between these parallel bus protocols depends on the specific
requirements of the embedded system, including the data transfer rate, the
types of devices to be connected, and the available hardware. Additionally,
newer standards like PCIe have evolved from PCI to offer even higher data
transfer rates and advanced features, making them suitable for modern
embedded systems with demanding performance needs.
UNIT 3
SOFTWARE DEVELOPMENT
Embedded Programming in C and C++: Source Code Engineering Tools
for Embedded Systems
Embedded programming in C and C++ requires specialized tools and
software to develop and maintain embedded systems. These tools help with
code development, debugging, and optimizing for resource-constrained
environments. Here's a list of some essential tools and software for
embedded systems development using C and C++:
Integrated Development Environments (IDEs):
Eclipse: Eclipse is a widely used open-source IDE that supports embedded
development through various plugins like CDT (C/C++ Development
Tooling) and platform-specific toolchains.
Keil MDK (Microcontroller Development Kit): A popular IDE specifically
designed for ARM-based microcontrollers. It includes a compiler,
debugger, and other tools.
IAR Embedded Workbench: Another popular IDE with support for various
microcontroller families, offering optimizing C/C++ compilers and
debugging tools.
Atmel Studio: Developed by Microchip, Atmel Studio is tailored for their
AVR and ARM microcontrollers.
Cross-Platform Compilers:
GNU Compiler Collection (GCC): GCC is an open-source compiler suite
that supports multiple platforms and microcontrollers. It's commonly used
in embedded development.
ARM Compiler: For ARM-based microcontrollers, ARM provides their
compiler tools that offer excellent optimization for their architectures.
Clang/LLVM: These open-source compilers are gaining popularity in
embedded systems for their modular architecture and C/C++ language
support.
Debugging Tools
GDB (GNU Debugger): An open-source debugger that can be used with
various IDEs. It supports a wide range of embedded platforms.
JTAG Debuggers: Hardware debugging tools like Segger J-Link, ST-Link,
and others are essential for low-level debugging in embedded systems.
Code Analysis and Profiling Tools:
Lint: Static code analysis tools like PC-lint and CppCheck help identify
potential issues in your code.
gprof and Valgrind: Profiling tools to optimize code performance and
memory usage.
Version Control Systems (VCS):
Git: Git is the most popular VCS, essential for keeping track of changes in
your embedded codebase and collaborating with a team.
RTOS (Real-Time Operating System)
FreeRTOS: A popular open-source RTOS for embedded systems.
uC/OS: Micrium's microcontroller OS is a well-established choice.
Build Systems:
Make: Makefiles are often used to automate the building process for
embedded systems.
CMake: CMake is a more modern build system that can generate makefiles
and other build configurations.
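A minimal sketch of what such a Makefile might look like for a hypothetical ARM GCC cross-build (the toolchain prefix, flags, and file names are assumptions for illustration only):

```make
# Hypothetical cross-compilation Makefile sketch
CC     = arm-none-eabi-gcc
CFLAGS = -Os -mcpu=cortex-m3 -Wall

firmware.elf: main.o driver.o
	$(CC) $(CFLAGS) -o $@ $^

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

clean:
	rm -f *.o firmware.elf
```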
Device Libraries:
Many microcontroller manufacturers provide libraries to interface with
their hardware, making it easier to program embedded systems. For
example, STM32Cube for STM32 microcontrollers.
Simulation Tools:
QEMU: An open-source emulator that can simulate various architectures,
which is useful for testing code without the actual hardware.
Serial Communication Tools:
PuTTY, Tera Term, or Real Term: These are terminal emulation programs
for debugging serial communication with embedded systems.
Documentation Tools:
Doxygen: A popular tool for generating documentation from code
comments.
IDE Plugins:
Many IDEs have plugins and extensions that cater to embedded
development, offering features like code generation, device-specific
support, and debugging.
Program modelling concepts in Single and Multiprocessor System
Program modelling concepts in single and multiprocessor systems are
important for designing, understanding, and optimizing the behaviour of
programs running on these systems. Here are some program modelling
concepts for both types of systems:
Single Processor System:
1. Sequential Execution: In single processor systems, programs are
typically modelled as a sequence of instructions executed one after the
other. This is a straightforward, linear model of program execution.
2. Instruction-Level Modelling: Programs can be analysed at the
instruction level to understand their flow and dependencies. Techniques
like pipelining and instruction scheduling can be used to optimize
instruction execution.

3. Control Flow Graph (CFG): A CFG is a model that represents the control
flow within a program. It shows the possible execution paths and decisions
that can be made during program execution.

4. Data Flow Analysis: Data flow analysis helps identify how data values
change as a program executes. This is important for optimizing code and
detecting potential issues.

5. Performance Profiling: Profiling tools can be used to collect data on
program execution, such as the time spent in different functions and the
frequency of certain instructions. This data helps identify performance
bottlenecks.

6. Loop Analysis: Programs often contain loops. Analysing loops and their
characteristics can help in optimizing program performance.

7. Resource Utilization Modelling: In single processor systems, it is
essential to model the utilization of the processor's resources, such as the
CPU, cache, and memory, to optimize program performance.

Multiprocessor System:
1. Parallelism Models:
- Task-Level Parallelism: In multiprocessor systems, tasks within a
program can be executed concurrently on multiple processors. Task-level
parallelism models help identify opportunities for parallel execution.
- Data-Level Parallelism: Data-level parallelism involves parallelizing
operations on different data elements. SIMD (Single Instruction, Multiple
Data) and MIMD (Multiple Instruction, Multiple Data) models are used for
data-level parallelism.
2. Task Scheduling: In multiprocessor systems, task scheduling algorithms
are crucial for deciding which tasks are executed on which processors and
when.

3. Load Balancing: Load balancing algorithms aim to distribute the
workload evenly across multiple processors to make efficient use of
resources and ensure fairness.

4. Inter-Processor Communication: Models for inter-processor
communication are vital for understanding and optimizing how tasks and
data are communicated between processors.

5. Shared Memory and Distributed Memory Models: Multiprocessor
systems can be categorized into shared-memory and distributed-memory
architectures. These models affect how data is shared and accessed among
processors.

6. Scalability Analysis: Analysing how program performance scales with
the number of processors is essential in multiprocessor systems.
Understanding Amdahl's Law and Gustafson's Law is key to modelling
scalability.

7. Synchronization and Data Dependencies: In multiprocessor systems,
modelling and managing synchronization primitives and dependencies
between tasks are critical to avoid race conditions and ensure correctness.

8. Parallel Performance Metrics: Models for assessing parallel program
performance may include speedup, efficiency, and scalability.

9. Fault Tolerance: In some multiprocessor systems, models for fault
tolerance are important for ensuring system reliability in the presence of
hardware failures.

Both single and multiprocessor systems benefit from program modelling to
understand and optimize their performance, albeit with different focuses.
Single processor systems emphasize optimizing the execution of sequential
code, while multiprocessor systems aim to exploit parallelism and efficient
resource utilization.
Software Development Process
The software development process in embedded systems is a specialized
field that focuses on creating software for resource-constrained, real-time,
and often safety-critical applications. It typically involves several stages,
including requirements analysis, design, coding, testing, and maintenance.
Here's an overview of the software development process in embedded
systems:
Requirements Analysis:
Define the functional and non-functional requirements of the embedded
system. This includes understanding the system's purpose, its
environmental constraints, and its hardware and software requirements.
Identify any industry or safety standards that the system must comply with,
such as ISO 26262 for automotive or DO-178C for avionics.
System Design:
Define the system architecture, including the selection of hardware
components (microcontrollers, sensors, actuators) and communication
protocols.
Create a high-level software architecture, outlining the major software
components and their interactions.
Develop a real-time operating system (RTOS) and/or firmware architecture,
if necessary.
Detailed Design:
Create detailed design documents for each software module, specifying the
algorithms, data structures, and interfaces.
Consider memory and performance constraints, as embedded systems often
have limited resources.
Implementation (Coding):
Write the source code for each software module following the design
specifications.
Use programming languages like C and C++ that are commonly used in
embedded systems.
Pay special attention to low-level hardware access and optimization, as this
is a critical aspect of embedded development.
Testing:
Perform various levels of testing, including unit testing, integration testing,
and system testing.
Ensure that the software meets the specified functional and safety
requirements.
Real-time and performance testing is crucial to verify that the system meets
its real-time constraints.
Verification and Validation (V&V):
Verification ensures that the software is built correctly according to the
design specifications.
Validation confirms that the software meets the user's needs and is fit for its
intended purpose.
This step is especially important for safety-critical embedded systems.
Integration:
Integrate the software with the hardware components, device drivers, and
any external systems it needs to communicate with.
Test the complete system for proper functionality and performance.
Deployment:
Install the software onto the target embedded hardware.
Validate that the system behaves as expected in its operational environment.
Maintenance and Updates:
Embedded systems often have long lifecycles, and maintenance is critical
to address issues, apply updates, and ensure continued functionality.
Keep track of hardware and software changes that may affect the system.
Documentation:
Maintain comprehensive documentation throughout the development
process. This includes design documents, source code comments, and user
manuals.
Documentation is essential for future maintenance and potential audits.
Version Control:
Use a version control system (e.g., Git) to manage changes to the source
code, allowing for collaboration and tracking of revisions.
Regulatory Compliance:
For safety-critical and regulated industries (e.g., automotive, aerospace,
medical), ensure that the software development process complies with
industry-specific standards and regulations.
Security Considerations:
Implement security measures to protect against potential vulnerabilities, as
security is increasingly important in embedded systems (e.g., IoT devices).
Code Reviews and Quality Assurance:
Conduct code reviews to ensure code quality, correctness, and adherence to
coding standards.
Software Engineering Practices in Embedded Software Development
Software engineering practices in embedded software development are
critical to ensure the reliability, performance, and maintainability of the
software that runs on resource-constrained embedded systems. Here are
some essential software engineering practices for embedded software
development:
Requirements Analysis:
Begin with a clear understanding of the system's functional and non-
functional requirements. Define the software's scope and limitations.
Design for Resource Constraints: Consider the limited processing power,
memory, and storage available in embedded systems. Optimize code and
data structures for minimal resource usage.
Modular Design:
Use a modular approach to design software components that can be
developed, tested, and maintained independently. This simplifies
debugging and makes it easier to replace or upgrade specific parts of the
software.
Real-time Analysis: Analyse and specify real-time requirements for the
system. Ensure that the software can meet its deadlines and respond to
events within defined time frames.
Use of Real-time Operating Systems (RTOS):
Employ an RTOS to manage task scheduling, inter-task communication,
and resource allocation. RTOSs provide a structured way to handle real-
time requirements.
Coding Standards:
Define and follow coding standards to ensure consistency in code style and
improve code readability. This is crucial for maintainability.
Testing and Validation:
Rigorous testing is essential. Perform unit testing, integration testing, and
system testing. Validate that the software meets the specified functional and
safety requirements.
Static Analysis: Use static analysis tools to detect potential issues in the
code early in the development process. This can help identify issues related
to code quality, resource usage, and safety.
Dynamic Analysis: Employ dynamic analysis tools for profiling, code
coverage analysis, and memory analysis. This helps in optimizing
performance and identifying memory leaks or corruption.
Continuous Integration: Implement a continuous integration (CI) process to
automate builds and testing. CI systems can ensure that the software
remains functional as changes are made to the codebase.
Version Control: Use a version control system (e.g., Git) to manage changes
to the source code. This enables collaboration, tracks revisions, and
provides a history of the codebase.
Documentation: Maintain comprehensive documentation, including
design documents, source code comments, and user manuals.
Documentation is vital for understanding and maintaining the software.
Safety Standards Compliance: For safety-critical systems (e.g., automotive,
aerospace), adhere to relevant safety standards such as ISO 26262
(automotive) or DO-178C (avionics).
Security Considerations: Implement security measures to protect against
potential vulnerabilities. Embedded systems are increasingly targeted by
security threats, especially in IoT applications.
Code Reviews and Peer Collaboration: Conduct code reviews to ensure
code quality, correctness, and adherence to coding standards. Collaborate
with team members to share knowledge and insights.
Update and Maintenance Planning: Plan for software updates and
maintenance throughout the product's lifecycle. Embedded systems often
have long lifecycles and require ongoing support.
Regulatory Compliance: Ensure that the software development process
complies with industry-specific standards and regulations, particularly in
regulated industries.
Fault Tolerance and Error Handling: Implement fault tolerance mechanisms
and robust error handling to ensure the system can recover from unexpected
failures.
Performance Optimization: Profile and optimize critical parts of the
software for performance, making sure the software can meet its real-time
constraints.
Code Reusability: Encourage code reusability to reduce development effort
and maintain consistent functionality across projects.
HARDWARE/SOFTWARE CO-DESIGN IN EMBEDDED SYSTEMS: -
Hardware/software co-design in embedded systems is a collaborative
approach to developing both the hardware and software components of a
system in parallel, with the goal of optimizing system performance, power
efficiency, and cost. It acknowledges that the hardware and software are
interdependent and aims to find the best trade-offs between them. Here are
the key aspects and benefits of hardware/software co-design in embedded
systems:
1. Simultaneous Design: Hardware and software development occurs
concurrently rather than sequentially. This means that the two teams or
aspects of development work closely together from the project's inception.
2. System Optimization:
Performance: Co-design allows for fine-tuning the hardware and software
to maximize system performance. Hardware accelerators can offload
specific tasks, allowing the software to focus on other critical functions.
Power Efficiency: Co-design can help minimize power consumption by
selecting energy-efficient hardware components and optimizing software
algorithms.
3. Trade-offs: Hardware vs. Software: Co-design helps determine which
tasks are better suited for hardware acceleration and which are more
efficiently handled in software. This includes decisions like whether to use
a hardware coprocessor or implement a particular function in software.
Flexibility vs. Performance: Co-design balances the need for system
flexibility with the goal of achieving high performance. Decisions on where
to introduce reconfigurability or programmability are crucial.
4. System-level Perspective: Co-design takes a holistic view of the system,
considering interactions between hardware and software components. It's
not just about optimizing one aspect but achieving an overall system-level
optimization.
5. Rapid Prototyping: Prototyping hardware and software simultaneously
allows for early testing and validation of the system's functionality and
performance. This iterative approach can lead to quicker development
cycles.
6. Reusability: Co-design promotes the creation of modular and reusable
components, both in hardware and software. This reusability can save time
and resources in future projects.
7. Design Space Exploration: Co-design enables the exploration of
different design alternatives and trade-offs, which is especially valuable
when working with constrained resources.
8. Communication and Collaboration: Effective communication and
collaboration between hardware and software teams are essential in co-
design. Teams must work together to define interfaces, share requirements,
and resolve issues.
9. Domain-specific Optimization: In some embedded systems, domain-
specific hardware accelerators are designed to improve performance. Co-
design allows for their seamless integration into the system.
10. Mixed-criticality Systems: - In safety-critical systems,
hardware/software co-design can be used to ensure that high-priority,
safety-critical tasks are handled by dedicated hardware while non-safety-
critical tasks are managed by software.
11. Cost Reduction: Co-design can lead to cost savings by minimizing the
need for overpowered hardware components or by reducing the time
required for software optimization.
12. Real-time Systems: For real-time embedded systems, co-design is
critical for meeting strict timing constraints and ensuring that tasks are
appropriately allocated between hardware and software.
13. Prototyping Tools: Use of tools like virtual prototyping, field-
programmable gate arrays (FPGAs), and simulation environments to
facilitate hardware/software co-design.
Hardware/software co-design is particularly relevant in applications such
as IoT devices, automotive systems, consumer electronics, and any
embedded systems where a balance between hardware and software is
critical for success. It requires a multidisciplinary team with expertise in
both hardware and software development and an integrated approach to
design and development.