Unit 1 Ed
Definition:
• An embedded system contains a computer as part of a larger system and does
not exist primarily to provide standard computing services to a user.
• It must be dependable:
  • Energy efficient
  • Run-time efficient
  • Weight efficient
  • Cost efficient
• A real-time system must react to stimuli from the controlled object (or the
operator) within the time interval dictated by the environment.
• For real-time systems, right answers arriving too late (or even too early) are
wrong.
Processor
The processor is the heart of an embedded system. It is the basic unit that takes
inputs and produces an output after processing the data. For an embedded system
designer, it is necessary to have knowledge of both microprocessors and
microcontrollers.
Types of RAM
The RAM family includes two important memory devices: static RAM (SRAM)
and dynamic RAM (DRAM). The primary difference between them is the lifetime
of the data they store. SRAM retains its contents as long as electrical power is
applied to the chip. If the power is turned off or lost temporarily, its contents will
be lost forever. DRAM, on the other hand, has an extremely short data lifetime,
typically about four milliseconds. This is true even when power is applied
constantly.
In short, SRAM has all the properties of the memory you think of when you hear
the word RAM. Compared to that, DRAM seems kind of useless. By itself, it is.
However, a simple piece of hardware called a DRAM controller can be used to
make DRAM behave more like SRAM. The job of the DRAM controller is to
periodically refresh the data stored in the DRAM. By refreshing the data before it
expires, the contents of memory can be kept alive for as long as they are needed.
So DRAM is as useful as SRAM after all.
When deciding which type of RAM to use, a system designer must consider access
time and cost. SRAM devices offer extremely fast access times (approximately
four times faster than DRAM) but are much more expensive to produce. Generally,
SRAM is used only where access speed is extremely important. A lower
cost-per-byte makes DRAM attractive whenever large amounts of RAM are required.
Many embedded systems include both types: a small block of SRAM (a few
kilobytes) along a critical data path and a much larger block of DRAM (perhaps
even megabytes) for everything else.
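One way this split shows up in code is by placing critical buffers in a dedicated SRAM section. The sketch below assumes a GCC-style toolchain; the section name is invented and would have to match a memory region defined in the board's linker script:

```c
#include <stdint.h>

/* Hypothetical section name -- it must correspond to an SRAM region
 * declared in the board's linker script (see Locating, later in this unit). */
#define FAST_RAM __attribute__((section(".sram_fast")))

/* A small buffer on the critical data path is steered into fast SRAM. */
FAST_RAM volatile uint8_t rx_ring[256];

/* Bulk storage is left in the default sections, which the linker script
 * would map onto the larger DRAM region. */
uint8_t frame_buffer[64 * 1024];
```

The attribute only labels the data; it is the linker script that decides which physical memory each section lands in.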
Types of ROM
Memories in the ROM family are distinguished by the methods used to write new
data to them (usually called programming), and the number of times they can be
rewritten. This classification reflects the evolution of ROM devices from
hardwired to programmable to erasable-and-programmable. A common feature of
all these devices is their ability to retain data and programs forever, even during a
power failure.
The very first ROMs were hardwired devices that contained a preprogrammed set
of data or instructions. The contents of the ROM had to be specified before chip
production, so the actual data could be used to arrange the transistors inside the
chip. Hardwired memories are still used, though they are now called "masked
ROMs" to distinguish them from other types of ROM. The primary advantage of a
masked ROM is its low production cost. Unfortunately, the cost is low only when
large quantities of the same ROM are required.
One step up from the masked ROM is the PROM (programmable ROM), which is
purchased in an unprogrammed state. If you were to look at the contents of an
unprogrammed PROM, you would see that the data is made up entirely of 1's. The
process of writing your data to the PROM involves a special piece of equipment
called a device programmer. The device programmer writes data to the device one
word at a time by applying an electrical charge to the input pins of the chip. Once a
PROM has been programmed in this way, its contents can never be changed. If the
code or data stored in the PROM must be changed, the current device must be
discarded. As a result, PROMs are also known as one-time programmable (OTP)
devices.
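A device programmer's blank check, and the one-way nature of PROM programming (bits can only go from 1 to 0), can be sketched as:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* An unprogrammed PROM reads back as all 1s, so a programmer typically
 * performs a "blank check" like this before writing. */
bool prom_is_blank(const uint8_t *image, size_t len)
{
    for (size_t i = 0; i < len; i++)
        if (image[i] != 0xFF)
            return false;
    return true;
}

/* Programming can only clear bits (1 -> 0), never set them, which is why
 * a programmed PROM cannot be changed. This check tells whether `desired`
 * is still reachable from the current contents. */
bool prom_can_program(uint8_t current, uint8_t desired)
{
    /* every bit that must be 1 in `desired` must already be 1 */
    return (current & desired) == desired;
}
```

The function names are illustrative; real device programmers implement the same checks against the chip's programming algorithm.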
Hybrids
As memory technology has matured in recent years, the line between RAM and
ROM has blurred. Now, several types of memory combine features of both. These
devices do not belong to either group and can be collectively referred to as hybrid
memory devices. Hybrid memories can be read and written as desired, like RAM,
but maintain their contents without electrical power, just like ROM. Two of the
hybrid devices, EEPROM and flash, are descendants of ROM devices. These are
typically used to store code. The third hybrid, NVRAM, is a modified version of
SRAM. NVRAM usually holds persistent data.
Flash memory combines the best features of the memory devices described thus
far. Flash memory devices are high density, low cost, nonvolatile, fast (to read, but
not to write), and electrically reprogrammable. These advantages are
overwhelming and, as a direct result, the use of flash memory has increased
dramatically in embedded systems. From a software viewpoint, flash and
EEPROM technologies are very similar. The major difference is that flash devices
can only be erased one sector at a time, not byte-by-byte. Typical sector sizes are
in the range 256 bytes to 16KB. Despite this disadvantage, flash is much more
popular than EEPROM and is rapidly displacing many of the ROM devices as
well.
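The sector-erase restriction means that changing a single byte in flash requires a read-modify-erase-write cycle. A sketch against a simulated sector (the sector size and function names are illustrative):

```c
#include <stdint.h>
#include <string.h>

#define SECTOR_SIZE 256  /* assumed small sector, within the 256 B-16 KB range */

/* Simulated flash sector; erasing sets every byte to 0xFF. */
static uint8_t flash_sector[SECTOR_SIZE];

void flash_erase_sector(void)
{
    memset(flash_sector, 0xFF, SECTOR_SIZE);
}

/* Flash writes can only clear bits, so updating one byte arbitrarily means
 * copying the whole sector to RAM, erasing it, and writing the modified
 * copy back -- the cost of sector-granular erase described above. */
void flash_update_byte(uint16_t offset, uint8_t value)
{
    uint8_t shadow[SECTOR_SIZE];
    memcpy(shadow, flash_sector, SECTOR_SIZE);   /* read   */
    shadow[offset] = value;                      /* modify */
    flash_erase_sector();                        /* erase  */
    memcpy(flash_sector, shadow, SECTOR_SIZE);   /* write  */
}

uint8_t flash_read_byte(uint16_t offset)
{
    return flash_sector[offset];
}
```

Real flash drivers also respect the part's write/erase timing and wear-levelling concerns, which this model ignores.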
Embedded systems communicate with the outside world via their peripherals, such
as the following:
Serial Communication Interfaces (SCI) like RS-232, RS-422, RS-485, etc.
Synchronous Serial Communication Interface like I2C, SPI, SSC, and ESSI
Universal Serial Bus (USB)
Multi Media Cards (SD Cards, Compact Flash, etc.)
Networks like Ethernet, LonWorks, etc.
Fieldbuses like CAN-Bus, LIN-Bus, PROFIBUS, etc.
Timers like PLL(s), Capture/Compare and Time Processing Units
Discrete IO aka General Purpose Input/Output (GPIO)
Analog to Digital/Digital to Analog (ADC/DAC)
Debugging like JTAG, ISP, ICSP, BDM Port, BITP, and DP9 ports
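As a sketch of how software drives the GPIO peripheral listed above, a memory-mapped register can be manipulated through a volatile pointer. The register and bit layout here are invented; real addresses come from the chip's datasheet:

```c
#include <stdint.h>

/* Stand-in for a hardware output register; on real hardware this would be
 * a fixed address cast to (volatile uint32_t *). */
static volatile uint32_t fake_gpio_reg;
#define GPIO_OUT (&fake_gpio_reg)

/* volatile ensures every access really reaches the register */
void gpio_set(volatile uint32_t *reg, unsigned pin)   { *reg |=  (1u << pin); }
void gpio_clear(volatile uint32_t *reg, unsigned pin) { *reg &= ~(1u << pin); }
int  gpio_read(volatile uint32_t *reg, unsigned pin)  { return (int)((*reg >> pin) & 1u); }
```

Read-modify-write sequences like these are not atomic; real drivers often use dedicated set/clear registers where the hardware provides them.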
Embedded software
Embedded software is computer software, written to control machines or devices
that are not typically thought of as computers, commonly known as embedded
systems. It is typically specialized for the particular hardware that it runs on and
has time and memory constraints.[1] This term is sometimes used interchangeably
with firmware.[2]
A distinguishing characteristic is that some or all of the functions of embedded
software are initiated and controlled not via a human interface, but through
machine interfaces instead.[3]
Manufacturers build embedded software into the electronics of cars, telephones,
modems, robots, appliances, toys, security systems, pacemakers, televisions and
set-top boxes, and digital watches, for example.[4] This software can be very
simple, such as lighting controls running on an 8-bit microcontroller with a
few kilobytes of memory with the suitable level of processing complexity
determined with a Probably Approximately Correct Computation framework[5] (a
methodology based on randomized algorithms). However, embedded software can
become very sophisticated in applications such as routers, optical network
elements,airplanes, missiles, and process control systems.[6]
Compiling
The GNU C/C++ compiler (gcc) and assembler (as) can be configured as either
native compilers or cross-compilers. As cross-compilers these tools support an
impressive set of host-target combinations. Table 3-1 lists some of the most
popular of the supported host platforms and target processors. Of course, the
selections of host and target are independent, so these tools can be configured
for any supported combination.
Linking
All of the object files resulting from step one must be combined in a special way
before the program can be executed. The object files themselves are individually
incomplete, most notably in that some of the internal variable and function
references have not yet been resolved. The job of the linker is to combine these
object files and, in the process, to resolve all of the unresolved symbols.
The output of the linker is a new object file that contains all of the code and data
from the input object files and is in the same object file format. It does this by
merging the text, data, and bss sections of the input files. So, when the linker is
done executing, all of the machine language code from all of the input object files
will be in the text section of the new file. And all of the initialized and uninitialized
variables will reside in the new data and bss sections, respectively.
While the linker is in the process of merging the section contents, it is also on the
lookout for unresolved symbols. For example, if one object file contains an
unresolved reference to a variable named foo and a variable with that same name is
declared in one of the other object files, then the linker will match them up. The
unresolved reference will be replaced with a reference to the actual variable. In
other words, if foo is located at offset 14 of the output data section, its entry in the
symbol table will now contain that address.
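A toy model of this resolution step can make it concrete. The structures and offsets below are invented for illustration; a real linker works on the symbol tables embedded in each object file:

```c
#include <string.h>
#include <stddef.h>

/* Each defined symbol has a name and an offset in the merged output section. */
struct symbol {
    const char *name;
    long offset;
};

/* Search the combined symbol table for a definition of `name`; return its
 * offset, or -1 if it stays unresolved (a real linker would then report
 * an "undefined reference" error). */
long resolve(const struct symbol *defs, size_t ndefs, const char *name)
{
    for (size_t i = 0; i < ndefs; i++)
        if (strcmp(defs[i].name, name) == 0)
            return defs[i].offset;
    return -1;
}
```

With `foo` defined at offset 14, as in the example above, a reference to `foo` in any other object file resolves to that offset.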
The GNU linker (ld) runs on all of the same host platforms as the GNU compiler.
It is essentially a command-line tool that takes the names of all the object files to
be linked together as arguments. (For embedded development, a special object file
containing the compiled startup code (see sidebar) must also be included within
this list.) The GNU linker also has a scripting language that can be used to exercise
tighter control over the object file that is output.
Locating
The tool that performs the conversion from relocatable program to executable
binary image is called a locator. It takes responsibility for the easiest step of the
three. In fact, you will have to do most of the work in this step yourself, by
providing information about the memory on the target board as input to the locator.
The locator will use this information to assign physical memory addresses to each
of the code and data sections within the relocatable program. It will then produce
an output file containing a binary memory image that can be loaded into the target
ROM.
In many cases, the locator is a separate development tool. However, in the case of
the GNU tools this functionality is built right into the linker. Try not to be
confused by this one particular implementation. Whether you are writing software
for a general-purpose computer or an embedded system, at some point the sections
of your relocatable program must have actual addresses assigned to them. In the
first case, the operating system does it for you at load time. In the second, you must
perform the step with a special tool. This is true even if the locator is a part of the
linker, as it is in the case of ld.
The memory information required by the GNU linker can be passed to it in the
form of a linker script. Such scripts are sometimes used to control the exact order
of the code and data sections within the relocatable program. But here, we want
to do more than just control the order; we also want to establish the location of
each section in memory.
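A minimal GNU ld script sketch shows the idea. The memory names, origins and lengths here are invented for illustration; the real values come from the target board's memory map:

```ld
MEMORY
{
  rom (rx)  : ORIGIN = 0x00000000, LENGTH = 512K  /* hypothetical ROM */
  ram (rwx) : ORIGIN = 0x00A00000, LENGTH = 1M    /* hypothetical RAM */
}

SECTIONS
{
  .text : { *(.text) } > rom            /* code lives in ROM */
  .data : { *(.data) } > ram AT > rom   /* initialised data: stored in ROM,
                                           copied to RAM by startup code */
  .bss  : { *(.bss)  } > ram            /* zeroed at startup */
}
```

The `AT > rom` clause captures the embedded-specific twist: initialised data must survive power-off in ROM, yet be writable at run time, so the startup code copies it into RAM before `main` runs.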
Linker:
A linker, also known as a link editor or binder, is a program that combines
object modules into a single object file. It performs the process of linking: it
takes one or more object files, which are generated by a compiler, and combines
them into a single executable file. The different pieces of code, written in a
programming language, are called modules. Linking is the process of gathering
and combining these pieces of code into a single executable file. With the help
of a linker, specific modules are also linked against the system library.
Loader:
The main function of a loader is to load executable files into main memory.
It takes the executable file (generated by the linker) as its input.
Loading places the executable code into main memory, where it is then
executed.
There are three types of loading: absolute loading, relocatable loading and
dynamic run-time loading.
The loader helps allocate addresses to the executable code.
It is also responsible for adjusting the references that are used within the
program.
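The address-adjustment step of a relocating loader can be sketched as follows. This is a toy model: the image is an array of words, and a relocation table simply lists which words hold addresses that must be rebased (real loaders process relocation entries defined by the object file format):

```c
#include <stdint.h>
#include <stddef.h>

/* The object code stores addresses relative to 0; `reloc_offsets` lists the
 * word positions inside `image` that contain such addresses. The loader
 * adds the chosen load base to each of them. */
void relocate(uint32_t *image, const size_t *reloc_offsets,
              size_t nrelocs, uint32_t load_base)
{
    for (size_t i = 0; i < nrelocs; i++)
        image[reloc_offsets[i]] += load_base;  /* adjust the reference */
}
```

Absolute loading skips this step entirely (addresses are final at link time), while dynamic run-time loading may repeat it each time a module is brought in.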
What is Debugging : Types & Techniques in Embedded Systems
Every programmer eventually encounters bugs or errors in their code while
developing an operating system, an application or any other program. In such
cases, developers use debugging tools to find the bugs in the code and make the
program error-free. The goal is to identify each bug and find where it occurs in
the program. In software technology, this is an important process for finding
bugs in any new program or application. Errors such as fatal and logical errors
can be found and removed to get the desired output. For example, GDB, Visual
Studio, and LLDB are the standard debuggers for different operating systems.
What is Debugging?
Definition: Debugging is the technique of finding and removing errors, bugs or
defects in a program. It is a multistep process in software development. It
involves identifying the bug, finding the source of the bug and correcting the
problem to make the program error-free. Using this process, the developer can
locate a code error in the program and remove it. Hence, it plays a vital role
in the entire software development lifecycle.
Types of Debugging
Depending upon the type of code error, different toolset plugins are used, so it
is necessary to understand what is happening and which type of tool suits the
debugging task. Broadly, debugging serves two purposes: solving general issues
with the toolset plugin and providing technical information about the error.
In PHP, code can be debugged by attaching a debugger client using one of several
tools. Debug utilities like Xdebug and ZendDebugger work with PhpStorm, and Kint
is used as a debugging tool for PHP.
For example, to enable PHP debugging in WordPress, edit the file wp-config.php
and add the code needed. An error file (error_log.txt) is produced in the
WordPress root directory; it can be created and made writable via the web
server, or else created with an FTP program. All the errors that occur in the
front end and back end are then logged into that error file.
JavaScript debugging uses the browser's debugger tool and the JavaScript
console. Any JavaScript error can halt the execution and functioning of
operations in WordPress. When the JavaScript console is open, all the error
messages are displayed there. However, some console warnings can generate error
messages that should be fixed.
There are different types of debugging for different operating systems. They are,
For Linux and Unix operating systems, GDB is used as the standard debugger.
For Windows, Visual Studio is a powerful editor and debugger.
For macOS, LLDB is a high-level debugger.
Intel Parallel Inspector is used for debugging memory errors in C/C++
programs.
Debugging Process
The process of finding bugs or errors and fixing them in any application or
software is called debugging. To make the software programs or products bug-free,
this process should be done before releasing them into the market. The steps
involved in this process are,
Identifying the error – It saves time and avoids the errors at the user site.
Identifying errors at an earlier stage helps to minimize the number of errors and
wastage of time.
Identifying the error location – The exact location of the error should be found
to fix the bug faster and execute the code.
Analyzing the error – To understand the type of bug or error and reduce the
number of errors we need to analyze the error. Solving one bug may lead to
another bug that stops the application process.
Prove the analysis – Once the error has been analyzed, we need to prove the
analysis. It uses a test automation process to write the test cases through the test
framework.
Cover the lateral damage – The bugs can be resolved by making the
appropriate changes and moving on to the next stages of the code or program
to fix the other errors.
Fix and Validate – This is the final stage to check all the new errors, changes in
the software or program and executes the application.
Debugging Software
This software plays a vital role in the software development process. Software
developers use it to find the bugs, analyze the bugs and enhance the quality and
performance of the software. The process of resolving bugs by manual debugging
is very tough and time-consuming: we need to understand the program, its
working, and the causes of errors by creating breakpoints.
As soon as the code is written, the code is combined with other stages of
programming to form a new software product. Several strategies like unit tests,
code reviews, and pair programming are used to debug the large program (contains
thousands of lines of code). The standard debugger tool or the debug mode of an
Integrated Development Environment (IDE) helps determine the code's logging and
error messages.
The bug is identified in a system and defect report is created. This report helps
the developer to analyze the error and find the solutions.
The debugging tool is used to know the cause of the bug and analyze it by a
step-by-step execution process.
After identifying the bug, we need to make the appropriate changes to fix the
issues.
The software is retested to ensure that no error is left and checks all the new
errors in the software during the debugging software process.
A sequence-based method used in this process makes it easier and more
convenient for the developer to find the bugs and fix them using the code
sequences.
Debugging Techniques
To perform the debugging process easily and efficiently, it is necessary to follow
some techniques. The most commonly used debugging strategies are,
Induction strategy includes the Location of relevant data, the Organization of data,
the Devising hypothesis (provides possible causes of errors), and the Proving
hypothesis.
The backtracking strategy is used to locate errors in small programs. When an error
occurs, the program is traced one step backward during the evaluation of values to
find the cause of bug or error.
These techniques reduce the error count and increase the quality and functionality
of the code. Debugging of the embedded systems depends on physical memory
addresses and virtual memory.
There are 6 debugging techniques in an embedded system.
A software tool or program used to test and debug the other programs is called a
debugger or a debugging tool. It helps to identify the errors of the code at the
various stages of the software development process. These tools analyze the test
run and find the lines of code that are not executed. Simulators and other
debugging tools allow the user to observe the display and behavior of the
operating system or any other computing device. Most open-source tools and
scripting languages don't run in an IDE, and they require a manual process.
GDB Tool: This tool is used in Unix programming. GDB comes pre-installed
on most Linux systems; if not, it is necessary to download the GCC compiler
package.
DDD Tool: DDD means Data Display Debugger, which is used to run a
Graphic User Interface (GUI) in Unix systems.
Eclipse: An IDE integrates an editor, build tool, debugger and other
development tools; Eclipse is the most popular IDE. It works more
efficiently compared to DDD, GDB and other tools.
High Level:
If software is written in a high level language, it is possible to test large
parts of it without the need for the hardware at all. Software that does not
need or use I/O or other system-dependent facilities can be run and tested on
other machines, such as a PC or an engineering workstation. The advantage of
this is that it allows parallel development of the hardware and software and
gives added confidence, when the two parts are integrated, that the system will
work.
Using this technique, it is possible to simulate I/O using the keyboard as input or
another task passing input data to the rest of the modules. Another technique is to
use a data table which contains data sequences that are used to test the software.
This method is not without its restrictions. The most common mistake with this
method is the use of non-standard libraries which are not supported by the target
system compiler or environment. If these libraries are used as part of the code that
will be transferred, as opposed to providing a user interface or debugging facility,
then the modifications needed to port the code will devalue the benefit of the
simulation.
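The data-table technique mentioned above can be sketched as a host-side harness: instead of reading a real input device, the module under test pulls values from a table. The function names, sample values and threshold are invented for illustration:

```c
#include <stddef.h>

/* Canned input sequence standing in for real sensor readings. */
static const int sample_table[] = { 12, 40, 75, 90, 60, 20 };
static size_t sample_pos;

/* Stand-in for the real I/O routine; on the target this would read a
 * peripheral register, so only this function needs porting. */
int read_sensor(void)
{
    int v = sample_table[sample_pos];
    if (sample_pos + 1 < sizeof sample_table / sizeof sample_table[0])
        sample_pos++;
    return v;
}

/* The logic being tested is hardware-independent and runs unchanged
 * on the host and on the target. */
int over_threshold(int threshold)
{
    return read_sensor() > threshold;
}
```

Keeping the I/O stand-ins behind the same interface as the target's routines is exactly the "same library interface" ideal described above.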
The ideal is when the simulation system is using the same library interface as the
target. This can be achieved by using the target system or operating system as the
simulation system or using the same set of system calls. Many operating systems
support or provide a UNIX compatible library which allows UNIX software to be
ported using a simple recompilation. As a result, UNIX systems are often
employed in this simulation role. This is an advantage which the POSIX
compliant operating system Lynx offers.
This simulation allows logical testing of the software but rarely offers quantitative
information unless the simulation environment is very close to that of the target, in
terms of hardware and software environments.
Low Level:
Using another system to simulate parts of the code is all well and good, but what
about low level code such as initialisation routines? There are simulation tools
available for these routines as well. CPU simulators can simulate a processor,
memory system and, in some cases, some peripherals and allow low level
assembler code and small HLL programs to be tested without the need for the
actual hardware. These tools tend to fall into two categories: the first
simulates the programming model and memory system and offers simple debugging
tools similar to those found with an onboard debugger. These are inevitably
slow, when compared to the real thing, and do not provide timing information or
permit different memory configurations to be tested. However, they are very
cheap and easy to use and can provide a low cost test bed for individuals within
a large software team. There are even shareware simulators for the most common
processors such as the one from the University of North Carolina which simulates
an MC68000 processor.
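The fetch-decode-execute structure such simulators implement can be sketched with a made-up machine; the opcodes and register set below are invented purely for illustration:

```c
#include <stdint.h>
#include <stddef.h>

/* A toy 3-instruction machine: each instruction is an opcode byte
 * followed by one immediate byte. */
enum { OP_HALT = 0, OP_LOADI = 1, OP_ADD = 2 };

typedef struct {
    uint16_t pc;       /* program counter */
    int32_t  d0, d1;   /* two data registers, loosely M68000-flavoured */
} cpu_t;

void run(cpu_t *cpu, const uint8_t *mem, size_t len)
{
    while (cpu->pc + 1 < len) {
        uint8_t op  = mem[cpu->pc];       /* fetch */
        uint8_t arg = mem[cpu->pc + 1];
        cpu->pc += 2;
        switch (op) {                     /* decode + execute */
        case OP_LOADI: cpu->d0 = arg;            break;
        case OP_ADD:   cpu->d1 = cpu->d0 + arg;  break;
        case OP_HALT:  return;
        }
    }
}
```

A real simulator adds the full instruction set, a modelled memory map and, in better tools, cycle counting; but the loop above is the core of all of them.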
Onboard debugger:
The onboard debugger provides a very low level method of debugging software.
Usually supplied as a set of EPROMs which are plugged into the board or as a set
of software routines that are combined with the applications code, they use a serial
connection to communicate with a PC or workstation. They provide several
functions: the first is to provide initialisation code for the processor and/or
the board which will normally initialise the hardware and allow it to come up
into a known state. The second is to supply basic debugging facilities and, in
some cases, allow simple access to the board's peripherals. Often included in
these facilities is
the ability to download code using a serial port or from a floppy disk.
>TR
PC=000404 SR=2000 SS=00A00000 US=00000000 X=0
A0=00000000 A1=000004AA A2=00000000 A3=00000000 N=0
A4=00000000 A5=00000000 A6=00000000 A7=00A00000 Z=0
D0=00000001 D1=00000013 D2=00000000 D3=00000000 V=0
D4=00000000 D5=00000000 D6=00000000 D7=00000000 C=0
---------->LEA $000004AA,A1
>TR
PC=00040A SR=2000 SS=00A00000 US=00000000 X=0
A0=00000000 A1=000004AA A2=00000000 A3=00000000 N=0
A4=00000000 A5=00000000 A6=00000000 A7=00A00000 Z=0
D0=00000001 D1=00000013 D2=00000000 D3=00000000 V=0
D4=00000000 D5=00000000 D6=00000000 D7=00000000 C=0
---------->MOVEQ #19,D1
>
The second method, which relies on processor support, allows the vector table to
be moved elsewhere in the memory map. With the later M68000 processors, this
can also be done by changing the vector base register which is part of the
supervisor programming model.
The debugger usually operates at a very low level and allows basic memory and
processor register display and change, setting RAM-based breakpoints and so on.
This is normally performed using hexadecimal notation, although some debuggers
can provide a simple disassembler function. To get the best out of these systems, it
is important that a symbol table is generated when compiling or linking software,
which will provide a cross-reference between labels and symbol names and their
physical address in memory. In addition, an assembler source listing which shows
the assembler code generated for each line of C or other high level language
code is invaluable. Without this information it can be very difficult to use the
debugger
easily. Having said that, it is quite frustrating having to look up references in very
large tables and this highlights one of the restrictions with this type of debugger.
While considered very low level and somewhat limited in their use, onboard
debuggers are extremely useful in giving confidence that the board is working
correctly and working on an embedded system where an emulator may be
impractical. However, this ability to access only at a low level can also place
severe limitations on what can be debugged.
The first problem concerns the initialisation routines and in particular the
processor’s vector table. Breakpoints use either a special breakpoint instruction or
an illegal instruction to generate a processor exception when the instruction is
executed. Program control is then transferred to the debugger which displays the
breakpoint and associated information. Similarly, the debugger may use other
vectors to drive the serial port that is connected to the terminal.
This vector table may be overwritten by the initialisation routines of the operating
system which can replace them with its own set of vectors. The breakpoint can still
be set but when it is reached, the operating system will see it instead of the
debugger and not pass control back to it. The system will normally crash because it
is not expecting to see a breakpoint or an illegal instruction!
To get around this problem, the operating system may need to be either patched so
that its initialisation routine writes the debugger vector into the appropriate
location or this must be done using the debugger itself. The operating system is
single stepped through its initialisation routine and the instruction that overwrites
the vector simply skipped over, thus preserving the debugger’s vector. Some
operating systems can be configured to preserve the debugger’s exception vectors,
which removes the need to use the debugger to preserve them.
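The vector-preserving trick can be sketched in C. The table size, slot number and helper names are illustrative (slot 4 is the illegal-instruction vector on the M68000, one of the vectors a debugger typically claims for breakpoints):

```c
typedef void (*vector_t)(void);

#define NVECTORS  8   /* toy table; a real M68000 table has 256 entries */
#define BKPT_SLOT 4   /* illegal instruction vector, used for breakpoints */

/* Example handlers standing in for the real debugger and OS code. */
void dbg_handler(void) {}
void os_handler(void)  {}
void fake_os_init(vector_t *t)             /* OS installs its own vectors */
{
    for (int i = 0; i < NVECTORS; i++)
        t[i] = os_handler;
}

/* Save the debugger's breakpoint vector, let the OS initialisation
 * overwrite the table, then put the saved entry back so breakpoints
 * still reach the debugger. */
void preserve_debug_vector(vector_t *vector_table,
                           void (*os_init)(vector_t *))
{
    vector_t saved = vector_table[BKPT_SLOT];
    os_init(vector_table);
    vector_table[BKPT_SLOT] = saved;
}
```

This is the programmatic equivalent of single-stepping past the offending store instruction by hand.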
A second issue is that of memory management where there can be a problem with
the address translation. Breakpoints will still work but the addresses returned by
the debugger will be physical, while those generated by the symbol table will
normally be logical. As a result, it can be very difficult to reconcile the physical
address information with the logical information.
Symbolic debug
The ability to use high level language instructions, functions and variables instead
of the more normal addresses and their contents is known as symbolic debugging.
Instead of using an assembler listing to determine the address of the first
instruction of a C function and using this to set a breakpoint, the symbolic
debugger allows the breakpoint to be set by quoting a line reference or the function
name. This interaction is far more efficient than working at the assembler level,
although it does not necessarily mean losing the ability to go down to this level if
needed.
The reason for this is often due to the way that symbolic debuggers work. In
simple terms, they are intelligent front ends for assembler level debuggers, where
software performs the automatic look-up and conversion between high level
language structures and their respective assembler level addresses and contents.
12 int prime,count,iter;
13
14 main()
15 {
16 count = 0;
17 for(i = 0; i<MAX_PRIME; i++)
18 flags[i] = 1;
19 for(i = 0; i<MAX_PRIME; i++)
20 if(flags[i])
21 {
22 prime = i + i + 3;
23 k = i + prime;
24 while(k < MAX_PRIME)
25 {
26 flags[k] = 0;
27 k += prime;
28 }
29 count++;
Assembler listing
›>> 15 {
›>> 16 count = 0;
›>> 18 flags[i] = 1;
› 000100B0 207C 0001 2148 MOVEA.L #$12148,A0 {flags}
› 000100B6 11BC 0001 2000 MOVE.B #$1,($0,A0,D2.W)
›— 17 for(i = 0; i<MAX_PRIME; => i++)<=
The key to this is the creation of a symbol table which provides the
cross-referencing information that is needed. This can either be included within the
binary file format used for object and absolute files or, in some cases, stored as a
separate file. The important thing to remember is that symbol tables are often not
automatically created and, without them, symbolic debug is not possible.
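A minimal model of that cross-reference looks like this. The names, line numbers and addresses are illustrative (the `flags` address echoes the `MOVEA.L #$12148,A0` instruction in the assembler listing earlier):

```c
#include <string.h>
#include <stddef.h>

/* A symbol table entry maps a name and source line to a physical address,
 * which is what lets a breakpoint be set by name or line number. */
struct sym {
    const char   *name;
    int           line;
    unsigned long addr;
};

static const struct sym symtab[] = {
    { "main",  14, 0x000100A0UL },   /* illustrative values */
    { "flags", 18, 0x00012148UL },
};

/* Look up the address for a symbol name; 0 means no entry, in which case
 * symbolic debugging of that name is impossible. */
unsigned long addr_of(const char *name)
{
    for (size_t i = 0; i < sizeof symtab / sizeof symtab[0]; i++)
        if (strcmp(symtab[i].name, name) == 0)
            return symtab[i].addr;
    return 0;
}
```

A symbolic debugger performs exactly this kind of lookup, in both directions, every time it displays a variable or sets a breakpoint on a function.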
When the file or files are loaded or activated by the debugger, it searches for the
symbolic information which is used to display more meaningful information as
shown in the various listings. The symbolic information means that break-points
can be set on language statements as well as individual addresses. Similarly, the
code can be traced or stepped through line by line or instruction by instruction.
This has several repercussions. The first is the number of symbolic terms and the
storage they require. Large tables can dramatically increase file size and this can
pose constraints on linker operation when building an application or a new version
of an operating system. If the linker has insufficient space to store the symbol
tables while they are being created (they are often held in RAM for faster
searching and update), the linker may crash with a symbol table overflow error.
The solution is to strip out the symbol tables from some of the modules by
recompiling them with symbolic debugging disabled or by allocating more storage
space to the linker.
The problems may not stop there. If the module is then embedded into a target and
symbolic debugging is required, the appropriate symbol tables must be included in
the build and this takes up memory space. It is not uncommon for the symbol
tables to take up more space than the spare system memory and prevent the system
or task from being built or running correctly. The solution is to add more memory
or strip out the symbol tables from some of the modules.
It is normal practice to remove all the symbol table information from the final
build to save space. If this is done, it will also remove the ability to debug using the
symbol information. It is a good idea to have at least a hard copy of the symbol
table to help should any debugging be needed.
Emulation: