Embedded Systems Unit - 2 About Embedded Linux


Embedded Systems

Unit_2
About Embedded Linux
Linux is an incredible piece of software. It’s an operating system that’s just as at home
running on IBM’s zSeries supercomputers as it is on a cell phone, manufacturing device,
network switch, or even cow milking machine. What’s more incredible is that this software is
currently maintained by thousands of the best software engineers and it is available for free.
Linux didn’t start as an embedded operating system. Linux was created by a Finnish
university student (Linus Torvalds) who was smart enough to make his work available to all,
take input from others and, most important, delegate to other talented engineers. As the
project grew, it attracted other talented engineers who were able to contribute to Linux,
increasing the burgeoning project’s value and visibility and thus bootstrapping a virtuous
cycle that continues to this day.
Linux was first written to run on the Intel IA-32 architecture and was first ported to a
Motorola processor. The porting process was difficult enough that Linus Torvalds decided to
rethink the architecture so that it could be easily ported, creating a clean interface between the
processor dependent parts of the software and those that are architecture independent. This
design decision paved the way for Linux to be ported to other processors. Linux is just a
kernel, which by itself isn't that useful. An embedded Linux system, or any Linux system for
that matter, uses software from many other projects in order to provide a complete operating
system.
The Linux kernel is written largely in C (with some assembler) and uses the GNU tool
set, such as make and the GCC compiler. The fact that this software could be used on an
embedded system or could be modified to make it suitable for embedded deployment
contributed greatly to the acceptance of Linux for devices other than desktop machines.
Why Use Embedded Linux?
Embedded Linux is just like the Linux distributions running on millions of desktops and
servers worldwide, but it’s adapted to a specific use case. On desktop and server machines,
memory, processor cycles, power consumption, and storage space are limited resources—
they just aren’t as limiting as they are for embedded devices. A few extra MB or GB of
storage can be nothing but rounding errors when you’re configuring a desktop or server. In
the embedded field, resources matter because they drive the unit cost of a device that may be
produced in the millions; or the extra memory may require additional batteries, which add
weight. A processor with a high clock speed produces heat; some environments have very
tight heat budgets, so only so much cooling is available. As such, most of the effort in
embedded programming, whether you're using Linux or some other operating system, focuses
on making the most of limited resources.
The biggest difference between using a traditional embedded operating system and Linux is
the separation between the kernel and the applications. Under Linux, applications run in an
execution context completely separate from the kernel. There's no way for the application to
access memory or resources other than what the kernel allocates. This level of process
protection means that a defective program is isolated from the kernel and other programs,
resulting in a more secure and survivable system. All of this protection comes at a cost.
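This process isolation is easy to see from user space: a child process that dies on a bad memory access is cleaned up by the kernel while its parent carries on. A minimal sketch in plain POSIX shell (nothing embedded-specific is assumed):

```shell
# Run a subprocess that kills itself with SIGSEGV, much as the kernel
# would kill a program after an illegal memory access.
sh -c 'kill -SEGV $$'
# The parent shell is unaffected; an exit status of 139 means
# 128 + 11, i.e., terminated by signal 11 (SIGSEGV).
echo "child exit status: $?"
echo "parent still running"
```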
Technical Reasons to Use Embedded Linux
The technical qualities of Linux drive its adoption. Linux is more than the Linux kernel
project; the surrounding open source software is also at the forefront of technical
development, meaning that Linux is the right choice for solving today's technical problems as
well as being the choice for the foreseeable future. For example, an embedded Linux system
includes software such as the following:
SSL/SSH: The OpenSSH project is the most commonly used encryption and security
mechanism today.
Apache and other web servers: The Apache web server finds its way into embedded
devices that need a full-featured web server.
The C Library: The Linux environment has a wealth of options in this area, from
the fully featured GNU C Library to the minimalist dietlibc.
Berkeley sockets (IP): Many projects move to Linux from another operating system because
of the complete, high- performance network stack included in the operating system. A
networked device is becoming the rule and not the exception.
The following sections explain why the Linux operating system is the best technological fit
for
embedded development.
Standards Based:
The Linux operating system and accompanying open source projects adhere to industry
standards; in most cases, the implementation available in open source is the canonical, or
reference, implementation of a standard. A reference implementation embodies the
interpretation of the specification and is the basis for conformance testing. In short, the
reference implementation is the standard by which others are measured.
Using standards-based software is not only about quality but also about independence.
Basing a project on software that adheres to standards reduces the chances of lock-in due to
vendor-specific features. A vendor may be well meaning, but the benefits of those extra
features are frequently outweighed by the lack of interoperability and freedom that silently
become part of the transaction and frequently don’t receive the serious consideration they
merit.
Process Isolation and Control
The Linux kernel, at its most basic level, offers these services as a way of providing a
common API for accessing system resources:
1. Manage tasks, isolating them from the kernel and each other
2. Provide a uniform interface for the system's hardware resources
3. Serve as an arbiter to resources when contention exists
Memory Management and Linux: Linux uses a virtual memory-management system. The
concept of virtual memory has been around since the early 1960s and is simple: the process
sees its memory as a vector of bytes; and when the program reads or writes to memory, the
processor, in conjunction with the operating system, translates the address into a physical
address.
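On a running Linux system you can inspect this per-process view directly: /proc/self/maps lists the virtual address ranges the kernel currently maps for the reading process. A quick look (Linux-specific; the addresses shown will differ from run to run):

```shell
# Print the first few virtual memory mappings of the current process.
# Each line is a virtual address range that the kernel, together with
# the processor's MMU, translates to physical storage.
head -n 3 /proc/self/maps
```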

Creating Linux Distribution from scratch:


Creating a Linux distribution from scratch is viewed as a difficult task, but it
shouldn't be. With Linux becoming the platform of choice for embedded devices, support for
most embedded hardware platforms is part of the Linux kernel and toolchain. The process
involves creating a toolchain, using that toolchain to create a root file system, and then
building a kernel. The process isn't complex; it just requires you to carefully follow many
steps to build the proper tools. After you go over the instructions for building a toolchain and
understand how it works, the smartest route may be to use a tool that builds a toolchain, such
as crosstool-NG, to build the root file system and kernel.
Making a Linux distribution involves these steps:
1. Build a cross-compiler.
2. Use the cross-compiler to build a kernel.
3. Use the cross-compiler to build a root file system.
4. Roll the root file system into something the kernel can use to boot.
A cross-compiler is important for embedded development. Until this tool has been built, the
kernel can’t be built nor can the root file system’s programs be built. Another slightly
confusing part of the process is that the kernel sources are used in the build process. One step
in building the cross-compiler is to build the corresponding C Standard Library. In order for
the C Standard Library to build, it needs to know some information about the target machine,
which is kept in the kernel. For the uninitiated, this seems to be a circular dependency; but
the parts of the kernel that are used don’t need to be cross-compiled. When you have the
cross-compiler in hand, you create the root file system with the BusyBox project. Linux
requires a root file system and refuses to boot if one can’t be found. A root file system can be
one file: the application program for the device. Most embedded Linux systems use
additional libraries and utilities. The root file system in this chapter is linked with the GNU C
Library at runtime, so these files must be on the target as well.
Cross Compiler:
“A cross compiler is a compiler capable of creating executable code for a platform other
than the one on which the compiler is running. For example, a compiler that runs on a Linux
x86 machine but generates code that runs on ARM7 is a cross compiler”
● A cross-compiler is part of the development environment and, in the most basic terms,
produces code that runs on a different processor or operating system than where the
compiler ran.
● For example, a compiler that runs on a Linux x86 host that produces code to execute
on an ARM9 target is a cross-compiler.
● Another example is a compiler running on Windows that produces code that runs on an
x86 Linux host. In both cases, the compiler doesn't produce binaries that can be
executed on the machine where the compiler ran.
● In Linux, the cross-compiler is frequently referred to as a tool chain because it’s a
confederation of tools that work together to produce an executable: the compiler,
assembler, and linker. The debugger is a separate software component. EX: GCC
● Linux is licensed under the GNU GPL, and consequently users who receive a Linux
kernel must have the ability to get the source code for the binaries they receive. Having the
source code is just one part of what’s necessary to rebuild the software for the board.
Without the cross-compiler, you can’t transform that source into something that can
run on the remote target.
Some basic terminology is used to describe the players in the process of building the
compiler:
● Build machine: The computer used to compile the code
● Host machine: The computer where the compiler runs
● Target machine: The computer for which GCC produces code

There are two types of build as follows:

A Canadian cross-compiler build is one where the build, host, and target
machines are all different. You probably aren't configuring the compiler to run on anything
other than an x86 Linux host, but you may need to support development groups running on
Windows or Mac OS and want to build the software on a Linux host. It’s possible that the
build is running on a 64-bit Linux host, producing a compiler that runs on a 32-bit host that
generates code for an ARM processor, so it pays to understand the mechanics of cross-
building.
Some of the most confusing parameters passed into configure are the values for host, target,
and build. These parameters are called configuration names, or triplets, despite the fact that
they now contain four parts. When you’re building a toolchain, the triplet describing the
target machine is the four-part variety: for example, arm-none-linux-gnueabi. This unwieldy
string prefixes the name of the toolchain executables so it’s clear what the cross-compiler is
targeting. It’s possible to change this prefix, but the best practice is to leave it be.

Why Cross Compilers?


• The characteristics of most embedded systems
• Limited memory
• Diskless
• No output screen
• Poor performance
• Sometimes it is impossible to run the compiler, assembler, linker, and debugger on the
target platform
• So for embedded systems, we need cross-development tools to support software
development
Requirements for Embedded system:
● Hardware
● Software => software-development tools
o Compiler, assembler, linker, debugger and C standard library
o Commercial => very high cost
o Open source => free. Users can modify, distribute and use the software
Example: GNU Compiler Collection
GNU Compiler Collection (GCC):
● Developed by Richard Stallman
● GCC is a key component of so-called "GNU Toolchain", for developing applications
and writing operating systems. The GNU Toolchain includes:
• GNU Compiler Collection (GCC): a compiler suite that supports many
languages, such as C/C++ and Objective-C/C++.
• GNU Make: an automation tool for compiling and building applications.
• GNU Binutils: a suite of binary utility tools, including the linker and assembler.
• GNU Debugger (GDB).
• GNU Autotools: A build system including Autoconf, Autoheader, Automake
and Libtool and much more.
GNU Binutils:
• The GNU Binutils are a collection of binary tools; the main ones are:
• ld - the GNU linker
• as - the GNU assembler
• Other binary tools
• ar - A utility for creating, modifying and extracting from archives
• gprof - Displays profiling information
• objcopy - Copies and translates object files from one format to another,
e.g., .elf to .bin
• objdump - Displays information from object files
• …
***Best suited for embedded-system development

Bootloaders:
“The first section of code to be executed after the embedded system is powered on or reset on
any platform”
Comparison between PC and Embedded System:
• In an embedded system the role of the boot loader is more complicated since these
systems do not have a BIOS to perform the initial system configuration.
• Boot Loader in x86 PC consists of two parts
– BIOS(Basic Input/Output System)
– OS Loader(located in MBR of Hard Disk)
• Ex. LILO and GRUB

A boot loader isn’t unique to Linux or embedded systems. It’s a program first run by a
computer so that a more sophisticated program can be loaded next. In a Linux system, two
boot loaders usually run before the Linux kernel starts running. The program that the boot
loader runs can be anything, but it’s usually an operating system that then starts additional
programs so the system can be used by a user. The interesting question is, how does the
processor know to run the boot loader? At power-up, the processor goes to a certain memory
address (put there by the processor designer), reads the content of that address, and performs
a jump to the address stored at that location. This address is hard-coded into the processor so
the chip maker doesn’t have to change the silicon design for each customer or when the boot
loader changes. This little bit of code is referred to as the first-stage boot loader.
The code that runs next is what is commonly viewed as the boot loader in a Linux
system. This may be a program like U-Boot, RedBoot, or something else smart enough to then
load the operating system. After Linux has been loaded into memory, the boot loader is no
longer needed and is discarded; any evidence of the code in RAM is overwritten by the
operating system. Any device configuration done by the boot loader—for example, setting
the speed of the serial port or an IP address assigned to the network adapter—is also lost.
Linux reinitializes these devices during its startup process.
Boot loaders can be configured either to automatically run a sequence of commands or
to wait for user input via a serial connection or console. Because embedded systems
traditionally lack monitors and keyboards, you interact with the boot loader over a serial
connection using a terminal emulator like minicom. The serial port is the favored way of
presenting a user interface because the programming necessary to interact with a universal
asynchronous receiver/transmitter (UART) that manages a serial port is an order of
magnitude simpler than code to use a USB device or start a remote console over the network.
The boot loader’s user interface looks a little like a terminal shell in Linux, but it lacks the
bells and whistles you normally find, like auto-completion and line editing.
Boot loaders also act as an interface to the flash devices on the board. Flash memory
(named so because the reprogramming process was reminiscent of a flash camera to the
designer) is a type of Electrically Erasable Programmable Read-Only Memory (EEPROM)
where individual areas (called blocks or erase blocks) can be erased and written; before the
invention of flash memory, EEPROMs could only be erased and rewritten in their entirety.
With no moving parts and low power consumption, flash memory is an excellent storage
medium for embedded devices. Flash support in the boot loader gives you the ability to
manage the flash memory by creating segments, which are named areas of the flash memory,
and to write data, such as a root file system or kernel image, into those areas.
Boot loaders on a desktop system run in two steps: a first- and a second-stage boot
loader. The first-stage boot loader does just enough to get the second-stage boot loader
running. On a desktop system, the boot sequence reads one sector of data into memory and
begins executing that code. The second-stage boot loader contains a driver so that it can
access a file system on a fixed drive or possibly download the kernel from a remote source,
like a TFTP server or an NFS share. On a PowerPC or ARM system, the first-stage boot loader is
the code the chip runs after power-up instead of code that is loaded from the hardware.
After the boot loader does its job of getting Linux into memory, the Linux boot-up
process starts. Sometimes the kernel is compressed, and the first code decompresses the
kernel and jumps to an address that’s the kernel’s entry point. This code runs and performs
processor-level configuration, such as configuring the memory management unit (MMU—the
part of the processor that handles virtual memory addressing) and enabling the processor’s
cache. The code also populates a data structure that you can view by doing the following after
the system is up and running:
$ cat /proc/cpuinfo
Next, the kernel runs its board-level configuration. Some boards have peripherals like
PCI controllers or flash-management hardware that must be initialized so they can be
accessed later during the kernel startup process. When it’s ready, the code jumps into the
processor-independent startup code. Linux is now in control of the system; the software starts
the kernel’s threads and process management, parses the command line, and runs the main
kernel process. The kernel first runs what
you indicated on the command line (via the init parameter); in the absence of that parameter,
it attempts to execute /sbin/init, /etc/init, /bin/init, and, finally, /bin/sh.
When the kernel starts the main user process, that process must continue running,
because when the process stops, the kernel panics and stops as well. A kernel panic is the
worst sort of kernel error, because this sort of problem brings the system to a halt.
The following steps describe the booting process from power-on until the first program
runs.
1. The first-stage boot loader is the code in the processor that reads a location in memory
and may put a message on the console or screen.
2. The second-stage boot loader is responsible for loading what the system will run. In
this case, that is Linux, but it could be any program.
3. If the kernel is compressed, it's now uncompressed into memory. A jump instruction
after the decompression step sets the instruction pointer to the next executable
instruction.
4. Processor and board initialization runs. This low-level code performs hardware
initialization. The code then reaches the kernel’s entry point, and processor-
independent code runs to get the kernel ready to run the init process.
5. The kernel entry point is the code that’s in the architecture-independent part of the
kernel tree. This code is located in the start_kernel function in the init/main.c file in
the kernel source tree.
6. The system mounts the initial RAM disk, sees if it contains an /init program, and if so,
runs it. If this program doesn’t exist or isn’t found, the boot process moves to the next
step.
7. One of the parameters passed to the kernel is the device containing the root file
system and the file system type. The kernel attempts to mount this file system and
panics if it isn’t found or isn’t mountable.
8. The program that the kernel first attempts to run is the value of the kernel parameter
init. In the absence of this parameter, the system looks for an init executable by
attempting to run /sbin/init, /etc/init, /bin/init, and finally /bin/sh.
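The search order in step 8 can be sketched as a shell loop; this only illustrates the order in which candidates are tried, not what the kernel actually executes:

```shell
# Try each candidate init path in the kernel's search order and
# report the first one that exists and is executable.
for init in /sbin/init /etc/init /bin/init /bin/sh; do
    if [ -x "$init" ]; then
        echo "would run: $init"
        break
    fi
done
```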
Linux boot process
Boot Loaders for embedded System:
• U-Boot ("Universal Boot Loader"; boot loader for PowerPC- or ARM-based
embedded Linux systems)
• RedBoot (RedHat eCos derived, portable, embedded system boot loader)
• rrload (Boot loader for ARM based embedded Linux systems)
• FILO (x86 compatible boot loader which loads boot images from the local
filesystem, without help from legacy BIOS services. Expected usage is to flash it
into the BIOS ROM together with LinuxBIOS.)
• CRL/OHH (Flash boot loader for ARM based embedded Linux systems)
• PPCBOOT (Boot loader for PowerPC based embedded Linux systems)
• Alios (Assembler based Linux loader which can do basic hardware initialization from
ROM or RAM. The goal is to eliminate the need for a firmware BIOS on embedded
systems.)
Development Environment:
Much of the activity around embedded development occurs on a desktop. Although
embedded processors have become vastly more powerful, they still pale in comparison to the
dual core multigigabyte machine found on your desk. You run the editor, tools, and compiler
on a desktop system and produce binaries for execution on the target. When the binary is
ready, you place it on the target board and run it. This activity is called cross-compilation
because the output produced by the compiler isn’t suitable for execution on your machine.
Use the same set of software tools and configuration to boot the board and to put the
newly compiled programs on the board. When the development environment is complete,
work on the application proper can begin.
System Design
The Linux distribution used to boot the board isn’t the one shipped in the final product. The
requirements for the device and application largely dictate what happens in this area. Your
application may need a web server or drivers for a USB device. If the project doesn’t have a
serial port, network connection, or screen, those drivers are removed. On the other hand, if
marketing says a touch-screen UI is a must-have, then a suitable UI library must be located.
In order for the distribution to fit in the amount of memory specified, other changes are also
necessary.
Even though this is the last step, most engineers dig in here first after getting Linux
to boot. When you’re working with limited resources, this can seem like a reasonable
approach; but it suffers from the fact that you don’t have complete information about
requirements and the person doing the experimentation isn’t aware of what can be done to
meet the requirements.

At runtime, an embedded Linux system contains the following software components:


1. Boot loader: What gets the operating system loaded and running on the board.
2. Kernel: The software that manages the hardware and the processes.
3. Root file system: Everything under the / directory, containing the programs run by
the kernel. Every Linux system has a root file system. Embedded systems have a
great amount of flexibility in this respect: the root file system can reside in flash,
can be bundled with the kernel, or can reside on another computer on the
network.
4. Application: The program that runs on the board. The application can be a single
file or a collection of hundreds of executables.
All these components are interrelated and thus depend on each other to create a running
system. Working on an embedded Linux system requires interaction with all of these, even if
your focus is only on the application.

Root File System:


A file system is a way of representing a hierarchical collection of directories, where
each directory can contain either more directories or files. For computer science types, this
hierarchy is a tree structure in which the files are always leaf nodes and directories are
internal nodes when they contain something and leaf nodes otherwise. The point of making
this trip down data-structure memory lane is that the top node in a tree structure is the root
node and that, in Linux, the file system mounted at the top node is aptly called the root file
system.
“The root filesystem is the filesystem that is contained on the same partition on which
the root directory is located, and it is the filesystem on which all the other filesystems
are mounted (i.e., logically attached to the system) as the system is booted up “
A partition is a logically independent section of a hard disk drive (HDD). A filesystem
is a hierarchy of directories (also referred to as a directory tree) that is used to organize files
on a computer system. On Linux and other Unix-like operating systems, the directories
start with the root directory, which contains a series of subdirectories, each of which, in turn,
contains further subdirectories, etc. A variant of this definition is the part of the entire
hierarchy of directories (i.e., of the directory tree) that is located on a single partition or disk.
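On any running Linux machine you can see which partition (or image) holds the root file system and the top-level directories beneath it:

```shell
# The device mounted at / holds the root file system.
df / | tail -n 1
# Top-level directories of the root file system.
ls /
```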

Linux Root File System


Assembling a Root File System
The process of assembling a root file system is less complicated than you may think: it
mainly involves gathering the right bits and pieces and putting them in one place so the tools
used to create the file system itself can create the binary image that is then placed into the
embedded device’s storage area.
Here are the coarse-grained steps involved:
1. Create the staging area. The staging area is where the files reside to create the file
system. Although this sounds complex, the process isn’t much more than creating a
directory.
2. Create a directory skeleton. A Linux system has some requirements as to the
directories that must exist on a root file system. Some of these directories serve as
mount points for virtual file systems.
3. Gather libraries and required files. This step involves getting together the files for
your application, along with any libraries that are necessary. At this point, you also
want to add files for logging into the host, if that’s how it will be used.
4. Create initialization scripts. When the system starts, you need to run any background
processes, perform any configuration steps, and get the application running.
5. Set permissions. The files in the staging area may be owned by root or some other
user. The permissions, such as files being readable or writable, may also need to be
changed.
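Steps 1 and 2 amount to little more than creating directories. A minimal sketch (the directory names follow common root file system conventions; adjust the list for your device):

```shell
# 1. Create the staging area: just an ordinary directory.
mkdir -p rfs-staging
# 2. Create the directory skeleton; proc and sys will later serve as
#    mount points for the virtual file systems.
mkdir -p rfs-staging/bin rfs-staging/dev rfs-staging/etc \
         rfs-staging/lib rfs-staging/proc rfs-staging/sbin \
         rfs-staging/sys rfs-staging/usr/bin rfs-staging/usr/lib
ls rfs-staging
```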

The main software utility tools


Translation Tools: QEMU
QEMU (short for Quick Emulator) is a free and open-source emulator that
performs hardware virtualization. QEMU is a hosted virtual machine monitor: it emulates the
machine's processor through dynamic binary translation and provides a set of different
hardware and device models for the machine, enabling it to run a variety of guest operating
systems. 
QEMU is a growing emulation project started by Fabrice Bellard. It's available for
Linux and Windows hosts and emulates PowerPC, ARM, MIPS, and SPARC targets. QEMU
takes the approach of providing a minimal translation layer between the host and target
processor. The host processor is the one running the emulator, and the target processor is
what’s being emulated.
QEMU also provides support for USB, serial, graphics, and network devices by
mapping them to a real device on the host machine. For an embedded board, this support
makes it possible for QEMU to be a reasonable stand-in for kernel and application
development. In addition to being a useful tool to emulate an entire system, QEMU can also
execute programs compiled for the target machine on the host machine. This means you
can test or debug a program without starting an entire emulated system, thus making it even
easier to quickly debug and test programs.
QEMU provides both virtualization software and emulation. This chapter looks at
using QEMU as an emulator. If the target machine happens to be the same as the host
machine, the software performs virtualization, but that’s an implementation detail. QEMU
has a kernel module for accelerating virtualization; unless your target machine is also an
x86, this feature isn't that helpful.
The QEMU maintainer has thoughtfully included ready-to-boot packages for the following
processors:
1. MIPS
2. ARM
3. X86
4. ColdFire
Compiling QEMU:
QEMU is available in source form; the site has precompiled binaries as well. In spite of the
binary distribution, this is open source software, so knowing how to compile it is important in
case a patch becomes available—or just because it’s open source, and compiling from source
is the right thing to do. QEMU requires GCC 3.x in order to build. To check what version of
GCC is currently installed, do the following:
$ gcc -dumpversion
4.2.3
If it presents you with a 4.0 or higher version number, install the GCC 3.4 package.
Don’t worry about multiple GCC installations on the system. GCC installs as gcc-<version>
and creates a symlink gcc that points at the newest version. If you set the environment
variable CC to gcc-3.4, that executable is used instead of the most recent gcc:
$ apt-get install gcc-3.4
$ export CC=gcc-3.4

QEMU also requires some additional development libraries in order to build. Fetch
these by doing the following on an Ubuntu or a Debian system:

$ sudo apt-get install libsdl-gfx1.2-dev zlib1g-dev

Start compiling QEMU by getting the source code at http://bellard.org/qemu/download.html.


The current version is 0.9.1; you can download it using wget, like so:

$ cd ~
$ wget http://bellard.org/qemu/qemu-0.9.1.tar.gz

Then, untar:
$ tar zxf qemu-0.9.1.tar.gz

Start the build process by doing a configure:


$ cd qemu-0.9.1
$ ./configure

Does this message appear?


WARNING: "gcc" looks like gcc 4.x
Looking for gcc 3.x
gcc 3.x not found!
QEMU is known to have problems when compiled with gcc 4.x

It is recommended that you use gcc 3.x to build QEMU


To use this compiler anyway, configure with --disable-gcc-check
This message means the GCC installed on the system isn’t a 3.x version. Check that
GCC 3.4 has been installed, and make sure the environment has the CC variable set to gcc-
3.4. Running configure with the --disable-gcc-check flag results in the configure step working
correctly, but the compilation fails.

After the configure step, typing


$ make
$ sudo make install

builds and installs QEMU.


QEMU is one of the best tools for testing kernel boot-up.
Follow these steps to download and unpack the file:
$ cd ~
$ wget http://bellard.org/qemu/arm-test-0.2.tar.gz
$ tar xzf arm-test-0.2.tar.gz
Use these commands to start the machine without a graphical terminal:
$ cd arm-test
$ ../qemu-0.9.1/arm-softmmu/qemu-system-arm -kernel zImage.integrator -initrd
arm_root.img -nographic -append "console=ttyAMA0"

Debugging Tools: (GDB)


What can debuggers do?
• Run programs
• Make the program stops on specified places or on specified conditions
• Give information about current variables’ values, the memory and the stack
• Let you examine the program execution step by step - stepping
• Let you examine the change of program variables’ values - tracing
! To be able to debug your program, you must compile it with the -g option (creates the
symbol table) !
$ gcc -g my_prog.c -o my_prog
Starting GDB: gdb or gdb my_prog (the executable file name)
Exiting GDB: quit
Building GDB
Like any open source project, GDB is built by downloading the source, using configure, and
then compiling the program. If you built the toolchain yourself, building GDB is substantially
easier. In order to do remote debugging, you also need to build a GDB server on the target
that serves as the controller program with which GDB communicates.
$ mkdir ~/gdb ; mkdir ~/gdb/build
$ cd ~/gdb
$ wget ftp://sourceware.org/pub/gdb/releases/gdb-<version>.tar.gz
$ tar xzf gdb-<version>.tar.gz
Like the projects you've built so far, there's a configuration step. Before you run configure,
make sure the cross-compiler that will be used with GDB is in the path. The configure script
looks for these tools, and the build won’t work properly if they can’t be found. The other
thing that's a little different is that the --target flag is passed to the configure script because
you're building GDB to run on your host machine but work with code that runs on the target
architecture:
$ export PATH=$PATH:<tool chain bin directory>
$ cd ~/gdb/build
$ export CFLAGS="-Wno-unused"
$ ../gdb-<version>/configure --target=<system> --prefix=
What should <system> be? This is the four-part "triplet" that's the prefix for the GCC
cross-compiler. If you invoke GCC as
powerpc-405-linux-gnu-gcc
then <system> is powerpc-405-linux-gnu.
Once GDB is built, the next step is to build gdbserver.
$ mkdir ~/gdb/build-gdbserver
$ cd ~/gdb/build-gdbserver
$ export LDFLAGS=-static
$ export CFLAGS=-Wno-unused
$ CC=powerpc-405-linux-gnu-gcc ../gdb-6.8/gdb/gdbserver/ \
configure --host=powerpc-405-linux-gnu --prefix=<board rfs>/bin/gdbserver
$ make
Depending on the toolchain, a supporting library for handling threads (such as libthread_db)
may not be present when linking with -static. If you’ll be debugging programs with several
threads, you must link gdbserver with shared libraries. If you want to create a static
gdbserver, do the following when configuring the project, before running make:
$ LDFLAGS=-static CC=powerpc-405-linux-gnu-gcc \
../gdb-6.8/gdb/gdbserver/configure --host=powerpc-405-linux-gnu \
--prefix=<board rfs>/bin/gdbserver

GDB Front Ends


GDB is a command-line debugger, and most users are accustomed to a graphical debugger or
at least one with some menus. Several tools work with GDB to provide a GUI that displays
the code that’s currently being debugged and lets you set breakpoints by clicking a line. The
design of GDB is such that it can work in the background with a front end; there also happen
to be several excellent choices for GDB front ends. This section covers a few of the more
popular solutions that work well during remote debugging:
● Data Display Debugger (DDD): This tool works well when you’re debugging
remotely because it has a great mix of point and click and command-line oriented
features. The command-line interface makes it easy to start a remote session, and the
UI features make normal debugging features accessible.
● Emacs: The Emacs GDB debugger works very well, even if you aren’t inculcated
with the Emacs tradition. As with DDD, you can access the command line and need to
do so to start a remote debugging session, but the rest of the functionality is more or
less a point and click interface.
● Eclipse: This tool can also be used as a debugger front end. However, unless it’s
being used as the IDE as well, the overhead for using it as a debugging tool may
outweigh what it has to offer. For example, in order to debug a project, you must
import the project into Eclipse; that process alone is time consuming enough to
suggest looking elsewhere.
Testing on host machine and Simulator:
***Refer compiling QEMU
Laboratory Tools:
***Refer Cross Compilation Toolchain performed in laboratory
