
Embedded Systems-Unit-3

Embedded software is software written specifically for hardware devices with limited computing capabilities. It varies in complexity from simple programs controlling lighting to complex systems in smart cars. The main difference between embedded and application software is that embedded software is tied to specific hardware with strict resource constraints, while application software runs on full operating systems with fewer restrictions. Developing embedded software requires specialized tools to effectively utilize limited resources and enable debugging on hardware. Common embedded development tools include editors, compilers, assemblers, debuggers, linkers, libraries, and simulators. Integrated development environments integrate multiple tools into a single software package.


UNIT-3

EMBEDDED SYSTEMS

I. EMBEDDED SOFTWARE
What is embedded software?
Embedded software is a piece of software that is embedded in hardware or non-PC devices. It is written specifically for the
particular hardware that it runs on and usually has processing and memory constraints because of the device’s limited computing
capabilities. Examples of embedded software include those found in dedicated GPS devices, factory robots, some calculators and
even modern smart watches.
Embedded software varies in complexity as much as the devices it is used to control. Although the term is often used interchangeably
with firmware, embedded software is often the only computer code running on a piece of hardware, while firmware, in contrast,
hands over control to an operating system that in turn launches and controls programs.

Embedded software can be very simple, such as code controlling lighting in a home, running on an 8-bit microcontroller
with just a few kilobytes of memory, or it can be quite complex, such as the software running all of the electronic components of a
modern smart car, complete with climate control, adaptive cruise control, collision sensing and navigation controls.

Difference between embedded software and application software


The main difference between embedded software and application software is that the former is usually tied to a specific device,
serving as the OS itself, with restrictions tied to that device’s specifications, so updates and additions are strictly controlled,
whereas application software provides the functionality in a computer and runs on top of an actual full OS, so it has fewer
restrictions in terms of resources.

The need for embedded tools

A significant factor in getting any kind of job done properly is having the right tools. This is true whether you are remodelling a
kitchen, fixing your car, or developing embedded software. Of course, it is the last of these that is of interest here. I have been
evangelizing on this topic for years (decades!). The problem is that there is a similarity - arguably superficial - between
programming an embedded system and programming a desktop computer. The same kinds of languages are used and software
design techniques are fairly universal. However, there are some major differences.
There are three key areas of difference between desktop and embedded programming:

 The degree of control required by the embedded developer is much greater, in order to utilize resources (time, memory and
power) effectively.
 The approaches to verification and debugging are quite different for an embedded system, as an external connection and/or a
selection of instruments need to be employed. Also, further tools may be needed to optimize the performance of the application.
 Every embedded system is different, whereas every desktop computer is basically the same. So, desktop programming tools
can make numerous assumptions about the execution environment and user requirements and expectations. Embedded development
tools make few such assumptions. This means that the tools (like the programmers) need to be much more flexible and adaptable.

Getting tools

Because there are so many desktop programmers, all working in the same environment, there is a huge demand for tools. The result
is that very good tools are effectively (or literally) free. The apparent similarity of embedded to desktop programming means that
developers have a misguided expectation that their tools should be free too, regardless of their specialized needs and much lower
demand level. In the electronic design automation (EDA) world, there is no such expectation. Tools are valued and price tags are
commonly in five figures.

There are essentially two ways that embedded developers can currently get tools:

 They can purchase commercial tools that are dedicated to the needs of the embedded developer. This is undoubtedly the best
approach, but the costs are substantial. There is a reasonable expectation that the tools will work "out of the box" and that good
quality technical support is available.
 They can choose free open source tools that have been adapted for embedded use, or do the adaptation themselves. The direct
costs are lower, but the extra time needed to get the tools into shape and to obtain support from the open source community
cannot be ignored. The challenges associated with utilizing open source tools should not be underestimated.

Open source tools deployment

There are a number of key issues to be addressed and decisions to be made when deploying open source tools. This process starts
with consideration of exactly which tools are needed for the project; each one needs to be considered in turn:
C/C++ compilers – Do you need C++? The GNU C++ runtime library can be difficult to build, and some C++ features, such as
exception handling, require additional configuration and validation.
Assembler
Linker
Runtime libraries – libgcc provides low-level compiler support. Which C runtime library is appropriate?

 GLIBC is POSIX compliant
 uClibc has a smaller footprint
 Newlib is smaller still, for systems with no OS
Debugger – GDB gives source- and assembly-level debugging. It can be used from the command line or as a back end to Eclipse.
Debug stub(s) – How does your JTAG unit connect to GDB? How will you program flash memory? Standard stubs only support
TCP/IP, so you may need to find or write a full-featured stub.
Integrated development environment – Do you want a GUI? Proper configuration of Eclipse with GDB and a stub is challenging.
Eclipse must not be modified, as this may compromise third-party plug-ins.

EDITOR
 The first of the embedded systems software development tools you need is a text editor.
 This is where you write the code for your embedded system.
 The code is written in a programming language; the most commonly used languages are C and C++.
 The code written in the editor is also referred to as source code.

COMPILER

 The second among Embedded Systems Software Development Tools is a compiler.


 A compiler is used once you are done with the editing part and have produced source code.
 The function of the compiler is to convert the source code into object code.
 Object code is understandable by the computer, as it is in a low-level form.
 So we can say that a compiler is used to convert high-level language code into a low-level form.

ASSEMBLER
The third and an important one among embedded systems software development tools is an assembler.
 The function of an assembler is to convert code written in assembly language into machine language.
 All the mnemonics and data are converted into opcodes and bits by the assembler.
 Our computers understand binary and work on 0s and 1s, so it is important to convert the code into machine
language.
 That is the basic function of an assembler; next comes the debugger.

DEBUGGER

 As the name suggests, a debugger is a tool used to debug your code.


 It is important to test whether the code you have written is free from errors, and a debugger is used for this testing.
 The debugger steps through the whole code and helps you hunt down errors and bugs.
 It helps you track down runtime errors (syntax errors, by contrast, are normally caught by the compiler) and notifies you
wherever they occur.
 The line number or location of an error is shown by the debugger so you can go ahead and rectify it.
 From this function, you can see how important a tool the debugger is among the embedded systems software development
tools.

LINKER

 The next one among the basic embedded systems software development tools is a linker.
 A linker is a computer program that combines one or more object code files and library files into an executable
program.
 It is very common practice to write larger programs in small parts and modules to make the job easier, and to use libraries in
your program.
 All these parts must be combined into a single file for execution, and this is the function of the linker.
 Now let’s talk about libraries.

LIBRARY

 A library is a pre-written program that is ready to use and provides specific functionality.
 For embedded software development, libraries are very important and convenient.
 A library is typically written in C or C++ and can be used by different programs and users.
 For example, Arduino microcontrollers come with a number of different libraries that you can download and use while
developing your software.
 For instance, controlling an LED or reading a sensor such as an encoder can be done with a library.

SIMULATOR

 Among all embedded software tools, simulation software is also needed.
 A simulator helps you see how your code will behave before it runs on real hardware.
 You can see how sensors are interacting, change the input values from the sensors, and watch how the components
work and how changing certain values changes parameters.
 These are the basic software tools required for embedded software development.

INTEGRATED DEVELOPMENT ENVIRONMENT-EMBEDDED SYSTEM SOFTWARE DEVELOPMENT TOOLS

 An integrated development environment (IDE) is software that contains all the necessary tools required for embedded software
development.
 For creating software for your embedded system, you need all of the above-mentioned tools.
 So it is very helpful to have software that can provide all of the necessary tools, from writing to testing of your code, in one
package.
 An IDE normally consists of a code editor, a compiler and a debugger.
 An integrated development environment also provides a user interface.
 An example of an integrated development environment is Microsoft Visual Studio. It is used for developing computer programs
and supports different programming languages.
 Other common examples of IDEs are given below:

 Android Studio
 Eclipse
 Code Blocks
 BlueJ
 Xcode
 Adobe Flash Builder etc.

Depending on what kind of microcontroller you are using, you can choose from many different software applications. I am
going to share a few of these Embedded Systems Software Development Tools here in my article.
II. Concepts of Real-Time System
Definition of Real-time Systems(RTS)
 A real-time system is any information processing system which has to respond to externally generated input
stimuli within a finite and specified period.

 The correctness depends not only on the logical result but also the time it was delivered.
 Failure to respond is as bad as the wrong response!

 Real-time systems come in four types:


 Hard real-time systems
 Soft real-time systems
 Firm real-time systems
 Weakly hard real-time
A deadline is a given time after the triggering event by which a response has to be completed.
 What is needed of an RTOS?
 Fast context switches
 Small size
 Not necessarily a quick response to external triggers, but a predictable one
 Multitasking is often used, but not necessarily
 "Low-level" programming interfaces might be needed, as with other embedded systems
 High processor utilization is desirable in any system (avoid oversized systems)

 Hard real-time systems

 An overrun in response time leads to potential loss of life and/or big financial damage
 Many of these systems are considered to be safety critical
 Sometimes they are only mission critical, with the mission being very expensive
 In general there is a cost function associated with the system

 Soft real-time
 Deadline overruns are tolerable, but not desired
 There are no catastrophic consequences of missing one or more deadlines
 There is a cost associated with overrunning, but this cost may be abstract
 Often connected to quality of service (QoS)
 Firm real-time system
 The computation is obsolete if the job is not finished on time
 Cost may be interpreted as loss of revenue
 A typical example is a forecast system

 Weakly hard real-time systems


 Systems where m out of k deadlines have to be met
 In most cases feedback control systems, in which control becomes unstable with too many missed
control cycles
 Best suited if the system has to deal with other failures as well (e.g. electromagnetic interference, EMI)
 Probabilistic guarantees are likely sufficient
BASIC CONCEPTS OF REAL TIME OPERATING SYSTEMS

• Introduction
Most embedded systems are bound to real-time constraints. In production control the various machines have to receive their
orders at the right time to ensure smooth operation of a plant and to fulfill customer orders in time. Railway switching systems
obviously have to act in a timely manner. In flight control systems the situation is even more restrictive. Inside technical artifacts
many operations depend on timing, e.g. the control of turbines or combustion engines. This is just a small fraction of such
applications. Even augmented reality systems are real-time applications, as augmenting a moving reality with outdated information
is useless or even dangerous.
“Real-time” means that the IT system is no longer controlling its own time domain. Now it is the progress of time of the
environment which dictates how time has to progress inside the system. This environmental time may be the real one of our
physical world or it may be artificially generated by some surrounding environment as well. For the embedded system there is no
difference between these options. Kopetz defines real-time systems as follows: “A real-time computer system is a computer system in
which the correctness of the system behaviour depends not only on the logical results of the computation, but also on the
physical instant at which these results are produced”. This means that in strict real-time systems a late result is not just late
but wrong. The meaning of “late” of course has to be defined dependent on the specific application. In the case of an air-bag
controller it is intuitively clear what real-time means and it is easy to understand that a late firing of the air-bag is not only late
but definitely wrong.
It can be concluded that in real-time systems the program logic of application tasks has to be augmented by information
about timing. Such timing information contains the earliest point in time the task may be started as well as the latest allowed
finishing time. This, together with the program logic, may be seen as a specification for the computing system of what to do and
when to do it.
Many such tasks may have to be executed concurrently on an embedded computing system. Such situations usually are
handled by some kind of operating system. The same is true in the case of real-time systems. But now an additional objective
function is introduced, one which dominates most others: formulated real-time constraints have to be
respected. An operating system which is capable of taking care of this is called a “Real-time Operating System (RTOS)”. Of
course some additional information is needed by an RTOS to manage real-time tasks. Especially the worst-case execution time
(WCET) on the specific target architecture of any real-time task has to be available. Determining the WCET of a task is a
demanding goal on its own. It must never be underestimated. On the other hand the potential
over-estimation has to be reduced as far as possible to allow efficient system implementations.
The above discussion indicates that we first have to discuss fundamental properties of real-time tasks. On this basis we can
then introduce basic techniques used in an RTOS to handle such tasks. We will concentrate on real-time scheduling and on
schedulability analysis.

• Characteristics of Real-Time Tasks


First of all a real-time task is a task like any other. However, there is an essential difference from other computation: the notion
of time. Time means that the correctness of the system depends not only on logical results but also on the time the results are
produced. In contrast to other classes of systems, in a real-time system the system time (internal time) has to be measured with the
same time scale as the controlled environment (external time). One parameter constitutes the main difference between real-time
and non-real-time: the deadline. Any postulated deadline has to be met under all (even the worst) circumstances. This has the
consequence that real-time means predictability. It is a widespread myth that real-time systems have to be fast. Of course
they have to be fast enough to guarantee the required deadlines. Most of all, however, a real-time system has to be
predictable. Ensuring this predictability may even slow down a system.
Real-time systems can be characterized by the strictness of real-time restrictions.
A real-time task is called hard if missing its deadline may cause catastrophic consequences on the environment under control.
Typical application areas can be found in the automotive domain when looking at e.g. power-train control, air-bag control, steer
by wire, and brake by wire. In the aeronautics domain engine control or aerodynamic control may serve as examples.
A RT task is called firm if missing its deadline makes the result useless, but missing it does not cause serious damage. Typical
application areas are weather forecast or decisions on stock exchange orders.
A RT task is called soft if meeting its deadline is desirable (e.g. for performance reasons) but missing it does not cause
serious damage. Here typical application areas are communication systems (voice over IP), any kind of user interaction, or
comfort electronics (most body electronics in cars).

• Real-Time Scheduling
Given is a set of n generic tasks Γ = {τ1, ..., τn}, a set of m processors P = {P1, ..., Pm}, and a set of s resources R =
{R1, ..., Rs}. There may exist precedences, specified using a precedence graph (DAG), and, as we are
considering real-time systems, timing constraints are associated with each task.

The goal of real-time scheduling is to assign processors from P and resources from R to tasks from Γ in such a way that all task
instances are completed under the imposed constraints. This problem in its general form is NP-complete! Therefore relaxed
situations have to be enforced and/or proper heuristics have to be applied. In principle scheduling is an on-line algorithm. Under
certain assumptions large parts of scheduling can be done off-line. Generating static cyclic schedules may serve as an example.
In any case all exact or heuristic algorithms should have very low complexity.
In principle scheduling algorithms may be preemptive or non-preemptive. In preemptive approaches a running task instance
may be stopped (preempted) at any time and restarted (resumed) at any time later. Any preemption means some delay in
executing the task instance, a delay which the RTOS has to take care of, as it has to guarantee respecting the deadline. In the case of
non-preemptive scheduling a task instance, once started, will execute undisturbed until it finishes or is blocked due to an attempt
to access an unavailable exclusive resource. Non-preemptive approaches result in fewer context switches (the replacement of one task
by another, usually a very costly operation as many processor locations have to be saved and restored). This may lead to the
conclusion that non-preemptive approaches should be preferable in real-time scheduling. However, not allowing preemption
imposes such hard restrictions on the scheduler’s freedom that for most non-static cases predictable real-time scheduling
solutions with an acceptable processor utilization rate are known only if preemption is allowed. In the sequel, basic preemptive
real-time scheduling algorithms for periodic and aperiodic tasks will be discussed shortly.
• Operating System Designs
The most common operating systems are based on kernel designs. The kernel design has been around for almost 40 years
and offers a clear separation between the operating system and the application running on top of it, as they are allocated in
different memory locations. The processes can use the kernel functionality by performing system calls. System calls are software
interrupts which allow switching from the application to the operating system. Therefore the kernel needs to install an interrupt
handler for the different modes of operation, depicted in Fig. 2.4, that can be enabled in the program status word (PSW): user
mode and supervisor mode. For this reason, protection is done in modern SoCs at the peripheral side. Some registers can be changed
only if the CPU signals a specific execution mode (e.g. master mode) via a set of additional HW signals in the bus infrastructure.
Processes outside the OS are executed within user mode and are not allowed to execute instructions which are only
available in supervisor mode. This means that the user mode instructions constitute a non-critical subset of the supervisor mode
instructions. During the runtime of a process the supervisor mode bit within the PSW is disabled and can only be enabled if an
interrupt such as a system call or an external interrupt occurs. The operating system is responsible for enabling the user mode at
the time a user process is activated. Typically a user process has its own virtual memory address space which separates it
completely from the kernel. However, this is not possible on all embedded microcontrollers, as they may lack a memory
management unit (MMU) enabling the use of virtual memory.
The use of virtual memory, if there is an MMU available, has to be realized without any unbound memory accesses like
swapping to an external disk or replacing translation lookaside buffer (TLB) entries by searching a dynamically sized page
table.
To use the functionality provided by the OS kernel it is necessary to define an interface that allows applications to use it. This
interface is called the application binary interface (ABI). The ABI defines a set of system calls, a register usage convention and a
stack layout, and enables binary compatibility, whereas an application programming interface (API) enables source code
compatibility through the definition of a set of function signatures providing a fixed interface to call these functions. Figure 2.5
shows the location of the ABI within an architectural schema.
The kernel itself can be built in many ways and usually provides the following basic activities: process management,
process communication, interrupt handling, and process synchronization.
Process management is responsible for process creation, process termination, scheduling, dispatching, context switching and
other related activities.
Interrupt handling in an RTOS is different from the standard implementation in an ordinary OS. In an ordinary OS interrupts can
preempt running processes at any time. This can lead to unbounded delays which are not acceptable in an RTOS. Therefore the
handling of interrupts is integrated into the scheduling, so that it can be scheduled along with the other processes and a guarantee
of feasibility can be achieved even in the presence of interrupt requests.
Another important role of the kernel is to provide functionality for the synchronization and communication of processes. The
use of ordinary semaphores is not possible within an RTOS, as the caller may experience unbounded delays in case of a priority
inversion problem. Therefore the synchronization mechanisms need to support a resource access protocol such as Priority
Inheritance, Priority Ceiling or the Stack Resource Policy [But04, p. 191ff].
As already stated, there are different ways to realize a kernel. Today the main design question is whether to use a monolithic
kernel, a microkernel or a combination called a hybrid kernel [Sta01, Tan01].

• Real-Time Requirements of Multimedia Application


The timing constraints for multimedia traffic originate:
 from the requirement to maintain the same temporal relationship in the sequence of information on transmission from
service provider to service requester
 from the necessity of preferably low offset delays between information departure and arrival
 from the requisite to keep multiple types of media in sync
Consequently, each piece of information needs to be transmitted within a bounded time frame and the traffic becomes real-time.
Any failure to meet the timing constraints impairs the user-perceived Quality of Service (QoS) of networked multimedia
applications. Different types of applications, however, have different QoS requirements. Common multimedia applications can
be classified as multimedia playback applications, streaming applications, and real-time interactive applications.

• Conclusions
Embedded applications in most cases are bound to real-time constraints and are usually executed on top of a real-time
operating system (RTOS). Real-time tasks have to be annotated with basic timing information in order to enable the
underlying RTOS to manage them properly. Such parameters include arrival time, worst-case execution time (WCET) and
(relative or absolute) deadline, to mention just the most important ones. Explicitly providing this information distinguishes
real-time applications from ordinary ones, where such information (usually characterized as non-functional properties) is
available only in an implicit manner. With such characteristics in hand, specific scheduling algorithms can be designed. Most
real-time applications show periodic behavior. When possible, static cyclic schedules are calculated off-line. If more flexibility
is needed, on-line techniques are applied. These algorithms are bound to priorities, which can be assigned statically as in the
case of Rate Monotonic scheduling.

III.SOFTWARE QUALITY MANAGEMENT

DEFINITION:
Software quality measurement quantifies to what extent a software program or system rates along each of five desirable
dimensions (such as those defined by CISQ below). An aggregated measure of software quality can be computed through a
qualitative or a quantitative scoring scheme or a mix of both, and then a weighting system reflecting the priorities. This view of
software quality as positioned on a linear continuum is supplemented by the analysis of "critical programming errors" that under
specific circumstances can lead to catastrophic outages or performance degradations that make a given system unsuitable for use
regardless of its rating based on aggregated measurements. Such programming errors found at the system level represent up to
90% of production issues, whilst at the unit level, even if far more numerous, programming errors account for less than 10% of
production issues.

WHY SOFTWARE QUALITY MANAGEMENT IS MOTIVATED:


 Measuring software quality is motivated by at least two reasons:
 Risk Management: Software failure has caused more than inconvenience. Software errors have caused human fatalities. The
causes have ranged from poorly designed user interfaces to direct programming errors. An example of a programming error
that led to multiple deaths is discussed in Dr. Leveson's paper. This resulted in requirements for the development of some
types of software, particularly and historically for software embedded in medical and other devices that regulate critical
infrastructures: "[Engineers who write embedded software] see Java programs stalling for one third of a second to perform
garbage collection and update the user interface, and they envision airplanes falling out of the sky". In the United States,
within the Federal Aviation Administration (FAA), the FAA Aircraft Certification Service provides software programs,
policy, guidance and training, focusing on software and complex electronic hardware that has an effect on the airborne product
(a "product" is an aircraft, an engine, or a propeller).

 Cost Management: As in other fields of engineering, an application with good structural software quality costs less to
maintain and is easier to understand and change in response to pressing business needs. Industry data demonstrate that poor
application structural quality in core business applications (such as enterprise resource planning(ERP), customer relationship
management (CRM) or large transaction processing systems in financial services) results in cost and schedule overruns and
creates waste in the form of rework (up to 45% of development time in some organizations). Moreover, poor structural
quality is strongly correlated with high-impact business disruptions due to corrupted data, application outages, security
breaches, and performance problems.

However, the distinction between measuring and improving software quality in an embedded system (with emphasis on risk
management) and software quality in business software (with emphasis on cost and maintainability management) is becoming
somewhat irrelevant. Embedded systems now often include a user interface and their designers are as much concerned with issues
affecting usability and user productivity as their counterparts who focus on business applications. The latter are in turn looking at
ERP or CRM systems as a corporate nervous system whose uptime and performance are vital to the well-being of the enterprise.
This convergence is most visible in mobile computing: a user who accesses an ERP application on their smart phone is depending
on the quality of software across all types of software layers.
Both types of software now use multi-layered technology stacks and complex architecture so software quality analysis and
measurement have to be managed in a comprehensive and consistent manner, decoupled from the software's ultimate purpose or
use. In both cases, engineers and management need to be able to make rational decisions based on measurement and fact-based
analysis in adherence to the precept "In God (we) trust. All others bring data". ((mis-)attributed to W. Edwards Deming and others).

QUALITY:
There are many different definitions of quality. For some it is the "capability of a software product to conform to requirements",
while for others it can be synonymous with "customer value" or even defect level.
The first definition of quality that history remembers is from Shewhart at the beginning of the 20th century: there are two common
aspects of quality. One of them has to do with the consideration of the quality of a thing as an objective reality independent of the
existence of man. The other has to do with what we think, feel or sense as a result of that objective reality. In other words, there is
a subjective side of quality.

CISQ'S QUALITY MODEL:


Even though "quality is a perceptual, conditional and somewhat subjective attribute and may be understood differently by different
people" (as noted in the article on quality in business), software structural quality characteristics have been clearly defined by the
Consortium for IT Software Quality (CISQ). Under the guidance of Bill Curtis, co-author of the Capability Maturity
Model framework and CISQ's first Director; and Capers Jones, CISQ's Distinguished Advisor, CISQ has defined five major
desirable characteristics of a piece of software needed to provide business value. In the House of Quality model, these are "Whats"
that need to be achieved:
Reliability:
An attribute of resiliency and structural solidity. Reliability measures the level of risk and the likelihood of potential
application failures. It also measures the defects injected due to modifications made to the software (its "stability" as termed
by ISO). The goal for checking and monitoring Reliability is to reduce and prevent application downtime, application
outages and errors that directly affect users, and enhance the image of IT and its impact on a company's business
performance.
Efficiency:
The source code and software architecture attributes are the elements that ensure high performance once the application is in
run-time mode. Efficiency is especially important for applications in high execution speed environments such as algorithmic
or transactional processing where performance and scalability are paramount. An analysis of source code efficiency and
scalability provides a clear picture of the latent business risks and the harm they can cause to customer satisfaction due to
response-time degradation.
Security:
A measure of the likelihood of potential security breaches due to poor coding practices and architecture. This quantifies the
risk of encountering critical vulnerabilities that damage the business.
Maintainability:
Maintainability includes the notion of adaptability, portability and transferability (from one development team to another).
Measuring and monitoring maintainability is a must for mission-critical applications where change is driven by tight time-to-
market schedules and where it is important for IT to remain responsive to business-driven changes. It is also essential to keep
maintenance costs under control.
Size:
While not a quality attribute, the sizing of source code is a software characteristic that obviously impacts maintainability.
Combined with the above quality characteristics, software size can be used to assess the amount of work produced and to be done
by teams, as well as their productivity through correlation with time-sheet data, and other SDLC-related metrics.
Software functional quality is defined as conformance to explicitly stated functional requirements, identified for example
using Voice of the Customer analysis (part of the Design for Six Sigma toolkit and/or documented through use cases) and the level
of satisfaction experienced by end-users. The latter is referred to as usability and is concerned with how intuitive and responsive
the user interface is, how easily simple and complex operations can be performed, and how useful error messages are. Typically,
software testing practices and tools ensure that a piece of software behaves in compliance with the original design, planned user
experience and desired testability, i.e. a piece of software's disposition to support acceptance criteria.
The dual structural/functional dimension of software quality is consistent with the model proposed in Steve McConnell's Code
Complete which divides software characteristics into two pieces: internal and external quality characteristics. External quality
characteristics are those parts of a product that face its users, where internal quality characteristics are those that do not.

MEASUREMENT:

Software quality measurement is about quantifying to what extent a system or software possesses desirable characteristics. This can
be performed through qualitative or quantitative means or a mix of both. In both cases, for each desirable characteristic, there are a
set of measurable attributes whose existence in a piece of software or system tends to be correlated and associated with this
characteristic. For example, an attribute associated with portability is the number of target-dependent statements in a program.
More precisely, using the Quality Function Deployment approach, these measurable attributes are the "hows" that need to be
enforced to enable the "whats" in the software quality definition above.
The structure, classification and terminology of attributes and metrics applicable to software quality management have been
derived or extracted from the ISO/IEC 9126 standard and the subsequent ISO/IEC 25000 quality model. The main focus is on internal structural quality.
Subcategories have been created to handle specific areas like business application architecture and technical characteristics such as
data access and manipulation or the notion of transactions.
The dependence tree between software quality characteristics and their measurable attributes can be summarized as follows: each of
the five characteristics that matter to the user or owner of the business system depends on a set of measurable
attributes:
 Application Architecture Practices
 Coding Practices
 Application Complexity
 Documentation
 Portability
 Technical and Functional Volume
Correlations between programming errors and production defects reveal that basic code errors account for 92% of the total errors in
the source code. However, these numerous code-level issues eventually account for only 10% of the defects in production. Conversely, bad software
engineering practices at the architecture level account for only 8% of total defects, but consume over half the effort spent on fixing
problems, and lead to 90% of the serious reliability, security, and efficiency issues in production.

IV. Compilers for embedded systems


Compilers translate high level programming language instructions into machine language. They perform the same task for high level languages that an
assembler performs for assembly language, translating program instructions from a language such as C to an equivalent set of instructions in machine
language. This translation does not happen in a single step – three different components are responsible for changing C instructions into their machine
language equivalents. These three components are:
1) Pre-processor
2) Compiler
3) Linker
1. The Pre-processor
A program first passes through the C pre-processor. The pre-processor goes through a program and prepares it to be read by the compiler. The pre-
processor includes the contents of other programmer specified files, manipulates the program text, and passes on instructions about the particular
computer for which the compiler will be translating.
2. The Compiler
The compiler translates a program into an intermediate form containing both machine code and information about the program’s contents. The compiler
is the second component to handle your program. The compiler has the most important job: digesting and translating the program into a language
readable by the destination computer. Many compilers make multiple passes over the code; dedicated optimization passes often reduce the
size of the machine code generated.
3. The Linker
In the past, the development computer was often not powerful enough to hold an entire program being developed in
memory at one time. Programs therefore had to be divided into separate modules; each module was compiled into object code and a linker
would link the object modules together. Our development machines today are very powerful and the use of a linker is no longer absolutely necessary.
Many implementations of C provide function libraries which have been precompiled for a particular computer. These functions serve common program
needs such as serial port support, input/output, and description of the destination computer. Functions within libraries are usually either linked with
modules which use them or included directly by the compiler if the compiler supports library function inclusion. When your program has been pre-
processed, compiled and linked, the destination computer will be able to read and execute your program.

Cross Development
A cross compiler runs on one type of computer and produces machine code for a different type of computer. While many 8 bit embedded
microcontrollers can support sophisticated and extremely useful programs, they are not powerful enough to support the resource needs of a C
development environment. How does a developer create and compile programs for an 8 bit microcontroller? By using a cross compiler.
1. Cross compiler
An embedded systems developer writes and compiles programs on a larger computer which can support a C development environment. The
compiler used does not translate to the machine language of the development computer, it produces a version of the program in the machine
language of the 8 bit microcontroller. A compiler that runs on one type of computer and provides a translation for a different type of computer is
called a cross-platform compiler or cross-compiler. The object code formats generated by a cross-compiler are based on the target device. For
example, a compiler for the Motorola MC68HC705C8 could generate an S-record file for its object code.
2. Cross development tools
After a program is compiled, it must be tested using a simulator or an emulator. After testing, the developer uses a device called a
programmer to write the translated program into the memory of the 8 bit microcontroller.

Simulator
A simulator is a software program which allows a developer to run a program designed for one type of machine (the target machine) on another
(the development machine). The simulator simulates the running conditions of the target machine on the development machine. Using a simulator
you can step through your code while the program is running. You can change parts of your code in order to experiment with different solutions
to a programming problem. Simulators do not support real interrupts or devices. An in-circuit simulator includes a hardware device which can be
connected to your development system to behave like the target microcontroller. The in-circuit simulator has all the functionality of the software
simulator while also supporting the emulation of the microcontroller’s I/O capabilities.

Emulator
An emulator or in-circuit emulator is a hardware device which behaves like a target machine. It is often called a real time tool because it can react
to events as the target microcontroller would. Emulators are often packaged with monitor programs which allow developers to examine registers
and memory locations and set breakpoints.

LIST OF SOME COMPILERS


1.WinAVR

WinAVR is a set of development tools for the Atmel AVR series of RISC microprocessors. It includes avr-gcc (the GNU gcc compiler for the AVR), avr-gdb
(the GNU debugger), simulator, IDE, a tool to download/upload the ROM and EEPROM contents of AVR microcontrollers, a tool for editing EPROM load
files, etc. The host platform for the tools is Windows.

2.OnBoard Suite

The OnBoard Suite creates executables (or, if you wish, a Hackmaster hack) for the Palm PDA. It includes a C compiler, assembler, programmer's editor and a
Palm-to-host porting tool. The Suite runs directly on PalmOS, on the Palm handheld.

3.Reads51 Small C Cross-Compiler

Version 2.0 of Reads51 comes with an assembler that runs under Windows 3.1 to generate 8051 code. It comes with an IDE, editor, debugger, and monitor.
Version 4.10 includes a Small C compatible 8051 compiler, a relative assembler, a linking locator (loader), an editor, a chip simulator, an assembly language
debugger and a monitor. This version runs under Windows 95, 98 and NT. With these tools, you can write, compile, assemble, debug, download and run
application software (including embedded control software) in the MCS-51 language. It comes with an online help system. If you do not buy their board, you
may only use the software for non-commercial home and educational purposes.

4.AnyC C Compiler

AnyC is a re-targetable C compiler released under the GNU GPL. It is intended for use with 8 bit microprocessors, particularly 8 bit RISC microcontrollers.
The original target is the Microchip PIC 16C5X family of 8 bit RISC microcontrollers.

5.z88dk - The z88 Development Kit

z88dk is a z80 C cross compiler based on the Small C compiler, supporting many features of ANSI C. It comes with an assembler and linker as well as a
standard C library. Supported host systems include Amiga, BeOS, HP-UX 9, Linux, BSD systems, MacOS X, Solaris, Win32, Win16 and MSDOS. The
compiler generates code for the following target systems: Cambridge Computers z88, Sinclair ZX Spectrum, Sinclair ZX81, CP/M based machines, Amstrad
NC100, VZ200/300, Sharp MZ series, TI calculators (TI82, TI83, TI83+, TI85, TI86), ABC80, Jupiter Ace, Xircom REX 6000, Sam Coupe,
MSX1, Spectravideo, Mattel Aquarius, Peters Sprinter, and C128 (in z80 mode).
