
UNIT-IV

USER INTERFACE DESIGN:

The user interface is the front-end application view through which the user
interacts with the software. Through the user interface, the user can manipulate
and control the software as well as the hardware.
User interface design creates an effective communication medium between a
human and a computer, and provides the fundamental platform for human-computer
interaction. A good user interface should be:
1. Attractive
2. Simple to use
3. Responsive in a short time
4. Clear to understand
5. Consistent on all interface screens
Types of User Interface
1. Command Line Interface: The Command Line Interface provides a
command prompt, where the user types the command and feeds it to the
system. The user needs to remember the syntax of the command and its
use.
2. Graphical User Interface: A Graphical User Interface provides a simple
interactive interface to interact with the system. A GUI can be a
combination of both hardware and software. Using a GUI, the user
interprets and manipulates the software visually.
User Interface Design Process
The analysis and design process of a user interface is iterative and can be
represented by a spiral model. The analysis and design process of user
interface consists of four framework activities.
1. User, Task, Environmental Analysis, and Modeling
Initially, the focus is on the profile of users who will interact with the
system, i.e., their understanding, skill and knowledge, type of user, etc. Based on
the user profiles, users are grouped into categories, and requirements are
gathered from each category. Based on these requirements, the developer
understands how to develop the interface. Once all the requirements are gathered,
a detailed analysis is conducted. In the analysis part, the tasks that the user
performs to establish the goals of the system are identified, described and
elaborated. The analysis of the user environment focuses on the physical work
environment. Among the questions to be asked are:
1. Where will the interface be located physically?
2. Will the user be sitting, standing, or performing other tasks unrelated to
the interface?
3. Does the interface hardware accommodate space, light, or noise
constraints?
4. Are there special human factors considerations driven by environmental
factors?
2. Interface Design
The goal of this phase is to define the set of interface objects and actions i.e.,
control mechanisms that enable the user to perform desired tasks. Indicate how
these control mechanisms affect the system. Specify the action sequence of
tasks and subtasks, also called a user scenario. Indicate the state of the system
when the user performs a particular task. Always follow the three golden rules
stated by Theo Mandel. Design issues such as response time, command and
action structure, error handling, and help facilities are considered as the design
model is refined. This phase serves as the foundation for the implementation
phase.
3. Interface Construction and Implementation
The implementation activity begins with the creation of a prototype (model) that
enables usage scenarios to be evaluated. As the iterative design process
continues, a user interface toolkit that allows the creation of windows, menus,
device interaction, error messages, commands, and many other elements of an
interactive environment can be used to complete the construction of the
interface.
4. Interface Validation
This phase focuses on testing the interface. The interface should be designed
so that it performs tasks correctly and can handle a variety of tasks. It should
achieve all the user's requirements, and it should be easy to use and easy to
learn. Users should accept the interface as a useful one in their work.
User Interface Design Golden Rules
The following are the golden rules stated by Theo Mandel that must be followed
during the design of the interface.
Place the User in Control
1. Define the interaction modes in such a way that does not force the
user into unnecessary or undesired actions: The user should be able to
easily enter and exit the mode with little or no effort.
2. Provide for flexible interaction: Different people will use different
interaction mechanisms: some might use keyboard commands, some might
use a mouse, some might use a touch screen, etc. Hence all such interaction
mechanisms should be provided.
3. Allow user interaction to be interruptible and undoable: When a user
is doing a sequence of actions the user must be able to interrupt the
sequence to do some other work without losing the work that had been
done. The user should also be able to do undo operation.
4. Streamline interaction as skill level advances and allow the
interaction to be customized: Advanced or highly skilled users should be
given the chance to customize the interface as they want, which allows
different interaction mechanisms so that the user does not get bored with
the same interaction mechanism.
5. Hide technical internals from casual users: The user should not be
aware of the internal technical details of the system. They should interact
with the interface just to get their work done.
6. Design for direct interaction with objects that appear on-screen: The
user should be able to use and manipulate the objects that are present on
the screen to perform a necessary task. This gives the user a feeling of
direct control over what appears on the screen.
Reduce the User’s Memory Load
1. Reduce demand on short-term memory: When users are involved in
some complex tasks the demand on short-term memory is significant. So
the interface should be designed in such a way to reduce the remembering
of previously done actions, given inputs and results.
2. Establish meaningful defaults: An initial set of defaults should always
be provided for the average user; if a user needs to change or add some
features, they should be able to do so.
3. Define shortcuts that are intuitive: Shortcuts should be based on
mnemonics, i.e., keyboard shortcuts that are easy to associate with the
action they perform on the screen (e.g., Ctrl+S for Save).
4. The visual layout of the interface should be based on a real-world
metaphor: If what is represented on the screen is a metaphor for a
real-world entity, users will understand it more easily.
5. Disclose information in a progressive fashion: The interface should be
organized hierarchically i.e., on the main screen the information about the
task, an object or some behavior should be presented first at a high level
of abstraction. More detail should be presented after the user indicates
interest with a mouse pick.
Make the Interface Consistent
1. Allow the user to put the current task into a meaningful
context: Many interfaces have dozens of screens, so it is important to
provide indicators consistently so that the user knows the context of the
work being done. The user should also know from which page they have
navigated to the current page and where they can navigate from the current
page.
2. Maintain consistency across a family of applications: In the
development of a set of applications, all of them should follow and
implement the same design rules so that consistency is maintained among
the applications.
3. If past interactive models have created user expectations, do not make
changes unless there is a compelling reason: Once a particular
interactive sequence has become a standard (e.g., Ctrl+S to save a file), the
user expects this in every application they encounter.
User interface design is a crucial aspect of software engineering, as it is
the means by which users interact with software applications. A well-
designed user interface can improve the usability and user experience of
an application, making it easier to use and more effective.
Key Principles for Designing User Interfaces
1. User-centered design: User interface design should be focused on the
needs and preferences of the user. This involves understanding the user’s
goals, tasks, and context of use, and designing interfaces that meet their
needs and expectations.
2. Consistency: Consistency is important in user interface design, as it helps
users to understand and learn how to use an application. Consistent
design elements such as icons, color schemes, and navigation menus
should be used throughout the application.
3. Simplicity: User interfaces should be designed to be simple and easy to
use, with clear and concise language and intuitive navigation. Users
should be able to accomplish their tasks without being overwhelmed by
unnecessary complexity.
4. Feedback: Feedback is significant in user interface design, as it helps
users to understand the results of their actions and confirms that they are
making progress towards their goals. Feedback can take the form of
visual cues, messages, or sounds.
5. Accessibility: User interfaces should be designed to be accessible to all
users, regardless of their abilities. This involves considering factors such
as color contrast, font size, and assistive technologies such as screen
readers.
6. Flexibility: User interfaces should be designed to be flexible and
customizable, allowing users to tailor the interface to their own
preferences and needs.
Real-time systems:
A real-time system is a system that is subject to real-time constraints, i.e.,
the response must be guaranteed within a specified timing constraint or
the system must meet a specified deadline. Examples include flight control
systems, real-time monitors, etc.
Types of real-time systems based on timing constraints:
1. Hard real-time system: This type of system can never miss its deadline.
Missing the deadline may have disastrous consequences. The usefulness
of results produced by a hard real-time system decreases abruptly and
may become negative if tardiness increases. Tardiness means how late a
real-time system completes its task with respect to its deadline. Example:
Flight controller system.
2. Soft real-time system: This type of system can miss its deadline
occasionally with some acceptably low probability. Missing the deadline
does not have disastrous consequences. The usefulness of results produced by a
soft real-time system decreases gradually with an increase in tardiness.
Example: Telephone switches.
3. Firm Real-Time Systems: These are systems that lie between hard and
soft real-time systems. In firm real-time systems, missing a deadline is
tolerable, but the usefulness of the output decreases with time. Examples
of firm real-time systems include online trading systems, online auction
systems, and reservation systems.
Reference model of the real-time system:
Our reference model is characterized by three elements:
1. A workload model: It specifies the application supported by the system.
2. A resource model: It specifies the resources available to the application.
3. Algorithms: It specifies how the application system will use resources.
Terms related to real-time system:
1. Job: A job is a small piece of work that can be assigned to a processor
and may or may not require resources.
2. Task: A set of related jobs that jointly provide some system
functionality.
3. Release time of a job: It is the time at which the job becomes ready for
execution.
4. Execution time of a job: It is the time taken by the job to finish its
execution.
5. Deadline of a job: It is the time by which a job should finish its
execution. Deadline is of two types: absolute deadline and relative
deadline.
6. Response time of a job: It is the length of time from the release time of a
job to the instant when it finishes.
7. The maximum allowable response time of a job is called its relative
deadline.
8. The absolute deadline of a job is equal to its relative deadline plus its
release time (see the sketch after this list).
9. Processors are also known as active resources. They are essential for the
execution of a job. A job must have one or more processors in order to
execute and proceed towards completion. Example: computer,
transmission links.
10. Resources are also known as passive resources. A job may or may not
require a resource during its execution. Example: memory, mutex.
11. Two resources are identical if they can be used interchangeably; else they
are heterogeneous.
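As a minimal sketch of how these terms relate (the class, variable names and all numbers below are hypothetical, not from the text), the arithmetic relationships among release time, relative deadline, absolute deadline, and response time can be written directly in Python:

from dataclasses import dataclass

@dataclass
class Job:
    release_time: float       # time at which the job becomes ready for execution
    execution_time: float     # time the job needs to finish its execution
    relative_deadline: float  # maximum allowable response time

    @property
    def absolute_deadline(self) -> float:
        # absolute deadline = release time + relative deadline
        return self.release_time + self.relative_deadline

    def response_time(self, finish_time: float) -> float:
        # response time = instant the job finishes - release time
        return finish_time - self.release_time

    def meets_deadline(self, finish_time: float) -> bool:
        # a hard real-time job must never finish after its absolute deadline
        return finish_time <= self.absolute_deadline


# Hypothetical job: released at t = 10, needs 3 time units, relative deadline of 8
job = Job(release_time=10.0, execution_time=3.0, relative_deadline=8.0)
print(job.absolute_deadline)     # 18.0
print(job.response_time(15.0))   # 5.0
print(job.meets_deadline(15.0))  # True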

Advantages:
• Real-time systems provide immediate and accurate responses to external
events, making them suitable for critical applications such as air traffic
control, medical equipment, and industrial automation.
• They can automate complex tasks that would otherwise be impossible to
perform manually, thus improving productivity and efficiency.
• Real-time systems can reduce human error by automating tasks that
require precision, accuracy, and consistency.
• They can help to reduce costs by minimizing the need for human
intervention and reducing the risk of errors.
• Real-time systems can be customized to meet specific requirements,
making them ideal for a wide range of applications.
Disadvantages:
• Real-time systems can be complex and difficult to design, implement, and
test, requiring specialized skills and expertise.
• They can be expensive to develop, as they require specialized hardware
and software components.
• Real-time systems are typically less flexible than other types of computer
systems, as they must adhere to strict timing requirements and cannot be
easily modified or adapted to changing circumstances.
• They can be vulnerable to failures and malfunctions, which can have
serious consequences in critical applications.
• Real-time systems require careful planning and management, as they
must be continually monitored and maintained to ensure they operate
correctly.
HUMAN FACTORS:
An understanding of human factors is imperative for the design and
development of any software work; it provides the underlying idea for
incorporating these factors into the software life cycle. Many large
companies came to recognise that the success of a product depends upon
a solid human factors design. Human factors discovers and applies
information about human behaviour, abilities, limitations and other
characteristics to the design of tools, machines, systems, tasks, jobs and
environments for productive, safe, comfortable and effective human use.
The study of human factors is essential for every software manager, since
he/she must be acquainted with how his/her staff members interact with
each other. Generally, software products are used by a variety of people,
and it is necessary to take into account the abilities of such a group to make
the software more useful and popular.
Objective of human factors design:
The purpose of human factors design is to create products that meet
the operability and learnability goals. This design should meet the user's
needs by being not only effective and efficient but also of high quality,
while keeping an eye on the major concern of the customer in most cases,
that is, affordability.
The engineering discipline for designers and developers must focus on
the following:
• Users and their psychology
• Amount of work that the user must do, including task goals,
performance requirements and group communication requirements.
• Quality and performance.
• Information required by users and their job.
Benefits:
• Elevated user satisfaction.
• Decreased training time and costs.
• Reduced operator stress.
• Reduced product liability.
• Decrement of operating costs.
• Lesser operational error.

Usability-based approach to human factors:
People often do not take human factors seriously because they are
regarded as common sense. Many companies heavily channel their
resources and time towards other factors of software development such as
planning, management and control. They often neglect the fact that they must
present their product in such a way that it is easy to learn and use, and that it
should be aesthetic in nature.
Interface designers and engineering psychologists apply systematic human factors
techniques to produce designs for hardware and software.
A systematic approach is required in the human factors design process,
and thus usability is required.
Usability is a software quality characteristic that is surveyed in terms of its
costs and benefits, and it can simply be defined as an external attribute of
software quality. Involving users in the development life cycle
ensures that the product is user friendly and is widely accepted.
Usability aims at the following:
• Shortening the time to accomplish tasks.
• Reducing the number of mistakes made.
• Reducing learning time.
• Improving people’s satisfaction with a system.

Benefits of usability:
• Elevated sales and consumer satisfaction.
• Increased productivity and efficiency.
• Decreased training costs and time.
• Lesser support and maintenance costs.
• Reduced documentation and support costs.
• Increased satisfaction, performance and productivity.

For a software product to be successful with the customer, a software
engineer needs to develop the product in such a way that it is easy to
understand, learn and use; human factors play a very important role in the
software life cycle.
A software engineer must always keep in mind the end user who is
going to use the product, make things as simple as possible and
provide the best, while at the same time not being too hard on the user's pocket.
Usability testing deals with the effective designing of a product.

Human-computer Interaction:

The Human-Computer Interaction (HCI) program will play a leading
role in the creation of tomorrow’s exciting new user interface design
software and technology, by supporting the broad spectrum of fundamental
research that will ultimately transform the human-computer interaction
experience so the computer is no longer a distracting focus of attention.

Computer:
A Computer system comprises various elements, each of
which affects the user of the system. Input devices for interactive use,
allowing text entry, drawing and selection from the screen.
➢ Text entry: Traditional keyboard, phone text entry.
➢ Pointing: Mouse, but also touch pads.
Output display devices for interactive use
➢ Different types of screen mostly using same form of bitmap display.
➢ Large displays and situated displays for shared and public use.
Memory:
Short-term memory: RAM.
Long-term memory: magnetic and optical disks; capacity limitations
related to document and video storage.
Processing:
The effects when systems run too slow or too fast; the myth of the
infinitely fast machine.
Limitations of processing speed.
Instead of workstations, computers may be in the form of
embedded computational machines, such as parts of microwave ovens.
Because the techniques for designing these interfaces bear so much
relationship to the techniques for designing workstation interfaces, they
can profitably be treated together. Human-computer interaction, by
contrast, studies both the mechanism side and the human side, but of a
narrower class of devices.
Human:
Humans are limited in their capacity to process information. This
has important implications for design. Information is received and responses
are given via a number of input and output channels:
➢ Visual channel.
➢ Auditory channel
➢ Movement
Information is stored in memory:
➢ Sensory memory.
➢ Short term memory.
➢ Long term memory.
Information is processed and applied via:
➢ Reasoning.
➢ Problem solving.
➢ Error.
Interaction:
The interaction framework describing the communication between the user
and the system has four parts:
1. User
2. Input
3. System
4. Output
Interaction models help us to understand what is going on in the interaction
between user and system. They address the translations between what the user
wants and what the system does.
Human-Computer interaction is concerned with the joint performance of tasks by
humans and machines; the structure of communication between human and
machine, human capabilities to use machines.
The goals of HCI are to produce usable and safe systems as well as functional
systems. In order to produce computer systems with good usability, developers
must attempt to:
➢ Understand the factors that determine how people use technology.
➢ Develop tools and techniques to enable building suitable systems.
➢ Achieve efficient, effective and safe interaction.
➢ Put people first.

HCI arose as a field from intertwined roots in computer graphics,
operating systems, human factors, ergonomics, cognitive
psychology and the systems part of computer science.
A key aim of HCI is to understand how humans interact with
computers, and to represent how knowledge is passed between the
two.
Interaction styles:
Interaction can be seen as a dialogue between the computer and the user.
Some applications have very distinct styles of interaction.
We can identify some common styles.
• Command line interface
• Menus
• Natural language
• Form-fills and spread sheets
• WIMP
Command line interface:
A way of expressing instructions to the computer directly; can be
function keys, single characters or short abbreviations.
➢ Suitable for repetitive tasks.
➢ Better for expert users than novices.
➢ Offers direct access to system functionality.
Menus:
A set of options is displayed on the screen. Because the options are visible, they
demand less recall and rely on recognition, so names should be meaningful.
Selection is made using a mouse, or numeric or alphabetic keys.
A menu system can be
➢ Purely text based, with options presented as numbered choices, or
➢ Graphical, with the menu appearing in a box and choices made either by
typing the initial letter or by moving around with arrow keys.
Form filling interfaces:
➢ Primarily for data entry or data retrieval.
➢ Screen like paper form.
➢ Data put in relevant place.
WIMP interface:
➢ Windows
➢ Icons
➢ Menus
➢ Pointers
Windows: Areas of the screen that behave as if they were independent terminals.
• Can contain text or graphics.
• Can be moved or resized.
• Scroll bars allow the user to move the contents of the window up and down or
from side to side.
• Title bars describe the name of the window.
Icons: Small pictures or images used to represent some object in the interface,
often a window. Windows can be closed down to this small representation,
allowing many windows to be accessible. Icons can be many and varied, from
highly stylized to realistic representations.
Pointers: An important component, since the WIMP style relies on pointing at and
selecting things such as icons and menu items.
➢ Usually achieved with a mouse.
➢ A wide variety of other devices are also used: joystick, trackball, cursor keys
or keyboard shortcuts.
Menus: A choice of operations or services that can be performed is offered on the
screen, and the required option is selected with the pointer.
➢ Problem – menus can take up a lot of screen space.
➢ Solution – use pull-down or pop-up menus.
➢ Pull-down menus are dragged down from a single title at the top of the
screen.
➢ Pop-up menus appear when a particular region of the screen is clicked on.

Interaction devices:
Different tasks, different types of data and different types of users
all require different user interface devices. In most cases, interface devices are
either input or output devices; a touch screen, for example, combines both.
➢ Interface devices correlate to the human senses.
➢ Nowadays, a device is usually designed either for input or for output.
Input devices:
Most commonly, personal computers are equipped with text input and
pointing devices. For text input, the QWERTY keyboard is the standard solution,
but alternatives exist depending on the purpose of the system. At the same time,
the mouse is not the only imaginable pointing device. Alternatives for similar but
slightly different purposes include the touchpad, trackball and joystick.
Output devices:
Output from a personal computer in most cases means output of visual data.
Devices for ‘dynamic visualisation’ include the traditional cathode ray tube
(CRT), liquid crystal display (LCD). Printers are also a very important device for
visual output but are substantially different from screens in that output is static.
The subject of HCI is very rich both in terms of the disciplines it draws from
and in terms of opportunities for research. The study of user interfaces provides a
double-sided approach to understanding how humans and machines interact. By
studying human psychology, we can design better interfaces for people to interact
with computers.
Human- Computer Interface Design:
The overall process for designing a user interface begins with
the creation of different models. The intention of computer interface design is to
learn the ways of designing user-friendly interfaces or interactions.
Interface Design Models:
Four different models come into play when a human-computer
interface (HCI) is to be designed.
The software engineer creates a design model, a human engineer (or the
software engineer) establishes a user model, the end user develops a mental image
that is often called the user’s model or the system perception, and the implementers
of the system create a system image.
Task Analysis and Modelling:
Task analysis and modelling can be applied to understand the tasks that
people currently perform and map these onto a similar set of tasks supported
by the new interface.
For example, assume that a small software company wants to build a
computer-aided design system explicitly for interior designers. By observing a
designer at work, the engineer notices that interior design comprises a
number of activities: furniture layout, fabric and material selection, wall and
window covering selection, presentation, costing and shopping. Each of these
major tasks can be elaborated into subtasks. For example, furniture layout can be
refined into the following tasks:
(1) Draw floor plan based on room dimensions;
(2) Place windows and doors at appropriate locations;
(3) Use furniture templates to draw scaled furniture outlines on floor
plan;
(4) Move furniture outlines to get best placement;
(5) Label all furniture outlines;
(6) Draw dimensions to show location; and
(7) Draw perspective view for customer.

Subtasks 1 to 7 can each be refined further. Subtasks 1 to 6 will be performed by
manipulating information and performing actions within the user interface. On the
other hand, subtask 7 can be performed automatically in software and will result
in little direct user interaction.
Design issues:
As the design of a user interface evolves, four common design issues
almost always surface: system response time, user help facilities, error
information handling, and command labelling.
System response time is the primary complaint for many interactive
systems. In general, system response time is measured from the point at which
the user performs some control action until the software responds with the desired
output or action.
System response time has two important characteristics: length and variability.
If the system response time is too long, user frustration and stress are the
inevitable result.
Variability refers to the deviation from average response time, and in many
ways it is the more important of the two response time characteristics.
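As a small illustration of these two characteristics (the timing values below are invented), the length of the response is the average of the measured samples and the variability is their deviation from that average:

import statistics

# Hypothetical response times in seconds, each measured from the user's
# control action until the software responds with the desired output.
response_times = [0.8, 1.1, 0.9, 3.2, 1.0, 0.95]

length = statistics.mean(response_times)         # average response time
variability = statistics.stdev(response_times)   # deviation from the average

print(f"average response time: {length:.2f} s")
print(f"variability (std dev): {variability:.2f} s")
# A low average with high variability can frustrate users more than a
# slightly higher but predictable response time.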
In many cases, however, modern software provides on-line help facilities
that enable a user to get a question answered or resolve a problem without leaving
the interface.
Two different types of help facilities are encountered: integrated and add
on. An integrated help facility is designed into the software from the beginning.
An add-on help facility is added to the software after the system has been built.
In many ways, it is really an on-line user’s manual with limited query capability.
There is little doubt that the integrated help facility is preferable to the add-on
approach.
A poor error message provides no real indication of what is wrong or where
to look to get additional information. An error message presented in such a manner
does nothing to assuage user anxiety or to help correct the problem. A good error
message should follow these guidelines (illustrated in the sketch after this list):
• The message should describe the problem in jargon that the user can
understand.
• The message should provide constructive advice for recovering from the
error.
• The message should indicate any negative consequences of the error.
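A minimal sketch of applying these guidelines in code (the function, file path and wording are hypothetical): instead of surfacing a raw exception, the handler below describes the problem in user terms, states the consequences, and gives constructive recovery advice:

def save_design(path, data):
    """Save design data, reporting failures in user-level language."""
    try:
        with open(path, "wb") as f:
            f.write(data)
    except OSError as err:
        # Describe the problem in jargon the user understands, indicate the
        # negative consequences, and give constructive advice for recovery.
        print(
            f"The design could not be saved to '{path}' ({err.strerror}).\n"
            "Your changes are still open in the editor and have not been lost.\n"
            "Try saving to a different folder, or check that you have "
            "permission to write to this location."
        )

save_design("/nonexistent-folder/livingroom.dsn", b"...")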
Implementation Tools:
The process of user interface design is iterative. That is, a design model is
implemented as a prototype and modified based on user comments. To
accommodate this iterative design approach, a broad class of interface design and
prototyping tools has evolved, called user interface toolkits. These tools provide
routines or objects that facilitate the creation of windows, menus, device
interaction, error messages, commands, and many other elements of an interactive
environment.
Design Evaluation:
After the preliminary design has been completed, an operational user
interface prototype is created. The prototype is evaluated by the user, who
provides the designer with direct comments about the efficiency of the interface.
In addition, if formal evaluation techniques are used (e.g.,
questionnaires, rating sheets), the designers may extract information from the
collected data (e.g., 80 percent of all users did not like the mechanism for saving
data files).
Design modifications are made based on user input, and the next-
level prototype is created. The evaluation cycle continues until no further
modifications to the interface design are necessary.

Interface design:
Interface design is one of the most important parts of software design. It is crucial
in the sense that user interaction with the system takes place through the various
interfaces provided by the software product.
Think of the days of text-based systems, where the user had to type a command on
the command line to execute a simple task.
Example of a command line interface:
• run prog1.exe /i=2 message=on
The above command line executes the program prog1.exe with an input i=2 and
with messages during execution set to on. Although such a command line interface
gives the user the liberty to run a program with a concise command, it is difficult
for a novice user and is error prone. It also requires the user to remember the
commands and the various details of their options, as shown above. Example of a
menu with options being asked from the user (refer to Figure 3.11).

This simple menu allows the user to execute the program with the available options
presented as a selection, and further provides options for exiting the program and
going back to the previous screen. Although it provides greater flexibility than the
command line option and does not need the user to remember commands, the user
still cannot navigate directly to any desired option from this screen. At best, the
user can go back to the previous screen to select a different option.
Modern graphical user interface provides tools for easy navigation and
interactivity to the user to perform different tasks.
The following are the advantages of a Graphical User Interface (GUI):
• Various information can be displayed, allowing the user to switch to a different
task directly from the present screen.
• Useful graphical icons and pull-down menus reduce the typing effort required
of the user.
• It provides keyboard shortcuts to perform frequently performed tasks.
• It allows simultaneous operation of various tasks without losing the present
context.
Any interface design is targeted to users of different categories.
• Expert user with adequate knowledge of the system and application
• Average user with reasonable knowledge
• Novice user with little or no knowledge.
The following are the elements of good interface design:
• Goal and the intension of task must be identified.
• The important thing about designing interfaces is maintaining consistency. Use
of a consistent color scheme, messages and terminology helps.
• Develop standards for good interface design and stick to them.
• Use icons wherever possible to provide an appropriate message.
• Allow the user to undo the current command. This helps in undoing mistakes
committed by the user.
• Provide context-sensitive help to guide the user.
• Use a proper navigational scheme for easy navigation within the application.
• Discuss with the current users to improve the interface.
• Think from the user’s perspective.
• The text appearing on the screen is the primary source of information exchange
between the user and the system. Avoid using abbreviations. Be very specific in
communicating mistakes to the user. If possible, provide the reason for the error.
• Navigation within the screen is important and is especially useful for data entry
screens where the keyboard is used intensively to input data.
• Use of color should be of secondary importance. Keep in mind that a user may
access the application on a monochrome screen.
• Expect the user to make mistakes and provide appropriate measures to handle
such errors through proper interface design.
• Grouping of data elements is important. Group related data items accordingly.
• Justify the data items.
• Avoid high-density screen layouts. Keep a significant amount of the screen blank.
• Make sure an accidental double click instead of a single click does not do
something unexpected.
• Provide file browser. Do not expect the user to remember the path of the required
file.
• Provide key-board shortcut for frequently done tasks. This saves time.
• Provide on-line manual to help user in operating the software.
• Always allow a way out (i.e., cancellation of action already completed).
• Warn the user about critical tasks, like deletion of a file or updating of critical
information.
• Programmers are not always good interface designers. Take the help of expert
professionals who understand human perception better than programmers.
• Include all possible features in the application, even if a feature is available in
the operating system.
• Word messages carefully in a user-understandable manner.
• Develop the navigational procedure prior to developing the user interface.
Interface standards:
A user interface is the system by which people (users) interact with a machine.
Why do we need standards?
➢ Despite the best efforts of HCI, we are still getting it wrong.
➢ We specify the system behaviour.
➢ We validate our specification.
➢ We test the code and prove the correctness of our system.
➢ It is not just a design issue or a usability testing issue.
History of user interface standards
• In 1965, human factors specialists worked to make user interfaces accurate
and easy to learn.
• In 1985, we realised that usability was not enough; we needed consistency, and
standards became important.
• User interface standards are very effective when you are developing,
testing or designing any new site or application, or when you are revising
a large percentage of the pages in an existing application or site.
Creating a user interface standard helps you to create user interfaces that are
consistent and easy to understand.
Example:
1. Modelling a system which has user-controlled display options.
2. The user can select from one of three choices.
3. The choice determines the size of the current window display.
4. So the designers came up with a schema and presented a first prototype:
Select screen display
FULL
HALF
PANEL

Problem:
➢ User testing shows the system breaks when a user selects more than one
option.
➢ The designer fixes it and presents a second prototype.
➢ But isn’t this the original prototype?
➢ The designer has ‘improved’ it.
➢ The user can now only select one checkbox.
➢ The designer has broken guidelines regarding selection controls.
Guidelines for using selection controls (see the sketch after this list):
➢ Use radio buttons for a set of mutually exclusive options, where exactly one
option must be selected at a time.
➢ Use checkboxes to indicate one or more options that can each be either on or
off, and which are not mutually exclusive.
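A minimal Tkinter sketch of this guideline (the widget layout and option names are purely illustrative): the mutually exclusive screen-display choice from the example above is modelled with radio buttons sharing one variable, while independent on/off settings use checkboxes:

import tkinter as tk

root = tk.Tk()
root.title("Select screen display")

# Mutually exclusive choice: exactly one of FULL / HALF / PANEL.
# Radio buttons share a single variable, so selecting one deselects the others.
display = tk.StringVar(value="FULL")
for choice in ("FULL", "HALF", "PANEL"):
    tk.Radiobutton(root, text=choice, variable=display, value=choice).pack(anchor="w")

# Independent options that are NOT mutually exclusive: checkboxes,
# each backed by its own variable, can be on or off in any combination.
show_toolbar = tk.BooleanVar(value=True)
show_status = tk.BooleanVar(value=False)
tk.Checkbutton(root, text="Show toolbar", variable=show_toolbar).pack(anchor="w")
tk.Checkbutton(root, text="Show status bar", variable=show_status).pack(anchor="w")

root.mainloop()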
Extending the specification:
➢ Design must satisfy our specification.
➢ Design must also satisfy guidelines.
➢ Find a way to specify selection widget guidelines.
➢ Ensure the described property holds in our system.
➢ So, they extend the specification and present a revised prototype.
Types of standards:
There are 3 types of standards:
Methodological standards: This is a checklist to remind developers of the tasks
needed to create usable systems, such as user interviews, task analysis, design, etc.
Design standards: This is a building code – a set of absolute legal requirements
that ensure a consistent look and feel.
Design principles: Good design principles are specific and research-based, and
help developers work well within the design standards and rules.
Building the design standards:
Major activities when building these standards are
➢ Project kick off and planning
• You collaborate with key members of the project team to define the
goals and scope of the user interface standards
• This includes whether the UI document is to be considered a
guideline, standard or style guide, which UI technology it will be
based on and who should participate in its development.
• You work closely with your team and other stakeholders to identify
your key business needs and business flows.
➢ Gather user interface samples
Based on the information and direction received from your team,
you begin by reviewing your major business applications and
extracting examples for the UI standard.
This is an iterative process that takes feedback from as wide
an audience as is appropriate.
➢ Develop user interface document
The document itself includes
• How to change and update the document.
• Common UI elements and when to use them.
• General navigation, graphic look and feel(or style), error handling,
messages.
➢ Review with team
• This is an iterative process that takes feedback from as wide an
audience as is appropriate.
• The standard is reviewed and refined with your team and stakeholders
in a consensus-building process.
➢ Present user interface document
• You present the UI document in electronic or paper form.
Benefits of standards:
1. The goal of UI design is to make the user interaction as simple and efficient as
possible.
2. Your users or customers see a consistent UI within and between applications.
3. Reduced costs for support, user training packages and job aids.
4. Most important, customer satisfaction: your users will experience fewer errors,
reduced training requirements, and less frustration time per transaction.
5. Reduced cost and effort for system maintenance.
UNIT-V
What is Software Quality?
Software Quality shows how good and reliable a product is. To give an
example, consider functionally correct software: it performs all functions as laid
out in the SRS document, but it has an almost unusable user interface. Even though
it may be functionally correct, we do not consider it to be a high-quality product.
Software Quality Assurance (SQA):
Software Quality Assurance (SQA) is simply a way to assure quality in the
software. It is the set of activities that ensure processes, procedures as well as
standards are suitable for the project and implemented correctly.
Software Quality Assurance is a process that works parallel to Software
Development. It focuses on improving the process of development of software so
that problems can be prevented before they become major issues. Software
Quality Assurance is a kind of Umbrella activity that is applied throughout the
software process.
What is quality?
Quality in a product or service can be defined by several measurable
characteristics. Each of these characteristics plays a crucial role in determining
the overall quality.

Software Quality Assurance (SQA) encompasses:
• An SQA process
• Specific quality assurance and quality control tasks (including technical
reviews and a multitiered testing strategy)
• Effective software engineering practice (methods and tools)
• Control of all software work products and the changes made to them
• A procedure to ensure compliance with software development standards
(when applicable)
• Measurement and reporting mechanisms
Elements of Software Quality Assurance (SQA)
1. Standards: The IEEE, ISO, and other standards organizations have
produced a broad array of software engineering standards and related
documents. The job of SQA is to ensure that standards that have been
adopted are followed and that all work products conform to them.
2. Reviews and audits: Technical reviews are a quality control activity
performed by software engineers for software engineers. Their intent is to
uncover errors. Audits are a type of review performed by SQA personnel
(people employed in an organization) with the intent of ensuring that
quality guidelines are being followed for software engineering work.
3. Testing: Software testing is a quality control function that has one primary
goal—to find errors. The job of SQA is to ensure that testing is properly
planned and efficiently conducted so that it achieves this primary goal.
4. Error/defect collection and analysis : SQA collects and analyzes error
and defect data to better understand how errors are introduced and what
software engineering activities are best suited to eliminating them.
5. Change management: SQA ensures that adequate change management
practices have been instituted.
6. Education: Every software organization wants to improve its software
engineering practices. A key contributor to improvement is education of
software engineers, their managers, and other stakeholders. The SQA
organization takes the lead in software process improvement and is a key
proponent and sponsor of educational programs.
7. Security management: SQA ensures that appropriate process and
technology are used to achieve software security.
8. Safety: SQA may be responsible for assessing the impact of software
failure and for initiating those steps required to reduce risk.
9. Risk management : The SQA organization ensures that risk management
activities are properly conducted and that risk-related contingency plans
have been established.
Software Quality Assurance (SQA) focuses on the following:
• Software’s portability: Software’s portability refers to its ability to be
easily transferred or adapted to different environments or platforms without
needing significant modifications. This ensures that the software can run
efficiently across various systems, enhancing its accessibility and
flexibility.
• software’s usability: Usability of software refers to how easy and
intuitive it is for users to interact with and navigate through the application.
A high level of usability ensures that users can effectively accomplish their
tasks with minimal confusion or frustration, leading to a positive user
experience.
• software’s reusability: Reusability in software development involves
designing components or modules that can be reused in multiple parts of
the software or in different projects. This promotes efficiency and reduces
development time by eliminating the need to reinvent the wheel for similar
functionalities, enhancing productivity and maintainability.
• software’s correctness: Correctness of software refers to its ability to
produce the desired results under specific conditions or inputs. Correct
software behaves as expected without errors or unexpected behaviors,
meeting the requirements and specifications defined for its functionality.
• software’s maintainability: Maintainability of software refers to how
easily it can be modified, updated, or extended over time. Well-maintained
software is structured and documented in a way that allows developers to
make changes efficiently without introducing errors or compromising its
stability.
• software’s error control: Error control in software involves
implementing mechanisms to detect, handle, and recover from errors or
unexpected situations gracefully. Effective error control ensures that the
software remains robust and reliable, minimizing disruptions to users and
providing a smoother experience overall.
Software Quality Assurance (SQA) Include
1. A quality management approach.
2. Formal technical reviews.
3. Multi testing strategy.
4. Effective software engineering technology.
5. Measurement and reporting mechanism.
Major Software Quality Assurance (SQA) Activities
1. SQA Management Plan: Make a plan for how you will carry out SQA
throughout the project. Think about which set of software engineering
activities is best for the project, and check the skill level of the SQA team.
2. Set The Check Points: SQA team should set checkpoints. Evaluate the
performance of the project on the basis of collected data on different check
points.
3. Measure Change Impact: The changes made to correct an error sometimes
reintroduce more errors, so keep a measure of the impact of each change on
the project. Re-test the new change to check its compatibility with the whole
project.
4. Multi testing Strategy: Do not depend on a single testing approach. When
you have a lot of testing approaches available use them.
5. Manage Good Relations: In the working environment managing good
relations with other teams involved in the project development is
mandatory. Bad relation of SQA team with programmers team will impact
directly and badly on project. Don’t play politics.
6. Maintaining records and reports: Comprehensively document and share
all QA records, including test cases, defects, changes, and cycles, for
stakeholder awareness and future reference.
7. Reviews software engineering activities: The SQA group identifies and
documents the processes. The group also verifies the correctness of
software product.
8. Formalize deviation handling: Track and document software deviations
meticulously. Follow established procedures for handling variances.
Benefits of Software Quality Assurance (SQA)
1. SQA produces high-quality software.
2. A high-quality application saves time and cost.
3. SQA is beneficial for better reliability.
4. SQA reduces the need for maintenance over a long period.
5. High-quality commercial software increases the market share of the company.
6. Improving the process of creating software.
7. Improves the quality of the software.
8. It cuts maintenance costs. Get the release right the first time, and your
company can forget about it and move on to the next big thing. Release a
product with chronic issues, and your business bogs down in a costly, time-
consuming, never-ending cycle of repairs.
Disadvantage of Software Quality Assurance (SQA)
There are a number of disadvantages of quality assurance.
• Cost: SQA requires the addition of more resources for the betterment of the
product, which increases the budget.
• Time Consuming: Testing and deployment of the project take more time,
which can cause delays in the project.
• Overhead : SQA processes can introduce administrative overhead,
requiring documentation, reporting, and tracking of quality metrics. This
additional administrative burden can sometimes outweigh the benefits,
especially for smaller projects.
• Resource Intensive : SQA requires skilled personnel with expertise in
testing methodologies, tools, and quality assurance practices. Acquiring
and retaining such talent can be challenging and expensive.
• Resistance to Change : Some team members may resist the
implementation of SQA processes, viewing them as bureaucratic or
unnecessary. This resistance can hinder the adoption and effectiveness of
quality assurance practices within an organization.
• Not Foolproof : Despite thorough testing and quality assurance efforts,
software can still contain defects or vulnerabilities. SQA cannot guarantee
the elimination of all bugs or issues in software products.
• Complexity : SQA processes can be complex, especially in large-scale
projects with multiple stakeholders, dependencies, and integration points.
Managing the complexity of quality assurance activities requires careful
planning and coordination.
Goals and Measures of Software Quality Assurance:
Software Quality simply means measuring how well the software is designed (the
quality of design) and how well the software conforms to that design (the quality
of conformance). Software quality describes the degree to which a component of
software meets specified requirements and user or customer needs and
expectations.
Software Quality Assurance (SQA) is a planned and systematic pattern of
activities that are necessary to provide a high degree of confidence regarding
quality of a product. It actually provides or gives a quality assessment of quality
control activities and helps in determining validity of data or procedures for
determining quality. It generally monitors software processes and methods that
are used in a project to ensure or assure and maintain quality of software.
Goals of Software Quality Assurance :
• Quality assurance consists of a set of reporting and auditing functions.
• These functions are useful for assessing and controlling effectiveness and
completeness of quality control activities.
• It ensures management of the data that is important for product quality.
• It also ensures that the software being developed meets and complies with
standard quality assurance requirements.
• It ensures that the end result or product meets and satisfies user and business
requirements.
• It finds or identifies defects or bugs, and reduces the effect of these
defects.
Measures of Software Quality Assurance :
There are various measures of software quality. These are given below:
1. Reliability –
It includes aspects such as availability, accuracy, and recoverability of
system to continue functioning under specific use over a given period of
time. For example, recoverability of system from shut-down failure is a
reliability measure.
2. Performance –
It means to measure throughput of system using system response time,
recovery time, and start up time. It is a type of testing done to measure
performance of system under a heavy workload in terms of
responsiveness and stability.
3. Functionality –
It represents that system is satisfying main functional requirements. It
simply refers to required and specified capabilities of a system.
4. Supportability –
There are a number of other requirements or attributes that software
system must satisfy. These include- testability, adaptability,
maintainability, scalability, and so on. These requirements generally
enhance capability to support software.
5. Usability –
It is capability or degree to which a software system is easy to understand
and used by its specified users or customers to achieve specified goals
with effectiveness, efficiency, and satisfaction. It includes aesthetics,
consistency, documentation, and responsiveness.
Software Quality Assurance (SQA) consists of a set of activities that monitor the
software engineering processes and methods used to ensure quality.
Software Quality Assurance (SQA) Encompasses
1. A quality management approach.
2. Effective software engineering technology (methods and tools).
3. Some formal technical reviews are applied throughout the software
process.
4. A multi-tiered testing strategy.
5. Controlling software documentation and the changes made to it.
6. Procedure to ensure compliance with software development standards
(when applicable).
7. Measurement and reporting mechanisms.
Software Quality
Software quality is defined in different ways but here it means the
conformance to explicitly stated functional and performance
requirements, explicitly documented development standards, and implicit
characteristics that are expected of all professionally developed software.
Following are the quality management system models under which the
software system is created is normally based:
1. CMMI
2. Six Sigma
3. ISO 9000
Note: There may be many other models for quality management, but the
ones mentioned above are the most popular.
Software Quality Assurance (SQA) Activities
Software Quality Assurance is composed of a variety of tasks associated
with two different fields:
1. The software engineers who do technical work.
2. SQA group that has responsibility for quality assurance planning,
oversight, record keeping, analysis, and reporting.
Basically, software engineers address quality (and perform quality
assurance and quality control activities) by applying solid technical
methods and measures, conducting formal technical reviews, and
performing well-planned software testing.
Prepares an SQA Plan for a Project
This type of plan is developed during project planning and is reviewed by
all interested parties. The quality assurance activities performed by the
software engineering team and the SQA group are governed by the plan.
The plan identifies:
• Evaluations to be performed.
• Audits and reviews to be performed.
• Standards that are applicable to the project.
• Procedures for error reporting and tracking.
• All the documents to be produced by the SQA group.
• The total amount of feedback provided to the software project team.
Measuring Software Quality using Quality Metrics:
In Software Engineering, Software Measurement is done based on
some Software Metrics where these software metrics are referred to as the
measure of various characteristics of a Software.
In software engineering, Software Quality Assurance (SQA) assures the quality
of the software. A set of SQA activities is continuously applied throughout
the software process. Software quality is measured based on some software
quality metrics.
There is a number of metrics available based on which software quality is
measured. But among them, there are a few most useful metrics which are
essential in software quality measurement. They are –
1. Code Quality
2. Reliability
3. Performance
4. Usability
5. Correctness
6. Maintainability
7. Integrity
8. Security
Now let’s understand each quality metric in detail –
1. Code Quality – Code quality metrics measure the quality of code used for
software project development. Maintaining the software code quality by writing
Bug-free and semantically correct code is very important for good software
project development. In code quality, both Quantitative metrics like the number
of lines, complexity, functions, rate of bugs generation, etc, and Qualitative
metrics like readability, code clarity, efficiency, and maintainability, etc are
measured.
2. Reliability – Reliability metrics express the reliability of software in different
conditions. They check whether the software is able to provide exact service at
the right time. Reliability can be checked using Mean Time Between Failures
(MTBF) and Mean Time To Repair (MTTR), as illustrated in the sketch after this
list.
3. Performance – Performance metrics are used to measure the performance of
the software. Each software has been developed for some specific purposes.
Performance metrics measure the performance of the software by determining
whether the software is fulfilling the user requirements or not, by analyzing how
much time and resource it is utilizing for providing the service.
4. Usability – Usability metrics check whether the program is user-friendly or
not. Each piece of software is used by an end-user, so it is important to measure
whether the end-user is happy using this software.
5. Correctness – Correctness is one of the important software quality metrics, as
it checks whether the system or software is working correctly, without any
error, and satisfies the user. Correctness gives the degree to which each function
provides its service as designed.
6. Maintainability – Each software product requires maintenance and up-
gradation. Maintenance is an expensive and time-consuming process. So if the
software product provides easy maintainability then we can say software quality
is up to mark. Maintainability metrics include the time required to adapt to new
features/functionality, Mean Time to Change (MTTC), performance in changing
environments, etc.
7. Integrity – Software integrity is important in terms of how easily the software
can be integrated with other required software, which increases its functionality,
and how well integration with unauthorized software is controlled, since such
integration increases the chances of cyberattacks.
8. Security – Security metrics measure how secure the software is. In the age of
cyber terrorism, security is the most essential part of every software. Security
assures that there are no unauthorized changes, no fear of cyber attacks, etc
when the software product is in use by the end-user.
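As a minimal sketch of the reliability metrics mentioned in item 2 above (the failure log below is entirely fictional), MTBF and MTTR can be computed from recorded uptimes and repair durations, and combined into a steady-state availability figure:

# Hypothetical operating hours between successive failures, and the
# corresponding repair durations, for one software release in the field.
uptimes_hours = [120.0, 95.5, 130.0, 110.25]   # time between failures
repair_hours = [2.0, 1.5, 3.0, 2.5]            # time to repair each failure

mtbf = sum(uptimes_hours) / len(uptimes_hours)  # Mean Time Between Failures
mttr = sum(repair_hours) / len(repair_hours)    # Mean Time To Repair

# Steady-state availability: fraction of time the system is operational.
availability = mtbf / (mtbf + mttr)

print(f"MTBF: {mtbf:.1f} h, MTTR: {mttr:.2f} h, availability: {availability:.3%}")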

SOFTWARE RELIABILITY
DEFINITIONS OF SOFTWARE RELIABILITY
Software reliability is defined as the probability of failure-free operation of a
software system for a specified time in a specified environment. The key
elements of the definition include probability of failure-free operation, length of
time of failure-free operation and the given execution environment. Failure
intensity is a measure of the reliability of a software system operating in a given
environment. Example: An air traffic control system fails once in two years.
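Restating that example as simple arithmetic (a sketch only; the figures come from the one-line example above), failure intensity is the number of failures divided by the operating time:

# From the example: one failure observed over two years of operation.
failures = 1
operating_time_years = 2.0

failure_intensity = failures / operating_time_years          # failures per year
mean_time_between_failures = operating_time_years / failures # years per failure

print(f"failure intensity: {failure_intensity} failures/year")            # 0.5
print(f"mean time between failures: {mean_time_between_failures} years")  # 2.0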
Factors Influencing Software Reliability
• A user’s perception of the reliability of a software depends upon two
categories of information.
o The number of faults present in the software.
o The way users operate the system. This is known as the operational
profile.
• The fault count in a system is influenced by the following.
o Size and complexity of code.
o Characteristics of the development process used.
o Education, experience, and training of development personnel.
o Operational environment.
Applications of Software Reliability
The applications of software reliability includes
• Comparison of software engineering technologies.
o What is the cost of adopting a technology?
o What is the return from the technology — in terms of cost and
quality?
• Measuring the progress of system testing –The failure intensity
measure tells us about the present quality of the system: high intensity
means more tests are to be performed.
• Controlling the system in operation –The amount of change to a
software for maintenance affects its reliability.
• Better insight into software development processes – Quantification of
quality gives us a better insight into the development processes.
FUNCTIONAL AND NON-FUNCTIONAL REQUIREMENTS
System functional requirements may specify error checking, recovery features,
and system failure protection. System reliability and availability are specified as
part of the non-functional requirements for the system.
SYSTEM RELIABILITY SPECIFICATION
• Hardware reliability focuses on the probability a hardware component
fails.
• Software reliability focuses on the probability a software component will
produce an incorrect output.
• The software does not wear out and it can continue to operate after a bad
result.
• Operator reliability focuses on the probability that a system user makes
an error.
FAILURE PROBABILITIES
If there are two independent components in a system and the operation of the
system depends on both of them, then the probability of system failure is
P(S) = P(A) + P(B).
If a component is replicated n times, then the probability of failure is
P(S) = P(A)^n, which means the system fails only when all n replicas fail at once.
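
A minimal sketch of how these two formulas can be evaluated; the component
failure probabilities below are invented for the example and assume small,
independent probabilities:

# Hypothetical illustration of the failure-probability formulas above.
p_a = 0.01   # assumed probability that component A fails
p_b = 0.02   # assumed probability that component B fails

# System needs both A and B: approximate failure probability is the sum
# (valid when the individual probabilities are small and independent).
p_system_series = p_a + p_b          # 0.03

# Component A replicated n times: the system fails only if all n replicas fail.
n = 3
p_system_replicated = p_a ** n       # 0.000001

print(p_system_series, p_system_replicated)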
FUNCTIONAL RELIABILITY REQUIREMENTS
• The system will check all operator inputs to see that they fall within their
required ranges.
• The system will check all disks for bad blocks each time it is booted.
• The system must be implemented using a standard implementation of Ada.
NON-FUNCTIONAL RELIABILITY SPECIFICATION
The required level of reliability must be expressed quantitatively. Reliability is
a dynamic system attribute. Source code reliability specifications are
meaningless (e.g. N faults/1000 LOC). An appropriate metric should be chosen
to specify the overall system reliability.
HARDWARE RELIABILITY METRICS
Hardware metrics are not suitable for software since they are based on the
notion of component failure. Software failures are often design failures. Often
the system is available after the failure has occurred. Hardware components can
wear out.
SOFTWARE RELIABILITY METRICS
Reliability metrics are units of measure for system reliability. System reliability
is measured by counting the number of operational failures and relating these to
demands made on the system at the time of failure. A long-term measurement
program is required to assess the reliability of critical systems.
PROBABILITY OF FAILURE ON DEMAND
The probability that the system will fail when a service request is made. It is useful
when requests are made on an intermittent or infrequent basis. It is appropriate
for protection systems where service requests may be rare and consequences
can be serious if service is not delivered. It is relevant for many safety-critical
systems with exception handlers.
RELIABILITY METRICS
• Probability of Failure on Demand (PoFoD)
o PoFoD = 0.001.
o On average, one in every 1000 service requests results in failure.
• Rate of Fault Occurrence (RoCoF)
o RoCoF = 0.02.
o Two failures for each 100 operational time units of operation.
• Mean Time to Failure (MTTF)
o The average time between observed failures (aka MTBF)
o It measures time between observable system failures.
o For stable systems MTTF = 1/RoCoF.
o It is relevant for systems when individual transactions take lots of
processing time (e.g. CAD or WP systems).
• Availability = MTBF / (MTBF+MTTR)
o MTBF = Mean Time Between Failure
o MTTR = Mean Time to Repair
• Reliability = MTBF / (1+MTBF)
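
As a rough illustration of these metrics, the sketch below computes MTBF,
MTTR, RoCoF, availability and the reliability figure quoted above from a
hypothetical failure log (the numbers are invented for the example):

# Hypothetical failure log: operational hours at which failures were observed.
failure_times = [100, 250, 430, 700]     # hours (example data)
repair_times = [2, 3, 2, 5]              # hours to repair each failure (example data)

n_failures = len(failure_times)
total_operating_time = failure_times[-1]  # hours of operation observed

mtbf = total_operating_time / n_failures      # Mean Time Between Failures
mttr = sum(repair_times) / n_failures         # Mean Time To Repair
rocof = n_failures / total_operating_time     # Rate of Fault Occurrence
availability = mtbf / (mtbf + mttr)           # fraction of time the system is usable
reliability = mtbf / (1 + mtbf)               # formula quoted above

print(f"MTBF={mtbf:.1f}h MTTR={mttr:.1f}h RoCoF={rocof:.4f}/h "
      f"Availability={availability:.3f} Reliability={reliability:.3f}")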
TIME UNITS
Time units include:
• Raw Execution Time is employed in non-stop systems
• Calendar Time is employed when the system has regular usage patterns
• Number of Transactions is employed for demand type transaction
systems
AVAILABILITY
Availability measures the fraction of time the system is really available for use. It
takes repair and restart times into account. It is relevant for non-stop
continuously running systems (e.g. traffic signal).
FAILURE CONSEQUENCES – STUDY 1
Reliability does not take consequences into account. Transient faults have no
real consequences but other faults might cause data loss or corruption. Hence it
may be worthwhile to identify different classes of failure, and use different
metrics for each.
FAILURE CONSEQUENCES – STUDY 2
When specifying reliability both the number of failures and the consequences
of each matter. Failures with serious consequences are more damaging than
those where repair and recovery is straightforward. In some cases, different
reliability specifications may be defined for different failure types.
FAILURE CLASSIFICATION
Failure can be classified as the following
• Transient – only occurs with certain inputs.
• Permanent – occurs on all inputs.
• Recoverable – system can recover without operator help.
• Unrecoverable – operator has to help.
• Non-corrupting – failure does not corrupt system state or data.
• Corrupting – system state or data are altered.
BUILDING RELIABILITY SPECIFICATION
Building a reliability specification involves analysing the consequences of
possible system failures for each sub-system. From the system failure analysis,
partition the failures into appropriate classes. For each class, set out the
appropriate reliability metric.
SPECIFICATION VALIDATION
It is impossible to empirically validate high reliability specifications. For
example, a requirement of "no database corruption" may really mean a PoFoD of
less than 1 in 200 million. If each transaction takes 1 second to verify,
simulating one day's transactions takes about 3.5 days.
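
The 3.5-day figure can be checked with a back-of-the-envelope calculation; the
daily transaction volume below is an assumption chosen to match the quoted
result:

# Back-of-the-envelope check of the simulation-time claim above.
transactions_per_day = 300_000      # assumed daily transaction volume
seconds_per_transaction = 1         # 1 second to verify each transaction

total_seconds = transactions_per_day * seconds_per_transaction
days_needed = total_seconds / (24 * 60 * 60)
print(round(days_needed, 1))        # about 3.5 days of simulation per day of traffic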
Software testing:
Software testing is an important process in the software development
lifecycle. It involves verifying and validating that a software application is
free of bugs, meets the technical requirements set by its design and
development, and satisfies user requirements efficiently and effectively.
This process ensures that the application can handle all exceptional and
boundary cases, providing a robust and reliable user experience. By
systematically identifying and fixing issues, software testing helps deliver high-
quality software that performs as expected in various scenarios.
Software Testing is a method to assess the functionality of the software
program. The process checks whether the actual software matches the expected
requirements and ensures the software is bug-free. The purpose of software
testing is to identify the errors, faults, or missing requirements in contrast to
actual requirements. It mainly aims at measuring the specification, functionality,
and performance of a software program or application.
Software testing can be divided into two steps
1. Verification: It refers to the set of tasks that ensure that the software
correctly implements a specific function. It means “Are we building the
product right?”.
2. Validation: It refers to a different set of tasks that ensure that the
software that has been built is traceable to customer requirements. It
means “Are we building the right product?”.
Different Types Of Software Testing
Software Testing can be broadly classified into 3 types:

1. Functional testing: It is a type of software testing that validates the
software systems against the functional requirements. It is performed to
check whether the application is working as per the software's functional
requirements or not. Various types of functional testing are Unit testing,
Integration testing, System testing, Smoke testing, and so on.

2. Non-functional testing: It is a type of software testing that checks the
application for non-functional requirements like performance, scalability,
portability, stress, etc. Various types of non-functional testing are
Performance testing, Stress testing, Usability Testing, and so on.

3. Maintenance testing: It is the process of changing, modifying, and
updating the software to keep up with the customer's needs. It involves
regression testing that verifies that recent changes to the code have not
adversely affected other previously working parts of the software.

Apart from the above classification, software testing can be further divided into
2 more ways of testing:

1. Manual testing: It includes testing software manually, i.e., without
using any automation tool or script. In this type, the tester takes over the
role of an end-user and tests the software to identify any unexpected
behaviour or bug. There are different stages for manual testing such as
unit testing, integration testing, system testing, and user acceptance
testing. Testers use test plans, test cases, or test scenarios to test
software to ensure the completeness of testing. Manual testing also
includes exploratory testing, as testers explore the software to identify
errors in it.

2. Automation testing: It is also known as Test Automation; the tester
writes scripts and uses other software to test the product. This
process involves the automation of a manual process. Automation
testing is used to re-run quickly and repeatedly the test scenarios that
were performed manually in manual testing.

Apart from regression testing, automation testing is also used to test the
application from a load, performance, and stress point of view. It increases the
test coverage, improves accuracy, and saves time and money when compared
to manual testing.
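
As a minimal illustration (not tied to any specific tool), the sketch below
shows an automated regression check written with Python's built-in unittest
module, so it can be re-run quickly and repeatedly; the function under test is
hypothetical:

import unittest

# Hypothetical function under test.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

class RegressionTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(99.5, 0), 99.5)

if __name__ == "__main__":
    unittest.main()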

Different Types of Software Testing Techniques

Software testing techniques can be majorly classified into two categories:

1. Black box Testing: Testing in which the tester doesn't have access to
the source code of the software, and which is conducted at the software
interface without any concern for the internal logical structure of the
software, is known as black-box testing.

2. White box Testing : Testing in which the tester is aware of the internal
workings of the product, has access to its source code, and is conducted
by making sure that all internal operations are performed according to
the specifications is known as white box testing.

3. Grey Box Testing : Testing in which the testers should have knowledge
of implementation, however, they need not be experts.

Different Levels of Software Testing

Software level testing can be majorly classified into 4 levels:

1. Unit testing: It is a level of the software testing process where individual
units/components of a software/system are tested. The purpose is to
validate that each unit of the software performs as designed.

2. Integration testing: It is a level of the software testing process where
individual units are combined and tested as a group. The purpose of this
level of testing is to expose faults in the interaction between integrated
units.

3. System testing: It is a level of the software testing process where a
complete, integrated system/software is tested. The purpose of this test
is to evaluate the system's compliance with the specified requirements.

4. Acceptance testing: It is a level of the software testing process where a
system is tested for acceptability. The purpose of this test is to evaluate
the system's compliance with the business requirements and assess
whether it is acceptable for delivery.

Benefits of Software Testing

• Product quality: Testing ensures the delivery of a high-quality product
as the errors are discovered and fixed early in the development cycle.

• Customer satisfaction: Software testing aims to detect the errors or
vulnerabilities in the software early in the development phase so that the
detected bugs can be fixed before the delivery of the product. Usability
testing is a type of software testing that checks how easy the application
is for the users to use.

• Cost-effective: Testing any project on time helps to save money and
time in the long term. If the bugs are caught in the early phases of
software testing, it costs less to fix those errors.

• Security: Security testing is a type of software testing that is focused on
testing the application for security vulnerabilities from internal or
external sources.

Path Testing:
Path Testing is a method that is used to design the test cases. In the
path testing method, the control flow graph of a program is designed to
find a set of linearly independent paths of execution. In this method,
Cyclomatic Complexity is used to determine the number of linearly
independent paths and then test cases are generated for each path.
It gives complete branch coverage but achieves that without covering all
possible paths of the control flow graph. McCabe’s Cyclomatic
Complexity is used in path testing. It is a structural testing method that
uses the source code of a program to find every possible executable
path.

• Control Flow Graph: Draw the corresponding control flow graph of the
program in which all the executable paths are to be discovered.

• Cyclomatic Complexity: After the generation of the control flow graph,
calculate the cyclomatic complexity of the program using the formula
V(G) = E - N + 2, where E is the number of edges and N is the number of
nodes of the control flow graph (equivalently, V(G) = P + 1, where P is the
number of predicate nodes).

• Make Set: Make a set of all the paths according to the control flow graph and
the calculated cyclomatic complexity. The cardinality of the set is equal to
the calculated cyclomatic complexity.

• Create Test Cases: Create a test case for each path of the set obtained in
the above step. (A short worked example follows.)
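
As an illustrative (hypothetical) example, consider the small function below.
Its control flow graph has two predicate nodes, so V(G) = 2 + 1 = 3, and three
linearly independent paths need test cases:

# Hypothetical function used to illustrate basis path testing.
def classify(x):
    if x < 0:            # decision 1
        result = "negative"
    else:
        result = "non-negative"
    if x % 2 == 0:       # decision 2
        result += ", even"
    return result

# Cyclomatic complexity V(G) = P + 1 = 2 + 1 = 3, so three independent paths.
# One test case per path of the basis set:
assert classify(-2) == "negative, even"
assert classify(3) == "non-negative"
assert classify(-3) == "negative"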

Path Testing Techniques

• Control Flow Graph: The program is converted into a control flow graph by
representing the code as nodes and edges.

• Decision to Decision path: The control flow graph can be broken into various
Decision to Decision paths and then collapsed into individual nodes.

• Independent paths: An independent path is a path through a Decision to
Decision path graph that cannot be reproduced from other paths by other
methods.

Advantages of Path Testing

1. The path testing method reduces redundant tests.
2. Path testing focuses on the logic of the program.
3. Path testing is used in test case design.

Disadvantages of Path Testing

1. A tester needs to have a good understanding of programming or of the
code to execute the tests.
2. The number of test cases increases as the code complexity increases.
3. It will be difficult to create test paths if the application has highly
complex code.
4. Some test paths may skip some of the conditions in the code; it
may not cover some conditions or scenarios if there is an error in
the specific paths.

Control structure testing:

Control structure testing is used to increase the coverage area by testing
various control structures present in the program. The different types of testing
performed under control structure testing are as follows:

1. Condition Testing
2. Data Flow Testing
3. Loop Testing

1. Condition Testing: Condition testing is a test case design method which
ensures that the logical conditions and decision statements are free from errors.
The errors present in logical conditions can be incorrect Boolean operators,
missing parentheses in a Boolean expression, errors in relational operators,
errors in arithmetic expressions, and so on. The common types of logical
conditions that are tested using condition testing are listed below (a short
example follows the list):

1. A relational expression, like E1 op E2, where 'E1' and 'E2' are arithmetic
expressions and 'op' is a relational operator.
2. A simple condition, like any relational expression preceded by a NOT
(~) operator. For example, (~E1), where 'E1' is an arithmetic expression
and '~' denotes the NOT operator.
3. A compound condition consists of two or more simple conditions,
Boolean operators, and parentheses. For example, (E1 & E2)|(E2 & E3),
where E1, E2, E3 denote arithmetic expressions and '&' and '|' denote the
AND and OR operators.
4. A Boolean expression consists of operands and Boolean operators like
AND, OR, NOT. For example, 'A|B' is a Boolean expression where
'A' and 'B' denote operands and | denotes the OR operator.
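
A minimal sketch of condition testing for a compound condition; the function
and its values are hypothetical, and the test cases are chosen so that each
simple condition is evaluated both true and false:

# Hypothetical compound condition: (E1 and E2) or E3
def grant_discount(age, is_member, coupon_value):
    return (age >= 60 and is_member) or coupon_value > 100

# Condition-testing cases: each simple condition takes both truth values.
assert grant_discount(65, True, 0) is True     # E1=T, E2=T, E3=F
assert grant_discount(65, False, 0) is False   # E1=T, E2=F, E3=F
assert grant_discount(30, True, 0) is False    # E1=F, E2=T, E3=F
assert grant_discount(30, False, 150) is True  # E1=F, E2=F, E3=T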

2. Data Flow Testing: The data flow testing method chooses the test paths of a
program based on the locations of the definitions and uses of all the
variables in the program. The data flow testing approach assumes that each
statement in a program is assigned a unique statement number and that each
function cannot modify its parameters or global variables. For a statement
with S as its statement number:

DEF (S) = {X | Statement S contains a definition of X}
USE (S) = {X | Statement S contains a use of X}

If statement S is an if or loop statement, then its DEF set is empty and its USE
set depends on the condition of statement S. The definition of variable X at
statement S is said to be live at statement S' if there exists a path from
statement S to statement S' that contains no other definition of X. A
definition-use (DU) chain of variable X has the form [X, S, S'], where S and S'
denote statement numbers, X is in DEF(S) and USE(S'), and the definition of X in
statement S is live at statement S'. A simple data flow testing approach requires
that each DU chain be covered at least once. This approach is known as the DU
testing approach. DU testing does not ensure coverage of all branches of a
program; however, a branch is not guaranteed to be covered by DU testing only in
rare cases, such as an if-then-else construct in which the 'then' part has no
definition of any variable and the 'else' part is absent. Data flow testing
strategies are appropriate for choosing test paths of a program containing
nested if and loop statements.
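
A small hypothetical illustration of DEF and USE sets and DU chains, with
statement numbers shown as comments:

# Hypothetical fragment annotated with statement numbers.
def running_total(values):
    total = 0                 # S1: DEF(S1) = {total}
    for v in values:          # S2: DEF(S2) = {v},     USE(S2) = {values}
        total = total + v     # S3: DEF(S3) = {total}, USE(S3) = {total, v}
    return total              # S4: USE(S4) = {total}

# Example DU chains for variable 'total':
#   [total, S1, S3] - the definition at S1 reaches the use at S3 with no
#                     intervening redefinition on the path S1 -> S2 -> S3
#   [total, S3, S4] - the definition at S3 reaches the use at S4
# DU testing requires test paths covering each such chain at least once,
# e.g. running_total([]) and running_total([1, 2]).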

3. Loop Testing: Loop testing is actually a white box testing technique. It
specifically focuses on the validity of loop constructs. The following are the
types of loops:

1. Simple Loops – The following set of tests can be applied to simple loops,
where n is the maximum allowable number of passes through the loop:
1. Skip the entire loop.
2. Traverse the loop only once.
3. Traverse the loop two times.
4. Make p passes through the loop, where p < n.
5. Traverse the loop n-1, n, n+1 times.
2. Concatenated Loops – If the loops are not dependent on each other,
concatenated loops can be tested using the approach used for simple loops. If
the loops are interdependent, the steps for nested loops are followed.
3. Nested Loops – Loops within loops are called nested loops. When
testing nested loops, the number of tests increases as the level of nesting
increases. The steps for testing nested loops are as follows:
1. Start with the innermost loop; set all other loops to minimum values.
2. Conduct simple loop testing on the innermost loop.
3. Work outwards.
4. Continue until all loops have been tested.
4. Unstructured Loops – This type of loop should be redesigned, whenever
possible, to reflect the use of structured programming constructs.
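
A brief sketch of simple-loop testing; the function and the bound n are
hypothetical, and each call exercises a different number of passes through the
loop:

# Hypothetical function with a simple loop bounded by n = len(items).
def sum_first(items, n):
    total = 0
    for i in range(min(n, len(items))):
        total += items[i]
    return total

data = [1, 2, 3, 4, 5]                    # n = 5 for this example

assert sum_first(data, 0) == 0            # skip the loop entirely
assert sum_first(data, 1) == 1            # one pass
assert sum_first(data, 2) == 3            # two passes
assert sum_first(data, 3) == 6            # p passes, p < n
assert sum_first(data, 4) == 10           # n - 1 passes
assert sum_first(data, 5) == 15           # n passes
assert sum_first(data, 6) == 15           # attempt n + 1 passes (bounded by the data)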

Black Box Testing:


Black Box Testing is an important part of making sure software works
as it should. Instead of peeking into the code, testers check how the software
behaves from the outside, just like users would. This helps catch any issues or
bugs that might affect how the software works.
This simple guide gives you an overview of what Black Box Testing is all
about and why it matters in software development.
Black-box testing is a type of software testing in which the tester is not
concerned with the software’s internal knowledge or implementation details
but rather focuses on validating the functionality based on the provided
specifications or requirements.

Types Of Black Box Testing

The following are the several categories of black box testing:

1. Functional Testing

2. Regression Testing

3. Nonfunctional Testing (NFT)


Functional Testing

• Functional testing is defined as a type of testing that verifies that each
function of the software application works in conformance with the
requirement and specification.

• This testing is not concerned with the source code of the application.
Each functionality of the software application is tested by providing
appropriate test input, expecting the output, and comparing the actual
output with the expected output.

• This testing focuses on checking the user interface, APIs, database,
security, client or server application, and functionality of the
Application Under Test. Functional testing can be manual or
automated. It determines the system's software functional requirements.

Regression Testing

• Regression Testing is the process of testing the modified parts of the
code and the parts that might get affected due to the modifications to
ensure that no new errors have been introduced in the software after the
modifications have been made.

• Regression means the return of something, and in the software field it
refers to the return of a bug. It ensures that the newly added code is
compatible with the existing code.

• In other words, it ensures that a new software update has no impact on the
existing functionality of the software. This is carried out after system
maintenance operations and upgrades.

Nonfunctional Testing

• Non-functional testing is a software testing technique that checks the
non-functional attributes of the system.

• Non-functional testing is defined as a type of software testing to check
non-functional aspects of a software application.

• It is designed to test the readiness of a system as per nonfunctional
parameters which are never addressed by functional testing.

• Non-functional testing is as important as functional testing.

• Non-functional testing is also known as NFT. This testing is not
functional testing of software. It focuses on the software's performance,
usability, and scalability.

Advantages of Black Box Testing

• The tester does not need to have detailed functional knowledge or
programming skills to implement Black Box Testing.

• It is efficient for implementing the tests in larger systems.

• Tests are executed from the user's or client's point of view.

• Test cases are easily reproducible.

• It is used to find ambiguities and contradictions in the functional
specifications.

Disadvantages of Black Box Testing

• There is a possibility of repeating the same tests while implementing the
testing process.

• Without clear functional specifications, test cases are difficult to
implement.

• It is difficult to execute the test cases because of complex inputs at
different stages of testing.

• Sometimes, the reason for the test failure cannot be detected.

• Some programs in the application are not tested.

• It does not reveal errors in the control structure.

• Working with a large sample space of inputs can be exhaustive and
consumes a lot of time.

Ways of Black Box Testing

1. Syntax-Driven Testing – This type of testing is applied to systems that can
be syntactically represented by some language. For example, a language can be
represented by a context-free grammar. In this, the test cases are generated so
that each grammar rule is used at least once.

2. Equivalence partitioning – It is often seen that many types of inputs work
similarly, so instead of giving all of them separately we can group them and
test only one input of each group. The idea is to partition the input domain of
the system into several equivalence classes such that each member of a class
works similarly, i.e., if a test case in one class results in some error, other
members of the class would also result in the same error.

The technique involves two steps:

1. Identification of equivalence classes – Partition any input domain into a
minimum of two sets: valid values and invalid values. For example, if
the valid range is 0 to 100, then select one valid input like 49 and one
invalid input like 104.

2. Generating test cases – (i) Assign a unique identification number to each
valid and invalid class of input. (ii) Write test cases covering all
valid and invalid classes, considering that no two invalid inputs mask
each other. For example, to calculate the square root of a number, the
equivalence classes will be:

(a) Valid inputs:

• A whole number which is a perfect square – the output will be an
integer.

• A whole number which is not a perfect square – the output will be a
decimal number.

• Positive decimals.

(b) Invalid inputs:

• Negative numbers (integer or decimal).

• Characters other than numbers, like "a", "!", ";", etc.


3. Boundary value analysis – Boundaries are very good places for errors to
occur. Hence, if test cases are designed for boundary values of the input
domain, then the efficiency of testing improves and the probability of finding
errors also increases. For example, if the valid range is 10 to 100, then test
for 10 and 100 as well, apart from other valid and invalid inputs.
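
A minimal sketch combining equivalence partitioning and boundary value
analysis for a hypothetical validator that accepts values from 10 to 100:

# Hypothetical validator: accepts integers in the range 10..100.
def is_valid_quantity(q):
    return 10 <= q <= 100

# Equivalence partitioning: one representative per class.
assert is_valid_quantity(49) is True      # valid class
assert is_valid_quantity(104) is False    # invalid class (above range)
assert is_valid_quantity(3) is False      # invalid class (below range)

# Boundary value analysis: test on and around the boundaries.
assert is_valid_quantity(9) is False
assert is_valid_quantity(10) is True
assert is_valid_quantity(11) is True
assert is_valid_quantity(99) is True
assert is_valid_quantity(100) is True
assert is_valid_quantity(101) is False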

4. Cause effect graphing – This technique establishes a relationship between
logical inputs, called causes, and the corresponding actions, called effects.
The causes and effects are represented using Boolean graphs. The following
steps are followed:

1. Identify inputs (causes) and outputs (effects).
2. Develop a cause-effect graph.
3. Transform the graph into a decision table.
4. Convert the decision table rules to test cases.

For example, in a cause-effect graph whose decision table contains four rules
(graph not reproduced here), each column of the decision table corresponds to a
rule which becomes a test case, so there will be 4 test cases.
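
A small hypothetical illustration of turning a decision table into test cases;
the causes, effect and rules below are invented for the example:

# Hypothetical causes: C1 = valid user id, C2 = valid password.
# Effect: E1 = login allowed. Decision table (each row is a rule):
decision_table = [
    # (C1,    C2,    expected E1)
    (True,  True,  True),    # rule 1
    (True,  False, False),   # rule 2
    (False, True,  False),   # rule 3
    (False, False, False),   # rule 4
]

def login_allowed(valid_id, valid_password):
    return valid_id and valid_password

# Each rule of the decision table becomes one test case (4 test cases here).
for valid_id, valid_password, expected in decision_table:
    assert login_allowed(valid_id, valid_password) == expected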

5. Requirement-based testing – It includes validating the requirements given
in the SRS of a software system.

6. Compatibility testing – The test case results depend not only on the
product but also on the infrastructure used for delivering the functionality.
When the infrastructure parameters are changed, the software is still expected
to work properly. Some parameters that generally affect the compatibility of
software are:

1. Processor (Pentium 3, Pentium 4, etc.) and the number of processors.
2. Architecture and characteristics of the machine (32-bit or 64-bit).
3. Back-end components such as database servers.
4. Operating System (Windows, Linux, etc.).

Tools Used for Black Box Testing:

1. Appium

2. Selenium

3. Microsoft Coded UI

4. Applitools

5. HP QTP.

What can be identified by Black Box Testing

1. Discovers missing functions, incorrect function & interface errors

2. Discover the errors faced in accessing the database

3. Discovers the errors that occur while initiating & terminating any
functions.

4. Discovers the errors in performance or behaviour of software.

Features of black box testing

1. Independent testing: Black box testing is performed by testers who are
not involved in the development of the application, which helps to
ensure that testing is unbiased and impartial.

2. Testing from a user's perspective: Black box testing is conducted
from the perspective of an end user, which helps to ensure that the
application meets user requirements and is easy to use.

3. No knowledge of internal code: Testers performing black box testing
do not have access to the application's internal code, which allows them
to focus on testing the application's external behaviour and
functionality.

4. Requirements-based testing: Black box testing is typically based on
the application's requirements, which helps to ensure that the
application meets the required specifications.

5. Different testing techniques: Black box testing can be performed using
various testing techniques, such as functional testing, usability testing,
acceptance testing, and regression testing.

6. Easy to automate: Black box testing is easy to automate using various
automation tools, which helps to reduce the overall testing time and
effort.

7. Scalability: Black box testing can be scaled up or down depending on
the size and complexity of the application being tested.

8. Limited knowledge of application: Testers performing black box
testing have limited knowledge of the application being tested, which
helps to ensure that testing is more representative of how the end users
will interact with the application.

Integration testing:

Integration testing is the process of testing the interface between two
software units or modules. It focuses on determining the correctness of the
interface. The purpose of integration testing is to expose faults in the
interaction between integrated units. Once all the modules have been unit-
tested, integration testing is performed.

What is Integration Testing?


Integration testing is a software testing technique that focuses on verifying the
interactions and data exchange between different components or modules of a
software application. The goal of integration testing is to identify any
problems or bugs that arise when different components are combined and
interact with each other. Integration testing is typically performed after unit
testing and before system testing. It helps to identify and resolve integration
issues early in the development cycle, reducing the risk of more severe and
costly problems later on.

Integration testing is one of the basic type of software testing and there are
many other basic and advance software testing. If you are interested in
learning all the testing concept and other more advance concept in the field of
the software testing

• Integration testing can be done by picking modules one by one, so that a
proper sequence is followed.

• Also, if you don't want to miss out on any integration scenario, then
you have to follow the proper sequence.

• Exposing the defects that arise at the time of interaction between the
integrated units is the major focus of integration testing.

Why is Integration Testing Important?

Integration testing is important because it verifies that individual
software modules or components work together correctly as a whole system.
This ensures that the integrated software functions as intended and helps
identify any compatibility or communication issues between different parts of
the system. By detecting and resolving integration problems early, integration
testing contributes to the overall reliability, performance, and quality of the
software product.
Integration test approaches
There are four types of integration testing approaches. Those approaches are
the following:

1. Big-Bang Integration Testing

• It is the simplest integration testing approach, where all the modules are
combined and the functionality is verified after the completion of
individual module testing.

• In simple words, all the modules of the system are simply put together
and tested.

• This approach is practicable only for very small systems. If an error is
found during the integration testing, it is very difficult to localize the
error, as the error may potentially belong to any of the modules being
integrated.

• So, debugging errors reported during Big-Bang integration testing is
very expensive to fix.

• Big-bang integration testing is a software testing approach in which all
components or modules of a software application are combined and
tested at once.

• This approach is typically used when the software components have a
low degree of interdependence or when there are constraints in the
development environment that prevent testing individual components.

• The goal of big-bang integration testing is to verify the overall
functionality of the system and to identify any integration problems that
arise when the components are combined.

• While big-bang integration testing can be useful in some situations, it
can also be a high-risk approach, as the complexity of the system and
the number of interactions between components can make it difficult to
identify and diagnose problems.

Advantages of Big-Bang Integration Testing

• It is convenient for small systems.

• Simple and straightforward approach.

• Can be completed quickly.

• Does not require a lot of planning or coordination.

• May be suitable for small systems or projects with a low degree of
interdependence between components.

Disadvantages of Big-Bang Integration Testing

• There will be quite a lot of delay because you would have to wait for all
the modules to be integrated.

• High-risk critical modules are not isolated and tested on priority, since
all modules are tested at once.

• Not good for long projects.

• High risk of integration problems that are difficult to identify and
diagnose. This can result in long and complex debugging and
troubleshooting efforts, and can lead to system downtime and increased
development costs.

• May not provide enough visibility into the interactions and data
exchange between components. This can result in a lack of confidence in
the system's stability and reliability, decreased efficiency and
productivity, and a lack of confidence in the development team.

• This can lead to system failure and decreased user satisfaction.

2. Bottom-Up Integration Testing

In bottom-up testing, each module at the lower levels is tested with higher
modules until all modules are tested. The primary purpose of this integration
testing is that each subsystem tests the interfaces among the various modules
making up the subsystem. This integration testing uses test drivers to drive and
pass appropriate data to the lower-level modules. (A minimal driver sketch is
shown below.)
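
A minimal sketch of a test driver used in bottom-up integration; the lower-level
module and its expected behaviour are hypothetical:

# Lower-level module under test (hypothetical).
def compute_tax(amount, rate):
    return round(amount * rate, 2)

# Test driver: stands in for the not-yet-integrated higher-level module,
# calling the lower-level module with representative data and checking results.
def driver_for_compute_tax():
    cases = [(100.0, 0.18, 18.0), (0.0, 0.18, 0.0), (200.0, 0.10, 20.0)]
    for amount, rate, expected in cases:
        assert compute_tax(amount, rate) == expected

driver_for_compute_tax()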

Advantages of Bottom-Up Integration Testing

• In bottom-up testing, no stubs are required.

• A principal advantage of this integration testing is that several disjoint
subsystems can be tested simultaneously.

• It is easy to create the test conditions.

• It is best for applications that use a bottom-up design approach.

• It is easy to observe the test results.

Disadvantages of Bottom-Up Integration Testing

• Driver modules must be produced.

• Complexity arises when the system is made up of a large number of
small subsystems.

• Until the higher-level modules are built and integrated, no working model
of the system can be demonstrated.

3. Top-Down Integration Testing

In the top-down integration testing technique, stubs are used to simulate the
behaviour of the lower-level modules that are not yet integrated. In this
integration testing, testing takes place from top to bottom. First, high-level
modules are tested, then low-level modules, and finally the low-level modules
are integrated with the high-level ones to ensure the system is working as
intended. (A minimal stub sketch is shown below.)
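
A minimal sketch of a stub used in top-down integration; the higher-level
module is exercised while the lower-level module is replaced by a stub
returning canned data (all names are hypothetical):

# Stub standing in for a lower-level module that is not yet integrated.
def fetch_exchange_rate_stub(currency):
    # Returns canned data instead of calling the real service.
    return {"USD": 1.0, "EUR": 0.9}.get(currency, 1.0)

# Higher-level module under test; the collaborator is injected so the stub
# can replace the real implementation during top-down integration testing.
def convert(amount, currency, rate_provider=fetch_exchange_rate_stub):
    return amount * rate_provider(currency)

assert round(convert(10, "EUR"), 2) == 9.0
assert round(convert(10, "USD"), 2) == 10.0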
Advantages of Top-Down Integration Testing

• Separately debugged modules.

• Few or no drivers are needed.

• It is more stable and accurate at the aggregate level.

• Easier isolation of interface errors.

• Design defects can be found in the early stages.

Disadvantages of Top-Down Integration Testing

• Needs many stubs.

• Modules at the lower levels are tested inadequately.

• It is difficult to observe the test output.

• It is difficult to design the stubs.

4. Mixed Integration Testing

Mixed integration testing is also called sandwiched integration testing. It
follows a combination of the top-down and bottom-up testing approaches. In the
top-down approach, testing can start only after the top-level modules have been
coded and unit tested. In the bottom-up approach, testing can start only after
the bottom-level modules are ready. This sandwich or mixed approach overcomes
this shortcoming of the top-down and bottom-up approaches. It is also called
hybrid integration testing. Both stubs and drivers are used in mixed
integration testing.

Advantages of Mixed Integration Testing

• The mixed approach is useful for very large projects having several sub-
projects.

• This sandwich approach overcomes the shortcomings of the top-down
and bottom-up approaches.

• Parallel tests can be performed in the top and bottom layers.

Disadvantages of Mixed Integration Testing

• Mixed integration testing requires a very high cost because one part
follows a top-down approach while another part follows a bottom-up
approach.

• This integration testing is not suitable for smaller systems with huge
interdependence between the different modules.

Applications of Integration Testing

1. Identify the components: Identify the individual components of your
application that need to be integrated. This could include the frontend,
backend, database, and any third-party services.

2. Create a test plan: Develop a test plan that outlines the scenarios and
test cases that need to be executed to validate the integration points
between the different components. This could include testing data flow,
communication protocols, and error handling.

3. Set up the test environment: Set up a test environment that mirrors the
production environment as closely as possible. This will help ensure that
the results of your integration tests are accurate and reliable.

4. Execute the tests: Execute the tests outlined in your test plan, starting
with the most critical and complex scenarios. Be sure to log any defects
or issues that you encounter during testing.

5. Analyze the results: Analyze the results of your integration tests to
identify any defects or issues that need to be addressed. This may
involve working with developers to fix bugs or make changes to the
application architecture.

6. Repeat testing: Once defects have been fixed, repeat the integration
testing process to ensure that the changes have been successful and that
the application still works as expected.

Test Cases For Integration Testing

• Interface Testing: Verify that data exchange between modules occurs
correctly. Validate input/output parameters and formats. Ensure proper
error handling and exception propagation between modules. (A minimal
sketch of such a test is given after this list.)

• Functional Flow Testing: Test end-to-end functionality by simulating
user interactions. Verify that user inputs are processed correctly and
produce expected outputs. Ensure seamless flow of data and control
between modules.

• Data Integration Testing: Validate data integrity and consistency
across different modules. Test data transformation and conversion
between formats. Verify proper handling of edge cases and boundary
conditions.

• Dependency Testing: Test interactions between dependent modules.
Verify that changes in one module do not adversely affect others.
Ensure proper synchronization and communication between modules.

• Error Handling Testing: Validate error detection and reporting
mechanisms. Test error recovery and fault tolerance capabilities. Ensure
that error messages are clear and informative.

• Performance Testing: Measure system performance under integrated
conditions. Test response times, throughput, and resource utilization.
Verify scalability and concurrency handling between modules.

• Security Testing: Test access controls and permissions between
integrated modules. Verify encryption and data protection mechanisms.
Ensure compliance with security standards and regulations.

• Compatibility Testing: Test compatibility with external systems, APIs,
and third-party components. Validate interoperability and data exchange
protocols. Ensure seamless integration with different platforms and
environments.
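
As mentioned under Interface Testing above, a minimal sketch of an
interface-level integration test; the two modules and their data contract are
hypothetical:

# Module A (hypothetical): produces an order record.
def create_order(item, quantity):
    return {"item": item, "quantity": quantity, "status": "NEW"}

# Module B (hypothetical): consumes the order record produced by module A.
def bill_order(order):
    if order["quantity"] <= 0:
        raise ValueError("quantity must be positive")
    return {"item": order["item"], "amount": order["quantity"] * 10.0}

# Integration test: verify the data exchanged across the interface.
def test_order_billing_interface():
    order = create_order("pen", 3)
    invoice = bill_order(order)
    assert invoice == {"item": "pen", "amount": 30.0}

    # Error handling across the interface.
    try:
        bill_order(create_order("pen", 0))
        assert False, "expected ValueError"
    except ValueError:
        pass

test_order_billing_interface()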

Validation and System Testing:

At the end of integration testing, the software is completely assembled as a
package, interfacing errors have been uncovered and corrected, and now
validation testing is performed. Software validation is achieved through a
series of black-box tests that demonstrate conformity with requirements.

After each validation test case has been conducted, one of two possible
conditions exists:

1. The function or performance characteristics conform to specification
and are accepted, or

2. A deviation from specification is uncovered and a deficiency list is
created. Deviations or errors discovered at this stage in a project can rarely
be corrected prior to scheduled delivery.

Alpha and Beta Testing:

It is virtually impossible for a software developer to foresee how the
customer will really use a program:

• Instructions for use may be misinterpreted.

• Strange combinations of data may be regularly used.

• Output that seemed clear to the tester may be unintelligible to a user in
the field.

When custom software is built for one customer, a series of acceptance tests
are conducted to enable the customer to validate all requirements. If software
is developed as a product to be used by many customers, it is impractical to
perform acceptance tests with each one. Alpha and beta tests are used to
uncover errors that only the end-user seems able to find.

The Alpha Test is conducted at the developer's site by a customer. The
software is used in a natural setting with the developer "looking over the
shoulder" of the user and recording errors and usage problems. Alpha tests
are conducted in a controlled environment.

The Beta Test is conducted at one or more customer sites by the end-user of
the software. Unlike alpha testing, the developer is generally not present.
Therefore, the beta test is a "live" application of the software in an
environment that cannot be controlled by the developer. The customer records
all problems (real or imagined) that are encountered during beta testing and
reports these to the developer at regular intervals. As a result of problems
reported during beta tests, software engineers make modifications and then
prepare for release of the software product to the entire customer base.

System Testing:
System testing is actually a series of different tests whose primary
purpose is to fully exercise the computer-based system. Although each test has
a different purpose, all work to verify that system elements have been properly
integrated and perform allocated functions.

System Testing is basically performed by a testing team that is
independent of the development team, which helps to test the quality of the
system impartially.

System Testing is carried out on the whole system in the context of
either the system requirement specifications or the functional requirement
specifications, or in the context of both. System testing tests the design and
behaviour of the system and also the expectations of the customer.

Types of System Testing:

• Performance Testing: Performance Testing is a type of software
testing that is carried out to test the speed, scalability, stability and
reliability of the software product or application.

• Load Testing: Load Testing is a type of software testing which is
carried out to determine the behaviour of a system or software product under
extreme load.

• Stress Testing: Stress Testing is a type of software testing performed
to check the robustness of the system under varying loads.

• Scalability Testing: Scalability Testing is a type of software testing
which is carried out to check the performance of a software application or
system in terms of its capability to scale up or scale down the user
request load.

Reverse Engineering:

Software Reverse Engineering is a process of recovering the design,
requirement specifications, and functions of a product from an analysis of its
code. It builds a program database and generates information from this.

What is Reverse Engineering?

Reverse engineering can extract design information from source code,
but the abstraction level, the completeness of the documentation, the degree to
which tools and a human analyst work together, and the directionality of the
process are highly variable.

Objective of Reverse Engineering:

1. Reducing Costs: Reverse engineering can help cut costs in product
development by finding replacements or cost-effective alternatives for
systems or components.

2. Analysis of Security: Reverse engineering is used in cybersecurity to
examine exploits, vulnerabilities, and malware. This helps security experts
understand threat mechanisms and develop practical defenses.

3. Integration and Customization: Through the process of reverse
engineering, developers can incorporate or modify hardware or software
components into pre-existing systems to improve their operation or
tailor them to meet particular needs.

4. Recovering Lost Source Code: Reverse engineering can be used to
recover the source code of a software application that has been lost or is
inaccessible or, at the very least, to produce a higher-level representation
of it.

5. Fixing Bugs and Maintenance: Reverse engineering can help find and
repair flaws or provide updates for systems for which the original source
code is either unavailable or inadequately documented.

Reverse Engineering Goals:

1. Cope with complexity: Reverse engineering is a common tool used to
understand and control system complexity. It gives engineers the ability
to analyze complex systems and reveal details about their architecture,
relationships and design patterns.

2. Recover lost information: Reverse engineering seeks to retrieve as
much information as possible in situations where source code or
documentation is lost or unavailable. Rebuilding source code,
analyzing data structures and retrieving design details are a few
examples of this.

3. Detect side effects: Understanding a system or component's behaviour
requires analyzing its side effects. Unintended implications,
dependencies, and interactions that might not be obvious from the
system's documentation or original source code can be found with the
use of reverse engineering.

4. Synthesize higher abstractions: Abstracting low-level features in order
to build higher-level representations is a common practice in reverse
engineering. This abstraction makes communication and analysis easier
by facilitating a greater understanding of the system's functionality.

5. Facilitate reuse: Reverse engineering can be used to find reusable
parts or modules in systems that already exist. By understanding the
functionality and architecture of a system, developers can extract and
repurpose components for use in other projects, improving efficiency
and decreasing development time.

Re-engineering:

Re-engineering is a process of software development that is done to
improve the maintainability of a software system. Re-engineering is the
examination and alteration of a system to reconstitute it in a new form.
This process encompasses a combination of sub-processes like reverse
engineering, forward engineering, reconstructing, etc.

What is Re-engineering?

Re-engineering, also known as software re-engineering, is the process of
analyzing, designing, and modifying existing software systems to
improve their quality, performance, and maintainability.

1. This can include updating the software to work with new hardware or
software platforms, adding new features, or improving the software's
overall design and architecture.

2. Software re-engineering, also known as software restructuring or
software renovation, refers to the process of improving or upgrading
existing software systems to improve their quality, maintainability, or
functionality.

3. It involves reusing the existing software artifacts, such as code, design,
and documentation, and transforming them to meet new or updated
requirements.

Objective of Re-engineering

The primary goal of software re-engineering is to improve the quality
and maintainability of the software system while minimizing the risks
and costs associated with the redevelopment of the system from scratch.
Software re-engineering can be initiated for various reasons, such as:

1. To describe a cost-effective option for system evolution.

2. To describe the activities involved in the software maintenance process.

3. To distinguish between software and data re-engineering and to explain
the problems of data re-engineering.

Overall, software re-engineering can be a cost-effective way to improve
the quality and functionality of existing software systems, while
minimizing the risks and costs associated with starting from scratch.

Process of Software Re-engineering

The process of software re-engineering involves the following steps:

1. Planning: The first step is to plan the re-engineering process, which
involves identifying the reasons for re-engineering, defining the scope,
and establishing the goals and objectives of the process.

2. Analysis: The next step is to analyze the existing system, including the
code, documentation, and other artifacts. This involves identifying the
system's strengths and weaknesses, as well as any issues that need to be
addressed.

3. Design: Based on the analysis, the next step is to design the new or
updated software system. This involves identifying the changes that
need to be made and developing a plan to implement them.

4. Implementation: The next step is to implement the changes by
modifying the existing code, adding new features, and updating the
documentation and other artifacts.

5. Testing: Once the changes have been implemented, the software system
needs to be tested to ensure that it meets the new requirements and
specifications.

6. Deployment: The final step is to deploy the re-engineered software
system and make it available to end-users.

Steps involved in Re-engineering

1. Inventory Analysis
2. Document Reconstruction

3. Reverse Engineering

4. Code Reconstruction

5. Data Reconstruction

6. Forward Engineering

Re-engineering Cost Factors

1. The quality of the software to be re-engineered.

2. The tool support available for re-engineering.

3. The extent of the required data conversion.

4. The availability of expert staff for re-engineering.

Advantages of Re-engineering

1. Reduced risk: As the software already exists, the risk is less as
compared to new software development. Development problems,
staffing problems and specification problems are among the many problems
that may arise in new software development.

2. Reduced cost: The cost of re-engineering is less than the cost of
developing new software.

3. Revelation of business rules: As a system is re-engineered, business
rules that are embedded in the system are rediscovered.

4. Better use of existing staff: Existing staff expertise can be maintained
and extended to accommodate new skills during re-engineering.

5. Improved efficiency: By analyzing and redesigning processes, re-
engineering can lead to significant improvements in productivity, speed,
and cost-effectiveness.

6. Increased flexibility: Re-engineering can make systems more adaptable
to changing business needs and market conditions.

7. Better customer service: By redesigning processes to focus on
customer needs, re-engineering can lead to improved customer
satisfaction and loyalty.

8. Increased competitiveness: Re-engineering can help organizations
become more competitive by improving efficiency, flexibility, and
customer service.

9. Improved quality: Re-engineering can lead to better quality products
and services by identifying and eliminating defects and inefficiencies in
processes.

10. Increased innovation: Re-engineering can lead to new and innovative
ways of doing things, helping organizations to stay ahead of their
competitors.

11. Improved compliance: Re-engineering can help organizations to
comply with industry standards and regulations by identifying and
addressing areas of non-compliance.

Disadvantages of Re-engineering

Major architectural changes or radical reorganizing of the system's data
management have to be done manually. A re-engineered system is not likely to
be as maintainable as a new system developed using modern software
engineering methods.

1. High costs: Re-engineering can be a costly process, requiring
significant investments in time, resources, and technology.

2. Disruption to business operations: Re-engineering can disrupt normal
business operations and cause inconvenience to customers, employees
and other stakeholders.

3. Resistance to change: Re-engineering can encounter resistance from
employees who may be resistant to change and uncomfortable with new
processes and technologies.

4. Risk of failure: Re-engineering projects can fail if they are not planned
and executed properly, resulting in wasted resources and lost
opportunities.

5. Lack of employee involvement: Re-engineering projects that are not
properly communicated and do not involve employees may lead to a lack of
employee engagement and ownership, resulting in failure of the project.

6. Difficulty in measuring success: The success of re-engineering can be
difficult to measure, making it hard to justify the cost and
effort involved.

7. Difficulty in maintaining continuity: Re-engineering can lead to
significant changes in processes and systems, making it difficult to
maintain continuity and consistency in the organization.

CASE Tools:

CASE tools are a set of software application programs which are used to
automate SDLC activities. CASE tools are used by software project
managers, analysts and engineers to develop software systems.

There are a number of CASE tools available to simplify various stages of the
Software Development Life Cycle, such as Analysis tools, Design tools,
Project management tools, Database Management tools and Documentation
tools, to name a few.

Use of CASE tools accelerates the development of the project to produce the
desired result and helps to uncover flaws before moving ahead with the next
stage in software development.

Components of CASE Tools

CASE tools can be broadly divided into the following parts based on their
use at a particular SDLC stage:
• Central Repository - CASE tools require a central repository, which can
serve as a source of common, integrated and consistent information.
Central repository is a central place of storage where product
specifications, requirement documents, related reports and diagrams,
other useful information regarding management is stored. Central
repository also serves as data dictionary.

• Upper Case Tools - Upper CASE tools are used in planning, analysis
and design stages of SDLC.
• Lower Case Tools - Lower CASE tools are used in implementation,
testing and maintenance.
• Integrated Case Tools - Integrated CASE tools are helpful in all the
stages of SDLC, from Requirement gathering to Testing and
documentation.

CASE tools can be grouped together if they have similar functionality, process
activities and capability of getting integrated with other tools.

Project Management Tools

These tools are used for project planning, cost and effort estimation, project
scheduling and resource planning. Managers have to ensure that project
execution strictly complies with every step mentioned in software project
management. Project management tools help in storing and sharing project
information in real time throughout the organization. For example, Creative
Pro Office, Trac Project, Basecamp.
Analysis Tools

These tools help to gather requirements, automatically check for any
inconsistency, inaccuracy in the diagrams, data redundancies or erroneous
omissions. For example, Accept 360, Accompa, CaseComplete for requirement
analysis, Visible Analyst for total analysis.

Design Tools

These tools help software designers to design the block structure of the software, which may further be broken down into smaller modules using refinement techniques. These tools provide detailing of each module and the interconnections among modules. For example, Animated Software Design.

Programming Tools

These tools consist of programming environments like an IDE (Integrated Development Environment), built-in module libraries, and simulation tools. These tools provide comprehensive aid in building the software product and include features for simulation and testing. For example, Cscope to search code in C, and Eclipse.

Integration testing tools

Integration testing tools are used to test the interfaces between modules and find bugs that may arise because of the integration of multiple modules. The main objective of these tools is to make sure that the specific modules work as per the client's needs. These tools are used to construct integration testing suites; a minimal sketch of such a test appears after the list below.

Some of the most used integration testing tools are as follows:

o Citrus
o FitNesse
o TESSY
o Protractor
o Rational Integration tester
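
As a purely illustrative sketch (not tied to any of the tools listed above), the test below uses Python's built-in unittest module to exercise the interface between two hypothetical functions, parse_order and compute_total, which stand in for separately developed modules:

# Hypothetical integration test: checks that the output of one "module"
# is accepted correctly as input by another.
import unittest


def parse_order(raw):
    """Stand-in for module A: parse "item: qty" into (item, quantity)."""
    item, qty = raw.split(":")
    return item.strip(), int(qty)


def compute_total(item, qty, price_table):
    """Stand-in for module B: look up the unit price and compute the total."""
    return price_table[item] * qty


class OrderIntegrationTest(unittest.TestCase):
    def test_parsed_order_feeds_pricing(self):
        item, qty = parse_order("apple: 3")               # module A output...
        total = compute_total(item, qty, {"apple": 2.0})  # ...fed into module B
        self.assertEqual(total, 6.0)


if __name__ == "__main__":
    unittest.main()

A failure here would point to a mismatch at the interface (for example, module A returning the quantity as a string while module B expects a number), which is exactly the class of defect integration testing targets.
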
Software Development Life Cycle (SDLC)

A software life cycle model (also termed process model) is a pictorial and
diagrammatic representation of the software life cycle. A life cycle model
represents all the activities required to make a software product transition through its life cycle stages.

SDLC represents the process of developing software. The SDLC framework includes the following stages:

Stage 1: Communication and requirement analysis

In communication, the user requests the software by meeting the service provider. Requirement analysis is the most important and necessary stage in SDLC.

The business analyst and project organizer set up a meeting with the client to gather all the information, such as what the customer wants to build, who will be the end user, and what the objective of the product is. Before creating a product, a core understanding or knowledge of the product is very necessary.

Once the requirement is understood, the SRS (Software Requirement Specification) document is created. The developers should thoroughly follow this document, and it should also be reviewed by the customer for future reference.

Stage 2: Feasibility study and system analysis

A rough plan and road map for the software is prepared using algorithms and models.

Stage 3: Designing the software

The next phase brings together all the knowledge of requirements and analysis into the design of the software project. This phase is the product of the last two stages, combining inputs from the customer and requirement gathering into a blueprint of the software.
Stage 4: Developing the project

In this phase of SDLC, the actual development begins and the program is built. The implementation of the design begins with writing code. Developers have to follow the coding guidelines defined by their organization, and programming tools like compilers, interpreters, debuggers, etc. are used to develop and implement the code.

Stage 5: Testing

After the code is generated, it is tested against the requirements to make sure that the product solves the needs addressed and gathered during the requirements stage.

During this stage, unit testing, integration testing, system testing, and acceptance testing are done.

Stage 6: Deployment

Once the software is certified and no bugs or errors are reported, it is deployed.

Then, based on the assessment, the software may be released as it is or with suggested enhancements in the targeted segment.

Stage 7: Maintenance

Once the client starts using the developed system, the real issues come up and need to be solved from time to time.

This procedure, in which care is taken of the developed product, is known as maintenance.

Different Software models

Waterfall Model:

The Waterfall Model was the first process model to be introduced. It is also referred to as a linear-sequential life cycle model or classic model. It is very simple to understand and use. In a waterfall model, each phase must be completed before the next phase can begin, and there is no overlapping in the phases.
The Waterfall model is the earliest SDLC approach that was used for software
development.
The waterfall Model illustrates the software development process in a linear
sequential flow. This means that any phase in the development process begins
only if the previous phase is complete. In this waterfall model, the phases do
not overlap.
The waterfall approach was the first SDLC model to be used widely in software engineering to ensure the success of the project. In the waterfall approach, the whole process of software development is divided into separate phases. In this waterfall model, typically, the outcome of one phase acts as the input for the next phase sequentially.
The sequential phases in the Waterfall Model are as follows:

• Requirement Gathering and Analysis − All possible requirements of the system to be developed are captured in this phase and documented in a requirement specification document.
• System Design − The requirement specifications from the first phase are studied in this phase and the system design is prepared. This system design helps in specifying hardware and system requirements and helps in defining the overall system architecture.
• Implementation − With inputs from the system design, the system is first developed in small programs called units, which are integrated in the next phase. Each unit is developed and tested for its functionality, which is referred to as unit testing.
• Integration and Testing − All the units developed in the implementation phase are integrated into a system after testing of each unit. Post integration, the entire system is tested for any faults and failures.
• Deployment of System − Once the functional and non-functional testing is done, the product is deployed in the customer environment or released into the market.
• Maintenance − There are some issues which come up in the client environment. To fix those issues, patches are released. Also, to enhance the product, some better versions are released. Maintenance is done to deliver these changes in the customer environment.
All these phases are cascaded to each other, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases. The next phase is started only after the defined set of goals is achieved for the previous phase and it is signed off, hence the name "Waterfall Model". In this model, phases do not overlap.
Advantages:

Some of the major advantages of the Waterfall Model are as follows:

• Simple and easy to understand and use.
• Phases are processed and completed one at a time.
• Works well for smaller projects where requirements are very well understood.
• It is disciplined in its approach.
Disadvantages:

• No working software is produced until late in the life cycle.
• High amounts of risk and uncertainty.
• Not a good model for complex and object-oriented projects.
• Poor model for long and ongoing projects.
• Not suitable for projects where requirements are at a moderate to high risk of changing, so risk and uncertainty are high with this process model.

Spiral Model:

The spiral model, initially proposed by Boehm, is a combination of the waterfall and iterative models. Using the spiral model, the software is developed in a series of incremental releases. Each phase in the spiral model begins with a planning phase and ends with an evaluation phase.

The spiral model has four phases. A software project repeatedly passes
through these phases in iterations called Spirals.

Planning phase
This phase starts with gathering the business requirements in the baseline
spiral. In the subsequent spirals as the product matures, identification of
system requirements, subsystem requirements and unit requirements are all
done in this phase.
This phase also includes understanding the system requirements by
continuous communication between the customer and the system analyst. At
the end of the spiral, the product is deployed in the identified market.
Risk Analysis
Risk Analysis includes identifying, estimating and monitoring the technical
feasibility and management risks, such as schedule slippage and cost overrun.
After testing the build, at the end of the first iteration, the customer evaluates the software and provides feedback.
Engineering or construct phase
The Construct phase refers to production of the actual software product at
every spiral. In the baseline spiral, when the product is just thought of and the design is being developed, a POC (Proof of Concept) is developed in this phase to get customer feedback.
Evaluation Phase
This phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral.
Software project repeatedly passes through all these four phases.
Advantages:

• Flexible model.
• Project monitoring is very easy and effective.
• Risk management.
• Easy and frequent feedback from users.

Disadvantages:

• It does not work well for smaller projects.
• Risk analysis requires specific expertise.
• It is a costly and complex model.
• Project success is highly dependent on the risk analysis phase.
Prototype Model:

To overcome the disadvantages of the waterfall model, this model is implemented with a special artifact called a prototype. It is also known as the evaluation model.
Step 1: Requirements gathering and analysis

The prototyping model starts with requirement analysis. In this phase, the requirements of the system are defined in detail. During the process, the users of the system are interviewed to find out what they expect from the system.
Step 2: Quick design

The second phase is a preliminary design or a quick design. In this stage, a simple design of the system is created. However, it is not a complete design. It gives a brief idea of the system to the user. The quick design helps in developing the prototype.

Step 3: Build a Prototype

In this phase, an actual prototype is designed based on the information gathered from the quick design. It is a small working model of the required system.

Step 4: Initial user evaluation

In this stage, the proposed system is presented to the client for an initial evaluation. It helps to find out the strengths and weaknesses of the working model. Comments and suggestions are collected from the customer and provided to the developer.

Step 5: Refining prototype

If the user is not happy with the current prototype, it is refined according to the user's feedback and suggestions.

This phase is not over until all the requirements specified by the user are met. Once the user is satisfied with the developed prototype, a final system is developed based on the approved final prototype.

Step 6: Implement Product and Maintain

Once the final system is developed based on the final prototype, it is thoroughly tested and deployed to production. The system undergoes routine maintenance to minimize downtime and prevent large-scale failures.
Advantages:

• Users are actively involved in development. Therefore, errors can be detected in the initial stage of the software development process.
• Missing functionality can be identified, which helps to reduce the risk of failure, as prototyping is also considered a risk reduction activity.
• Helps team members to communicate effectively.
• Customer satisfaction exists because the customer can feel the product at a very early stage.

Disadvantages:

• Prototyping is a slow and time-consuming process.
• The cost of developing a prototype is a total waste, as the prototype is ultimately thrown away.
• Prototyping may encourage excessive change requests.
• After seeing an early prototype, the customers may think that the actual product will be delivered to them soon.
• The client may lose interest in the final product when he or she is not happy with the initial prototype.

SDLC - V-Model

The V-model is an SDLC model where the execution of processes happens in a sequential manner in a V-shape. It is also known as the Verification and Validation model.

The V-Model is an extension of the waterfall model and is based on the association of a testing phase with each corresponding development stage. This means that for every single phase in the development cycle, there is a directly associated testing phase. This is a highly disciplined model, and the next phase starts only after completion of the previous phase.
V- Model - Design

Under the V-Model, the corresponding testing phase of the development phase is
planned in parallel. So, there are Verification phases on one side of the ‘V’
and Validation phases on the other side. The Coding Phase joins the two sides
of the V-Model.

The following are the various phases of the Verification Phase of the V-model:

1. Business requirement analysis: This is the first step, where the product requirements are understood from the customer's side. This phase involves detailed communication to understand the customer's expectations and exact requirements.
2. System Design: In this stage, system engineers analyze and interpret the business of the proposed system by studying the user requirements document.
3. Architecture Design: The baseline in selecting the architecture is that it should realize all the requirements; it typically consists of the list of modules, brief functionality of each module, their interface relationships, dependencies, database tables, architecture diagrams, technology details, etc. The integration test design is carried out in this phase.
4. Module Design: In the module design phase, the system is broken down into small modules. The detailed design of the modules is specified, which is known as Low-Level Design.
5. Coding Phase: After designing, the coding phase is started. Based on the requirements, a suitable programming language is decided. There are some guidelines and standards for coding (a tiny illustration follows this list). Before being checked into the repository, the final build is optimized for better performance, and the code goes through many code reviews to check its performance.
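
As a purely hypothetical illustration of what such coding guidelines can look like in practice (descriptive names, a docstring, and a named constant instead of a "magic number"), consider this small Python snippet:

SECONDS_PER_HOUR = 3600  # named constant instead of a bare 3600 in the logic


def hours_to_seconds(hours: float) -> float:
    """Convert a duration given in hours to seconds."""
    return hours * SECONDS_PER_HOUR

A reviewer checking the code against the team's standards would look for exactly these kinds of conventions during the code reviews mentioned above.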

The following are the various phases of the Validation Phase of the V-model:

1. Unit Testing: In the V-Model, Unit Test Plans (UTPs) are developed during the module design phase. These UTPs are executed to eliminate errors at the code or unit level. A unit is the smallest entity which can independently exist, e.g., a program module. Unit testing verifies that the smallest entity can function correctly when isolated from the rest of the code/units (a minimal sketch of such isolation is shown after this list).
2. Integration Testing: Integration Test Plans are developed during the Architectural Design Phase. These tests verify that units created and tested independently can coexist and communicate among themselves.
3. System Testing: System Test Plans are developed during the System Design Phase. Unlike Unit and Integration Test Plans, System Test Plans are composed by the client's business team. System testing ensures that the expectations from the developed application are met.
4. Acceptance Testing: Acceptance testing is related to the business requirement analysis part. It includes testing the software product in the user environment. Acceptance tests reveal compatibility problems with the other systems available within the user environment. They also discover non-functional problems, such as load and performance defects, in the real user environment.
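
As a minimal, hypothetical sketch of the isolation idea described under Unit Testing above, the snippet below uses Python's unittest and unittest.mock so that the unit under test (send_reminder, an invented example function) is exercised without its real collaborators:

import unittest
from unittest import mock


def send_reminder(user_id, fetch_email, mailer):
    """Unit under test: look up an address and ask a mailer to send a message."""
    address = fetch_email(user_id)
    if not address:
        return False
    mailer.send(address, "Your report is ready")
    return True


class SendReminderUnitTest(unittest.TestCase):
    def test_sends_when_address_found(self):
        mailer = mock.Mock()  # stand-in for the real mail module
        ok = send_reminder(42, lambda _id: "a@b.c", mailer)
        self.assertTrue(ok)
        mailer.send.assert_called_once_with("a@b.c", "Your report is ready")

    def test_skips_when_address_missing(self):
        mailer = mock.Mock()
        ok = send_reminder(42, lambda _id: None, mailer)
        self.assertFalse(ok)
        mailer.send.assert_not_called()


if __name__ == "__main__":
    unittest.main()

Because the email lookup and the mailer are replaced with test doubles, a failure points at the unit itself rather than at its neighbours, which is the distinction between unit testing and the integration and system testing described above.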

When to use the V-Model?

1. When the requirement is well defined and not ambiguous.
2. The V-shaped model should be used for small to medium-sized projects where requirements are clearly defined and fixed.
3. The V-shaped model should be chosen when ample technical resources are available with essential technical expertise.

Advantages:

• Easy to understand.
• Testing activities like planning and test design happen well before coding.
• This saves a lot of time. Hence, there is a higher chance of success over the waterfall model.
• Avoids the downward flow of defects.
• Works well for small projects where requirements are easily understood.

Disadvantages:

• Very rigid and least flexible.
• Not a good model for a complex project.
• Software is developed during the implementation stage, so no early prototypes of the software are produced.
• If any changes happen midway, then the test documents, along with the requirement documents, have to be updated.

SDLC - RAD Model

The RAD (Rapid Application Development) model is based on prototyping and iterative development with no specific planning involved. The process of writing the software itself involves the planning required for developing the product.
Rapid Application Development focuses on gathering customer requirements through workshops or focus groups, early testing of the prototypes by the customer using an iterative concept, reuse of the existing prototypes (components), continuous integration, and rapid delivery.
RAD Model Design:

RAD model distributes the analysis, design, build and test phases into a series
of short, iterative development cycles.

Following are the various phases of the RAD Model:

Business Modelling:
The business model for the product under development is designed in terms
of flow of information and the distribution of information between various
business channels. A complete business analysis is performed to find the vital information for the business, how it can be obtained, how and when the information is processed, and what the factors driving the successful flow of information are.
Data Modelling:
The information gathered in the Business Modelling phase is reviewed and analyzed to form sets of data objects vital for the business. The attributes of all data sets are identified and defined. The relations between these data objects are established and defined in detail in relevance to the business model.
Process Modelling:
The data object sets defined in the Data Modelling phase are converted to
establish the business information flow needed to achieve specific business
objectives as per the business model. The process model for any changes or
enhancements to the data object sets is defined in this phase. Process
descriptions for adding, deleting, retrieving or modifying a data object are
given.
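
As a purely illustrative sketch (the Customer and Order objects below are invented for the example), this is how the data objects from Data Modelling and the add/retrieve/delete process descriptions from Process Modelling might look when converted into code:

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Customer:
    customer_id: int
    name: str


@dataclass
class Order:
    order_id: int
    customer_id: int  # relation back to the Customer data object
    amount: float


@dataclass
class OrderStore:
    """Holds the order data set and the processes that modify it."""
    orders: Dict[int, Order] = field(default_factory=dict)

    def add_order(self, order: Order) -> None:
        self.orders[order.order_id] = order

    def retrieve_orders_for(self, customer: Customer) -> List[Order]:
        return [o for o in self.orders.values()
                if o.customer_id == customer.customer_id]

    def delete_order(self, order_id: int) -> None:
        self.orders.pop(order_id, None)

Automation tools used in the Application Generation phase would typically generate this kind of scaffolding from the data and process models rather than requiring it to be written by hand.
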
Application Generation:
The actual system is built and coding is done by using automation tools to
convert process and data models into actual prototypes.
Testing and Turnover
The overall testing time is reduced in the RAD model as the prototypes are
independently tested during every iteration. However, the data flow and the
interfaces between all the components need to be thoroughly tested with
complete test coverage. Since most of the programming components have
already been tested, it reduces the risk of any major issues.
Incremental model:

It is a process of software development where requirements are divided into multiple standalone modules of the software development cycle.

Each module goes through the requirement, design, implementation, and testing phases; this process continues until the complete system is achieved.

The various phases of the incremental model are as follows:

1. Requirement gathering & analysis: In this phase, requirements are gathered from customers, and an analyst checks whether the requirements can be fulfilled or not. The analyst also checks whether the need can be achieved within budget. After all of this, the software team moves to the next phase.
2. Design: In the design phase, the team designs the software with the help of different diagrams like the data flow diagram, activity diagram, class diagram, state transition diagram, etc.

3. Implementation: In the implementation phase, requirements are written in the coding language and transformed into computer programs, which are called software.

4. Testing: After completing the coding phase, software testing starts using
different test methods. There are many test methods, but the most common
are white box, black box, and grey box test methods.

5. Deployment: After completing all the phases, the software is deployed to its work environment.

6. Review: In this phase, after product deployment, a review is performed to check the behavior and validity of the developed product. If any errors are found, the process starts again from requirement gathering.

7. Maintenance: In the maintenance phase, after deployment of the software in the working environment, there may be some bugs or errors, or new updates may be required. Maintenance involves debugging and adding new options.

Advantages:

1. Testing and debugging during a smaller iteration is easy.
2. Parallel development can be planned.
3. It easily accommodates the ever-changing needs of the project.
4. Risks are identified and resolved during an iteration.
5. Limited time is spent on documentation and extra time on designing.

Disadvantages:

1. It is not suitable for smaller projects.
2. The design can be changed again and again because of imperfect requirements.
3. Requirement changes can cause the project to go over budget.
4. The project completion date cannot be confirmed because of changing requirements.
