UNIT-IV: Front-End Application and User Interaction
The user interface is the front-end application view through which the user interacts with the software. The user can manipulate and control the software, as well as the hardware, by means of the user interface.
User interface design creates an effective communication medium between a human and a computer. The UI provides the fundamental platform for human-computer interaction. A good user interface is:
1. Attractive
2. Simple to use
3. Responsive within a short time
4. Clear to understand
5. Consistent across all interface screens
Types of User Interface
1. Command Line Interface: The command line interface provides a command prompt, where the user types a command and feeds it to the system. The user needs to remember the syntax of each command and its use.
2. Graphical User Interface: The graphical user interface provides a simple interactive interface for working with the system. A GUI can be a combination of both hardware and software. Through the GUI, the user interprets and operates the software.
User Interface Design Process
The analysis and design process of a user interface is iterative and can be
represented by a spiral model. The analysis and design process of user
interface consists of four framework activities.
1. User, Task, Environmental Analysis, and Modeling
Initially, the focus is on the profile of the users who will interact with the system, i.e., their understanding, skill and knowledge, type of user, etc. Based on the user profiles, users are grouped into categories, and requirements are gathered from each category. Based on these requirements, the developer understands how to develop the interface. Once all the requirements are gathered, a detailed analysis is conducted. In the analysis part, the tasks that the user performs to establish the goals of the system are identified, described and elaborated. The analysis of the user environment focuses on the physical work environment. Among the questions to be asked are:
1. Where will the interface be located physically?
2. Will the user be sitting, standing, or performing other tasks unrelated to
the interface?
3. Does the interface hardware accommodate space, light, or noise
constraints?
4. Are there special human factors considerations driven by environmental
factors?
2. Interface Design
The goal of this phase is to define the set of interface objects and actions (i.e., control mechanisms) that enable the user to perform desired tasks, and to indicate how these control mechanisms affect the system. Specify the action sequence of
tasks and subtasks, also called a user scenario. Indicate the state of the system
when the user performs a particular task. Always follow the three golden rules
stated by Theo Mandel. Design issues such as response time, command and
action structure, error handling, and help facilities are considered as the design
model is refined. This phase serves as the foundation for the implementation
phase.
3. Interface Construction and Implementation
The implementation activity begins with the creation of a prototype (model) that
enables usage scenarios to be evaluated. As the iterative design process continues, a user interface toolkit that allows the creation of windows, menus, device interaction, error messages, commands, and many other elements of an interactive environment can be used to complete the construction of the interface.
4. Interface Validation
This phase focuses on testing the interface. The interface should perform tasks correctly and be able to handle a variety of tasks. It should achieve all of the user's requirements, and it should be easy to use and easy to learn. Users should accept the interface as a useful one in their work.
User Interface Design Golden Rules
The following are the golden rules stated by Theo Mandel that must be followed during the design of the interface.
Place the User in Control
1. Define the interaction modes in such a way that does not force the
user into unnecessary or undesired actions: The user should be able to
easily enter and exit the mode with little or no effort.
2. Provide for flexible interaction: Different people prefer different interaction mechanisms: some might use keyboard commands, some a mouse, some a touch screen, etc. Hence, all such interaction mechanisms should be provided.
3. Allow user interaction to be interruptible and undoable: When a user is performing a sequence of actions, the user must be able to interrupt the sequence to do some other work without losing the work already done. The user should also be able to undo operations.
4. Streamline interaction as skill level advances and allow the interaction to be customized: Advanced or highly skilled users should be given the chance to customize the interface as they want, which allows different interaction mechanisms so that the user does not get bored using the same interaction mechanism.
5. Hide technical internals from casual users: The user should not be
aware of the internal technical details of the system. He should interact
with the interface just to do his work.
6. Design for direct interaction with objects that appear on-screen: The user should be able to manipulate the objects present on the screen to perform necessary tasks. This gives the user a feeling of direct control over the screen.
Reduce the User’s Memory Load
1. Reduce demand on short-term memory: When users are involved in complex tasks, the demand on short-term memory is significant. The interface should therefore be designed to reduce the need to remember previously done actions, given inputs and results.
2. Establish meaningful defaults: An initial set of defaults should always be provided for the average user; if a user needs some new features, he should be able to add the required features.
3. Define shortcuts that are intuitive: Intuitive mnemonics should be provided for the user. Mnemonics are keyboard shortcuts for performing actions on the screen.
4. The visual layout of the interface should be based on a real-world metaphor: Anything represented on a screen that is a metaphor for a real-world entity is easily understood by users.
5. Disclose information in a progressive fashion: The interface should be
organized hierarchically i.e., on the main screen the information about the
task, an object or some behavior should be presented first at a high level
of abstraction. More detail should be presented after the user indicates
interest with a mouse pick.
Make the Interface Consistent
1. Allow the user to put the current task into a meaningful context: Many interfaces have dozens of screens, so it is important to provide indicators consistently so that the user knows the context of the work being done. The user should also know from which page they navigated to the current page, and where they can navigate from the current page.
2. Maintain consistency across a family of applications: In the development of a set of applications, all should follow and implement the same design rules so that consistency is maintained among the applications.
3. If past interactive models have created user expectations, do not make changes unless there is a compelling reason: Once a particular interactive sequence has become standard (e.g., Ctrl+S to save a file), the user expects it in every application she encounters.
User interface design is a crucial aspect of software engineering, as it is
the means by which users interact with software applications. A well-
designed user interface can improve the usability and user experience of
an application, making it easier to use and more effective.
Key Principles for Designing User Interfaces
1. User-centered design: User interface design should be focused on the
needs and preferences of the user. This involves understanding the user’s
goals, tasks, and context of use, and designing interfaces that meet their
needs and expectations.
2. Consistency: Consistency is important in user interface design, as it helps
users to understand and learn how to use an application. Consistent
design elements such as icons, color schemes, and navigation menus
should be used throughout the application.
3. Simplicity: User interfaces should be designed to be simple and easy to
use, with clear and concise language and intuitive navigation. Users
should be able to accomplish their tasks without being overwhelmed by
unnecessary complexity.
4. Feedback: Feedback is significant in user interface design, as it helps
users to understand the results of their actions and confirms that they are
making progress towards their goals. Feedback can take the form of
visual cues, messages, or sounds.
5. Accessibility: User interfaces should be designed to be accessible to all
users, regardless of their abilities. This involves considering factors such
as color contrast, font size, and assistive technologies such as screen
readers.
6. Flexibility: User interfaces should be designed to be flexible and
customizable, allowing users to tailor the interface to their own
preferences and needs.
Real-time systems:
A real-time system is a system that is subject to real-time constraints, i.e., the response should be guaranteed within a specified timing constraint, or the system should meet a specified deadline. Examples include flight control systems and real-time monitors.
Types of real-time systems based on timing constraints:
1. Hard real-time system: This type of system can never miss its deadline.
Missing the deadline may have disastrous consequences. The usefulness
of results produced by a hard real-time system decreases abruptly and
may become negative if tardiness increases. Tardiness means how late a
real-time system completes its task with respect to its deadline. Example:
Flight controller system.
2. Soft real-time system: This type of system can miss its deadline occasionally, with some acceptably low probability. Missing a deadline does not have disastrous consequences. The usefulness of results produced by a soft real-time system decreases gradually with an increase in tardiness. Example: telephone switches.
3. Firm Real-Time Systems: These are systems that lie between hard and
soft real-time systems. In firm real-time systems, missing a deadline is
tolerable, but the usefulness of the output decreases with time. Examples
of firm real-time systems include online trading systems, online auction
systems, and reservation systems.
Reference model of the real-time system:
Our reference model is characterized by three elements:
1. A workload model: It specifies the application supported by the system.
2. A resource model: It specifies the resources available to the application.
3. Algorithms: It specifies how the application system will use resources.
Terms related to real-time system:
1. Job: A job is a small piece of work that can be assigned to a processor
and may or may not require resources.
2. Task: A set of related jobs that jointly provide some system
functionality.
3. Release time of a job: It is the time at which the job becomes ready for
execution.
4. Execution time of a job: It is the time taken by the job to finish its
execution.
5. Deadline of a job: It is the time by which a job should finish its
execution. Deadline is of two types: absolute deadline and relative
deadline.
6. Response time of a job: It is the length of time from the release time of a
job to the instant when it finishes.
7. The maximum allowable response time of a job is called its relative
deadline.
8. The absolute deadline of a job is equal to its relative deadline plus its
release time.
9. Processors are also known as active resources. They are essential for the
execution of a job. A job must have one or more processors in order to
execute and proceed towards completion. Example: computer,
transmission links.
10.Resources are also known as passive resources. A job may or may not
require a resource during its execution. Example: memory, mutex
11.Two resources are identical if they can be used interchangeably else they
are heterogeneous.
Advantages:
• Real-time systems provide immediate and accurate responses to external
events, making them suitable for critical applications such as air traffic
control, medical equipment, and industrial automation.
• They can automate complex tasks that would otherwise be impossible to
perform manually, thus improving productivity and efficiency.
• Real-time systems can reduce human error by automating tasks that
require precision, accuracy, and consistency.
• They can help to reduce costs by minimizing the need for human
intervention and reducing the risk of errors.
• Real-time systems can be customized to meet specific requirements,
making them ideal for a wide range of applications.
Disadvantages:
• Real-time systems can be complex and difficult to design, implement, and
test, requiring specialized skills and expertise.
• They can be expensive to develop, as they require specialized hardware
and software components.
• Real-time systems are typically less flexible than other types of computer
systems, as they must adhere to strict timing requirements and cannot be
easily modified or adapted to changing circumstances.
• They can be vulnerable to failures and malfunctions, which can have
serious consequences in critical applications.
• Real-time systems require careful planning and management, as they
must be continually monitored and maintained to ensure they operate
correctly.
HUMAN FACTORS:
Human factors are imperative for the design and development of any software work; this section presents the underlying idea for incorporating these factors into the software life cycle. Many large companies have come to recognise that the success of a product depends upon a solid human factors design. Human factors discovers and applies information about human behaviour, abilities, limitations and other characteristics to the design of tools, machines, systems, tasks, jobs and environments for productive, safe, comfortable and effective human use. The study of human factors is essential for every software manager, since he/she must be acquainted with how his/her staff members interact with each other. Generally, software products are used by a variety of people, and it is necessary to take into account the abilities of such a group to make the software more useful and popular.
Objective of human factors design:
The purpose of human factors design is to create products that meet operability and learnability goals. The design should meet the user's needs by being effective and efficient, but also of high quality, keeping an eye on the major concern of the customer in most cases: affordability.
The engineering discipline for designers and developers must focus on the following:
• Users and their psychology
• Amount of work that the user must do, including task goals,
performance requirements and group communication requirements.
• Quality and performance.
• Information required by users and their job.
Benefits:
• Elevated user satisfaction.
• Decreased training time and costs.
• Reduced operator stress.
• Reduced product liability.
• Decrement of operating costs.
• Lesser operational error.
Benefits of usability:
• Elevated sales and consumer satisfaction.
• Increased productivity and efficiency.
• Decreased training costs and time.
• Lesser support and maintenance costs.
• Reduced documentation and support costs.
• Increased satisfaction, performance and productivity.
Human-computer Interaction:
Computer:
A computer system comprises various elements, each of which affects the user of the system.
Input devices for interactive use, allowing text entry, drawing and selection from the screen:
➢ Text entry: traditional keyboard, phone text entry.
➢ Pointing: mouse, but also touch pads.
Output display devices for interactive use:
➢ Different types of screen, mostly using the same form of bitmap display.
➢ Large displays and situated displays for shared and public use.
Memory:
➢ Short-term memory: RAM.
➢ Long-term memory: magnetic and optical disks, with capacity limitations related to document and video storage.
Processing:
➢ The effects when systems run too slow or too fast; the myth of the infinitely fast machine.
➢ Limitations of processing speed.
Instead of workstations, computers may be in the form of embedded computational machines, such as parts of microwave ovens. Because the techniques for designing these interfaces bear so much relationship to the techniques for designing workstation interfaces, they can be profitably treated together. Human-computer interaction, by contrast, studies both the machine side and the human side, but of a narrower class of devices.
Human:
Humans are limited in their capacity to process information. This has important implications for design. Information is received and responses are given via a number of input and output channels:
➢ Visual channel.
➢ Auditory channel.
➢ Movement.
Information is stored in memory:
➢ Sensory memory.
➢ Short-term memory.
➢ Long-term memory.
Information is processed and applied in:
➢ Reasoning.
➢ Problem solving.
➢ Error.
Interaction:
The communication between the user and the system is described by an interaction framework with four parts:
1. User
2. Input
3. System
4. Output
Interaction models help us to understand what is going on in the interaction
between user and system. They address the translations between what the user
wants and what the system does.
Human-Computer interaction is concerned with the joint performance of tasks by
humans and machines; the structure of communication between human and
machine, human capabilities to use machines.
The goals of HCI are to produce usable and safe systems, as well as functional systems. In order to produce computer systems with good usability, developers must attempt to:
➢ Understand the factors that determine how people use technology.
➢ Develop tools and techniques to enable building suitable systems.
➢ Achieve efficient, effective and safe interaction.
➢ Put people first.
Interaction devices:
Different tasks, different types of data and different types of users all require different user interface devices. In most cases, interface devices are either input or output devices; a touch screen, for example, combines both.
➢ Interface devices correlate to the human senses.
➢ Nowadays, a device is usually designed either for input or for output.
Input devices:
Most commonly, personal computers are equipped with text input and pointing devices. For text input, the QWERTY keyboard is the standard solution, but alternatives exist depending on the purpose of the system. Likewise, the mouse is not the only imaginable pointing device; alternatives for similar but slightly different purposes include the touchpad, trackball and joystick.
Output devices:
Output from a personal computer in most cases means output of visual data. Devices for dynamic visualisation include the traditional cathode ray tube (CRT) and the liquid crystal display (LCD). Printers are also a very important device for visual output, but are substantially different from screens in that their output is static.
The subject of HCI is very rich, both in terms of the disciplines it draws from and the opportunities for research. The study of user interfaces provides a double-sided approach to understanding how humans and machines interact. From studying human psychology, we can design better interfaces for people to interact with computers.
Human-Computer Interface Design:
The overall process for designing a user interface begins with the creation of different models. The intention of human-computer interface design is to learn the ways of designing user-friendly interfaces or interactions.
Interface Design Models:
Four different models come into play when a human-computer
interface (HCI) is to be designed.
The software engineer creates a design model, a human engineer (or the software engineer) establishes a user model, the end user develops a mental image that is often called the user's model or the system perception, and the implementers of the system create a system image.
Task Analysis and Modelling:
Task analysis and modelling can be applied to understand the tasks that people currently perform and to map these into a similar set of tasks.
For example, assume that a small software company wants to build a computer-aided design system explicitly for interior designers. By observing a designer at work, the engineer notices that interior design comprises a number of activities: furniture layout, fabric and material selection, wall and window covering selection, presentation, costing and shopping. Each of these major tasks can be elaborated into subtasks. For example, furniture layout can be refined into the following tasks:
(1) Draw floor plan based on room dimensions;
(2) Place windows and doors at appropriate locations;
(3) Use furniture templates to draw scaled furniture outlines on floor
plan;
(4) Move furniture outlines to get best placement;
(5) Label all furniture outlines;
(6) Draw dimensions to show location; and
(7) Draw perspective view for customer.
Interface design:
Interface design is one of the most important parts of software design. It is crucial in the sense that user interaction with the system takes place through the various interfaces provided by the software product.
Think of the days of text-based systems, where the user had to type a command on the command line to execute a simple task.
Example of a command line interface:
• run prog1.exe /i=2 message=on
The above command line executes the program prog1.exe with an input i=2 and with messages during execution set to on. Although such a command line interface gives the user the liberty to run a program with a concise command, it is difficult for a novice user and is error prone. It also requires the user to remember the commands and their various options, as shown above. Next, consider a menu with options being presented to the user (refer to Figure 3.11).
This simple menu allows the user to execute the program with the available options as a selection, and further offers options for exiting the program and going back to the previous screen. Although it provides greater flexibility than the command line option and does not need the user to remember commands, the user still cannot navigate to a desired option from this screen; at best, the user can go back to the previous screen to select a different option.
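As an illustration, the following is a minimal sketch in Python of a menu-driven interface of the kind described above; the program name and options echo the earlier command line example, and the dispatch logic is hypothetical.

    # Minimal menu-driven interface sketch (options are hypothetical).
    def run_program(options):
        print("Running prog1.exe with", options)

    while True:
        print("1. Run with i=2, message=on")
        print("2. Run with i=2, message=off")
        print("3. Back to previous screen")
        print("4. Exit")
        choice = input("Select an option: ")
        if choice == "1":
            run_program("/i=2 message=on")
        elif choice == "2":
            run_program("/i=2 message=off")
        elif choice in ("3", "4"):
            break
        else:
            print("Invalid selection, please try again.")

Note how the sketch reflects the limitation described above: the user can only pick from the options listed on this screen, or go back.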
Modern graphical user interface provides tools for easy navigation and
interactivity to the user to perform different tasks.
The following are the advantages of a Graphical User Interface (GUI):
• Various pieces of information can be displayed, allowing the user to switch to a different task directly from the present screen.
• Useful graphical icons and pull-down menus reduce typing effort by the user.
• Keyboard shortcuts are provided to perform frequently performed tasks.
• Various tasks can be operated simultaneously without losing the present context.
Any interface design is targeted at users of different categories:
• Expert users with adequate knowledge of the system and application.
• Average users with reasonable knowledge.
• Novice users with little or no knowledge.
The following are the elements of good interface design:
• The goal and the intention of each task must be identified.
• The important thing about designing interfaces is maintaining consistency. Use of a consistent color scheme, messages and terminology helps.
• Develop standards for good interface design and stick to them.
• Use icons wherever possible to provide appropriate messages.
• Allow the user to undo the current command. This helps in undoing mistakes committed by the user.
• Provide context-sensitive help to guide the user.
• Use a proper navigational scheme for easy navigation within the application.
• Discuss with current users to improve the interface.
• Think from the user's perspective.
• The text appearing on the screen is the primary source of information exchange between the user and the system. Avoid using abbreviations. Be very specific in communicating mistakes to the user; if possible, provide the reason for the error.
• Navigation within the screen is important, and is especially useful for data-entry screens where the keyboard is used intensively to input data.
• Use of color should be of secondary importance. Keep in mind that users may access the application on a monochrome screen.
• Expect the user to make mistakes, and provide appropriate measures to handle such errors through proper interface design.
• Grouping of data elements is important. Group related data items accordingly.
• Justify the data items.
• Avoid high-density screen layouts. Keep a significant amount of the screen blank.
• Make sure an accidental double click instead of a single click does not do something unexpected.
• Provide a file browser. Do not expect the user to remember the path of the required file.
• Provide keyboard shortcuts for frequently done tasks. This saves time.
• Provide an on-line manual to help the user in operating the software.
• Always allow a way out (i.e., cancellation of an action already initiated).
• Warn the user about critical tasks, like deletion of a file or updating of critical information.
• Programmers are not always good interface designers. Take the help of expert professionals who understand human perception better than programmers.
• Include all possible features in the application, even if a feature is available in the operating system.
• Word each message carefully in a user-understandable manner.
• Develop the navigational procedure prior to developing the user interface.
Interface standards:
A user interface is the system by which people (users) interact with a machine.
Why do we need standards?
➢ Despite the best efforts of HCI, we are still getting it wrong.
➢ We specify the system behaviour.
➢ We validate our specification.
➢ We test the code and prove the correctness of our system.
➢ It is not just a design issue or a usability testing issue.
History of user interface standards
• In 1965, human factors specialists worked to make user interfaces accurate and easy to learn.
• In 1985, we realised that usability was not enough; we needed consistency, and standards became important.
• User interface standards are very effective when you are developing, testing or designing any new site or application, or when you are revising over 50 percent of the pages in an existing application or site.
Creating a user interface standard helps you to create user interfaces that are consistent and easy to understand.
Example:
1. Modelling a system which has user-controlled display options.
2. The user can select from one of three choices.
3. The choices determine the size of the current window display.
4. So the designers came up with a schema and presented a first prototype:
Select screen display
FULL
HALF
PANEL
Problem:
➢ User testing shows the system breaks when a user selects more than one option.
➢ The designer fixes it and presents a second prototype. But isn't this the original prototype? The designer has 'improved' it: the user can now only select one checkbox.
➢ The designer has broken the guidelines regarding selection controls.
Guidelines for using selection controls:
➢ Use radio buttons to indicate a set of options that are mutually exclusive: exactly one of them must be selected at a time.
➢ Use checkboxes to indicate one or more options that must be either on or off, but which are not mutually exclusive.
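As a minimal sketch (assuming Python's standard Tkinter library), the display-size example above would use radio buttons, since FULL, HALF and PANEL are mutually exclusive; the extra checkbox shows a genuinely independent on/off option (its label is hypothetical).

    import tkinter as tk

    root = tk.Tk()
    root.title("Select screen display")

    # Radio buttons: FULL, HALF and PANEL are mutually exclusive,
    # so exactly one can be selected at a time.
    display = tk.StringVar(value="FULL")
    for option in ("FULL", "HALF", "PANEL"):
        tk.Radiobutton(root, text=option, variable=display,
                       value=option).pack(anchor="w")

    # A checkbox suits an independent on/off option (hypothetical label).
    grid = tk.BooleanVar()
    tk.Checkbutton(root, text="Show grid", variable=grid).pack(anchor="w")

    root.mainloop()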
Extending the specification:
➢ The design must satisfy our specification.
➢ The design must also satisfy the guidelines.
➢ Find a way to specify the selection-widget guidelines.
➢ Ensure the described property holds in our system.
➢ So, the specification is extended and a revised prototype is presented.
Types of standards:
There are three types of standards:
Methodological standards: This is a checklist to remind developers of the tasks needed to create usable systems, such as user interviews, task analysis and design.
Design standards: This is like a building code, a set of absolute legal requirements that ensure a consistent look and feel.
Design principles: Good design principles are specific and research-based, and help developers work well within the design standards rules.
Building the design standards:
Major activities when building these standards are
➢ Project kick-off and planning
• You collaborate with key members of the project team to define the goals and scope of the user interface standards.
• This includes whether the UI document is to be considered a guideline, standard or style guide, which UI technology it will be based on, and who should participate in its development.
• You work closely with your team and other stakeholders to identify your key business needs and business flows.
➢ Gather user interface samples
Based on the information and direction received from your team, you begin by reviewing your major business applications and extracting examples for the UI standard.
➢ Develop user interface document
The document itself includes
• How to change and update the document.
• Common UI elements and when to use them.
• General navigation, graphic look and feel (or style), error handling, messages.
➢ Review with team
• This is an iterative process that takes feedback from as wide an audience as is appropriate.
• The standard is reviewed and refined with your team and stakeholders in a consensus-building process.
➢ Present user interface document.
• You present the UI document in electronic form or paper form.
Benefits of standards:
1. The goal of UI design is to make user interaction as simple and efficient as possible.
2. Your users or customers see a consistent UI within and between applications.
3. Reduced costs for support, user training packages and job aids.
4. Most importantly, customer satisfaction: your users will see reduced errors, training requirements and frustration time per transaction.
5. Reduced cost and effort for system maintenance.
UNIT-V
What is Software Quality?
Software quality shows how good and reliable a product is. As an example, consider functionally correct software: it performs all the functions laid out in the SRS document, but has an almost unusable user interface. Even though it is functionally correct, we do not consider it to be a high-quality product.
Software Quality Assurance (SQA):
Software Quality Assurance (SQA) is simply a way to assure quality in the
software. It is the set of activities that ensure processes, procedures as well as
standards are suitable for the project and implemented correctly.
Software Quality Assurance is a process that works parallel to Software
Development. It focuses on improving the process of development of software so
that problems can be prevented before they become major issues. Software
Quality Assurance is a kind of Umbrella activity that is applied throughout the
software process.
What is quality?
Quality in a product or service can be defined by several measurable
characteristics. Each of these characteristics plays a crucial role in determining
the overall quality.
SOFTWARE RELIABILITY
Software reliability is defined as the probability of failure-free operation of a software system for a specified time in a specified environment. The key elements of this definition are the probability of failure-free operation, the length of time of failure-free operation, and the given execution environment. Failure intensity is a measure of the reliability of a software system operating in a given environment. Example: an air traffic control system that fails once in two years has a failure intensity of 0.5 failures per year.
Factors Influencing Software Reliability
• A user's perception of the reliability of software depends upon two categories of information.
o The number of faults present in the software.
o The way users operate the system. This is known as the operational
profile.
• The fault count in a system is influenced by the following.
o Size and complexity of code.
o Characteristics of the development process used.
o Education, experience, and training of development personnel.
o Operational environment.
Applications of Software Reliability
The applications of software reliability include:
• Comparison of software engineering technologies.
o What is the cost of adopting a technology?
o What is the return from the technology — in terms of cost and
quality?
• Measuring the progress of system testing – The failure intensity measure tells us about the present quality of the system: high intensity means more tests are to be performed.
• Controlling the system in operation – The amount of change to a software system for maintenance affects its reliability.
• Better insight into software development processes – Quantification of quality gives us a better insight into the development processes.
FUNCTIONAL AND NON-FUNCTIONAL REQUIREMENTS
System functional requirements may specify error checking, recovery features,
and system failure protection. System reliability and availability are specified as
part of the non-functional requirements for the system.
SYSTEM RELIABILITY SPECIFICATION
• Hardware reliability focuses on the probability that a hardware component fails.
• Software reliability focuses on the probability that a software component will produce an incorrect output. Software does not wear out, and it can continue to operate after a bad result.
• Operator reliability focuses on the probability that a system user makes an error.
FAILURE PROBABILITIES
If there are two independent components in a system and the operation of the system depends on them both, then the probability of system failure is P(S) = P(A) + P(B) to a first approximation for small failure probabilities (exactly, P(S) = 1 - (1 - P(A))(1 - P(B))).
If a component is replicated n times, then the probability of system failure is P(S) = P(A)^n, i.e., the system fails only when all n replicated components fail at once.
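A small worked example (a sketch with assumed failure probabilities, not from the source) makes both formulas concrete:

    # Two independent components in series: the system needs both.
    p_a, p_b = 0.01, 0.02
    p_series_approx = p_a + p_b                  # approximation: 0.03
    p_series_exact = 1 - (1 - p_a) * (1 - p_b)   # exact: 0.0298

    # One component replicated n times: the system fails only if all fail.
    n = 3
    p_replicated = p_a ** n                      # 0.01 ** 3 = 1e-06
    print(p_series_approx, p_series_exact, p_replicated)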
FUNCTIONAL RELIABILITY REQUIREMENTS
• The system will check all operator inputs to see that they fall within their
required ranges.
• The system will check all disks for bad blocks each time it is booted.
• The system must be implemented using a standard implementation of Ada.
NON-FUNCTIONAL RELIABILITY SPECIFICATION
The required level of reliability must be expressed quantitatively. Reliability is
a dynamic system attribute. Source code reliability specifications are
meaningless (e.g. N faults/1000 LOC). An appropriate metric should be chosen
to specify the overall system reliability.
HARDWARE RELIABILITY METRICS
Hardware metrics are not suitable for software, since they are based on the notion of component failure. Software failures are often design failures. Often the system is available again after the failure has occurred. Hardware components, by contrast, can wear out.
SOFTWARE RELIABILITY METRICS
Reliability metrics are units of measure for system reliability. System reliability
is measured by counting the number of operational failures and relating these to
demands made on the system at the time of failure. A long-term measurement
program is required to assess the reliability of critical systems.
PROBABILITY OF FAILURE ON DEMAND
The probability that the system will fail when a service request is made. It is useful when requests are made on an intermittent or infrequent basis. It is appropriate for protection systems, where service requests may be rare and the consequences can be serious if the service is not delivered. It is relevant for many safety-critical systems with exception handlers.
RELIABILITY METRICS
• Probability of Failure on Demand (PoFoD)
o PoFoD = 0.001 means that, on average, one in every 1,000 service requests fails.
• Rate of Occurrence of Failure (RoCoF)
o RoCoF = 0.02 means two failures in each 100 operational time units.
• Mean Time to Failure (MTTF)
o The average time between observed failures (also known as MTBF).
o It measures the time between observable system failures.
o For stable systems, MTTF = 1/RoCoF.
o It is relevant for systems where individual transactions take a lot of processing time (e.g., CAD or word-processing systems).
• Availability = MTBF / (MTBF + MTTR)
o MTBF = Mean Time Between Failures
o MTTR = Mean Time To Repair
• Reliability = MTBF / (1 + MTBF)
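To make these metrics concrete, the following sketch computes them from hypothetical observations; the numbers are assumptions chosen to reproduce the example values above, and MTTF is treated as interchangeable with MTBF, as the notes do.

    # Hypothetical observations over a test period.
    requests = 10_000          # service requests made
    failed_requests = 10       # requests that failed
    operational_hours = 500    # total operational time
    failures = 10              # observed failures
    repair_hours = 5           # total time spent on repairs

    pofod = failed_requests / requests     # 0.001
    rocof = failures / operational_hours   # 0.02 failures per hour
    mttf = operational_hours / failures    # 50 hours (= 1/RoCoF here)
    mttr = repair_hours / failures         # 0.5 hours per repair
    availability = mttf / (mttf + mttr)    # about 0.990
    print(pofod, rocof, mttf, availability)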
TIME UNITS
Time units include:
• Raw execution time, which is employed in non-stop systems.
• Calendar time, which is employed when the system has regular usage patterns.
• Number of transactions, which is employed for demand-type transaction systems.
AVAILABILITY
Availability measures the fraction of time the system is really available for use. It takes repair and restart times into account. It is relevant for non-stop, continuously running systems (e.g., traffic signals).
FAILURE CONSEQUENCES – STUDY 1
Reliability does not take consequences into account. Transient faults may have no real consequences, but other faults might cause data loss or corruption. Hence, it may be worthwhile to identify different classes of failure and use different metrics for each.
FAILURE CONSEQUENCES – STUDY 2
When specifying reliability both the number of failures and the consequences
of each matter. Failures with serious consequences are more damaging than
those where repair and recovery is straightforward. In some cases, different
reliability specifications may be defined for different failure types.
FAILURE CLASSIFICATION
Failures can be classified as follows:
• Transient – occurs only with certain inputs.
• Permanent – occurs with all inputs.
• Recoverable – the system can recover without operator help.
• Unrecoverable – the operator has to help.
• Non-corrupting – the failure does not corrupt system state or data.
• Corrupting – system state or data are altered.
BUILDING RELIABILITY SPECIFICATION
Building a reliability specification involves analysing the consequences of possible system failures for each sub-system. From the system failure analysis, partition the failures into appropriate classes. For each class, specify the appropriate reliability metric.
SPECIFICATION VALIDATION
It is impossible to empirically validate high-reliability specifications. "No database corruption" really means a PoFoD of less than 1 in 200 million. If each transaction takes 1 second to verify, simulating one day's transactions (roughly 300,000, i.e., about 3.5 days of verification at one transaction per second) takes 3.5 days.
Software testing:
Software testing is an important process in the software development lifecycle. It involves verifying and validating that a software application is free of bugs, meets the technical requirements set by its design and development, and satisfies user requirements efficiently and effectively.
This process ensures that the application can handle all exceptional and boundary cases, providing a robust and reliable user experience. By systematically identifying and fixing issues, software testing helps deliver high-quality software that performs as expected in various scenarios.
Software testing is a method to assess the functionality of a software program. The process checks whether the actual software matches the expected requirements and ensures the software is bug-free. The purpose of software testing is to identify errors, faults, or missing requirements in contrast to the actual requirements. It mainly aims at measuring the specification, functionality, and performance of a software program or application.
Software testing can be divided into two steps
1. Verification: It refers to the set of tasks that ensure that the software
correctly implements a specific function. It means “Are we building the
product right?”.
2. Validation: It refers to a different set of tasks that ensure that the
software that has been built is traceable to customer requirements. It
means “Are we building the right product?”.
Different Types of Software Testing
Software testing methods include manual and automated testing; both functional and non-functional testing are used to improve quality assurance, software reliability and performance, and to ensure user satisfaction.
Apart from regression testing, automation testing is also used to test the application from a load, performance, and stress point of view. It increases test coverage, improves accuracy, and saves time and money when compared to manual testing.
Based on the tester's knowledge of the software internals, testing can be divided as follows:
1. Black-box testing: Testing in which the tester does not have access to the source code of the software, and which is conducted at the software interface without any concern for the internal logical structure of the software.
2. White-box testing: Testing in which the tester is aware of the internal workings of the product and has access to its source code, and which is conducted by making sure that all internal operations are performed according to the specifications.
3. Grey-box testing: Testing in which the testers have some knowledge of the implementation, but need not be experts.
Path Testing:
Path testing is a method used to design test cases. In the path testing method, the control flow graph of a program is constructed in order to find a set of linearly independent paths of execution. Cyclomatic complexity is used to determine the number of linearly independent paths, and then test cases are generated for each path. Path testing gives complete branch coverage, but achieves it without covering all possible paths of the control flow graph. McCabe's cyclomatic complexity is used in path testing. It is a structural testing method that uses the source code of a program to find every possible executable path.
• Cyclomatic complexity: After generating the control flow graph, calculate the cyclomatic complexity of the program as V(G) = E - N + 2, where E is the number of edges and N is the number of nodes in the graph.
• Make set: Make a set of all the paths according to the control flow graph and the calculated cyclomatic complexity. The cardinality of the set is equal to the calculated cyclomatic complexity.
• Create test cases: Create a test case for each path in the set obtained in the above step.
• Independent paths: An independent path is a path through a decision-to-decision path graph that cannot be reproduced from other paths by other methods.
A limitation is that some test paths may skip some of the conditions in the code, so errors in those conditions or scenarios may go undetected. A sketch of the method follows.
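The sketch below applies the method to a hypothetical function with a single if-else decision.

    # Hypothetical function under test.
    def classify(x):
        if x < 0:                    # decision node
            result = "negative"
        else:
            result = "non-negative"
        return result                # join node

    # Control flow graph: N = 4 nodes (decision, two branches, join)
    # and E = 4 edges, so V(G) = E - N + 2 = 4 - 4 + 2 = 2.
    # Hence two linearly independent paths, one test case each:
    assert classify(-5) == "negative"       # path through the if branch
    assert classify(3) == "non-negative"    # path through the else branch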
Data Flow Testing:
The data flow testing method chooses the test paths of a program based on the locations of the definitions and uses of the variables in the program. Suppose each statement in a program is assigned a unique statement number S, and that a function cannot modify its parameters or global variables. Then DEF(S) is the set of variables defined (assigned a value) in statement S, and USE(S) is the set of variables used (read) in statement S. If statement S is an if or loop statement, its DEF set is empty and its USE set depends on the condition of S. The definition of a variable X at statement S is said to be live at statement S' if there is a path from S to S' that contains no other definition of X. A definition-use (DU) chain of variable X has the form [X, S, S'], where S and S' denote statement numbers, X is in DEF(S) and USE(S'), and the definition of X in statement S is live at statement S'. A simple data flow testing strategy requires that every DU chain be covered at least once; this is known as the DU testing strategy. DU testing does not guarantee coverage of all branches of a program; however, a branch fails to be covered by DU testing only in rare cases, such as an if-then-else in which the 'then' part contains no definition of any variable used later and the 'else' part is absent. Data flow testing strategies are appropriate for choosing test paths of a program containing nested if and loop statements. A small illustration follows.
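The following sketch (with hypothetical statement numbering) shows DEF and USE sets and the DU chains they induce.

    # Hypothetical function with statements numbered S1..S4.
    def scale(values, factor):
        total = 0                  # S1: DEF(S1) = {total}
        for v in values:           # S2: USE(S2) = {values}
            total = total + v      # S3: DEF(S3) = {total}, USE(S3) = {total, v}
        return total * factor      # S4: USE(S4) = {total, factor}

    # DU chains for total include [total, S1, S3], [total, S3, S4]
    # and [total, S1, S4]. DU testing requires each chain to be covered
    # at least once, e.g. scale([], 2) covers [total, S1, S4] (loop
    # skipped) and scale([1, 2], 2) covers [total, S1, S3] and
    # [total, S3, S4].
    assert scale([], 2) == 0
    assert scale([1, 2], 2) == 6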
Loop Testing:
Loop testing focuses on the validity of loop constructs. Four classes of loops can be tested, as follows (a test sketch for simple loops appears after this list):
1. Simple loops – The following set of tests can be applied to simple loops, where n is the maximum allowable number of passes through the loop:
1. Skip the entire loop.
2. Traverse the loop only once.
3. Traverse the loop two times.
4. Make p passes through the loop, where p < n.
5. Traverse the loop n-1, n, and n+1 times.
2. Concatenated loops – If the loops are not dependent on each other, concatenated loops can be tested using the approach used for simple loops. If the loops are interdependent, the steps for nested loops are followed.
3. Nested loops – Loops within loops are called nested loops. When testing nested loops, the number of tests increases as the level of nesting increases. The steps for testing nested loops are as follows:
1. Start with the innermost loop; set all other loops to minimum values.
2. Conduct simple loop testing on the innermost loop.
3. Work outwards.
4. Continue until all loops have been tested.
4. Unstructured loops – This type of loop should be redesigned, whenever possible, to reflect the use of structured programming constructs.
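A sketch of simple-loop testing on a hypothetical loop with a maximum of n = 10 passes:

    # Hypothetical unit: sums the first k elements, k <= n.
    def sum_first(items, k):
        total = 0
        for i in range(k):         # the simple loop being exercised
            total += items[i]
        return total

    data = list(range(10))         # n = 10 is the maximum number of passes
    # Skip, once, twice, p < n, n-1 and n passes through the loop:
    for passes in (0, 1, 2, 5, 9, 10):
        assert sum_first(data, passes) == sum(data[:passes])
    # Attempting n+1 passes probes the boundary; here sum_first(data, 11)
    # raises an IndexError, exactly the kind of defect loop testing hunts.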
Black-box testing techniques include:
1. Functional Testing
2. Regression Testing
Functional Testing
• Functional testing is not concerned with the source code of the application. Each functionality of the software application is tested by providing appropriate test inputs, predicting the expected output, and comparing the actual output with the expected output.
Regression Testing
Regression testing re-executes existing test cases after the software is modified, to ensure that previously working functionality has not been broken.
Generating test cases using equivalence classes: (i) Assign a unique identification number to each valid and invalid class of input. (ii) Write test cases covering all valid and invalid classes, ensuring that no two invalid inputs mask each other. For example, to calculate the square root of a number, the equivalence classes will be: (a) Valid inputs: positive decimals. (b) Invalid inputs: negative numbers. A sketch of the resulting test cases follows.
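Assuming Python's math.sqrt as the unit under test (the class numbering is illustrative):

    import math

    # EC1 (valid): positive decimals.
    assert math.isclose(math.sqrt(6.25), 2.5)
    # EC2 (invalid): negative numbers must be rejected.
    try:
        math.sqrt(-4)
        assert False, "expected a ValueError for negative input"
    except ValueError:
        pass                       # rejection is the expected behaviour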
Compatibility testing – The test case results depend not only on the product but also on the infrastructure used to deliver its functionality. When infrastructure parameters (such as the underlying platform) are changed, the software is still expected to work properly.
Popular automation testing tools include:
1. Appium
2. Selenium
3. Microsoft Coded UI
4. Applitools
5. HP QTP
Integration testing:
Integration testing verifies the interaction between integrated units. Among other things, it discovers the errors that occur while initiating and terminating functions.
• To avoid missing out on any integration scenarios, a proper integration sequence has to be followed.
• The major focus of integration testing is exposing defects at the time of interaction between the integrated units.
Big-bang integration is the simplest integration testing approach: all the modules are combined and the functionality is verified after the completion of individual module testing. In simple words, all the modules of the system are simply put together and tested. Its drawbacks are:
• There will be quite a lot of delay, because you have to wait for all the modules to be integrated.
• High-risk critical modules are not isolated and tested on priority, since all modules are tested at once.
• It may not provide enough visibility into the interactions and data exchange between components.
In bottom-up testing, each module at the lower levels is tested with the higher-level modules until all modules have been tested. The primary purpose of this integration testing is that each subsystem tests the interfaces among the various modules making up the subsystem. Bottom-up integration uses test drivers to drive and pass appropriate data to the lower-level modules; a sketch of such a driver follows.
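A sketch of such a test driver (the module and its expected values are hypothetical):

    # Lower-level module, already unit-tested in isolation.
    def compute_tax(amount):
        return round(amount * 0.18, 2)

    # Test driver: stands in for the not-yet-integrated higher-level
    # module and passes appropriate data to the lower-level module.
    def driver():
        cases = [(100.0, 18.0), (0.0, 0.0), (19.99, 3.6)]
        for amount, expected in cases:
            assert compute_tax(amount) == expected

    driver()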
• A disadvantage of bottom-up testing is the complexity that occurs when the system is made up of a large number of small subsystems.
• The mixed (sandwich) approach is useful for very large projects having several sub-projects, and it overcomes the shortcomings of both the top-down and bottom-up approaches.
• Mixed integration testing requires a very high cost, because one part follows a top-down approach while another part follows a bottom-up approach.
• This integration testing cannot be used for smaller systems with huge interdependence between the different modules.
2. Create a test plan: Develop a test plan that outlines the scenarios and
test cases that need to be executed to validate the integration points
between the different components. This could include testing data flow,
communication protocols, and error handling.
4. Execute the tests: Execute the tests outlined in your test plan, starting
with the most critical and complex scenarios. Be sure to log any defects
or issues that you encounter during testing.
6. Repeat testing: Once defects have been fixed, repeat the integration
testing process to ensure that the changes have been successful and that
the application still works as expected.
After each validation test case has been conducted, one of two possible conditions exists:
1. The function or performance characteristics conform to specification and are accepted, or
2. a deviation from specification is uncovered and a deficiency list is created.
Alpha and beta tests are used to uncover errors that only the end-user seems able to find.
The beta test is conducted at one or more customer sites by the end-users of the software. Unlike alpha testing, the developer is generally not present. Therefore, the beta test is a "live" application of the software in an environment that cannot be controlled by the developer. The customer records all problems (real or imagined) that are encountered during beta testing and reports these to the developer at regular intervals. As a result of problems reported during beta tests, software engineers make modifications and then prepare for the release of the software product to the entire customer base.
System Testing:
System testing is actually a series of different tests whose primary
purpose is to fully exercise the computer-based system. Although each test has
a different purpose, all work to verify that system elements have been properly
integrated and perform allocated functions.
Reverse Engineering:
5. Fixing bugs and maintenance: Reverse engineering can help find and
repair flaws or provide updates for systems for which the original source
code is either unavailable or inadequately documented.
Re-engineering:
What is Re-engineering?
Re-engineering, also known as software re-engineering, is the process of
analyzing, designing, and modifying existing software systems to
improve their quality, performance, and maintainability.
1. This can include updating the software to work with new hardware or
software platforms, adding new features, or improving the software’s
overall design and architecture.
Objective of Re-engineering
2. Analysis: The next step is to analyze the existing system, including the
code, documentation, and other artifacts. This involves identifying the
system’s strengths and weaknesses, as well as any issues that need to be
addressed.
3. Design: Based on the analysis, the next step is to design the new or
updated software system. This involves identifying the changes that
need to be made and developing a plan to implement them.
5. Testing: Once the changes have been implemented, the software system
needs to be tested to ensure that it meets the new requirements and
specifications.
1. Inventory Analysis
2. Document Reconstruction
3. Reverse Engineering
4. Code Reconstruction
5. Data Reconstruction
6. Forward Engineering
Advantages of Re-engineering
Disadvantages of Re-engineering
4. Risk of failure: Re-engineering projects can fail if they are not planned
and executed properly, resulting in wasted resources and lost
opportunities.
CASE Tools:
CASE tools are a set of software application programs which are used to automate SDLC activities. CASE tools are used by software project managers, analysts and engineers to develop software systems.
CASE tools can be broadly divided into the following parts based on their
use at a particular SDLC stage:
• Central Repository - CASE tools require a central repository, which can serve as a source of common, integrated and consistent information. The central repository is a central place of storage where product specifications, requirement documents, related reports and diagrams, and other useful management information are stored. The central repository also serves as a data dictionary.
• Upper Case Tools - Upper CASE tools are used in planning, analysis
and design stages of SDLC.
• Lower Case Tools - Lower CASE tools are used in implementation,
testing and maintenance.
• Integrated Case Tools - Integrated CASE tools are helpful in all the
stages of SDLC, from Requirement gathering to Testing and
documentation.
CASE tools can be grouped together if they have similar functionality, process
activities and capability of getting integrated with other tools.
These tools are used for project planning, cost and effort estimation, project scheduling and resource planning. Managers have to ensure that project execution strictly complies with every step of software project management. Project management tools help in storing and sharing project information in real time throughout the organization. For example: Creative Pro Office, Trac Project, Basecamp.
Analysis Tools
Design Tools
These tools help software designers to design the block structure of the software, which may be further broken down into smaller modules using refinement techniques. These tools provide detailing of each module and of the interconnections among modules. For example: Animated Software Design.
Programming Tools
Integration testing tools are used to test the interfaces between modules and find the bugs that may arise from the integration of multiple modules. The main objective of these tools is to make sure that the specific modules work as per the client's needs. These tools are used to construct integration testing suites.
o Citrus
o FitNesse
o TESSY
o Protractor
o Rational Integration tester
Software Development Life Cycle (SDLC)
A software life cycle model (also termed a process model) is a pictorial and diagrammatic representation of the software life cycle. A life cycle model represents all the methods required to make a software product transit through its life cycle stages.
Business analysts and project organizers set up a meeting with the client to gather all the data, such as what the customer wants to build, who the end user will be, and what the objective of the product is. Before creating a product, a core understanding or knowledge of the product is very necessary.
A rough plan and road map for the software are then made using algorithms and models.
The next phase is about bringing together all the knowledge of the requirements, analysis, and design of the software project. This phase is the product of the previous two, taking inputs from the customer, the requirement gathering, and the blueprint of the software.
Stage4: Developing the project
In this phase of the SDLC, the actual development begins and the program is built. The implementation of the design begins with writing code. Developers have to follow the coding guidelines described by their management, and programming tools like compilers, interpreters, debuggers, etc. are used to develop and implement the code.
Stage5: Testing
After the code is generated, it is tested against the requirements to make sure
that the products are solving the needs addressed and gathered during the
requirements stage.
During this stage, unit testing, integration testing, system testing, acceptance
testing are done.
Stage6: Deployment
Once the software is certified, and no bugs or errors are stated, then it is
deployed.
Stage7: Maintenance
Once the client starts using the developed system, the real issues come up and requirements need to be addressed from time to time. This procedure, where care is taken of the developed product, is known as maintenance.
Waterfall Model:
The Waterfall Model was the first process model to be introduced. It is also referred to as a linear-sequential life cycle model or the classic model. It is very simple to understand and use. In a waterfall model, each phase must be completed before the next phase can begin, and there is no overlapping of the phases.
The Waterfall model is the earliest SDLC approach that was used for software
development.
The waterfall Model illustrates the software development process in a linear
sequential flow. This means that any phase in the development process begins
only if the previous phase is complete. In this waterfall model, the phases do
not overlap.
Waterfall approach was first SDLC Model to be used widely in Software
Engineering to ensure success of the project. In "The Waterfall" approach, the
whole process of software development is divided into separate phases. In
this Waterfall model, typically, the outcome of one phase acts as the input for
the next phase sequentially.
The following illustration is a representation of the different phases of the
Waterfall Model.
Spiral Model:
The spiral model has four phases. A software project repeatedly passes
through these phases in iterations called Spirals.
Planning phase
This phase starts with gathering the business requirements in the baseline spiral. In the subsequent spirals, as the product matures, identification of system requirements, subsystem requirements and unit requirements is done in this phase.
This phase also includes understanding the system requirements through continuous communication between the customer and the system analyst. At the end of the spiral, the product is deployed in the identified market.
Risk Analysis
Risk analysis includes identifying, estimating and monitoring the technical feasibility and management risks, such as schedule slippage and cost overrun. After testing the build at the end of the first iteration, the customer evaluates the software and provides feedback.
Engineering or construct phase
The Construct phase refers to the production of the actual software product at every spiral. In the baseline spiral, when the product is just thought of and the design is being developed, a POC (Proof of Concept) is developed in this phase to get customer feedback.
Evaluation Phase
This phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral.
Software project repeatedly passes through all these four phases.
Advantages:
Flexible model.
Project monitoring is very easy and effective.
Risk management.
Easy and frequent feedback from users.
Disadvantages:
It does not work well for smaller projects.
Risk analysis requires specific expertise.
It is a costly and complex model.
Project success is highly dependent on the risk analysis.
Prototype Model:
In this model, the proposed system is presented to the client for an initial evaluation. This helps to find out the strengths and weaknesses of the working model. Comments and suggestions are collected from the customer and provided to the developer.
If the user is not happy with the current prototype, the prototype is refined according to the user's feedback and suggestions.
This phase is not over until all the requirements specified by the user are met. Once the user is satisfied with the developed prototype, a final system is developed based on the approved final prototype.
Disadvantages:
SDLC - V-Model
Under the V-Model, the corresponding testing phase for each development phase is planned in parallel. So there are verification phases on one side of the 'V' and validation phases on the other side. The coding phase joins the two sides of the V-Model.
1. Unit Testing: In the V-Model, Unit Test Plans (UTPs) are developed during the module design phase. These UTPs are executed to eliminate errors at the code or unit level. A unit is the smallest entity which can exist independently, e.g., a program module. Unit testing verifies that the smallest entity can function correctly when isolated from the rest of the code/units (see the sketch after this list).
2. Integration Testing: Integration Test Plans are developed during
the Architectural Design Phase. These tests verify that groups
created and tested independently can coexist and communicate
among themselves.
3. System Testing: System Test Plans are developed during the System Design Phase. Unlike Unit and Integration Test Plans, System Test Plans are composed by the client's business team. System testing ensures that expectations from the developed application are met.
4. Acceptance Testing: Acceptance testing is related to the business requirement analysis phase. It involves testing the software product in the user environment. Acceptance tests reveal compatibility problems with the other systems available in the user environment. They also uncover non-functional problems, such as load and performance defects, in the real user environment.
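As a minimal sketch of executing a unit test plan (the unit and the choice of Python's built-in unittest framework are assumptions, not part of the V-Model itself):

    import unittest

    # Hypothetical unit: the smallest independently testable entity.
    def absolute(x):
        return -x if x < 0 else x

    class AbsoluteUnitTest(unittest.TestCase):
        def test_negative(self):
            self.assertEqual(absolute(-3), 3)

        def test_non_negative(self):
            self.assertEqual(absolute(5), 5)
            self.assertEqual(absolute(0), 0)

    if __name__ == "__main__":
        unittest.main()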
Advantages:
• Easy to understand.
• Testing activities like test planning and test designing happen well before coding. This saves a lot of time, hence a higher chance of success over the waterfall model.
• Avoids the downward flow of defects.
• Works well for small projects where requirements are easily understood.
Disadvantages:
RAD Model:
The RAD model distributes the analysis, design, build and test phases into a series of short, iterative development cycles.
4. Testing: After completing the coding phase, software testing starts using different test methods. There are many test methods, but the most common are the white box, black box, and grey box test methods.
Advantages:
Disadvantages: