Software Engineering Overview

Q. What is software engineering? Discuss its phases.
Q. Why is SDLC important in developing large software?
Q. Discuss the models.
Q. Write the organization of the software project management plan. What is the important item the SPMP document has to have?
Q. Explain LOC as the measure of the problem size.
Q. Discuss the main three techniques of project estimation parameters.
Q. Explain the COCOMO model, which helps to find the approximate software cost.
Q. Explain the following terms with respect to risk management.
Q. Importance of scheduling in project management.
Q. What do you understand by software requirement analysis?
Q. What is good software design? Explain cohesiveness and coupling.
Q. Give an overview of object-oriented concepts. Discuss the following terms:
Q. Explain characteristics of a good user interface design.
Q. What is testing? Explain unit testing.
Q. Explain black box testing & integration testing.
Q. What do you understand by software reliability? Explain reliability metrics and reliability growth modeling.
Q. Explain programs vs. software products.
⌦ Programs are developed by individuals for their personal use, whereas software products are developed by teams of software engineers.
⌦ Programs are usually small in size and have limited functionality, whereas software products are large in size and have multiple users.
⌦ Usually the author of a program uses and maintains it himself, so it may lack a proper user interface and proper documentation. Software products, on the other hand, have multiple users and therefore need a good user interface, proper user manuals and good documentation support.
⌦ For example, the programs developed by students as part of their class assignments are just programs, not software products. Since a software product has a large number of users, it must be properly designed, carefully implemented and thoroughly tested.
⌦ A program consists only of program code, whereas a software product consists not only of program code but also of associated documents, such as the requirements specification document, design document, test document and user manual.
⌦ Software engineering is the establishment and use of sound engineering principles in order to obtain, economically, software that is reliable and works efficiently on real machines.
Fritz Bauer
⌦ Software engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation and maintenance of software; that is, the application of engineering to software.
IEEE
⌦ As computers became more powerful with the advent of integrated circuits, they were used to solve more complex problems. The control flow-based program development techniques were not sufficient to handle these problems, and more effective program development techniques were needed.
⌦ While developing a program, it is more important to consider the design of the data structures of the program than the design of its control structure.
⌦ Design techniques based on this principle are called data structure-oriented design techniques.
⌦ The program code structure should correspond to the data structure. Data structure-oriented design helps avoid errors related to data.
⌦ As the requirements for more complex, integrated and sophisticated software arose, the new concept of data flow-oriented techniques was proposed.
⌦ In this concept, the major data items handled by a system must first be identified, and then the processing required on these data items to produce the required outputs should be determined.
⌦ The data flow techniques identify the different processing stations in a system and the data that flow between these processing stations.
⌦ This is useful in creating a data flow model of the entire system, which covers all the processing and data flow in the system. For example, the figure below represents the data flow of a car assembly unit, where each processing station consumes certain input items and produces certain outputs.
[Figure: data flow representation of a car assembly unit, with stores such as the Engine Store and Door Store supplying the processing stations]
⌦ With further advancements in the field of software design, the data flow-oriented technique evolved into the concept of object-oriented design.
⌦ Object-oriented design is an approach where the natural objects occurring in a problem, such as employees, payroll register etc., are first identified, and then the relationships among the objects, such as composition, reference and inheritance, are determined.
⌦ Each object essentially acts as a data hiding or data abstraction entity.
⌦ The object-oriented design approach targets the convenience of the user rather than that of the developer.
Q. Why is SDLC important in developing large software?
⌦ An SDLC identifies all the activities that are required to develop and maintain software throughout its lifetime. This is done by using software development life cycle models.
⌦ When a software product is being developed by a team, there must be a precise understanding among the members as to when to do what; otherwise it may lead to chaos and failure of the entire project.
⌦ To avoid this situation, there must be entry and exit criteria for every phase of software development. Only when these entry and exit criteria are satisfied can the corresponding phase be entered or exited.
⌦ For example, the software requirements specification (SRS) document, once completed, is reviewed by the development team and finally approved by the customer. SRS documents contain well-defined entry and exit criteria for the various phases.
⌦ With a specified SRS document, it becomes easier for the software project manager to monitor the progress of the project. Thus a major advantage of a well-defined SDLC model is that it helps control and systematically organize the various activities. We can say that an SDLC encourages the development of software in a systematic and disciplined manner.
⌦ When a well-defined SDLC model is adhered to, the project manager can easily tell at which stage (i.e. design, code, test etc.) of development the project currently is. If no SDLC model is adhered to, it is very difficult to determine the progress of the project, and the project manager has to depend on the estimates of the team members.
⌦ The above situation usually leads to a problem known as the 99% complete syndrome, in which there is no definite way to assess the actual progress of the project.
Q. Discuss the models.
► Classical Waterfall Model
This model divides the life cycle of a software development process into the phases shown in the figure below.
[Figure: classical waterfall model — Feasibility Study → Requirements Analysis and Specification → Design → Coding and Unit Testing → Integration and System Testing → Maintenance]
⌦ During each phase of the life cycle model, a set of well-defined activities is carried out.
⌦ Each phase of the model has well-defined starting and ending points, so software engineers know precisely when to stop a phase and start the next one.
Feasibility Study
⌦ The main aim of the feasibility study is to determine whether developing the software product is financially and technically feasible. It involves analysis of the problem and collection of the data which would be input to the system, and of the processing required to produce the desired outputs.
⌦ The collected data are analyzed to arrive at the following:
☻ An abstract definition of the problem.
☻ Formulation of the different solution strategies.
☻ Examination of the alternative solution strategies and their benefits, indicating the resources required, development cost and time for each alternative solution.
☻ A determination of whether any of the solutions is infeasible due to high cost, resource constraints or extraordinary technical reasons.
Requirements Analysis and Specification
⌦ The basic aim of this phase is to understand the exact requirements of the customer and to document them properly. This phase consists of two distinct activities: requirements analysis and requirements specification.
☻ Requirements analysis
☺ The goal of requirements analysis is to collect and analyze all related data to understand the customer's requirements clearly and to remove inconsistencies and incompleteness in these requirements.
☺ Requirements analysis starts with the collection of all relevant data from the users through interviews, discussions and questionnaires. If the data contain any ambiguity, inconsistency or incompleteness, then after resolving them all the requirements are properly organized into the SRS document.
☻ Requirements specification
☺ In this phase the user's functional requirements, non-functional requirements, and special requirements on the maintenance and development of the software are properly organized and documented in the SRS document.
Design
⌦ The goal of the design phase is to transform the requirements specification into a structure that is suitable for implementation in some programming language. There are two distinct approaches to design.
☻ Traditional approach
☺ During this phase, first a structured analysis of the requirements specification is carried out.
☺ The various processing activities of the system are identified, and the data flow between the processes is also identified. To do this, the data flow diagramming (DFD) technique is used. Based on this, the detailed design is carried out.
☻ Object-oriented approach
☺ First, the various objects that occur in the problem and the solution are identified, and the relationships among these objects are determined.
☺ This object structure is further refined to obtain the detailed design.
☺ This approach has several advantages, such as lower development effort and time, and better maintainability.
Coding and Unit Testing
☺ The purpose of this phase is to translate the software design into source code.
☺ Each component of the design is implemented independently as a separate module (unit).
☺ These units are tested, debugged and documented. The purpose of unit testing is to determine the correct working of each module or unit. This involves a clear definition of the test cases, the testing criteria, and the management of test cases.
Maintenance
Perfective Maintenance
To improve the implementation of the system and to enhance its functionality according to the customer's requirements.
Adaptive Maintenance
Porting the software to a new environment, e.g. to a new computer or a new operating system.
► Prototype Model
This model suggests that before development of the actual software, a working prototype of the system should be built first. The reasons for developing a prototype are as follows:
⌦ Illustrating the input data formats, messages and reports to the customer helps the developers get a clear understanding of the customer's needs.
⌦ Prototyping is used to critically examine the technical issues associated with product development. Often, major design decisions depend on issues such as the response time of a hardware controller or the efficiency of a sorting algorithm.
⌦ The prototyping model is shown in the figure below.
[Figure: Prototyping model of software development — Requirements Gathering → Quick Design → Build Prototype → Customer Evaluation of the Prototype → refine requirements as per customer suggestions; once acceptable to the customer → Design → Implement → Test → Maintain]
⌦ The model starts with the initial requirements gathering phase. A quick design is carried out and the prototype is built using several shortcuts. The shortcuts might involve using inefficient, inaccurate or dummy functions.
⌦ The developed prototype is submitted to the customer for evaluation. If any suggestions are given by the customer, the requirements are refined as per the customer's desires.
⌦ This cycle continues until the customer approves the prototype. The actual system is then developed. In this model, the requirements analysis and specification phase is the most important.
⌦ The overall cost of the software might decrease with this model, because many user requirements are properly defined and technical issues are resolved during the execution of the prototype, thus minimizing change requests and redesign costs after the system is delivered to the customer.
► Spiral Model
⌦ The spiral model combines elements of the prototyping and waterfall approaches: development proceeds in a series of loops (spirals), and each loop involves setting objectives, identifying and resolving risks, developing and validating the product increment, and planning the next iteration.
Q. Write the organization of the software project management plan. What is the important item the SPMP document has to have?
⌦ Once a project is found to be feasible, the project manager undertakes project planning. Project planning consists of the following important activities:
☻ Effort, cost, resource and project duration estimation.
☻ Risk identification, analysis and abatement procedures.
☻ Project scheduling.
☻ Staff organization and staffing plans.
☻ Miscellaneous plans, such as the quality assurance plan, configuration management plan etc.
The results of project planning are documented in the software project management plan (SPMP) document, which is organized as follows:
1) Introduction
(a) Objectives
(b) Major functions
(c) Performance issues
(d) Management and technical constraints
2) Project Estimates
(a) Historical data used
(b) Estimation techniques used
(c) Effort, resource, cost and project duration estimates
3) Risk Management Plan
(a) Risk analysis
(b) Risk identification
(c) Risk estimation
(d) Risk abatement procedures
4) Schedule
(a) Work breakdown structure
(b) Task network representation
(c) Gantt chart representation
(d) PERT chart representation
5) Project Resources
(a) People
(b) Hardware and software
(c) Special resources
6) Staff Organization
(a) Team structure
(b) Management and control plan
7) Project Tracking and Control Plan
8) Miscellaneous Plans
(a) Project tailoring
(b) Quality assurance
(c) Configuration management
(d) Validation and verification
(e) System testing
(f) Delivery, installation and maintenance
Q. Explain LOC as the measure of the problem size.
⌦ The simplest measure of problem size is lines of code (LOC). This metric is very popular primarily because it is simple to use. It measures the number of source instructions required to solve the problem; lines used for commenting the code and header lines are ignored. (A minimal counting sketch appears after the list of shortcomings below.)
⌦ Estimating the LOC at the end of a project is very simple, but doing so at the beginning of a project is very tricky.
⌦ To estimate the LOC at the beginning of a project, project managers divide the problem into modules, each module into sub-modules, and so on, until the sizes of the different leaf-level modules can be approximately predicted. By estimating the sizes of the lowest-level modules, project managers arrive at the total size estimate. Past experience in developing similar products is helpful here. However, LOC as a measure of problem size has several shortcomings:
☻ LOC gives a numerical value of problem size that varies with coding style, as different programmers have different styles. For example, one programmer might write several source instructions on a single line, whereas another might split a single instruction across several lines. This problem can be overcome by counting the language tokens in the program rather than the lines of code.
☻ A good problem size measure should consider the overall complexity of the problem and the effort needed to solve it. LOC only counts the number of source lines in the final program; it does not consider the complexity of the analysis, design, testing etc. Coding is only a small part of the overall software development process.
☻ The LOC measure does not consider the quality and efficiency of the code. For example, some programmers produce lengthy and complicated code; such code might have more source instructions than a neat and efficient program solving the same problem, and would therefore have a higher LOC count.
☻ LOC considers only textual complexity but not the more important logical complexity. A program with complex logic requires much more effort to develop than a program with very simple logic.
☻ It is very difficult to obtain an accurate LOC estimate from a problem specification. LOC can be computed only after the code has been fully developed. Therefore, LOC is of little use to project managers at the beginning of a project.
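The sketch below, in Python, illustrates the counting rule described above: blank lines and comment lines are ignored. The '#'-comment convention and the count_loc name are assumptions for illustration, not part of any standard tool; real tools also handle block comments or count language tokens instead of lines.

import textwrap

def count_loc(source_text: str) -> int:
    # Count source lines, ignoring blank lines and comment lines.
    loc = 0
    for line in source_text.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            loc += 1  # a non-blank, non-comment source line
    return loc

program = textwrap.dedent("""\
    # compute factorial
    def fact(n):
        result = 1
        for i in range(2, n + 1):
            result *= i
        return result
""")
print(count_loc(program))  # -> 5 (the comment line is not counted)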
Q. Discuss the main three techniques of project estimation parameters.
► Empirical Estimation Techniques
These techniques are based on making an educated guess of the project parameters using past experience. Although these techniques are based on common sense, the different activities involved in estimation have been formalized over the years. There are two types of empirical techniques:
☻ Expert judgement
☻ Delphi technique
► Heuristic Techniques
Heuristic techniques assume that the relationships among the different project parameters can be modeled using suitable mathematical expressions. A static single-variable estimation model takes the form:
Resource = C1 * (e ^ D1)
where 'e' is a characteristic of the software which has already been estimated, and the resource to be predicted could be the effort, project duration, staff size etc. The constants 'C1' and 'D1' can be determined using the data collected from past projects (historical data). The Basic COCOMO model is an example of a static single-variable cost estimation model.
► Analytical Estimation Techniques
• These techniques derive the required results starting from certain basic assumptions regarding the project; unlike empirical techniques, they have a scientific basis.
• This technique is not very useful for planning the development of a project, but it is very useful for estimating software maintenance efforts.
• For estimating software maintenance efforts, this technique is more useful than both of the previous techniques.
• Halstead's software science is an example of an analytical estimation technique.
Q. Explain the COCOMO model, which helps to find the approximate software cost.
► Basic COCOMO Model
The Basic COCOMO model gives an approximate estimate of the project parameters through the following expressions:
Effort = a1 * (KLOC) ^ a2 PM
Tdev = b1 * (Effort) ^ b2 months
Where,
☻ KLOC is the estimated size of the software product expressed in kilo lines of code.
☻ a1, a2, b1, b2 are constants for the different categories of software products.
☻ Tdev is the estimated time to develop the software, expressed in months.
☻ Effort is the total development effort required to produce the software product, expressed in programmer-months (PMs).
⌦ Every line of source text is counted as one LOC; thus, if a single instruction spans several lines (say n lines), it is counted as n LOC. The values of a1, a2, b1 and b2 for the different categories of products are given below.
For the three classes of software products, the formulas for estimating the effort and the nominal development time based on the code size are:
Organic: Effort = 2.4 * (KLOC) ^ 1.05 PM; Tdev = 2.5 * (Effort) ^ 0.38 months
Semidetached: Effort = 3.0 * (KLOC) ^ 1.12 PM; Tdev = 2.5 * (Effort) ^ 0.35 months
Embedded: Effort = 3.6 * (KLOC) ^ 1.20 PM; Tdev = 2.5 * (Effort) ^ 0.32 months
Example:-
Assume that the size of an organic software product has been estimated to be 32,000 lines of source code. Let us determine the effort required to develop the product and the nominal development time.
Effort = 2.4 * (32) ^ 1.05 = 91 PM
Nominal development time = 2.5 * (91) ^ 0.38 = 14 months
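As a worked illustration, the following minimal Python sketch reproduces the example above using the Basic COCOMO constants listed earlier; the function name and table layout are illustrative only.

# (a1, a2, b1, b2) for each Basic COCOMO product category
COCOMO_CONSTANTS = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, category: str):
    a1, a2, b1, b2 = COCOMO_CONSTANTS[category]
    effort = a1 * kloc ** a2    # total effort in programmer-months (PM)
    tdev = b1 * effort ** b2    # nominal development time in months
    return effort, tdev

effort, tdev = basic_cocomo(32, "organic")
print(f"Effort = {effort:.0f} PM, Tdev = {tdev:.0f} months")  # ~91 PM, ~14 months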
[Figure: Basic COCOMO estimates — estimated effort versus product size, and nominal development time versus size, for the organic, semidetached and embedded product classes]
Q. Explain the following terms with respect to risk management.
⌦ A risk is any unfavorable event or circumstance that can occur while a large project is underway.
⌦ The aim of risk management is to deal with all kinds of risks that might affect a project by preparing contingency plans in advance.
Risk Identification
It is good practice for a software company to prepare a disaster list that contains all the bad events that have happened with software products in the past. This list can be read by the project manager in order to be aware of some of the risks that could arise in a project. A disaster list is therefore very useful for analyzing the typical areas of risk in a better way.
Risk Assessment
The objective of risk assessment is to rate each risk in two ways:
i) the likelihood of a risk coming true;
ii) the consequences of the problems associated with that risk.
Based on these two factors, we can prioritize the different risks as follows:
P = r * s
Where,
P is the priority of a risk,
r is the probability of the risk becoming true, and
s is the severity of the damage due to the risk.
If the different risks are prioritized in this way, then the most likely and most damaging risks can be handled first, and comprehensive risk abatement procedures can be designed for these risks.
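A minimal sketch of this prioritization in Python, assuming a few hypothetical risks with made-up probability and severity values:

# Each risk has a probability r (0..1) and a severity s (here on a 1-10 scale).
risks = [
    {"name": "schedule slippage",    "r": 0.6, "s": 8},
    {"name": "key staff turnover",   "r": 0.3, "s": 9},
    {"name": "hardware unavailable", "r": 0.1, "s": 5},
]

for risk in risks:
    risk["priority"] = risk["r"] * risk["s"]  # P = r * s

# Handle the most likely and most damaging risks first.
for risk in sorted(risks, key=lambda x: x["priority"], reverse=True):
    print(f'{risk["name"]}: P = {risk["priority"]:.1f}')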
Risk Containment
Once the risks of a project are assessed, steps are initiated to avoid, or at least contain, the most damaging and most likely risks. Different risks require different containment procedures.
The logic behind the containment procedures depends on the skill of the project manager.
An important type of risk that occurs in many software projects is schedule slippage. This risk arises primarily due to the intangible nature of software. It can be dealt with by increasing the visibility of the software product: relevant documents are produced during development whenever feasible and are reviewed by an appropriate team.
Milestones should be placed in each phase; this helps the manager review the progress.
Q. Importance of scheduling in project management.
⌦ Several tools are currently available which can help us figure out the critical path in an unrestricted schedule, but figuring out an optimal schedule in the presence of resource limitations and a large number of parallel tasks is much more difficult.
⌦ The time allotted to a single large task may be too long. The manager therefore needs to break large tasks into smaller ones, expecting to find more parallelism, which could lead to a shorter development time.
⌦ It is not useful to subdivide tasks into units which take less than a week or two to complete. Finer subdivision means that a disproportionate amount of time must be spent on estimating and chart revision.
Q. What do you understand by software requirement analysis?
o Requirements gathering
The analysts interview the end-users and customers to collect all possible information regarding the system. If the project involves automating some existing procedures, then the task of the system analyst becomes a little easier, as the analysts can observe the current working system. However, in the absence of a working system, much more imagination and skill are required.
o Analysis of gathered requirements
The main purpose of analyzing the collected information is to clearly understand the exact requirements of the customer and to resolve anomalies, conflicts and inconsistencies in the gathered requirements. These are resolved by further discussions with the end-users and customers. Some inconsistencies and anomalies can be detected easily, while others require a careful study of the problem.
o Requirements specification
After gathering the requirements and removing all inconsistencies and anomalies, the SRS document is prepared. The SRS document usually contains all the user requirements in an informal form.
Q. What is good software design? Explain cohesiveness and coupling.
A good design should capture all the functionalities of the system correctly.
It should be efficient and easily maintainable.
It should be easily understandable.
Understandability is a major factor in good software design, because an easily understandable design is also easy to maintain and change; a design that is not easily understandable requires a tremendous maintenance effort. An understandable design should have the following features:
Modular design is one of the fundamental principles of good design. Decomposition of a problem into modules greatly reduces the complexity and enables each module to be understood easily and separately.
Clean decomposition of a design problem into modules means that the modules in a software design should display high cohesion and low coupling.
Neat arrangement of the modules in a hierarchy essentially means low fan-out and abstraction.
Most researchers and engineers agree that a good software design implies clean decomposition of the problem into modules and the arrangement of these modules in a neat hierarchy. The primary characteristics of clean decomposition are high cohesion and low coupling. Cohesion is a measure of the functional strength of a module, whereas coupling between two modules is a measure of the degree of interaction between them.
A module having high cohesion and low coupling is said to be functionally independent of the other modules. By the term functional independence we mean that a cohesive module performs a single task or function and has minimal interaction with other modules. Functional independence is a key to good design, primarily for the following reasons:
Functional independence reduces error propagation: an error existing in one module does not directly affect other modules, and errors existing in other modules do not directly affect this module.
Reuse of a module is possible, because each module performs some well-defined and precise function
and the interface of the module with other modules is simple and minimal. Therefore, any such module
can be easily taken out and reused in a different program.
Complexity of the design is reduced, because different modules can be understood in isolation, as
modules are more or less independent of each other.
No quantitative measures are available to determine the degree of cohesion and coupling of a module. However, by examining the type of cohesion exhibited by a module, we can say whether it displays high or low cohesion.
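To make the idea concrete, here is a small, hypothetical Python module sketch: all of its functions relate to a single task (so it is functionally cohesive), and callers depend only on one narrow interface (so coupling stays low). The names and rates are invented for illustration.

# Hypothetical salary module: every function relates to one task, and
# callers use only compute_net_salary(), never the internal helpers.
TAX_RATE = 0.2
ALLOWANCE_RATE = 0.1

def _tax(gross: float) -> float:          # internal detail, not exported
    return gross * TAX_RATE

def _allowance(basic: float) -> float:    # internal detail, not exported
    return basic * ALLOWANCE_RATE

def compute_net_salary(basic: float) -> float:
    """The module's single, well-defined interface."""
    gross = basic + _allowance(basic)
    return gross - _tax(gross)

print(compute_net_salary(1000))  # 880.0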
Q. Give an overview of object-oriented concepts. Discuss the following terms:
Consider an example: a 'chair' is a member of a larger class of objects called 'furniture'. A set of generic attributes can be associated with every object of the class furniture, e.g. cost, dimensions, weight, color etc. The set of functions that are applied to modify the attributes of an object are called its 'methods'.
The object-oriented approach leads to reuse, and reuse leads to faster software development and higher-quality programs. Object-oriented software is easier to maintain, adapt and scale.
Object-oriented technology moves through the whole process: object-oriented requirements analysis, object-oriented design (OOD), OODBMS, OOCASE. Some important terms related to this approach are object, class, inheritance, message and method, abstraction, encapsulation, polymorphism, dynamic binding, genericity etc.
Object:
In the object-oriented approach, a system is designed as a set of interacting objects. Each object
represents some real-world entity.
Each object essentially consists of some data that is private to the object and a set of functions that
operate on those data. An object cannot directly access the data internal to another object.
For example, in a word processing application, a paragraph, a line, and a page can be objects. A library member can be an object of a library automation system. The private data of each member object can be:
Name of the member.
Code number assigned to the member.
Birth date.
Phone number.
E-mail address.
Membership expiry date.
Books issued to the member, etc.
The data internal to an object are called the 'attributes' of the object, and the functions supported by an object are called its 'methods'.
Each object is essentially a data abstraction entity. Data abstraction means that each object hides
from other objects the way in which its internal data is stored and manipulated.
The principle of data abstraction reduces coupling among the objects and therefore reduces the
overall complexity of the design and also helps code reuse.
When a system is analyzed, developed, and implemented in terms of the natural objects occurring in
it, it becomes easier to understand the design and implementation of the system.
Class:
Objects possessing similar attributes or displaying similar behavior constitute a class.
For example, the set of all employees can constitute a class in an employee payroll system, since each employee object has similar attributes, such as name, code number, salary, address etc., and displays similar behavior to other employee objects.
Once defined, a class serves as a template for object creation. Thus a class can be considered an abstract data type.
Each object must be defined as an instance of some class. It means the attributes and behavior of an
object are determined by the class it belongs to.
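A minimal Python sketch of the library example above: a LibraryMember class whose attributes are private to each object and manipulated only through its methods. The class and attribute names are illustrative; the leading underscores merely signal, by Python convention, that the attributes are internal to the object.

class LibraryMember:
    def __init__(self, name: str, code: int):
        self._name = name           # attributes: data private to the object
        self._code = code
        self._books_issued = []

    def issue_book(self, title: str) -> None:  # methods operate on the data
        self._books_issued.append(title)

    def books_on_loan(self) -> list:
        return list(self._books_issued)

member = LibraryMember("Asha", 1042)  # 'member' is an instance of the class
member.issue_book("Software Engineering")
print(member.books_on_loan())         # ['Software Engineering']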
Inheritance:
Inheritance feature allows us to define a new class by extending or modifying an existing class.
The original class is called the base class (or super class) and the new class obtained through
inheritance is called the derived class (or sub class).
In the figure below, LibraryMember is the base class for the derived classes Faculty, Students and Staff. Similarly, Students is the base class for the derived classes UnderGrad, PostGrad and Research.
For example, in the library information system the LibraryMember base class might define the data for name, address and library membership number of each member, and its derived classes might define additional data such as max-number-of-books and max-duration-of-issue, which may vary for the different member categories.
[Figure: inheritance hierarchy — LibraryMember as the base class of Faculty, Students and Staff; Students as the base class of UnderGrad, PostGrad and Research]
A base class is a generalization of its derived classes, i.e. a base class contains only those properties that are common to all of its derived classes.
An important advantage of the inheritance mechanism is code reuse. If certain methods or data are similar in a set of classes, then instead of defining these methods and data in each of these classes separately, they are defined only once in the base class and are inherited by each of its subclasses.
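A short Python sketch of this hierarchy, assuming hypothetical values for the category-specific data:

class LibraryMember:                  # base class (superclass)
    def __init__(self, name: str, membership_no: int):
        self.name = name
        self.membership_no = membership_no

class Faculty(LibraryMember):         # derived class (subclass)
    max_books = 10                    # additional, category-specific data
    max_duration_days = 90

class Student(LibraryMember):
    max_books = 4
    max_duration_days = 30

s = Student("Ravi", 7)
print(s.name, s.max_books)            # name is inherited; max_books is specialized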
Multiple Inheritance:
Multiple inheritance is a mechanism by which a derived class can inherit attributes and methods from more than one base class. For example, in some situations research students can also be staff of the institute; therefore, some of the characteristics of the Research class might be similar to the Student class and some other characteristics might be similar to the Staff class. In the figure, multiple inheritance is represented by arrows drawn directly from the derived class to each of its base classes.
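A minimal sketch of this case in Python, which supports multiple inheritance directly; the methods shown are invented for illustration:

class Student:
    def enroll(self, course: str) -> str:
        return f"enrolled in {course}"

class Staff:
    def draw_salary(self) -> str:
        return "salary drawn"

class Research(Student, Staff):       # inherits from more than one base class
    pass

r = Research()
print(r.enroll("SE-101"), "/", r.draw_salary())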
Abstraction:
Abstraction is a mechanism for reducing the complexity of software.
It is a way of increasing software productivity because of the fact that software productivity is
inversely proportional to software complexity.
Abstraction is the selective examination of certain aspects of a problem while ignoring other aspects
of the problem. In other words, the main idea behind abstraction is to consider only those aspects of
the problem that are relevant for a certain purpose and to suppress all other aspects of the problem
that are not relevant for the given purpose.
Thus, abstraction mechanism allows us to represent a problem in a simpler way by omitting
unimportant details.
Many different abstractions of the same problem are possible depending on the purpose for which
they are made.
Abstraction not only helps the development engineers understand the problem better, but also leads to better comprehension of the system design by the end-users and the maintenance team.
Encapsulation:
The property of an object by which it interfaces with the outside world only by means of messages is referred to as encapsulation.
The data of an object are encapsulated and can be accessed only through message-based communication. This property offers three advantages:
The internal implementation details of the data and procedures are hidden from the outside world. Thus, encapsulation protects an object's data from corruption by other objects and from unauthorized access.
Encapsulation hides the internal structure of an object, so that interaction with the object is simple and standardized. This facilitates reuse of objects across different projects. If the internal structure of an object is modified, other objects are not affected.
Since objects communicate with each other using messages only, they are weakly coupled. The fact
that objects are inherently weakly coupled enhances the understandability of the design.
Polymorphism:
Polymorphism exactly means poly (many) morphism (forms). It denotes the following.
The same message can result in different actions when received by objects of different types. This is
referred to as static binding.
When different objects are referred to through a pointer, then in case when a message is sent, an
appropriate method is called depending upon the object the pointer is currently pointing to. This is
referred to as dynamic binding.
Using polymorphism, a programmer can send a generic message to a set of objects which may be of different types and leave the exact implementation to the receiving objects. The main advantage of polymorphism is that it facilitates code reuse and maintenance. Also, new lower-level objects can be added with minimal changes to existing code, as illustrated below:
Traditional code:
if (shape == CIRCLE)
    draw_circle();
else if (shape == RECTANGLE)
    draw_rectangle();

Object-oriented code:
shape.draw();
We can see that the object-oriented code is much more concise and easily understandable. Also, if it is later found necessary to add a new graphics drawing primitive (say, an ellipse), the procedural code has to be changed by adding a new if-then-else clause, whereas in the object-oriented program the existing code need not be changed; only a new Ellipse class has to be defined.
Dynamic binding:
Static binding is said to occur if the address of the called method is known at compile time.
In dynamic binding the address of an invoked method is known only at run time.
Dynamic binding is useful for implementing polymorphism.
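The shape example can be written as a small runnable Python sketch; the Shape hierarchy and method bodies are illustrative. Sending the same draw() message to different objects invokes different methods, with the choice made at run time (dynamic binding):

class Shape:
    def draw(self) -> str:
        raise NotImplementedError

class Circle(Shape):
    def draw(self) -> str:
        return "drawing a circle"

class Rectangle(Shape):
    def draw(self) -> str:
        return "drawing a rectangle"

class Ellipse(Shape):                # added later, with no change to old code
    def draw(self) -> str:
        return "drawing an ellipse"

for shape in [Circle(), Rectangle(), Ellipse()]:
    print(shape.draw())              # method chosen at run time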
Q. Explain characteristics of a good user interface design.
Speed of learning:-
⇒ A good user interface should be simple to learn and should not require its users to memorize commands. Another important characteristic of a user interface that affects the speed of learning is consistency: the user should be able to use the same command in different circumstances for the same kind of task.
⇒ Users can learn an interface faster if it is based either on some day-to-day real-life example or on concepts with which the users are already familiar.
Speed of use:-
The speed of use of a user interface is determined by the amount of time and effort required to initiate and execute different commands; this time and effort should be minimal.
Speed of recall:-
Once users have learned how to use an interface, the speed with which they can recall how to use the software should be maximized. The speed of recall is improved if the interface is based on symbolic commands, graphics, and simple, intuitive command names.
Attractiveness:-
An attractive user interface catches the user's attention and fancy. In this respect, graphics-based user interfaces have a definite advantage over text-based interfaces.
Consistency:-
The commands supported by a user interface should be consistent. The basic purpose of consistency is to allow the user to generalize knowledge about one aspect of the interface to another. Thus, consistency facilitates speed of learning and speed of recall, and also helps in reducing the error rate.
Feedback:-
A good user interface must provide feedback to various user actions. For example if any
user request takes more than a few seconds to process, the user must be informed that his/her
request is being processed.
Error recovery:-
A good user interface should minimize the scope for committing errors while initiating different commands. Consistency of names and issue procedures, similar behavior of similar commands, and simplicity of the command issue procedures minimize error possibilities.
User guidance:-
Whenever users need guidance or seek help from the system, they should be provided with appropriate guidance and help. This is a very important aspect of good user interface design.
Q. What is testing? Explain unit testing.
Testing:-
Testing a program consists of providing the program with a set of test inputs (test cases) and observing whether the program behaves as expected. If the program fails to behave as expected, the conditions under which the failure occurs are noted for debugging and correction.
Unit testing:-
Exhaustive testing of any large and complex system is impractical due to the
extremely large domain of input data values. Therefore, we must design an optimal test
suite of reasonable size to uncover as many errors in the system as possible.
A test suite is a set of all test cases with which a given system is to be tested. The
test cases must be selected using systematic approaches.
There are essentially two main approaches for designing test cases: the black-box approach and the white-box approach.
In the black-box approach, the test cases are designed using only the functional specification of the software, i.e. without any knowledge of the internal structure of the software. Black-box testing is therefore also called functional testing.
[Figure: unit testing setup — a driver module and stub modules surrounding the module under test]
• Equivalence partitioning allows us to divide the domain of input values into a set of equivalence classes, so that the behavior of the program is similar for every input value belonging to the same class.
• The main idea behind defining the equivalence classes is that testing the code with any one value belonging to an equivalence class is as good as testing the software with any other value belonging to that equivalence class.
• Equivalence classes for the software can be designed by examining the input data.
• The following are some general guidelines for designing the equivalence classes:
o If an input condition requires a specific value or specifies a range, then one valid and two invalid equivalence classes should be defined.
o If an input condition specifies a member of a set or is Boolean, then one valid and one invalid equivalence class are defined.
Example 1:-
For software that computes the square root of an integer in the range 1 to 5000, there are three equivalence classes:
The set of integers smaller than 1.
The set of integers in the range 1 to 5000.
The set of integers larger than 5000.
Therefore, the test cases must include one value from each class. Thus, a possible test set is {-5, 500, 6000}.
• Some typical programming errors occur at the boundaries of the different equivalence classes of inputs. For example, programmers may improperly use < instead of <=.
• Boundary value analysis leads to the selection of test cases at the boundaries of the different equivalence classes.
• Guidelines for designing such test cases are as follows:
If a condition specifies a range of values from a to b, test cases should be designed with the values a, b, a-1 and b+1.
If an input condition specifies a set of values, the test cases should be designed with the minimum, the maximum, minimum-1 and maximum+1.
In the example above, the test cases must include the values [0, 1, 5000, 5001].
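Putting the two techniques together, the following Python sketch exercises a stand-in square-root routine with one value from each equivalence class and the boundary values derived above. int_sqrt() and its error behavior are assumptions made for illustration, not the program from the example.

import math

def int_sqrt(n: int) -> float:
    # Stand-in for the program under test: accepts integers in 1..5000.
    if n < 1 or n > 5000:
        raise ValueError("input must be in the range 1 to 5000")
    return math.sqrt(n)

equivalence_cases = [-5, 500, 6000]   # one value from each equivalence class
boundary_cases = [0, 1, 5000, 5001]   # a-1, a, b, b+1

for n in equivalence_cases + boundary_cases:
    try:
        print(n, "->", round(int_sqrt(n), 3))
    except ValueError as err:
        print(n, "-> rejected:", err)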
INTEGRATION TESTING:-
During integration testing, the different modules of a system are integrated using an integration plan. The primary objective of integration testing is to test the module interfaces. The following are the different types of integration testing.
Top-down Integration Testing:-
⇒ In this approach, testing starts with the main routine, i.e. the root module, and one or two subordinate routines in the system. After the top-level 'skeleton' has been tested, the subroutines of the 'skeleton' are combined with it and tested.
⇒ Pure top-down testing does not require any driver routines; only stub programs are required.
⇒ A disadvantage is that, in the absence of the lower-level routines, it may become difficult to exercise the top-level routines in the desired manner, since the lower-level routines perform several low-level functions such as I/O.
4) Mixed Integration Testing:-
⇒ Mixed integration testing follows both the top-down and bottom-up approaches.
⇒ In top-down testing, testing can start only after the top-level modules have been coded and unit tested. Similarly, bottom-up testing can start only after the bottom-level modules are ready.
⇒ The mixed approach overcomes these shortcomings of the top-down and bottom-up approaches: testing can start as and when modules become available. Therefore, this is the most commonly used integration testing technique.
Q. What do you understand by software reliability? Explain reliability metrics and reliability growth modeling.
Software Reliability
⌦ Software reliability can be defined as the probability of the product working “correctly” over a
given period of time.
⌦ A software product having a large number of defects is unreliable, and reliability improves as defects are reduced. However, there is no simple relationship between system reliability and the number of defects: removing errors from parts of the software that are rarely executed makes little difference to the perceived reliability, whereas removing an error from a frequently executed (core) part improves reliability much more.
A reliability growth model is a mathematical model, which shows how software reliability grows as
errors are detected and removed.
[Figure: reliability growth — the rate of occurrence of failures (ROCOF) decreases over time as errors are detected and repaired]
Jelinski and Moranda Model
The assumption of this model is that reliability does not increase by a constant amount each time an error is repaired; rather, the growth in reliability is inversely proportional to the number of remaining errors.
Although this model is more realistic for many applications, it still suffers from the fact that most problematic failures are discovered early in the testing process, and repairing these errors contributes the most to reliability growth. Therefore, the rate of reliability growth is large initially and slows down later on, contrary to the assumption of this model. There are other, more complex reliability growth models which give a more accurate approximation of the growth of reliability.