Software Engineering Notes
Software myths:
Myth 1: Testing is Too Expensive
Reality: There is a saying: pay less for testing during software development, or pay
more for maintenance and correction later. Early testing saves both time and cost in
many respects, whereas cutting costs by skipping testing may result in an improperly
designed application, rendering the product useless.
Myth 2: Testing is Time-Consuming
Reality: Within the SDLC phases, testing itself is rarely the time-consuming activity.
What takes time is diagnosing and fixing the errors identified by proper testing, and
that is a productive use of time.
Myth 3: Only Fully Developed Products are Tested
Reality: Testing certainly depends on the source code, but reviewing requirements
and developing test cases is independent of the developed code. Moreover, an
iterative or incremental development life cycle model reduces the dependency of
testing on fully developed software.
Myth 4: Complete Testing is Possible
Reality: It becomes an issue when a client or tester believes that complete testing is
possible. The team may have tested every path it identified, yet complete testing is
never achievable: there may be scenarios that are never executed by the test team or
the client during the software development life cycle, and that are only exercised once
the project has been deployed.
Myth 5: A Tested Software is Bug-Free
Reality: This is a very common myth among clients, project managers and
management teams. No one can claim with absolute certainty that a software
application is 100% bug-free, even if it has been tested by a tester with superb
testing skills.
Waterfall Model
This model assumes that everything in the previous stage was carried out and took
place exactly as planned, and that no issue from a past stage will arise in the next
phase. The model does not work smoothly if issues are left unresolved at a previous
step; its sequential nature does not allow us to go back and undo or redo our actions.
This model is best suited when the developers have already designed and developed
similar software in the past and are aware of all its domains.
Iterative Model
This model drives the software development process in iterations, projecting the
process of development in a cyclic manner, with every step of the SDLC repeated in
every cycle.
The software is first developed on a very small scale, following all the steps under
consideration. Then, in each subsequent iteration, more features and modules are
designed, coded, tested and added to the software. Every cycle produces software that
is complete in itself and has more features and capabilities than the previous one.
After each iteration, the management team can work on risk management and prepare
for the next iteration. Because a cycle covers a small portion of the whole software
process, the development process is easier to manage, but it consumes more
resources.
Spiral Model
The spiral model is a combination of the iterative model and one of the other SDLC
models: you choose an SDLC model and combine it with the cyclic process of the
iterative model.
This model considers risk, which often goes unnoticed by most other models. An
iteration starts with determining the objectives and constraints of the software. The
next phase is prototyping the software, which includes risk analysis. Then one
standard SDLC model is used to build the software. In the fourth phase, the plan for
the next iteration is prepared.
V-Model
The major drawback of the waterfall model is that we move to the next stage only
when the previous one is finished, with no chance to go back if something is found
wrong in a later stage. The V-Model provides a means of testing the software at each
stage, in reverse manner.
At every stage, test plans and test cases are created to verify and validate the product
according to the requirements of that stage. For example, in the requirement gathering
stage the test team prepares test cases corresponding to the requirements. Later, when
the product is developed and ready for testing, the test cases of this stage verify the
software against those requirements.
This makes verification and validation proceed in parallel. The model is therefore also
known as the verification and validation model.
Big Bang Model
This model is the simplest model in its form. It requires little planning, lots of
programming and lots of funds. This model is conceptualized around the big bang of
universe. As scientists say that after big bang lots of galaxies, planets and stars evolved
just as an event. Likewise, if we put together lots of programming and funds, you may
achieve the best software product.
For this model, very small amount of planning is required. It does not follow any
process, or at times the customer is not sure about the requirements and future needs.
So the input requirements are arbitrary.
This model is not suitable for large software projects but good one for learning and
experimenting.
Process activities:
Real software processes are inter-leaved sequences of technical,
collaborative and managerial activities with the overall goal of specifying,
designing, implementing and testing a software system.
The four basic process activities of specification, development, validation
and evolution are organized differently in different development processes.
In the waterfall model, they are organized in sequence, whereas in
incremental development they are inter-leaved.
Software specification
The process of establishing what services are required and the constraints
on the system's operation and development.
Requirements engineering process
Feasibility study
Is it technically and financially feasible to build the system?
Requirements elicitation and analysis
What do the system stakeholders require or expect from the
system?
Requirements specification
Defining the requirements in detail
Requirements validation
Checking the validity of the requirements
CASE Tools
CASE stands for Computer Aided Software Engineering. It means the development
and maintenance of software projects with the help of various automated software
tools.
CASE tools are sets of software application programs used to automate SDLC
activities. They are used by software project managers, analysts and engineers to
develop software systems.
A number of CASE tools are available to simplify various stages of the Software
Development Life Cycle: analysis tools, design tools, project management tools,
database management tools and documentation tools, to name a few.
Use of CASE tools accelerates the development of the project towards the desired
result, and helps to uncover flaws before moving ahead to the next stage of software
development.
Components of CASE Tools
CASE tools can be broadly divided into the following parts based on their use at a
particular SDLC stage:
Central Repository - CASE tools require a central repository which can
serve as a source of common, integrated and consistent information. The central
repository is a central place of storage where product specifications,
requirement documents, related reports and diagrams, and other useful
management information are stored. The central repository also serves as a
data dictionary.
Upper CASE Tools - Upper CASE tools are used in the planning, analysis and
design stages of the SDLC.
Lower CASE Tools - Lower CASE tools are used in implementation, testing
and maintenance.
Integrated CASE Tools - Integrated CASE tools are helpful in all stages of the
SDLC, from requirement gathering to testing and documentation.
CASE tools can be grouped together if they have similar functionality, process
activities and capability of getting integrated with other tools.
Scope of CASE Tools
The scope of CASE tools spans the whole SDLC.
CASE Tools Types
Let us now briefly go through the various CASE tools.
Diagram tools
These tools are used to represent system components, data and control flow among
various software components, and the system structure, in a graphical form. For
example, Flow Chart Maker is a tool for creating state-of-the-art flowcharts.
Process Modeling Tools
Process modeling is the method of creating a software process model, which is used
to develop the software. Process modeling tools help managers choose a process
model, or modify it as per the requirements of the software product. For example,
EPF Composer.
Project Management Tools
These tools are used for project planning, cost and effort estimation, project
scheduling and resource planning. Managers must keep project execution strictly in
line with every step of software project management. Project management tools help
in storing and sharing project information in real time throughout the organization.
For example, Creative Pro Office, Trac Project, Basecamp.
Documentation Tools
Documentation in a software project starts before the software process, runs through
all phases of the SDLC, and continues after the completion of the project.
Documentation tools generate documents for technical users and end users. Technical
users are mostly in-house professionals of the development team who refer to the
system manual, reference manual, training manual, installation manuals, etc. End-user
documents describe the functioning and how-to of the system, such as the user
manual. For example, Doxygen, DrExplain, Adobe RoboHelp for documentation.
Analysis Tools
These tools help gather requirements and automatically check for any inconsistency
or inaccuracy in the diagrams, data redundancies, or erroneous omissions. For example,
Accept 360, Accompa, CaseComplete for requirement analysis, Visible Analyst for
total analysis.
Design Tools
These tools help software designers design the block structure of the software, which
may be further broken down into smaller modules using refinement techniques. They
provide detailing of each module and of the interconnections among modules. For
example, Animated Software Design.
Software Project
A software project is the complete procedure of software development, from
requirement gathering to testing and maintenance, carried out according to the
execution methodologies in a specified period of time to achieve the intended
software product.
Need of software project management
Software is said to be an intangible product. Software development is a relatively new
stream in world business, and there is very little experience in building software
products. Most software products are tailor-made to fit the client's requirements. Most
importantly, the underlying technology changes and advances so frequently and
rapidly that experience with one product may not apply to another. All such business
and environmental constraints bring risk to software development, hence it is
essential to manage software projects efficiently.
Software projects are bound by the triple constraints of time, cost and quality. It is an
essential part of a software organization's job to deliver a quality product while
keeping the cost within the client's budget constraint and delivering the project as
scheduled. Several factors, both internal and external, may impact this triple-constraint
triangle, and any one of the three factors can severely impact the other two.
Therefore, software project management is essential to incorporate user requirements
along with budget and time constraints.
Software Project Manager
A software project manager is a person who undertakes the responsibility of executing
the software project. The software project manager is thoroughly aware of all the
phases of the SDLC that the software will go through. The project manager may never
be directly involved in producing the end product, but controls and manages the
activities involved in production.
A project manager closely monitors the development process, prepares and executes
various plans, arranges necessary and adequate resources, maintains communication
among all team members in order to address issues of cost, budget, resources, time,
quality and customer satisfaction.
Let us look at a few of the responsibilities that a project manager shoulders -
Managing People
Managing Project
Defining and setting up project scope
Management Activities
Software project management comprises a number of activities, including planning of
the project, deciding the scope of the software product, estimation of cost in various
terms, scheduling of tasks and events, and resource management. Project management
activities may include:
Project Planning
Scope Management
Project Estimation
Project Planning
Software project planning is a task performed before the production of software
actually starts. It is there for the sake of software production, but involves no concrete
activity with any direct connection to software production; rather, it is a set of
multiple processes that facilitate software production. Project planning may include
the following:
Scope Management
It defines the scope of the project; this includes all the activities and processes that
need to be done in order to make a deliverable software product. Scope management
is essential because it creates the boundaries of the project by clearly defining what
will and will not be done. This keeps the project to a set of limited and quantifiable
tasks that can easily be documented, which in turn avoids cost and time overruns.
During Project Scope management, it is necessary to -
Define the scope
Divide the project into various smaller parts for ease of management.
Effort estimation
The managers estimate effort in terms of personnel requirements and the man-hours
required to produce the software. For effort estimation, the software size should be
known. Size can be derived from the manager's experience or the organization's
historical data, or the software size can be converted into effort using standard
formulae.
Time estimation
Once size and effort are estimated, the time required to produce the software
can be estimated. The effort required is segregated into sub-categories as per the
requirement specifications and the interdependency of the various components of
the software. Software tasks are divided into smaller tasks, activities or events by
a Work Breakdown Structure (WBS). The tasks are scheduled on a day-to-day
basis or in calendar months.
The sum of the time required to complete all tasks, in hours or days, is the total
time invested to complete the project.
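This summation can be sketched in a few lines of Python. The task names and hour figures below are hypothetical, and an 8-hour working day is assumed:

```python
# Hypothetical Work Breakdown Structure: task names and estimated hours.
wbs = {
    "Requirements review": 16,
    "Database schema design": 24,
    "API implementation": 80,
    "Integration testing": 40,
}

total_hours = sum(wbs.values())  # the sum of all task estimates...
total_days = total_hours / 8     # ...assuming an 8-hour working day

print(total_hours, total_days)   # 160 hours, 20.0 working days
```

In practice the same table would also record task dependencies, which feed the scheduling step below.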
Cost estimation
This might be considered the most difficult estimate of all, because it depends on
more elements than any of the previous ones. For estimating project cost, it is
required to consider -
o Size of software
o Software quality
o Hardware
o Travel involved
o Communication
Project Scheduling
Project scheduling refers to the roadmap of all activities to be done in a specified
order and within the time slot allotted to each activity. Project managers define the
various tasks and project milestones, and then arrange them keeping various factors in
mind. They look for tasks that lie on the critical path in the schedule: tasks which
must be completed in a specific manner (because of task interdependency) and
strictly within the time allocated. The arrangement of tasks that lie off the critical
path is less likely to impact the overall schedule of the project.
For scheduling a project, it is necessary to -
Calculate total time required for the project from start to finish
Risk Management Process
The following activities are involved in the risk management process:
Identification - Make note of all possible risks, which may occur in the
project.
Categorize - Categorize known risks into high, medium and low risk
intensity as per their possible impact on the project.
Monitor - Closely monitor the potential risks and their early symptoms.
Also monitor the effects of steps taken to mitigate or avoid them.
PERT Chart
A PERT (Program Evaluation and Review Technique) chart is a tool that depicts the
project as a network diagram. It can graphically represent the main events of the
project in both a parallel and a consecutive way. Events that occur one after another
show the dependency of the later event on the previous one.
Events are shown as numbered nodes. They are connected by labeled arrows depicting
sequence of tasks in the project.
Resource Histogram
This is a graphical tool with bars or a chart representing the number of resources
(usually skilled staff) required over time for a project event or phase. The resource
histogram is an effective tool for staff planning and coordination.
Critical Path Analysis
This tool is useful in recognizing interdependent tasks in a project. It also helps find
the critical path: the longest chain of dependent tasks, which determines the minimum
time needed to complete the project successfully. As in a PERT diagram, each event
is allotted a specific time frame. The tool shows the dependency of events, assuming
an event can proceed to the next only when the previous one is completed.
The events are arranged according to their earliest possible start time. The path
between the start and end nodes that cannot be further shortened is the critical path,
and every event on it must be executed in the given order.
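The critical path length can be sketched in a few lines of Python. The activity network below is a hypothetical example; the idea is that an event's earliest finish time is its own duration plus the longest finish time among its predecessors, and the critical path length is the largest such value:

```python
from functools import lru_cache

# Hypothetical activity network: durations in days and successor lists.
durations = {"A": 3, "B": 2, "C": 4, "D": 2, "E": 1}
successors = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}

@lru_cache(maxsize=None)
def earliest_finish(task):
    """Earliest finish of `task`: longest predecessor chain plus own duration."""
    preds = [t for t, succ in successors.items() if task in succ]
    start = max((earliest_finish(p) for p in preds), default=0)
    return start + durations[task]

# The critical path length is the largest earliest-finish time in the network.
project_length = max(earliest_finish(t) for t in durations)
print(project_length)  # 10 days: A -> C -> D -> E
```

Here A -> C -> D -> E (3+4+2+1 = 10 days) is the critical path; shortening B would not shorten the project at all.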
Project Planning:
The key to a successful project is in the planning. Creating a project plan is the
first thing you should do when undertaking any kind of project.
A project is successful when the needs of the stakeholders have been met. A
stakeholder is anybody directly, or indirectly impacted by the project.
Once you understand who the stakeholders are, the next step is to find out their
needs. The best way to do this is by conducting stakeholder interviews. Take
time during the interviews to draw out the true needs that create real benefits.
Often stakeholders will talk about needs that aren't relevant and don't deliver
benefits. These can be recorded and set as a low priority.
The next step, once you have conducted all the interviews, and have a
comprehensive list of needs is to prioritise them. From the prioritised list, create
a set of goals that can be easily measured. A technique for doing this is to review
them against the SMART principle. This way it will be easy to know when a
goal has been achieved.
Once you have established a clear set of goals, they should be recorded in the
project plan. It can be useful to also include the needs and expectations of your
stakeholders.
That completes the most difficult part of the planning process. It's time to
move on and look at the project deliverables.
Add the deliverables to the project plan with an estimated delivery date. More
accurate delivery dates will be established during the scheduling phase, which is
next.
Create a list of tasks that need to be carried out for each deliverable identified in
step 2. For each task identify the following:
Once you have established the amount of effort for each task, you can work out
the effort required for each deliverable, and an accurate delivery date. Update
your deliverables section with the more accurate delivery dates.
At this point in the planning, you could choose to use a software package such
as Microsoft Project to create your project schedule. Alternatively, use one of
the many free templates available. Input all of the deliverables, tasks, durations
and the resources who will complete each task.
This section deals with plans you should create as part of the planning process.
These can be included directly in the plan.
Next, describe the number and type of people needed to carry out the project. For
each resource, detail start dates, estimated duration and the method you will use
for obtaining them.
Communications Plan
Create a document showing who needs to be kept informed about the project
and how they will receive the information. The most common mechanism is a
weekly or monthly progress report, describing how the project is performing,
milestones achieved and work planned for the next period.
A common planning pitfall: stakeholder input is not sought, or their needs are not properly understood.
Risks can be tracked using a simple risk log. Add each risk you have identified
to your risk log; write down what you will do in the event it occurs, and what
you will do to prevent it from occurring. Review your risk log on a regular
basis, adding new risks as they occur during the life of the project. Remember,
when risks are ignored they don't go away.
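A risk log need not be elaborate; a plain list of records is enough to start with. The entries below are hypothetical examples of the prevention/contingency structure described above:

```python
# Hypothetical risk log: each entry records what you will do to prevent the
# risk, and what you will do in the event that it occurs.
risk_log = [
    {"risk": "Key developer leaves", "impact": "high",
     "prevention": "Pair programming and shared code ownership",
     "contingency": "Bring in a contractor"},
    {"risk": "Requirements change late", "impact": "medium",
     "prevention": "Short iterations with frequent stakeholder review",
     "contingency": "Re-plan the affected deliverables"},
]

# A regular review lists the high-impact risks first.
prioritised = sorted(risk_log, key=lambda r: r["impact"] != "high")
for entry in prioritised:
    print(f"[{entry['impact'].upper()}] {entry['risk']}: {entry['prevention']}")
```

New risks identified during the project are simply appended to the list at each review.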
Unit-2
Requirement Engineering
The process of gathering software requirements from the client, and then analyzing
and documenting them, is known as requirement engineering.
The goal of requirement engineering is to develop and maintain a sophisticated and
descriptive System Requirements Specification document.
Requirement Engineering Process
It is a four-step process, which includes -
Feasibility Study
Requirement Gathering
Software Requirement Specification
Software Requirement Validation
Software Requirements
We should try to understand what sort of requirements may arise in the requirement
elicitation phase and what kinds of requirements are expected from the software
system.
Broadly software requirements should be categorized in two categories:
Functional Requirements
Requirements related to the functional aspects of the software fall into this category.
They define functions and functionality within, and from, the software system.
EXAMPLES -
Users can be divided into groups and groups can be given separate rights.
The software should comply with business rules and administrative functions.
Non-Functional Requirements
Requirements that are not related to the functional aspects of the software fall into
this category. They are implicit or expected characteristics of the software, for
example:
Security
Logging
Storage
Configuration
Performance
Cost
Interoperability
Flexibility
Disaster recovery
Accessibility
Requirements are categorized logically as -
Must have: software cannot be said to be operational without them.
Should have: enhancements to the functionality of the software.
Could have: software can still properly function with these requirements.
Wish list: these requirements do not map to any objectives of the software.
Easy to operate
Quick in response
Content presentation
Easy navigation
Simple interface
Responsive
Consistent UI elements
Feedback mechanism
Default settings
Purposeful layout
Validation of requirement
Process Metrics - In the various phases of the SDLC, the methods and tools
used, the company standards and the performance of development constitute
software process metrics.
COCOMO Model
Introduction to the COCOMO Model
The most fundamental calculation in the COCOMO model is the use of the
Effort Equation to estimate the number of Person-Months required to develop a
project. Most of the other COCOMO results, including the estimates for
Requirements and Maintenance, are derived from this quantity.
Overview of COCOMO
Because COCOMO is an open model, all of its details are published, including:
Every assumption made in the model (e.g. "the project will enjoy good
management")
Every definition (e.g. the precise definition of the Product Design phase of
a project)
Because COCOMO is well defined, and because it doesn't rely upon proprietary
estimation algorithms, Costar offers these advantages to its users:
Typically, you'll start with only a rough description of the software system that
you'll be developing, and you'll use Costar to give you early estimates about the
proper schedule and staffing levels. As you refine your knowledge of the
problem, and as you design more of the system, you can use Costar to produce
more and more refined estimates.
Costar allows you to define a software structure to meet your needs. Your initial
estimate might be made on the basis of a system containing 3,000 lines of code.
Your second estimate might be more refined so that you now understand that
your system will consist of two subsystems (and you'll have a more accurate
idea about how many lines of code will be in each of the subsystems). Your next
estimate will continue the process -- you can use Costar to define the
components of each subsystem. Costar permits you to continue this process until
you arrive at the level of detail that suits your needs.
COCOMO II Effort Equation
The COCOMO II model makes its estimates of required effort (measured in
Person-Months, PM) based primarily on your estimate of the software
project's size (as measured in thousands of SLOC, KSLOC):

Effort = 2.94 * EAF * (KSLOC)^E

where
EAF is the Effort Adjustment Factor derived from the Cost Drivers
E is an exponent derived from the five Scale Drivers
As an example, a project with all Nominal Cost Drivers and Scale Drivers
would have an EAF of 1.00 and an exponent, E, of 1.0997. Assuming that the
project is projected to consist of 8,000 source lines of code, COCOMO II
estimates that 28.9 Person-Months of effort are required to complete it:

Effort = 2.94 * (1.0) * (8)^1.0997 = 28.9 Person-Months

The COCOMO II schedule equation predicts the number of calendar months
required to complete the project:

TDEV = 3.67 * (Effort)^SE

where
Effort is the effort from the COCOMO II effort equation
SE is the schedule equation exponent derived from the five Scale Drivers
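The two equations can be turned into a small calculator. The 2.94 and 3.67 coefficients are the published COCOMO II.2000 calibration values; the default exponents E = 1.0997 and SE = 0.3179 correspond to all-Nominal Scale Drivers and are illustrative defaults, not values for your own project:

```python
def cocomo_ii(ksloc, eaf=1.0, e=1.0997, se=0.3179):
    """Sketch of the COCOMO II effort and schedule equations."""
    effort = 2.94 * eaf * ksloc ** e   # Person-Months
    tdev = 3.67 * effort ** se         # calendar months to complete
    return effort, tdev

# The example above: 8,000 SLOC with all Nominal Cost and Scale Drivers.
effort, tdev = cocomo_ii(8)
print(round(effort, 1))  # 28.9 Person-Months
```

Dividing effort by schedule (about 28.9 / 10.7 here) gives the average staffing level the model expects.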
Unit-3
Design Strategies:
Software design is a process to conceptualize the software requirements into software
implementation. Software design takes the user requirements as challenges and tries to
find optimum solution. While the software is being conceptualized, a plan is chalked
out to find the best possible design for implementing the intended solution.
There are multiple variants of software design. Let us study them briefly:
Structured Design
Structured design is a conceptualization of problem into several well-organized
elements of solution. It is basically concerned with the solution design. Benefit of
structured design is, it gives better understanding of how the problem is being solved.
Structured design also makes it simpler for designer to concentrate on the problem
more accurately.
Structured design is mostly based on divide and conquer strategy where a problem is
broken into several small problems and each small problem is individually solved until
the whole problem is solved.
The small pieces of the problem are solved by means of solution modules. Structured
design emphasizes that these modules be well organized in order to achieve a precise
solution.
These modules are arranged in hierarchy. They communicate with each other. A good
structured design always follows some rules for communication among multiple
modules, namely -
Cohesion - grouping of all functionally related elements.
Coupling - communication between different modules.
A good structured design has high cohesion and low coupling arrangements.
Function Oriented Design
In function-oriented design, the system consists of many smaller sub-systems known
as functions. These functions are capable of performing significant tasks in the
system. The system is considered as the top view of all functions.
Function oriented design inherits some properties of structured design where divide
and conquer methodology is used.
This design mechanism divides the whole system into smaller functions, which
provide a means of abstraction by concealing the information and their operations.
These functional modules can share information among themselves by means of
information passing and using information available globally.
Another characteristic of functions is that when a program calls a function, the
function changes the state of the program, which is sometimes not acceptable to other
modules. Function-oriented design works well where the system state does not matter,
and the program/functions work on input rather than on a state.
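The distinction can be illustrated with two small made-up Python functions: one that mutates hidden program state, and one whose result depends only on its input:

```python
counter = 0

def next_id():
    """Changes program state: repeated calls give different results."""
    global counter
    counter += 1
    return counter

def tax(amount, rate=0.2):
    """State-free, in the function-oriented spirit: the result depends only
    on the inputs passed in, so other modules can call it without
    worrying about side effects."""
    return amount * rate

print(next_id(), next_id())  # 1 2 -- the hidden state has changed
print(tax(100.0))            # the same input always gives the same output
```

Only the second style keeps modules independent of the order in which they are called.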
Design Process
The whole system is seen in terms of how data flows through it, by means of
a data flow diagram.
The DFD depicts how functions change the data and the state of the entire system.
The entire system is logically broken down into smaller units, known as
functions, on the basis of their operation in the system.
Object Oriented Design
Objects - All entities involved in the solution design are known as objects.
For example, persons, banks, companies and customers are treated as objects.
Every entity has some attributes associated with it and some methods to
perform on those attributes.
Classes - A class is a generalized description of an object. An object is an
instance of a class. Class defines all the attributes, which an object can have
and methods, which defines the functionality of the object.
In the solution design, attributes are stored as variables and functionalities are
defined by means of methods or procedures.
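These ideas map directly onto code. A minimal Python sketch (the Customer class is a made-up example):

```python
class Customer:
    """A class is a generalized description; an object is an instance of it."""

    def __init__(self, name, balance=0.0):
        self.name = name          # attributes stored as variables
        self.balance = balance

    def deposit(self, amount):    # functionality defined as a method
        self.balance += amount
        return self.balance

# An object: one concrete instance of the Customer class.
alice = Customer("Alice")
alice.deposit(50.0)
print(alice.balance)  # 50.0
```

Each object carries its own attribute values, while the class defines the attributes and methods all instances share.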
Interface Design
The user interface is the front-end application view with which the user interacts in
order to use the software. Users can manipulate and control the software, as well as
the hardware, by means of the user interface. Today, user interfaces are found almost
everywhere digital technology exists: computers, mobile phones, cars, music players,
airplanes, ships, etc.
The user interface is part of the software and is designed in such a way that it is
expected to give the user insight into the software. The UI provides the fundamental
platform for human-computer interaction.
The UI can be graphical, text-based, or audio-video based, depending upon the
underlying hardware and software combination. The UI can be hardware, software, or
a combination of both.
The software becomes more popular if its user interface is:
Attractive
Simple to use
Clear to understand
Text-Box - Provides an area for user to type and enter text-based data.
Buttons - They imitate real life buttons and are used to submit inputs to the
software.
Sliders
Combo-box
Data-grid
Drop-down list
User Interface Design Activities
A number of activities are performed in designing a user interface. The process of
GUI design and implementation is similar to the SDLC; any model among Waterfall,
Iterative or Spiral may be used for GUI implementation.
A model used for GUI design and development should fulfill these GUI specific steps.
GUI Requirement Gathering - The designers may like to have a list of all
functional and non-functional requirements of the GUI. This can be obtained
from the users and from their existing software solution.
User Analysis - The designer studies who is going to use the software GUI.
The target audience matters, as the design details change according to the
knowledge and competency level of the user. If the user is tech-savvy, an
advanced and complex GUI can be incorporated; for a novice user, more
how-to information about the software is included.
Task Analysis - Designers have to analyze what task is to be done by the
software solution. Here in GUI, it does not matter how it will be done. Tasks
can be represented in hierarchical manner taking one major task and dividing
it further into smaller sub-tasks. Tasks provide goals for GUI presentation.
Flow of information among sub-tasks determines the flow of GUI contents in
the software.
GUI Design & implementation - After gathering information about
requirements, tasks and the user environment, designers design the GUI,
implement it in code, and embed it with working or dummy software in the
background. It is then self-tested by the developers.
Testing - GUI testing can be done in various ways: in-house inspection, direct
involvement of users, and release of a beta version are a few of them. Testing
may include usability, compatibility, user acceptance, etc.
Modularization
Concurrency
Back in time, all software was meant to be executed sequentially. By sequential
execution we mean that coded instructions are executed one after another, implying
that only one portion of the program is active at any given time. If a software product
has multiple modules, only one of all the modules can be found active at any time of
execution.
In software design, concurrency is implemented by splitting the software into multiple
independent units of execution, like modules and executing them in parallel. In other
words, concurrency provides capability to the software to execute more than one part
of code in parallel to each other.
It is necessary for programmers and designers to recognize those modules which can
be made to execute in parallel.
Example
The spell-check feature in a word processor is a module of the software which runs
alongside the word processor itself.
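A minimal sketch of this idea using Python's threading module: a hypothetical spell-check module runs on its own thread while the editor module keeps working:

```python
import threading
import time

results = []

def spell_check(text):
    """Hypothetical spell-check module running alongside the editor."""
    time.sleep(0.1)                      # simulate scanning the document
    results.append("spell check done")

def edit_document():
    results.append("editing continues")  # the editor module keeps working

checker = threading.Thread(target=spell_check, args=("draft",))
checker.start()     # the spell checker executes concurrently...
edit_document()     # ...while the editor stays responsive
checker.join()      # wait for the background module to finish
print(results)      # editing finished before the slower background check
```

Because the two modules are independent units of execution, neither has to wait for the other to make progress.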
Coupling and Cohesion
When a software program is modularized, its tasks are divided into several modules
based on some characteristics. As we know, modules are sets of instructions put
together in order to achieve some task. Though considered as single entities, they may
refer to each other to work together. There are measures by which the quality of the
design of modules, and of their interaction among themselves, can be assessed. These
measures are called coupling and cohesion.
Cohesion
Cohesion is a measure that defines the degree of intra-dependability among the elements
of a module. The greater the cohesion, the better the program design.
There are seven types of cohesion, namely: coincidental, logical, temporal,
procedural, communicational, sequential, and functional cohesion (ordered from
worst to best).
Coupling
Coupling is a measure that defines the level of inter-dependability among modules of a
program. It tells at what level the modules interfere and interact with each other. The
lower the coupling, the better the program.
There are five levels of coupling, namely (from lowest to highest) -
Data coupling - Data coupling occurs when two modules interact with each
other by passing data (as parameters). If a module passes a data
structure as a parameter, then the receiving module should use all of its
components.
Stamp coupling - Modules share a composite data structure but use only
parts of it.
Control coupling - One module passes a flag or control information that
directs the internal logic of another.
Common coupling - Modules share global data.
Content coupling - One module relies on or modifies the internal workings
of another.
Ideally, no coupling is considered to be the best.
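To make the contrast concrete (the functions and figures below are illustrative assumptions, not from the text), data coupling passes only the elementary values a callee needs, while passing a whole record the callee barely uses is a tighter form of coupling:

```python
# Data coupling: the caller passes only the plain values the callee needs.
def monthly_interest(balance, annual_rate):
    return balance * annual_rate / 12

# Tighter (stamp-style) coupling for contrast: the whole account record
# is passed even though only two of its fields are used.
def monthly_interest_stamp(account):
    return account["balance"] * account["annual_rate"] / 12
```

The first form is easier to reuse and test, since the callee depends on nothing but its two parameters.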
Design Verification
The output of software design process is design documentation, pseudo codes, detailed
logic diagrams, process diagrams, and detailed description of all functional or non-
functional requirements.
The next phase, which is the implementation of software, depends on all outputs
mentioned above.
It then becomes necessary to verify the output before proceeding to the next phase.
The earlier a mistake is detected, the better; otherwise it might not be detected until
testing of the product. If the outputs of the design phase are in formal notation, then
the associated tools for verification should be used; otherwise a thorough design
review can be used for verification and validation.
With a structured verification approach, reviewers can detect defects that might be caused
by overlooking some conditions. A good design review is important for good software
design, accuracy, and quality.
Software reuse
Software reuse principles
Hardware reuse
use the same tool more than once, producing the same product more than
once, etc.
Hammer a nail
Hammer a nail again
Hammer a nail again and again
Software reuse: don't reinvent the wheel; use the same knowledge more than
once
Hammer a nail
Hammer a nut
Why Reuse?
Save cost, reduce effort
Software costs a great deal when it is created, but costs almost nothing to
copy or redistribute. One should focus on more creative tasks.
Reduce bugs
Use proven legacy software rather than writing it completely from scratch
Functionality testing
Implementation testing
When functionality is tested without taking the actual implementation into account,
it is known as black-box testing. The other side is known as white-box testing, where
not only is functionality tested, but the way it is implemented is also analyzed.
Exhaustive testing is the best-desired method for perfect testing: every single possible
value in the range of the input and output values is tested. However, it is not possible
to test every value in a real-world scenario if the range of values is large.
Black-box testing
It is carried out to test the functionality of the program. It is also called Behavioral
testing. The tester, in this case, has a set of input values and the corresponding desired
results. On providing input, if the output matches the desired results, the program is
considered OK, and problematic otherwise.
In this testing method, the design and structure of the code are not known to the tester;
testing engineers and end users conduct this test on the software.
Black-box testing techniques:
Equivalence class - The input is divided into similar classes. If one element
of a class passes the test, it is assumed that the whole class passes.
Boundary values - The input is divided into higher and lower end values. If
these values pass the test, it is assumed that all values in between may pass
too.
Cause-effect graphing - In both previous methods, only one input value at a
time is tested. Cause (input) - effect (output) graphing is a testing technique
where combinations of input values are tested in a systematic way.
Pair-wise Testing - The behavior of software depends on multiple
parameters. In pairwise testing, the multiple parameters are tested pair-wise
for their different values.
State-based testing - The system changes state on provision of input. These
systems are tested based on their states and input.
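The first two techniques can be sketched with a small example. Assuming a hypothetical function under test that classifies ages valid in the range 0..120 (the function and its ranges are made up for illustration):

```python
def classify_age(age):
    # Hypothetical function under test: valid ages are 0..120.
    if not 0 <= age <= 120:
        raise ValueError("invalid age")
    return "minor" if age < 18 else "adult"

# Equivalence classes: one representative value per class.
assert classify_age(10) == "minor"   # class: valid minors
assert classify_age(40) == "adult"   # class: valid adults

# Boundary values: test at the edges of each class.
assert classify_age(0) == "minor"
assert classify_age(17) == "minor"
assert classify_age(18) == "adult"
assert classify_age(120) == "adult"

# Invalid classes just outside the boundaries.
for bad in (-1, 121):
    try:
        classify_age(bad)
        raise AssertionError("should have rejected %d" % bad)
    except ValueError:
        pass
```

Note that the tests inspect only inputs and outputs, never the function body, which is what makes this black-box testing.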
White-box testing
It is conducted to test the program and its implementation, in order to improve code
efficiency or structure. It is also known as Structural testing.
In this testing method, the design and structure of the code are known to the tester.
Programmers of the code conduct this test on the code.
Below are some White-box testing techniques:
Control-flow testing - The purpose of control-flow testing is to set up test
cases that cover all statements and branch conditions. The branch
conditions are tested for being both true and false, so that all statements can
be covered.
Data-flow testing - This testing technique emphasizes covering all the data
variables included in the program. It tests where the variables were declared
and defined, and where they were used or changed.
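Control-flow testing can be illustrated with a minimal sketch (the function is an illustrative assumption): a single branch condition must be exercised both ways for every statement to execute.

```python
def absolute(x):
    # One branch condition: x < 0.
    if x < 0:
        return -x
    return x

# Control-flow testing: exercise the branch both ways so that every
# statement is executed at least once.
assert absolute(-5) == 5   # branch condition true
assert absolute(3) == 3    # branch condition false
```

With only one of the two test inputs, one of the `return` statements would never run, leaving part of the code uncovered.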
Testing Levels
Testing itself may be defined at various levels of the SDLC. The testing process runs
parallel to software development. Before jumping to the next stage, a stage is tested,
validated, and verified.
Separate testing is done just to make sure that there are no hidden bugs or issues left
in the software. Software is tested at various levels -
Unit Testing
While coding, the programmer performs some tests on that unit of the program to know if
it is error-free. Testing is performed under the white-box testing approach. Unit testing
helps developers decide whether individual units of the program work as per
requirement and are error-free.
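A minimal unit test, using Python's standard unittest module (the `add` helper is a hypothetical unit, not from the text), might look like this:

```python
import unittest

def add(a, b):
    # Unit under test: a hypothetical helper from the program.
    return a + b

class AddTests(unittest.TestCase):
    def test_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main(exit=False)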
Integration Testing
Even if the units of software are working fine individually, there is a need to find out
whether the units would also work without errors when integrated together, for example
in argument passing and data updating.
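As a sketch of what an integration test checks (both units are illustrative assumptions), two units can each pass their own tests yet still disagree about the data they exchange:

```python
def parse_amount(text):
    # Unit 1: convert a user-typed amount to integer cents.
    return round(float(text) * 100)

def format_amount(cents):
    # Unit 2: render integer cents back as a display string.
    return "%.2f" % (cents / 100)

# Integration test: verify the argument passing between the two units,
# i.e. that the output of one is a valid input for the other.
assert format_amount(parse_amount("12.5")) == "12.50"
```

If unit 2 expected a float of dollars rather than integer cents, both units would pass their unit tests but this integration test would fail.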
System Testing
The software is compiled as product and then it is tested as a whole. This can be
accomplished using one or more of the following tests:
Functionality testing - Tests all functionalities of the software against the
requirement.
Performance testing - This test proves how efficient the software is. It tests
the effectiveness and the average time the software takes to do a desired task.
Performance testing is done by means of load testing and stress testing, where
the software is put under high user and data loads under various environmental
conditions.
Security & Portability - These tests are done when the software is meant to
work on various platforms and be accessed by a number of people.
Acceptance Testing
When the software is ready to be handed over to the customer, it has to go through the last
phase of testing, where it is tested for user interaction and response. This is important
because even if the software matches all user requirements, if the user does not like the
way it appears or works, it may be rejected.
Alpha testing - The team of developers themselves performs alpha testing by
using the system as if it were being used in a work environment. They try to find
out how a user would react to some action in the software and how the system
should respond to inputs.
Beta testing - After the software is tested internally, it is handed over to
users to use in their production environment, for testing purposes only.
This is not yet the delivered product. Developers expect that users at this
stage will surface minor problems that were previously overlooked.
Regression Testing
Whenever a software product is updated with new code, feature or functionality, it is
tested thoroughly to detect if there is any negative impact of the added code. This is
known as regression testing.
Testing Documentation
Testing documents are prepared at different stages -
Before Testing
Testing starts with test-case generation. The following documents are needed for
reference -
SRS document - Functional Requirements document
Test Policy document - This describes how far testing should take place
before releasing the product.
Test Strategy document - This mentions detailed aspects of the test team, the
responsibility matrix, and the rights/responsibilities of the test manager and test
engineers.
Traceability Matrix document - This is an SDLC document related to the
requirement-gathering process. As new requirements come in, they are added to
this matrix. These matrices help testers know the source of a requirement.
Requirements can be traced forward and backward.
While Being Tested
The following documents may be required while testing is being carried out:
Test Case document - This document contains list of tests required to be
conducted. It includes Unit test plan, Integration test plan, System test plan
and Acceptance test plan.
Test description - This document is a detailed description of all test cases and
procedures to execute them.
Test case report - This document contains test case report as a result of the
test.
Test logs - This document contains test logs for every test case report.
After Testing
The following documents may be generated after testing :
Test summary - The test summary is a collective analysis of all test reports
and logs. It summarizes and concludes whether the software is ready to be launched.
The software is released under a version control system if it is ready to launch.
Testing vs. Quality Control, Quality Assurance and Audit
We need to understand that software testing is different from software quality
assurance, software quality control and software auditing.
Software quality assurance - This is a means of monitoring the software
development process, by which it is ensured that all measures are taken as
per the standards of the organization. This monitoring is done to make sure that
proper software development methods were followed.
Software quality control - This is a system to maintain the quality of the
software product. It may include functional and non-functional aspects of the
software product, which enhance the goodwill of the organization. This
system makes sure that the customer receives a quality product for their
requirement and that the product is certified as fit for use.
Software audit - This is a review of the procedures used by the organization to
develop the software. A team of auditors, independent of the development team,
examines the software process, procedures, requirements, and other aspects of the
SDLC. The purpose of a software audit is to check that the software and its
development process both conform to standards, rules, and regulations.
Software Reliability:
First definition
Software reliability is defined as the probability of failure-free
operation of a software system for a specified time in a specified
environment.
Key elements of the above definition
Probability of failure-free operation
Length of time of failure-free operation
A given execution environment
Example
The probability that a PC in a store is up and running for
eight hours without a crash is 0.99.
Second definition
Failure intensity is a measure of the reliability of a software system
operating in a given environment.
Example: An air traffic control system fails once in two years.
Reliability is a broad concept.
It is applied whenever we expect something to behave in a certain
way.
Reliability is one of the metrics that are used to measure quality.
It is a user-oriented quality factor relating to system operation.
Intuitively, if the users of a system rarely experience failure, the
system is considered to be more reliable than one that fails more
often.
A system without faults is considered to be highly reliable.
Constructing a correct system is a difficult task.
Even an incorrect system may be considered to be reliable if the
frequency of failure is acceptable.
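The first definition can be made concrete under an assumed constant-failure-rate (exponential) model; neither the model nor the 800-hour figure below comes from the text, they are illustrative assumptions consistent with the store-PC example above:

```python
import math

def reliability(failure_rate, hours):
    # Assuming a constant failure rate (exponential model),
    # R(t) = exp(-lambda * t) is the probability of operating
    # failure-free for t hours.
    return math.exp(-failure_rate * hours)

# A PC that fails on average once every 800 hours gives roughly the
# 0.99 probability of running failure-free for 8 hours quoted above.
assert abs(reliability(1 / 800, 8) - 0.99) < 0.001
```

This ties the three key elements together: a probability, a length of failure-free time, and an assumed operating environment captured in the failure rate.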
Key concepts in discussing reliability:
Fault
Failure
Time
Three kinds of time intervals: MTTR, MTTF, MTBF
Failure
A failure is said to occur if the observable outcome of a program
execution is different from the expected outcome.
Fault
The adjudged cause of failure is called a fault.
Example: A failure may be caused by a defective block of code.
Time
Time is a key concept in the formulation of reliability. If the time gap
between two successive failures is short, we say that the system is
less reliable.
Two forms of time are considered.
Execution time (τ)
Calendar time (t)
MTTF: Mean Time To Failure
MTTR: Mean Time To Repair
MTBF: Mean Time Between Failures (= MTTF + MTTR)
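A small numerical sketch of these three intervals (the uptime figures are assumed data for illustration):

```python
def mttf(uptimes):
    # MTTF: the mean observed failure-free operating time.
    return sum(uptimes) / len(uptimes)

def mtbf(mean_time_to_failure, mean_time_to_repair):
    # MTBF = MTTF + MTTR: average time from one failure to the next,
    # including the repair time.
    return mean_time_to_failure + mean_time_to_repair

uptimes = [480, 500, 520]   # hours of operation before each failure
assert mttf(uptimes) == 500.0
assert mtbf(mttf(uptimes), 2) == 502.0   # MTTR assumed to be 2 hours
```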
Two ways to measure reliability
Counting failures in periodic intervals
Observe the trend of the cumulative failure count - μ(τ).
Failure intensity
Observe the trend of the number of failures per unit time - λ(τ).
μ(τ)
This denotes the total number of failures observed until execution
time τ from the beginning of system execution.
λ(τ)
This denotes the number of failures observed per unit time after τ
time units of executing the system from the beginning. This is also
called the failure intensity at time τ.
Relationship between μ(τ) and λ(τ)
λ(τ) = dμ(τ)/dτ
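The derivative relationship can be checked numerically under an assumed exponential reliability-growth model μ(τ) = a(1 − e^(−bτ)), whose analytic derivative is λ(τ) = ab·e^(−bτ); the model and its parameters are assumptions for illustration, not from the text:

```python
import math

A, B = 100.0, 0.05   # assumed model parameters

def mu(tau):
    # Cumulative failures observed by execution time tau.
    return A * (1 - math.exp(-B * tau))

def lam(tau):
    # Failure intensity: the analytic derivative of mu.
    return A * B * math.exp(-B * tau)

# A central-difference estimate of d mu / d tau should match lam.
tau, h = 10.0, 1e-6
numeric = (mu(tau + h) - mu(tau - h)) / (2 * h)
assert abs(numeric - lam(tau)) < 1e-4
```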
Defect Testing:
The goal of defect testing is to discover defects in programs
A successful defect test is a test which causes a program to behave in an
anomalous way
Tests show the presence, not the absence, of defects
To discover faults or defects in the software where its behaviour is
incorrect or not in conformance with its specification;
A successful test is a test that makes the system perform incorrectly and so
exposes a defect in the system.
Diagram: the defect testing process produces test cases, test data, test
results, and test reports.
Software safety:
Diagram
Software has been built into more and more products and systems over the years
and has taken on more and more of the functionality of those systems. The
question is: how dependable is the functionality provided by software? The
traditional approach to verification of functionality - try it out and see if it works
- is of limited value in the case of software which can be much more complex
than hardware.
On average, the cost of software maintenance is more than 50% of the cost of all SDLC
phases. There are various factors that drive maintenance costs high, such as:
Real-world factors affecting Maintenance Cost
Older software, which was meant to work on slow machines with less
memory and storage capacity, cannot remain competitive against newly
arriving enhanced software on modern hardware.
Most maintenance engineers are newcomers and use trial-and-error methods to
rectify problems.
Often, changes made can easily hurt the original structure of the software,
making it hard for any subsequent changes.
Changes are often left undocumented which may cause more conflicts in
future.
Software-end factors affecting Maintenance Cost
Software Re-engineering
When we need to update software to keep it current with the market, without
impacting its functionality, this is called software re-engineering. It is a thorough process
in which the design of the software is changed and programs are re-written.
Legacy software cannot keep up with the latest technology available in the market.
As the hardware becomes obsolete, updating the software becomes a headache. Even if
software grows old with time, its functionality does not.
For example, initially Unix was developed in assembly language. When language C
came into existence, Unix was re-engineered in C, because working in assembly
language was difficult.
Other than this, sometimes programmers notice that few parts of software need more
maintenance than others and they also need re-engineering.
Re-Engineering Process
Program Restructuring
It is a process to re-structure and re-construct the existing software. It is all about re-
arranging the source code, either in the same programming language or from one
programming language to a different one. Restructuring can involve source-code
restructuring, data restructuring, or both.
Re-structuring does not impact the functionality of the software but enhances reliability
and maintainability. Program components that cause errors very frequently can be
changed or updated with re-structuring.
The dependency of software on an obsolete hardware platform can be removed via re-
structuring.
Forward Engineering
Forward engineering is a process of obtaining desired software from the specifications
in hand, which were brought down by means of reverse engineering. It assumes that
some software engineering has already been done in the past.
Forward engineering is the same as the software engineering process, with only one
difference: it is always carried out after reverse engineering.
Component reusability
A component is a part of software program code, which executes an independent task
in the system. It can be a small module or sub-system itself.
Example
The login procedures used on the web can be considered components; the printing
system in software can be seen as a component of the software.
Components have high cohesion of functionality and a lower rate of coupling, i.e., they
work independently and can perform tasks without depending on other modules.
In OOP, objects are designed to be very specific to their concern and have fewer
chances of being used in some other software.
In modular programming, modules are coded to perform specific tasks that can
be used across a number of other software programs.
There is a whole new vertical based on the re-use of software components, known
as Component Based Software Engineering (CBSE).
Re-use can be done at various levels
Application level - Where an entire application is used as sub-system of new
software.
Component level - Where sub-system of an application is used.
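A minimal sketch of a reusable component, echoing the login-procedure example above (the function and credential data are hypothetical illustrations):

```python
# A reusable component: an authentication check with high cohesion
# (one job) and low coupling (it depends only on its inputs).
def authenticate(username, password, credential_store):
    # credential_store maps usernames to passwords; each application
    # supplies its own store, so the component is reused unchanged.
    return credential_store.get(username) == password

# Reused by two different "applications":
web_users = {"alice": "s3cret"}
desktop_users = {"bob": "hunter2"}
assert authenticate("alice", "s3cret", web_users)
assert not authenticate("bob", "wrong", desktop_users)
```

Because the component carries no hidden dependencies on either application, it can be dropped into a third program without modification.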
Defect tracking: It ensures that every defect has traceability back to its
source.
SDLC Activities
Software Development Life Cycle, SDLC for short, is a well-defined,
structured sequence of stages in software engineering to develop the intended
software product.
Communication
This is the first step where the user initiates the request for a desired software product.
He contacts the service provider and tries to negotiate the terms. He submits his
request to the service providing organization in writing.
Requirement Gathering
From this step onwards, the software development team works to carry on the project. The
team holds discussions with various stakeholders from the problem domain and tries to
bring out as much information as possible about their requirements. The requirements are
contemplated and segregated into user requirements, system requirements, and
functional requirements. The requirements are collected using a number of practices, as
given -
Evolution starts from the requirement-gathering process, after which developers create
a prototype of the intended software and show it to the users to get their feedback at
an early stage of software product development. The users suggest changes, and
several consecutive updates and maintenance cycles change the original software
accordingly, until the desired software is accomplished.
Even after the user has the desired software in hand, advancing technology and
changing requirements force the software product to change accordingly. Re-creating
software from scratch to match requirements one-on-one is not feasible. The only
feasible and economical solution is to update the existing software so that it matches
the latest requirements.
4. End user - Manuals for the end-user, system administrators and support
staff.
Quality definition:
developed software.
- During the 1950s and 1960s, programmers controlled their product
quality.
Testing of Software
Control of Change (Assess the need for change, document the change)
Measurement (Software Metrics to measure the quality, quantifiable)
--------------------------------------------------------------------------------