
Q) What is software testing?

Different levels of software testing

Defn: Testing is the process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements. Testing is a process of executing a program with the intent of finding errors.

Measurements show that a defect discovered during design that costs $1 to rectify
will cost $1,000 to repair if detected in later stages. This is a tremendous cost
differential & clearly points out the advantage of early error detection.

Levels of testing

1. Unit Testing: The most ‘micro’ scale of testing; to test particular functions
or code modules. Typically done by the programmer & not by testers, as it
requires detailed knowledge of the internal program design & code.

2. Incremental Integration Testing: A bottom-up approach, i.e. continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to be tested separately. Done by programmers or by testers.

3. Integration testing: Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

4. System testing: The entire system is tested as per the requirements. Black-box type testing that is based on overall requirements specifications and covers all combined parts of a system.

5. Acceptance testing: Normally this type of testing is done to verify whether the system meets the customer-specified requirements. The user or customer does this testing to determine whether to accept the application.

1. Alpha testing – An in-house virtual user environment can be created for this type of testing. Testing is done at the end of development; minor design changes may still be made as a result of such testing. The user enters dummy data into the system at the end of development and checks the system against the requirements.

2. Beta testing – Testing typically done by end users or others; the final testing before releasing the application for commercial use. Testing is done at the customer's organization. End users test the system against the requirements by entering live data.
Q2. Explain black box testing and white box testing used in software testing

One can broadly classify software testing into the following two types:

1. Black Box Testing

2. White Box Testing

BLACK BOX TESTING

Black box testing treats the system as a "black box", so it does not use knowledge of the internal structure or code. In other words, the test engineer need not know the internal working of the "black box" or application. The main focus in black box testing is on the functionality of the system as a whole. The term 'behavioral testing' is also used for black box testing.

Methods of Black box testing

1. Equivalence Partitioning:
This testing method divides the input domain of a program into classes of data from which test cases can be derived.

a. Valid Input Set: An input set which, when given to our program, makes the program generate the expected output.

b. Invalid Input Set: An input set which, when given to our program, leads to an error-handling or exception-handling path.

2. Error Guessing:
This is purely based on the previous experience and judgment of the tester. Error guessing is the art of guessing where errors can be hidden.

3. Boundary Value Analysis:
Boundary Value Analysis (BVA) is a functional testing technique in which the extreme boundary values are chosen. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values.
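As a small illustration (not part of the original notes), the sketch below applies equivalence partitioning and boundary value analysis to a hypothetical validate_age() function whose valid input range is assumed to be 18-60:

    # Black-box test values for a hypothetical validate_age() function.
    def validate_age(age):
        """Hypothetical system under test: accepts ages 18-60 inclusive."""
        return 18 <= age <= 60

    # Equivalence partitioning: one representative value per class is enough.
    valid_representative = 35         # any value from the valid class 18-60
    invalid_low_representative = 5    # representative of the class below 18
    invalid_high_representative = 75  # representative of the class above 60

    # Boundary value analysis: values on and just either side of each boundary.
    accept = [18, 19, 59, 60, valid_representative]
    reject = [17, 61, invalid_low_representative, invalid_high_representative]

    for age in accept:
        assert validate_age(age), f"{age} should be accepted"
    for age in reject:
        assert not validate_age(age), f"{age} should be rejected"

    print("all black-box test values behaved as expected")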
Advantages of Black Box Testing: A) The tester can be non-technical. B) Used to verify contradictions between the actual system and the specifications. C) Test cases can be designed as soon as the functional specifications are complete.

Disadvantages of Black Box Testing: A) The test inputs need to be drawn from a large sample space. B) It is difficult to identify all possible inputs in limited testing time, so writing test cases is slow and difficult. C) There are chances of leaving paths unidentified during this testing.

WHITE BOX TESTING

White box testing involves looking at the structure of the code. When you know the internal structure of a product, tests can be conducted to ensure that the internal operations are performed according to the specification and that all internal components have been adequately exercised. White box testing is also sometimes called 'structural testing'.

Need of White Box Testing: To discover the following types of bugs:

I. Logical errors, which tend to creep into our work when we design and implement functions, conditions or controls that lie outside the mainstream of the program
II. Design errors due to differences between the logical flow of the program and the actual implementation
III. Typographical errors and syntax errors
METHODS OF WHITE BOX TESTING
1. Segment coverage:
Ensure that each code statement is executed at least once.
2. Branch Coverage or Node Testing:
Cover each branch of the code from all possible directions.
3. Compound Condition Coverage:
For multiple conditions, test each condition with multiple paths and combinations of different paths to reach that condition.
4. Basis Path Testing:
Each independent path in the code is taken up for testing.
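The following sketch (hypothetical code, added for illustration) shows how white-box test cases can be chosen so that every branch of a small function is exercised, and notes how basis path testing would extend this:

    # Choosing white-box test cases for branch coverage of a small function.
    def classify_discount(amount, is_member):
        """Hypothetical unit under test with two decision points."""
        if amount > 1000:          # decision 1: true / false
            discount = 10
        else:
            discount = 0
        if is_member:              # decision 2: true / false
            discount += 5
        return discount

    # Branch coverage: these three cases exercise every true and false outcome.
    assert classify_discount(1500, True) == 15    # decision1=True,  decision2=True
    assert classify_discount(500, False) == 0     # decision1=False, decision2=False
    assert classify_discount(1500, False) == 10   # decision1=True,  decision2=False

    # Basis path testing would treat each independent path as its own case:
    # cyclomatic complexity here is 2 decisions + 1 = 3 independent paths.
    print("branch-coverage cases passed")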

Short notes
TEST PLAN FORMAT
Defn: This plan contains all the details of the required resources, the testing approaches to be followed, the testing methodologies, the test cases, etc.

The test plan has to be prepared during the project planning stage itself and revised
before the testing process starts.

FORMAT
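The notes leave the format itself blank; as an illustrative sketch only (loosely following the common IEEE 829 style, not a prescribed format), a test plan typically contains sections such as:

1. Test plan identifier
2. Introduction and references
3. Test items and features to be tested / not to be tested
4. Test approach (levels, techniques, tools)
5. Item pass/fail criteria
6. Suspension criteria and resumption requirements
7. Test deliverables (test cases, test data, test reports)
8. Environmental needs, responsibilities, staffing and training
9. Schedule
10. Risks and contingencies
11. Approvals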
TEST DATA & TEST CASE

TEST CASE

• A test case is a document that describes an input, action, or event and an expected response, to determine whether a feature of an application is working correctly.

• A test case should contain particulars such as test case identifier, test case
name, objective, test conditions/setup, input data requirements, steps, and
expected results.

• The process of developing test cases can help find problems in the requirements or design of an application.
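As an illustrative sketch (field names follow the particulars listed above; the values are hypothetical, not from the notes), a single test case might be recorded like this:

    # One test case captured as a simple record; values are hypothetical.
    test_case = {
        "id": "TC_LOGIN_001",
        "name": "Valid login",
        "objective": "Verify that a registered user can log in",
        "setup": "User account 'alice' exists and is active",
        "input_data": {"username": "alice", "password": "correct-password"},
        "steps": ["Open the login page", "Enter the credentials", "Click 'Login'"],
        "expected_result": "User is redirected to the home page",
    }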

TEST DATA

If you are writing test cases, then you need input data for any kind of test.

I. The tester may provide this input data at the time of executing the test cases, or the application may pick the required input data from predefined data locations.

II. The test data may be any kind of input to the application, any kind of file that is loaded by the application, or entries read from database tables.

III. It may be in any format, such as XML test data, system test data or SQL test data.

What is the ideal test data?

Test data can be said to be ideal if, for the minimum size of data set, all the application errors get identified. The test data should not exceed the cost and time constraints for preparing the test data and running the tests.

Design your test data considering the following categories:


Test data set examples:

1) No data: Run your test cases on blank or default data. See if proper error messages are generated.

2) Valid data set: Create it to check whether the application is functioning as per requirements and valid input data is properly saved in the database or files.

3) Invalid data set: Prepare an invalid data set to check application behavior for negative values and alphanumeric string inputs.

4) Illegal data format: Make one data set in an illegal data format. The system should not accept data in an invalid or illegal format. Also check that proper error messages are generated.

5) Boundary condition data set: A data set containing out-of-range data. Identify application boundary cases and prepare a data set that will cover lower as well as upper boundary conditions.

6) Data set for performance, load and stress testing: This data set should be large in volume.
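The sketch below (hypothetical values, added for illustration) prepares the six data-set categories above for an assumed 'age' field that must be an integer between 18 and 60:

    # Test data sets for a hypothetical 'age' field (valid range assumed 18-60).
    test_data_sets = {
        "no_data": [None, ""],                        # blank / default input
        "valid": [18, 35, 60],                        # within requirements
        "invalid": [-5, 0, 17, 61],                   # out-of-range numbers
        "illegal_format": ["abc", "12.5.6", "&^%"],   # wrong type or format
        "boundary": [17, 18, 60, 61],                 # on and just outside the limits
        "performance_load": list(range(100000)),      # large volume for load/stress tests
    }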

SHORT NOTE : SOFTWARE DOCUMENTATION

Documentation requirements

• communication among team members

• act as an information repository

• Should describe to users how to operate and administer the system

In all software projects some amount of documentation should be created

- prior to any code being written, e.g. design docs, SRS doc, etc.

- Documentation should continue after the code has been completed, e.g. user's manuals, etc.
The two main types of documentation created are Process and Product
documents

Process Documentation

1) Used to record and track the development process

— Planning documentation

— Cost, Schedule, Fund tracking

— Schedules

— Standards

— Etc.

2) This documentation is created to allow successful management of a software project.

3) Process Documentation has a relatively short lifespan

4) Only important to the internal development process — except in cases where the customer requires a view into this data

Product Documentation

1) Describes the delivered product

2) Must evolve with the development of the software product

3) Two main categories:

a. System Documentation

b. User Documentation

System Documentation

— Describes how the system works, but not how to operate it

Examples:
— Requirements Specification

— Architectural Design

— Detailed Design

— Commented Source Code

— Test Plans

— List of Known Bugs

User Documentation

User documentation has two main user types:

1) End User

2) System Administrator

In some cases these are the same people

— The target audience must be well understood

Document Structure

All documents for a given product should have a similar structure

The IEEE Standard for User Documentation lists such a structure

— It is a superset of what most documents need

The author's "best practices" are:

— Put a cover page on all documents

— Divide documents into chapters with sections and subsections

— Add an index if there is lots of reference information

— Add a glossary to define ambiguous terms

WHAT IS SOFTWARE MAINTENANCE? ELABORATE DIFFERENT TYPES OF SOFTWARE MAINTENANCE

The term "software maintenance" is used to describe the software engineering activities that occur following delivery of a software product to the customer. Maintenance activities involve making enhancements to software products, adapting products to new environments and correcting problems. Typically the maintenance phase spans 5 to 10 years.

Types of Software Maintenance

There are four different types of software maintenance.

1. Corrective Maintenance :Maintenance to repair software faults

 Corrective maintenance deals with the repair of faults or defects found. A defect can result from design errors, logic errors and coding errors.

 Design errors occur when, e.g., changes made to the software are incorrect, incomplete, wrongly communicated or the change request is misunderstood.

 Logic errors result from invalid tests and conclusions, incorrect implementation of design specifications, faulty logic flow or incomplete testing of data.

 Coding errors are caused by incorrect implementation of the detailed logic design and incorrect use of the source code logic. Defects are also caused by data processing errors and system performance errors.

 All these errors, sometimes called 'residual errors' or 'bugs', prevent the software from conforming to its agreed specification. The need for corrective maintenance is usually initiated by bug reports drawn up by the end users.

2. Adaptive Maintenance: Maintenance to adapt the software to a different operating environment

 This type of maintenance is required when some aspect of the system's environment, such as the hardware, the platform, the operating system or other support software, changes. The application system must be modified to adapt it to cope with these environment changes.

 The term environment in this context refers to the totality of all conditions and influences which act from outside upon the system.

 For example: business rules, government policies, work patterns, software and hardware operating platforms.

 Sometimes a change introduced in one part of the system requires changes to other parts. Adaptive maintenance is the implementation of these secondary changes.

 The need for adaptive maintenance can only be recognized by monitoring the environment.

3. Perfective Maintenance: Maintenance to add or modify the system's functionality

 Perfective maintenance mainly deals with accommodating new or changed user requirements.

 Perfective maintenance concerns functional enhancements to the system and activities to increase the system's performance or to enhance its user interface.

 As we maintain a system, we examine documents, design, code, and tests, looking for opportunities for improvement.

 A re-design may enhance future maintenance and make it easier for us to add new functions in the future.

 Perfective maintenance involves making changes to improve some aspect of the system, even when the changes are not suggested by faults.

4. Preventive Maintenance: Maintenance to find an actual or potential fault that has not yet become a failure and take action to correct the fault before damage is done

 Preventive maintenance concerns activities aimed at increasing the system's maintainability, such as updating documentation, adding comments, and improving the modular structure of the system.

 The long-term effect of corrective, adaptive and perfective changes increases the system's complexity. As a large program is continuously changed, its complexity, which reflects deteriorating structure, increases unless work is done to maintain or reduce it. This work is known as preventive change.

 Examples of preventive change include restructuring and optimizing code and updating documentation.

 Preventive maintenance usually results when a programmer or code analyzer finds an actual or potential fault that has not yet become a failure and takes action to correct the fault before damage is done.

SHORT NOTES

1) SOFTWARE CONFIGURATION MANAGEMENT

2) VERSION CONTROL

SOFTWARE CONFIGURATION MANAGEMENT

1) New versions of software systems are created as they change:
• For different machines/OS;
• Offering different functionality;
• Customized for particular user requirements.
2) Configuration management is concerned with managing evolving software
systems:
• Configuration Management aims to control the costs and effort
involved in making changes to a system.
3) Involves the development and application of procedures and standards to
manage an evolving software product.
4) Configuration Management may be seen as part of a more general quality
management process.
5) When released to Configuration Management, software systems are
sometimes called baselines as they are a starting point for further
development.
6) All products of the software process may have to be managed:
• Specifications;
• Designs;
• Programs;
• Test data;
• User manuals.
7) Thousands of separate documents may be generated for a large,
complex software system.

Software Version Control


 Software source code & its documents are the products of multiple iterations & the efforts of multiple developers. The final release comes after multiple revisions.
 The revisions may be carried out to remove some bugs, add some special
functionality, add some environmental adaptability, or for any other
reasons.
 Due to multiple revisions, the development team is left with a number of
versions of the S/W.
 Software version control (SVC) is the management process of
controlling multiple versions of the same S/W & controlling read &
write access for the group of authenticated Developers.
 Traditional version control systems work to control the write access on
version items in the form of a centralized model by keeping them in a shared
system.
 The shared system is commonly known as common server or simply server.
Any authenticated developer wishing to make changes to any item can
access it & make changes as required.
 The risk of concurrent access & overwriting of changes is handled by file
locking. File locking provides only one developer with the write access to the
item.
 Once a developer 'checks out' an item, i.e. captures write access to the item, the others can have only read permission to it. After making the required alterations to the item, the developer needs to 'check in' the item to release its write permission for others.
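The toy sketch below models the check-out / check-in locking described above. It is a conceptual illustration only, not the API of any real version control tool:

    # Conceptual model of lock-based write access on a version-controlled item.
    class VersionItem:
        def __init__(self, name):
            self.name = name
            self.locked_by = None          # None means nobody holds write access

        def check_out(self, developer):
            # Grant exclusive write access, or refuse if someone else holds the lock.
            if self.locked_by is not None:
                raise RuntimeError(f"{self.name} is locked by {self.locked_by}; read-only")
            self.locked_by = developer

        def check_in(self, developer):
            # Release the lock so that other developers can check the item out.
            if self.locked_by != developer:
                raise RuntimeError(f"{developer} does not hold the lock on {self.name}")
            self.locked_by = None

    item = VersionItem("billing_module.c")
    item.check_out("dev_a")      # dev_a now has exclusive write access
    # item.check_out("dev_b")    # would raise: dev_b gets read access only
    item.check_in("dev_a")       # write permission released for others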

DESCRIBE THE SOFTWARE RISK MANAGEMENT PROCESS IN DETAIL

Risk Management

— Risk management is concerned with identifying risks and drawing up plans to minimize their effect on a project.
— A risk is a probability that some adverse circumstance will occur.
— Project risks affect the schedule or resources;
— Product risks affect the quality or performance of the software being developed;
— Business risks affect the organization developing or procuring the software.
The risk management process

a. Risk identification: Identify project, product and business risks;

b. Risk analysis: Assess the likelihood and consequences of these risks;

c. Risk planning: Draw up plans to avoid or minimise the effects of the risks;

d. Risk monitoring: Monitor the risks throughout the project.

Risk Identification

— Technology risks. E.g.: database working at slower speed, fewer transactions per second, software to be reused has bugs
— People risks. E.g.: key staff ill, skilled staff difficult to find, staff not trained properly
— Organisational risks. E.g.: organization restructuring, financial problems in the organization
— Requirements risks. E.g.: customer makes changes to requirements, doesn't understand the impact
— Estimation risks. E.g.: size of software, time required, etc. underestimated

Risk analysis
— Assess probability and seriousness of each risk

— Probability may be very low, low, moderate, high or very high.

— Risk effects might be catastrophic, serious, tolerable or insignificant.
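As an illustrative sketch (the entries are hypothetical examples, not from the notes), the output of risk analysis can be recorded as a simple risk register combining the risk types above with a probability and an effect rating:

    # A minimal risk register: each risk gets a probability and an effect rating.
    risk_register = [
        {"risk": "Key staff ill at a critical time", "type": "people",
         "probability": "moderate", "effect": "serious"},
        {"risk": "Database cannot deliver the required transactions per second",
         "type": "technology", "probability": "low", "effect": "catastrophic"},
        {"risk": "Size of the software is underestimated", "type": "estimation",
         "probability": "high", "effect": "tolerable"},
    ]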

Risk planning

Consider each risk and develop a strategy to manage that risk.

— Avoidance strategies: the probability that the risk will arise is reduced;
— Minimisation strategies: the impact of the risk on the project or product will be reduced;
— Contingency plans: if the risk arises, contingency plans are plans to deal with that risk.
Risk monitoring
— Assess each identified risk regularly to decide whether or not it is becoming less or more probable.
— Also assess whether the effects of the risk have changed.
— Each key risk should be discussed at management progress meetings.

Why do we get poor quality software when the project is big and complex?
In software organizations, it is observed that when a project is big & complex, the software development team ends up with poor quality software which requires maintenance to fix the problems or continuous upgrading of the system.
The following reasons can result in poor quality software.
1) Understanding of requirements – Software development team builds
software based on requirements given by customer. If customer is not clear
with the requirements or he is not able to construct or communicate his
requirements to analysts then the software developed by development team
will never satisfy the customer. On the other side, if customer is able to
communicate all the requirements but analyst is not able to understand them
or communicate them to other team members then it will end up with poor
quality software.
2) Poor scope – Customer & Analysts are unable to define the software
scope well in advance. Hence some of the requirements are unnoticed. Such
requirements will be added in maintenance phase.
3) Change management is poor – The software development team must identify whether the changes suggested by the user are valid or not. If they are valid, the
team must study the effect of that change on working of entire software (in
terms of functionalities, in terms of data). Team must identify the effect of
changes on documentation. The changes must be reflected to documents.
Review of change must be taken.
4) Technology changes – In software industry, the technology upgrades
from time to time. If we develop a system with one technology & if the
technology is outdated after few months or years then we have to upgrade
the system for new technology.
5) Business needs changes – Today, team is working on one problem. If
after some time, another problem becomes important & management
neglects the first problem & attempts to solve second problem, the
developed solution’s quality can be questioned.
6) Deadline is unrealistic – The development schedule is following some
deadline for completion. If the deadline is unrealistic then the schedule
prepared will be too tight. Then the required quality will not be implemented
because of short time available for development.
7) Users are resistant – The new system will force some changes in
organization. The users never accept the changes easily.
8) Sponsorship is lost – The budget sanctioned by management team for
our project is diverted to some other project. In this case, development team
will face a problem of cost overrun & will not be able to develop quality
software.
9) Team lacks people with appropriate skill – The skills required for
developing the application are not present in the team, hence the required
quality cannot be implemented in the delivered product.
10) Managers avoid best practices & lessons learned – The software
development project is based on the past data & experience as well as
standard tools & techniques in software engineering. It is observed that the
project leaders never use past experience & standard tools & techniques.

DESCRIBE DIFFERENT QA TECHNIQUES IN SOFTWARE QUALITY MANAGEMENT

Software Quality Management is concerned with ensuring that the required level of
quality is achieved in a software product.
Different techniques are:
1. Formal Technical Review (FTR)
2. Walkthroughs
3. Software Inspection

Formal Technical Review (FTR)

Formal Technical Review (FTR) is a software quality assurance activity that is performed by software engineers.
The objectives of the FTR are:
a. To uncover errors in function, logic, or implementation for any part of the software.
b. To verify that the software under review meets its requirements.
c. To ensure that the software has been represented according to predefined standards.
d. To achieve software that is developed in a uniform manner.
e. To make projects more manageable.
The FTR serves:
a.) As a training ground, enabling junior engineers to observe different approaches to software analysis, design and implementation.
b.) To promote backup and continuity, because a number of people become familiar with parts of the software that they may not otherwise have seen.

Walkthroughs
A structured walkthrough is an in-depth, technical review of some aspect of a software system. In a walkthrough session, the material being examined is presented by the reviewee. The reviewee "walks through" that work product, and reviewers raise questions on issues of concern. A walkthrough is not a project review, but rather an in-depth examination of selected work products by individuals who are qualified to offer expert opinions.
A walkthrough team usually consists of the reviewee and three to five reviewers. Members of a walkthrough team may include the project leader, other members of the project team, a representative from the quality assurance group, a technical writer, and other technical personnel who have an interest in the project. Customers and users should be included in walkthroughs. Higher-level managers should not attend walkthroughs. The goal of a walkthrough is to discover and make note of problem areas; problems are not resolved during the walkthrough session.

Software Inspection
Inspection, in software project management, refers to peer review of any work product by trained individuals who look for defects using a well-defined process.
In an inspection, a work product is selected for review and a team is gathered for an inspection meeting to review the work product.
The goal of the inspection is to identify defects. For example, if the team is inspecting a software requirements specification, each defect will be text in the document with which an inspector disagrees.

Inspection process:
The stages in the inspection process are: Planning, Overview meeting, Preparation, Inspection meeting, Rework and Follow-up. The Preparation, Inspection meeting and Rework stages might be iterated.
Planning: The inspection is planned by the moderator.
Overview meeting: The author describes the background of the work product.
Preparation: Each inspector examines the work product to identify possible defects.
Inspection meeting: During this meeting the reader reads through the work product, part by part, and the inspectors point out the defects for every part.
Rework: The author makes changes to the work product according to the action plans from the inspection meeting.
Follow-up: The changes by the author are checked to make sure everything is correct. The process is ended by the moderator when it satisfies some predefined exit criteria.
DIFFERENCE BETWEEN QA & QC ( LOOKS IMP TO ME)
SHORT NOTES
1.ISO CERTIFICATION FOR SOFTWARE ORG
2.SEI CMM
3.SOFTWARE QUALITY PARAMETERS

ISO CERTIFICATION FOR SOFTWARE ORG (have a look, but I think it's filler; I say leave it)
1) The ISO 9001,9002, and 9003 standards concern quality systems that are
assessed by outside auditors, and they apply to many kinds of production &
manufacturing organizations, not just software.
2) The most comprehensive is 9001, and this is the one most often used by
software development organizations. It covers documentation, design,
development, production, testing, installation, servicing, and other processes.
3) The sections of the ISO 9000 standard that deal with software are ISO 9001 and ISO 9000-3.
4) ISO 9000-3 is a guideline for applying ISO 9001 to software development organizations.
5) To be ISO 9001 certified, a third-party auditor assesses an organization, and
certification is typically good for about 3 years, after which a complete
reassessment is required.
Some of the requirements in ISO 9000-3 include:
a. Develop detailed quality plans and procedures to control configuration
management, product verification & validation (testing), nonconformance
(bugs), and corrective actions (fixes).
b. Prepare & receive approval for a software development plan that includes a definition of the project, a list of the project's objectives, a project schedule, a product specification, a description of how the project is organized, a discussion of risks and assumptions, and strategies for controlling it.
c. Communicate the specifications in terms that make it easy for the customer
to understand and to validate during testing.
d. Plan, develop, document & perform software design review procedures.
e. Develop procedures that control software design changes made over the
product’s life cycle.
f. Develop and document software test plan.

g. Develop methods to test whether the software meets the customer’s


requirements.
h. Perform software validation and acceptance tests.
i. Maintain records of the test results.
j. Control how software bugs are investigated and resolved.
k. Prove that the product is ready before it’s released.
l. Identify and define what quality information should be collected.
m. Use statistical techniques to analyze the software development process.
n. Use statistical techniques to evaluate product quality.
Note that ISO 9000 certification does not necessarily indicate quality products –
it indicates only that documented processes are followed.

SEI CMM
SEI: ‘Software Engineering Institute’ at Carnegie-Mellon University established to
help improve software development process.

CMM: ‘Capability Maturity Model’. It is an industry standard model for defining and
measuring the maturity of software company’s development process and for
providing direction on what they can do to improve their software quality. It was
developed by the SEI.
Level 1 Initial :

The software development processes at this level are ad hoc and often chaotic. The project's success depends on heroes & luck. There are no general practices. It's impossible to predict the time and cost to develop the software. The test process is just as ad hoc as the rest of the process.

Level 2 Repeatable:

This maturity level is best described as project-level thinking. Basic project management processes are in place to track the cost, schedule, functionality, and quality of the project. Lessons learned from previous similar projects are applied. There is a sense of discipline. Basic software testing practices, such as test plans & test cases, are used.

Level 3 Defined : Organizational, not just project specific, thinking comes into
play at this level. Common management & engineering activities are standardized
& documented. These standards are adapted & approved for use on different
projects. Test documents & plans are reviewed & approved before testing begins.
The test group is independent from the developers. The test results are used to
determine when the software is ready.

Level 4 Managed: At this maturity level, the organization's process is under statistical control. Product quality is specified quantitatively beforehand (e.g. this product won't release until it has fewer than 0.5 defects per 1,000 lines of code) and the software isn't released until that goal is met. Details of the development process and the software's quality are collected over the project's development, & adjustments are made to correct deviations & to keep the project on plan.

Level 5 Optimizing: This level is called Optimizing (not 'optimized') because it's continually improving from Level 4. New technologies & processes are attempted, the results are measured, and both incremental and revolutionary changes are instituted to achieve even better quality levels. Just when everyone thinks the best has been obtained, the crank is turned one more time, and the next level of improvement is obtained.

SOFTWARE QUALITY PARAMETERS

Software quality parameters are non-functional requirements for a software program which are not called up by the customer's contract, but nevertheless are desirable requirements which enhance the quality of the software program. Some software quality parameters are listed here:

A) Understandability – clarity of purpose. This goes further than just a statement of purpose; all of the design and user documentation must be clearly written so that it is easily understandable.
B) Completeness – presence of all constituent parts, with each part fully developed. This means that if the code calls a subroutine from an external library, the software package must provide reference to that library and all required parameters must be passed. All required input data must also be available.
C) Conciseness – minimization of excessive or redundant information or processing. This is important where memory capacity is limited, and it is generally considered good practice to keep lines of code to a minimum. It also applies to documents.
D) Portability – ability to be run well and easily on multiple computer configurations. Portability can mean both between different hardware — such as running on a PC as well as a smartphone — and between different operating systems — such as running on both Mac OS X and GNU/Linux.
E) Consistency – uniformity in notation, symbology, appearance, and terminology within itself.
F) Maintainability – propensity to facilitate updates to satisfy new requirements. Thus the software product that is maintainable should be well documented, should not be complex, and should have spare capacity for memory, storage, processor utilization and other resources.
G) Testability – should support acceptance criteria and evaluation of performance. Such a characteristic must be built in during the design phase if the product is to be easily testable; a complex design leads to poor testability.
H) Usability – convenience and practicality of use. This is affected by such things as the human-computer interface. The component of the software that has most impact on this is the user interface (UI), which for best usability is usually graphical (i.e. a GUI).
I) Reliability – ability to be expected to perform its intended functions satisfactorily. This implies a time factor in that a reliable product is expected to perform correctly over a period of time. It also encompasses environmental considerations in that the product is required to perform correctly in whatever conditions it finds itself (sometimes termed robustness).
J) Efficiency – fulfillment of purpose without waste of resources, such as memory, space, processor utilization, network bandwidth, time, etc.
K) Security – ability to protect data against unauthorized access and to withstand malicious or inadvertent interference with its operations. Besides the presence of appropriate security mechanisms such as authentication, access control and encryption, security also implies resilience in the face of malicious, intelligent and adaptive attackers.

SHORT NOTES
1)WBS
2)SOFTWARE SIZING

WBS
A complex project is made manageable by first breaking it down into individual components in a hierarchical structure, known as the work breakdown structure, or the WBS. Such a structure defines tasks that can be completed independently of other tasks, facilitating resource allocation, assignment of responsibilities, and measurement and control of the project.

Diagram: WBS
Higher levels in the structure generally are performed by groups. The lowest level in the hierarchy often comprises activities performed by individuals.

Level of Detail:
The breaking down of a project into its component parts helps in resource allocation and the assignment of individual responsibilities. Care should be taken to use a proper level of detail when creating the WBS.
On the one extreme, a very high level of detail is likely to result in micro-management.
On the other extreme, the tasks may become too large to manage effectively.
Defining tasks so that their duration is between several days and a few months works well for most projects.

WBS's Role in Project Planning: The work breakdown structure is the foundation of project planning. It is developed before dependencies are identified and activity durations are estimated. The WBS can be used to identify the tasks in the CPM and PERT project planning models.

Example
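As an illustrative sketch only (the project and task names are assumptions, not from the notes), a WBS for a small payroll system might be broken down as follows:

1. Payroll system
   1.1 Requirements
       1.1.1 Gather user requirements
       1.1.2 Prepare the SRS
   1.2 Design
       1.2.1 Architectural design
       1.2.2 Database design
   1.3 Construction
       1.3.1 Code the salary calculation module
       1.3.2 Code the reporting module
   1.4 Testing
       1.4.1 Prepare the test plan and test cases
       1.4.2 System and acceptance testing
   1.5 Deployment and user training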
SOFTWARE SIZING
Software sizing is an important activity in software engineering that is used to
estimate the size of a software application or component in order to be able to
implement other software project management activities (such as estimating or
tracking).

Software Sizing Measures: There are only two software sizing measures widely
used today
1) Lines of Code (LOC or KLOC) and
2) Function Points (FP)

Lines of Code
• Lines of Code is a measure of the size of the system after it is built.
• It is very dependent on the technology used to build the system, the system design, and how the programs are coded.
• The major disadvantages of LOC are that systems coded in different languages cannot be easily compared and efficient code is penalized by having a smaller size.
Function Points
• Function Points is a measure of delivered functionality that is relatively independent of the technology used to develop the system.
• FP is based on sizing the system by counting external components (inputs, outputs, external interfaces, files).
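As a hedged sketch (the component counts and the average complexity weights below are assumptions for illustration, following commonly cited IFPUG-style averages), an unadjusted function point count can be computed like this:

    # Unadjusted function point count with assumed average complexity weights.
    weights = {"inputs": 4, "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 7}
    counts  = {"inputs": 20, "outputs": 12, "inquiries": 8, "files": 6, "interfaces": 3}

    unadjusted_fp = sum(counts[k] * weights[k] for k in weights)
    print(unadjusted_fp)   # 20*4 + 12*5 + 8*4 + 6*10 + 3*7 = 253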

EXPLAIN DIFFERENT COCOMO MODELS USED FOR EFFORT ,TIME & COST
ESTIMATION

The Constructive Cost Model (COCOMO) is an algorithmic software cost estimation model developed by Barry Boehm. The model uses a basic regression formula, with parameters that are derived from historical project data and current project characteristics.
The first level, Basic COCOMO, is good for quick, early, rough order-of-magnitude estimates of software costs, but its accuracy is limited due to its lack of factors to account for differences in project attributes (cost drivers). Intermediate COCOMO takes these cost drivers into account, and Advanced COCOMO additionally accounts for the influence of individual project phases.
ALSO SHORT NOTE Basic COCOMO computes software development effort (and
cost) as a function of program size. Program size is expressed in estimated
thousands of lines of code (KLOC).

COCOMO applies to three classes of software projects:

• Organic projects - "small" teams with "good" experience working with "less
than rigid" requirements
• Semi-detached projects - "medium" teams with mixed experience working
with a mix of rigid and less than rigid requirements
• Embedded projects - developed within a set of "tight" constraints (hardware,
software, operational, ...)

The basic COCOMO equations take the form:

Effort Applied = a_b * (KLOC)^b_b  [person-months]
Development Time = c_b * (Effort Applied)^d_b  [months]
People Required = Effort Applied / Development Time  [count]

The coefficients a_b, b_b, c_b and d_b are given in the following table:

Software project    a_b    b_b    c_b    d_b
Organic             2.4    1.05   2.5    0.38
Semi-detached       3.0    1.12   2.5    0.35
Embedded            3.6    1.20   2.5    0.32

Basic COCOMO is good for quick estimate of software costs.
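As a worked sketch of the equations above (the 32 KLOC figure is a hypothetical example, not from the notes), Basic COCOMO for an organic-mode project gives:

    # Basic COCOMO for an assumed 32 KLOC organic-mode project.
    a_b, b_b, c_b, d_b = 2.4, 1.05, 2.5, 0.38

    kloc = 32
    effort = a_b * kloc ** b_b      # about 91 person-months
    time = c_b * effort ** d_b      # about 14 months
    people = effort / time          # about 6-7 people

    print(f"effort = {effort:.1f} PM, time = {time:.1f} months, people = {people:.1f}")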


ALSO SHORT NOTE: Intermediate COCOMO computes software development effort as a function of program size and a set of "cost drivers" that include subjective assessment of product, hardware, personnel and project attributes. This extension considers four categories of cost drivers, each with a number of subsidiary attributes:
• Product attributes
o Required software reliability
o Size of application database
o Complexity of the product
• Hardware attributes
o Run-time performance constraints
o Memory constraints
o Volatility of the virtual machine environment
o Required turnabout time
• Personnel attributes
o Analyst capability
o Software engineering capability
o Applications experience
o Virtual machine experience
o Programming language experience
• Project attributes
o Use of software tools
o Application of software engineering methods
o Required development schedule
Each of the 15 attributes receives a rating on a six-point scale that ranges from
"very low" to "extra high" (in importance or value). The product of all effort
multipliers results in an effort adjustment factor (EAF). Typical values for EAF range
from 0.9 to 1.4.
The Intermediate COCOMO formula now takes the form:

E = a_i * (KLoC)^b_i * EAF

where E is the effort applied in person-months, KLoC is the estimated number of thousands of delivered lines of code for the project, and EAF is the factor calculated above. The coefficient a_i and the exponent b_i are given in the next table.
Software project    a_i    b_i
Organic             3.2    1.05
Semi-detached       3.0    1.12
Embedded            2.8    1.20
The Development time D calculation uses E in the same way as in the Basic
COCOMO.
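Continuing the same hypothetical 32 KLOC organic project and assuming an EAF of 1.10 (both figures are illustrative assumptions), the Intermediate COCOMO effort would be:

    # Intermediate COCOMO: effort adjusted by an assumed EAF of 1.10.
    a_i, b_i = 3.2, 1.05
    kloc, eaf = 32, 1.10

    effort = a_i * kloc ** b_i * eaf    # about 134 person-months
    print(f"adjusted effort = {effort:.1f} person-months")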
Advanced COCOMO incorporates all characteristics of the intermediate version with an assessment of the cost drivers' impact on each step (analysis, design, etc.) of the software engineering process. The advanced model uses different effort multipliers for each cost driver attribute. These phase-sensitive effort multipliers are used to determine the amount of effort required to complete each phase.

SHORT NOTES : GANTT CHART

A Gantt chart is a type of bar chart that illustrates a project schedule. Gantt charts illustrate the start and finish dates of the terminal elements and summary elements of a project. Terminal elements and summary elements comprise the work breakdown structure of the project. Some Gantt charts also show the dependency (i.e., precedence network) relationships between activities. Gantt charts can be used to show current schedule status using percent-complete shadings and a vertical "TODAY" line.
