
Software Engineering Complete Notes

Software engineering applies systematic and quantifiable approaches to software development, operation, and maintenance, ensuring the creation of efficient and reliable software products. It encompasses various paradigms including software development, design, and programming, with models such as waterfall, incremental, RAD, prototyping, spiral, and concurrent engineering guiding the process. Effective project management focuses on people, product, process, and project, utilizing metrics to assess and improve software quality and productivity.

Uploaded by kandamadhuri

UNIT-I

Q) Explain software engineering?


Software Engineering:
The application of a systematic, disciplined, quantifiable approach to the development, operation, and
maintenance of software; that is, the application of engineering to software.
(OR)
Software engineering is associated with the development of software products using well-defined scientific
principles, methods, and procedures. The outcome of software engineering is an efficient and reliable software
product.
Need of software engineering:
1. Large software
2. Cost
3. Dynamic nature
4. Quality management
5. Scalability
Q) Software paradigms:

(Diagram: software development paradigm ⊃ software design paradigm ⊃ programming paradigm)

The programming paradigm is a subset of the software design paradigm, which in turn is a subset of the
software development paradigm.
1) Software development paradigm:
It includes the research and requirement gathering that help build the software product:
➢ Requirement gathering
➢ Software design
➢ Programming

2) Software design paradigm:


This paradigm is part of software development and includes
➢ Design
➢ Maintenance
➢ Programming
3) Programming paradigm:
This paradigm is closely related to the programming aspect of software development. It includes
➢ Coding
➢ Testing
➢ Integration
Characteristics of software:
A. Software is developed, but not manufactured.
B. Software does not wear out, i.e. it does not degrade with time as hardware does.
C. The failure curve of hardware follows the bathtub curve as a function of time; software has no such
wear-out region.
Practices in software engineering include:
I. Writing program using code reusability
II. ER- diagram (or) UML – diagrams.
III. Testing
IV. Quality assurance
V. Deployment
VI. Feedback
VII. Risk management
VIII. Measurement
Q) Explain about software engineering process paradigm models?
A process model provides a specific roadmap for software engineering work. It defines the flow of all
activities, actions and tasks, the degree of iteration, the work products, and the organization of the work that
must be done.
Process model helps in
➢ Software development
➢ Guide software engineering through a set of framework activities.
Prescriptive process model:
A prescriptive process model strives for structure and order in software development: activities and tasks
occur sequentially, with defined guidelines for the process.
The waterfall model (or) system development life cycle (or) classical life cycle (or) linear
sequential.
The waterfall model was the first process model to be introduced. It is also referred to as the linear-
sequential life cycle model. It is very simple to understand and use. In a waterfall model, each phase must be
completed before the next phase can begin, and there is no overlapping of phases.
The waterfall model is the earliest SDLC approach that was used for software development.
1. Communication:
In this phase, the customer and developers interact with each other so that the customer can give the
specifications to the developers.
Project initiation: feasibility of the proposed system (the system to be developed) is considered.
Requirement gathering:
The software engineer communicates with stakeholders to understand the information domain and formulate
design specifications.
Effective communication between the software engineer and stakeholder is needed.
2. Planning:
a) Estimating: Resources need for software development is to be estimated
b) Several techniques are adopted to monitor progress against plan.
3. Modeling: It can be divided into two ways
a. Analysis: system analyst must understand
• Information domain
• Required functions
• Performance
b. Design: Design is multi steps process focused on
• data dictionary
• user interfaces
• data structures
• algorithmic details.
4. Construction:
Coding: The process of writing application programs to given specifications.
Testing: The process of verifying the application for uncovered errors or irregularities.
5. Deployment: installing the product at the customer site.
I. Delivery
II. Support
III. Feedback
Limitations:
a) Difficult to state all requirement explicitly
b) Difficult to accommodate natural changes.
c) Customer must have patience.
d) If requirements are formulated poorly, the entire system fails.
Advantages:
I. Simple and easy to understand, implement, and use.
II. All the requirements are known at the beginning of the project, so it is easy to manage.
III. It avoids overlapping of phases because each phase is completed at once.
IV. Works for small projects because the requirements are understood very well.
Disadvantages:
I. This model is not good for complex and object-oriented projects.
II. It is a poor model for long projects.
Incremental model: The incremental process model focuses on the delivery of an operational product
with each increment. Early increments are stripped-down versions of the final product, but they do provide
capability that serves the user and also provide a platform for evaluation by the user.
➢ Feedback from users and others is gathered, and the software evolves through several versions until the
required system has been developed.
➢ The early increments of the system include the most important or most urgently required
functionality.
Each increment includes the following steps:


1. Communication: In this phase, the customer and developers interact with each other so that the customer
can give the specifications to the developers.
Project initiation: feasibility of the proposed system (the system to be developed) is considered.
Requirement gathering:
The software engineer communicates with stakeholders to understand the information domain and formulate
design specifications.
Effective communication between the software engineer and stakeholder is needed.
2. Planning/analysis:
a) Estimating: resources needed for software development are estimated.
b) Several techniques are adopted to monitor progress against the plan.
3. Modeling/design: It can be divided into two parts:
a. Analysis: the system analyst must understand the
• Information domain
• Required functions
• Performance
b. Design: Design is a multi-step process focused on
• data dictionary
• user interfaces
• data structures
• algorithmic details, UML diagrams.
4.Construction/code:
Coding: The process of writing application programs to given specifications.
5.Testing: The process of verifying the application for uncovered errors or irregularities.
6. Deployment: installing the product at the customer site.
I. Delivery
II. Support
III. Feedback
Advantages:
I. Useful when staffing is unavailable for a complete implementation by the deadline
II. Each increment can be implemented with few people
III. Core product can be developed & reviewed
IV. Increment can be planned to manage technical risk.
Disadvantages:
I. Process is not visible. Managers need regular deliverables to measure progress
II. System structure tends to degrade as new increments are added. Regular change leads to messy code as
new functionality is added.
RAD Model (Rapid Application Development):
The RAD model is an incremental software development process model that emphasizes an extremely
short development cycle. It targets developing software in a short span of time. The project can be broken down
into small modules, and each module can be assigned independently to separate teams. Development of each module
involves the same basic steps as in the waterfall model.

➢ RAD approach supports the following phases


I. Business modeling
II. Data modeling
III. Process modeling
IV. Application generation
V. Testing & turn over
Business modeling: The information flow among business functions is modeled in a way that answers the
following questions.
1. What information drives the business process?
2. What information is generated?
3. Who generates it?

Data modeling: Database objects are defined.

a) Entities
b) Attributes
c) Key attributes
d) Relations
e) Constraints

Process modeling: Process description is created for

(a) Retrieval of data (b) Deletion of data (c) Addition of data

(d) Modification of data

Application Generation:

✓ Fourth-generation techniques are adopted for writing applications


✓ Reuse the existing programs

✓ Create re-useable components when necessary


✓ Automated tools are used for the construction of S/W

Testing and Turnover

▪ Many of the components have already been tested. This reduces overall testing time.

▪ The new components are tested, and a variety of test cases is exercised.
Advantages:
• An incremental software process model
• Having a short development cycle
• Creates a fully functional system within a very short time span of 60 to 90 days
• Multiple software teams work in parallel on different functions.
Disadvantages:
• Not all types of application are appropriate for RAD
• Requires a number of RAD teams
• If system cannot be modularized properly project will fail.
• Not suited when technical risks are high

Prototyping model or Evolutionary process model:


The prototype model is a software development model in which a prototype is built based on the
currently known requirements. By using this prototype, the client can get an “actual feel” of the system,
since interaction with the prototype enables the client to better understand the requirements of the
desired system.
➢ The intention behind creating this model is to elicit the actual requirements more deeply from the
user.
➢ Prototyping helps a lot in getting feedback from the customer.

Advantages:
➢ Users are actively involved in the development
➢ Errors can be detected much earlier.
➢ Quicker user feedback is available leading to better solutions.
➢ Missing functionality can be identified easily
Disadvantages:
➢ Incomplete prototypes may cause the application not to be used as the full system was designed.
➢ Practically, this methodology may increase the complexity of the system.

The spiral model:


➢ It is a combination of the waterfall model and the iterative model.
➢ Each phase in the spiral model begins with a design goal and ends with the client reviewing the progress.
➢ Software is developed in series of incremental releases.
The Spiral model include the following phases

✓ Customer communication
✓ Planning
✓ Risk analysis
✓ Engineering
✓ Construction & release
✓ Customer evaluation
Customer communication: The software engineer communicates with stakeholders to understand the information
domain and formulate design specifications.
Effective communication between the software engineer and stakeholders is needed to formulate better
requirements.
Planning: Estimating: resources needed for software development are estimated.
Several techniques are adopted to monitor progress against the plan:
a. Work Breakdown Structures (WBS)

b. PERT (Program Evaluation and Review Technique)


c. GANTT Chart

Risk analysis: Assess both technical risks and management risks.


Engineering: Various software engineering techniques are adopted to develop the system.
E.g. Top-down approach, Bottom-up approach, Physical design

Construction & release:


Coding: The process of writing application programs to given specifications.

Testing: The process of verifying the application program for uncovered errors or
irregularities.

Customer evaluation:
Feedback: Details given by end user.

Feedback from end users can be collected using


i) Interviews

ii) Questionnaires
iii) Onsite observations

Advantages:
➢ Additional functionality or changes can be done at later stage.
➢ Development is fast & features are added in a systematic way.
➢ Cost estimation becomes easy.
➢ There is always space for customer feedback.
Disadvantages:
➢ There is a risk of not meeting the schedule or budget.
➢ Documentation is extensive, as the model has intermediate phases.
➢ It is not advisable for smaller projects; it might cost them a lot.

The concurrent engineering (or) concurrent development model:


➢ The concurrent development model, sometimes called concurrent engineering, provides an accurate
view of the current state of the project.
➢ It allows a software team to represent iterative and concurrent elements of any process model.
➢ It focuses on concurrent engineering activities in the software engineering process, such as prototyping,
modeling, designing, requirements analysis, and coding.
➢ All activities exist concurrently but reside in different states.
➢ The modeling activity (which existed in the inactive state while initial communication was being
completed) now makes a transition into the under-development state. If, however, the customer indicates
that changes in requirements must be made, the modeling activity moves from the under-development
state into the awaiting-changes state.
➢ It defines a series of events that trigger transitions from state to state.
➢ It is used when there is an immediate need for feedback after testing.
➢ It is used when more features are required to be added later in the project.
Advantages:
➢ Easy to understand and use
➢ Applicable to all types of software development processes
➢ It gives immediate feedback from testing
➢ It provides an accurate picture of current state of a project.
Disadvantages:
➢ It needs better communication between the team members.
➢ This may not be achieved all the time

Q) Write about process management


Effective software project management focuses on the four P’s: people, product, process,
and project.

1) The People
People management defines the following key practice areas for software people:
• RECRUITING
• SELECTION
• PERFORMANCE MANAGEMENT
• TRAINING.
1. The Players (Stakeholders):
The software process (and every software project) is populated by players who can be
categorized into one of five constituencies:

1. Senior managers, who define the business issues that influence the project.
2. Project (technical) managers: who must plan, motivate, organize, and control the
practitioners who do software work.
3. Practitioners, who deliver the technical skills that are needed to engineer a product or
application.
4. Customers, who specify the requirements for the software to be engineered (developed).
5. End-users who interact with the software once it is released for Production use.
2. The Product: Before a project can be planned

• product objectives
• scope should be established,
• alternative solutions should be considered,
• technical and management constraints should be identified.
Without this information, it is impossible to estimate the cost.

Software Scope:
The first software project management activity is the determination of software scope.
Problem Decomposition:
Problem decomposition, sometimes called partitioning or problem elaboration, is an activity that sits
at the core of software requirements analysis.

• Decomposition is applied in two major areas:


(1) The functionality that must be delivered and

(2) The process that will be used to deliver it.

3.The Process
✓ A software process provides the framework from which a comprehensive plan for
software development can be established.

✓ umbrella activities—such as software quality assurance, software configuration


management, and measurement—overlay the process model
✓ The project manager must decide which process model is most appropriate for
(1) The customers who have requested the product and the people who will do the
work.
(2) The characteristics of the product itself.
(3) The project environment in which the software team works.
4. The Project

✓ In order to avoid project failure, a software project manager and the software
engineers who build the product must heed a set of common warning signs and
understand the critical success factors that lead to good project management.
✓ In order to manage a successful software project, we must understand what can go
wrong and how to do it right.

Q) Write about the W5HH PRINCIPLES


• Barry Boehm [BOE96] states:
• “you need an organizing principle that scales down to provide simple [project] plans for
simple projects.”
• He calls it the WWWWWHH principle, after a series of questions that lead to a definition
of key project characteristics and the resultant project plan:
• Why is the system being developed?
The answer to this question enables all parties to assess the validity of business
reasons for the software work.
• What will be done, by when?
The answers to these questions help the team to establish a project schedule by identifying
key project tasks that are required by the customer.
• Who is responsible for a function?
The role and responsibility of each member of the software team must be defined.

• Where are they organizationally located?


Not all roles and responsibilities reside within the software team itself. The customer, users, and
other stakeholders also have responsibilities.
• How will the job be done technically and managerially?
Once product scope is established, a management and technical strategy for the project must be
defined.

• How much of each resource is needed?


The answer to this question is derived by developing estimates based on answers to earlier
questions.

Q) Write about Process and Project Metrics

❖ Metrics

• Software process and project metrics are quantitative measures

• They offer insight into the effectiveness of the software process and the projects that are
conducted using the process as a framework.

• Basic quality and productivity data are collected

• These data are analyzed, compared against past averages, and assessed

• Remedies can then be developed, and the software process can be improved
Process measurement/process metrics:

• Process metrics are collected across all projects and over long periods of time.

• They are used for making strategic decisions.


• We measure the effectiveness of a process by deriving a set of metrics based on
outcomes of the process such as

– Errors uncovered before release of the software

– Work products delivered

– Human effort expended

– Calendar time expended

Formalities of Process Metrics

• Use common sense and organizational sensitivity when interpreting metrics data

• Provide regular feedback to the individuals and teams who collect measures and
metrics
• Work with practitioners and teams to set clear goals and metrics that will be used
to achieve them

❖ Project metrics

• Project metrics enable a software project manager to

– Assess the status of an ongoing project

– Track potential risks

– Uncover problem areas before their status becomes critical

– Adjust workflow or tasks

– Evaluate the project team’s ability to control quality of software work products
Project metrics are used for making tactical decisions
Use of Project Metrics

Metrics from past projects are used as a basis for estimating time and effort
• Project metrics are used to

– Minimize the development schedule by making the adjustments necessary
to avoid delays and mitigate potential problems and risks
– Assess product quality on an ongoing basis and, when necessary, to
modify the technical approach to improve quality

Project metrics can be consolidated to create process metrics for an organization by using
1. Size-oriented Metrics

• Derived by normalizing quality and/or productivity measures by the size of the
software produced
• Thousand lines of code (KLOC) is often chosen as the normalization value:
KLOC = NCLOC + CLOC (non-comment lines of code + comment lines of code)
• Metrics include
– Errors per KLOC
– Defects per KLOC
– Dollars per KLOC
– Pages of documentation per KLOC
– Errors per person-month
– KLOC per person-month
– Dollars per page of documentation
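As a minimal sketch of how these size-normalized metrics are computed (all project figures below are made-up illustrative numbers, since the notes give none):

```python
# Sketch: size-oriented metrics for one project.
# All project figures below are illustrative assumptions.
kloc = 12.1       # thousand lines of code delivered
errors = 134      # errors found before release
defects = 29      # defects reported after release
effort_pm = 24.0  # person-months of effort expended

errors_per_kloc = errors / kloc    # quality, normalized by size
defects_per_kloc = defects / kloc
kloc_per_pm = kloc / effort_pm     # productivity

print(round(errors_per_kloc, 2), round(defects_per_kloc, 2), round(kloc_per_pm, 2))
```

Dividing by KLOC is what makes figures from projects of different sizes comparable.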


2. Function-oriented Metrics

➢ Function-oriented metrics use a measure of the functionality delivered by the
application as a normalization value.
➢ The most widely used metric of this type is the function point (FP):
FP = count total * [0.65 + 0.01 * sum (value adjustment factors)]
(OR)
FP = count total * [0.65 + 0.01 * sum(fi)]
where count total is the sum of all weighted FP entries.

3. Web Engineering Project Metrics:

Measures that can be collected are
1. Number of static Web pages
2. Number of dynamic Web pages
3. Number of internal page links
4. Number of persistent data objects
5. Number of external systems interfaced
6. Number of executable functions.
4. Object-oriented Metrics
➢ Number of scenario scripts (i.e., use cases)
➢ Number of key classes (the highly independent components)
➢ Number of support classes
➢ Average number of support classes per key class
➢ Number of subsystems.
Q) Write about software estimation.
• Estimation risk is measured by the degree of uncertainty in the quantitative estimates
established for resources, cost, and schedule.

• The project manager should not become obsessive about estimation.

• The objective of software project planning is to provide a framework that enables the
manager to make reasonable estimates of resources, cost, and schedule.
Resources

• The second software planning task is estimation of the resources required to
accomplish the software development effort.
1. Human Resources

• The planner begins by evaluating scope and selecting the skills required to complete
development.

2.Reusable Software Resources

• Component-based software engineering (CBSE) emphasizes reusability, that is, the
creation and reuse of software building blocks.

3. Environmental Resources

• The environment that supports the software project, often called the software
engineering environment (SEE), incorporates hardware and software.

Software Project Estimation

To achieve reliable cost and effort estimates, a number of options arise:

1. Delay estimation until late in the project (obviously, we can achieve 100% accurate
estimates after the project is complete!).

2. Base estimates on similar projects that have already been completed.

3. Use relatively simple decomposition techniques to generate project cost and
effort estimates.

4. Use one or more empirical models for software cost and effort estimation.

DECOMPOSITION TECHNIQUES

• Software project estimation is a form of problem solving.

• We decompose the problem, from two different points of view:

• Decomposition of the problem

• Decomposition of the process.


LOC and FP data are used in two ways during software project estimation:

(1) As an estimation variable to "size" each element of the software

(2) As baseline metrics collected from past projects with estimation variables to develop cost and
effort projections.

• In general, LOC/pm or FP/pm averages should be computed by project domain.


Steps involved:

➢ Decompose software into functions.

➢ Using historical information, the planner estimates optimistic, most likely and pessimistic value for each
function.

➢ Expected value is calculated using

EV =(sopt + 4sml + spess)/6


➢ The baseline metric is then applied to derive the cost of each function.

➢ Function estimates are combined to form project estimate.

Example:

Major functions of CAD:

• User interface and control facilities (UICF)

• Two-dimensional geometric analysis (2DGA)

• Three-dimensional geometric analysis (3DGA)

• Database management (DBM)

• Computer graphics display facilities (CGDF)

• Peripheral control function (PCF)

• Design analysis modules (DAM)


1. LOC Based Estimation:

A three-point estimation technique is used to estimate the LOC values.

Following the decomposition technique for LOC an estimation table is developed. A range of LOC
estimates is developed for each function. For example, the range of LOC estimates for 3D geometric
analysis function is optimistic 4600 LOC, most likely 6900 LOC, and pessimistic 8600 LOC.

Function   Estimated LOC
UICF            2300
2DGA            5300
3DGA            6800
DBM             3350
CGDF            4950
PCF             2100
DAM             8400
Estimated lines of code: 33,200

Ex: for 3D geometric analysis

Optimistic: 4600

Most likely: 6900

Pessimistic: 8600

Hence the expected value for 3DGA is

Expected value = (sopt+4sml+spess)/6

= (4600+4(6900) +8600)/6

= 6800.
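The worked calculation above can be sketched as a one-line function, using the 3DGA figures from the table:

```python
# Three-point (expected value) estimate: EV = (Sopt + 4*Sml + Spess) / 6
def expected_value(s_opt, s_ml, s_pess):
    return (s_opt + 4 * s_ml + s_pess) / 6

ev_3dga = expected_value(4600, 6900, 8600)
print(ev_3dga)  # 6800.0
```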

2.FP based Estimation:


• Decomposition for FP-based estimation focuses on information domain values rather than
software functions.

• Referring to the function point calculation table presented below,

• The project planner estimates inputs, outputs, inquiries, files, and external interfaces for the

CAD software.
Information domain value        Opt   Most likely   Pess   Est count   Weight   FP count
Number of inputs                 20        24        30        24         4        96
Number of outputs                12        15        22        16         5        80
Number of inquiries              16        22        28        22         4        88
Number of files                   4         4         5         4        10        40
Number of external interfaces     2         2         3         2         7        14
Count total                                                                       318
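A short sketch of the FP computation from this table. The estimated counts and weights come from the table above; the value-adjustment sum(fi) = 52 is an assumed figure, since the notes do not give one:

```python
# FP = count_total * [0.65 + 0.01 * sum(fi)]
# Estimated counts and weights are taken from the CAD table above.
counts_and_weights = [
    (24, 4),   # inputs
    (16, 5),   # outputs
    (22, 4),   # inquiries
    (4, 10),   # files
    (2, 7),    # external interfaces
]
count_total = sum(count * weight for count, weight in counts_and_weights)
sum_fi = 52  # assumed sum of the 14 value adjustment factors
fp = count_total * (0.65 + 0.01 * sum_fi)
print(count_total, round(fp, 2))  # 318 372.06
```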

Empirical Estimation Models:


• An estimation model for computer software uses empirically derived formulas to predict
effort as a function of LOC or FP.

The Structure of Estimation Models

• A typical estimation model is derived using regression analysis on data collected from past
software projects. The overall structure of such models takes the form
E = A + B * (ev)^C
➢ where A, B, and C are empirically derived constants,
➢ E is effort in person-months,
➢ ev is the estimation variable (either LOC or FP).


Among the many LOC-oriented estimation models proposed in the literature is the COCOMO model.
COCOMO Model:

The Constructive Cost Model (COCOMO) was introduced by Barry Boehm in 1981. COCOMO estimates the cost of
software product development in terms of effort (resources required to complete the project work) and
schedule (time required to complete the project work), based on the size of the software product. It estimates
the required number of man-months (MM) for the full development of software products. According to
COCOMO, there are three modes of software development projects, depending on complexity:

I. Organic project: small and simple software projects handled by a small team with good
domain knowledge and few rigid requirements.
Ex: small data processing or inventory management system
II. Semidetached project: an intermediate project (in terms of size and complexity), where
the team has mixed experience (both experienced and inexperienced resources) to deal with
rigid/non-rigid requirements.
Ex: database design or OS development.
III. Embedded project: a project with a high level of complexity and a large team size,
considering all sets of parameters (software, hardware, and operational).
Ex: ATM software or traffic light control software.

Types of COCOMO model:


Depending upon the complexity of the project, COCOMO has three types:

1. The Basic COCOMO model: a static model used to estimate software development
effort quickly and roughly. It mainly deals with the number of lines of code, and the level of estimation
accuracy is lower because not all parameters of the project are considered. The estimated effort and
scheduled time for the project are given by the relations:

E = a * (KLOC)^b MM
D = c * (E)^d months

N = E/D persons

Project type     a      b      c      d

Organic         2.4    1.05   2.5    0.38

Semidetached    3.0    1.12   2.5    0.35

Embedded        3.6    1.20   2.5    0.32

Here E = total effort required for the project in man-months (MM)

KLOC = the size of the code for the project in kilo lines of code
D = total time required for project development in months (M)

a, b, c, d = constant parameters for a software project

N = number of people.
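A minimal sketch of the basic COCOMO relations, using the constants from the table above; the 32-KLOC project size is an assumed example, not from the notes:

```python
# Basic COCOMO: E = a*(KLOC)^b, D = c*(E)^d, N = E/D
COEFFS = {  # project type -> (a, b, c, d), as in the table above
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, project_type):
    a, b, c, d = COEFFS[project_type]
    effort = a * kloc ** b       # E, man-months
    duration = c * effort ** d   # D, months
    people = effort / duration   # N, average staff size
    return effort, duration, people

e, dur, n = basic_cocomo(32, "organic")  # assumed 32-KLOC organic project
print(round(e, 1), round(dur, 1), round(n, 1))
```

Note how the same size estimated as an embedded project yields far more effort, reflecting the higher exponent b.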

2. The Intermediate model: the intermediate model estimates software development effort in terms of the
size of the program and other related “cost driver” parameters (product, hardware, project, and
resource parameters). The estimated effort and scheduled time are given by the
following:
Effort (E) = a * (KLOC)^b * EAF MM
Scheduled time (D) = c * (E)^d months (M)
Here EAF is the effort adjustment factor, calculated by multiplying the values of the different cost
driver parameters.
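A sketch of the intermediate model. The constants a = 3.2, b = 1.05 are the commonly quoted intermediate-COCOMO values for organic projects, and the two cost-driver ratings are assumed for illustration:

```python
# Intermediate COCOMO: E = a*(KLOC)^b * EAF
def intermediate_effort(kloc, a, b, eaf):
    return a * kloc ** b * eaf  # man-months

# EAF = product of the selected cost-driver multipliers (assumed ratings):
# required reliability rated high (1.15), product complexity rated low (0.85)
eaf = 1.15 * 0.85
effort = intermediate_effort(32, 3.2, 1.05, eaf)
print(round(eaf, 4), round(effort, 1))
```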
3. The detailed COCOMO model:
It is the advanced model that estimates software development effort, as the intermediate COCOMO does, at
each stage of the software development life cycle.

Advantages:
➢ Easy to estimate the total cost of the project.
➢ Easy to implement with various factors
➢ Provide ideas about historical project.
Disadvantages:
➢ It ignores requirements, customer skills, and hardware issues.
➢ It limits the accuracy of the software costs.
➢ It mostly depends on time factors.

The COCOMO II MODEL:


➢ COCOMO stands for constructive cost model
➢ Introduced by Barry Boehm in 1981 in his book “Software Engineering Economics”
➢ It has evolved into a more comprehensive estimation model called “COCOMO II”
➢ COCOMO II is actually a hierarchy of 3 estimation models.
COCOMO II Address the following areas:
1. Application composition model: used during the early stages of software engineering, when the following
are important:
➢ Prototyping of user interfaces
➢ Consideration of software and system interaction
➢ Assessment of performance
➢ Evaluation of technology maturity
2. Early design stage model:
It is used once requirements have been stabilized and a basic software architecture has been
established.
3. Post architecture stage model:
It is used during the construction of the software.

➢ COCOMO II Application composition model uses object points.


➢ Object points are an indirect software measure computed using
• Screens
• Reports
• Components likely to be required to build the application
➢ Each object instance is classified into one of three complexity levels (simple, medium, difficult) using
criteria suggested by Boehm.
➢ Complexity is a function of the number of client and server tables required to generate a screen or report,
and the number of sections or views within a screen or report.
➢ After determining complexity, the numbers of screens, reports, and components are weighted as shown in the table.

                 Complexity weight
Object type      Simple   Medium   Difficult

Screen              1        2         3

Report              2        5         8

Component           -        -        10

➢ The object point count is determined by multiplying the original number of object instances by the
weighting factor.
➢ For component-based development, or when software reuse is applied, the percentage of reuse is
estimated and the object point count is adjusted:
NOP = (object points) * [(100 - %reuse)/100]

where NOP is the number of new object points.
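A sketch of the object-point count and the NOP reuse adjustment, using the complexity weights from the table above; the application mix and the 20% reuse figure are assumed for illustration:

```python
# Object points, weighted by complexity (weights from the table above).
SCREEN_W = {"simple": 1, "medium": 2, "difficult": 3}
REPORT_W = {"simple": 2, "medium": 5, "difficult": 8}

# Assumed application mix: 4 simple + 2 medium screens, 2 medium reports,
# and 1 component (each component carries weight 10).
screens = {"simple": 4, "medium": 2, "difficult": 0}
reports = {"simple": 0, "medium": 2, "difficult": 0}
components = 1

object_points = (
    sum(n * SCREEN_W[c] for c, n in screens.items())
    + sum(n * REPORT_W[c] for c, n in reports.items())
    + components * 10
)
percent_reuse = 20  # assumed reuse percentage
nop = object_points * (100 - percent_reuse) / 100  # new object points
print(object_points, nop)  # 28 22.4
```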

EARLY DESIGN MODEL:

It is used at stage 2 of the COCOMO II model and supports estimation in the early design stage of a project.
The equation is:
PM nominal = A * (Size)^B * M

Where PM nominal = effort for the project in person-months

A = a constant representing nominal productivity, A = 2.5
B = scale factor
Size = size of the software
M = PERS * RCPX * RUSE * PDIF * PREX * FCIL * SCED
➢ This model is used in the early stages of a software project, when there is not enough information
available about the size of the product to be developed.

3. Post-architecture model:

➢ The post architecture model covers the actual development and maintenance of a software product.
➢ The post-architecture model predicts software development effort, in person-months (PM), as

PM = A * (Size)^E * Π (i = 1 to 17) EMi,   where E = B + 0.01 * Σ (j = 1 to 5) SFj

Here SFj - the five scale factors, which determine the scaling exponent E

EMi - the seventeen multiplicative cost drivers (effort multipliers)

A - multiplicative constant for effort = 2.45
B - baseline scale exponent = 0.91
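The post-architecture computation, PM = A * Size^E * Π EMi with E = B + 0.01 * Σ SFj, can be sketched as below; the scale-factor ratings and effort multipliers are invented for the example:

```python
import math

# Illustrative sketch of the post-architecture computation
#   PM = A * Size^E * product(EM_i),  with E = B + 0.01 * sum(SF_j).
# The scale-factor ratings and effort multipliers below are invented.
A, B = 2.45, 0.91
scale_factors = [3.72, 3.04, 4.24, 3.29, 4.68]   # the five SF_j ratings
effort_multipliers = [1.10, 0.88, 1.00, 1.15]    # subset of the 17 EM_i; rest taken as 1.0
size_ksloc = 50

E = B + 0.01 * sum(scale_factors)
PM = A * (size_ksloc ** E) * math.prod(effort_multipliers)
print(round(E, 4), round(PM, 1))
```

Note how the five scale factors enter through the exponent E (economies or diseconomies of scale), while the seventeen cost drivers scale the effort multiplicatively.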
Q) PROJECT PLANNING:
➢ Project planning is an organized and integrated management process, which focuses on activities
required for successful completion of the project.
➢ Project planning helps in better utilization of resources and optimal usage of the allotted time for a
project.
➢ The other objectives of project planning are listed below.
❖ It defines the role and responsibilities of the project management teams.
❖ It ensures that the project management team works according to the business objectives.
❖ It checks feasibility of the schedule and user requirement
❖ It determines project constraints.
Several individuals help in planning the project. These include senior management and the project management team.
For effective project planning, some principles are followed. Those are
➢ Planning is necessary
➢ Risk analysis
➢ Tracking of project plan
➢ Meet quality standards
➢ Description of flexibility to accommodate changes

Project planning comprises the project purpose, project scope, project planning process, and project plan. This
information is essential for effective project planning and to assist the project management team in accomplishing
user requirements.
Project purpose: a software project is carried out to accomplish a specific purpose
Project objectives: the commonly followed project objectives are
➢ Meet user requirements
➢ Meet schedule deadlines
➢ Be within budget
➢ Produce quality deliverables.
Business objectives: Business objectives ensure that the organizational objectives and requirements
are accomplished in the project.
➢ Evaluate processes
➢ Renew policies and processes
➢ Keep the project on schedule
➢ Improve software
Project scope:
The scope provides a detailed description of:
Functions - describe the tasks that the software is expected to perform.
Features - describe the attributes required in the software.
Constraints - describe the limitations imposed on the software by hardware, memory, etc.
Interfaces - describe the interaction of software components with each other.

Project planning process: project planning process comprises several activities which are essential for carrying
out a project systematically. These activities include estimation of time, effort, and resources required and risks
associated with the project.
1. Identification of project requirements: identification of project requirements helps in performing the
activities in a systematic manner.
2. Identification of cost estimation: it is necessary to estimate the cost that is to be incurred on a project.
The cost estimation includes the cost of hardware, network connections, and the cost required for the
maintenance of hardware components.

3. Identification of risks: identifying risks before a project begins helps in understanding their probable
extent of impact on the project.
4. Identification of critical success factors: for making a project successful, critical success factors are
followed. These factors refer to the conditions that ensure greater chances of success of a project.
5. Preparation of project charter: a project charter provides a brief description of the project scope, quality,
time, cost and resource constraints as described during project planning.
6. Preparation of project plan: a project plan provides information about the resources that are available for
the project, individuals involved in the project and the schedule according to which the project is to be
carried out.
7. Commencement of the project: once the project planning is complete and resources are assigned to team
members, the software project commences.
Project plan:
It provides information about the end date, milestones, activities, and deliverables of the project.
A typical project plan is divided into the following sections.
➢ Introduction
➢ Project organization
➢ Risk analysis
➢ Resource requirements
➢ Work breakdown
➢ Project schedule
UNIT- II
REQUIREMENT ANALYSIS

Requirement Engineering
The process to gather the software requirements from client, analyze and
document them is known as requirement engineering.
The goal of requirement engineering is to develop and maintain descriptive ‘System
Requirements Specification’ document.

Q) Requirement Engineering Process:


It is a four-step process, which includes –
• Feasibility Study
• Requirement Gathering
• Software Requirement Specification
• Software Requirement Validation

Feasibility study

• When the client approaches the organization for getting the desired product developed, it
comes up with a rough idea about what functions the software must perform and which
features are expected from the software.
• Based on this the analysts do a detailed study about whether the desired system and its
functionality are feasible to develop.
• This feasibility study is focused towards goal of the organization.
• The output of this phase should contain adequate comments and recommendations for
management about whether or not the project should be undertaken.
Requirement Gathering

• If the feasibility report is positive towards undertaking the project, the next phase starts with
gathering requirements from the user.

• Analysts and engineers communicate with the client and end-users to know their ideas
on what the software should provide and which features they want the software to
include.

Software Requirement Specification:

➢ SRS is a document created by system analyst after the requirements are collected from various
stakeholders.

➢ SRS defines how the intended software will interact with hardware, external interfaces, speed
of operation, response time of system, Security, Quality, Limitations etc.
➢ The requirements received from client are written in natural language.
➢ It is the responsibility of the system analyst to document the requirements in technical language so
that they can be comprehended and put to use by the software development team.

SRS should come up with following features:


➢ User Requirements are expressed in natural language.
➢ Technical requirements are expressed in structured language, which is used inside the
organization.
➢ Design description should be written in Pseudo code.
➢ Format of Forms and GUI screen prints.
➢ Conditional and mathematical notations for DFDs etc.

Software Requirement Validation

➢ After requirement specifications are developed, the requirements mentioned in this
document are validated.
➢ User might ask for illegal, impractical solution or experts may interpret the
requirements incorrectly.
➢ Requirements can be checked against following conditions
• If they can be practically implemented
• If they are valid and as per functionality and domain of software
• If there are any ambiguities
• If they are complete
• If they can be demonstrated
Requirement Elicitation Process

Requirement elicitation process comprises the following activities:

• Requirements gathering: - The developers discuss with the client and end users
and know their expectations from the software.

• Organizing Requirements: - The developers prioritize and arrange the
requirements in order of importance, urgency and convenience.

• Negotiation & discussion: - If requirements are ambiguous or there are conflicts in the
requirements of various stakeholders, they are discussed with the stakeholders for clarity
and correctness in order to remove the ambiguity and conflicts.

Documentation:
All formal and informal, functional and non-functional requirements are documented and made
available for next phase processing.

Requirement Elicitation Techniques:


It is the process to find out the requirements for an intended software system by communicating
with the client, end users, system users, and others involved in the software system development.
1. Interviews
Interviews are strong medium to collect requirements.
• Structured (closed) interviews, where every single information to gather is decided
in advance, they follow pattern.
• Non-structured (open) interviews, where information to gather is not decided in
advance, more flexible and less biased.
• Oral interviews
• Written interviews
• One-to-one interviews - which are held between two persons across the table.
• Group interviews - which are held between groups of participants.
2. Surveys: The organization may conduct surveys among various stakeholders by querying
about their expectations and requirements from the upcoming system.
3. Questionnaires: A document with a pre-defined set of objective questions and
respective options is handed over to all stakeholders to answer, which are collected and
compiled.
4.Task analysis
Team of engineers and developers may analyze the operation for which the new system is required.
5. Domain analysis
The expert people in the domain can be a great help to analyze general and specific requirements.
6. Brainstorming
An informal debate is held among various stakeholders and all their inputs are recorded for further
requirements analysis.
Prototyping
• Prototyping is building a user interface without adding detailed functionality for the user.
• It helps in giving a better idea of the requirements.
• The prototype is shown to the client and the feedback is noted.
• The client feedback serves as an input for requirement gathering.
Observation
• Team of experts visit the client’s organization or workplace.
• They observe the actual working of the existing installed systems.
• They observe the workflow at client’s end and how execution problems are dealt.
• The team itself draws some conclusions which aid (help) to form the requirements
expected from the software.
Q) Explain about Feasibility study?
Feasibility study is a study to reveal whether a project is feasible (possible to do) or not.
Feasibility study determines whether the solution considered to accomplish the requirements is
practical and workable in the software.

The objective of the feasibility study is to establish the reasons for developing the software
that is acceptable to users, adaptable to change and conformable to established standards.
Types of Feasibility
1) Technical feasibility
2) Operational feasibility
3) Economic feasibility
4) Schedule Feasibility

1. Technical Feasibility: It assesses the current resources (such as hardware and software) and
technology, which are required to accomplish user requirements in the software within the
allocated time and budget.
It also performs the following tasks.
• Analyzes the technical skills and capabilities of the software development team
members
• Determines whether the relevant technology is stable and established
• Ascertains that the technology chosen for software development has a large number of
users so that they can be consulted when problems arise or improvements are required.

2. Operational feasibility: It assesses the extent to which the required software performs a
series of steps to solve business problems and user requirements. Operational feasibility
also performs the following tasks.
• Determines whether the problems anticipated in user requirements are of high priority
• Determines whether the solution suggested by the software development team is
acceptable
• Analyzes whether users will adapt to a new software
• Determines whether the organization is satisfied by the alternative solutions proposed
by the software development team.
3. Economic feasibility: determines whether the required software is capable of generating
financial gains for an organization. It involves the cost incurred on the software
development team, estimated cost of hardware and software, cost of performing
feasibility study, and so on.
It focuses on the issues listed below:

• Cost incurred on software development to produce long-term gains for


an organization

• Cost required to conduct full software investigation (such as requirements elicitation


and requirements analysis)
• Cost of hardware, software, development team, and training.

4. Schedule Feasibility - Does the company currently have the time resources to
undertake the project?
Can the project be completed in the available time?

Q) Write about requirement analysis process?


Requirements analysis process involves the following steps:
1. Requirement scope: The scope and boundary of the proposed software solution is drawn
based on business requirements and goals.
2. Stakeholders identification: identifying stakeholders such as customers, end-users,
systems administrators etc. is the next step in requirement analysis.
3. Requirements Elicitation/requirements gathering: After identification of stakeholders,
the tedious process of eliciting requirements follows. Based on the scope and nature of
a particular software solution there can be multiple stakeholders. Interaction happens
with stakeholder groups using various communication methodologies including
in-person interviews, focus groups, market studies, surveys and secondary research.
4. Requirement Analysis: Once user data is gathered, structured analysis is carried out on
this data; typically, use-cases are developed to analyze the data on various parameters
depending on the larger goals of the software solution.
5. Software Requirement Specification (SRS): Once the captured data is analyzed these
are put together in the form of software requirement specification. This document
serves as a blue print for the design or development teams to start building the solution
on.
6. Software Requirement management: The final step of the requirements analysis process
involves validating all elements of the requirements specifications document. Errors
are corrected here and it can also accommodate minor changes to requirements of the
proposed software solution.
Q) Write about the Analysis concept and principles?
To perform requirements analysis properly, you should understand a set of underlying concepts and
principles.
1) Requirement Analysis: Requirements analysis is the process of understanding the customer's
needs and expectations from a proposed system or application, and is a well-defined stage in the SDLC.
Software requirements analysis may be divided into five areas of effort:
a) Problem recognition
b) Evaluation and synthesis
c) Modeling
d) Specification
e) Review

2) Requirements Elicitation for software:


Before requirements can be analyzed, modeled, or specified they must be gathered through an
elicitation process.
Initiating the process: The most commonly used requirements elicitation technique is to conduct a
meeting or interview.
Facilitated Application Specification Techniques (FAST):
Investigators have developed a team-oriented approach to requirements gathering that is applied
during early stages of analysis and specification, called Facilitated Application Specification Technique
(FAST).
Basic guidelines for this technique are:
A meeting is conducted at a neutral site and attended by both software engineers and customers.
Rules for preparation and participation are established.
An agenda is suggested that is formal enough to cover all important points but informal enough to
encourage the free flow of ideas.
A facilitator controls the meeting
A definition mechanism is used
The goal is to identify the problem, propose elements of the solution, negotiate different
approaches and specify a preliminary set of solution requirements in an atmosphere that is conducive to
the accomplishment of the goal.
Quality Function Deployment: It is a quality management technique that translates the needs of
the customer into technical requirements for software.
QFD identifies three types of requirements.
1) Normal requirements: The objectives and goals that are stated for a product or system during
meetings with the customer. If these requirements are present, the customer is satisfied.
2) Expected requirements: These requirements are implicit to the product or system and so
fundamental that the customer does not explicitly state them. Their absence will be a cause for significant
dissatisfaction.
3) Exciting requirements: These features go beyond the customer's expectations and prove to be
satisfying when present.
Use-cases:
As requirements are gathered as a part of informal meetings, a software engineer can create a set of
scenarios that identify a thread of usage for the system to be constructed. To create a use-case, the analyst
must first identify the different roles that people (actors) play as the system operates.
The use-case should answer:
What main tasks or functions are performed by an actor?
What system information will the actor desire, produce or change?
What information does the actor desire about unexpected changes?
Analysis principles:
Each analysis method has a unique point of view:
The information domain of a problem must be represented and understood.
The functions that the software is to perform must be defined.
The behavior of the software must be represented.
The models that depict information, function and behavior must be partitioned in a manner that uncovers
details in a layered fashion.
The analysis process should move from essential information toward implementation details.
In addition to these operational analysis principles for requirements engineering:
Understand the problem before you begin to create the analysis model.
Develop prototype that enable a way to understand how human/machine interaction will occur.
Record the origin of and the reason for every requirement.
Use multiple views of requirements.
Rank requirements.
Work to eliminate ambiguity.
Q) Analysis Model:
Analysis model operates as a link between the 'system description’ and ‘design model’. In the
analysis model, information, functions and behavior of the system is defined and these are translated into
the architecture, interface and component level design in the ‘design modeling’
1. Scenario-Based: System from the user’s point of view.
2. Data-Based: Shows how data are transformed inside the system.
3. Class-Based: Defines objects, attributes & relationships.
4. Behavioral-Based: Shows the impact of events on the system states.
5. Flow-Oriented: Shows how data are transformed inside the system.
Elements of the analysis model:
1) Scenario based element:
This type of element represents the system user point of view.
Scenario based elements are use-case diagram, user stories.
It identifies the possible use cases for the system and produces the use-cases.
2) Class Based element:
The objects of this type of element are manipulated by the system.
It defines the object, attributes and relationship.
The collaboration is occurring between the classes.
Class based elements are the class diagram, collaboration diagram.
From the modeling we get diagram or series of diagrams.
3) Behavioral Based elements:
It represents state of the system and how it is changed by the external events.
The behavioral elements are sequence diagrams and state diagrams.
4) Flow oriented elements:
As information flows through a computer-based system, it gets transformed.
It shows how the data objects are transformed while they flow between the various system
functions.
The flow elements are data flow diagram, control diagram.
DFD is a graphical representation of a system that shows the inputs to the system, the process
upon the inputs, the output of the system as well as the internal data stores.

Data Modeling:
Analysis modeling starts with data modeling. The software engineer defines all the data objects
that are processed within the system, and the relationships between data objects are identified.
1) Data objects:
The data object is the representation of composite information.
The composite information means an object has a number of different properties or attributes.
2) Data Attributes:
Each of the data object has a set of attributes.
Characteristics:
Name an instance of the data object
Describe the instance
Make reference to another instance in another table.
3) Relationship:
Relationship shows the relationship between data objects and how they are related to each other.
4) Cardinality:
Cardinality states the number of events of one object related to the number of events of another object.
i) one to one: (1:1) one event of an object is related to one event of another object.
Ex: the employee has only one ID
ii) One to Many (1: N) one event of an object is related to many events.
Ex: one college has many departments.
iii) Many to many (M: N) Many events of one object are related to many events.
Ex: Many customers place order for many products.
5) Modality:
If an event relationship is optional, then the modality of the relationship is zero.
If an event relationship is compulsory, then the modality of the relationship is one.
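The cardinalities described above (1:1, 1:N, M:N) can be illustrated with plain Python classes; this is a hypothetical sketch, and all names and values are invented:

```python
# Hypothetical sketch of the cardinalities above using plain Python classes.
from dataclasses import dataclass, field

@dataclass
class Employee:                 # 1:1 -- each employee has exactly one ID
    badge_id: str

@dataclass
class College:                  # 1:N -- one college has many departments
    departments: list = field(default_factory=list)

@dataclass
class Customer:                 # M:N -- many customers order many products
    orders: list = field(default_factory=list)

emp = Employee("E-101")
college = College()
college.departments += ["CSE", "ECE"]    # one college, two departments
c1, c2 = Customer(), Customer()
c1.orders += ["laptop", "mouse"]
c2.orders += ["laptop"]                  # the same product appears in many orders
print(emp.badge_id, len(college.departments), len(c1.orders))
```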
Q) Problem of Requirements:
Problem 1: customers don't (really) know what they want:
The customers have only a vague idea of what they need, and it's up to you to ask the right
questions and perform the analysis necessary to turn this amorphous vision into a formally-documented
software requirements specification.
To solve this problem, you should
Ensure that you spend sufficient time at the start of the project on understanding the objectives,
deliverables and scope of the project
Make visible any assumptions that the customer is using, and critically evaluate both the likely
end-user benefits and risks of the project.
Attempt to write a concrete vision statement for the project, which encompasses both the specific
functions or user benefits it provides.
Get your customer to read, think about and sign off the completed software requirement
specification, to align expectations and ensure that both parties have a clear understanding of the
deliverable.
Problem 2: Requirements change during the course of the project:
The 2nd most common problem with software projects is that the requirements defined in the
first phase change as the project progresses; changes in the external environment may require
reshaping of the original business problem.
To "solve this problem", you should:
Have a clearly defined process for receiving, analyzing and incorporating change requests. Set
milestones for each development phase beyond which certain changes are not permissible.
Ensure that change requests are clearly communicated to all stakeholders.
Problem 3: customers have unreasonable timelines:
Customers say something like "it's an emergency job and we need this project completed in X
weeks". A common mistake is to agree to such timelines before actually performing a detailed analysis
and understanding both the scope of the project and the resources necessary to execute it.
To Solve this problem", you should:
Convert the software requirements specification into a project plan.
Ensure that the project plan takes account of available resource constraints and keeps sufficient
time for testing and quality inspection.
Enter into a conversation about deadlines with your customer, using the figures in your plan as
supporting evidence for statements.
Problem 4: Communication gaps exist between customers, engineers and project managers:
Customers and engineers fail to communicate clearly with each other. This can lead to
confusion and severe miscommunication.
To solve this problem, you should:
Take notes at every meeting and disseminate these throughout the project team.
Be consistent in your terminology. Make yourself a glossary of the terms that you're going to
use right at the start, and ensure all stakeholders have a copy.
Problem 5: The development team doesn't understand the politics of the customer's
organization:
When dealing with large projects in large organizations, information is often fragmented, and
requirements analysis is hence stymied by problems of trust, internal conflicts of interest and
information inefficiencies.
To solve this problem, you should:
Review your existing network and identify both the information you need and who is likely to
have it.
Cultivate allies, build relationships and think systematically about your social capital in the
organization.
Use initial points of access/leverage to move your agenda forward.
UNIT-III
SOFTWARE DESIGN
Q) Explain about software design?
➢ Software design encompasses the set of principles, concepts, and practices that lead to the
development of a high-quality system or product.

1) DESIGN WITHIN THE CONTEXT OF SOFTWARE ENGINEERING OR ELEMENTS OF


DESIGN MODEL

• The data design transforms the information domain model created during analysis
into the data structures that will be required to implement the software.
• The architectural design defines the relationship between major structural elements
of the software.
• The interface design describes how the software communicates within itself, with
systems that interoperate with it, and with humans who use it.

The component-level design transforms structural elements of the software
architecture into a procedural description of software components.
THE DESIGN PROCESS
David [DAV95] suggests a set of principles for software design.
➢ The design process should not suffer from “tunnel vision”.
➢ The design should be traceable to the analysis model.
➢ The design should not reinvent the wheel
➢ The design should “minimize the intellectual distance” between the software and the
problem in the real world.
➢ The design should exhibit uniformity and integration.
➢ The design should be structured to degrade gently.
➢ The design should be structured to accommodate change.
➢ Design is not coding.
➢ The design should be assessed for quality.
➢ The design should be reviewed to minimize conceptual errors.
2.Explain about the Abstraction?
The process of describing a problem at a high level of representation without bothering about
its internal details.
➢ Highest level of abstraction: Solution is stated in broad terms using the language of the
problem environment
➢ Lower levels of abstraction: More detailed description of the solution is provided
Types of abstraction:
➢ Procedural abstraction: Refers to a sequence of instructions that has a specific and
limited function.

Ex: The word “open” for a door, which implies a long sequence of procedural steps.

➢ Data abstraction: Named collection of data that describe a data object

Ex: door would encompass a set of attributes that describe the door, like
door type, swing direction, weight, dimensions, etc.
Control abstraction: It implies a program control mechanism without specifying internal details.
Ex: loops, iterations, multithreading.
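The three abstraction types can be illustrated together in one short sketch, reusing the "door" example from the text; the names and attribute values are invented:

```python
# A minimal sketch of the three abstraction types, reusing the "door"
# example from the text. Names and attribute values are illustrative.
from dataclasses import dataclass

@dataclass
class Door:                        # data abstraction: a named collection of attributes
    door_type: str
    swing_direction: str
    weight: float

def open_door(door: Door) -> str:  # procedural abstraction: "open" hides its steps
    return f"opening {door.door_type} door ({door.swing_direction} swing)"

doors = [Door("wooden", "inward", 30.0), Door("glass", "outward", 25.0)]
for d in doors:                    # control abstraction: the loop hides iteration details
    print(open_door(d))
```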
Advantages:
➢ It separates design for implementation.
➢ It helps in problem understanding and software maintenance.
➢ It reduces the complexity for users and engineers.
3.EXPLAIN ABOUT DESIGN CONCEPTS?
➢ A set of fundamental software design concepts has evolved over the history of software
engineering. Each provides the software designer with a foundation from which more
sophisticated design methods can be applied.
1.Abstraction
➢ A procedural abstraction refers to a sequence of instructions that have a specific and
limited function.
➢ A data abstraction is a named collection of data that describes a data object.
2.Architecture
➢ Software architecture alludes to “the overall structure of the software and the ways
in which that structure provides conceptual integrity for a system”
➢ A set of architectural patterns enables a software engineer to solve common design
problems.
➢ Structural properties: This aspect of the architectural design representation defines
the components of a system (e.g., modules, objects, filters) and the manner in which
those components are packaged and interact with one another.
➢ Extra-functional properties: The architectural design description should address how
the design architecture achieves requirements for performance, capacity, reliability,
security, adaptability, and other system characteristics.
➢ Families of related systems: The architectural design should draw upon repeatable
patterns that are commonly encountered in the design of families of similar systems.
3. Patterns:
➢ The intent of each design pattern is to provide a description that enables a designer to
determine.
➢ Whether the pattern is applicable to the current work.
➢ Whether the pattern can be reused (hence, saving design time)
➢ Whether the pattern can serve as a guide for developing a similar, but functionally or
structural different pattern.
4.Separation of Concerns
➢ Separation of concerns is a design concept [Dij82] that suggests that any complex
problem can be more easily handled if it is subdivided into pieces that can each be
solved and/or optimized independently. A concern is a feature or behavior that is
specified as part of the requirements model for the software.
5. Modularity
➢ Modularity is the most common manifestation of separation of concerns. Software
is divided into separately named and addressable components, sometimes called
modules, that are integrated to satisfy problem requirements.
6.Information Hiding
➢ The use of information hiding as a design criterion for modular systems provides
the greatest benefits when modifications are required during testing and later
during software maintenance.
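A minimal sketch of information hiding (a hypothetical example, not from the source): clients call deposit() and balance() only, so the hidden storage could later change (say, from a list to a database) without touching client code.

```python
# Hypothetical sketch of information hiding: clients use deposit()/balance()
# only; the internal storage (a list here) could change later without any
# modification to client code.
class Account:
    def __init__(self):
        self._transactions = []        # hidden implementation detail

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._transactions.append(amount)

    def balance(self):
        return sum(self._transactions)

acct = Account()
acct.deposit(100)
acct.deposit(50)
print(acct.balance())   # clients never touch _transactions directly
```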
7.Functional Independence
➢ Functional independence is achieved by developing modules with “single-minded”
function and an “aversion” to excessive interaction with other modules.
➢ Independence is assessed using two qualitative criteria: cohesion and coupling.
➢ Cohesion is an indication of the relative functional strength of a module.
➢ Coupling is an indication of the relative interdependence among modules.
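A small sketch contrasting coupling levels (illustrative, not from the source): the first function depends only on its arguments (low, "data" coupling), while the second silently depends on shared global state (higher, "common" coupling).

```python
# Illustrative sketch contrasting coupling levels. The first function depends
# only on its arguments (low, "data" coupling); the second silently depends
# on shared global state (higher, "common" coupling).
TAX_RATE = 0.18   # shared global state

def total_with_tax(amount, rate):      # cohesive, low coupling: inputs explicit
    return amount * (1 + rate)

def total_with_tax_global(amount):     # hidden dependency on TAX_RATE
    return amount * (1 + TAX_RATE)

print(total_with_tax(100, 0.18))       # behavior fully determined by arguments
print(total_with_tax_global(100))      # changes silently if TAX_RATE is edited elsewhere
```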
8.Refinement
➢ Refinement helps you to reveal low-level details as design progresses. Both
concepts allow you to create a complete design model as the design evolves.
9.Refactoring
➢ “Refactoring is the process of changing a software system in such a way that it does
not alter the external behavior of the code [design]
➢ yet improves its internal structure.”
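A tiny, invented illustration of refactoring: the internal structure improves while the external behavior (same input, same output) stays identical.

```python
# Invented example of refactoring: the internal structure improves while the
# external behavior (same input -> same output) stays identical.
def grade_before(score):            # original: nested conditionals
    if score >= 90:
        return "A"
    else:
        if score >= 75:
            return "B"
        else:
            if score >= 50:
                return "C"
            else:
                return "F"

BANDS = [(90, "A"), (75, "B"), (50, "C")]

def grade_after(score):             # refactored: flat, data-driven
    for cutoff, letter in BANDS:
        if score >= cutoff:
            return letter
    return "F"

# external behavior is unchanged across the whole input range
assert all(grade_before(s) == grade_after(s) for s in range(0, 101))
```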
10.Design Classes
➢ User interface classes define all abstractions that are necessary for human-computer
interaction (HCI).
➢ Business domain classes are often refinements of the analysis classes
➢ Process classes implement lower-level business abstractions required to fully
manage the business domain classes.
➢ Persistent classes represent data stores (e.g., a database) that will persist beyond
the execution of the software.
System classes implement software management and control functions that enable the
system to operate and communicate within its computing environment and with the
outside world.
Q) Explain about the modularity?
Modularity is the most common manifestation of separation of concerns. Software is
divided into separately named and addressable components, sometimes called modules, that
are integrated to satisfy problem requirements. It is a technique to divide a software system
into multiple discrete and independent modules.
Modular decomposability: if a design method provides a systematic mechanism for
decomposing the problem into sub problems it will reduce the complexity of the overall
problem, there by achieving an effective modular solution.
Modular composability: If a design method enables existing design components to be
assembled into a new system, it will yield a modular solution that does not reinvent the wheel.
Modular understandability: If a module can be understood as a stand-alone unit it will be
easier to build and easier to change.
Modular continuity: If small changes to the system requirements result in changes to
individual modules, rather than system wide changes, the impact of change-induced side
effects will be minimized.
Modular protection: If an aberrant condition occurs within a module and its effects are
constrained within that module, the impact of error induced side effects will be minimized.

Advantages:
➢ Using modularity smaller components are easier to maintain.
➢ Desired level of abstraction can be brought in the program.
➢ Components with high cohesion can be re-used again.
➢ Desired from security aspect
➢ Concurrent execution can be made possible.
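The modularity criteria above can be illustrated with a small, hypothetical Python sketch (all names invented): each function handles one sub-problem (decomposability), can be read stand-alone (understandability), and a change to invoice formatting does not ripple into pricing (continuity).

```python
# A hypothetical sketch of modular decomposition (all names invented):
# an order-processing problem split into separately named modules that
# are integrated to satisfy the overall requirement.

def validate_order(order):
    """Sub-problem 1: check that the order is well formed."""
    return order.get("qty", 0) > 0 and order.get("price", 0) >= 0

def compute_total(order):
    """Sub-problem 2: a pure calculation, understandable stand-alone."""
    return order["qty"] * order["price"]

def format_invoice(order, total):
    """Sub-problem 3: presentation only; knows nothing of pricing rules."""
    return f"Invoice: {order['qty']} x {order['price']} = {total}"

def process_order(order):
    """Composes the modules; an error stays confined to its module."""
    if not validate_order(order):
        raise ValueError("invalid order")
    return format_invoice(order, compute_total(order))

print(process_order({"qty": 3, "price": 10}))  # Invoice: 3 x 10 = 30
```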

Q) WRITE ABOUT SOFTWARE ARCHITECTURE


Software architecture is the hierarchical structure of program components,
the manner in which these components interact and the structure of data that are used
by the component.
Its representation enables to analyze the effectiveness of the design in meeting
Its stated requirements. Reduce the risks associated with the construction of the software.
Software architecture is important because.

➢ Representations of software architecture are an enabler for communication
between all stakeholders.
➢ The architecture highlights early design decisions that will have a profound impact
on all software engineering works.
Properties of an Architectural design:
➢ Structural properties: it defines the components of a system and those components
are packaged and interact with one another.
➢ Extra- functional properties: It should address how the design architecture
achieves requirements for performance, capacity, reliability, security, adaptability
and other system characteristics.
➢ Families of related system: it should draw upon repeatable patterns that are
commonly encountered in the design of families of similar systems.
➢ Architectural Descriptions: The implication is that different stakeholders will
see an architecture from different viewpoints that are driven by different sets of concerns.
This implies that an architectural description is actually a set of work products that reflect different
views of the system.
Architectural decisions: Each view developed as part of an architectural description addresses
a specific stakeholder concern. To develop each view (and the architectural description as a whole),
the system architect considers a variety of alternatives and ultimately decides on the specific
architectural features that best meet the concern.
Q) WRITE ABOUT EFFECTIVE MODULAR DESIGN?
A modular design reduces complexity, facilitates change (a critical aspect of software
maintainability), and results in easier implementation by encouraging parallel development of
different parts of a system.
Functional independence: It is direct outgrowth of modularity & abstraction.
i. Modules have high cohesion and low coupling
ii. Functional independence is achieved by developing modules with “single-minded” function
and an “aversion” to excessive interaction with other modules.
iii. Independence is measured using two qualitative criteria: cohesion and coupling

COHESION: Cohesion is a measure of the functional strength of a module. In general, it measures
the relationship strength between the pieces of functionality within a given module. In software
engineering, cohesion may be represented as a “spectrum”.
Types of cohesion:

Coincidental cohesion: A module is said to have coincidental cohesion, if it performs a set of tasks
that relate to each other very loosely, if at all.

Logical cohesion: A module is said to be logically cohesive, if all elements of the module perform
similar operations such as error handling data input, data output, etc.

Temporal cohesion: When a module contains functions that are related by the fact that these functions
are executed in the same time span, then the module is said to possess temporal cohesion.

Procedural cohesion: A module is said to possess procedural cohesion, if the set of functions of the
module are executed one after the other, though these functions may work towards entirely different
purposes and operate on very different data.
Communicational cohesion: A module is said to have communicational cohesion, if all functions of
the module refer to or update the same data structure.
Sequential cohesion: A module is said to possess sequential cohesion, if the different functions of the
module executed in a sequence, and the output from one function is input to the next in the sequence.

Functional cohesion: A module is said to possess functional cohesion, if the different functions of the
module cooperate to complete a single task.
Coupling: The coupling between two modules indicates the degree of interdependence
between them. The degree of coupling between two modules depends on their interface
complexity.
If the system has low coupling then it is a sign of a well-structured computer system and
a great design.
The coupling spectrum, from high coupling (worst) to low coupling (best):
Content coupling → Common coupling → External coupling → Control coupling → Stamp coupling → Data coupling

Data coupling
Two modules are data coupled, if they communicate using an elementary data item that is
passed as a parameter between the two.
Stamp coupling: Two modules are stamp coupled, if they communicate using a composite data item
such as a record in PASCAL or a structure in C.
Control coupling: Control coupling exists between two modules, if data from one module is used to
direct the order of instruction execution in the other.
Common coupling: Two modules are common coupled, if they share some global data items.
Content coupling: Content coupling exists between two modules, if they share code. That is a jump
from one module into the code of another module can occur.
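A small, hypothetical Python sketch (names and figures invented) contrasts the best and one of the worst coupling types listed above:

```python
# Hypothetical contrast between data coupling and common coupling
# (names and numbers invented).

# Data coupling (low, desirable): modules communicate only through
# elementary parameters passed explicitly.
def net_price(price, discount):
    return price * (1 - discount)

# Common coupling (high, undesirable): modules share a global data
# item, so a change in one can silently break the other.
DISCOUNT = 0.1  # global shared by several modules

def net_price_global(price):
    return price * (1 - DISCOUNT)
```

The data-coupled version is easier to test and reuse because its entire interface is visible in the parameter list.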
Q) Explain the concept of Architectural design?
As architectural design begins, the software to be developed must be put into
context— that is, the design should define the external entities (other systems, devices,
people) that the software interacts with and the nature of the interaction.
Representing the System in Context: System that inter operate with the target system are
represent as:
Superordinate systems: those systems that use the target system as part of
some higher-level processing scheme.
Subordinate systems: those systems that are used by the target system and
provide data or processing that are necessary to complete target system
functionality.
Peer-level systems: those systems that interact on a peer-to-peer basis (i.e.,
information is either produced or consumed by the peers and the target system).
Actors: entities (people, devices) that interact with the target system by
producing or consuming information that is necessary for requisite processing.

Defining Archetypes: An archetype is a class or pattern that represents a core abstraction that is
critical to the design of an architecture for the target system.

Node: Represents a cohesive collection of input and output elements of the home security
function.
Detector. An abstraction that encompasses all sensing equipment that feeds
information into the target system.
Indicator. An abstraction that represents all mechanisms for indicating that
an alarm condition is occurring.
Controller: An abstraction that depicts the mechanism that allows the
arming or disarming of a node. If controllers reside on a network, they have
the ability to communicate with one another.
Refining the Architecture into Components:
The architecture is applied to a specific problem with the intent of
demonstrating that the structure and components are appropriate.
The diagram below shows an instantiation of the safe home architecture for the
security system.

Q) What are the Characteristics of a Good Software Design?


The desirable characteristics that a good software design should have are as follows:

• Correctness

• Efficiency

• Understandability

• Maintainability

• Simplicity

• Completeness

• Verifiability
• Portability

• Modularity

• Reliability

• Reusability

Q) What are various Design Principles?


There are certain design concepts and principles that govern the building of quality software
designs.

Some of the common concepts of software design are:


➢ Abstraction
➢ Information hiding
➢ Functional decomposition
➢ Design strategies
➢ Modularity
➢ Modular design.
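Of the concepts listed, information hiding lends itself to a minimal, hypothetical Python sketch (example invented): clients use only the public interface, while the internal representation stays private.

```python
# A minimal sketch of information hiding (hypothetical example): clients
# use only push/pop; the internal list is a hidden detail that could be
# swapped for a linked list without affecting callers.
class Stack:
    def __init__(self):
        self._items = []          # private representation, hidden from clients

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # 2
```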
Q) Briefly explain about procedural design?
Procedural design is best used to model programs that have an obvious flow of data from
input to output. It represents the architecture of a program as a set of interacting processes
that pass data from one to another.
The two major diagramming tools used in procedural design are
I. data flow diagrams
II. structure charts.
Data Flow Diagram (DFD): A data flow diagram (or DFD) is a tool to help you
discover and document the program’s major processes.
The DFD is a conceptual model: it doesn’t represent the computer program; it
represents what the program must accomplish, by showing the input and output of each
major task.
Structure chart: A structure chart is a tool to help you derive and document the program’s
architecture. It is similar to an organization chart.

When a component is divided into separate pieces, it is called the parent and its pieces are
called its children. The structure chart shows the hierarchy between a parent and its children.
The procedural design is often understood as a software design process that uses mainly
control commands such as: sequence, condition, repetition, which are applied to the
predefined data.
Sequences: serve to achieve the processing steps in order that is essential in the
specification of any algorithm.
Conditions: provide facilities for achieving selected processing according to some logical
statement.
Repetitions: serve to achieve looping during the computation process.

These three commands are implemented as ready-made programming language
constructs.
The programming languages that provide such command constructs are called
imperative programming languages.
The software design technique that relies on these constructs is called procedural design, or
also structured design.
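The three control constructs above can be sketched together in one small, invented Python example:

```python
# The three procedural constructs, sketched in Python (example invented).
def sum_of_evens(numbers):
    total = 0                # sequence: steps executed in order
    for n in numbers:        # repetition: looping over the data
        if n % 2 == 0:       # condition: selective processing
            total += n
    return total

print(sum_of_evens([1, 2, 3, 4]))  # 6
```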

Q) What is Data flow-oriented design?


Data flow diagram is a graphical representation of the flow of data in an information system. It is
capable of depicting incoming data flow, outgoing data flow and stored data. The DFD does not
show control flow; it says nothing about how or when the data is processed.
Types of DFD
Data Flow Diagrams are either Logical or Physical.
• Logical DFD - This type of DFD concentrates on the system process, and flow of data
in the system.
• Physical DFD - This type of DFD shows how the data flow is actually implemented in
the system. It is more specific and closer to the implementation.
DFD Components:

• Entities - Entities are source and destination of information data. Entities are
represented by a rectangle with their respective names.
• Process - Activities and actions taken on the data are represented by circles or round-edged
rectangles.
• Data Storage - There are two variants of data storage - it can either be represented as
a rectangle with absence of both smaller sides or as an open-sided rectangle with only
one side missing.
Levels of DFD:

• Level 0 - Highest abstraction level DFD is known as Level 0 DFD, which depicts the
entire information system as one diagram concealing all the underlying details. Level
0 DFDs are also known as context level DFDs.

• Level 1 - The Level 0 DFD is broken down into more specific, Level 1 DFD. Level
1 DFD depicts basic modules in the system and flow of data among various modules.
Level 1 DFD also mentions basic processes and sources of information.
• Level 2 - At this level, DFD shows how data flows inside the modules mentioned in
Level 1.

Higher-level DFDs can be transformed into more specific lower-level DFDs
with a deeper level of understanding, until the desired level of specification is
achieved.
UNIT – IV
Q) DEFINE USER INTERFACE DESIGN? EXPLAIN ABOUT THE
TYPES OF INTERFACES?
User interface is the front-end application view to which user interacts in order to use the
software. User can manipulate and control the software as well as hardware by means of
user interface. Today, user interface is found at almost every place where digital technology
exists, right from computers, mobile phones, cars, music players, airplanes, ships etc.
User interface is part of software and is designed in such a way that it is expected to provide the
user insight into the software. UI provides a fundamental platform for human-computer interaction.
UI can be graphical, text-based, audio-video based, depending upon the underlying
hardware and software combination. UI can be hardware or software or a combination of both.
UI is broadly divided into two categories:
• Command Line Interface
• Graphical User Interface
1) COMMAND LINE INTERFACE: A command is a text-based reference to set of
instructions, which are expected to be executed by the system. There are methods like
macros, scripts that make it easy for the user to operate.
A text-based command line interface can have the following elements:
Command Prompt - It is a text-based notifier that mostly shows the context in which the
user is working. It is generated by the software system.
Cursor - It is a small horizontal line or a vertical bar of line height, representing the
position of the character while typing. The cursor is mostly found in a blinking state. It moves as the
user writes or deletes something.
Command - A command is an executable instruction. It may have one or more parameters.
Output on command execution is shown inline on the screen. When output is produced,
command prompt is displayed on the next line.
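These elements can be sketched as a tiny, hypothetical command handler (the `echo` command and all names are invented for illustration):

```python
# A minimal, hypothetical sketch of the elements above: a command with
# parameters is parsed and its output is produced inline (names invented).
def run_command(line):
    parts = line.split()
    if not parts:
        return ""
    cmd, args = parts[0], parts[1:]   # command plus optional parameters
    if cmd == "echo":
        return " ".join(args)         # output shown inline on the screen
    return f"unknown command: {cmd}"

# A real CLI would loop, printing a prompt such as "app> " before each read.
print(run_command("echo hello world"))  # hello world
```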

2) GRAPHICAL USER INTERFACE: Graphical User Interface provides the user graphical
means to interact with the system. GUI can be combination of both hardware and software.
Using GUI, user interprets the software.
GUI Elements
GUI provides a set of components to interact with software or hardware.
Every graphical component provides a way to work with the system.
A GUI system has following elements such as:
1) Window 2) Tabs 3) Menu 4) Icon 5) Cursor

2) Explain about the Human factors?


Competence : It encompasses innate talent, specific software-related skills, and overall
knowledge of the process that the team has chosen to apply. Skill and knowledge of
process can and should be taught to all people who serve as agile team members.
Common focus : Although members of the agile team may perform different tasks
and bring different skills to the project, all should be focused on one goal: to deliver a
working software increment to the customer within the time promised. To achieve this

goal, the team will also focus on continual adaptations that will make the process
fit the needs of the team.

Collaboration : Software engineering is about assessing, analyzing, and using information


that is communicated to the software team; creating information that will help all stakeholders
understand the work of the team; and building information that provides business value for the
customer. To accomplish these tasks, team members must collaborate with one another and
all other stakeholders.
Decision-making ability : Any good software team must be allowed the freedom to
control its own destiny. This implies that the team is given autonomy: decision-making
authority for both technical and project issues.
Fuzzy problem-solving ability : Software managers must recognize that the agile team
will continually have to deal with ambiguity and will continually be buffeted by change.
In some cases, the team must accept the fact that the problem they are solving today
may not be the problem that needs to be solved tomorrow. However, lessons learned from
any problem-solving activity may be of benefit to the team later in the project.
Mutual trust and respect : The agile team must become what DeMarco and Lister call
a “jelled” team. A jelled team exhibits the trust and respect that are necessary to make them
“so strongly knit that the whole is greater than the sum of the parts".
Self-organization : In the context of agile development, self-organization implies three things:
(1) the agile team organizes itself for the work to be done,
(2) the team organizes the process to best accommodate its local environment,
(3) the team organizes the work schedule to best
achieve delivery of the software increment.
Self-organization has a number of technical benefits, but more importantly, it serves to improve
collaboration and boost team morale.

3) Explain the Human computer interaction?


“Human Computer Interaction is a discipline concerned with the design,
evaluation and implementation of interactive computing systems for human use and
with the study of the major phenomena surrounding them.”
Examples of interactive computing systems
➢ Single PC - capable of displaying web pages
➢ Embedded devices, for example in cars and in cell phones
➢ Handheld Global Positioning Systems for outdoor activities
Goals of HCI
To develop or improve the
➢ Safety
➢ Utility
➢ Effectiveness
➢ Efficiency
➢ Usability
➢ Appeal
Safety:
➢ Safety of Users think of
• Air traffic control
• Hospital intensive care
➢ Safety of Data think of
• Protection of files from tampering
• Privacy and security
Utility and effectiveness:
➢ Utility: what services a system provides e.g., Ability to print documents
➢ Effectiveness: user’s ability to achieve goals, e.g.
• How to enter the desired information
• How to print a report
• Utility and effectiveness are distinct
• A system might provide all necessary services, but if users can’t find those services,
the system lacks effectiveness
Efficiency: A measure of how quickly users can accomplish their goals or finish their
work using the system.
Usability: “A measure of the ease with which a system can be learned and used,
its safety, effectiveness and efficiency, and the attitude of its users towards it”; also, “the extent
to which a product can be used by specified users to achieve specified goals with
effectiveness, efficiency and satisfaction in a specified context of use”.

4) EXPLAIN ABOUT GOLDEN RULES OF USER INTERFACE DESIGN?


(OR)
EXPLAIN ABOUT MANDEL’S GOLDEN RULES IN USER INTERFACE DESIGN?
The golden rules actually form the basis for a set of user interface design principles that
guide this important software design activity.
They are three golden rules
1) Place the user in control.
2) Reduce the user’s memory load.
3) Make the interface consistent.
PLACE THE USER IN CONTROL:
The principles that allow users to be in control
1. Use modes judiciously (modeless)
2. Allow users to use either the keyboard or mouse (flexible)
3. Allow users to change focus (interruptible)
4. Display descriptive messages and text (Helpful)
5. Provide immediate and reversible actions, and feedback (forgiving)
6. Provide meaningful paths and exits (navigable)
7. Accommodate users with different skill levels (accessible)
8. Make the user interface transparent (facilitative)
REDUCE USER MEMORY LOAD:
Mandel defines design principles that enable an interface to reduce the user’s memory load:
1. Relieve short-term memory (remember)
2. Rely on recognition, not recall (recognition)
3. Provide visual cues (inform)
4. Provide defaults, undo, and redo (forgiving)
5. Provide interface shortcuts (frequency)
6. Use progressive disclosure (context)
7. Promote visual clarity (organize)
MAKE THE INTERFACE CONSISTENT:
Mandel defines a set of design principles that help make the interface consistent.
1. Sustain the context of users’ tasks (continuity)
2. Maintain consistency within and across products (experience)
3. Keep interaction results the same (expectations)
4. Provide aesthetic appeal and integrity (attitude)
5. Encourage exploration (predictable)

5) Briefly explain about the User Interface design?


User Interface Analysis and design:
The goal of user interface design is to make the user’s interaction as simple and
efficient as possible, in terms of accomplishing user goals.
The overall process for analysing and designing a UI begins with the creation of models
of system functions.
User interface design models:
Four different models come into play when a user interface is to be analyzed and designed.
I. User model: a profile of all end users of the system. Users can be categorized as:
a. Novices: no syntactic knowledge of the system and little semantic knowledge.
b. Knowledgeable, intermittent users: reasonable semantic knowledge of the system.
c. Knowledgeable, frequent users: good syntactic and semantic knowledge of the system.
II. Design model: a design realization of the user model that incorporates data, architectural,
interface and procedural representations of the software.
III. Mental model: (system perception) the user’s mental image of what the interface is.
The user’s mental model shapes how the user perceives the interface and whether the
UI meets the user’s needs.
IV. Implementation model: the “look and feel” of the interface coupled with all
supporting information (documentation) that describes interface syntax and semantics.
User Interface Analysis and design process:
The process: The analysis and design process for UI’s is “iterative” and can be represented
using a “spiral model”.
The user interface analysis and design process encompasses four distinct framework
activities:
a. User, task and environment analysis.
b. Interface design.
c. Interface construction.
d. Interface validation.

The figure implies that each of these tasks will occur more than once, with each pass
around the spiral representing additional elaboration of requirements and the resultant design.
1. Analysis:
➢ Analysis of the user environment focuses on the physical work environment.
➢ The information gathered as part of the analysis action is used to create an analysis
model for the interface.
2. Interface design: the goal of interface design is to define a set of interface objects
and action that enables a user to perform all defined tasks.
3. Interface construction: interface construction begins with the creation of a prototype
that enables usage scenarios to be evaluated.
4. Interface validation: validation focuses on the ability of the interface to implement every user task
correctly, to accommodate all task variations and to achieve all general user
requirements.
Interface Analysis:
A key tenet of all software engineering process models is this: understand the problem
before you attempt to design a solution. In user interface design, understanding the problem
means understanding:
i. The people who will interact with the system through the interface.
ii. The tasks that end users must perform to do their work.

iii. The environment in which these tasks will be conducted.


User Analysis: it is probably all the justification needed to spend some time understanding
the user before worrying about technical matters.
User interviews: The software team meets with end users to better understand their needs,
motivations and work culture.
Sales input: sales people meet with users on a regular basis and can gather information that
will help the software team to categorize users and better understand their requirements.
Marketing input: market analysis can be invaluable in the definition of market segments
and an understanding of how each segment might use the software in subtly different ways.
Support input: staff talks with users on a daily basis. They are the most likely source of
information on what works and what doesn’t what users like and what they dislike.
➢ Are users trained professionals, technicians, clerical or manufacturing workers?
➢ What level of formal education does the average user have?
➢ Are the users capable of learning from written materials or have they expressed a
desire for classroom training?
➢ What is the age range of the user community?

6) What are the design issues in UI design?


DESIGN ISSUES:
In user interface design, four common design issues are always identified: system response
time, user help facility, error information handling and command labelling.
• System response time: It is the primary complaint for many interactive applications. In
general, system response time is measured from the point at which the user performs some
control action (e.g. hits the return key or clicks a mouse) until the software responds with the
desired output or action.
• User help facilities: Two different types of help facilities are encountered: integrated and
add-on. An integrated help facility is designed into the software from the beginning;
we can access this facility by pressing ‘F1’ (depending on the software we use). An add-on
facility is added to the software after the system has been built. It is really an on-line
user’s manual.
• Error Information: Error messages and warnings are delivered when something goes wrong.
They help users identify the actual problem, and an effective error message
can do much to improve the quality of an interactive system and significantly reduce user
frustration when problems do occur.
• Command Labelling: Typed commands were once the dominant mode of
interaction between the user and system software. Today, window-oriented
interfaces reduce the need for typed commands; instead they allow selecting operations
directly from the desktop.

Q) What are the UI standards?


We also realized that for our user interface standards to be useful to programmers,
they needed to be set up in a way that made it easy for developers to find the relevant
standard. To make it easy for developers to find the information that they need, we
organized our user interface standards into four major areas:

• Navigation: We needed to develop a standard method for navigating through our


applications. In the past we'd used switchboards, command buttons on forms, and
drop-down menus. We decided to standardize on the use of one form of navigation,
drop-down menus, which we felt would provide us the most flexibility in all of our
applications.
• Forms: We needed to develop a standard method for laying out and presenting
information on forms, including methods of navigating between various parts of
the form, standard colors for forms, and some of the basic functionality that should
be built into all forms.
• Reports: We needed to develop a standard design and layout for all reports. We
believed it necessary to develop standard report design guideline (all reports include
a printed date, page 1 of , report name, standard font, and so on) and techniques for
permitting users to select reports for printing that would be both easy to maintain
and easy for a user to understand.
• Documentation: Last, but not least, we needed standards around the documentation
we'd produce, both for systems and user documentation. Our programming standards
and the use of FMS tools such as Total Access Analyzer, to a large part, addressed the
system documentation requirements. However, we needed to develop a standard for
user documentation including.
UNIT – V
Q) WRITE ABOUT SOFTWARE TESTING FUNDAMENTALS?
Testing: It is an essential activity in the software life cycle. The goal of testing is to
uncover as many errors as possible; this is used to increase the quality of the software.
Testable software has the following characteristics:
Testability. “Software testability is simply how easily [a computer program] can be tested.”
The following characteristics lead to testable software.
Operability. “The better it works, the more efficiently it can be tested.”

Observability. “What you see is what you test.”.

Controllability. “The better we can control the software, the more the testing can
be automated and optimized.”
Decomposability. “By controlling the scope of testing, we can more quickly
isolate problems and perform smarter retesting.”
Simplicity. “The less there is to test, the more quickly we can test it.”

The program should exhibit functional simplicity (e.g., the feature set is the minimum
necessary to meet requirements);
structural simplicity (e.g., architecture is modularized to limit the propagation of faults) code
simplicity (e.g., a coding standard is adopted for ease of inspection and maintenance).
Stability. “The fewer the changes, the fewer the disruptions to testing.” Changes to the
software are infrequent, controlled when they do occur, and do not invalidate existing tests.
Understandability. “The more information we have, the smarter we will test.”
Test Characteristics.

A good test has a high probability of finding an error.


A good test is not redundant.
A good test should be “best of breed”.
A good test should be neither too simple nor too complex.
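A hypothetical illustration of these characteristics (function and values invented): a test with a high probability of finding an error probes a boundary rather than repeating an obvious case, and each assertion is non-redundant.

```python
# A hypothetical illustration of a "good test": it probes the boundary,
# where errors are most likely, and no assertion repeats another
# (function and values invented).
def classify_age(age):
    if age < 0:
        raise ValueError("negative age")
    return "minor" if age < 18 else "adult"

assert classify_age(17) == "minor"   # just below the boundary
assert classify_age(18) == "adult"   # exactly at the boundary
print("boundary tests passed")
```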

Q) DEFINE TESTING OBJECTIVES?


(OR)
EXPLAIN SOFTWARE TESTING FUNDAMENTAL OBJECTIVES AND PRINCIPLES
Glen Myers states a number of rules that can serve well as testing objectives:
i. Testing is a process of executing a program with the intent of finding an error.
ii. A good test case is one that has a high probability of finding an as-yet-undiscovered error.
iii. A successful test is one that uncovers an as-yet-undiscovered error.

PRINCIPLES:
1) A software engineer must understand the basic principles that guide software testing
2) All tests should be traceable to customer requirements
3) Tests should be planned long before testing begins
4) The Pareto principle applies to software testing
5) Testing should begin “in the small” and progress toward testing “in the large”
6) Exhaustive testing is not possible
7) To be most effective, testing should be conducted by an independent third party.

Q) EXPLAIN ABOUT THE TESTING PRINCIPLES?


(OR)
WHAT ARE THE BASIC TESTING PRINCIPLES TO GUIDE THE SOFTWARE
ENGINEERING?
Before applying methods to design effective test cases, a software engineer must understand the basic
principles that guide software testing. They are
1) All tests should be traceable to customer requirements
2) Tests should be planned long before testing begins
3) The Pareto principle applies to software testing.
4) Testing should begin “in the small” and progress toward testing “in the large”
5) Exhaustive testing is not possible
6) To be most effective, testing should be conducted by an independent third party

Q) Explain about SOFTWARE QUALITY ASSURANCE


Software quality:
Software quality is:
1. The degree to which a system, component, or process meets specified requirements.
2. The degree to which a system, components, or process meets customer or user needs or
expectations.
Software quality assurance (SQA) is a process that ensures that developed software meets and
complies with defined or standardized quality specifications. SQA is an ongoing process within
the software development life cycle (SDLC) that routinely checks the developed software to
ensure it meets desired quality measures.
Elements of SQA Standards:
Software quality assurance encompasses a broad range of concerns and activities that focus
on the management of software quality.
The job of SQA is to ensure that standards that have been adopted are followed and
that all work products conform to them.
Reviews and audits. Technical reviews are a quality control activity performed by
software engineers for software engineers; their intent is to uncover errors.

Audits are a type of review performed by SQA personnel with the intent of ensuring that
quality guidelines are being followed for software engineering work.
Testing. Software testing is a quality control function that has one primary goal: to find
errors. The job of SQA is to ensure that testing is properly planned and efficiently conducted.
Error/defect collection and analysis. The only way to improve is to measure how you’re
doing. SQA collects and analyzes error and defect data to better understand how errors are
introduced and what software engineering activities are best suited to eliminating them.
Change management. Change is one of the most disruptive aspects of any software project. If
it is not properly managed, change can lead to confusion, and confusion almost always leads to
poor quality.
Education. Every software organization wants to improve its software engineering practices.
A key contributor to improvement is education of software engineers, their managers, and other
stakeholders.
Vendor management. Three categories of software are acquired from external software
vendors
Security management. SQA ensures that appropriate process and technology are used to
achieve software security.
Safety. SQA may be responsible for assessing the impact of software failure and for initiating
those steps required to reduce risk.
Risk management. The SQA organization ensures that risk management activities are
properly conducted and that risk-related contingency plans have been established.

Q) write about quality metrics?


GOALS, ATTRIBUTES, AND METRICS

The SQA actions described in the preceding section are performed to achieve a set of
pragmatic goals:
Requirements quality. SQA must ensure that the software team has properly reviewed the
requirements model to achieve a high level of quality.
Design quality. SQA looks for attributes of the design that are indicators of quality.
Code quality. SQA should isolate those attributes that allow a reasonable analysis of the quality
of code.
Quality control effectiveness. SQA analyzes the allocation of resources for reviews and testing
to assess whether they are being allocated in the most effective manner.

Q) What is software reliability?


Software reliability is defined in statistical terms as “the probability of failure-free operation
of a computer program in a specified environment for a specified time”.
Measures of Reliability and Availability
A simple measure of reliability is mean-time-between-failure (MTBF):
MTBF = MTTF + MTTR
where MTTF = mean-time-to-failure
and MTTR = mean-time-to-repair
Software availability is the probability that a program is operating according to requirements
at a given point in time and is defined as

Availability = [MTTF / (MTTF + MTTR)] × 100%
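As a small illustration, the two measures above can be computed directly; the hour figures below are made-up sample values, not from the notes:

```python
# Sample reliability calculations; the hour figures are hypothetical.

def mtbf(mttf, mttr):
    """Mean-time-between-failure: MTBF = MTTF + MTTR."""
    return mttf + mttr

def availability(mttf, mttr):
    """Availability = MTTF / (MTTF + MTTR) * 100, as a percentage."""
    return mttf / (mttf + mttr) * 100

# e.g. 900 hours mean time to failure, 100 hours mean time to repair:
print(mtbf(900, 100))          # 1000 hours
print(availability(900, 100))  # 90.0 percent
```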
Software Safety:
Software safety is a software quality assurance activity that focuses on the identification
and assessment of potential hazards that may affect software negatively and cause an
entire system to fail. For example, some of the hazards associated with a computer-based
cruise control for an automobile might be:
✓ Causes uncontrolled acceleration that cannot be stopped,

✓ Does not respond to depression of brake pedal (by turning off),

✓ Does not engage when Switch is activated, and

✓ Slowly loses or gains speed

Q) Briefly explain about a strategic approach to software testing?


Testing is a set of activities that can be planned in advance and conducted systematically.
Generic characteristics:

• To perform effective testing, you should conduct effective technical reviews. By
doing this, many errors will be eliminated before testing commences.
• Testing begins at the component level and works “outward” toward the integration of
the entire computer-based system.
• Different testing techniques are appropriate for different software engineering
approaches and at different points in time.
• Testing is conducted by the developer of the software and (for large projects)an
independent test group.
• Testing and debugging are different activities, but debugging must be
accommodated in any testing strategy.

Verification and Validation


Software testing is one element of a broader topic that is often referred to as verification
and validation (V&V).
Verification refers to the set of tasks that ensure that software correctly implements a specific
function.

Verification: “Are we building the product right?”


Validation refers to a different set of tasks that ensure that the software that has been built is
traceable to customer requirements.

Validation: “Are we building the right product?”


Q) Explain about unit testing?
Unit testing is a technique in which individual modules are tested by the developer to
determine whether there are any issues. It is concerned with the functional correctness
of the standalone modules.
The main aim is to isolate each unit of the system to identify, analyze and fix the defects.
Unit Testing - Advantages:
Reduces defects in newly developed features and reduces bugs when changing existing
functionality.
Reduces Cost of Testing as defects are captured in very early phase.
Improves design and allows better refactoring of code.
Unit tests, when integrated with the build, also indicate the quality of the build.
Unit Testing Lifecycle:
Unit Testing Techniques:
Black Box Testing - the user interface, inputs, and outputs are tested.
White Box Testing - the internal behavior of each function is tested.
Gray Box Testing - a combination of both, used to execute tests and assess risks.
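A minimal sketch of a unit test using Python's unittest module; the `add` function is a hypothetical unit under test, not from the notes:

```python
import unittest

# Hypothetical unit under test; the function and its tests are
# illustrative only.
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    # Each test method isolates one behaviour of the standalone unit.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

if __name__ == "__main__":
    unittest.main(exit=False)  # run the tests without exiting the interpreter
```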

Q) Explain about integration testing?


INTEGRATION TESTING is defined as a type of testing where software modules are integrated
logically and tested as a group.
Integration Testing focuses on checking data communication amongst these modules.
Hence it is also termed as 'I & T' (Integration and Testing), 'String Testing' and sometimes 'Thread
Testing'.
1) Top-down Integration Testing:
Top-Down Integration Testing is a method in which integration testing takes place from top to
bottom following the control flow of software system. The higher-level modules are tested first and
then lower-level modules are tested and integrated in order to check the software functionality.
Stubs are used for testing if some modules are not ready.
Advantages:
1. Fault localization is easier.
2. Possibility to obtain an early prototype.
3. Critical modules are tested on priority; major design flaws can be found and fixed first.
Disadvantages:
1. Needs many stubs.
2. Modules at a lower level are tested inadequately.
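The use of stubs described above can be sketched as follows; all module names are hypothetical:

```python
# Top-down integration with a stub (all names are hypothetical).
# The higher-level module is tested before the real lower-level module exists.

def lower_module_stub(order_id):
    # Stub: returns a canned answer in place of the unfinished module.
    return {"order_id": order_id, "status": "OK"}

def top_module(order_id, fetch=lower_module_stub):
    # Higher-level control logic under test; the lower module is injected
    # so the stub can later be swapped for the real implementation.
    result = fetch(order_id)
    return result["status"] == "OK"

print(top_module(42))  # True: the high-level flow is verified against the stub
```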
2) Bottom-up Integration Testing:
Bottom-up Integration Testing is a strategy in which the lower-level modules are tested first.
These tested modules are then further used to facilitate the testing of higher-level modules. The
process continues until all modules at top level are tested. Once the lower-level modules are tested
and integrated, then the next level of modules is formed.
Advantages:
1. Fault localization is easier.
2. No time is wasted waiting for all modules to be developed, unlike the Big-bang approach.
Disadvantages:
1. Critical modules (at the top level of the software architecture) which control the flow of
the application are tested last and may be prone to defects.
2. An early prototype is not possible.
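A minimal sketch of a test driver for bottom-up integration; the modules are hypothetical:

```python
# Bottom-up integration with a driver (all names are hypothetical).
# The lower-level module is exercised before its real caller exists.

def lower_module(x):
    # Lower-level unit, already implemented and under test.
    return x * 2

def driver():
    # Driver: a throwaway caller standing in for the not-yet-written
    # higher-level module.
    return [lower_module(i) for i in [1, 2, 3]]

print(driver())  # [2, 4, 6]
```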

3) Sandwich Testing:
Sandwich Testing is a strategy in which top level modules are tested with lower-level modules at the
same time lower modules are integrated with top modules and tested as a system.
It is a combination of Top-down and Bottom-up approaches therefore it is called Hybrid Integration
Testing. It makes use of both stubs as well as drivers.
Q) Write about Smoke testing?
Smoke Testing is a software testing process that determines whether the deployed software
build is stable or not. Smoke testing is a confirmation for QA team to proceed with further
software testing. It consists of a minimal set of tests run on each build to test software
functionalities. Smoke testing is also known as "Build Verification Testing" or “Confidence Testing.”

Advantages of Smoke testing:


Here are a few advantages of smoke testing:
1. Easy to perform; defects are identified in early stages.
2. Improves the quality of the system.
3. Reduces risk.
4. Progress is easier to assess.
5. Saves test effort and time; critical errors are easy to detect and correct.
6. Runs quickly and minimizes integration risks.
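A smoke suite can be sketched as a handful of fast checks run on every build; the checks below are hypothetical placeholders for real probes:

```python
# Minimal smoke suite sketch: fast checks run on each deployed build.

def app_starts():
    return True  # placeholder for "application launches"

def db_reachable():
    return True  # placeholder for "database connection opens"

def smoke_suite():
    checks = {"app_starts": app_starts(), "db_reachable": db_reachable()}
    # The build is considered stable only if every check passes.
    return all(checks.values()), checks

stable, results = smoke_suite()
print("build stable:", stable)
```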

Q) What is Regression Testing?


REGRESSION TESTING is defined as a type of software testing to confirm that a recent program
or code change has not adversely affected existing features.
Regression Testing is nothing but a full or partial selection of already executed test cases which are
re-executed to ensure existing functionalities work fine.
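A regression suite can be sketched as previously passing cases re-executed after a change; the function and expected values are hypothetical:

```python
# Regression suite sketch: already-passing cases re-run after a change.

def discount(price):
    return price * 0.9  # recently changed code under test

regression_cases = [(100, 90.0), (200, 180.0), (0, 0.0)]

for price, expected in regression_cases:
    # A failed assertion here would mean the change broke existing behaviour.
    assert abs(discount(price) - expected) < 1e-9, f"regression at {price}"
print("no regressions detected")
```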

Q) What is Validation Testing?


The process of evaluating software during the development process or at the end of the
development process to determine whether it satisfies specified business requirements.
Validation Testing ensures that the product actually meets the client's needs. It can also be defined
as to demonstrate that the product fulfills its intended use when deployed on appropriate environment.
(Diagram: verification checks the process against the specification, while validation checks
the product against the needs and expectations of the customer.)
Q) What is System testing?


SYSTEM TESTING is a level of software testing where a complete and integrated software is
tested. The purpose of this test is to evaluate the system’s compliance with the specified requirements.
Types of system testing:
Recovery Testing: Recovery testing is a system test that forces the software to fail in a variety
of ways and verifies that recovery is properly performed.

Security Testing: Security testing attempts to verify that protection mechanisms built into a
system will, in fact, protect it from improper penetration.
Stress Testing: Stress testing executes a system in a manner that demands resources in
abnormal quantity, frequency, or volume.
Performance Testing: Performance tests are often coupled with stress testing and usually
require both hardware and software instrumentation.
Deployment testing: In many cases, software must execute on a variety of platforms and under more
than one operating system environment. Deployment testing, sometimes called configuration testing,
exercises the software in each environment in which it is to operate.

Q) What is Basis Path Testing?

The basis path method enables the test-case designer to derive a logical complexity measure of
a procedural design and use this measure as a guide for defining a basis set of execution paths.
Test cases derived to exercise the basis set are guaranteed to execute every statement in the
program at least one time during testing.
The principle behind basis path testing is that all independent paths of the program have to be
tested at least once. Below are the steps of this technique:
- Draw a control flow graph.

- Determine Cyclomatic complexity.

- Find a basis set of paths.

- Generate test cases for each path.


Step 1: Draw a control flow graph

Basic control flow graph structures:

On a control flow graph, we can see that:

- Arrows or edges represent flows of control.

- Circles or nodes represent actions.

- Areas bounded by edges and nodes are called regions.

- A predicate node is a node containing a condition.

Below is an example of control flow graph:

1: IF A = 100

2: THEN IF B > C

3: THEN A = B

4: ELSE A= C

5: ENDIF

6: ENDIF

7: Print A
Step 2: Determine Cyclomatic complexity

Cyclomatic complexity = Number of Predicate Nodes + 1

From the example in Step 1, we can redraw it as below to show predicate nodes clearly:

As we see, there are two predicate nodes in the graph.


So the Cyclomatic complexity is 2+1= 3.
Cyclomatic complexity V(G) for a flow graph G is defined as

V(G) = E - N + 2

where E is the number of flow graph edges and N is the number of flow graph nodes.
Cyclomatic complexity V(G) for a flow graph G is also defined as

V(G) = P + 1

where P is the number of predicate nodes contained in the flow graph G.
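Both definitions can be checked against the Step 1 example graph as drawn, which (by my count, an assumption about the missing figure) has 7 nodes, 8 edges, and 2 predicate nodes:

```python
# V(G) computed two ways for the Step 1 example graph:
# 7 nodes, 8 edges, 2 predicate nodes.

def vg_from_edges(e, n):
    return e - n + 2   # V(G) = E - N + 2

def vg_from_predicates(p):
    return p + 1       # V(G) = P + 1

E, N, P = 8, 7, 2
print(vg_from_edges(E, N))    # 3
print(vg_from_predicates(P))  # 3
```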


Step 3: Find a basis set of paths
The Cyclomatic complexity tells us the number of paths to evaluate for basis path testing. In
the example, we have 3 paths, and our basis set of paths is:
Path 1: 1, 2, 3, 5, 6, 7
Path 2: 1, 2, 4, 5, 6, 7
Path 3: 1, 6, 7
Step 4: Generate test cases for each path
After determining the basis set of paths, we can generate the test case for each path. Usually, we
need at least one test case to cover one path. In the example, however, Path 3 is already covered by
Path 1 and 2 so we only need to write 2 test cases.
In conclusion, basis path testing helps us to reduce redundant tests. It suggests independent
paths from which we write test cases needed to ensure that every statement and condition can
be executed at least one time.
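The numbered pseudocode from Step 1 can be sketched as a Python function with one test case per basis path; the input values are chosen for illustration:

```python
# The Step 1 pseudocode as a Python function, with one test case
# exercising each of the three basis paths.

def example(a, b, c):
    if a == 100:       # predicate node 1
        if b > c:      # predicate node 2
            a = b      # path 1
        else:
            a = c      # path 2
    return a           # path 3 skips both assignments

print(example(100, 9, 3))  # path 1 (A = 100, B > C)  -> 9
print(example(100, 3, 8))  # path 2 (A = 100, B <= C) -> 8
print(example(7, 1, 2))    # path 3 (A != 100)        -> 7
```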

Q) CONTROL STRUCTURE TESTING


Condition Testing
Condition testing is a test-case design method that exercises the logical conditions contained
in a program module. A simple condition is a Boolean variable or a relational expression,
possibly preceded with one NOT (¬) operator. A relational expression takes the form

E1 <relational-operator> E2

where E1 and E2 are arithmetic expressions and <relational-operator> is one of the
following: <, ≤, =, ≠ (nonequality), >, ≥
A compound condition is composed of two or more simple conditions, Boolean operators,


and parentheses.
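A sketch of condition testing for a hypothetical compound condition, driving each simple condition both true and false at least once:

```python
# Condition testing sketch for the hypothetical compound condition
# C = (a > b) AND (x == y).

def compound(a, b, x, y):
    return (a > b) and (x == y)

cases = [
    (2, 1, 5, 5, True),   # first simple condition True,  second True
    (2, 1, 5, 6, False),  # first True,  second False
    (1, 2, 5, 5, False),  # first False (second short-circuited)
]
for a, b, x, y, expected in cases:
    assert compound(a, b, x, y) == expected
print("all condition-testing cases passed")
```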
Data Flow Testing
The data flow testing method [Fra93] selects test paths of a program according to the
locations of definitions and uses of variables in the program.

Loop Testing
Loop testing is a white-box testing technique that focuses exclusively on the
validity of loop constructs
Simple loops. The following set of tests can be applied to simple loops, where n
is the maximum number of allowable passes through the loop.
a. Skip the loop entirely.
b. Only one pass through the loop.
c. Two passes through the loop.
d. m passes through the loop where m < n.
e. n -1, n, n + 1 passes through the loop.
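The simple-loop test values above can be sketched as follows; n = 5 is an assumed maximum, and the loop under test is a hypothetical list-summing loop:

```python
# The simple-loop test values applied to a hypothetical loop,
# with an assumed maximum of n = 5 passes.

def loop_sum(values):
    total = 0
    for v in values:   # simple loop under test
        total += v
    return total

n = 5
# skip, one pass, two passes, m < n, then n-1, n, n+1 passes:
for passes in [0, 1, 2, 3, n - 1, n, n + 1]:
    data = list(range(passes))
    print(passes, "passes ->", loop_sum(data))
```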

For nested loop, you need to follow the following steps.

1. Set all the other loops to minimum value and start at the innermost loop
2. For the innermost loop, perform a simple loop test and hold the outer loops at their
minimum iteration parameter value.

3. Perform test for the next loop and work outward.


4. Continue until the outermost loop has been tested.

Concatenated Loops:
In the concatenated loops, if two loops are independent of each other then they are tested
using simple loops or else test them as nested loops.
Unstructured Loops: For unstructured loops, restructuring of the design is required to reflect
the use of the structured programming constructs.
Q) EXPLAIN ABOUT BLACK-BOX TESTING?
Black-box testing, also called behavioral testing, focuses on the functional requirements of
the software.
Black-box testing attempts to find errors in the following categories:

(1) Incorrect or missing functions,

(2) Interface errors

(3) Errors in data structures or external database access

(4) Behavior or performance errors

(5) Initialization and


(6)termination errors.
Tests are designed to answer the following questions:

• How is functional validity tested?

• How are system behavior and performance tested?

• What classes of input will make good test cases?

• Is the system particularly sensitive to certain input values?

• How are the boundaries of a data class isolated?

• What data rates and data volume can the system tolerate?

• What effect will specific combinations of data have on system operation?


Graph-Based Testing Methods

Software testing begins by creating a graph of important objects and their relationships and
then devising a series of tests that will cover the graph so that each object and relationship is
exercised and errors are uncovered.

A directed link (represented by an arrow) indicates that a relationship moves in only one
direction.
A bidirectional link, also called a symmetric link, implies that the relationship applies in
both directions.
Parallel links are used when a number of different relationships are established between
graph nodes.
Equivalence Partitioning

Equivalence partitioning is a black-box testing method that divides the input domain of a
program into classes of data from which test cases can be derived.
Test-case design for equivalence partitioning is based on an evaluation of equivalence classes

for an input condition.


Boundary Value Analysis (BVA)
It leads to a selection of test cases that exercise bounding values.
➢ When an input condition specifies a range of values, design test cases that exercise the
minimum and maximum values, as well as values just above and just below them.
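A sketch combining equivalence partitioning and boundary value analysis for a hypothetical input field that accepts ages 18 to 60 inclusive:

```python
# Equivalence partitioning and boundary value analysis for a
# hypothetical input field that accepts ages 18..60 inclusive.

def is_valid_age(age):
    return 18 <= age <= 60

# Equivalence classes: one representative value per class.
assert is_valid_age(30)        # valid class
assert not is_valid_age(5)     # invalid class (below range)
assert not is_valid_age(70)    # invalid class (above range)

# Boundary values: min, just below min, max, just above max.
assert is_valid_age(18) and not is_valid_age(17)
assert is_valid_age(60) and not is_valid_age(61)
print("all partition and boundary checks passed")
```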
Comparison testing:
Used when redundant versions of the software have been developed; each version is tested
with the same inputs and the outputs are compared to uncover errors.
Q) Write about Software Re-Engineering?
Reorganizing and modifying existing software systems to make them more maintainable
➢ Re-structuring or re-writing part or all of a legacy system without
changing its functionality
➢ Applicable where some but not all sub-systems of a larger system require
frequent maintenance
➢ Re-engineering involves adding effort to make them easier to maintain. The
system may be re-structured and re-documented
When to re-engineer
✓ When system changes are mostly confined to part of the system then re-engineer that
part
✓ When hardware or software support becomes obsolete
✓ When tools to support re-structuring are available
Re-engineering advantages:
Reduced risk: There is a high risk in new software development. There may
be development problems, staffing problems and specification problems

Reduced cost: The cost of re-engineering is often significantly less than the costs
of developing new software

Re-engineering cost factors:


• The quality of the software to be re-engineered
• The tool support available for re-engineering
• The extent of the data conversion which is required
• The availability of expert staff for re-engineering
Re-engineering approaches:

Source code translation:


Involves converting the code from one language (or language version) to another
e.g., FORTRAN to C
May be necessary because of:
• Hardware platform update
• Staff skill shortages
• Organizational policy changes
-Only realistic if an automatic translator is available

The program translation process:

Q) Write about Reverse Engineering?


➢ Reverse engineering is the process of deriving the system design and
specification from its source code
➢ Analyzing software with a view to understanding its design and specification
➢ May be part of a re-engineering process but may also be used to re-specify a system for
re-implementation
➢ Builds a program data base and generates information from this
➢ Program understanding tools (browsers, cross-reference generators, etc.) may be
used in this process
The reverse engineering processes

Program structure improvement:


➢ The program may be automatically restructured to remove unconditional branches
➢ Conditions may be simplified to make them more readable
Restructuring problems:
Problems with re-structuring are:
• Loss of comments
• Loss of documentation and Heavy computational demands
o Restructuring doesn’t help with poor modularization where related
components are dispersed throughout the code
o The understandability of data-driven programs may not be improved by re-structuring

Q) What is CASE (Computer Aided Software Engineering)?


It means, development and maintenance of software projects with help of various automated
software tools.
CASE Tools: CASE tools are set of software application programs, which are used to automate
SDLC activities. CASE tools are used by software project managers, analysts and engineers to
develop software system.
There are number of CASE tools available to simplify various stages of Software Development
Life Cycle such as Analysis tools, Design tools, Project management tools, Database
Management tools, Documentation tools are to name a few.
Use of CASE tools accelerates the development of project to produce desired result and helps
to uncover flaws before moving ahead with next stage in software development
Components of CASE Tools

CASE tools can be broadly divided into the following parts based on their use at a particular
SDLC stage:
Central Repository - CASE tools require a central repository, which can serve as a source of
common, integrated and consistent information. Central repository is a central place of storage
where product specifications, requirement documents, related reports and diagrams, other
useful information regarding management is stored. Central repository also
serves as data dictionary.
Upper Case Tools - Upper CASE tools are used in planning, analysis and design stages of
SDLC.
Lower Case Tools - Lower CASE tools are used in implementation, testing and
maintenance.
Integrated Case Tools - Integrated CASE tools are helpful in all the stages of SDLC, from
Requirement gathering to Testing and documentation.

CASE tools can be grouped together if they have similar functionality, process activities and
capability of getting integrated with other tools.
Scope of Case Tools
The scope of CASE tools goes throughout the SDLC.
Case Tools Types
Now we briefly go through various CASE tools
Diagram tools
These tools are used to represent system components, data and control flow among various
software components and system structure in a graphical form. For example, Flow Chart
Maker
tool for creating state-of-the-art flowcharts.
Process Modeling Tools
Process modeling is a method to create a software process model, which is used to develop the
software. Process modeling tools help managers choose a process model or modify it as
per the requirements of the software product. For example, EPF Composer.

Project Management Tools


These tools are used for project planning, cost and effort estimation, project scheduling and
resource planning. Managers have to strictly comply project execution with every mentioned
step in software project management. Project management tools help in storing and sharing
project information in real-time throughout the organization.
Documentation Tools
Documentation in a software project starts prior to the software process, goes throughout all
phases of SDLC and after the completion of the project.
Documentation tools generate documents for technical users and end users. Technical users are mostly
in-house professionals of the development team who refer to system manual, training manual,
installation manuals etc. The end user documents describe the functioning and how-to of the system
such as user manual. For example, Doxygen, Dr. Explain, Adobe RoboHelp for documentation.
Analysis Tools
These tools help to gather requirements, automatically check for any inconsistency, inaccuracy
in the diagrams, data redundancies or erroneous omissions. For example, Accept 360,
Accompa, Case Complete for requirement analysis, Visible Analyst for total analysis.

Design Tools
These tools help software designers to design the block structure of the software, which may

further be broken down in smaller modules using refinement techniques. These tools provide
detailing of each module and interconnections among modules. For example, Animated
Software Design
Configuration Management Tools
An instance of software is released under one version. Configuration Management tools deal
with –
Version and revision management
Baseline configuration management
Change control management
CASE tools help in this by automatic tracking, version management and release management.
For example, Fossil, Git, AccuRev.
Change Control Tools
These tools are considered as a part of configuration management tools. They deal with changes
made to the software after its baseline is fixed or when the software is first released. CASE
tools automate change tracking, file management, code management and more. It also helps in
enforcing change policy of the organization.
Programming Tools
These tools consist of programming environments like IDE Integrated Development
Environment, in-built modules library and simulation tools. These tools provide
comprehensive aid in building software product and include features for simulation and
testing. For example, Cscope to search code in C, and Eclipse.
Prototyping Tools
Software prototype is simulated version of the intended software product. Prototype provides
initial look and feel of the product and simulates few aspects of actual product.
Web Development Tools
These tools assist in designing web pages with all allied elements like forms, text, script,
graphic and so on. Web tools also provide live preview of what is being developed and how
will it look after completion. For example, Fontello, Adobe Edge Inspect, Foundation 3,
Brackets.
Quality Assurance Tools
Quality assurance in a software organization is monitoring the engineering process and
methods adopted to develop the software product in order to ensure conformance of quality
as per organization standards. QA tools consist of configuration and change control tools and
software testing tools. For example, SoapTest, AppsWatch, JMeter.
Maintenance Tools
Software maintenance includes modifications in the software product after it is delivered.
Automatic logging and error reporting techniques, automatic error ticket generation and root
cause Analysis are few CASE tools, which help software organization in maintenance phase
of SDLC.
Case Environment:

User interface

The user interface provides for the users to interact with the different tools and reducing the
overhead of learning how the different tools are used.

Object management system and repository

Different case tools represent the software product as a set of entities such as specification,
design, text data, project plan, etc. The commercial relational database management systems
are geared towards supporting large volumes of information structured as simple relatively
short records.

You might also like