Introduction To SE BSNL


COMPUTER SCIENCE

SUBJECT NAME: SOFTWARE ENGINEERING


Software
Software is:

(1) instructions (computer programs) that when executed provide desired features, function, and performance

(2) data structures that enable the programs to adequately manipulate information

(3) documentation that describes the operation and use of the programs.
Software Engineering
 Realities about Software Engineering:
• a concerted effort should be made to understand the problem before a
software solution is developed
• design becomes a pivotal activity
• software should exhibit high quality
• software should be maintainable

 The seminal definition: [Software engineering is] the establishment and use of sound engineering principles in order to obtain economically software that is reliable and works efficiently on real machines.
Software Engineering
The IEEE definition:

Software Engineering:

1. The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software.

2. The study of approaches as in (1).


A Generic View of Software Engineering
 The Definition Phase (Focus on What)
 Information Engineering, S/W Project Planning & Requirement Analysis

 The Development Phase (Focus on How)


 Design, Code & Test

 The Support Phase (Focus on Changes)


 Correction

 Adaptation

 Enhancement

 Prevention
Software Development Life Cycle
Process Flow
Process flow — describes how the framework activities, and the actions and tasks that occur within each framework activity, are organized with respect to sequence and time
Process Flow
Process Models
A software process model represents the order in which the activities
of software development will be undertaken.

It describes the sequence in which the phases of the software lifecycle will be performed.
Linear Process Models
Waterfall Model
Waterfall Model
The Waterfall model is most appropriate when:
 Requirements are very well documented, clear and fixed.

 Product definition is stable.

 Technology is understood and is not dynamic.

 There are no ambiguous requirements.

 Ample resources with required expertise are available to support the product.

 The project is short.


Waterfall Model
The major disadvantages of the Waterfall Model are

 No working software is produced until late during the life cycle.

 High amounts of risk and uncertainty.

 Not a good model for complex and object-oriented projects.

 Poor model for long and ongoing projects.


Waterfall Model
The major disadvantages of the Waterfall Model are

 Not suitable for projects where requirements are at a moderate to high risk of changing; risk and uncertainty are therefore high with this process model.

 It is difficult to measure progress within stages.

 Cannot accommodate changing requirements.

 Adjusting scope during the life cycle can end a project.

 Integration is done as a "big bang" at the very end, which does not allow identifying any technological or business bottlenecks or challenges early.
V Model
V Model
 emphasizes the concept of “Verification and Validation”

 Testing phases are planned in parallel with the corresponding development stages

 Advantages
 Fewer bugs, since testing is done at every stage

 Provides a detailed explanation of the problems involved.

 Emphasizes the importance of testing and makes sure that testing is planned.
Evolutionary Process Models
Prototyping Model
Prototyping Model
1. The new system requirements are defined in as much detail as possible. This
usually involves interviewing a number of users.

2. A preliminary design is created for the new system.

3. A first prototype of the new system is constructed from the preliminary design.
This is usually a scaled-down system, and represents an approximation of the
characteristics of the final product.
Prototyping Model
4. The users thoroughly evaluate the first prototype, noting its strengths and weaknesses, what needs to be added, and what should be removed. The developer collects and analyzes the remarks from the users.

5. The first prototype is modified, based on the comments supplied by the users, and
a second prototype of the new system is constructed.

6. The second prototype is evaluated in the same manner as was the first prototype.
Prototyping Model
7. The preceding steps are iterated as many times as necessary, until the users are
satisfied that the prototype represents the final product desired.

8. The final system is constructed, based on the final prototype.

9. The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.
Prototyping Model
Advantages:
 Users are actively involved in the development

 Since this methodology provides a working model of the system, the users get a better understanding of the system being developed.

 Errors can be detected much earlier.

 Quicker user feedback is available leading to better solutions.

 Missing functionality can be identified easily


Prototyping Model
Disadvantages
 Increases the complexity of the system, as its scope may expand beyond the original plans.

When to use Prototype model:


 Prototype model should be used when the desired system needs to have a lot of interaction
with the end users.

 Typically, online systems and web interfaces, which have a very high amount of interaction with end users, are best suited for the Prototype model. It may take a while to build a system that is easy to use and needs minimal training for the end user.
Spiral Model
Spiral Model
When to use Spiral Model?

 frequent deliveries are required

 the project is large

 requirements are unclear and complex

 changes may be required at any time

 the budget is high


Spiral Model
Advantages

 High amount of risk analysis

 Useful for large and mission-critical projects.

Disadvantages

 Can be a costly model to use.

 Risk analysis requires highly specific expertise

 Doesn't work well for smaller projects.


Incremental Process Models
Incremental Model
Incremental Model
Advantages:
 Generates working software quickly and early during the software life
cycle

 The core product is developed first, i.e., the main functionality is added in the first increment.

 With each release, a new feature can be added to the product.


Incremental Model
Disadvantages :
 Needs good planning and design.

 Needs a clear and complete definition of the whole system before it can be
broken down and built incrementally.

 Total cost is higher than waterfall.


RAD Model
RAD Model
When to use RAD Model?
 When the system needs to be modularized and delivered within a short time span (2-3 months).

 When the requirements are well-known.

 When the technical risk is limited.

 It should be used only if the budget allows the use of automatic code generating
tools.
RAD Model
Advantage of RAD Model
 This model is flexible for change.

 Each phase in RAD brings highest priority functionality to the customer.

 It reduces development time.

 It increases the reusability of features.


RAD Model
Disadvantage of RAD Model

 For large, but scalable projects, RAD requires sufficient human resources to create
the right number of RAD teams.

 If developers and customers are not committed to the rapid-fire activities necessary to complete the system in a much abbreviated time frame, the RAD project will fail.

 If a system cannot properly be modularized, building the components necessary for RAD will be problematic.

 Not all applications are compatible with RAD.

 For smaller projects, we cannot use the RAD model.


Testing
Software Testing
Testing is the process of exercising a program with the specific intent
of finding errors prior to delivery to the end user.

A good test case is one that has a high probability of finding an as-yet
undiscovered error.

A successful test is one that uncovers an as-yet-undiscovered error.


V&V
 Verification refers to the set of tasks that ensure that software correctly
implements a specific function.

 Validation refers to a different set of tasks that ensure that the software that has
been built is traceable to customer requirements. Boehm [Boe81] states this
another way:
 Verification: "Are we building the product right?"

 Validation: "Are we building the right product?"


Who Tests the Software
Testing Strategy
Testing Strategy
 We begin by ‘testing-in-the-small’ and move toward ‘testing-in-the-large’

 For conventional software


 The module (component) is our initial focus

 Integration of modules follows


Unit Testing
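A minimal sketch of what unit testing a single component in isolation can look like, using Python's standard unittest module; the apply_discount function and its tests are hypothetical examples, not part of the original material:

# A minimal unit-test sketch (hypothetical module and function names).
import unittest


def apply_discount(price, percent):
    """Hypothetical unit under test: returns the discounted price."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_raises_error(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)


if __name__ == "__main__":
    unittest.main()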
Integration Testing
The goal of Integration Testing is to detect defects that occur at the interfaces between units.

Options:

 the “big bang” approach

 an incremental construction strategy
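As a sketch of the incremental strategy, the fragment below integration-tests a higher-level module against a stub that stands in for a lower-level unit that is not yet ready; all module and function names are hypothetical:

# Sketch: top-down incremental integration using a stub for a unit
# that is not integrated yet (all names are hypothetical).
import unittest


def tax_service_stub(amount):
    """Stub standing in for the real, not-yet-integrated tax module."""
    return 0.10 * amount  # fixed, predictable behaviour for the test


def checkout_total(amount, tax_service):
    """Higher-level module whose interface to the tax module is under test."""
    return round(amount + tax_service(amount), 2)


class TestCheckoutIntegration(unittest.TestCase):
    def test_checkout_uses_tax_service_interface(self):
        # Exercises the interface between checkout and the tax module.
        self.assertEqual(checkout_total(100.0, tax_service_stub), 110.0)


if __name__ == "__main__":
    unittest.main()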


Top Down Integration
Bottom-Up Integration
Sandwich Testing
System Testing
The goal is to ensure that the system performs according to its requirements.

System test evaluates both functional behavior and quality requirements such as
reliability, usability, performance and security.

This phase of testing is especially useful for detecting external hardware and
software interface defects.

Examples:
 race conditions

 deadlocks

 problems with interrupts and exception handling

 ineffective memory usage.


System Testing
Functional Testing

Non Functional Testing

Performance testing
 Load testing (increasing workload)

 Stress testing (beyond the limits of its anticipated workload)

 Endurance testing (a significant workload is applied continuously)

 Spike testing (the load is suddenly and substantially increased)

Volume testing (performance against a large volume of data in the database)

Recovery testing (how quickly the system recovers)

Security testing (uncover vulnerabilities of the system)

Scalability testing (measured in terms of its ability to scale up or scale down the number of user
requests)
Regression Testing
Regression testing is the re-execution of some subset of tests that have already been
conducted to ensure that changes have not propagated unintended side effects

Whenever software is corrected, some aspect of the software configuration (the program, its
documentation, or the data that support it) is changed.

Regression testing helps to ensure that changes (due to testing or for other reasons) do not
introduce unintended behavior or additional errors.

Regression testing may be conducted manually, by re-executing a subset of all test cases or
using automated capture/playback tools.
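As a sketch of re-executing a chosen subset of existing tests after a change, the example below collects only the affected test into a suite and re-runs it; the compute_total function and test names are hypothetical:

# Sketch: re-executing a subset of already-written tests after a change
# (module, function and test names are hypothetical).
import unittest


def compute_total(amount, tax_rate):
    """Hypothetical function that was recently modified."""
    return round(amount * (1 + tax_rate), 2)


class TestCoreBehaviour(unittest.TestCase):
    """Tests that were previously run against the affected module."""

    def test_total_includes_tax(self):
        self.assertEqual(compute_total(100.0, tax_rate=0.1), 110.0)

    def test_total_with_zero_tax(self):
        self.assertEqual(compute_total(100.0, tax_rate=0.0), 100.0)


def regression_suite():
    """Select only the tests that exercise the changed code path."""
    suite = unittest.TestSuite()
    suite.addTest(TestCoreBehaviour("test_total_includes_tax"))
    return suite


if __name__ == "__main__":
    # Re-run just the regression subset, e.g. after a bug fix to compute_total.
    unittest.TextTestRunner(verbosity=2).run(regression_suite())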
User Acceptance Testing
 Alpha Testing

 Beta Testing / Installation Testing


White box Testing
White Box Testing (also known as Clear Box Testing, Open Box Testing, Glass Box
Testing, Transparent Box Testing, Code-Based Testing or Structural Testing) is a
software testing method in which the internal structure/design/implementation
of the item being tested is known to the tester.

 White-box test design techniques:

 Basis path testing

 Control flow testing (statement and branch coverage)

 Data flow testing (coverage of variable definitions and uses)

 Loop testing

 Condition testing
Basis Path Testing
Basis path testing is a white-box testing technique first proposed by Tom McCabe.
A simple notation for the representation of control flow is called a flow graph.
Cyclomatic complexity
Cyclomatic complexity is a software metric that provides a quantitative measure
of the logical complexity of a program.

For example, a set of independent paths for the flow graph illustrated in Figure B
is

 path 1: 1-11

 path 2: 1-2-3-4-5-10-1-11

 path 3: 1-2-3-6-8-9-10-1-11

 path 4: 1-2-3-6-7-9-10-1-11
Cyclomatic complexity
Complexity is computed in one of three ways:

 The number of regions of the flow graph corresponds to the cyclomatic complexity.

 Cyclomatic complexity, V(G), for a flow graph, G, is defined as V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.

 Cyclomatic complexity, V(G), for a flow graph, G, is also defined as V(G) = P + 1, where P is the number of predicate nodes contained in the flow graph G.
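As a sketch, the helper below evaluates both formulas for a small, hypothetical flow graph (a while loop containing an if/else); it is not the flow graph of Figure B:

# Sketch: computing cyclomatic complexity V(G) from a flow graph's edge list.
# The example graph is hypothetical, not the Figure B flow graph.

def cyclomatic_complexity(edges, num_nodes, num_predicate_nodes):
    """Return V(G); both formulas must agree for a valid flow graph."""
    v_from_edges = len(edges) - num_nodes + 2       # V(G) = E - N + 2
    v_from_predicates = num_predicate_nodes + 1     # V(G) = P + 1
    assert v_from_edges == v_from_predicates
    return v_from_edges


# Hypothetical flow graph: a while loop (test at node 2) containing an if/else (node 3).
edges = [
    (1, 2), (2, 3), (2, 7),           # node 2 is a predicate (loop test)
    (3, 4), (3, 5),                   # node 3 is a predicate (if/else)
    (4, 6), (5, 6), (6, 2),
]
print(cyclomatic_complexity(edges, num_nodes=7, num_predicate_nodes=2))  # -> 3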
Black-box testing
Black-box testing, also called behavioral testing, focuses on the functional requirements of the software.

Black-box testing attempts to find errors in the following categories:

(1) incorrect or missing functions

(2) interface errors

(3) errors in data structures or external database access

(4) behavior or performance errors

(5) initialization and termination errors.

Techniques:
 Equivalence Class Partitioning

 Boundary Value Analysis

 Cause Effect Graphing


Equivalence class partitioning
Divides the input domain into classes of data from which test cases can be derived; these classes are generally termed ‘Valid’ and ‘Invalid’.

Example: If you are testing for an input box accepting numbers from 1 to 1000

The valid partition: Pick a single value from range 1 to 1000 as a valid test case.

The invalid partition 1: Input data class with all values below the lower limit, i.e. any value below 1, as an invalid input data test case.

The invalid partition 2: Input data with any value greater than 1000, representing the second invalid input class.
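A small sketch of turning the three partitions above into one representative test case each, assuming a hypothetical accepts_number validator for the 1-to-1000 rule:

# Sketch: one representative test case per equivalence class for an input
# box that accepts numbers from 1 to 1000 (function name is hypothetical).

def accepts_number(value):
    """Hypothetical validator for the input box under test."""
    return 1 <= value <= 1000


# One representative value per class is enough under equivalence partitioning.
test_cases = [
    (500,  True),    # valid partition: any value in 1..1000
    (0,    False),   # invalid partition 1: below the lower limit
    (1500, False),   # invalid partition 2: above the upper limit
]

for value, expected in test_cases:
    assert accepts_number(value) == expected, f"failed for input {value}"
print("all equivalence-class test cases passed")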
Equivalence class partitioning
Equivalence class partitioning results in a partitioning of the input domain of the software under test.

advantages:

 It eliminates the need for exhaustive testing, which is not feasible.

 It guides a tester in selecting a subset of test inputs with a high probability of detecting a
defect.

 It allows a tester to cover a larger domain of inputs/outputs with a smaller subset selected
from an equivalence class.
Boundary Value Analysis
 finds errors at the boundaries of the input domain (i.e. tests the behavior of a program at the input boundaries) rather than in the centre of the input.

 the basic idea in boundary value testing is to select input variable values at: just below the minimum, the minimum, just above the minimum, a nominal value, just below the maximum, the maximum, and just above the maximum.

Example: Accepts age from 18 to 56
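A sketch of the test values this rule yields for the 18-to-56 example, assuming a hypothetical accepts_age validator:

# Sketch: boundary value analysis for an age field accepting 18 to 56
# (validator name is hypothetical).

def accepts_age(age):
    """Hypothetical validator for the age field under test."""
    return 18 <= age <= 56


# Values at and around each boundary, plus a nominal value.
boundary_cases = [
    (17, False),  # just below the minimum
    (18, True),   # minimum
    (19, True),   # just above the minimum
    (35, True),   # nominal value
    (55, True),   # just below the maximum
    (56, True),   # maximum
    (57, False),  # just above the maximum
]

for age, expected in boundary_cases:
    assert accepts_age(age) == expected, f"failed for age {age}"
print("all boundary-value test cases passed")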


Cause Effect Graphing / Decision Table Testing
A Decision Table is a tabular representation of inputs versus rules/cases/test conditions.

It is a very effective tool used for both complex software testing and requirements
management.

Decision table helps to check all possible combinations of conditions for testing and testers
can also identify missed conditions easily.

The conditions are indicated as True(T) and False(F) values.

The cause-and-effect diagram is also known as the Ishikawa diagram, after its inventor Kaoru Ishikawa, or the fishbone diagram because of the way it looks.
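A minimal sketch of decision-table-driven testing for a hypothetical login form with two conditions (valid username, valid password) and one expected action per rule:

# Sketch: decision table testing for a hypothetical login form.
# Conditions: valid_username, valid_password; action: resulting message.

def login(valid_username, valid_password):
    """Hypothetical behaviour under test."""
    if valid_username and valid_password:
        return "home page"
    if valid_username:
        return "wrong password"
    return "unknown user"


# Decision table: one rule per combination of condition values (T/F).
decision_table = [
    # (valid_username, valid_password, expected action)
    (True,  True,  "home page"),       # rule 1: T T
    (True,  False, "wrong password"),  # rule 2: T F
    (False, True,  "unknown user"),    # rule 3: F T
    (False, False, "unknown user"),    # rule 4: F F
]

for username_ok, password_ok, expected in decision_table:
    assert login(username_ok, password_ok) == expected
print("all decision-table rules verified")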
Cause Effect Graphing / Decision Table Testing
