UNIT I
1. Software myths
Developer Myths. Developers often want to be artists (or artisans), but the software
development craft is becoming an engineering discipline. However, myths remain:
Once the software is delivered, there are no further costs. This myth is true only for
shelfware --- software that is never used --- and there are no customers for the next release
of a shelfware product.
Project success depends solely on the quality of the delivered program. In reality,
documentation and software configuration information are also very important to quality.
After functionality, maintainability (see the preceding myth) is of critical importance:
developers must maintain the software, and they need good design documents, test data,
etc. to do their job.
You can't assess software quality until the program is running. In fact, there are static
ways to evaluate quality without running a program. Software reviews can effectively
determine the quality of requirements documents, design documents, test plans, and code.
Formal (mathematical) analyses are often used to verify safety-critical software, software
security factors, and very high reliability software.
2. CMMI
The process must be continually improved and, most importantly, effective. Poor but
mature processes are just as bad as no maturity at all!
The CMM helps to solve the maturity problem by defining a set of practices and providing a
general framework for improving them. The CMM focus is on identifying key process areas
and the exemplary practices that may comprise a disciplined software process.
Immature vs Mature Organization:
The following are characteristics of an immature organization:
Process improvised during project
Approved processes being ignored
Reactive, not proactive
Unrealistic budget and schedule
Quality sacrificed for schedule
No objective measure of quality
The following are characteristics of a mature organization:
Inter-group communication and coordination
Work accomplished according to plan
Practices consistent with processes
Processes updated as necessary
Well defined roles/responsibilities
Management formally commits
The CMM Integration project was formed to sort out the problem of using multiple CMMs.
The CMMI Product Team's mission was to combine three source models into a single
improvement framework to be used by organizations pursuing enterprise-wide process
improvement. These three source models are:
Capability Maturity Model for Software (SW-CMM) - v2.0 Draft C
Electronic Industries Alliance Interim Standard (EIA/IS) - 731 Systems Engineering
Integrated Product Development Capability Maturity Model (IPD-CMM) v0.98
CMM Integration:
- builds an initial set of integrated models.
- improves best practices from source models based on lessons learned.
- establishes a framework to enable integration of future models.
CMMI provides a staged representation with five maturity levels. However, maturity level
ratings are only awarded for levels 2 through 5.
The process areas at maturity level 2 are:
CM Configuration Management
MA Measurement and Analysis
PMC Project Monitoring and Control
PP Project Planning
PPQA Process and Product Quality Assurance
REQM Requirements Management
SAM Supplier Agreement Management
3. Unified Process
The Unified Process is not simply a process, but rather an extensible framework which
should be customized for specific organizations or projects. The Rational Unified Process is,
similarly, a customizable framework. As a result, it is often impossible to say whether a
refinement of the process was derived from UP or from RUP, and so the names tend to be
used interchangeably.
The name Unified Process as opposed to Rational Unified Process is generally used to
describe the generic process, including those elements which are common to most
refinements. The Unified Process name is also used to avoid potential issues of trademark
infringement since Rational Unified Process and RUP are trademarks of IBM. The first book
to describe the process was titled The Unified Software Development Process (ISBN 0-201-
57169-2) and published in 1999 by Ivar Jacobson, Grady Booch and James Rumbaugh.
Since then various authors unaffiliated with Rational Software have published books and
articles using the name Unified Process, whereas authors affiliated with Rational
Software have favored the name Rational Unified Process.
In 2012 the Disciplined Agile Delivery framework was released, a hybrid framework that
adopts and extends strategies from Unified Process, Scrum, XP, and other methods.
Inception phase
Inception is the smallest phase in the project, and ideally it should be quite short. If the
Inception Phase is long then it may be an indication of excessive up-front specification,
which is contrary to the spirit of the Unified Process.
The following are typical goals for the Inception phase:
Establish a justification or business case for the project and define the project scope
Prepare a preliminary project schedule and cost estimate
Assess the feasibility of the project
Decide whether to buy or develop the system
The Lifecycle Objective Milestone marks the end of the Inception phase.
Develop an approximate vision of the system, make the business case, define the scope, and
produce a rough estimate for cost and schedule.
Elaboration phase
During the Elaboration phase the project team is expected to capture a healthy majority of
the system requirements. However, the primary goals of Elaboration are to address known
risk factors and to establish and validate the system architecture. Common processes
undertaken in this phase include the creation of use case diagrams, conceptual diagrams
(class diagrams with only basic notation) and package diagrams (architectural diagrams).
The final Elaboration phase deliverable is a plan (including cost and schedule estimates) for
the Construction phase. At this point the plan should be accurate and credible, since it
should be based on the Elaboration phase experience and since significant risk factors
should have been addressed during the Elaboration phase.
Construction phase
Construction is the largest phase in the project. In this phase the remainder of the system is
built on the foundation laid in Elaboration. System features are implemented in a series of
short, timeboxed iterations. Each iteration results in an executable release of the software. It
is customary to write full text use cases during the construction phase and each one becomes
the start of a new iteration. Common Unified Modeling Language (UML) diagrams used
during this phase include activity diagrams, sequence diagrams, collaboration
diagrams, state transition diagrams and interaction overview diagrams. Iterative
implementation of the lower-risk and easier elements is done. The final Construction
phase deliverable is software ready to be deployed in the Transition phase.
Transition phase
The final project phase is Transition. In this phase the system is deployed to the target users.
Feedback received from an initial release (or initial releases) may result in further
refinements to be incorporated over the course of several Transition phase iterations. The
Transition phase also includes system conversions and user training.
The RAD model focuses on gathering customer requirements through workshops or focus
groups, early testing of the prototypes by the customer using an iterative concept, reuse of
the existing prototypes (components), continuous integration and rapid delivery.
What is RAD?
Rapid application development is a software development methodology that uses minimal
planning in favor of rapid prototyping. A prototype is a working model that is functionally
equivalent to a component of the product.
In the RAD model, the functional modules are developed in parallel as prototypes and are
integrated to make the complete product for faster product delivery. Since there is no
detailed preplanning, it makes it easier to incorporate the changes within the development
process.
RAD projects follow the iterative and incremental model and have small teams comprising
developers, domain experts, customer representatives and other IT resources working
progressively on their component or prototype.
The most important aspect for this model to be successful is to make sure that the
prototypes developed are reusable.
Business Modeling
The business model for the product under development is designed in terms of flow of
information and the distribution of information between various business channels. A
complete business analysis is performed to find the information vital to the business, how it
can be obtained, how and when the information is processed, and what factors drive the
successful flow of information.
Data Modeling
The information gathered in the Business Modeling phase is reviewed and analyzed to form
sets of data objects vital for the business. The attributes of all data sets are identified and
defined. The relations between these data objects are established and defined in detail in
relevance to the business model.
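As a rough illustrative sketch (the Customer and Order names below are hypothetical, not taken from this text), the data objects, their attributes, and a relation between them identified in this phase might later be expressed as simple record types:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Customer:            # data object
        customer_id: int       # attribute
        name: str              # attribute

    @dataclass
    class Order:               # data object related to Customer
        order_id: int
        customer: Customer     # relation: each Order belongs to one Customer
        items: List[str] = field(default_factory=list)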
Process Modeling
The data object sets defined in the Data Modeling phase are converted to establish the
business information flow needed to achieve specific business objectives as per the
business model. The process model for any changes or enhancements to the data object sets
is defined in this phase. Process descriptions for adding, deleting, retrieving or modifying a
data object are given.
Application Generation
The actual system is built and coding is done by using automation tools to convert process
and data models into actual prototypes.
Identification
This phase starts with gathering the business requirements in the baseline spiral. In the
subsequent spirals as the product matures, identification of system requirements, subsystem
requirements and unit requirements are all done in this phase.
Design
The Design phase starts with the conceptual design in the baseline spiral and involves
architectural design, logical design of modules, physical product design and the final design
in the subsequent spirals.
Construct or Build
The Construct phase refers to production of the actual software product at every spiral. In
the baseline spiral, when the product is just thought of and the design is being developed a
POC (Proof of Concept) is developed in this phase to get customer feedback.
Then in the subsequent spirals with higher clarity on requirements and design details a
working model of the software called build is produced with a version number. These
builds are sent to the customer for feedback.
The following illustration is a representation of the Spiral Model, listing the activities in
each phase.
UNIT-II
1. System Requirements
Requirement Engineering
The process to gather the software requirements from client, analyze and document them is
known as requirement engineering.
The goal of requirement engineering is to develop and maintain a sophisticated and
descriptive System Requirements Specification document. The requirement engineering
process consists of four steps:
Feasibility Study
Requirement Gathering
Software Requirement Specification
Software Requirement Validation
Let us see the process briefly -
Feasibility study
When the client approaches the organization to get the desired product developed, the client
comes with a rough idea about what functions the software must perform and which
features are expected from the software.
Referring to this information, the analysts do a detailed study of whether the
desired system and its functionality are feasible to develop.
This feasibility study is focused towards the goals of the organization. This study analyzes
whether the software product can be practically materialized in terms of implementation,
contribution of project to organization, cost constraints and as per values and objectives of
the organization. It explores technical aspects of the project and product such as usability,
maintainability, productivity and integration ability.
The output of this phase should be a feasibility study report that should contain adequate
comments and recommendations for management about whether or not the project should
be undertaken.
Requirement Gathering
If the feasibility report is positive towards undertaking the project, next phase starts with
gathering requirements from the user. Analysts and engineers communicate with the client
and end-users to know their ideas on what the software should provide and which features
they want the software to include.
Software Requirement Specification
SRS is a document created by the system analyst after the requirements are collected from
various stakeholders.
SRS defines how the intended software will interact with hardware, external interfaces,
speed of operation, response time of system, portability of software across various
platforms, maintainability, speed of recovery after crashing, Security, Quality, Limitations
etc.
The requirements received from the client are written in natural language. It is the
responsibility of the system analyst to document the requirements in technical language so
that they can be comprehended and used by the software development team.
The SRS should have the following features:
Requirements gathering - The developers discuss with the client and end users and
know their expectations from the software.
Organizing Requirements - The developers prioritize and arrange the requirements
in order of importance, urgency and convenience.
Negotiation & discussion - If requirements are ambiguous or there are conflicts
among the requirements of various stakeholders, they are negotiated and discussed
with the stakeholders. Requirements may then be prioritized and reasonably
compromised.
The requirements come from various stakeholders. To remove the ambiguity and
conflicts, they are discussed for clarity and correctness. Unrealistic requirements are
compromised reasonably.
The functional requirements for a system describe what the system should do (they define a
function of a system or its component). Functional requirements may be calculations,
technical details, data manipulation and processing and other specific functionality that
define what a system is supposed to accomplish. These requirements depend on the type of
software being developed, the expected users of the software and the general approach taken
by the organization when writing requirements. When expressed as user requirements, the
requirements are described in a fairly abstract way. However, functional system requirements
describe the system function in detail, its inputs, expectations, behavior and
outputs. Functional requirements for the software system may be expressed in a number of
ways. For example, functional requirements for a library system, used by students to order
books and documents from other libraries, could include the following:
The user shall be able to search either all of the initial set of databases or select a
subset from it.
The system shall provide appropriate viewers for the user to read documents in the
document store.
Every order shall be allocated a unique identifier (ORDER_ID) which the user shall
be able to copy to the account's permanent storage area.
Non-functional requirements
Non-functional requirements are not directly concerned with the specific functions delivered by the
system. It is a requirement that specifies criteria that can be used to judge the operation of a system,
rather than specific behaviors. It defines system properties and constraints like, reliability, response
time and storage requirements. Constraints are I/O device capability, system representations, etc.
Unit-III
1. Design Engineering:
Software design is a process to transform user requirements into some suitable form, which
helps the programmer in software coding and implementation.
Software design is the first step in the SDLC (Software Development Life Cycle) that moves
the concentration from the problem domain to the solution domain. It tries to specify how to
fulfill the requirements mentioned in the SRS.
Architectural Design - The architectural design is the highest abstract version of the
system. It identifies the software as a system with many components interacting with
each other. At this level, the designers get the idea of the proposed solution domain.
Detailed Design- Detailed design deals with the implementation part of what is seen
as a system and its sub-systems in the previous two designs. It is more detailed
towards modules and their implementations. It defines the logical structure of each
module and its interfaces to communicate with other modules.
Modularization
Modularization is a technique to divide a software system into multiple discrete and
independent modules, which are expected to be capable of carrying out task(s)
independently. These modules may work as basic constructs for the entire software.
Designers tend to design modules such that they can be executed and/or compiled
separately and independently.
Advantage of modularization:
It enables programmers and designers to recognize modules that can be executed in
parallel.
Example
The spell-check feature in a word processor is a module of the software, which runs
alongside the word processor itself.
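A minimal sketch of this idea in Python (the module and function names are assumptions for illustration only): the spell-check behaviour lives in its own module that can be developed and tested separately, and the word processor only calls its public function.

    # spellcheck.py -- a separately developed and tested module
    KNOWN_WORDS = {"software", "module", "design"}

    def misspelled(words):
        """Return the words that are not in the known-word list."""
        return [w for w in words if w.lower() not in KNOWN_WORDS]

    # wordprocessor.py -- uses the module only through its public interface
    # import spellcheck
    # print(spellcheck.misspelled(["Software", "desing"]))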
Cohesion
Cohesion is a measure that defines the degree of intra-dependability within elements of a
module. The greater the cohesion, the better is the program design.
Logical cohesion - When logically categorized elements are put together into a
module, it is called logical cohesion.
Temporal Cohesion - When elements of a module are organized such that they are
processed at a similar point in time, it is called temporal cohesion.
Procedural cohesion - When elements of a module are grouped together and executed
sequentially in order to perform a task, it is called procedural cohesion.
Sequential cohesion - When elements of a module are grouped because the output of
one element serves as input to another and so on, it is called sequential cohesion.
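The difference between weak and strong cohesion can be sketched in Python (hypothetical functions, not from this text): the first module groups unrelated utilities (logical cohesion at best), while the second groups steps whose output feeds the next step (sequential cohesion).

    # Weak (logical) cohesion: unrelated operations bundled only because
    # they are all "utilities".
    def utility(kind, value):
        if kind == "format_date":
            return value.strftime("%Y-%m-%d")
        if kind == "square":
            return value * value

    # Sequential cohesion: the output of each step is the input of the next.
    def clean(text):
        return text.strip().lower()

    def tokenize(text):
        return text.split()

    def count_words(text):
        return len(tokenize(clean(text)))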
2. Coupling
Coupling is a measure that defines the level of inter-dependability among modules of a
program. It tells at what level the modules interfere and interact with each other. The lower
the coupling, the better the program.
Content coupling - When a module can directly access or modify or refer to the
content of another module, it is called content level coupling.
Common coupling- When multiple modules have read and write access to some
global data, it is called common or global coupling.
Control coupling- Two modules are called control-coupled if one of them decides
the function of the other module or changes its flow of execution.
Stamp coupling- When multiple modules share common data structure and work on
different part of it, it is called stamp coupling.
Data coupling- Data coupling is when two modules interact with each other by
means of passing data (as parameters). If a module passes a data structure as a
parameter, then the receiving module should use all its components.
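A small Python sketch (hypothetical names, not from this text) contrasts common (global) coupling with the preferred data coupling, where modules communicate only through parameters:

    # Common/global coupling: the function depends on shared global data.
    tax_rate = 0.18

    def price_with_tax_global(amount):
        return amount * (1 + tax_rate)     # hidden dependency on a global

    # Data coupling: everything the function needs is passed as a parameter.
    def price_with_tax(amount, rate):
        return amount * (1 + rate)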
The component-level design provides a way to determine whether the defined algorithms,
data structures, and interfaces will work properly. Note that a component (also known
as module) can be defined as a modular building block for the software. However, the
meaning of component differs according to how software engineers use it. The modular
design of the software should exhibit the following sets of properties.
Modularity has become an accepted approach in every engineering discipline. With the
introduction of modular design, the complexity of software design has been considerably
reduced; change in the program is facilitated, which has encouraged parallel development of systems. To
achieve effective modularity, design concepts like functional independence are considered to
be very important.
3. Functional Independence
Functional independence is the refined form of the design concepts of modularity,
abstraction, and information hiding. Functional independence is achieved by developing a
module in such a way that it uniquely performs given sets of functions without interacting
with other parts of the system. The software that uses the property of functional
independence is easier to develop because its functions can be categorized in a systematic
manner. Moreover, independent modules require less maintenance and testing activity, as
secondary effects caused by design modification are limited with less propagation of errors.
In short, it can be said that functional independence is the key to a good software design and
a good design results in high-quality software. There exist two qualitative criteria for
measuring functional independence, namely, coupling and cohesion.
UNIT-IV
Verification vs. Validation:
Verification does not involve executing the code; validation always involves executing
the code.
Verification uses methods like inspections, reviews, walkthroughs, and desk-checking;
validation uses methods like black box (functional) testing, gray box testing, and white
box (structural) testing.
Verification can catch errors that validation cannot catch (it is a low-level exercise);
validation can catch errors that verification cannot catch (it is a high-level exercise).
2.UNIT TESTING:
Unit testing is a testing technique in which individual modules are tested by the developer
to determine if there are any issues. It is concerned with the functional correctness of the
standalone modules.
The main aim is to isolate each unit of the system to identify, analyze and fix the defects.
Unit tests, when integrated with the build, indicate the quality of the build as well.
White Box Testing - used to test the behaviour of each of the functions in the module.
Gray Box Testing - used to execute tests, risks and assessment methods.
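Building on the idea of isolating each unit stated above, here is a minimal sketch using Python's standard unittest module; the discount() function and the lambda stub that stands in for a collaborating module are assumptions for illustration only.

    import unittest

    def discount(total, customer_lookup):
        """Return the payable amount; premium customers get 10% off."""
        rate = 0.10 if customer_lookup("is_premium") else 0.0
        return round(total * (1 - rate), 2)

    class DiscountTest(unittest.TestCase):
        def test_premium_customer(self):
            # A stub replaces the real customer module, isolating the unit.
            self.assertEqual(discount(100.0, lambda key: True), 90.0)

        def test_regular_customer(self):
            self.assertEqual(discount(100.0, lambda key: False), 100.0)

    if __name__ == "__main__":
        unittest.main()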
Black Box Testing, also known as Behavioral Testing, is a software testing method in
which the internal structure/design/implementation of the item being tested is not known to
the tester.
Definition by ISTQB
Example
A tester, without knowledge of the internal structures of a website, tests the web pages by
using a browser; providing inputs (clicks, keystrokes) and verifying the outputs against the
expected outcome.
Levels Applicable To
Black Box Testing method is applicable to the following levels of software testing:
Integration Testing
System Testing
Acceptance Testing
The higher the level, and hence the bigger and more complex the box, the more the
black-box testing method comes into use.
Techniques
Following are some techniques that can be used for designing black box tests.
UNIT-V
The goal of the risk mitigation, monitoring and management plan is to identify as
many potential risks as possible. To help determine what the potential risks are, GameForge
will be evaluated against the risk checklists contained within this Web site. These checklists
help to identify potential risks in a generic
sense. The project will then be analyzed to determine any project-specific risks. When all
risks have been identified, they will then be evaluated to determine their probability of
occurrence, and how GameForge will be affected if they do occur. Plans will then be made
to avoid each risk, to track each risk to determine if it is more or less likely to occur, and to
manage each risk should it become a reality. The quicker the risks can be identified and
avoided, the smaller the chances of having to deal with their consequences; and the better the
RMMM plan, the better the product, and the smoother the development process.
Risk management organizational role
Each member of the organization will undertake risk
management. The development team will consistently be monitoring their progress and
project status so as to identify present and future risks as quickly and accurately as possible.
With this said, the members who are not directly involved with the implementation of the
product will also need to keep their eyes open for any possible risks that the development
team did not spot. The responsibility of risk management falls on each member of the
organization, while William Lord maintains this document.
Six Sigma is a highly disciplined process that helps us focus on developing and delivering
near-perfect products and services.
UNIT-I
Elaboration phase
Construction phase
Transition phase
3. What is the other name for waterfall model and who invented it?
Ans: It is also known as the linear sequential model. The waterfall model was originally
described by Winston W. Royce in 1970.
Ans: A process framework establishes the foundation for a complete software process by
identifying a small number of framework activities that are applicable to all software
projects, regardless of their size or complexity. The framework activities are communication,
planning, modelling, construction, and deployment.
Ans: Application software consists of standalone programs that solve a specific business
need. Application software is a program or group of programs designed for end users.
Computer programs are divided into two classes: system software and application software.
While system software consists of low-level programs that interact with computers at a basic
level, application software resides above system software and includes applications such as
database programs.
* Most software is custom built rather than being assembled from components
* Application software
* Engineering/Scientific software
* Embedded software
Ans: Many causes of a software affliction can be traced to a mythology that arose during
the early history of software development. Unlike ancient myths that often provide human
lessons well worth heeding, software myths propagated misinformation and confusion.
Software myths had a number of attributes that made them insidious; for instance, they
appeared to be reasonable statements of fact (sometimes containing elements of truth), they
had an intuitive feel, and they were often promulgated by experienced practitioners who
"knew the score." Today, most knowledgeable professionals recognize myths for what they
are misleading attitudes that have caused serious problems for managers and technical
people alike. However, old attitudes and habits are difficult to modify, and remnants of
software myths are still believed.
Management myths
Managers with software responsibility, like managers in most disciplines, are often under
pressure to maintain budgets, keep schedules from slipping, and improve quality. Like a
drowning person who grasps at a straw, a software manager often grasps at belief in a
software myth, if that belief will lessen the pressure (even temporarily).
Customer myths
A customer who requests computer software may be a person at the next desk, a technical
group down the hall, the marketing/sales department, or an outside company that has
requested software under contract. In many cases, the customer believes myths about
software because software managers and practitioners do little to correct misinformation.
Myths lead to false expectations (by the customer) and ultimately, dissatisfaction with the
developer
Practitioner's myths
Myths that are still believed by software practitioners have been fostered by 50 years of
programming culture. During the early days of software, programming was viewed as an art
form. Old ways and attitudes die hard.
Ans: CMMI:
The Software Engineering Institute (SEI) has developed a comprehensive model predicated
on a set of software engineering capabilities that should be present as organizations reach
different levels of process maturity. To determine an organization's current state of process
maturity, the SEI uses an assessment that results in a five point grading scheme. The grading
scheme determines compliance with a capability maturity model (CMM) [PAU93] that
defines key activities required at different levels of process maturity. The SEI approach
provides a measure of the global effectiveness of a company's software engineering
practices and establishes five process maturity levels that are defined in the following
manner:
Level 1: Initial. The software process is characterized as ad hoc and occasionally even
chaotic. Few processes are defined, and success depends on individual effort.
Level 2: Repeatable. Basic project management processes are established to track cost,
schedule, and functionality. The necessary process discipline is in place to repeat earlier
successes on projects with similar applications
Level 3: Defined. The software process for both management and engineering activities is
documented, standardized, and integrated into an organization wide software process. All
projects use a documented and approved version of the organization's process for developing
and supporting software. This level includes all characteristics defined for level 2.
Level 4: Managed. Detailed measures of the software process and product quality are
collected. Both the software process and products are quantitatively understood and
controlled using detailed measures. This level includes all characteristics defined for level 3.
Level 5: Optimizing. Continuous process improvement is enabled by quantitative feedback
from the process and from testing innovative ideas and technologies. This level includes all
characteristics defined for level 4.
The five levels defined by the SEI were derived as a consequence of evaluating responses to
the SEI assessment questionnaire that is based on the CMM. The results of the questionnaire
are distilled to a single numerical grade that provides an indication of an organization's
process maturity.
The SEI has associated key process areas (KPAs) with each of the maturity levels. The
KPAs describe those software engineering functions (e.g., software project planning,
requirements management) that must be present to satisfy good practice at a particular level.
Each KPA is described by identifying the following characteristics:
Ans: The incremental model combines elements of the linear sequential model (applied
repetitively) with the iterative philosophy of prototyping. The incremental model applies
linear sequences in a staggered fashion as calendar time progresses. Each linear sequence
produces a deliverable increment of the software. For example, word-processing software
developed using the incremental paradigm might deliver basic file management, editing, and
document production functions in the first increment; more sophisticated editing and
document production capabilities in the second increment; spelling and grammar checking
in the third increment; and advanced page layout capability in the fourth increment. It should
be noted that the process flow for any increment can incorporate the prototyping paradigm.
When an incremental model is used, the first increment is often a core product. That is, basic
requirements are addressed, but many supplementary features (some known, others
unknown) remain undelivered. The core product is used by the customer (or undergoes
detailed review). As a result of use and/or evaluation, a plan is developed for the next
increment. The plan addresses the modification of the core product to better meet the needs
of the customer and the delivery of additional features and functionality. This process is
repeated following the delivery of each increment, until the complete product is produced.
The incremental process model, like prototyping and other evolutionary approaches, is
iterative in nature. But unlike prototyping, the incremental model focuses on the delivery of
an operational product with each increment. Early increments are stripped down versions of
the final product, but they do provide capability that serves the user and also provide a
platform for evaluation by the user.
Ans: RAD (Rapid Application Development) is a high-speed adaptation of the linear
sequential model, used primarily for information systems applications; the RAD approach
encompasses the following phases.
Business modelling. The information flow among business functions is modelled.
Data modelling: information flow defined as part of the business modelling phase is
refined into a set of data objects that are needed to support the business. The characteristics
(called attributes) of each object are identified and the relationships between these objects
defined.
Process modeling. The data objects defined in the data modeling phase are transformed
to achieve the information flow necessary to implement a business function. Processing
descriptions are created for adding, modifying, deleting, or retrieving a data object.
Application generation. RAD assumes the use of fourth generation techniques. Rather
than creating software using conventional third generation programming languages, the RAD
process works to reuse existing program components (when possible) or create reusable
components (when necessary). In all cases, automated tools are used to facilitate
construction of the software.
Testing and turnover. Since the RAD process emphasizes reuse, many of the program
components have already been tested. This reduces overall testing time. However, new
components must be tested and all interfaces must be fully exercised.
If a business application can be modularized in a way that enables each major
function to be completed in less than three months (using the approach described
previously), it is a candidate for RAD. Each major function can be addressed by a separate
RAD team and then integrated to form a whole. Like all process models, the RAD approach
has drawbacks [BUT94]:
For large but scalable projects, RAD requires sufficient human resources to create the right
number of RAD teams.
RAD requires developers and customers who are committed to the rapid-fire activities
necessary to get a system complete in a much abbreviated time frame. If commitment is
lacking from either constituency, RAD projects will fail.
Not all types of applications are appropriate for RAD. If a system cannot be properly
modularized, building the components necessary for RAD will be problematic. If high
performance is an issue and performance is to be achieved through tuning the interfaces to
system components, the RAD approach may not work.
Sometimes called the classic life cycle or the waterfall model, the linear sequential model
suggests a systematic, sequential approach to software development that begins at the
system level and progresses through analysis, design, coding, testing, and support. Modeled
after a conventional engineering cycle, the linear sequential model encompasses the
following activities:
Software requirements analysis. The requirements gathering process is intensified and
focused specifically on software. To understand the nature of the program(s) to be built, the
software engineer ("analyst") must understand the information domain (described in Chapter
11) for the software, as well as required function, behavior, performance, and interface.
Requirements for both the system and the software are documented and reviewed with the
customer.
Design: Software design is actually a multistep process that focuses on four distinct
attributes of a program: data structure, software architecture, interface representations, and
procedural (algorithmic) detail. The design process translates requirements into a
representation of the software that can be assessed for quality before coding begins. Like
requirements, the design is documented and becomes part of the software configuration.
Code generation. The design must be translated into a machine-readable form. The code
generation step performs this task. If design is performed in a detailed manner, code
generation can be accomplished mechanistically.
Testing. Once code has been generated, program testing begins. The testing process focuses
on the logical internals of the software, ensuring that all statements have been tested, and on
the functional externals; that is, conducting tests to uncover errors and ensure that defined
input will produce actual results that agree with required results.
2) Software is
a) Set of computer programs, procedures and possibly associated documentation
concerned with the operation of a data processing system.
b) A set of compiler instructions
c) A mathematical formula
d) None of above
Answers
Q. No. 1 2 3 4 5 6 7 8 9 10
Answer a a d b a a d b a d
Answers
Q. No. Answers
1 preliminary investigation and analysis.
2 system analysis
3 end user understanding and approval
4 proposed system.
5 spiral model
6 Software development life cycle
7 2 phases
8 accommodating change
9 Rapid Application Development
10 Radial and Angular
Unit-II
Two mark questions with answers
Q1) Define Requirements?
Ans: Requirements are descriptions of the services that a software system must provide and
the constraints under which it must operate. Requirements can range from high-level abstract
statements of services or system constraints to detailed mathematical functional
specifications. Requirements Engineering is the process of establishing the services that the
customer requires from the system and the constraints under which it is to be developed and
operated. Requirements may serve a dual function: as the basis of a bid for a contract, and as
the basis for the contract itself.
Q2)What are functional and non-functional requirements?
Ans:Software validation checks that the software product satisfies or fits the intended use
(high-level checking), i.e., the software meets the user requirements, not as specification
artifacts or as needs of those who will operate the software only; but, as the needs of all the
stakeholders (such as users, operators, administrators, managers, investors, etc.).
There are two ways to perform software validation: internal and external. During internal
software validation it is assumed that the goals of the stakeholders were correctly understood
and that they were expressed in the requirement artifacts precisely and comprehensively. If the
software meets the requirement specification, it has been internally validated. External
validation happens when it is performed by asking the stakeholders if the software meets
their needs.
Different software development methodologies call for different levels of user and
stakeholder involvement and feedback; so, external validation can be a discrete or a
continuous event. Successful final external validation occurs when all the stakeholders
accept the software product and express that it satisfies their needs. Such final external
validation requires the use of an acceptance test which is a dynamic test.
However, it is also possible to perform internal static tests to find out if it meets the
requirements specification but that falls into the scope of static verification because the
software is not running.
Ans: Not only the software product as a whole can be validated. Requirements should be
validated before the software product as a whole is ready (the waterfall development process
requires them to be perfectly defined before design starts; but, iterative development
processes do not require this to be so and allow their continual improvement).
Requirements can be validated by checking them directly (static testing) or even by releasing
prototypes and having the users and stakeholders assess them (dynamic testing).
User input validation: User input (gathered by any peripheral such as a keyboard, bio-metric
sensor, etc.) is validated by checking if the input provided by the software operators or users
meets the domain rules and constraints (such as data type, range, and format).
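A small Python sketch of such checks (the field names and limits below are assumptions for illustration): the input is validated against data type, range, and format rules before it is used.

    import re

    def validate_age(raw):
        """Type and range check: age must be a whole number between 0 and 120."""
        if not raw.isdigit():
            raise ValueError("age must be a whole number")
        age = int(raw)
        if not 0 <= age <= 120:
            raise ValueError("age out of range")
        return age

    def validate_email(raw):
        """Format check against a simple pattern."""
        if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", raw):
            raise ValueError("invalid e-mail format")
        return raw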
Ans:
Software Validation: The process of evaluating software during or at the end of the
development process to determine whether it satisfies specified requirements.
Software Verification: The process of evaluating software to determine whether the
products of a given development phase satisfy the conditions imposed at the start of
that phase
Ans: These define system properties and constraints, e.g. reliability, response time and
storage requirements. Constraints are I/O device capability, system representations, etc.
Process requirements may also be specified mandating a particular CASE system,
programming language or development method.
Ans: Domain requirements are derived from the application domain and describe system
characteristics and features that reflect the domain. Domain requirements may be new
functional requirements, constraints on existing requirements, or they may define specific
computations. If domain requirements are not satisfied, the system may be unworkable.
Ans: System Models: System models are graphical representations that describe business
processes, the problem to be solved and the system that is to be developed.
One can use models in the analysis process to develop an understanding of the existing
system that is to be replaced or enhanced or to specify the new system that is required.
For example,
1) An external perspective, where the context or environment of the system is modelled.
2) A behavioural perspective, where the behaviour of the system is modelled.
Types of Model
Different types of system models are based on different approaches to abstraction. A data
flow model, for example, concentrates on the flow of data and the functional transformations
on that data. It leaves out details of the data structures.
Examples of Types of System Models
1) Data Flow Model: Data flow models show how the data is processed at different
stages of the system.
2) Composition Model: A composition or aggregation model shows how entities in the
system are composed of other entities.
3) Architectural Model: Architectural models show the principal sub-systems that make
up a system.
4) Classification Model: Object class/inheritance diagrams show how entities have
common characteristics.
5) Stimulus-Response Model: A stimulus-response model, or state transition diagram,
shows how the system reacts to internal and external events.
Ans: Data Models:
Most large software systems make use of a large database of information. In some cases, this
database is independent of the software system. An important part of system modelling is
defining the logical form of the data processed by the system. These models are sometimes
called semantic data models.
Ans: Non-functional requirements may be very difficult to state precisely and imprecise
requirements may be difficult to verify.
Goal
A general intention of the user such as ease of use.
The system should be easy to use by experienced controllers and should be organised in such
a way that user errors are minimised.
Verifiable non-functional requirement
Experienced controllers shall be able to use all the system functions after a total of two
hours training. After this training, the average number of errors made by experienced users
shall not exceed two per day.
Goals are helpful to developers as they convey the intentions of the system users.
Requirements interaction:
Conflicts between different non-functional requirements are common in complex systems.
Spacecraft system
To minimise weight, the number of separate chips in the system should be minimised.
To minimise power consumption, lower power chips should be used.
However, using low power chips may mean that more chips have to be used.
Which is the most critical requirement?
A common problem with non-functional requirements is that they can be difficult to verify.
Users or customers often state these requirements as general goals such as ease of use, the
ability of the system to recover from failure or rapid user response. These vague goals cause
problems for system developers as they leave scope for interpretation and subsequent dispute
once the system is delivered.
Ans: Domain requirements are derived from the application domain and describe system
characteristics and features that reflect the domain.
Domain requirements may be new functional requirements, constraints on existing
requirements, or they may define specific computations.
If domain requirements are not satisfied, the system may be unworkable.
Library system domain requirements:
There shall be a standard user interface to all databases which shall be
based on the Z39.50 standard.
Because of copyright restrictions, some documents must be deleted immediately on arrival.
Depending on the user's requirements, these documents will either be printed locally on the
system server for manually forwarding to the user or routed to a network printer.
Understandability
Requirements are expressed in the language of the application domain;
This is often not understood by software engineers developing the system.
Implicitness
Domain specialists understand the area so well that they do not think of making the domain
requirements explicit.
Ans:The goal of requirements engineering process is to create and maintain a system requirements
document. The overall process includes four high-level requirement engineering sub-processes.
These are concerned with
Assessing whether the system is useful to the business(feasibility study)
Discovering requirements(elicitation and analysis)
Converting these requirements into some standard form(specification)
Checking that the requirements actually define the system that the customer
wants (validation)
The process of managing the changes in the requirements is called requirement
management.
The alternative perspective on the requirements engineering process presents the process as
a three-stage activity where the activities are organized as an iterative process around a
spiral. The amount of time and effort devoted to each activity in an iteration depends on the
stage of the overall process and the type of system being developed. Early in the process,
most effort will be spent on understanding high-level business and non-functional
requirements and the user requirements. Later in the process, in the outer rings of the spiral,
more effort will be devoted to system requirements engineering and system modeling.
This spiral model accommodates approaches to development in which the requirements are
developed to different levels of detail. The number of iterations around the spiral can vary, so
the spiral can be exited after some or all of the user requirements have been elicited.
https://jntuhbtechadda.blogspot.com/
1. Unambiguous
2. Distinctly Specific
3. Functional
4. All of Above
Answers
Q. No. 1 2 3 4 5 6 7 8 9 10
Answer a c d b a a d a a d
ANSWERS
Q. No. Answers
1 Client and the supplier
2 Validation
3 Cost
4 Problem Analysis
5 Unambiguous
6 Function Requirement
7 structuring information.
8 Constructive Cost Model
9 Semidetached
10 Time sheets
Unit-III
Ans: Design is what virtually every engineer wants to do. It is the place where creativity
rules: customer requirements, business needs, and technical considerations all come together
in the formulation of a product or a system. Design creates a representation or model of the
software, but unlike the analysis model, the design model provides detail about software data
structures, architecture, interfaces, and components that are necessary to implement the
system.
Q3)Quality attributes:
Ans The FURPS quality attributes represent a target for all software design:
Functionality is assessed by evaluating the feature set and capabilities of the program, the
generality of the functions that are delivered, and the security of the overall system.
Usability is assessed by considering human factors, overall aesthetics, consistency and
documentation. Reliability is evaluated by measuring the frequency and severity of failure,
the accuracy of output results, the mean-time-to-failure (MTTF), the ability to
recover from failure, and the predictability of the program. Performance is measured by
processing speed, response time, resource consumption, throughput, and efficiency.
Supportability combines the ability to extend the program (extensibility), adaptability, and
serviceability; these three attributes represent a more common term, maintainability.
Ans: The challenge for a business has been to extract useful information from this data
environment, particularly when the information desired is cross-functional. To solve this
challenge, the business IT community has developed data mining techniques, also called
knowledge discovery in databases (KDD), that navigate through existing databases in an
attempt to extract appropriate business-level information.
Ans The interface analysis activity focuses on the profile of the users who will interact with
the system. Skill level, business understanding, and general receptiveness to the new system
are recorded; and different user categories are defined. For each user category, requirements
are elicited. In essence, the software engineer attempts to understand the system perception
(Section 15.2.1) for each class of users. Once general requirements have been defined, a
more detailed task analysis is conducted. Those tasks that the user performs to accomplish
the goals of the system are identified, described, and elaborated.
Specify name
Specify userid
Specify PIN and password
Specify prescription number
Specify date refill is required
Modularity
Separately named and addressable components (i.e., modules) that are integrated to satisfy
requirements (divide and conquer principle). Modularity makes software intellectually
manageable so as to grasp the control paths, span of reference, number of variables, and
overall complexity.
Information hiding
The designing of modules so that the algorithms and local data contained within them are
inaccessible to other modules. This enforces access constraints to both procedural (i.e.,
implementation) detail and local data structures.
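A brief Python sketch of information hiding (the TaskQueue class is hypothetical): callers use only the public operations, while the internal list and algorithm stay private and can change without affecting other modules.

    class TaskQueue:
        def __init__(self):
            self._items = []          # internal data structure, hidden from clients

        def add(self, task):          # public interface
            self._items.append(task)

        def next_task(self):          # clients never touch _items directly
            return self._items.pop(0) if self._items else None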
Functional independence
Modules that have a "single-minded" function and an aversion to excessive interaction with
other modules
High cohesion: a module performs only a single task
Low coupling: a module has the lowest amount of connection needed with other modules
Stepwise refinement
Development of a program by successively refining levels of procedure detail
Complements abstraction, which enables a designer to specify procedure and data and yet
suppress low-level details
Refactoring
A reorganization technique that simplifies the design (or internal code structure) of a
component without changing its function or external behavior
Removes redundancy, unused design elements, inefficient or unnecessary algorithms, poorly
constructed or inappropriate data structures, or any other design failures
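A tiny before-and-after sketch in Python (hypothetical code, for illustration only): the refactoring removes duplicated logic without changing the function's external behaviour.

    # Before: duplicated formatting logic.
    def report_before(errors, warnings):
        return ("errors: " + str(len(errors)) + "\n"
                + "warnings: " + str(len(warnings)))

    # After: the redundancy is factored out; behaviour is unchanged.
    def _line(label, items):
        return f"{label}: {len(items)}"

    def report_after(errors, warnings):
        return "\n".join([_line("errors", errors), _line("warnings", warnings)])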
Design classes
Refines the analysis classes by providing design detail that will enable the classes to be
implemented
Creates a new set of design classes that implement a software infrastructure to support the
business solution
Types of Design Classes
User interface classes define all abstractions necessary for human-computer interaction
(usually via metaphors of real-world objects)
Business domain classes refined from analysis classes; identify attributes and services
(methods) that are required to implement some element of the business domain
Process classes implement business abstractions required to fully manage the business
domain classes
Persistent classes represent data stores (e.g., a database) that will persist beyond the
execution of the software
System classes implement software management and control functions that enable the
system to operate and communicate within its computing environment and the outside
world.
Semantic constraints that define how components can be integrated to form the system
A topological layout of the components indicating their runtime interrelationships
incurring overhead in set-up and take-down time. Use this style when it makes sense to view
your system as one that produces a well-defined, easily identified output. The output should
be a direct result of sequentially transforming a well-defined, easily identified input in a
time-independent fashion.
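A minimal Python sketch of this data-flow idea (the filters are hypothetical): each stage transforms its input and passes the result on, so the output is a sequential transformation of the input.

    def strip_blanks(lines):
        return [ln.strip() for ln in lines if ln.strip()]

    def to_upper(lines):
        return [ln.upper() for ln in lines]

    def pipeline(data, *filters):
        for f in filters:          # each filter consumes the previous filter's output
            data = f(data)
        return data

    result = pipeline(["  hello ", "", "world"], strip_blanks, to_upper)
    # result == ["HELLO", "WORLD"]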
Data-Centered Style
Has the goal of integrating the data
Refers to systems in which the access and update of a widely accessed data store occur
A client runs on an independent thread of control
The shared data may be a passive repository or an active blackboard
A blackboard notifies subscriber clients when changes occur in data of interest
At its heart is a centralized data store that communicates with a number of clients
Clients are relatively independent of each other so they can be added, removed, or changed
in functionality
The data store is independent of the clients
Use this style when a central issue is the storage, representation, management, and retrieval
of a large amount of related persistent data
Note that this style becomes client/server if the clients are modeled as independent processes
Virtual Machine Style
Has the goal of portability
Software systems in this style simulate some functionality that is not native to the hardware
and/or software on which it is implemented
Can simulate and test hardware platforms that have not yet been built
Can simulate "disaster modes" as in flight simulators or safety-critical systems that would be
too complex, costly, or dangerous to test with the real system
Examples include interpreters, rule-based systems, and command language processors
Interpreters
Add flexibility through the ability to interrupt and query the program and introduce
modifications at runtime
Incur a performance cost because of the additional computation involved in execution
Use this style when you have developed a program or some form of computation but have
no machine on which to run it directly.
The user shall be provided mnemonics (i.e., control or alt combinations) that tie easily to the
action in a way that is easy to remember such as the first letter
The visual layout of the interface should be based on a real world metaphor
Disclose information in a progressive fashion
Make the Interface Consistent
The interface should present and acquire information in a consistent fashion
Allow the user to put the current task into a meaningful context
Maintain consistency across a family of applications
If past interactive models have created user expectations, do not make changes unless there
is a compelling reason to do so.
The length and complexity of the written specification of the system and its interface provide an
indication of the amount of learning required by user of the system. The number of user tasks
specified and the average number of actions per task provide an indication of interaction time and
the overall efficiency of the system. The number of actions, tasks, and system states indicated by the
design model imply the memory load on users of the system. Interface styles, help facilities, and
error handling protocol provide a general indication of the complexity of the interface and the degree
to which it will be accepted by the user.
3) Which of the following comments about object oriented design of software, is not
true?
8) Once object oriented programming has been accomplished, unit testing is applied
for each class. Class tests include?
(a) Fault based testing
(b) Random testing
(c) Partition testing
(d) All of above
Answers
Q. No. 1 2 3 4 5 6 7 8 9 10
Answer d c c d a d d d c c
Answers
Q. No. Answers
1 Process Design Language
2 Parallel, Hardware and Software design
3 Data, architectural, interface, procedural design
4 Step wise refinement
5 Module
6 Architecture
7 Physical
8 Coupling
9 Fan-out
10 Abstraction
Unit-IV
Ans Use-cases describe user-visible functions and features that are basic requirements for a system. The number of use-cases is directly proportional to the size of the application in LOC and to the number of test cases that will have to be designed to fully exercise the application.
Ans Since functionality cannot be measured directly, it must be derived indirectly using other direct measures. Function-oriented metrics were first
proposed by Albrecht, who suggested a measure called the function point. Function points
are derived using an empirical relationship based on countable (direct) measures of
software's information domain and assessments of software complexity.
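A worked sketch of the usual function-point relationship; every count and complexity answer below is invented purely for illustration:

    # Hypothetical function-point calculation using the standard empirical relationship
    # FP = count_total * (0.65 + 0.01 * sum(Fi)); all counts below are made up.
    simple_weights = {"inputs": 3, "outputs": 4, "inquiries": 3, "files": 7, "interfaces": 5}
    counts = {"inputs": 12, "outputs": 8, "inquiries": 5, "files": 4, "interfaces": 2}

    count_total = sum(counts[k] * simple_weights[k] for k in counts)   # weighted domain count
    value_adjustment_factors = [3] * 14   # answers (0..5) to the 14 complexity questions

    fp = count_total * (0.65 + 0.01 * sum(value_adjustment_factors))
    print(count_total, round(fp, 1))   # 121 and 129.5 for these invented counts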
Ans Black-box testing treats the system as a black box whose behavior can be determined by studying its inputs and related outputs; it is not concerned with the internal structure of the program.
Ans It focuses on the functional requirements of the software, i.e., it enables the software engineer to derive sets of input conditions that fully exercise all the functional requirements for that program. It is concerned with functionality rather than implementation. Common black-box methods include:
1) Graph based testing methods
2) Equivalence partitioning (a short illustration follows)
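A small hedged example of equivalence partitioning; the function and its valid range 1..100 are invented. The point is one representative test case per input class rather than exhaustive inputs:

    # Equivalence partitioning sketch for a hypothetical accept_age(value) check
    # with one valid class (1..100) and two invalid classes (<1 and >100).
    def accept_age(value):
        return 1 <= value <= 100

    # One representative from each equivalence class exercises that whole class.
    assert accept_age(50)        # valid class
    assert not accept_age(0)     # invalid class: below the range
    assert not accept_age(150)   # invalid class: above the range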
Unit testing focuses verification effort on the smallest unit of software design: the software component or module. Using the component-level design description as a guide, important
control paths are tested to uncover errors within the boundary of the module. The relative
complexity of tests and uncovered errors is limited by the constrained scope established for
unit testing. The unit test is white-box oriented, and the step can be conducted in parallel for
multiple components.
The tests that occur as part of unit tests are illustrated schematically. The module interface
is tested to ensure that information properly flows into and out of the program unit under
test. The local data structure is examined to ensure that data stored temporarily maintains its
integrity during all steps in an algorithm's execution. Boundary conditions are tested to
ensure that the module operates properly at boundaries established to limit or restrict
processing. All independent paths (basis paths) through the control structure are exercised to
ensure that all statements in a module have been executed at least once. And finally, all error
handling paths are tested.
Tests of data flow across a module interface are required before any other test is initiated. If
data do not enter and exit properly, all other tests are moot. In addition, local data structures
should be exercised and the local impact on global data should be ascertained (if possible)
during unit testing. Selective testing of execution paths is an essential task during the unit
test. Test cases should be designed to uncover errors due to erroneous computations,
incorrect comparisons, or improper control flow. Basis path and loop testing are effective
techniques for uncovering a broad array of path errors. Among the more common errors in
computation are (1) misunderstood or incorrect arithmetic precedence, (2) mixed mode
operations, (3) incorrect initialization, (4) precision inaccuracy, (5) incorrect symbolic
representation of an expression. Comparison and control flow are closely coupled to one
another (i.e., change of flow frequently occurs after a comparison). Test cases should
uncover errors such as (1) comparison of different data types, (2) incorrect logical operators
or precedence, (3) expectation of equality when precision error makes equality unlikely, (4)
incorrect comparison of variables, (5) improper or nonexistent loop termination, (6) failure
to exit when divergent iteration is encountered, and (7) improperly modified loop variables.
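A hedged sketch of a unit test for a hypothetical module, exercising the module interface, boundary conditions, and the error-handling paths described above; the function under test and its limits are invented, and unittest is simply one common choice of framework:

    import unittest

    # Hypothetical component under test.
    def average(values, low, high):
        """Average of the values that fall within [low, high]; errors on bad input."""
        if not values:
            raise ValueError("no values supplied")
        selected = [v for v in values if low <= v <= high]
        if not selected:
            raise ValueError("no values in range")
        return sum(selected) / len(selected)

    class AverageUnitTest(unittest.TestCase):
        def test_interface_and_normal_path(self):   # data flows in and out correctly
            self.assertEqual(average([1, 2, 3], 0, 10), 2)

        def test_boundary_conditions(self):         # values exactly at the limits
            self.assertEqual(average([0, 10], 0, 10), 5)

        def test_error_handling_paths(self):        # every error path is exercised
            with self.assertRaises(ValueError):
                average([], 0, 10)
            with self.assertRaises(ValueError):
                average([50], 0, 10)

    if __name__ == "__main__":
        unittest.main()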
INTEGRATION TESTING:
A neophyte in the software world might ask a seemingly legitimate question once all
modules have been unit tested: "If they all work individually, why do you doubt that they'll
work when we put them together?" The problem, of course, is "putting them together": interfacing. Data can be lost across an interface; one module can have an inadvertent, adverse effect on another; subfunctions, when combined, may not produce the desired
major function; individually acceptable imprecision may be magnified to unacceptable
levels; global data structures can present problems. Sadly, the list goes on and on.
Integration testing is a systematic technique for constructing the program structure while at
the same time conducting tests to uncover errors associated with interfacing. The objective is
to take unit tested components and build a program structure that has been dictated by
design. There is often a tendency to attempt nonincremental integration; that is, to construct the program using a "big bang" approach. All components are combined in advance. The
entire program is tested as a whole. And chaos usually results! A set of errors is
encountered. Correction is difficult because isolation of causes is complicated by the vast
expanse of the entire program. Once these errors are corrected, new ones appear and the
process continues in a seemingly endless loop. Incremental integration is the antithesis of the
big bang approach. The program is constructed and tested in small increments, where errors
are easier to isolate and correct; interfaces are more likely to be tested completely; and a
systematic test approach may be applied. In the sections that follow, a number of different
incremental integration strategies are discussed.
Top-down Integration:
Top-down integration is an incremental approach to constructing the program structure: modules are integrated by moving downward through the control hierarchy, beginning with the main control module, in either a depth-first or breadth-first manner. With depth-first integration, all components on a major control path of the structure are integrated first. Selection of a major path is somewhat
arbitrary and depends on application-specific characteristics. For example, selecting the left
hand path, components M1, M2, M5 would be integrated first. Next, M8 or (if necessary for proper functioning of M2) M6 would be integrated. Then, the central and right hand control paths
are built. Breadth-first integration incorporates all components directly subordinate at each
level, moving across the structure horizontally. From the figure, components M2, M3, and
M4 (a replacement for stub S4) would be integrated first. The next control level, M5, M6,
and so on, follows. The integration process is performed in a series of five steps:
1. The main control module is used as a test driver and stubs are substituted for all components directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth or breadth first), subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced.
The process continues from step 2 until the entire program structure is built. The top-down integration strategy verifies major control or decision points early in the test process. In a well-factored program structure, decision making occurs at upper levels in the hierarchy and is therefore encountered first. If major control problems do exist, early recognition is essential. If depth-first integration is selected, a complete function of the software may be implemented and demonstrated.
In practice, however, stubs replace low-level modules at the beginning of top-down testing, so the tester is left with three choices:
(1) Delay many tests until stubs are replaced with actual modules,
(2) Develop stubs that perform limited functions that simulate the actual module,
(3) Integrate the software from the bottom of the hierarchy upward. The first approach
(delay tests until stubs are replaced by actual modules) causes us to lose some control over
correspondence between specific tests and incorporation of specific modules. This can lead
to difficulty in determining the cause of errors and tends to violate the highly constrained
nature of the top-down approach. The second approach is workable but can lead to
significant overhead, as stubs become more and more complex. The third approach, called
bottom-up testing, is discussed in the next section.
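A minimal sketch of a stub in top-down integration; the module names M1/M2 echo the figure referenced above, but the code itself is invented. The main control module is tested first with a stub standing in for its subordinate:

    # Stub for a subordinate component (M2) that has not been integrated yet.
    def m2_stub(order):
        # Limited behaviour: just enough for M1's control logic to be exercised.
        return {"order": order, "status": "ok"}

    # Main control module under test; it takes its subordinate as a parameter so
    # the stub can later be swapped for the real M2 without changing M1.
    def m1_process(order, m2=m2_stub):
        result = m2(order)
        return "accepted" if result["status"] == "ok" else "rejected"

    assert m1_process("order-42") == "accepted"
    # Later, each stub is replaced one at a time by the real component and the
    # same tests are re-run (regression testing).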
Bottom-up Integration:
Bottom-up integration testing, as its name implies, begins construction and testing with
atomic modules (i.e., components at the lowest levels in the program structure). Because
components are integrated from the bottom up, processing required for components
subordinate to a given level is always available and the need for stubs is eliminated. A
bottom-up integration strategy may be implemented with the following steps:
1. Low-level components are combined into clusters (sometimes called builds) that perform a specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.
Integration follows the pattern shown in the figure. Components are
combined to form clusters 1, 2, and 3. Each of the clusters is tested using a driver (shown as
a dashed block). Components in clusters 1 and 2 are subordinate to Ma. Drivers D1 and D2
are removed and the clusters are interfaced directly to Ma. Similarly, driver D3 for cluster 3
is removed prior to integration with module Mb. Both Ma and Mb will ultimately be
integrated with component Mc, and so forth. As integration moves upward, the need for
separate test drivers lessens. In fact, if the top two levels of program structure are integrated
top down, the number of drivers can be reduced substantially and integration of clusters is
greatly simplified.
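A hedged sketch of a driver in bottom-up integration; the cluster below is invented. The driver coordinates test-case input and output for a low-level cluster before the superordinate modules exist:

    # Low-level cluster: two atomic components that together perform a subfunction.
    def parse_record(line):
        name, amount = line.split(",")
        return name.strip(), float(amount)

    def total(records):
        return sum(amount for _, amount in records)

    # Driver D1: a throwaway control program that feeds test cases to the cluster
    # and checks its output; it is removed once the real calling module exists.
    def driver():
        records = [parse_record(line) for line in ["a, 1.5", "b, 2.5"]]
        assert total(records) == 4.0
        print("cluster 1 passed")

    driver()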
It is virtually impossible for a software developer to foresee how the customer will really
use a program. Instructions for use may be misinterpreted; strange combinations of data may
be regularly used; output that seemed clear to the tester may be unintelligible to a user in the
field. When custom software is built for one customer, a series of acceptance tests are
conducted to enable the customer to validate all requirements. Conducted by the end-user
rather than software engineers, an acceptance test can range from an informal "test drive" to
a planned and systematically executed series of tests. In fact, acceptance testing can be
conducted over a period of weeks or months, thereby uncovering cumulative errors that
might degrade the system over time. If software is developed as a product to be used by
many customers, it is impractical to perform formal acceptance tests with each one. Most
software product builders use a process called alpha and beta testing to uncover errors that
only the end-user seems able to find. The alpha test is conducted at the developer's site by a
customer. The software is used in a natural setting with the developer "looking over the
shoulder" of the user and recording errors and usage problems. Alpha tests are conducted in
a controlled environment. The beta test is conducted at one or more customer sites by the
end-user of the software. Unlike alpha testing, the developer is generally not present.
Therefore, the beta test is a "live" application of the software in an environment that cannot
be controlled by the developer. The customer records all problems (real or imagined) that
are encountered during beta testing and reports these to the developer at regular intervals. As
a result of problems reported during beta tests, software engineers make modifications and
then prepare for release of the software product to the entire customer base.
Ans: Ultimately, software is incorporated with other system elements, and a series of system integration and validation tests are conducted; these tests fall outside the boundary of the software process and are not conducted solely by software engineers. However, steps
taken during software design and testing can greatly improve the probability of successful
software integration in the larger system. A classic system testing problem is "finger-
pointing." This occurs when an error is uncovered, and each system element developer
blames the other for the problem. Rather than indulging in such nonsense, the software
engineer should anticipate potential interfacing problems and (1) design error-handling paths
that test all information coming from other elements of the system, (2) conduct a series of
tests that simulate bad data or other potential errors at the software interface, (3) record the
results of tests to use as "evidence" if finger-pointing does occur, and (4) participate in
planning and design of system tests to ensure that software is adequately tested. System
testing is actually a series of different tests whose primary purpose is to fully exercise the
computer-based system. Although each test has a different purpose, all work to verify that
system elements have been properly integrated and perform allocated functions. In the
sections that follow, we discuss the types of system tests [BEI84] that are worthwhile for software-based systems.
Recovery testing:
Many computer based systems must recover from faults and resume processing within a
prespecified time. In some cases, a system must be fault tolerant; that is, processing faults
must not cause overall system function to cease. In other cases, a system failure must be
corrected within a specified period of time or severe economic damage will occur. Recovery
testing is a system test that forces the software to fail in a variety of ways and verifies that
recovery is properly performed. If recovery is automatic (performed by the system itself),
reinitialization, checkpointing mechanisms, data recovery, and restart are evaluated for
correctness. If recovery requires human intervention, the mean-time-to-repair (MTTR) is
evaluated to determine whether it is within acceptable limits.
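Where recovery requires human intervention, MTTR can be checked against its acceptable limit; a small hedged sketch with invented repair times and an assumed limit:

    # Hypothetical repair durations (hours) observed during recovery testing.
    repair_times = [1.5, 2.0, 0.5, 3.0]
    MTTR_LIMIT = 2.0   # assumed acceptable limit taken from the requirements

    mttr = sum(repair_times) / len(repair_times)
    print(f"MTTR = {mttr:.2f} h, within limit: {mttr <= MTTR_LIMIT}")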
Security testing:
Any computer-based system that manages sensitive information or causes actions that can
improperly harm (or benefit) individuals is a target for improper or illegal penetration.
Penetration spans a broad range of activities: hackers who attempt to penetrate systems for
sport; disgruntled employees who attempt to penetrate for revenge; dishonest individuals
who attempt to penetrate for illicit personal gain. Security testing attempts to verify that
protection mechanisms built into a system will, in fact, protect it from improper penetration.
To quote Beizer [BEI84]: "The system's security must, of course, be tested for
invulnerability from frontal attack but must also be tested for invulnerability from flank or
rear attack." During security testing, the tester plays the role(s) of the individual who desires
to penetrate the system. Anything goes! The tester may attempt to acquire passwords
through external clerical means; may attack the system with custom software designed to break down any defenses that have been constructed; may overwhelm the system, thereby
denying service to others; may purposely cause system errors, hoping to penetrate during
recovery; may browse through insecure data, hoping to find the key to system entry. Given
enough time and resources, good security testing will ultimately penetrate a system. The role
of the system designer is to make penetration cost more than the value of the information
that will be obtained.
Stress testing:
During earlier software testing steps, white-box and black-box techniques resulted in
thorough evaluation of normal program functions and performance. Stress tests are designed
to confront programs with abnormal situations. In essence, the tester who performs stress
testing asks: "How high can we crank this up before it fails?" Stress testing executes a
system in a manner that demands resources in abnormal quantity, frequency, or volume. For
example, (1) special tests may be designed that generate ten interrupts per second, when one
or two is the average rate, (2) input data rates may be increased by an order of magnitude to
determine how input functions will respond, (3) test cases that require maximum memory or
other resources are executed, (4) test cases that may cause thrashing in a virtual operating
system are designed, (5) test cases that may cause excessive hunting for disk-resident data
are created. Essentially, the tester attempts to break the program. A variation of stress testing
is a technique called sensitivity testing. In some situations (the most common occur in
mathematical algorithms), a very small range of data contained within the bounds of valid
data for a program may cause extreme and even erroneous processing or profound
performance degradation. Sensitivity testing attempts to uncover data combinations within
valid input classes that may cause instability or improper processing.
Performance testing:
For real-time and embedded systems, software that provides required function but does not
conform to performance requirements is unacceptable. Performance testing is designed to
test the run-time performance of software within the context of an integrated system.
Performance testing occurs throughout all steps in the testing process. Even at the unit level,
the performance of an individual module may be assessed as white-box tests are conducted.
However, it is not until all system elements are fully integrated that the true performance of
a system can be ascertained. Performance tests are often coupled with stress testing and
usually require both hardware and software instrumentation. That is, it is often necessary to
measure resource utilization (e.g., processor cycles) in an exacting fashion. External
instrumentation can monitor execution intervals, log events (e.g., interrupts) as they occur,
and sample machine states on a regular basis. By instrumenting a system, the tester can
uncover situations that lead to degradation and possible system failure.
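A minimal sketch of software instrumentation for performance testing; the operation being timed is hypothetical. Execution intervals are measured and logged so degradation can be spotted:

    import time

    def instrumented(operation, *args):
        """Measure and log the execution interval of one operation."""
        start = time.perf_counter()
        result = operation(*args)
        elapsed = time.perf_counter() - start
        print(f"{operation.__name__} took {elapsed * 1000:.2f} ms")
        return result

    # Hypothetical workload; in a real performance test this would be an integrated
    # system function exercised under load, often alongside stress testing.
    instrumented(sorted, list(range(100000, 0, -1)))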
Process metrics assess the effectiveness and quality of software process, determine maturity
of the process, effort required in the process, effectiveness of defect removal during
development, and so on. Product metrics measure the work products produced during different phases of software development. Project metrics illustrate the characteristics of a project and its execution.
Process Metrics
To improve any process, it is necessary to measure its specified attributes, develop a set of
meaningful metrics based on these attributes, and then use these metrics to obtain indicators
in order to derive a strategy for process improvement.
Using software process metrics, software engineers are able to assess the efficiency of the
software process that is performed using the process as a framework. Process is placed at the
centre of the triangle connecting three factors (product, people, and technology), which have
an important influence on software quality and organization performance. The skill and
motivation of the people, the complexity of the product and the level of technology used in
the software development have an important influence on the quality and team performance.
The process triangle exists within the circle of environmental conditions, which includes
development environment, business conditions, and customer /user characteristics.
To measure the efficiency and effectiveness of the software process, a set of metrics is formulated based on the outcomes derived from the process, such as errors uncovered before release, defects delivered to and reported by end users, work products delivered, human effort expended, and calendar time expended.
Note that process metrics can also be derived using the characteristics of a particular
software engineering activity. For example, an organization may measure the effort and time
spent by considering the user interface design.
It is observed that process metrics are of two types, namely, private and public. Private
Metrics are private to the individual and serve as an indicator only for the specified
individual(s). Defect rates by software module and defect rates by an individual are examples of private process metrics. Note that some process metrics are public to all team
members but private to the project. These include errors detected while performing formal
technical reviews and defects reported about various functions included in the software.
Public metrics generally assimilate information that was private to individuals and teams. Project-level defect
rates, effort and related data are collected, analyzed and assessed in order to obtain
indicators that help in improving the organizational process performance.
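A hedged sketch of turning collected project-level data into one simple public process metric, defect density per KLOC; every number below is invented:

    # Hypothetical project-level data collected during development.
    defects_found = 46   # defects reported across all modules
    size_kloc = 11.5     # delivered size in thousands of lines of code

    defect_density = defects_found / size_kloc   # defects per KLOC, a common indicator
    print(f"defect density = {defect_density:.1f} defects/KLOC")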
Ans Process metrics can provide substantial benefits as the organization works to improve
its process maturity. However, these metrics can be misused and create problems for the
organization. In order to avoid this misuse, some guidelines have been defined, which can be
used both by managers and software engineers. These guidelines are listed
below. Rational thinking and organizational sensitivity should be considered while
analyzing metrics data.
Since metrics are used to indicate a need for process improvement, any metric that indicates a problem area should not be treated as negative or used against individuals.
One such approach is statistical software process improvement (SSPI). SSPI uses software failure analysis to collect information about all errors (detected before delivery of the software) and defects (detected after the software is delivered to the user) encountered during the development of a product or system.
Product Metrics
Product metrics help software engineers detect and correct potential problems before they result in catastrophic defects. In addition, product metrics assess internal product attributes in order to gauge the efficiency and quality of the associated work products.
Various metrics formulated for products in the development process are listed below.
Metrics for analysis model: These address various aspects of the analysis model such as
system functionality, system size, and so on.
Metrics for design model: These allow software engineers to assess the quality of design
and include architectural design metrics, component-level design metrics, and so on.
Metrics for source code: These assess source code complexity, maintainability, and other
characteristics.
Metrics for testing: These help to design efficient and effective test cases and also
evaluate the effectiveness of testing.
Metrics for maintenance: These assess the stability of the software product.
Answers
Q. No. 1 2 3 4 5 6 7 8 9 10
Answer d c d a a e a a c c
Fill in Blanks:
1. The goal of coding should not be to reduce the ______ cost, but the goal should be to reduce the cost of ______.
Answers
Q. No. Answers
1 Implementation & Later Phases
2 Goto
3 structure chart
4 Linear
5 data abstraction
6 control constructs
7 if-then-else
8 global
9 prologue
10 static and dynamic
Unit-V
Ans: Reactive risk strategies: at best, the project is monitored for likely risks and resources are set aside to deal with them, should they become actual problems. More commonly, the software team does nothing about risks until something goes wrong. Then, the team flies into action in an attempt to correct the problem rapidly. This is often called a fire-fighting mode. In a reactive strategy:
the project team reacts to risks only when they occur
mitigation: plan for additional resources in anticipation of fire fighting
fix on failure: resources are found and applied when the risk strikes
crisis management: failure does not respond to applied resources and the project is in jeopardy
Ans: A proactive strategy begins long before technical work is initiated. Potential risks are identified, their probability and impact are assessed, and they are ranked by importance. Then, the software team establishes a plan for managing risk. Formal risk analysis is performed, and the organization corrects the root causes of risk by examining risk sources that lie beyond the bounds of the software and by developing the skill to manage change.
Q3)RISK IDENTIFICATION
Ans Risk identification is a systematic attempt to specify threats to the project plan. There are two distinct types of risks: generic risks, which are a potential threat to every software project, and product-specific risks.
Product-specific risks can be identified only by those with a clear understanding of the technology,
the people, and the environment that is specific to the project that is to be built.
Known and predictable risks fall into the following generic subcategories: product size, business impact, customer characteristics, process definition, development environment, technology to be built, and staff size and experience.
Q4)RISK PROJECTION
Ans Risk projection, also called risk estimation, attempts to rate each risk in two ways: the likelihood or probability that the risk is real, and the consequences of the problems associated with the risk, should it occur. The project planner, along with other managers and technical staff, performs four risk projection activities: (1) establish a scale that reflects the perceived likelihood of a risk, (2) delineate the consequences of the risk, (3) estimate the impact of the risk on the project and the product, and (4) note the overall accuracy of the risk projection so that there will be no misunderstandings.
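Risk projection is often summarized as risk exposure, RE = P x C (probability of the risk times the cost of its consequence); a small sketch with figures invented purely for illustration:

    # Hypothetical risk table: (risk, probability, cost impact in currency units).
    risks = [
        ("reusable components do not conform to standards", 0.8, 25000),
        ("key staff leave mid-project",                      0.3, 60000),
        ("customer changes requirements late",               0.5, 40000),
    ]

    # Risk exposure RE = probability * cost; sorting puts the biggest exposures first.
    for name, p, cost in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
        print(f"{name}: RE = {p * cost:,.0f}")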
Q5) Assessing Risk Impact
Ans Three factors affect the consequences that are likely if a risk does occur: its nature, its scope, and its timing. The nature of the risk indicates the problems that are likely if it occurs. The scope of a risk combines the severity (just how serious is it?) with its overall distribution. Finally, the timing of a risk considers when and for how long the impact will be felt.
Sub condition 1. Certain reusable components were developed by a third party with no knowledge
of internal design standards
Sub condition 2. The design standard for component interfaces has not been solidified and may not
conform to certain existing reusable components.
Sub condition 3. Certain reusable components have been implemented in a language that is not
supported on the target environment
Ans Quality control involves the series of inspections, reviews, and tests used throughout the software process to ensure each work product meets the requirements placed upon it. A key concept of quality control is that all work products have defined, measurable specifications to which we may compare the output of each process. The feedback loop is essential to minimize the defects produced.
Q3) Quality Assurance
Ans Quality assurance consists of the auditing and reporting functions that assess the effectiveness and completeness of quality control activities. The goal of quality assurance is to provide management with the data necessary to be informed about product quality, thereby gaining insight and confidence that product quality is meeting its goals.
Cost of Quality: The cost of quality includes all costs incurred in the pursuit of quality or in performing quality-related activities.
Ans Software quality is defined as conformance to explicitly stated functional and performance
requirements, explicitly documented development standards, and implicit characteristics that are
expected of all professionally developed software.
Software requirements are the foundation from which quality is measured. Lack of conformance to
requirements is lack of quality.
Specified standards define a set of development criteria that guide the manner in which software is engineered. If the criteria are not followed, lack of quality will almost surely result. A set of implicit requirements often goes unmentioned (e.g., the desire for ease of use and good maintainability). If software conforms to its explicit requirements but fails to meet implicit requirements, software quality is suspect.
Many different types of reviews can be conducted as part of software engineering. Each has its
place. An informal meeting around the coffee machine is a form of review, if technical problems are
discussed. A formal presentation of software design to an audience of customers, management, and
technical staff is also a form of review
A formal technical review is the most effective filter from a quality assurance standpoint. Conducted
by software engineers (and others) for software engineers, the FTR is an effective means for
improving software quality.
Reactive risk management catalogues all previous accidents and documents them to find the errors that led to the accident. Preventive measures are then recommended and implemented via the reactive risk management method. This is the earlier model of risk management. Reactive risk management can cause serious delays in a workplace due to unpreparedness for new accidents. This unpreparedness makes the resolution process complex, as the cause of the accident needs investigation and the solution involves high cost plus extensive modification.
Proactive risk management, in contrast, relies on the flexibility and creative intellectual power of humans who have a high sense of safety concern. Though humans are a source of error, they can also be a very important safety resource in proactive risk management. Further, the closed-loop strategy refers to setting up boundaries to operate within; these boundaries are considered to represent a safe performance level. Accident analysis is also part of proactive risk management: accident scenarios are built, and the key employees and stakeholders who may create the error leading to an accident are identified. So, past accidents are important in proactive risk management as well.
Ans Reactive:
accident evaluation and audit
Proactive:
observation of the present safety level and planned explicit target safety level with a creative
Risk management and planning - It assumes that the mitigation effort failed and the risk
is a reality.
RMMM Plan
It is a part of the software development plan or a separate document.
The RMMM plan documents all work executed as a part of risk analysis and used by the
project manager as a part of the overall project plan.
The risk mitigation and monitoring starts after the project is started and the documentation
of RMMM is completed.
Mitigation
The cost associated with a computer crash resulting in a loss of data is crucial. A
Computer crash itself is not crucial, but rather the loss of data. A loss of data will result in
not being able to deliver the product to the customer. This will result in not receiving a letter of acceptance from the customer. Without the letter of acceptance, the group will receive a failing grade for the course. As a result, the organization is taking steps to make
multiple backup copies of the software in development and all documentation associated
with it, in multiple locations.
Monitoring
When working on the product or documentation, the staff member should always be aware of the stability of the computing environment; any changes in the stability of the environment should be recognized and taken seriously.
Management
The lack of a stable computing environment is extremely hazardous to the software development team. In the event that the computing environment is found unstable, the
development team should cease work on that system until the environment is made stable
again, or should move to a system that is stable and continue working there.
Ans Presently there are two important approaches that are used to determine the quality of
the software:
As mentioned before, anything that is not in line with the client's requirements can be considered a defect. Many times the development team fails to fully understand the requirements of the client, which eventually leads to design errors. Besides that, errors can be caused by poor functional logic, wrong coding, or improper data handling. In order to keep track of defects, a defect management approach can be applied. In defect management, categories of defects are defined based on severity. The number of defects is counted and actions are taken as per the severity defined. Control charts can be created to measure the development process capability, as sketched below.
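A hedged sketch of the control-chart idea just mentioned, with control limits computed over defect counts per build; all the data are invented:

    import statistics

    # Hypothetical data: defects found per build. Control limits are derived from a
    # stable baseline period and new builds are checked against them.
    baseline = [12, 9, 14, 11, 10, 13]
    new_builds = {7: 15, 8: 31}

    mean = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    upper, lower = mean + 3 * sigma, max(0.0, mean - 3 * sigma)

    for build, count in new_builds.items():
        flag = "out of control" if not (lower <= count <= upper) else "in control"
        print(f"build {build}: {count} defects -> {flag} (limits {lower:.1f}..{upper:.1f})")

With these invented figures, build 8 falls outside the upper control limit and would trigger investigation of the development process.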
1. Functionality: refers to the complete set of important functions that are provided by the software.
2. Reliability: this refers to the capability of software to perform under certain conditions for
a defined duration. This also defines the ability of the system to withstand component
failure.
Maturity: frequency of failure of the software. Recoverability: ability of the software to restore its level of performance and recover the affected data after a failure.
1. Prevention costs: amount spent on ensuring that all quality assurance practices are
followed correctly. This includes tasks like training the team, code reviews and any
other QA related activity etc.
2. Appraisal costs: this is the amount of money spent on planning all the test activities
and then carrying them out such as developing test cases and then executing them.
The non-conformance cost, on the other hand, is the expense that arises due to:
1. Internal failures: the expense that arises when test cases are executed for the first time at the internal level and some of them fail. The expense arises when the programmer has to rectify all the defects uncovered in his piece of code at the time of unit or component testing.
2. External failures: the expense that occurs when the defect is found by the customer instead of the tester. These expenses are much higher than those arising at the internal level, especially if the customer becomes dissatisfied or escalates the software failure. A small worked example of the cost of quality follows.
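A small hedged sketch adding up the cost-of-quality components just described; all figures are invented. Conformance cost is prevention plus appraisal, non-conformance cost is internal plus external failure:

    # Hypothetical quality-cost figures for one release.
    prevention = 20000        # training, reviews, other QA practices
    appraisal = 15000         # test planning, test-case development and execution
    internal_failure = 8000   # defects found and fixed before release
    external_failure = 30000  # defects found by customers after release

    conformance = prevention + appraisal
    non_conformance = internal_failure + external_failure
    print(f"cost of quality = {conformance + non_conformance}, "
          f"of which {non_conformance} is the cost of poor quality")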
(b) No material
(c) Breakdown
(d) All of the above
Answers
Q. No. 1 2 3 4 5 6 7 8 9 10
Answer b b d a c a d a a b
1. TQM promotes ______
2. Kaizen is ______
10) ______ is the most widely used strategy for statistical quality assurance in industry today
Answers:
Q. No. Answers
1 Employee Participation
2 small change
3 Continuous improvement
4 Employee
5 Training and development
6 Capability maturity model
7 Customer need
8 ISO-14000
9 Customer need
10 Six Sigma
STLC phases:
STLC stands for Software Testing Life Cycle. STLC is a sequence of different activities
performed by the testing team to ensure the quality of the software or the product.
STLC is an integral part of Software Development Life Cycle (SDLC). But, STLC
deals only with the testing phases.
In the early stages of STLC, while the software or the product is being developed, the tester can analyze and define the scope of testing, entry and exit criteria, and also the test cases. This helps to reduce the test cycle time and improve quality.
As soon as the development phase is over, the testers are ready with test cases and
start with execution. This helps to find bugs in the initial phase.
STLC Phases
STLC has the following different phases but it is not mandatory to follow all phases. Phases
are dependent on the nature of the software or the product, time and resources allocated for
the testing and the model of SDLC that is to be followed.
Test Planning
Let us consider the following points to compare STLC and SDLC.
STLC is part of SDLC. It can be said that STLC is a subset of the SDLC set.
STLC is limited to the testing phase, where the quality of the software or product is ensured. SDLC has a vast and vital role in the complete development of a software product.
However, STLC is a very important phase of SDLC and the final product or the
software cannot be released without passing through the STLC process.
STLC is also a part of the post-release/ update cycle, the maintenance phase of SDLC
where known defects get fixed or a new functionality is added to the software.
The following table lists down the factors of comparison between SDLC and STLC based