Software Engineering Complete Notes
Software development ⊃ Software design ⊃ Programming
The programming paradigm is a subset of the software design paradigm, which is in turn a subset of the software development paradigm.
1) Software development paradigm:
It includes the various research and requirement-gathering activities that help build the software product.
➢ Requirement gathering
➢ Software design
➢ Programming
a) Entities
b) Attributes
c) Key Attributes
d) Relations
e) Constraints
Application Generation:
▪ Many of the components have already been tested. This reduces overall testing time.
▪ The new components are tested, and a variety of test cases is exercised.
Advantages:
• An incremental software process model
• Having a short development cycle
• Creates a fully functional system within a very short time span of 60 to 90 days
• Multiple software teams work in parallel on different functions.
Disadvantages:
• Not all types of applications are appropriate for RAD
• Requires a number of RAD teams
• If the system cannot be modularized properly, the project will fail.
• Not suited when technical risks are high
Advantages:
➢ Users are actively involved in the development
➢ Errors can be detected much earlier.
➢ Quicker user feedback is available leading to better solutions.
➢ Missing functionality can be identified easily
Disadvantages:
➢ An incomplete application may not be used, as the full system that was designed remains incomplete
➢ Practically, this methodology may increase the complexity of the system.
✓ Customer communication
✓ Planning
✓ Risk analysis
✓ Engineering
✓ Construction & release
✓ Customer evaluation
Customer communication: The software engineer communicates with stakeholders to understand the information
domain and formulates design specifications.
Effective communication between the software engineer and the stakeholders is needed to formulate better
requirements.
Planning: Estimating: The resources needed for software development are estimated.
Several techniques are adopted to monitor progress against plan
a. Work Breakdown Structures (WBS)
Testing: The process of verifying the application program to uncover errors or irregularities.
Customer evaluation:
Feedback: Details given by end user.
ii) Questionnaires
iii) Onsite observations
Advantages:
➢ Additional functionality or changes can be done at later stage.
➢ Development is fast & features are added in a systematic way.
➢ Cost estimation becomes easy.
➢ There is always space for customer feedback.
Disadvantages:
➢ There is a risk of not meeting the schedule or budget
➢ Documentation is heavier, as there are intermediate phases
➢ It is not advisable for smaller projects; it might cost them a lot.
1) The People
People management defines the following key practice areas for software people:
• RECRUITING
• SELECTION
• PERFORMANCE MANAGEMENT
• TRAINING.
1. The Players (Stakeholders):
The software process (and every software project) is populated by players who can be
categorized into one of five constituencies:
1. Senior managers, who define the business issues that influence the project.
2. Project (technical) managers, who must plan, motivate, organize, and control the
practitioners who do software work.
3. Practitioners, who deliver the technical skills needed to develop a product or
application.
4. Customers, who specify the requirements for the software to be engineered (developed).
5. End users, who interact with the software once it is released for production use.
2. The Product: Before a project can be planned
• product objectives
• scope should be established,
• alternative solutions should be considered,
• technical and management constraints should be identified.
Without this information, it is impossible to estimate the cost.
Software Scope:
The first software project management activity is the determination of software scope.
Problem Decomposition:
Problem decomposition, sometimes called partitioning or problem elaboration, is an activity that sits
at the core of software requirements analysis.
3.The Process
✓ A software process provides the framework from which a comprehensive plan for
software development can be established.
✓ In order to avoid project failure, a software project manager and the software
engineers who build the product must avoid a set of common warning signs, and
understand the critical success factors that lead to good project management.
✓ In order to manage a successful software project, we must understand what can go
wrong and how to do it right.
❖ Metrics
• They offer insight into the effectiveness of the software process and of the projects that are
conducted using the process as a framework.
• These data are analyzed, compared against past averages, and assessed
• Remedies can then be developed, and the software process can be improved
Process measurement/process metrics:
• Process metrics are collected across all projects and over long periods of time.
• Use common sense and organizational sensitivity when interpreting metrics data
• Provide regular feedback to the individuals and teams who collect measures and
metrics
• Work with practitioners and teams to set clear goals and metrics that will be used
to achieve them
❖ Project metrics
Metrics from past projects are used as a basis for estimating time and effort
• Project metrics are used to minimize the development schedule and to assess product quality on an ongoing basis
Project metrics can be consolidated to create process metrics for an organization by using:
1. Size-oriented Metrics
• The objective of software project planning is to provide a framework that enables the
manager to make reasonable estimates of resources, cost, and schedule.
Resources
• The planner begins by evaluating scope and selecting the skills required to complete
development.
3. Environmental Resources
• The environment that supports the software project, often called the software
engineering environment (SEE), incorporates hardware and software.
1. Delay estimation until late in the project (obviously, we can achieve 100% accurate
estimates after the project is complete!).
Effort estimates.
4. Use one or more empirical models for software cost and effort estimation.
DECOMPOSITION TECHNIQUES
(2) As baseline metrics collected from past projects with estimation variables to develop cost and
effort projections.
➢ Using historical information, the planner estimates optimistic, most likely, and pessimistic values for each
function.
Example:
Following the decomposition technique for LOC an estimation table is developed. A range of LOC
estimates is developed for each function. For example, the range of LOC estimates for 3D geometric
analysis function is optimistic 4600 LOC, most likely 6900 LOC, and pessimistic 8600 LOC.
Function   Estimated LOC
UICF            2300
2DGA            5300
3DGA            6800
DBM             3350
CGDF            4950
PCF             2100
DAM             8400
Total          33200

For the 3DGA function:
Optimistic: 4600
Pessimistic: 8600
Expected LOC = (optimistic + 4 × most likely + pessimistic) / 6
             = (4600 + 4(6900) + 8600) / 6
             = 6800
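The expected value used above follows the three-point (beta distribution) formula; a minimal sketch in Python:

```python
def expected_loc(opt, likely, pess):
    """Three-point estimate: (optimistic + 4*most_likely + pessimistic) / 6."""
    return (opt + 4 * likely + pess) / 6

# 3D geometric analysis (3DGA) function from the example above
print(expected_loc(4600, 6900, 8600))  # 6800.0

# Summing the expected LOC of all seven functions gives the project total
functions = {"UICF": 2300, "2DGA": 5300, "3DGA": 6800,
             "DBM": 3350, "CGDF": 4950, "PCF": 2100, "DAM": 8400}
print(sum(functions.values()))  # 33200
```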
• The project planner estimates inputs, outputs, inquiries, files, and external interfaces for the
CAD software.
Information domain value        Opt  Most likely  Pess  Est count  Weight  FP count
Number of inputs                 20      24        30       24        4       96
Number of outputs                12      15        22       16        5       80
Number of inquiries              16      22        28       22        4       88
Number of files                   4       4         5        4       10       40
Number of external interfaces     2       2         3        2        7       14
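The estimated counts and FP contributions in the table can be reproduced with a short sketch (weights taken from the table; rounding the three-point estimate to the nearest integer is an assumption):

```python
def estimated_count(opt, likely, pess):
    # Three-point estimate, rounded to the nearest whole count
    return round((opt + 4 * likely + pess) / 6)

# (optimistic, most likely, pessimistic, weight) per information domain value
domain_values = {
    "inputs":              (20, 24, 30, 4),
    "outputs":             (12, 15, 22, 5),
    "inquiries":           (16, 22, 28, 4),
    "files":               (4, 4, 5, 10),
    "external interfaces": (2, 2, 3, 7),
}

count_total = sum(estimated_count(o, m, p) * w
                  for o, m, p, w in domain_values.values())
print(count_total)  # 318
```

In the standard FP method the count total is then multiplied by an adjustment factor, FP = count total × (0.65 + 0.01 × ΣFi), where the Fi are complexity adjustment values.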
• A typical estimation model is derived using regression analysis on data collected from past
software projects. The overall structure of such models takes the form
E = A + B × (ev)^C
➢ where E is effort in person-months, ev is the estimation variable (such as LOC or FP), and A, B, and C are empirically derived constants.
The Constructive Cost Model (COCOMO) was introduced by Barry Boehm in 1981. COCOMO estimates the cost of
software product development in terms of effort (resources required to complete the project work) and
schedule (time required to complete the project work), based on the size of the software product. It estimates
the required number of man-months (MM) for the full development of a software product. According to
COCOMO, there are three modes of software development projects, depending on complexity:
I. Organic project: a small and simple software project handled by a small team with good
domain knowledge and few rigid requirements.
Ex: small data processing (or) inventory management system
II. Semidetached project: an intermediate project (in terms of size and complexity), where a
team with mixed experience (both experienced and inexperienced resources) deals with
rigid and non-rigid requirements.
Ex: Database design (or) OS development.
III. Embedded project: a project with a high level of complexity and a large team size,
considering all sets of parameters (software, hardware, and operational).
Ex: ATM software (or) Traffic light control software.
1. The Basic COCOMO model: a static model that estimates software development effort quickly and
roughly. It deals mainly with the number of lines of code, and the level of estimation accuracy is lower
since not all parameters of the project are considered. The estimated effort and scheduled time for the
project are given by the relations:
E = a × (KLOC)^b MM
D = c × (E)^d months
N = E/D persons
where
KLOC = the size of the code for the project in kilo lines of code
E = effort in man-months (MM)
D = total time required for project development in months
a, b, c, d = constants depending on the project type (standard values published by Boehm):

Project type   a     b     c     d
Organic        2.4   1.05  2.5   0.38
Semidetached   3.0   1.12  2.5   0.35
Embedded       3.6   1.20  2.5   0.32
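Using the coefficient values Boehm published for the three modes, the Basic COCOMO relations can be sketched as follows (the 32 KLOC project size is a hypothetical input):

```python
# Standard Basic COCOMO coefficients (Boehm, 1981)
COEFFICIENTS = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b        # E, in man-months (MM)
    duration = c * effort ** d    # D, in months
    staff = effort / duration     # N, average number of persons
    return effort, duration, staff

e, t, n = basic_cocomo(32, "organic")
print(f"E = {e:.1f} MM, D = {t:.1f} months, N = {n:.1f} persons")
```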
2. The Intermediate Mode: the intermediate model estimates software development effort in terms of
size of the program and the other related “cost drivers” parameters [product, hardware, project,
resource parameters] of the project. The estimated effort and scheduled time are given by the
following.
Effort (E) = a × (KLOC)^b × EAF MM
Scheduled time (D) = c × (E)^d months (m)
Here, EAF is the effort adjustment factor, which is calculated by multiplying the values of the
different cost driver parameters.
3. The detailed COCOMO model:
It is the advanced model that estimates the software development effort, as the intermediate COCOMO does,
at each stage of the software development life cycle process.
Advantages:
➢ Easy to estimate the total cost of the project.
➢ Easy to implement with various factors
➢ Provides ideas based on historical projects.
Disadvantages:
➢ It ignores requirements volatility, customer skills, and hardware issues.
➢ It limits the accuracy of the software costs.
➢ It mostly depends on time factors.
Complexity weight
Object type   Simple   Medium   Difficult
Screen           1        2         3
Report           2        5         8
Component        –        –        10
➢ The object point count is determined by multiplying the original number of object instances by the
weighting factor.
➢ For component – based development or when software reuse is applied, the % reuse is estimated and
object point count is adjusted.
NOP = (object point) * [(100-%reuse)/100]
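A sketch of the object-point adjustment (weights from the table above; the instance counts are hypothetical):

```python
# Complexity weights from the object-point table
WEIGHTS = {
    ("screen", "simple"): 1, ("screen", "medium"): 2, ("screen", "difficult"): 3,
    ("report", "simple"): 2, ("report", "medium"): 5, ("report", "difficult"): 8,
    ("component", "difficult"): 10,
}

def new_object_points(instances, percent_reuse):
    """NOP = (object points) * (100 - %reuse) / 100."""
    object_points = sum(count * WEIGHTS[key] for key, count in instances.items())
    return object_points * (100 - percent_reuse) / 100

# Hypothetical system: 3 simple screens, 2 medium reports, 1 component
counts = {("screen", "simple"): 3, ("report", "medium"): 2, ("component", "difficult"): 1}
print(new_object_points(counts, 20))  # 18.4 NOP at 20% reuse
```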
It is used at stage 2 of the COCOMO II models and supports estimation in the early design stage of a project.
The equation is as follows:
PM nominal = A × (size)^b × M
➢ The post-architecture model covers the actual development and maintenance of a software product.
➢ It predicts software development effort in person-months (PM).
Project planning comprises the project purpose, project scope, project planning process, and project plan. This
information is essential for effective project planning and assists the project management team in accomplishing
user requirements.
Project purpose: a software project is carried out to accomplish a specific purpose
Project objectives: the commonly followed project objectives are
➢ Meet user requirements
➢ Meet schedule deadlines
➢ Be within budget
➢ Produce quality deliverables.
Business objectives: Business objectives ensure that the organizational objectives and requirements
are accomplished in the project.
➢ Evaluate processes
➢ Renew policies and processes
➢ Keep the project on schedule
➢ Improve software
Project scope:
The scope provides a detailed description of:
Functions – describe the tasks that the software is expected to perform.
Features – describe the attributes required in the software.
Constraints – describe the limitations imposed on the software by hardware, memory, etc.
Interfaces – describe the interaction of software components with each other.
Project planning process: project planning process comprises several activities which are essential for carrying
out a project systematically. These activities include estimation of time, effort, and resources required and risks
associated with the project.
1. Identification of project requirements: identification of project requirements helps in performing the
activities in a systematic manner.
2. Identification of cost estimation: it is necessary to estimate the cost that is to be incurred on a project.
The cost estimation includes the cost of hardware, network, connections and the cost required for the
maintenance of hardware components.
3. Identification of risks: identifying risks before a project begins helps in understanding their probable extent
of impact on the project.
4. Identification of critical success factors: for making a project successful, critical success factors are
followed. These factors refer to the conditions that ensure greater chances of success of a project.
5. Preparation of project charter: a project charter provides a brief description of the project scope, quality,
time, cost, and resource constraints as described during project planning.
6. Preparation of project plan: a project plan provides information about the resources that are available for
the project, the individuals involved in the project, and the schedule according to which the project is to be
carried out.
7. Commencement of the project: once project planning is complete and resources are assigned to team
members, the software project commences.
Project plan:
It provides information about the end date, milestones, activities, and deliverables of the project.
A typical project plan is divided into the following sections:
➢ Introduction
➢ Project organization
➢ Risk analysis
➢ Resource requirements
➢ Work breakdown
➢ Project schedule
UNIT- II
REQUIREMENT ANALYSIS
Requirement Engineering
The process of gathering the software requirements from the client, analyzing them, and
documenting them is known as requirement engineering.
The goal of requirement engineering is to develop and maintain a descriptive ‘System
Requirements Specification’ document.
Feasibility study
• When the client approaches the organization to get the desired product developed,
it comes up with a rough idea about what functions the software must perform and
which features are expected from the software.
• Based on this the analysts do a detailed study about whether the desired system and its
functionality are feasible to develop.
• This feasibility study is focused towards goal of the organization.
• The output of this phase should contain adequate comments and recommendations for
management about whether or not the project should be undertaken.
Requirement Gathering
• If the feasibility report is positive towards undertaking the project, the next phase starts with
requirement gathering.
• Analysts and engineers communicate with the client and end users to know their ideas
on what the software should provide and which features they want the software to
include.
➢ SRS is a document created by system analyst after the requirements are collected from various
stakeholders.
➢ SRS defines how the intended software will interact with hardware, external interfaces, speed
of operation, response time of system, Security, Quality, Limitations etc.
➢ The requirements received from client are written in natural language.
➢ It is the responsibility of the system analyst to document the requirements in technical language so
that they can be comprehended and used by the software development team.
• Requirements gathering: - The developers discuss with the client and end users
Documentation:
All formal and informal, functional and non-functional requirements are documented and made
available for next-phase processing.
The objective of the feasibility study is to establish the reasons for developing the software
that is acceptable to users, adaptable to change and conformable to established standards.
Types of Feasibility
1) Technical feasibility
2) Operational feasibility
3) Economic feasibility
4) Schedule Feasibility
1. Technical Feasibility: It assesses the current resources (such as hardware and software) and
technology, which are required to accomplish user requirements in the software within the
allocated time and budget.
It also performs the following tasks.
• Analyzes the technical skills and capabilities of the software development team
members
• Determines whether the relevant technology is stable and established
• Ascertains that the technology chosen for software development has a large number of
users so that they can be consulted when problems arise or improvements are required.
2. Operational feasibility: It assesses the extent to which the required software performs a
series of steps to solve business problems and user requirements. Operational feasibility
also performs the following tasks.
• Determines whether the problems anticipated in user requirements are of high priority
• Determines whether the solution suggested by the software development team is
acceptable
• Analyzes whether users will adapt to a new software
• Determines whether the organization is satisfied by the alternative solutions proposed
by the software development team.
3. Economic feasibility: determines whether the required software is capable of generating
financial gains for an organization. It involves the cost incurred on the software
development team, estimated cost of hardware and software, cost of performing
feasibility study, and so on.
It focuses on the issues listed below:
4. Schedule Feasibility - Does the company currently have the time resources to
undertake the project?
Can the project be completed in the available time?
Data Modeling:
Analysis modeling starts with data modeling. The software engineer defines all the data objects
that are processed within the system, and the relationships between the data objects are identified.
1) Data objects:
The data object is the representation of composite information.
Composite information means that an object has a number of different properties or attributes.
2) Data Attributes:
Each of the data object has a set of attributes.
Characteristics: an attribute can
– name an instance of the data object,
– describe the instance, or
– make reference to another instance in another table.
3) Relationship:
Relationship shows the relationship between data objects and how they are related to each other.
4) Cardinality:
Cardinality states the number of events of one object that are related to the number of events of another object.
i) one to one: (1:1) one event of an object is related to one event of another object.
Ex: the employee has only one ID
ii) One to Many (1: N) one event of an object is related to many events.
Ex: one college has many departments.
iii) Many to many (M: N) Many events of one object are related to many events.
Ex: Many customers place order for many products.
5) Modality:
If the relationship between events is optional, then the modality of the relationship is zero.
If the relationship between events is compulsory, then the modality of the relationship is one.
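The three cardinalities can be sketched with plain Python structures (all names are illustrative):

```python
# 1:1 — each employee has exactly one ID
employee_id = {"Alice": "E001", "Bob": "E002"}

# 1:N — one college has many departments
departments = {"City College": ["CSE", "ECE", "MECH"]}

# M:N — many customers order many products; usually modelled
# with an associative list of (customer, product) pairs
orders = [("cust1", "prodA"), ("cust1", "prodB"), ("cust2", "prodA")]

print([p for c, p in orders if c == "cust1"])  # ['prodA', 'prodB']
```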
Q) Problems of Requirements:
Problem 1: Customers don't (really) know what they want:
Customers often have only a vague idea of what they need, and it's up to you to ask the right
questions and perform the analysis necessary to turn this amorphous vision into a formally documented
software requirements specification.
To solve this problem, you should
Ensure that you spend sufficient time at the start of the project on understanding the objectives,
deliverables and scope of the project
Make visible any assumptions that the customer is using, and critically evaluate both the likely
end-user benefits and the risks of the project.
Attempt to write a concrete vision statement for the project, which encompasses both the specific
functions and the user benefits it provides.
Get your customer to read, think about and sign off the completed software requirement
specification, to align expectations and ensure that both parties have a clear understanding of the
deliverable.
Problem 2: Requirements change during the course of the project:
The second most common problem with software projects is that the requirements defined in the
first phase change as the project progresses. This may occur because changes in the external environment
require reshaping of the original business problem.
To "solve this problem", you should:
Have a clearly defined process for receiving, analyzing and incorporating change requests. Set
milestones for each development phase beyond which certain changes are not permissible.
Ensure that change requests are clearly communicated to all stakeholders.
Problem 3: customers have unreasonable timelines:
Customers say something like "it's an emergency job and we need this project completed in X
weeks". A common mistake is to agree to such timelines before actually performing a detailed analysis
and understanding both the scope of the project and the resources necessary to execute it.
To "solve this problem", you should:
Convert the software requirements specification into a project plan.
Ensure that the project plan takes account of available resource constraints and keeps sufficient
time for testing and quality inspection.
Enter into a conversation about deadlines with your customer, using the figures in your plan as
supporting evidence for your statements.
Problem 4: Communication gaps exist between customers, engineers and project managers:
Customers and engineers fail to communicate clearly with each other. This can lead to
confusion and severe miscommunication.
To solve this problem, you should:
Take notes at every meeting and disseminate these throughout the project team.
Be consistent in your use of terms. Make yourself a glossary of the terms that you're going to
use right at the start, and ensure all stakeholders have a copy.
Problem 5: The development team doesn't understand the politics of the customer's
organization:
When dealing with large projects in large organizations, information is often fragmented, and
requirements analysis is hence stymied by problems of trust, internal conflicts of interest, and
information inefficiencies.
To solve this problem, you should:
Review your existing network and identify both the information you need and who is likely to
have it.
Cultivate allies, build relationships and think systematically about your social capital in the
organization.
Use initial points of access/leverage to move your agenda forward.
UNIT-III
SOFTWARE DESIGN
Q) Explain about software design?
➢ Software design encompasses the set of principles, concepts, and practices
that lead to the development of a high-quality system or product.
• The data design transforms the information domain model created during analysis
into the data structures that will be required to implement the software.
• The architectural design defines the relationship between major structural elements
of the software.
• The interface design describes how the software communicates within itself, with
systems that interoperate with it, and with humans who use it.
• The component-level design transforms structural elements of the software
architecture into a procedural description of software components.
THE DESIGN PROCESS
Davis [DAV95] suggests a set of principles for software design.
➢ The design process should not suffer from “tunnel vision”.
➢ The design should be traceable to the analysis model.
➢ The design should not reinvent the wheel
➢ The design should “minimize the intellectual distance” between the software
and the problem in the real world.
➢ The design should exhibit uniformity and integration.
➢ The design should be structured to degrade gently.
➢ The design should be structured to accommodate change.
➢ Design is not coding.
➢ The design should be assessed for quality.
➢ The design should be reviewed to minimize conceptual errors.
2.Explain about the Abstraction?
Abstraction is the process of describing a problem at a high level of representation without
bothering about its internal details.
➢ Highest level of abstraction: the solution is stated in broad terms using the language of the
problem environment
➢ Lower levels of abstraction: More detailed description of the solution is provided
Types of abstraction:
➢ Procedural abstraction: Refers to a sequence of instructions that has a specific and
limited function.
Ex: The word “open” for a door implies a long sequence of procedural steps.
➢ Data abstraction: a named collection of data that describes a data object.
Ex: “door” would encompass a set of attributes that describe the door, such as:
door type,
swing direction,
weight,
dimensions, etc.
Control abstraction: It implies a program control mechanism without specifying internal details.
Ex: loops, iterations, multithreading.
Advantages:
➢ It separates design from implementation.
➢ It helps in problem understanding and software maintenance.
➢ It reduces the complexity for users and engineers.
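The door examples above can be sketched in code (attribute names come from the list; the function body is illustrative):

```python
from dataclasses import dataclass

# Data abstraction: a named collection of attributes describing the "door" object
@dataclass
class Door:
    door_type: str
    swing_direction: str
    weight: float
    dimensions: tuple

# Procedural abstraction: "open" names a specific, limited sequence of steps
# (walk to door, reach for knob, turn knob, pull...) hidden from the caller
def open_door(door: Door) -> str:
    return f"{door.door_type} door opened"

front = Door("wooden", "inward", 25.0, (2.0, 0.9))
print(open_door(front))  # wooden door opened
```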
3.EXPLAIN ABOUT DESIGN CONCEPTS?
➢ A set of fundamental software design concepts has evolved over the history of software
engineering. Each provides the software designer with a foundation from which more
sophisticated design methods can be applied.
1.Abstraction
➢ A procedural abstraction refers to a sequence of instructions that have a specific and
limited function.
➢ A data abstraction is a named collection of data that describes a data object.
2.Architecture
➢ Software architecture alludes to “the overall structure of the software and the ways
in which that structure provides conceptual integrity for a system”
➢ A set of architectural patterns enables a software engineer to solve common design
problems.
➢ Structural properties: This aspect of the architectural design representation defines
the components of a system (e.g., modules, objects, filters) and the manner in which
those components are packaged and interact with one another.
➢ Extra-functional properties: The architectural design description should address how
the design architecture achieves requirements for performance, capacity, reliability,
security, adaptability, and other system characteristics.
➢ Families of related systems: The architectural design should draw upon repeatable
patterns that are commonly encountered in the design of families of similar systems.
3. Patterns:
➢ The intent of each design pattern is to provide a description that enables a designer to
determine:
➢ Whether the pattern is applicable to the current work.
➢ Whether the pattern can be reused (hence, saving design time)
➢ Whether the pattern can serve as a guide for developing a similar, but functionally or
structurally different pattern.
4.Separation of Concerns
➢ Separation of concerns is a design concept [Dij82] that suggests that any complex
problem can be more easily handled if it is subdivided into pieces that can each be
solved and/or optimized independently. A concern is a feature or behavior that is
specified as part of the requirements model for the software.
5. Modularity
➢ Modularity is the most common manifestation of separation of concerns. Software
is divided into separately named and addressable components, sometimes called
modules, that are integrated to satisfy problem requirements.
6.Information Hiding
➢ The use of information hiding as a design criterion for modular systems provides
the greatest benefits when modifications are required during testing and later
during software maintenance.
7.Functional Independence
➢ Functional independence is achieved by developing modules with “single-minded”
function and an “aversion” to excessive interaction with other modules.
➢ Independence is assessed using two qualitative criteria: cohesion and coupling.
➢ Cohesion is an indication of the relative functional strength of a module.
➢ Coupling is an indication of the relative interdependence among modules.
8.Refinement
➢ Refinement helps you to reveal low-level details as design progresses. Both
concepts allow you to create a complete design model as the design evolves.
9.Refactoring
➢ “Refactoring is the process of changing a software system in such a way that it does
not alter the external behavior of the code [design]
➢ yet improves its internal structure.”
10.Design Classes
➢ User interface classes define all abstractions that are necessary for human-computer
interaction (HCI).
➢ Business domain classes are often refinements of the analysis classes
➢ Process classes implement lower-level business abstractions required to fully
manage the business domain classes.
➢ Persistent classes represent data stores (e.g., a database) that will persist beyond
the execution of the software.
System classes implement software management and control functions that enable the
system to operate and communicate within its computing environment and with the
outside world.
Q) Explain about the modularity?
Modularity is the most common manifestation of separation of concerns. Software is
divided into separately named and addressable components, sometimes called modules, that
are integrated to satisfy problem requirements. It is a technique to divide a software system
into multiple discrete and independent modules.
Modular decomposability: if a design method provides a systematic mechanism for
decomposing the problem into sub problems it will reduce the complexity of the overall
problem, there by achieving an effective modular solution.
Modular composability: If a design method enables existing design components to be
assembled into a new system, it will yield a modular solution that does not reinvent the wheel.
Modular understandability: If a module can be understood as a stand-alone unit it will be
easier to build and easier to change.
Modular continuity: If small changes to the system requirements result in changes to
individual modules, rather than system wide changes, the impact of change-induced side
effects will be minimized.
Modular protection: If an aberrant condition occurs within a module and its effects are
constrained within that module, the impact of error induced side effects will be minimized.
Advantages:
➢ Using modularity smaller components are easier to maintain.
➢ Desired level of abstraction can be brought in the program.
➢ Components with high cohesion can be re-used again.
➢ Desired from security aspect
➢ Concurrent execution can be made possible.
Coincidental cohesion: A module is said to have coincidental cohesion, if it performs a set of tasks
that relate to each other very loosely, if at all.
Logical cohesion: A module is said to be logically cohesive if all elements of the module perform
similar operations, such as error handling, data input, data output, etc.
Temporal cohesion: When a module contains functions that are related by the fact that these functions
are executed in the same time span, then the module is said to possess temporal cohesion.
Procedural cohesion: A module is said to possess procedural cohesion, if the set of functions of the
module are executed one after the other, though these functions may work towards entirely different
purposes and operate on very different data.
Communicational cohesion: A module is said to have communicational cohesion, if all functions of
the module refer to or update the same data structure.
Sequential cohesion: A module is said to possess sequential cohesion, if the different functions of the
module executed in a sequence, and the output from one function is input to the next in the sequence.
Functional cohesion: A module is said to possess functional cohesion if the different elements of the
module cooperate to complete a single task.
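A hedged sketch contrasting the weakest and strongest forms above (the functions are hypothetical):

```python
# Logical cohesion (weak): one module groups similar-looking operations,
# selected at run time by a flag
def io_module(action, store, payload=None):
    if action == "input":
        store.append(payload)
    elif action == "output":
        return list(store)
    elif action == "error":
        return "error logged"

# Functional cohesion (strongest): every statement contributes to one task
def average(numbers):
    return sum(numbers) / len(numbers)

buf = []
io_module("input", buf, 10)
print(io_module("output", buf))  # [10]
print(average([2, 4, 6]))        # 4.0
```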
Coupling: The coupling between two modules indicates the degree of interdependence
between them. The degree of coupling between two modules depends on their interface
complexity.
If the system has low coupling, then it is a sign of a well-structured computer system and
a great design.
The coupling types below are ordered from low (data coupling) to high (content coupling):
Data coupling: Two modules are data coupled if they communicate using an elementary data item that is
passed as a parameter between the two.
Stamp coupling: Two modules are stamp coupled, if they communicate using a composite data item
such as a record in PASCAL or a structure in C.
Control coupling: Control coupling exists between two modules, if data from one module is used to
direct the order of instruction execution in the other.
Common coupling: Two modules are common coupled, if they share some global data items.
Content coupling: Content coupling exists between two modules, if they share code. That is a jump
from one module into the code of another module can occur.
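Three of these coupling types can be sketched as follows. This is a hypothetical example (the function and variable names are illustrative): data coupling passes only elementary values, stamp coupling passes a composite item, and common coupling relies on a shared global.

```python
# Data coupling (lowest, preferred): modules share only elementary parameters.
def net_price(price, tax_rate):
    return price * (1 + tax_rate)

# Stamp coupling: a composite data item (here a dict acting as a record)
# is passed, even though only part of it is needed.
def invoice_total(order):
    return net_price(order["price"], order["tax_rate"])

# Common coupling (avoid): modules communicate through a shared global.
TAX_RATE = 0.2  # global data item shared by several modules

def net_price_global(price):
    return price * (1 + TAX_RATE)  # hidden dependence on global state

print(net_price(100, 0.2))  # 120.0
```

The data-coupled `net_price` can be tested in isolation; `net_price_global` cannot, because its result depends on state defined elsewhere.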
Q) Explain the concept of Architectural design?
As architectural design begins, the software to be developed must be put into
context— that is, the design should define the external entities (other systems, devices,
people) that the software interacts with and the nature of the interaction.
Representing the System in Context: Systems that interoperate with the target system are
represented as:
Superordinate systems: those systems that use the target system as part of
some higher-level processing scheme.
Subordinate systems: those systems that are used by the target system and
provide data or processing that are necessary to complete target system
functionality.
Peer-level systems: those systems that interact on a peer-to-peer basis (i.e.,
information is either produced or consumed by the peers and the target system).
Actors: entities (people, devices) that interact with the target system by
producing or consuming information that is necessary for requisite processing.
Defining Archetypes: An archetype is a class or pattern that represents a core abstraction that is
critical to the design of an architecture for the target system.
Node: Represents a cohesive collection of input and output elements of the home security
function.
Detector: An abstraction that encompasses all sensing equipment that feeds
information into the target system.
Indicator: An abstraction that represents all mechanisms for indicating that
an alarm condition is occurring.
Controller: An abstraction that depicts the mechanism that allows the
arming or disarming of a node. If controllers reside on a network, they have
the ability to communicate with one another.
Refining the Architecture into Components:
The architecture is applied to a specific problem with the intent of
demonstrating that the structure and components are appropriate.
The diagram below shows an instantiation of the SafeHome architecture for the
security system.
• Correctness
• Efficiency
• Understandability
• Maintainability
• Simplicity
• Completeness
• Verifiability
• Portability
• Modularity
• Reliability
• Reusability
When a component is divided into separate pieces, it is called the parent and its pieces are
called its children. The structure chart shows the hierarchy between a parent and its children.
The procedural design is often understood as a software design process that uses mainly
control commands such as: sequence, condition, repetition, which are applied to the
predefined data.
Sequences: serve to achieve the processing steps in order that is essential in the
specification of any algorithm.
Conditions: provide facilities for achieving selected processing according to some logical
statement.
Repetitions: serve to achieve looping during the computation process.
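The three procedural-design constructs can be shown together in one small algorithm. This is an illustrative sketch (the function is not from the notes): summing the even numbers in a list uses sequence, condition, and repetition.

```python
def sum_of_evens(numbers):
    total = 0                 # sequence: steps executed in order
    for n in numbers:         # repetition: loop over the input
        if n % 2 == 0:        # condition: select even values only
            total += n        # sequence inside the selected branch
    return total

print(sum_of_evens([1, 2, 3, 4, 5, 6]))  # 12
```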
• Entities - Entities are source and destination of information data. Entities are
represented by a rectangle with their respective names.
• Process - Activities and action taken on the data are represented by Circle or Round-
edged rectangles.
• Data Storage - There are two variants of data storage notation - it can either be represented as
a rectangle with both shorter sides missing (two parallel lines) or as an open-sided rectangle with
only one side missing.
Levels of DFD:
• Level 0 - Highest abstraction level DFD is known as Level 0 DFD, which depicts the
entire information system as one diagram concealing all the underlying details. Level
0 DFDs are also known as context level DFDs.
• Level 1 - The Level 0 DFD is broken down into more specific, Level 1 DFD. Level
1 DFD depicts basic modules in the system and flow of data among various modules.
Level 1 DFD also mentions basic processes and sources of information.
• Level 2 - At this level, DFD shows how data flows inside the modules mentioned in
Level 1.
Higher-level DFDs can be transformed into more specific lower-level DFDs
with a deeper level of understanding until the desired level of specification is
achieved.
UNIT – IV
Q) DEFINE USER INTERFACE DESIGN? EXPLAIN ABOUT THE
TYPES OF INTERFACES?
User interface is the front-end application view to which user interacts in order to use the
software. User can manipulate and control the software as well as hardware by means of
user interface. Today, user interface is found at almost every place where digital technology
exists, right from computers, mobile phones, cars, music players, airplanes, ships etc.
User interface is part of software and is designed in such a way that it is expected to provide the
user insight into the software. UI provides the fundamental platform for human-computer interaction.
UI can be graphical, text-based, audio-video based, depending upon the underlying
hardware and software combination. UI can be hardware or software or a combination of both.
UI is broadly divided into two categories:
• Command Line Interface
• Graphical User Interface
1) COMMAND LINE INTERFACE: A command is a text-based reference to set of
instructions, which are expected to be executed by the system. There are methods like
macros, scripts that make it easy for the user to operate.
A text-based command line interface can have the following elements:
Command Prompt - It is a text-based notifier that mostly shows the context in which the
user is working. It is generated by the software system.
Cursor - It is a small horizontal line or a vertical bar of the height of line, to represent
position of character while typing. Cursor is mostly found in blinking state. It moves as the
user writes or deletes something.
Command - A command is an executable instruction. It may have one or more parameters.
Output on command execution is shown inline on the screen. When output is produced,
command prompt is displayed on the next line.
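The CLI elements above (prompt, command, parameters, inline output) can be sketched as a minimal command loop. This is a hypothetical stand-in, not a real shell; commands are fed in as a list so the loop is testable.

```python
def run_cli(commands):
    """Process a list of command strings; return the output lines."""
    output = []
    for line in commands:
        parts = line.split()               # command plus parameters
        if not parts:
            continue                       # empty prompt line, re-prompt
        cmd, args = parts[0], parts[1:]
        if cmd == "echo":
            output.append(" ".join(args))  # output shown inline
        elif cmd == "exit":
            break                          # end the session
        else:
            output.append(f"unknown command: {cmd}")
    return output

print(run_cli(["echo hello", "bogus", "exit", "echo skipped"]))
# ['hello', 'unknown command: bogus']
```

Note how `exit` terminates the loop, so the final `echo` is never executed, mirroring a real interactive session.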
2) GRAPHICAL USER INTERFACE: Graphical User Interface provides the user graphical
means to interact with the system. GUI can be a combination of both hardware and software.
Using a GUI, the user interprets the software.
GUI Elements
GUI provides a set of components to interact with software or hardware.
Every graphical component provides a way to work with the system.
A GUI system has following elements such as:
1) Window 2) Tabs 3) Menu 4) Icon 5) Cursor
goal, the team will also focus on continual adaptations that will make the process
fit the needs of the team.
The figure implies that each of these tasks will occur more than once, with each pass
around the spiral representing additional elaboration of requirements and the resultant design.
1. Analysis:
➢ Analysis of the user environment focuses on the physical work environment.
➢ The information gathered as part of the analysis action is used to create an analysis
model for the interface.
2. Interface design: the goal of interface design is to define a set of interface objects
and actions that enable a user to perform all defined tasks.
3. Interface construction: interface construction begins with the creation of a prototype
that enables usage scenarios to be evaluated.
4. Interface validation: validation focuses on the ability of the interface to implement every user task
correctly, to accommodate all task variations, and to achieve all general user
requirements.
Interface Analysis:
A key tenet of the software engineering process models is this: understand the problem
before you attempt to design a solution.
In user interface design, understanding the problem means understanding:
i. The people who will interact with the system through the interface.
ii. The tasks that end users must perform to do their work.
Controllability. “The better we can control the software, the more the testing can
be automated and optimized.”
Decomposability. “By controlling the scope of testing, we can more quickly
isolate problems and perform smarter retesting.”
Simplicity. “The less there is to test, the more quickly we can test it.”
The program should exhibit functional simplicity (e.g., the feature set is the minimum
necessary to meet requirements), structural simplicity (e.g., architecture is modularized to
limit the propagation of faults), and code simplicity (e.g., a coding standard is adopted for
ease of inspection and maintenance).
Stability. “The fewer the changes, the fewer the disruptions to testing.” Changes to the
software are infrequent, controlled when they do occur, and do not invalidate existing tests.
Understandability. “The more information we have, the smarter we will test.”
Test Characteristics.
PRINCIPLES:
1) Software engineer must understand the basic principles that guide software testing
2) All tests should be traceable to customer requirements
3) Tests should be planned long before testing begins
4) The Pareto principle applies to software testing
5) Testing should begin “in the small” and progress toward testing in the large
6) Exhaustive testing is not possible
7) To be most effective, testing should be conducted by an independent third party.
Audits are a type of review performed by SQA personnel with the intent of ensuring that
quality guidelines are being followed for software engineering work
Testing. Software testing is a quality control function that has one primary goal—to find
errors. The job of SQA is to ensure that testing is properly planned and efficiently conducted
Error/defect collection and analysis. The only way to improve is to measure how you’re
doing. SQA collects and analyzes error and defect data to better understand how errors are
introduced and what software engineering activities are best suited to eliminating them.
Change management. Change is one of the most disruptive aspects of any software project. If
it is not properly managed, change can lead to confusion, and confusion almost always leads to
poor quality.
Education. Every software organization wants to improve its software engineering practices.
A key contributor to improvement is education of software engineers, their managers, and other
stakeholders.
Vendor management. Three categories of software are acquired from external software
vendors
Security management. SQA ensures that appropriate process and technology are used to
achieve software security.
Safety. SQA may be responsible for assessing the impact of software failure and for initiating
those steps required to reduce risk.
Risk management. The SQA organization ensures that risk management activities are
properly conducted and that risk-related contingency plans have been established.
The SQA actions described in the preceding section are performed to achieve a set of
pragmatic goals:
Requirements quality. SQA must ensure that the software team has properly reviewed the
requirements model to achieve a high level of quality.
Design quality. SQA looks for attributes of the design that are indicators of quality. Code
quality. SQA should isolate those attributes that allow a reasonable analysis of the quality
of code.
Quality control effectiveness. SQA analyzes the allocation of resources for reviews and testing
to assess whether they are being allocated in the most effective manner.
3) Sandwich Testing:
Sandwich Testing is a strategy in which top level modules are tested with lower-level modules at the
same time lower modules are integrated with top modules and tested as a system.
It is a combination of Top-down and Bottom-up approaches therefore it is called Hybrid Integration
Testing. It makes use of both stubs as well as drivers.
Q) Write about Smoke testing?
Smoke Testing is a software testing process that determines whether the deployed software
build is stable or not. Smoke testing is a confirmation for QA team to proceed with further
software testing. It consists of a minimal set of tests run on each build to test software
functionalities. Smoke testing is also known as "Build Verification Testing" or “Confidence Testing.”
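A smoke-test suite can be sketched as a handful of checks run on every build before deeper testing proceeds. The application and its functions below are hypothetical stand-ins; a real suite would exercise the build's critical paths.

```python
# Hypothetical application under test.
def create_app():
    return {"status": "up", "users": []}

def add_user(app, name):
    app["users"].append(name)
    return name in app["users"]

def smoke_test():
    """Minimal checks: does the build start, and does a core function work?"""
    app = create_app()
    checks = {
        "app starts": app["status"] == "up",
        "core function works": add_user(app, "alice"),
    }
    return all(checks.values()), checks

ok, results = smoke_test()
print("BUILD STABLE" if ok else "BUILD REJECTED", results)
```

If any check fails, the QA team rejects the build immediately instead of spending time on further testing, which is the whole point of build verification.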
[Figure: needs & expectations of the customer flow through specification, process, and product, with validation closing the loop back to the customer]
Security Testing: Security testing attempts to verify that protection mechanisms built into a
system will, in fact, protect it from improper penetration.
Stress Testing: Stress testing executes a system in a manner that demands resources in
abnormal quantity, frequency, or volume.
Performance Testing: Performance tests are often coupled with stress testing and usually
require both hardware and software instrumentation.
Deployment testing: In many cases, software must execute on a variety of platforms and under more
than one operating system environment. Deployment testing, sometimes called configuration testing,
exercises the software in each environment in which it is to operate.
The basis path method enables the test-case designer to derive a logical complexity measure of
a procedural design and use this measure as a guide for defining a basis set of execution paths.
Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing.
The principle behind basis path testing is that all independent paths of the program have to be
tested at least once. The technique has four steps:
Step 1: Draw a control flow graph.
Step 2: Calculate the cyclomatic complexity.
Step 3: Determine a basis set of independent paths.
Step 4: Generate test cases for each path.
1: IF A = 100
2: THEN IF B > C
3: THEN A = B
4: ELSE A= C
5: ENDIF
6: ENDIF
7: Print A
Cyclomatic complexity can be computed in three ways:
1. Cyclomatic complexity V(G) equals the number of regions of the flow graph.
2. V(G) = E − N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
3. Cyclomatic complexity V(G) for a flow graph G is also defined as V(G) = P + 1, where P is the number of predicate nodes.
From the example in Step 1, we can redraw the graph to show the predicate nodes clearly. There are two predicate nodes (the two IF statements), so V(G) = 2 + 1 = 3, giving three independent paths:
Path 1: 1, 2, 3, 5, 6, 7.
Path 2: 1, 2, 4, 5, 6, 7.
Path 3: 1, 6, 7.
Step 4: Generate test cases for each path
After determining the basis set of paths, we can generate the test case for each path. Usually, we
need at least one test case to cover one path. In the example, however, Path 3 is already covered by
Path 1 and 2 so we only need to write 2 test cases.
In conclusion, basis path testing helps us to reduce redundant tests. It suggests independent
paths from which we write test cases needed to ensure that every statement and condition can
be executed at least one time.
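The edge/node formula can be checked mechanically for the example above. The edge list below is my reading of the flow graph for the IF/ENDIF snippet (nodes 1-7); it is a sketch, not an official figure.

```python
# Flow graph of the example: node 1 is "IF A = 100", node 2 is "IF B > C",
# nodes 3/4 are the THEN/ELSE assignments, 5 and 6 are the ENDIFs, 7 is Print A.
edges = [
    (1, 2), (1, 6),   # predicate node 1: true branch and false branch
    (2, 3), (2, 4),   # predicate node 2: THEN A = B, ELSE A = C
    (3, 5), (4, 5),   # both branches rejoin at the inner ENDIF
    (5, 6), (6, 7),   # outer ENDIF, then Print A
]
nodes = {n for e in edges for n in e}

E, N = len(edges), len(nodes)
v_g = E - N + 2
print(f"V(G) = {E} - {N} + 2 = {v_g}")  # V(G) = 8 - 7 + 2 = 3
```

The result agrees with the predicate-node formula (2 predicate nodes + 1 = 3) and with the three independent paths listed above.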
Loop Testing
Loop testing is a white-box testing technique that focuses exclusively on the
validity of loop constructs
Simple loops. The following set of tests can be applied to simple loops, where n
is the maximum number of allowable passes through the loop.
a. Skip the loop entirely.
b. Only one pass through the loop.
c. Two passes through the loop.
d. m passes through the loop where m < n.
e. n -1, n, n + 1 passes through the loop.
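The five simple-loop tests above translate directly into a list of pass counts. This is a small illustrative helper (not from the notes); it picks m as n // 2 purely as one representative value with 1 < m < n.

```python
def simple_loop_test_values(n):
    """Return loop pass counts to test a simple loop with max n passes:
    skip, one pass, two passes, m passes, and n-1/n/n+1 passes."""
    m = n // 2                              # one representative m < n
    values = [0, 1, 2, m, n - 1, n, n + 1]
    seen, result = set(), []
    for v in values:                        # drop duplicates for small n
        if v not in seen:
            seen.add(v)
            result.append(v)
    return result

print(simple_loop_test_values(10))  # [0, 1, 2, 5, 9, 10, 11]
```

Note that n + 1 deliberately exceeds the maximum allowable passes: it probes whether the loop guard actually enforces its bound.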
Nested Loops:
1. Set all the other loops to minimum values and start at the innermost loop.
2. For the innermost loop, perform a simple loop test and hold the outer loops at their
minimum iteration parameter value.
Concatenated Loops:
In the concatenated loops, if two loops are independent of each other then they are tested
using simple loops or else test them as nested loops.
Unstructured Loops: For unstructured loops, restructuring of the design is required to reflect
the use of the structured programming constructs.
Q) EXPLAIN ABOUT BLACK-BOX TESTING?
Black-box testing, also called behavioral testing, focuses on the functional requirements of
the software.
Black-box testing attempts to find errors in the following categories: incorrect or missing
functions, interface errors, errors in data structures or external database access, behavior or
performance errors, and initialization and termination errors. It also answers questions such as:
• What data rates and data volume can the system tolerate?
Software testing begins by creating a graph of important objects and their relationships and then
devising a series of tests that will cover the graph so that each object and relationship is
exercised and errors are uncovered.
A directed link (represented by an arrow) indicates that a relationship moves in only one
direction.
A bidirectional link, also called a symmetric link, implies that the relationship applies in
both directions.
Parallel links are used when a number of different relationships are established between
graph nodes.
Equivalence Partitioning
Equivalence partitioning is a black-box testing method that divides the input domain of a
program into classes of data from which test cases can be derived.
Test-case design for equivalence partitioning is based on an evaluation of equivalence classes
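A worked example helps here. Suppose (hypothetically) a field accepts an integer age from 18 to 60: equivalence partitioning yields one valid class and two invalid classes, and one representative test case per class stands in for the whole class.

```python
def is_valid_age(age):
    """Hypothetical input rule: accept integer ages from 18 to 60."""
    return 18 <= age <= 60

# One representative value per equivalence class.
partitions = {
    "invalid: below range": 10,   # represents all ages < 18
    "valid: within range": 35,    # represents all ages 18..60
    "invalid: above range": 75,   # represents all ages > 60
}

for name, value in partitions.items():
    print(name, "->", is_valid_age(value))
```

Three test cases cover the same ground as testing every possible age, which is the economy equivalence partitioning buys.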
Reduced cost: The cost of re-engineering is often significantly less than the costs
of developing new software
CASE stands for Computer Aided Software Engineering. It means the development
and maintenance of software projects with the help of various automated software tools.
CASE tools can be broadly divided into the following parts based on their use at a particular
SDLC stage.
CASE tools can be grouped together if they have similar functionality, process activities and
capability of getting integrated with other tools.
Scope of Case Tools
The scope of CASE tools goes throughout the SDLC.
Case Tools Types
Now we briefly go through various CASE tools:
Diagram tools
These tools are used to represent system components, data and control flow among various
software components and system structure in a graphical form. For example, Flow Chart
Maker
tool for creating state-of-the-art flowcharts.
Process Modeling Tools
Process modeling is a method to create the software process model, which is used to develop the
software. Process modeling tools help the managers to choose a process model or modify it as
per the requirement of software product. For example, EPF Composer
Design Tools
These tools help software designers to design the block structure of the software, which may
further be broken down in smaller modules using refinement techniques. These tools provide
detailing of each module and interconnections among modules. For example, Animated
Software Design
Configuration Management Tools
An instance of software is released under one version. Configuration Management tools deal
with –
Version and revision management
Baseline configuration management
Change control management
CASE tools help in this by automatic tracking, version management and release management.
For example, Fossil, Git, AccuRev.
Change Control Tools
These tools are considered as a part of configuration management tools. They deal with changes
made to the software after its baseline is fixed or when the software is first released. CASE
tools automate change tracking, file management, code management and more. It also helps in
enforcing change policy of the organization.
Programming Tools
These tools consist of programming environments like IDEs (Integrated Development
Environments), built-in module libraries, and simulation tools. These tools provide
comprehensive aid in building software products and include features for simulation and
testing. For example, Cscope to search code in C, Eclipse.
Prototyping Tools
A software prototype is a simulated version of the intended software product. A prototype provides
the initial look and feel of the product and simulates a few aspects of the actual product.
Web Development Tools
These tools assist in designing web pages with all allied elements like forms, text, script,
graphic and so on. Web tools also provide live preview of what is being developed and how
will it look after completion. For example, Fontello, Adobe Edge Inspect, Foundation 3,
Brackets.
Quality Assurance Tools
Quality assurance in a software organization is monitoring the engineering process and
methods adopted to develop the software product in order to ensure conformance of quality
as per organization standards. QA tools consist of configuration and change control tools and
software testing tools. For example, SoapTest, AppsWatch, JMeter.
Maintenance Tools
Software maintenance includes modifications in the software product after it is delivered.
Automatic logging and error reporting techniques, automatic error ticket generation and root
cause Analysis are few CASE tools, which help software organization in maintenance phase
of SDLC.
Case Environment:
User interface
The user interface allows users to interact with the different tools, reducing the
overhead of learning how the different tools are used.
Different CASE tools represent the software product as a set of entities such as specifications,
design, text data, project plans, etc. Commercial relational database management systems
are geared towards supporting large volumes of information structured as simple, relatively
short records.