SEPM - MODULE 1
Software is: (1) instructions (computer programs) that when executed provide desired features,
function, and performance; (2) data structures that enable the programs to adequately
manipulate information; and (3) descriptive information in both hard copy and virtual forms
that describes the operation and use of the programs.
Software Engineering:
1. By Fritz Bauer: Software engineering is the establishment and use of sound engineering
principles in order to obtain economically software that is reliable and works efficiently on real
machines.
Importance:
• Foundation for Architecture: Good design is crucial for creating a robust software
architecture.
• Interaction Efficiency: Ensures that all system components work together seamlessly.
Quality and Maintainability:
• Robustness and Reliability: Ensures the software can handle failures gracefully.
• Ease of Maintenance: Good design and development practices make future updates
and enhancements easier.
• Trustworthy Systems: SE practices build software that can be trusted in strategic and
tactical operations.
• Agile Practices: Promotes agile practices to respond to changes and new requirements
effectively.
• Quality Focus: SE emphasizes quality at its core, ensuring high standards throughout
the development process.
• Process Layer: Forms the foundation for managing projects and ensuring quality.
• Methods and Tools: Provide the technical expertise and automated support for
effective development.
Continuous Process Improvement
Software Process:
A process is a collection of activities, actions, and tasks that are performed when some work
product is to be created. An activity strives to achieve a broad objective (e.g., communication
with stakeholders) and is applied regardless of the application domain, size of the project,
complexity of the effort, or degree of rigor with which software engineering is to be applied.
An action (e.g., architectural design) encompasses a set of tasks that produce a major work
product (e.g., an architectural design model). A task focuses on a small, but well-defined
objective (e.g., conducting a unit test) that produces a tangible outcome.
A process framework establishes the foundation for a complete software engineering process
by identifying a small number of framework activities that are applicable to all software
projects, regardless of their size or complexity.
1. Communication: Before any technical work can commence, it is critically important to
communicate and collaborate with the customer and other stakeholders in order to understand
their objectives and to gather requirements that help define software features and functions.
2. Planning: The planning activity creates a “map” that helps guide the team as it makes the
journey. The map, called a software project plan, defines the software engineering work by
describing the technical tasks to be conducted, the risks that are likely, the resources that will
be required, the work products to be produced, and a work schedule.
3. Modelling: A software engineer creates models to better understand software requirements
and the design that will achieve those requirements.
4. Construction: This activity combines code generation (either manual or automated) and the
testing that is required to uncover errors in the code.
5. Deployment: The software (can be an increment) is delivered to the customer who evaluates
the delivered product and provides feedback based on the evaluation.
• The waterfall model, also called the classic life cycle, suggests a systematic, sequential
approach to software development that begins with customer specification of requirements
and progresses through planning, modelling, construction, and deployment, culminating in
ongoing support of the completed software (Figure 2.3).
• Reasons for the failure of the waterfall model:
1. Real projects rarely follow the sequential flow that the model proposes.
2. It is often difficult for the customer to state all requirements explicitly.
3. The customer must have patience; a working version is not available until late in the
project time span.
4. The linear nature of the classic life cycle leads to “blocking states” in which some
project team members must wait for other members of the team to complete dependent
tasks. The time spent waiting can exceed the time spent on productive work.
Incremental:
• The incremental model delivers a series of releases, called increments, that provide
progressively more functionality for the customer as each increment is delivered.
• The incremental model combines elements of linear and parallel process flows.
• For example, word-processing software developed using the incremental paradigm
might deliver basic file management, editing, and document production functions in the
first increment; more sophisticated editing and document production capabilities in the
second increment; spelling and grammar checking in the third increment; and advanced
page layout capability in the fourth increment.
• When an incremental model is used, the first increment is often a core product. That is,
basic requirements are addressed but many supplementary features remain undelivered.
• The plan addresses the modification of the core product to better meet the needs of the
customer and the delivery of additional features and functionality. This process is
repeated following the delivery of each increment, until the complete product is
produced.
• The incremental process model focuses on the delivery of an operational product with
each increment.
Specialized Process Models:
1. Component-Based Development
2. Formal Methods Model:
• The formal methods model includes a series of steps that produce a mathematical
specification of computer software. These methods use rigorous mathematical notation
to specify, develop, and verify computer-based systems.
• During the development process, formal methods provide a way to handle numerous
problems that are difficult to address using other software engineering methods. They
aid in identifying and resolving issues such as ambiguity, incompleteness, and
inconsistency with greater efficiency.
• When employed in the design phase, formal methods act as a foundation for program
verification, allowing the identification and correction of errors that might otherwise
remain unnoticed.
• Problems to be addressed:
o Creating formal models is presently a time-consuming and costly endeavour.
o Due to the limited number of software developers equipped with the requisite
expertise in applying formal methods, extensive training is necessary.
o Communicating the models to technically inexperienced clients poses
challenges.
Attributes of Web applications (WebApps):
i. Network intensiveness: A WebApp resides on a network and must serve the needs of
a diverse community of clients. The network may enable worldwide access and
communication (i.e., the Internet) or more limited access and communication (e.g., a
corporate Intranet).
ii. Concurrency. A large number of users may access the WebApp at one time. In many
cases, the patterns of usage among end users will vary greatly.
iii. Unpredictable load. The number of users of the WebApp may vary by orders of
magnitude from day to day. One hundred users may show up on Monday; 10,000 may
use the system on Thursday.
iv. Performance. If a WebApp user must wait too long (for access, for server-side
processing, for client-side formatting and display), he or she may decide to go
elsewhere.
v. Availability. Although expectation of 100 percent availability is unreasonable, users of
popular WebApps often demand access on a 24/7/365 basis. Users in Australia or Asia
might demand access during times when traditional domestic software applications in
North America might be taken off-line for maintenance.
vi. Data driven. The primary function of many WebApps is to use hypermedia to present
text, graphics, audio, and video content to the end user. In addition, WebApps are
commonly used to access information that exists on databases that are not an integral
part of the Web-based environment (e.g., e-commerce or financial applications).
vii. Content sensitive. The quality and aesthetic nature of content remains an important
determinant of the quality of a WebApp.
viii. Continuous evolution. Unlike conventional application software that evolves over a
series of planned, chronologically spaced releases, Web applications evolve
continuously. It is not unusual for some WebApps (specifically, their content) to be
updated on a minute-by-minute schedule or for content to be independently computed
for each request.
ix. Immediacy. Although immediacy—the compelling need to get software to market
quickly—is a characteristic of many application domains, WebApps often exhibit a
time-to-market that can be a matter of a few days or weeks.
x. Security. Because WebApps are available via network access, it is difficult, if not
impossible, to limit the population of end users who may access the application. In
order to protect sensitive content and provide secure modes of data transmission, strong
security measures must be implemented throughout the infrastructure that supports a
WebApp and within the application itself.
xi. Aesthetics. An undeniable part of the appeal of a WebApp is its look and feel. When
an application has been designed to market or sell products or ideas, aesthetics may
have as much to do with success as technical design.
5. Explain in brief along with diagrams – prototyping, the concurrent process model, and the spiral model.
Prototyping:
• When your customer has a legitimate need, but is clueless about the details, develop a
prototype as a first step.
• A customer defines a set of general objectives for software, but does not identify detailed
requirements for functions and features.
• A prototyping iteration is planned quickly, and modelling (in the form of a “quick design”)
occurs. A quick design focuses on a representation of those aspects of the software that will
be visible to end users (e.g., human interface layout or output display formats).
• The quick design leads to the construction of a prototype. The prototype is deployed and
evaluated by stakeholders, who provide feedback that is used to further refine requirements.
Iteration occurs as the prototype is tuned to satisfy the needs of various stakeholders.
• Ideally, the prototype serves as a mechanism for identifying software requirements.
• Problems associated with prototyping:
1. Stakeholders see what appears to be a working version of the software, without
considering overall software quality and long-term maintainability.
2. Implementation compromises are made in order to get a prototype working quickly:
an inefficient algorithm may be used, or an inappropriate operating system or
programming language may be chosen.
If all the stakeholders agree that the prototype is built to serve as a mechanism for
defining requirements, then prototyping can be an effective paradigm for software
engineering.
2) Concurrent process model
• The concurrent model is often more appropriate for product engineering projects where
different engineering teams are involved. Figure 2.8 provides a schematic
representation of one software engineering activity within the modelling activity using
a concurrent modelling approach.
• Modelling activity may be in any one of the states noted at any given time. Similarly,
other activities, actions, or tasks (e.g., communication or construction) can be
represented in an analogous manner. All software engineering activities exist
concurrently but reside in different states.
• For example, early in a project the communication activity has completed its first
iteration and exists in the awaiting-changes state. The modelling activity (which was
in the inactive state while initial communication was completed) now makes a
transition into the under-development state. If, however, the customer indicates that
changes in requirements must be made, the modelling activity moves from the
under-development state into the awaiting-changes state.
• A series of events is going to trigger transitions from state to state for each of the
software engineering activities, actions, or tasks. Concurrent modelling is applicable to
all types of software development and provides an accurate picture of the current state
of a project.
3) Spiral model
• The spiral model is an evolutionary software process model that couples the iterative
nature of prototyping with the controlled and systematic aspects of the waterfall model.
• The spiral model can be adapted to apply throughout the entire life cycle of an
application, from concept development to maintenance.
• As this evolutionary process begins, the software team performs activities that are
implied by a circuit around the spiral in a clockwise direction, beginning at the centre.
• Risk is analysed as each revolution is made.
• Project milestones are attained along the path of the spiral after each pass.
• The first circuit around the spiral might result in the development of a product
specification; subsequent passes around the spiral might be used to develop a prototype
and then progressively more sophisticated versions of the software.
• Each pass through the planning region results in adjustments to the project plan. Cost
and schedule are adjusted based on feedback derived from the customer after delivery.
• The spiral model is a realistic approach to the development of large-scale systems and
software.
• Features:
1. A risk-driven process model generator.
2. Maintains the systematic stepwise approach but incorporates it into an iterative
framework.
3. Guides multi-stakeholder concurrent engineering of software-intensive systems.
4. Concurrent in nature.
5. Cyclic approach.
6. Incrementally grows a system’s degree of definition and implementation while
decreasing its degree of risk.
7. Ensures that project milestones are met after each pass.
6. Write about the various software myths and how it all starts.
Software myths—erroneous beliefs about software and the process that is used to build it—can
be traced to the earliest days of computing. Myths have a number of attributes that make them
insidious.
Myth: A book of standards and procedures for building software provides everything needed.
• Reality: Such books often go unused, become outdated, or fail to reflect current
practices. Effective standards need to be known, used, and regularly updated to be
valuable.
Myth: Adding more programmers to a late project will speed it up.
• Reality: Known as Brooks' Law, adding people to a late project typically delays it
further due to the time needed for new team members to get up to speed and the resultant
communication overhead.
Myth: Outsourcing the project allows us to relax and let the third party handle everything.
• Reality: Without strong internal management and control, outsourcing can lead to
increased difficulties and poor project outcomes.
Customer Myths: A customer who requests computer software may be a person at the next
desk, a technical group down the hall, the marketing/sales department, or an outside company
that has requested software under contract. In many cases, the customer believes myths about
software because software managers and practitioners do little to correct misinformation.
Myths lead to false expectations (by the customer) and, ultimately, dissatisfaction with the
developer.
Myth: A general statement of objectives is sufficient to start programming; details can be filled
in later.
• Reality: Ambiguous statements of objectives are a recipe for disaster. Unambiguous
requirements are developed only through effective and continuous communication
between customer and developer.
Myth: Software requirements change frequently, but such changes are easy to accommodate
because software is flexible.
• Reality: While early changes have a minimal cost impact, changes introduced later in
the development process can cause significant disruption and require extensive
additional resources.
Practitioner’s Myths: Myths that are still believed by software practitioners have been
fostered by over 50 years of programming culture. During the early days, programming was
viewed as an art form. Old ways and attitudes die hard.
Myth: Once the program is written and works, the job is done.
• Reality: The majority of effort (60-80%) occurs after initial delivery, involving
maintenance, updates, and enhancements.
Myth: Software engineering creates unnecessary documentation and slows down the process.
• Reality: Software engineering focuses on quality. High quality reduces rework and
accelerates delivery.
Every software project begins with a business need, whether to fix a defect, adapt to changes,
extend functionality, or create something new. Initially, this need is often expressed informally,
like in casual conversations. However, as the project progresses, it becomes clear that software
will be central to its success, requiring careful planning, clear requirements, and robust
management to meet the customer's needs and market demands.
Software Engineering:
1. By Fritz Bauer: Software engineering is the establishment and use of sound engineering
principles in order to obtain economically software that is reliable and works efficiently on real
machines.
Code of ethics:
Although each of these eight principles is equally important, an overriding theme appears: a
software engineer should work in the public interest. On a personal level, a software engineer
should abide by the following rules:
• The waterfall model, also called the classic life cycle, suggests a systematic, sequential
approach to software development that begins with customer specification of requirements
and progresses through planning, modelling, construction, and deployment, culminating in
ongoing support of the completed software (Figure 2.3).
• A variation in the representation of the waterfall model is called the V-model.
• The V-model illustrates how verification and validation actions are associated with earlier
engineering actions. Figure 2.4 depicts the V-model describing the relationship of quality
assurance actions to the actions associated with communication, modelling, and early
construction activities.
• As a software team moves down the left side of the V, basic problem requirements are
refined into progressively more detailed and technical representations of the problem and
its solution.
• Once code has been generated, the team moves up the right side of the V, essentially
performing a series of tests (quality assurance actions) that validate each of the models
created as the team moved down the left side.
• The V-model provides a way of visualizing how verification and validation actions are
applied to earlier engineering work.
• Reasons for the failure of the waterfall model:
1. Real projects rarely follow the sequential flow that the model proposes.
2. It is often difficult for the customer to state all requirements explicitly.
3. The customer must have patience; a working version is not available until late in the
project time span.
4. The linear nature of the classic life cycle leads to “blocking states” in which some
project team members must wait for other members of the team to complete dependent
tasks. The time spent waiting can exceed the time spent on productive work.
1. Communication: Before any technical work can commence, it is critically important to
communicate and collaborate with the customer and other stakeholders in order to understand
their objectives and to gather requirements that help define software features and functions.
2. Planning: The planning activity creates a “map” that helps guide the team as it makes the
journey. The map, called a software project plan, defines the software engineering work by
describing the technical tasks to be conducted, the risks that are likely, the resources that will
be required, the work products to be produced, and a work schedule.
3. Modelling: A software engineer creates models to better understand software requirements
and the design that will achieve those requirements.
4. Construction: This activity combines code generation (either manual or automated) and the
testing that is required to uncover errors in the code.
5. Deployment: The software (can be an increment) is delivered to the customer who evaluates
the delivered product and provides feedback based on the evaluation.
Characteristics:
1. Software is developed or engineered; it is not manufactured in the classical sense:
Although some similarities exist between software development and hardware manufacture,
the two activities are fundamentally different.
2. Software doesn’t “wear out.”: Figure 1.1 depicts failure rate as a function of time for
hardware. The relationship, often called the “bathtub curve,” indicates that hardware exhibits
relatively high failure rates early in its life (these failures are often attributable to design or
manufacturing defects); defects are corrected and the failure rate drops to a steady-state level
for some period of time. As time passes, however, the failure rate rises again as hardware
components suffer from the cumulative effects of dust, vibration, abuse, temperature extremes,
and many other environmental maladies. Stated simply, the hardware begins to wear out.
Software is not susceptible to the environmental maladies that cause hardware to wear out. In
theory, therefore, the failure rate curve for software should take the form of the “idealized
curve” shown in Figure 1.2.
i. System software - a collection of programs written to service other programs. Some system
software processes complex, but determinate, information structures (e.g., compilers, editors,
and file management utilities). Other systems applications process largely indeterminate data
(e.g., operating system components, drivers, networking software, telecommunications
processors).
In either case, the systems software area is characterized by heavy interaction with computer
hardware; heavy usage by multiple users; concurrent operation that requires scheduling,
resource sharing, and sophisticated process management; complex data structures; and multiple
external interfaces.
ii. Application software - stand-alone programs that solve a specific business need.
Applications in this area process business or technical data in a way that facilitates business
operations or management/technical decision making. e.g., point-of-sale transaction processing
iii. Engineering/scientific software - characterized by “number crunching” algorithms, with
applications ranging from astronomy to volcanology, from automotive stress analysis to
orbital dynamics, and from molecular biology to automated manufacturing.
However, modern applications within the engineering/scientific area are moving away from
conventional numerical algorithms. Computer-aided design, system simulation, and other
interactive applications have begun to take on real-time and even system software
characteristics.
iv. Embedded software - resides within a product or system and is used to implement and
control features and functions for the end user and for the system itself. (e.g., key pad control
for a microwave oven)
Embedded software can perform limited and esoteric functions or provide significant function
and control capability (e.g., digital functions in an automobile such as fuel control, dashboard
displays, and braking systems).
v. Product-line software - designed to provide a specific capability for use by many different
customers. Product-line software can focus on a limited and esoteric marketplace (e.g.,
inventory control products) or address mass consumer markets (e.g., word processing,
spreadsheets).
vi. Web applications - called “WebApps,” this network-centric software category spans a wide
array of applications. In their simplest form, WebApps can be little more than a set of linked
hypertext files that present information using text and limited graphics.
However, as Web 2.0 emerges, WebApps are evolving into sophisticated computing
environments that not only provide stand-alone features, computing functions, and content to
the end user, but also are integrated with corporate databases and business applications.
vii. Artificial intelligence software - makes use of nonnumerical algorithms to solve complex
problems that are not vulnerable to computation or straightforward analysis. Applications
within this area include robotics, expert systems, pattern recognition (image and voice),
artificial neural networks, theorem proving, and game playing.
Challenges: The legacy to be left behind by this generation will ease the burden of future
software engineers. And yet, new challenges have appeared on the horizon:
Open-world computing - the rapid growth of wireless networking may soon lead to true
pervasive, distributed computing. The challenge for software engineers will be to develop
systems and application software that will allow mobile devices, personal computers, and
enterprise systems to communicate across vast networks.
Netsourcing - the World Wide Web is rapidly becoming a computing engine as well as a
content provider. The challenge for software engineers is to architect simple (e.g., personal
financial planning) and sophisticated applications that provide a benefit to targeted end-user
markets worldwide.
Open source - a growing trend that results in distribution of source code for systems
applications (e.g., operating systems, database, and development environments) so that many
people can contribute to its development. The challenge for software engineers is to build
source code that is self-descriptive, but more importantly, to develop techniques that will
enable both customers and developers to know what changes have been made and how those
changes manifest themselves within the software.
1. By Fritz Bauer: Software engineering is the establishment and use of sound engineering
principles in order to obtain economically software that is reliable and works efficiently on real
machines.
Software engineering encompasses a process, methods for managing and engineering software,
and tools.
1. A process is a collection of activities, actions, and tasks that are performed when some work
product is to be created.
2. An activity strives to achieve a broad objective and is applied regardless of the application
domain, size of the project, complexity of the effort, or degree of rigor with which software
engineering is to be applied. (e.g., communication with stakeholders)
3. An action (e.g., architectural design) encompasses a set of tasks that produce a major work
product (e.g., an architectural design model).
4. A task focuses on a small, but well-defined objective that produces a tangible outcome. (e.g.,
conducting a unit test)
Umbrella activities:
1. Software project tracking and control - allows the software team to assess progress against
the project plan and take any necessary action to maintain the schedule.
2. Risk management - assesses risks that may affect the outcome of the project or the quality
of the product.
3. Software quality assurance - defines and conducts the activities required to ensure software
quality.
4. Technical reviews - assess software engineering work products in an effort to uncover and
remove errors before they are propagated to the next activity.
5. Measurement - defines and collects process, project, and product measures that assist the
team in delivering software that meets stakeholders’ needs.
6. Software configuration management - manages the effects of change throughout the
software process.
7. Reusability management - defines criteria for work product reuse and establishes
mechanisms to achieve reusable components.
8. Work product preparation and production - encompasses the activities required to create
work products such as models, documents, logs, forms, and lists.
18. With a neat diagram, explain the generic process model / Software Process
Framework.
• A process is defined as a collection of work activities, actions, and tasks that are
performed when some work product is to be created.
• Each of these activities, actions, and tasks reside within a framework or model that
defines their relationship with the process and with one another.
• The software process is represented schematically in Figure 2.1. Referring to the figure,
each framework activity is populated by a set of software engineering actions.
• Each software engineering action is defined by a task set that identifies the work tasks
that are to be completed, the work products that will be produced, the quality
assurance points that will be required, and the milestones that will be used to indicate
progress.
19. Explain the types of process flow in SE.
• Process flow describes how the framework activities and the actions and tasks that
occur within each framework activity are organized with respect to sequence and time.
• A linear process flow executes each of the five framework activities in sequence,
beginning with communication and culminating with deployment (Figure 2.2a).
• An iterative process flow repeats one or more of the activities before proceeding to the
next (Figure 2.2b).
• An evolutionary process flow executes the activities in a “circular” manner. Each circuit
through the five activities leads to a more complete version of the software (Figure
2.2c).
• A parallel process flow (Figure 2.2d) executes one or more activities in parallel with
other activities (e.g., modelling for one aspect of the software might be executed in
parallel with construction of another aspect of the software).
20. What is process pattern? Explain the template of process pattern.
A process pattern describes a process-related problem that is encountered during software
engineering work, identifies the environment in which the problem has been encountered, and
suggests one or more proven solutions to the problem.
Stated in more general terms, a process pattern provides a template: a consistent method for
describing problem solutions within the context of the software process. By combining
patterns, a software team can solve problems and construct a process that best meets the
needs of a project.
• Forces: The environment in which the pattern is encountered and the issues that make the
problem visible and may affect its solution.
• Type: The pattern type is specified as one of three types:
1. Stage pattern - defines a problem associated with a framework activity for the process. An
example of a stage pattern might be EstablishingCommunication. This pattern would
incorporate the task pattern RequirementsGathering and others.
2. Task pattern - defines a problem associated with a software engineering action or work task
and relevant to successful software engineering practice (e.g., RequirementsGathering)
3. Phase pattern - defines the sequence of framework activities that occurs within the process,
even when the overall flow of activities is iterative in nature. An example of a phase pattern
might be SpiralModel or Prototyping.
• Initial context. Describes the conditions under which the pattern applies.
• Problem. The specific problem to be solved by the pattern.
• Solution. Describes how to implement the pattern successfully.
• Resulting Context. Describes the conditions that will result once the pattern has been
successfully implemented.
• Related Patterns. Provide a list of all process patterns that are directly related to this one.
• Known Uses and Examples. Indicate the specific instances in which the pattern is
applicable.
Nogueira and his colleagues describe this balance as the "edge of chaos," where too much
order can stifle creativity, while too much chaos can lead to disorganization. They argue
that while prescriptive models strive for structure, they may not always be suitable in an
environment that requires adaptability and change. These models define specific process
elements, such as activities, tasks, and quality assurance mechanisms, and prescribe a
predictable workflow. Yet, the challenge remains whether to adhere to these structured
models or adopt more flexible approaches that can better accommodate the dynamic nature
of software development.
22. Discuss David Hooker’s seven principles of software engineering practice.
1. The Reason It All Exists: The primary purpose of a software system is to deliver value
to its users. Every decision should be aligned with this goal. If an aspect of the system
doesn’t add value, it should be reconsidered.
2. KISS (Keep It Simple, Stupid!): All design should be as simple as possible, but no
simpler. Simple designs are easier to understand, maintain, and extend.
3. Maintain the Vision: A clear vision is critical to the success of a software project.
Without it, the project risks becoming inconsistent and disjointed. An empowered
architect who maintains and enforces this vision can significantly enhance the project's
success.
4. What You Produce, Others Will Consume: Software is rarely used in isolation. It will
be maintained, documented, or expanded by others, so it’s important to design and
implement the system with this in mind.
5. Be Open to the Future: Systems with long lifetimes must be ready to adapt to changing
requirements and environments; never design yourself into a corner.
6. Plan Ahead for Reuse: Reuse can save time and effort, but it requires careful planning.
Reusing code and designs can be beneficial, but achieving this goal requires forethought
at every stage of development.
7. Think!: Thoughtful consideration before taking action leads to better results. Clear
thinking helps avoid mistakes and provides valuable learning opportunities when things
do go wrong. Applying the first six principles effectively requires careful and deliberate
thought.
SEPM - MODULE 2
Requirements engineering is the wide range of activities and methods that result in a
comprehension of requirements. From the standpoint of the software process, requirements
engineering is a significant software engineering action that starts during the communication
activity and extends into the modelling activity. It must be adapted to the needs of the process,
the project, the product, and the people doing the work.
1. Inception: Establishes a basic understanding of the problem, the people who want a
solution, the nature of the solution that is desired, and the effectiveness of preliminary
communication between stakeholders and the software team.
2. Elicitation: Draws out requirements from stakeholders by asking what the objectives for
the system are, what is to be accomplished, and how the system fits into the needs of the
business.
3. Elaboration: Refines basic requirements into use case scenarios that detail user
interactions, identifies analysis classes along with their attributes, services, and relationships,
and generates various diagrams.
4. Negotiation: Reconciles conflicts among stakeholders by ranking requirements and
balancing functionality, performance, and other characteristics against cost and time to
market.
5. Specification: Produces the final work product (a written document, a set of graphical
models, a prototype, or any combination of these) that describes the function and
performance of the system and the constraints that will govern its development.
6. Validation: The primary method for validating requirements is through technical review,
where a team of software engineers, customers, users, and stakeholders examine the
specification for errors, omissions, inconsistencies, conflicts, and impractical or unattainable
requirements.
7. Requirements Management: Requirements management involves tasks that enable the
project team to identify, control, and manage requirements and any changes to them
throughout the system's lifecycle, recognizing that requirements for computer-based systems
evolve over time.
The goal of negotiation is to create a project plan that fulfils stakeholders’ requirements while
considering the real-world constraints (such as time, personnel, and budget) imposed on the
software team. Successful negotiations aim for a “win-win” outcome, where stakeholders
receive a system or product that meets most of their needs, and the software team works within
realistic and achievable budgets and deadlines.
1. Recognize that It’s Not a Competition: Successful negotiations require both parties to feel
they have achieved something. Understand that compromise is necessary.
2. Map Out a Strategy: Define your goals, understand the other party’s goals, and plan how
both can be achieved. Preparation is key to successful negotiation.
3. Listen Actively: Focus on what the other party is saying without formulating your response
simultaneously.
4. Focus on Interests, Not Positions: Avoid taking rigid positions. Instead, focus on
understanding and addressing the underlying interests and concerns of the other party to find
common ground.
5. Don’t Let It Get Personal: Keep the discussion focused on solving the problem at hand,
rather than on personal disagreements or conflicts.
6. Be Creative: When faced with a problem, think outside the box to find innovative solutions
that satisfy both parties.
7. Be Ready to Commit: Once an agreement is reached, commit to it fully and move forward;
backtracking can undermine trust and delay progress.
It certainly seems simple enough — ask the customer, the users, and others what the objectives
for the system or product are, what is to be accomplished, how the system or product fits into
the needs of the business, and finally, how the system or product is to be used on a day-to-day
basis. But it isn’t simple—it’s very hard.
Christel and Kang [Cri92] identify a number of problems that are encountered as elicitation
occurs.
To help overcome these problems, you must approach requirements gathering in an organized
manner.
1. Activity Diagram:
The UML activity diagram supplements the use case by providing a graphical representation
of the flow of interaction within a specific scenario. Similar to the flowchart, an activity
diagram uses rounded rectangles to imply a specific system function, arrows to represent flow
through the system, decision diamonds to depict a branching decision (each arrow emanating
from the diamond is labeled), and solid horizontal lines to indicate that parallel activities are
occurring. An activity diagram for the ACS-DCV use case is shown in Figure 6.5. It should be
noted that the activity diagram adds additional detail not directly mentioned (but implied) by
the use case.
2. Swimlane Diagram:
The UML Swimlane diagram is a useful variation of the activity diagram and allows you to
represent the flow of activities described by the use case and at the same time indicate which
actor (if there are multiple actors involved in a specific use case) or analysis class has
responsibility for the action described by an activity rectangle. Responsibilities are represented
as parallel segments that divide the diagram vertically, like the lanes in a swimming pool.
Referring to Figure 6.6, the activity diagram is rearranged so that activities associated with a
particular analysis class fall inside the swimlane for that class. For example, the Interface class
represents the user interface as seen by the homeowner. The activity diagram notes two prompts
that are the responsibility of the interface - “prompt for re-entry” and “prompt for another
view.” These prompts and the decisions associated with them fall within the Interface
swimlane. However, arrows lead from that swimlane back to the Homeowner swimlane,
where homeowner actions occur.
1. The model should focus on requirements that are visible within the problem or business
domain. The level of abstraction should be relatively high.
2. Each element of the requirements model should add to an overall understanding of software
requirements and provide insight into the information domain, function, and behaviour of the
system.
CRC (Class-Responsibility-Collaborator) modeling uses simple index cards, one per class,
each with three parts:
1. Class Name: The name of the class, written at the top of the card.
2. Responsibilities: The attributes and operations that the class is responsible for. These
are listed on the left side of the card.
3. Collaborators: Other classes that the class interacts with to fulfill its responsibilities.
These are listed on the right side of the card.
Example
Consider a simple home security system. Let's model the FloorPlan class using CRC modeling.
Responsibilities: for example, defines the floor plan name/type, manages floor plan
positioning, scales the floor plan for display, and incorporates walls, doors, windows, and
camera positions.
Collaborators: for example, the Wall and Camera classes.
CRC modeling is particularly useful during the early stages of object-oriented design. It allows
teams to:
• Identify Classes: Helps in discovering the classes needed for the system.
• Determine Collaborations: Identifies how classes will interact with one another to
achieve system functionality.
Advantages
• Simplicity: CRC cards provide a straightforward way to think about the design of a
system.
CRC modeling is an effective and simple technique to identify classes, define their
responsibilities, and understand their interactions in an object-oriented system. It lays the
groundwork for more detailed design and implementation phases by providing a clear,
organized structure for the system’s components.
• The objective is to recognize the problem, suggest components of the solution, discuss
various strategies, and outline an initial set of solution requirements, all within an
environment that supports achieving the objective.
• Basic guidelines
• Meetings are conducted and attended by both software engineers and other
stakeholders.
• Rules for preparation and participation are established.
• An agenda is suggested that is formal enough to cover all important points but
informal enough to encourage the free flow of ideas.
• A “facilitator” (can be a customer, a developer, or an outsider) controls the meeting.
• A “definition mechanism” (can be work sheets, flip charts, or wall stickers or an
electronic bulletin board, chat room, or virtual forum) is used.
• During inception, the developer and customers write a one- or two-page “product
request.” A meeting location, time, and date are determined; a facilitator is appointed;
and participants from the software team and other stakeholder groups are invited to
join. The product request is shared with all attendees prior to the meeting.
• Prior to the meeting, each participant is asked to review the product request and create
several lists: one of objects within the environment surrounding the system, another of
objects the system will produce, and a third of objects the system will use to carry out
its functions.
• Additionally, participants should compile a list of services (processes or functions)
that interact with or manipulate these objects.
• Finally, they need to develop lists of constraints (such as cost, size, and business
rules) and performance criteria (such as speed and accuracy). The goal is to create an
agreed-upon list of objects, services, constraints, and performance criteria for the
system that will be developed.
• Each mini-specification is an elaboration of an object or service. The mini-specs are
shared with all stakeholders for discussion, where additions, deletions, and further
details are made. This process may reveal new objects, services, constraints, or
performance requirements that will be added to the initial lists.
Quality function deployment (QFD) is a quality management technique that translates the
needs of the customer into technical requirements for software. QFD “concentrates on
maximizing customer satisfaction from the software engineering process” [Zul92]
1. Normal requirements. These reflect objectives and goals stated for a product or system
during meetings with the customer. If these requirements are present, the customer is
satisfied (e.g., requested graphical displays or specific system functions).
2. Expected requirements. These requirements are implicit to the product or system and
may be so fundamental that the customer does not explicitly state them. Their absence will be
a cause for significant dissatisfaction. Examples of expected requirements are: ease of
human/machine interaction, overall operational correctness and reliability, and ease of
software installation.
3. Exciting requirements. These features exceed the customer’s expectations and are highly
satisfying when included. For example, software for a new mobile phone comes with
standard features, but is coupled with a set of unexpected capabilities (e.g., multi-touch
screen, visual voice mail) that delight every user of the product.
QFD gathers requirements through customer interviews and observations, surveys, and
analysis of historical data (such as problem reports). This information is compiled into a
customer voice table, which is reviewed with the customer and other stakeholders. Various
diagrams, matrices, and evaluation methods are then employed to identify expected
requirements and try to uncover exciting requirements.
Stakeholder Collaboration:
Identifying Stakeholders:
• Stakeholders are anyone who benefits from the system being developed. The process
begins with identifying and listing these stakeholders, which grows as more are
contacted.
Multiple Viewpoints:
9. Reflection: Does the model reflect the intended information, function and behaviour?
10. Partitioning: Has the model been partitioned to reveal detailed information?
11. Patterns: Are requirements patterns used, validated and consistent with customer needs?
2. Use Cases:
o Use Case: The primary tool in scenario-based modeling, a use case describes a
sequence of actions that the system performs in response to an actor’s request.
Each use case captures a specific functionality of the system from the user’s
perspective.
o Components: A use case typically includes actors (who interact with the
system), a description of the interaction, preconditions (what must be true
before the use case starts), and postconditions (what is true after the use case
completes).
3. Activity Diagrams:
4. Importance:
o User-Centric: Scenario-based models focus on how users will actually use the
system, ensuring that the system’s design meets user needs.
5. Development:
• The first step in creating a use case is to identify the "actors," which are roles that people
or devices play when interacting with the system. An actor is anything external to the
system that communicates with it.
• An actor represents a role rather than a specific person. For example, a single user might
play multiple roles (e.g., programmer, tester, monitor) that translate into different actors
within the use case.
• Primary actors interact directly with the system to achieve its main functions, while
secondary actors support the primary actors.
• Use cases are developed by answering specific questions about the actors and their
interactions with the system. Questions include identifying primary and secondary
actors, their goals, main tasks, potential exceptions, and variations in interactions.
1. Scenario-Based Elements: Scenario-based elements describe the system from the user's
perspective, often using use cases and corresponding diagrams. These elements are typically
the first part of the requirements model to be developed and serve as input for other modelling
elements.
• Use Cases: Detailed descriptions of user interactions with the system, capturing
functional requirements.
• Use-Case Diagrams: Visual representations of the interactions between actors (users or
other systems) and the system itself.
• Activity Diagrams: Show the flow of activities involved in a use case, as illustrated in
Figure 5.3.
2. Class-Based Elements: Class-based elements focus on the objects manipulated by the
system and their interactions.
• Class Diagrams: Represent classes (e.g., Sensor class in Figure 5.4) with their attributes
(e.g., name, type) and operations (e.g., identify, enable).
• Relationships and Interactions: Diagrams depicting how classes interact and collaborate
with each other.
3. Behavioral Elements: Behavioral elements model how the system behaves in response to
external stimuli and internal processes.
• State Diagrams: Show the states of a system and the transitions between these states
triggered by events. For example, a state diagram for the SafeHome control panel
software could depict modes like reading user input, processing input, and responding
to commands (Figure 5.5).
• Sequence Diagrams: Depict the sequence of messages exchanged between objects to
carry out a function.
1. Purpose:
o The primary goal of domain analysis is to identify and create reusable analysis
patterns and classes that can be applied to various projects within a specific
business domain. By doing so, the development process is expedited, time-to-
market is improved, and development costs are reduced.
2. Application Domain:
3. Reusability:
4. Analysis Patterns:
o Analysis patterns are recurring solutions to common problems within a specific
domain. These patterns are identified through domain analysis and categorized
so they can be applied to new projects within the same domain.
o Expert Advice: Consulting domain experts who have deep knowledge and
experience in the domain.
• Cost Reduction: Reuse leads to lower development costs as less time and effort are
spent on creating new solutions.
• Improved Quality: Reused components are typically well-tested and refined, leading
to higher quality software.
1. Entity-Relationship Diagram (ERD)
• Definition: An ERD is a visual tool used in data modeling to represent all data objects
within a system, their relationships, and other relevant details.
• Purpose: It shows how data objects are interconnected and how they interact within an
application.
2. Data Objects
• Forms of Data Objects: They can represent external entities (e.g., a person),
occurrences (e.g., an event like an alarm), roles (e.g., salespeople), organizational units
(e.g., departments), places (e.g., warehouses), or structures (e.g., files).
• Description and Representation: A data object's description includes the object itself
and its attributes. It can be represented in a table format where attributes are the
headings, and rows represent specific instances (e.g., a table of cars with attributes like
make, model, ID number).
3. Data Attributes
• Purpose: Attributes are used to name, describe, and sometimes reference data objects.
• Functions:
• Contextual Choice: The choice of attributes depends on the specific application. For
example, attributes in a DMV application might include make, model, and ID number,
while an automobile manufacturing control software might include interior code and
transmission type.
4. Relationships
• Definition: Relationships describe how data objects are connected to one another,
crucial for developing a comprehensive data model.
• Examples:
These concepts form the foundation of understanding how data is structured and interacted
with in software systems, particularly through the use of ERDs in the design and development
of databases and applications.
SEPM – MODULE 3
Agile process in the context of software development refers to the ability of a development
process to quickly adapt to changes and unpredictability. This concept, primarily discussed in
agile software methodologies, addresses several key assumptions and characteristics about
software projects:
1. Unpredictable Change: It is difficult to predict in advance which software requirements
will persist and which will change, and how customer priorities will change as the project
proceeds.
2. Interleaving of Design and Construction: For many software projects, the design and
construction phases are not strictly sequential but are instead interleaved. This means that
design models are validated through construction activities as they are created, making it
difficult to determine the extent of design needed before beginning construction.
1. Incremental Adaptation: Rather than attempting to predict and plan for all changes upfront,
an agile process adapts incrementally. This means delivering software in small, manageable
increments that can be reviewed and adjusted based on customer feedback.
2. Customer Feedback: Frequent and ongoing feedback from customers is crucial. It helps
the development team make the necessary adaptations to the product, ensuring it meets the
evolving needs and priorities of the customers.
Benefits of Agility:
The toolset of the agile process includes both technological and non-technological aids
designed to enhance team collaboration, communication, and overall project efficiency.
1. Social Tools:
• Hiring Practices: One of the social tools is the practice of assessing a prospective team
member’s fit through pair programming sessions with an existing team member. This
allows the team to evaluate the candidate’s skills and compatibility with the team
dynamics in real-time, ensuring that the right people are brought on board.
• Whiteboards, Poster Sheets, Index Cards, Sticky Notes: These low-tech tools
facilitate active communication by allowing team members to visualize and manipulate
information during meetings or brainstorming sessions.
• Information Radiators: Passive communication tools such as flat panel displays that
show the overall status of different components of a project, enabling the team to stay
informed about progress without needing constant verbal updates.
• Earned Value Charts and Graphs of Tests Created vs. Passed: These tools provide
a clear, visual representation of project progress, focusing on tangible outcomes rather
than traditional project management tools like Gantt charts.
• Time-Boxing and Pair Programming: These are process tools that help streamline
the work process and ensure efficiency. Time-boxing restricts tasks to a set timeframe,
while pair programming encourages collaborative coding practices.
• Electronic Whiteboards: These are physical devices that enable dynamic and real-
time collaboration, particularly useful for distributed teams.
• Collocated Teams: Encouraging teams to work in a shared physical space fosters better
collaboration and stronger team culture.
In the agile process, the term "tools" extends beyond software or digital tools to include any
social, physical, or process mechanisms that enhance the work environment, collaboration, and
communication among team members. The agile toolset is diverse, encompassing everything
from hiring practices and team dynamics to physical workspaces and visual management aids,
all aimed at improving the efficiency and quality of the final product. These tools are critical
to the success of agile teams as they align with the core principles of agility: collaboration,
responsiveness, and continuous improvement.
Agility in the context of software engineering refers to the ability of a software development
team to quickly and effectively respond to changes throughout the development process.
Principles:
1. Our highest priority is to satisfy the customer through early and continuous delivery of
valuable software.
2. Welcome changing requirements, even late in development. Agile processes harness change
for the customer’s competitive advantage.
3. Deliver working software frequently, from a couple of weeks to a couple of months, with a
preference to the shorter timescale.
4. Business people and developers must work together daily throughout the project.
5. Build projects around motivated individuals. Give them the environment and support they
need and trust them to get the job done.
6. The most efficient and effective method of conveying information to and within a
development team is face-to-face conversation.
7. Working software is the primary measure of progress.
8. Agile processes promote sustainable development. The sponsors, developers, and users
should be able to maintain a constant pace indefinitely.
9. Continuous attention to technical excellence and good design enhances agility.
10. Simplicity—the art of maximizing the amount of work not done—is essential.
11. The best architectures, requirements, and designs emerge from self–organizing teams.
12. At regular intervals, the team reflects on how to become more effective, then tunes and
adjusts its behaviour accordingly.
4. Explain any two agile process models other than XP that have been
proposed
i. Adaptive Software Development (ASD)
The ASD life cycle consists of three phases: speculation, collaboration, and learning.
1. Speculation: The project is initiated and adaptive cycle planning is conducted. This
phase uses project initiation information—customer’s mission statement, project
constraints, and basic requirements—to define the set of release cycles (software
increments) needed for the project.
2. Collaboration: Motivated people work together in a way that multiplies their talent
and creative output beyond their absolute numbers. Collaboration involves
communication and teamwork, individual creativity, and above all, trust. Team
members must trust one another to criticize without animosity, assist without
resentment, work diligently, possess the necessary skills, and communicate problems
effectively.
3. Learning: ASD teams learn through focus groups, technical reviews, and project
postmortems. This phase emphasizes the dynamics of self-organizing teams,
interpersonal collaboration, and both individual and team learning, leading to a higher
likelihood of project success.
ASD promotes an environment where progress is as important as the adaptive cycle's success,
fostering a collaborative and learning-oriented approach to software development.
ii. SCRUM
Scrum is an agile software development method developed by Jeff Sutherland and his team in
the early 1990s. It aligns with agile principles and guides development through a framework
involving the following activities: requirements, analysis, design, evolution, and delivery.
These activities are structured into "sprints," adaptable work units defined and modified in real
time by the Scrum team. The Scrum process includes several key components and activities:
1. Backlog: A prioritized list of project requirements or features that provide business
value for the customer. Items can be added to the backlog at any time.
2. Sprints: Work units required to achieve a requirement defined in the backlog that
must fit into a predefined time-box (typically 30 days). Changes are not introduced
during a sprint.
3. Scrum Meetings: Short (typically 15-minute) daily meetings at which team members
answer three key questions: What did you do since the last meeting? What obstacles
are you encountering? What do you plan to accomplish by the next meeting?
Led by a Scrum master, these meetings help identify potential problems early and
promote "knowledge socialization."
4. Demos: At the end of each sprint, the software increment is demonstrated to the
customer for evaluation. The demo may not include all planned functionality, but
showcases what can be delivered within the established time-box.
Scrum emphasizes the use of software process patterns that have proven effective for projects
with tight timelines, changing requirements, and critical business needs. This method fosters a
collaborative and adaptive approach to software development, ensuring continuous
improvement and customer satisfaction.
iii. Dynamic Systems Development Method (DSDM)
The DSDM life cycle consists of five activities, with the last three forming iterative cycles:
1. Feasibility Study: Establishes basic business requirements and constraints for the
application, assessing its viability as a project candidate.
2. Business Study: Defines the functional and information requirements needed for the
application to provide business value, along with the basic application architecture and
maintainability requirements.
3. Functional Model Iteration: Produces incremental prototypes to demonstrate
functionality for the customer, gathering additional requirements through user feedback
as the prototype is exercised.
4. Design and Build Iteration: Revisits and refines prototypes from the functional model
iteration to ensure they are engineered to provide operational business value for end
users. This cycle may occur concurrently with the functional model iteration.
5. Implementation: Deploys the latest software increment into the operational
environment. The increment may not be 100 percent complete, and changes may be
requested during this phase. Development work then continues by returning to the
functional model iteration activity.
DSDM emphasizes iterative development and incremental delivery, ensuring that the system
evolves based on user feedback and changing requirements, thus maximizing business value
and flexibility in the development process.
iv. Crystal
Alistair Cockburn and Jim Highsmith created the Crystal family of agile methods in order to
achieve a software development approach that puts a premium on “manoeuvrability” during
what Cockburn characterizes as “a resource limited, cooperative game of invention and
communication, with a primary goal of delivering useful, working software and a secondary
goal of setting up for the next game”
The Crystal family is actually a set of example agile processes that have been proven effective
for different types of projects. The intent is to allow agile teams to select the member of the
crystal family that is most appropriate for their project and environment.
v. Feature Driven Development (FDD)
Feature Driven Development (FDD) was originally conceived by Peter Coad and later extended
by Stephen Palmer and John Felsing to create an adaptive, agile process suitable for moderately
sized and larger software projects. FDD focuses on object-oriented software engineering and
incorporates several key principles and activities to manage complexity and ensure software
quality.
• Use of Patterns: Applies design patterns for consistent and effective analysis, design,
and construction.
Definition of a Feature:
In FDD, a feature is a client-valued function that can be implemented in two weeks or less.
This approach has several benefits:
• Ease of Description: Features are small and deliverable, making them easier for users
to describe and understand.
• Regular Deliverables: Teams develop operational features every two weeks, providing
regular, incremental progress.
• Effective Inspections: Small features make design and code inspections more
manageable and effective.
• Feature-driven Planning: Project planning, scheduling, and tracking are based on the
feature hierarchy rather than arbitrary tasks.
vi. Lean Software Development (LSD)
Lean Software Development (LSD) has adapted the principles of lean manufacturing to the
world of software engineering. The lean principles that inspire the LSD process can be
summarized as eliminate waste, build quality in, create knowledge, defer commitment, deliver
fast, respect people, and optimize the whole. Each of these principles can be adapted to the
software process. For example, eliminating waste within the context of an agile software project can mean
(1) adding no extraneous features or functions,
(2) assessing the cost and schedule impact of any newly requested requirement,
(3) removing any superfluous process steps,
(4) establishing mechanisms to improve the way team members find information,
(5) ensuring that testing finds as many errors as possible,
(6) reducing the time required to request and get a decision, and
(7) streamlining the manner in which information is transmitted to all stakeholders.
Principle 1: Be agile. Whether the process model you choose is prescriptive or agile, the basic tenets of agile development should govern your approach.
Principle 2: Focus on quality at every step. Every process activity, action, and task should focus on the quality of the work product that it produces.
Principle 3: Be ready to adapt. Adapt your approach to conditions imposed by the problem, the people, and the project itself.
Principle 4: Build an effective team. Build a self-organizing team that has mutual trust and
respect.
Principle 5: Establish mechanisms for communication and coordination. Projects fail when important information falls into the cracks or stakeholders fail to coordinate their efforts.
Principle 6: Manage change. Mechanisms must be established to manage the way changes are requested, approved, and implemented.
Principle 7: Assess risk. Lots of things can go wrong as software is being developed.
Principle 8: Create work products that provide value for others. Create only those work
products that provide value for other process activities, actions and tasks.
Industrial Extreme Programming (IXP): IXP is an organic evolution of XP that incorporates greater management involvement, expanded customer roles, and upgraded technical practices. IXP introduces six new practices:
1. Readiness Assessment
Before starting an IXP project, the organization needs to conduct a readiness assessment. This assessment ensures that: (1) an appropriate development environment exists to support IXP, (2) the team will be populated by the proper set of stakeholders, (3) the organization has a distinct quality program and supports continuous improvement, (4) the organizational culture will support the new values of an agile team, and (5) the broader project community will be populated appropriately.
2. Project Community
• Team members should be well-trained, adaptable, skilled, and suitable for a self-
organizing team.
• For large projects, the team concept evolves into a community. This community
includes technologists, customers, and various stakeholders (e.g., legal staff, quality
auditors, manufacturing, sales) who play important roles even if they are on the
periphery.
• Roles should be explicitly defined, and communication and coordination mechanisms
should be established.
3. Project Chartering
The IXP team assesses the project itself to determine whether an appropriate business justification exists and whether the project will further the overall goals and objectives of the organization.
4. Test-Driven Management
IXP projects require measurable criteria to assess project progress. Test-driven management establishes a series of measurable “destinations” and then defines mechanisms for determining whether or not these destinations have been reached.
5. Retrospectives
After delivering a software increment, the IXP team conducts retrospectives, which are
specialized technical reviews. These retrospectives:
1. Examine issues, events, and lessons learned across a software increment or the entire
release.
2. Aim to improve the IXP process.
6. Continuous Learning
Continuous learning is essential for process improvement. XP team members are encouraged
(and possibly incentivized) to learn new methods and techniques to enhance product quality.
In addition to the six new practices discussed, IXP modifies a number of existing XP practices.
• Story-driven development (SDD) insists that stories for acceptance tests be written
before a single line of code is generated.
• Domain-driven design (DDD) is an improvement on the “system metaphor” concept
used in XP. DDD suggests the evolutionary creation of a domain model that “accurately
represents how domain experts think about their subject”.
• Pairing extends the XP pair programming concept to include managers and other
stakeholders. The intent is to improve knowledge sharing among XP team members
who may not be directly involved in technical development.
• Iterative usability discourages front-loaded interface design in favour of usability
design that evolves as software increments are delivered and users’ interaction with the
software is studied.
XP Process: XP uses an object-oriented approach and encompasses a set of rules and practices that occur within four framework activities: planning, design, coding, and testing.
Planning
• The process begins with listening to understand the business context and gather
requirements. This leads to the creation of “user stories”, which describe the required
output, features, and functionality.
• Customers write user stories and prioritize them based on business value; each story is placed on an index card. The XP team assesses each story and estimates the development effort in weeks. Stories requiring more than three weeks of effort are split into smaller stories.
• Customers and developers work together to decide how to group stories into the next
release to be developed by the XP team. Once a basic commitment is made for a release,
the XP team orders the stories that will be developed in one of three ways:
(1) all stories will be implemented immediately (within a few weeks),
(2) the stories with highest value will be moved up in the schedule and implemented first, or
(3) the riskiest stories will be moved up in the schedule and implemented first.
• Project Velocity: After the first release, project velocity (the number of customer stories implemented during the release) is computed and used to estimate delivery dates and manage project scope; a small arithmetic sketch follows.
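A small sketch of the velocity arithmetic in Python (all figures are hypothetical, purely for illustration):

    import math

    # Hypothetical figures: 12 stories were implemented across the 3 iterations
    # of the first release, and 22 stories remain in the release plan.
    velocity = 12 / 3                             # stories per iteration
    iterations_needed = math.ceil(22 / velocity)  # -> 6 iterations remaining
    print(f"velocity = {velocity:.1f}, about {iterations_needed} iterations to go")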
Design
• XP design emphasizes simplicity (Keep It Simple principle) and uses CRC (Class-
Responsibility-Collaborator) cards to organize object-oriented classes relevant to the
current increment.
• For challenging design problems, spike solutions (prototypes) are created to reduce risk
and validate estimates.
• Refactoring: Refactoring is the process of changing a software system in such a way that it does not alter the external behaviour of the code yet improves its internal structure. Because design is considered transient, it is modified continuously through such behaviour-preserving changes (a minimal sketch follows this list).
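A minimal refactoring sketch in Python (the invoice functions and the 10% discount rule are hypothetical, not from the text); the assertion pins the external behaviour while the internal structure improves:

    import math

    def invoice_total_before(prices, is_member):
        # Original version: subtotal and discount logic tangled together.
        total = 0.0
        for p in prices:
            total += p
        if is_member:
            total = total - total * 0.1  # 10% member discount
        return total

    def subtotal(prices):
        return sum(prices)

    def apply_discount(amount, rate):
        return amount * (1.0 - rate)

    def invoice_total_after(prices, is_member):
        # Refactored version: same external behaviour, clearer structure.
        return apply_discount(subtotal(prices), 0.1 if is_member else 0.0)

    # The unit test that held before the refactoring must still hold afterwards.
    assert math.isclose(invoice_total_before([10.0, 20.0], True),
                        invoice_total_after([10.0, 20.0], True))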
Coding
• Before coding, unit tests are created to ensure each story’s requirements are met; this focuses the developer on essential functionality (see the test-first sketch after this list).
• Pair Programming: Two programmers work together at one workstation to write code.
This enhances problem-solving, real-time quality assurance, and adherence to coding
standards.
• Continuous Integration: Code is integrated frequently (often daily) to avoid
compatibility issues and enable early error detection.
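A minimal test-first sketch using Python’s unittest (the story, the withdraw function, and its rules are hypothetical): the tests are written from the story first, and the implementation is written to make them pass:

    import unittest

    def withdraw(balance, amount):
        # Implementation written after (and driven by) the tests below.
        if amount > balance:
            raise ValueError("insufficient funds")
        return balance - amount

    class WithdrawTest(unittest.TestCase):
        # Written first, from a story such as "reject withdrawals above the balance".
        def test_withdraw_within_balance(self):
            self.assertEqual(withdraw(balance=100, amount=40), 60)

        def test_withdraw_over_balance_is_rejected(self):
            with self.assertRaises(ValueError):
                withdraw(balance=100, amount=150)

    if __name__ == "__main__":
        unittest.main()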
Testing
• Automated unit tests are run frequently to support regression testing and ensure code
modifications do not introduce new errors.
• Integration testing occurs regularly, providing continuous progress indications and
early problem detection.
• Customer-specified acceptance tests validate overall system features and functionality,
ensuring the software meets user requirements.
i. Competence: Agile teams require members with innate talent, specific software-
related skills, and knowledge of the chosen process. While skills can be taught, a
baseline competence is essential for effective execution.
ii. Common Focus: Despite diverse roles and skills, all team members must share a
singular goal: delivering working software increments to customers as promised.
iii. Collaboration: Agile software development thrives on effective communication and
collaboration. Team members must actively communicate with each other and
stakeholders, creating and using information that drives business value.
iv. Decision-Making Ability: Agile teams operate best when they have autonomy over
technical and project decisions. Empowering teams to make decisions fosters
ownership and commitment to project success.
v. Fuzzy Problem-Solving Ability: Agile teams face ambiguity and change regularly.
They must be adaptable and capable of addressing evolving problems and requirements
flexibly. Learning from each problem-solving activity contributes to overall project
success.
vi. Mutual Trust and Respect: Trust and respect among team members are critical. A
"jelled" team is cohesive and operates collaboratively, leveraging collective strengths
for superior outcomes.
vii. Self-Organization: In the context of agile development, self-organization implies three
things:
(1) the agile team organizes itself for the work to be done,
(2) the team organizes the process to best accommodate its local environment,
(3) the team organizes the work schedule to best achieve delivery of the software
increment.
Principle 3. Someone should facilitate the activity: A facilitator should guide the
communication, mediate conflicts, and ensure productive discussion.
Principle 5. Take notes and document decisions: Keep detailed notes of important points
and decisions to avoid any misunderstandings later.
Principle 7. Stay focused: modularize your discussion: Keep discussions on-topic and
modular, addressing one issue at a time.
Principle 9. (a) Once you agree to something, move on. (b) If you can’t agree to something,
move on. (c) If a feature or function is unclear and cannot be clarified at the moment, move on.
Principle 10. Negotiation is not a contest or a game. It works best when both parties win:
Approach negotiation as a cooperative process, aiming for a win-win outcome for all parties
involved.
10. What is Agility? Explain Agility with the cost of change with Diagram.
• In software development, it's widely accepted that the cost of changes increases
nonlinearly as a project progresses.
• Early changes during requirements gathering are relatively low-cost and easy to
implement, but as the project advances, especially into later stages like validation
testing, the cost and complexity of changes escalate significantly. This is because
changes at later stages often require major modifications to the software's architecture,
components, and tests, leading to substantial time and cost implications.
• Agile methodologies aim to "flatten" this cost curve by enabling incremental delivery
and incorporating practices like continuous unit testing and pair programming. These
practices allow teams to accommodate changes even late in the project with reduced
cost and time impacts.
• While the extent of this cost reduction is still debated, evidence suggests that agile
processes can significantly mitigate the high costs traditionally associated with late-
stage changes in software development.
11. Describe briefly the design modelling principles that guide the respective
framework activity
Principle 1: Traceability to Requirements: Ensure that every element of the design model is
traceable back to the requirements model, which includes the problem's information domain,
user functions, system behaviour, and requirements classes.
Principle 3: Data Design is Crucial: Treat data design as critically important as processing
functions. A well-structured data design simplifies program flow, facilitates component
implementation, and improves processing efficiency.
Principle 4: Design Interfaces Carefully: Design both internal and external interfaces with
care to ensure efficient data flow, minimize error propagation, and simplify integration and
testing.
Principle 5: User Interface Design: Tailor the user interface to meet end-user needs with an
emphasis on ease of use, as a poorly designed interface can detract from the software's
perceived quality.
Principle 7: Loose Coupling: Maintain loose coupling between components and with the
external environment to reduce error propagation and enhance maintainability.
Principle 8: Understandable Design Models: Create design representations that are easily
understandable to effectively communicate with those involved in coding, testing, and future
maintenance.
Principle 9: Iterative Design Development: Develop the design iteratively, refining it with
each iteration and aiming for simplicity as the design evolves.
Agile Modelling Principles:
Principle 2. Travel light: Create only the essential models needed to facilitate construction.
Excessive modelling takes time and effort that could be better spent on coding and testing.
Principle 3. Strive to produce the simplest model that will describe the problem or the
software. Don’t overbuild the software. By keeping models simple, the resultant software will
also be simple. The result is software that is easier to integrate, easier to test, and easier to
maintain. In addition, simple models are easier for members of the software team to understand
and critique, resulting in an ongoing form of feedback that optimizes the end result.
Principle 4. Build models in a way that makes them amenable to change. Assume that your models will change; however, don’t neglect thoroughness, especially in requirements modelling, as it forms the foundation for accurate design.
Principle 5. Be able to state an explicit purpose for each model that is created. Every time
you create a model, ask yourself why you’re doing so. If you can’t provide solid justification
for the existence of the model, don’t spend time on it.
Principle 6. Adapt the models you develop to the system at hand. It may be necessary to
adapt model notation or rules to the application; for example, a video game application might
require a different modelling technique than real-time, embedded software that controls an
automobile engine.
Principle 7. Try to build useful models, but forget about building perfect models. When
building requirements and design models, a software engineer reaches a point of diminishing
returns. That is, the effort required to make the model absolutely complete and internally
consistent is not worth the benefits of these properties.
Principle 8. Don’t become dogmatic about the syntax of the model. If a model communicates content successfully, its representation is secondary. Although everyone on a software team should try to use consistent notation during modelling, the most important characteristic of a model is to communicate information that enables the next software engineering task. If a model does this successfully, incorrect syntax can be forgiven.
Principle 9. If your instincts tell you a model isn’t right even though it seems okay on
paper, you probably have reason to be concerned. If you are an experienced software
engineer, trust your instincts. Software work teaches many lessons—some of them on a
subconscious level. If something tells you that a design model is doomed to fail, you have
reason to spend additional time examining the model or developing a different one.
Principle 10. Get feedback as soon as you can. Every model should be reviewed by members
of the software team. The intent of these reviews is to provide feedback that can be used to
correct modelling mistakes, change misinterpretations, and add features or functions that were
inadvertently omitted.
McConnell also argues that by the year 2000, a "stable core" of software engineering
knowledge had emerged, representing about 75% of what is needed to develop complex
systems. This stable core consists of fundamental principles that underlie software engineering
practices, providing a solid foundation for applying and evaluating software engineering
models, methods, and tools.
Principle 1: Understand the Scope: Clearly define the project scope to establish a clear
destination for the software team, guiding all planning and execution efforts.
Principle 3: Recognize Iterative Planning: Understand that planning is iterative and must
adapt to changes as work progresses. Replan after each software increment based on user
feedback and project developments.
Principle 4: Base Estimates on Known Information: Provide estimates for effort, cost, and
duration based on current knowledge. Reliable estimates depend on having accurate and clear
information.
Principle 7: Adjust Granularity: Adapt the level of detail in the project plan according to the
time frame. Use high granularity for near-term tasks and lower granularity for long-term tasks,
as details become less certain over time.
Principle 8: Define Quality Assurance: Specify methods for ensuring quality in the plan, such
as scheduling technical reviews or using pair programming, to maintain high standards
throughout the project.
Principle 10: Track and Adjust the Plan: Monitor progress frequently, ideally daily, to
identify and address issues promptly. Adjust the plan as needed to stay on track and manage
any slippage.
Principle 2: Function Definition: Clearly define the software's functions, which provide value
to end users and internal support, ranging from general purpose to detailed processing tasks.
Coding Principles. The principles that guide the coding task are closely aligned with
programming style, programming languages, and programming methods. However, there are
a number of fundamental principles that can be stated:
Preparation principles: Before you write one line of code, be sure you
• Pick a programming language that meets the needs of the software to be built and the
environment in which it will operate.
• Select a programming environment that provides tools that will make your work easier.
• Create a set of unit tests that will be applied once the component you code is completed.
• Select data structures that will meet the needs of the design.
• Understand the software architecture and create interfaces that are consistent with it.
• Select meaningful variable names and follow other local coding standards.
• Create a visual layout (e.g., indentation and blank lines) that aids understanding.
Validation Principles: After you’ve completed your first coding pass, be sure you
• Conduct a code walkthrough when appropriate.
• Perform unit tests and correct the errors you’ve uncovered.
• Refactor the code.
Testing Principles:
Principle 2: Early Test Planning: Tests should be planned early, ideally after the
requirements model is complete, and before code generation begins, to ensure thorough
preparation.
Principle 3: Pareto Principle in Testing: The Pareto principle suggests that 80% of errors are likely to be found in 20% of the components, so identifying and thoroughly testing those components is crucial (a small sketch follows this list).
Principle 4: Small to Large Testing: Testing should start with individual components ("in
the small") and progressively expand to integrated clusters and the entire system ("in the
large").
Principle 5: Exhaustive Testing is Impossible: Due to the vast number of possible path
combinations, exhaustive testing is unfeasible, but adequate coverage of program logic and
conditions can be achieved.
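A small sketch of applying the Pareto idea to defect data (component names and counts are hypothetical):

    # Rank components by recorded defects and find the "vital few" that
    # account for roughly 80% of all errors; test those most thoroughly.
    defects = {"parser": 41, "ui": 7, "scheduler": 3, "reports": 2, "auth": 1, "export": 1}
    total = sum(defects.values())

    cumulative = 0
    for component, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
        cumulative += count
        print(f"{component:10s} {count:3d}  ({cumulative / total:.0%} cumulative)")
        if cumulative / total >= 0.8:
            break  # the components printed so far deserve the deepest testing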
Deployment Principles:
Principle 2: Assemble and Test a Complete Delivery Package: Before delivery, compile
all software, support files, and documentation into a complete package and thoroughly beta-
test it across various computing environments.
Principle 3: Establish a Support Regime: Set up a robust support system before delivery
to provide timely and accurate assistance to end users, ensuring customer satisfaction.
Principle 5: Fix Bugs Before Delivery: Prioritize fixing bugs before delivering software,
even under time pressure, as delivering a high-quality product late is better than delivering
a buggy product on time.
SEPM – MODULE 4
Importance:
2. Project Success Rates: Many projects fail due to poor management. For example, the
Standish Group’s analysis found that only a third of projects were successful, with
many being late or over budget.
3. Skill and Approach: Effective project management requires specific skills and a
proven approach to managing projects and risks. The National Audit Office in the UK
identified a lack of these skills as a key factor in project failures.
Planning: If the feasibility study produces results which indicate that the prospective project
appears viable, planning of the project can take place. However, for a large project, we would
not do all our detailed planning right at the beginning. We would formulate an outline plan for
the whole project and a detailed one for the first stage. More detailed planning of the later
stages would be done as they approached. This is because we would have more detailed and
accurate information upon which to base our plans nearer to the start of the later stages.
3. Write about traditional versus modern project management practices.
1. Planning Incremental Delivery
• Modern Practice: Modern approaches advocate for incremental delivery, where the
project is divided into smaller, manageable increments. This allows for regular updates
and adjustments based on ongoing customer feedback, making the project more
adaptable to changing requirements.
2. Quality Management
3. Change Management
4. Requirements Management
5. Release Management
6. Risk Management
• Traditional Practice: In traditional scope management, the project scope was defined
at the outset and adhered to strictly, with little room for changes. This often led to
"scope creep" when changes were necessary but not properly managed.
W5HH Principle: Barry Boehm summarized the questions that need to be asked and answered in order to develop an understanding of these project characteristics: Why is the system being developed? What will be done? When will it be done? Who is responsible for a function? Where are they organizationally located? How will the job be done technically and managerially? How much of each resource is needed?
2. Project Bidding: Once the top management is convinced by the business case, the project
charter is developed. For some categories of projects, it may be necessary to have formal
bidding process to select suitable vendor based on some cost-performance criteria. The
different types of bidding techniques are:
• Request for quotation (RFQ): An organization advertises an RFQ if it has good
understanding of the project and the possible solutions.
• Request for Proposal (RFP): An organization has a reasonable understanding of the problem to be solved but does not have a good grasp of the solution aspects; that is, it may not have sufficient knowledge about the different features to be implemented. The purpose of an RFP is to get an understanding of the alternative solutions that could be deployed, not vendor selection. Based on the RFP process, the requesting organization can form a clear idea of the required project solution, from which it can prepare a statement of work (SOW) for requesting RFQs from the vendors.
• Request for Information (RFI): An organization soliciting bids may publish an RFI.
Based on the vendor response to the RFI, the organization can assess the competencies
of the vendors and shortlist the vendors who can bid for the work.
3. Project Planning: During the project planning the project manager carries out several
processes and creates the following documents:
• Project plan: This document identifies the project tasks and a schedule for the project tasks that assigns project resources and time frames to the tasks.
• Resource Plan: It lists the resources, manpower and equipment that would be required
to execute the project.
• Financial Plan: It documents the plan for the costs of manpower, equipment, and other items.
• Quality Plan: Plan of quality targets and control plans are included in this document.
• Risk Plan: This document lists the identification of the potential risks, their
prioritization and a plan for the actions that would be taken to contain the different risks.
4. Project Execution: In this phase the tasks are executed as per the project plan developed
during the planning phase. Quality of the deliverables is ensured through execution of proper
processes. Once all the deliverables are produced and accepted by the customer, the project
execution phase completes and the project closure phase starts.
5. Project Closure: Project closure involves completing the release of all the required
deliverables to the customer along with the necessary documentation. All the Project resources
are released and supply agreements with the vendors are terminated and all the pending
payments are completed. Finally, a post-implementation review is undertaken to analyse the project performance and to list the lessons learned for use in future projects.
A project is generally considered successful if it meets its project objectives, which typically
include:
1. Delivering the Agreed Functionality: The project meets the functional requirements
and specifications as agreed upon at the outset.
2. Achieving the Required Level of Quality: The final product is of the quality expected
and required by stakeholders.
3. Being Completed on Time: The project is delivered within the agreed timeframe.
4. Being Completed Within Budget: The project does not exceed the allocated financial
resources.
However, success in business terms goes beyond meeting these objectives. A project is
successful in business terms if the value of the benefits generated by the project exceeds the
costs incurred.
Project Failure:
1. It Does Not Meet Project Objectives: It fails to deliver the agreed functionality, does
not meet the required quality, is late, or exceeds the budget.
2. It Fails in Business Terms: Even if the project meets its technical objectives, it may
still be a failure if it does not provide the expected business benefits. For example, a
product might be delivered on time and within budget, but if it fails to attract customers
or generate revenue, it is a business failure.
• A project might be successful on delivery but later become a business failure if it does not
continue to generate value or if the market changes.
• Conversely, a project could be delayed and over budget, but if its deliverables generate
significant long-term benefits, it might be considered a success over time.
• The distinction between project objectives and business success is crucial. Project
managers often have control over project costs but less control over the external factors that
influence the business success of the project deliverables.
• Reducing the gap between project success and business success can involve considering
broader business issues, such as market research, customer feedback, and risk management,
during the project planning and execution phases.
• Long-term benefits such as technical expertise, reusable code, and strong customer
relationships can contribute to the success of future projects, even if the immediate project
faces challenges.
• Code Reusability: In the past, software development required writing code from
scratch with no reusability options. Today, almost every programming language
supports code reusability, allowing developers to customize and extend existing
code efficiently.
• Project Duration: Historically, software projects could span multiple years. Now,
project durations have significantly reduced to only a few months due to
advancements in development methodologies and tools.
• Compulsory Systems: These are systems that users are required to use to perform
their tasks, such as an order processing system in an organization.
• Voluntary Systems: These systems are used at the user's discretion, such as
computer games, where requirements are often less precise and depend on
developer creativity, market surveys, and prototype evaluations.
• Information Systems: These systems enable staff to carry out office processes,
such as a stock control system used to manage inventory.
• Embedded Systems: These control machines or processes, such as an air
conditioning system in a building. Some systems may combine elements of both,
like a stock control system that also manages an automated warehouse.
• Expertise Deficiency: Companies may outsource parts of a project when they lack
the necessary expertise to develop certain components internally.
• Cost-Effectiveness: Outsourcing can be a cost-effective solution, allowing
companies to leverage specialized skills and resources from external providers.
Objectives-Driven Development
i) SMART objectives: Project objectives should be SMART: Specific, Measurable, Achievable, Relevant, and Time-constrained.
ii) Management control with project control cycle.
Management control in a project context involves setting objectives for a system and
continuously monitoring its performance to ensure it aligns with the set objectives. The process
is dynamic, requiring constant adjustments and updates based on the ongoing circumstances
and challenges that arise during the project's execution.
Example:
1. Setting Objectives:
o For the ICT project, the objective is to replace paper-based records with a
centrally organized database, ensuring that the system is fully operational once
all records are transferred.
2. Data Collection:
o Definition: Gathering raw data related to the project’s progress and other
critical parameters.
3. Data Processing:
o Definition: Transforming raw data into useful information that can be used to evaluate the project’s progress.
o The collected data should be analyzed to provide insights into the actual
performance against planned targets, such as comparing the estimated
completion date with the overall project timeline.
4. Decision-Making:
o Definition: Based on the analyzed data, decisions are made to keep the project
on track or adjust plans as necessary.
o If the analysis shows that some branches are behind in transferring details,
management may need to make decisions about reallocating resources, such as
moving staff temporarily to assist in data transfer.
5. Implementation of Decisions:
o Definition: Taking corrective actions based on the decisions made to ensure the
project remains aligned with its objectives.
10. Explain the software development life cycle with block diagram
➢ Architecture Design: This maps the requirements to the components of the system
that is to be built. At the system level, decisions will need to be made about which
processes in the new system will be carried out by the user and which can be
computerized. This design of the system architecture thus forms an input to the
development of the software requirements. A second architecture design process then
takes place which maps the software requirements to software components.
Coding: This may refer to writing code in a procedural language or an object-oriented language
or could refer to the use of an application-builder. Even where software is not being built from
scratch, some modification to the base package could be required to meet the needs of the new
application.
Testing (Verification and Validation): Whether software is developed specially for the
current application or not, careful testing will be needed to check that the proposed system
meets its requirements.
• Integration: The individual components are collected together and tested to see if they
meet the overall requirements. Integration could be at the level of software where
different software components are combined, or at the level of the system as a whole
where the software and other components of the system such as the hardware platforms
and networks and the user procedures are brought together.
• Qualification Testing: The system, including the software components, has to be
tested carefully to ensure that all the requirements have been fulfilled.
Acceptance Support: Once the system has been implemented there is a continuing need for
the correction of any errors that may have crept into the system and for extensions and
improvements to the system. Maintenance and support activities may be seen as a series of
minor software projects.
11. List the characteristics of projects and show the differences between
Contract management and project management
• Devise and write test cases that will check that each requirement has been satisfied.
• Create test scripts and expected results for each test case.
• Compare the actual results and the expected results and identify discrepancies.
Methodology: While a method relates to a type of activity in general, a plan takes that method (and perhaps others) and converts it to real activities, identifying for each activity: its start and end dates, who will carry it out, and what tools and materials (including information) will be used.
13. Explain with a neat block diagram how a project management life cycle (PMLC) drives a software development life cycle.
14. List the different types of stakeholders responsible for successful
completion of software project.
Stakeholders are the people who have a stake or interest in the project.
Categories of Stakeholders:
1. Internal to the Project Team: Under direct managerial control of the project leader.
2. External to the Project Team but within the Same Organization: For example, users
assisting with system testing; requires negotiated commitment.
3. External to Both the Project Team and the Organization: Includes customers or
users benefiting from the system and contractors working on the project; relationships
based on contracts.
• Different stakeholders have different objectives that need to be recognized and reconciled
by the project leader (e.g., ease of use for end-users vs. staff savings for managers).
• Theory W: Proposed by Boehm and Ross, where the project manager aims to create win-
win situations for all parties involved.
• Important stakeholder groups can sometimes be missed, especially in unfamiliar business
contexts.
• Communication Plan: The recommended practice is to create a communication plan at the start of a project to coordinate the efforts of stakeholders effectively.
15. List the activities involved in management and explain principal project
management process.
Management in the context of software project management involves several key activities:
1. Planning: Deciding what is to be done.
2. Organizing: Making arrangements for the work.
3. Staffing: Selecting the right people for the job.
4. Directing: Giving instructions.
5. Monitoring: Checking on progress.
6. Controlling: Taking action to remedy hold-ups.
7. Innovating: Coming up with new solutions.
8. Representing: Liaising with clients, users, developers, suppliers, and other stakeholders.
• The project management process is iterative, meaning plans are revised as more
information becomes available.
• Accurate estimation of cost, duration, and effort is crucial for effective planning and
execution.
• Monitoring and control are ongoing activities throughout the project lifecycle.
Project Initiation:
Project Planning:
Project Execution:
• The project is implemented according to the plan, with ongoing monitoring and control
to ensure it stays on track.
• Monitoring: Tracking project progress.
• Control: Taking corrective actions to keep the project on track.
Project Closing:
• All project activities are completed, and contracts are formally closed.
The business case outlines the expected benefits, costs, and risks, providing a clear rationale for proceeding.
• Problem or opportunity: Clearly defines the issue the project aims to address.
• Costs and benefits: Quantifies the financial implications and expected returns.
• Aligns with strategic goals: Ensures the project contributes to overall business
objectives.
• Ensures that the project team is aligned with the project's goals.
18. What is Project? Explain the activities that benefit from the project
management. List the characteristics that distinguish projects
A project is a temporary endeavour undertaken to create a unique product, service, or result. It
has a defined beginning and end, and requires the organized application of resources and
activities to achieve specific objectives.
Project management is most beneficial for activities that fall between routine jobs and exploratory projects. These activities share characteristics of both, requiring a degree of planning and control but also involving a certain level of uncertainty and novelty.
• Construction projects: Building structures involves both routine tasks and unexpected
challenges.
• Research and development: Exploratory work combined with structured
experimentation benefits from project management.
Essentially, any activity that is complex, has a clear beginning and end, and involves multiple
interconnected tasks is a potential candidate for project management. By applying project
management principles, organizations can improve efficiency, reduce risks, and increase the
likelihood of successful project outcomes.
o Clearly define the project's scope and its objectives. Determine what the project
aims to achieve and the boundaries within which it will operate.
o Identify the products and the activities required to produce them. This step
involves breaking down the project into manageable tasks and defining what
needs to be done.
o Estimate the effort required to complete each identified activity. This includes
assessing the time, resources, and cost involved in each task.
o Identify the potential risks associated with each activity. Consider what could
go wrong and the impact these risks may have on the project's success.
o Allocate the necessary resources to each activity. This includes assigning team
members, budget, and tools needed to complete the tasks.
o Implement the project plan by carrying out the defined activities. Monitor
progress and make adjustments as necessary to stay on track.
o Engage in more detailed planning for lower-level tasks as the project progresses.
This involves refining activities, updating estimates, and continually assessing
risks.
o Continually review the project at each stage to ensure quality and alignment
with objectives. This involves feedback loops where you revisit previous steps
to refine and improve the plan.
➢ The final customer or user is naturally anxious about the general quality of the software, especially about its reliability.
➢ They are also concerned about safety because of their dependency on the software system; systems such as aircraft control systems are safety-critical.
➢ As software is developed through a number of phases, the output of one phase is given as input to the next. So, if an error introduced in an initial phase is not found, then at a later stage it is difficult to fix, and the cost incurred is higher.
➢ The unknown number of errors makes the debugging phase difficult to control.
The SEI Capability Maturity Model (CMM) is a framework developed by the Software
Engineering Institute (SEI) to assess and improve the maturity of software development
processes within organizations. It categorizes organizations into five maturity levels based on
their process capabilities and practices:
Outcome at Level 1 (Initial):
❖ Processes are ad hoc, and success depends on individual effort rather than on established practices.
Outcome at Level 2 (Repeatable):
❖ Basic project management practices like planning and tracking costs/schedules are in place.
Outcome at Level 3 (Defined):
❖ Processes for both management and development activities are defined and documented.
Outcome at Level 4 (Managed):
❖ Focus on managing and optimizing processes to meet quality and performance goals.
Outcome at Level 5 (Optimizing):
❖ Lessons learned from projects are used to refine and enhance processes.
1) Capability Evaluation: Used by contract awarding authorities (like the US DoD) to assess potential contractors' capabilities to predict performance if awarded a contract.
2) Process Assessment: Used within an organization with the objective of improving its own process capability.
Initial (Level 1)
Managed (Level 2)
Defined (Level 3)
Quantitatively Managed (Level 4)
Optimizing (Level 5)
• Key Process Areas: Organizational innovation and deployment, causal analysis and
resolution.
• Description: The focus is on continuous process improvement. The organization
continually improves its processes based on a quantitative understanding of the
common causes of variation inherent in processes.
Benefits of CMMI
❖ Broad Applicability: CMMI's abstract nature allows it to be applied not only to software
development but also to various other disciplines and industries.
2. Leadership: Providing unity of purpose and direction for achieving quality objectives.
6. Factual Approach to Decision Making: Making decisions based on analysis of data and
information.
• A quality specification is concerned with how well the functions are to operate.
Internal Factors: Known to developers, such as well-structured code, which may enhance
reliability.
Measuring Quality:
Necessity of Measurement: To judge if a system meets quality requirements, its qualities must
be measurable.
Good Measure: Relates the number of units to the maximum possible (e.g., faults per thousand
lines of code).
Clarification Through Measurement: Helps to define and communicate what quality really
means, effectively answering "how do we know when we have been successful?"
Direct vs. Indirect Measures:
Direct Measurement: Measures the quality itself (e.g., faults per thousand lines of code).
Indirect Measurement: Measures an indicator of the quality (e.g., the number of user inquiries at a help desk as an indicator of usability). A small sketch of the direct measure follows.
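A small sketch of the direct measure mentioned above (the fault and size figures are hypothetical):

    # Defect density: faults per thousand lines of code (KLOC).
    faults_found = 27
    lines_of_code = 12_500
    defect_density = faults_found / (lines_of_code / 1000)
    print(f"{defect_density:.2f} faults per KLOC")  # -> 2.16 faults per KLOC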
Setting Targets:
Impact on Project Team: Quality measurements set targets for team members.
Example: Counting errors found in program inspections may not be meaningful if errors are
allowed to pass to the inspection stage rather than being eradicated earlier.
• Independent evaluators who assess the quality of a software product, not for themselves but for a community of users.
ISO 9126 also introduces another type of element – quality in use – for which the following elements have been identified: effectiveness, productivity, safety, and satisfaction.
1. Functionality:
• Definition: The functions that a software product provides to satisfy user needs.
• Sub-characteristics: Suitability, accuracy, interoperability, security, compliance.
• ‘Functionality Compliance’ refers to the degree to which the software adheres to
application-related standard or legal requirements. Typically, these could be auditing
requirement. ‘Interoperability’ refers to the ability of software to interact with others.
2. Reliability:
• Definition: The capability of the software to maintain its level of performance under
stated conditions.
• Sub-characteristics: Maturity, fault tolerance, recoverability.
• Maturity refers to the frequency of failure due to faults in the software: the more the software has been used, the more faults will have been uncovered and removed. Recoverability describes the capability of the software to re-establish its level of performance and recover affected data after a failure. (Control of access to a system is security, which falls under functionality.)
3. Usability:
• Definition: The effort needed for use by a stated or implied set of users.
• Sub-characteristics: Understandability, learnability, operability.
4. Efficiency:
• Definition: The ability to use resources in relation to the amount of work done.
• Sub-characteristics: Time behaviour, resource utilization.
5. Maintainability:
• Definition: The effort needed to make specified modifications to the software.
• Sub-characteristics: Analysability, changeability, stability, testability.
6. Portability:
• Definition: The ability of the software to be transferred from one environment to another.
• Sub-characteristics: Adaptability, installability, conformance, replaceability.
9. List the guidelines given by ISO 9126 for the use of the quality
characteristics.
4. Identify the relevant internal measurements and the intermediate products in which they
appear.
• Identify and track internal measurements such as cyclomatic complexity, code coverage, and defect density (a small complexity sketch follows this list).
• Relate these measurements to intermediate products like source code, test cases, and
documentation.
5. Overall assessment of product quality: To what extent is it possible to combine ratings for
different quality characteristics into a single overall rating for the software?
• Focus on key quality requirements and address potential weaknesses early to avoid the need
for an overall quality rating later.
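As an aside on one internal measurement named above: cyclomatic complexity can be computed from a control-flow graph as V(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components (usually 1). A small sketch with hypothetical graph figures:

    def cyclomatic_complexity(edges, nodes, components=1):
        # V(G) = E - N + 2P: the number of linearly independent paths.
        return edges - nodes + 2 * components

    print(cyclomatic_complexity(edges=9, nodes=7))  # -> 4 independent paths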
▪ Evidence: A section in the procedures manual that outlines the steps, roles, and
responsibilities for conducting requirements analysis.
▪ Assessment: Assessors would review the documented procedures to ensure they clearly define
how requirements analysis is to be conducted. This indicates that the process is defined (3.1 in
Table 13.5).
▪ Evidence: Control documents or records showing that the documented requirements analysis
process has been used and followed in actual projects.
▪ Assessment: Assessors would look for signed-off control documents at each step of the
requirements analysis process, indicating that the defined process is being implemented and
deployed effectively (3.2 in Table 13.5).
Here’s a structured approach, drawing from CMMI principles, to address these issues and
improve process maturity:
1. Resource Overcommitment:
Issue: Lack of proper liaison between the Head of Software Engineering and Project Engineers
leads to resource overcommitment across new systems and maintenance tasks simultaneously.
2. Requirements Volatility:
Issue: Initial testing of prototypes often reveals major new requirements.
3. Uncontrolled Change:
Issue: Lack of proper change control results in increased demands for software development beyond original plans.
4. Testing Delays:
Issue: Completion of system testing is delayed due to a high volume of bug fixes.
Objective: Introduce structured planning and control mechanisms to assess and distribute
workloads effectively.
Actions:
❖ Implement formal project planning processes where software requirements are mapped to
planned work packages.
❖ Define clear milestones and deliverables, ensuring alignment with both hardware and
software development phases.
Expected Outcomes:
Objective: Establish robust change control procedures to manage and prioritize system changes
effectively.
Actions:
❖ Define a formal change request process with clear documentation and approval workflows.
❖ Ensure communication channels between development teams, testing groups, and project
stakeholders are streamlined for change notifications.
Expected Outcomes:
Objective: Improve testing and validation processes to reduce delays in system testing and bug
fixes.
Actions:
❖ Foster a culture of quality assurance and proactive bug identification throughout the
development phases.
Expected Outcomes:
Focus: Transition from ad-hoc, chaotic practices to defined processes with formal planning and
control mechanisms.
Time Management: PSP advocates that developers should track the way they spend their time. The actual time spent on a task should be measured with the help of a stop-clock to get an objective picture of the time spent. An engineer should measure the time he spends on various development activities such as designing, writing code, and testing. A minimal logging sketch follows.
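A minimal sketch of such a personal time log (activity names and durations are hypothetical):

    from collections import defaultdict

    # (activity, minutes) entries recorded with a stop-clock during the day.
    time_log = [("design", 45), ("coding", 90), ("coding", 30), ("testing", 60)]

    totals = defaultdict(int)
    for activity, minutes in time_log:
        totals[activity] += minutes

    for activity, minutes in sorted(totals.items()):
        print(f"{activity:8s} {minutes:4d} min")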
PSP Planning: Individuals must plan their project. The developers must estimate the maximum, minimum, and average LOC required for the product. They record the plan data in a project plan summary.
The PSP is schematically shown in Figure 13.7. An individual developer must plan the personal activities and make the basic plans before starting the development work. While carrying out the activities of the different phases of software development, the individual developer must record the log data using time measurement.
During the post-implementation project review, the developer can compare the log data with the initial plan to achieve better planning in future projects and to improve the process. The
four maturity levels of PSP have schematically been shown in Fig 13.8. The activities that the
developer must perform for achieving a higher level of maturity have also been annotated on
the diagram.
15. Explain Six Sigma method.
• Motorola, USA, initially developed the six-sigma method in the early 1980s. The
purpose of six sigma is to develop processes to do things better, faster, and at a lower
cost.
• Six sigma becomes applicable to any activity that is concerned with cost, timeliness,
and quality of results. Therefore, it is applicable to virtually every industry.
• Six sigma seeks to improve the quality of process outputs by identifying and removing the causes of defects and minimizing variability in the process.
• Six sigma is essentially a disciplined, data-driven approach to eliminate defects in any
process. The statistical representation of six sigma describes quantitatively how a
process is performing. To achieve six sigma, a process must not produce more than 3.4
defects per million defect opportunities.
• A six-sigma defect is defined as any system behaviour that is not as per customer specifications. The total number of six-sigma defect opportunities is then the total number of chances for committing an error. The sigma level of a process can easily be calculated using a six-sigma calculator; a small computation sketch follows.
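A small computation sketch for defects per million opportunities (DPMO), the quantity behind the 3.4-defects-per-million target; the process figures are hypothetical:

    def dpmo(defects, units, opportunities_per_unit):
        # Defects per million opportunities.
        return defects * 1_000_000 / (units * opportunities_per_unit)

    # 18 defects over 5,000 units with 4 defect opportunities per unit.
    print(dpmo(defects=18, units=5_000, opportunities_per_unit=4))  # -> 900.0 DPMO
    # Six sigma requires this figure to be no more than 3.4.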
16. Explain the themes that emerge in discussion of software quality.
Benefits of Inspections:
▪ Inspections are noted for their effectiveness in eliminating superficial errors, motivating
developers to write better-structured code, and fostering team collaboration and spirit.
▪ They also facilitate the dissemination of good programming practices and improve overall
software quality by involving stakeholders from different stages of development.
• Inspection can be carried out by colleagues at all levels except the very top.
• Inspection meeting does not last for more than two hours.
• The inspection is led by a moderator who has had specific training in the techniques.
• Statistics are maintained so that the effectiveness of the inspection process can be monitored.
▪ Software systems were becoming increasingly complex, making it impractical to test every
possible input combination comprehensively.
▪ Edsger Dijkstra and others argued that testing could only demonstrate the presence of errors,
not their absence, leading to uncertainty about software correctness.
2. Structured Programming:
▪ Each component was designed to be self-contained with clear entry and exit points,
facilitating easier understanding and validation by human programmers.
3. Clean-Room Software Development:
▪ Developed by Harlan Mills and others at IBM, clean-room software development introduced a rigorous methodology to ensure software reliability.
➢ Development Team: Implements the code without conducting machine testing; focuses on
formal verification using mathematical techniques.
➢ Certification Team: Conducts testing to validate the software, using statistical models to
determine acceptable failure rates.
4. Incremental Development:
▪ Systems were developed incrementally, ensuring that each increment was capable of
operational use by end-users.
▪ This approach avoided the pitfalls of iterative debugging and ad-hoc modifications, which
could compromise software reliability.
▪ The certification team's testing was thorough and continued until statistical models showed
that the software failure rates were acceptably low.
Formal methods
• Preconditions define the allowable states, before processing, of the data items upon which a procedure is to work.
• Postconditions define the state of those data items after processing. The mathematical notation should ensure that such a specification is precise and unambiguous. A hedged illustration follows.
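A hedged illustration of the pre/postcondition idea using plain Python assertions (this is not a formal specification notation, merely the contract idea; the function is hypothetical):

    def integer_sqrt(n):
        assert isinstance(n, int) and n >= 0        # precondition on the input state
        root = 0
        while (root + 1) * (root + 1) <= n:
            root += 1
        assert root * root <= n < (root + 1) ** 2   # postcondition on the result
        return root

    print(integer_sqrt(17))  # -> 4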
• Staff are involved in the identification of sources of errors through the formation of quality circles. These can be set up in all departments of an organization, including those producing software, where they are known as software quality circles (SWQC).
• A quality circle is a group of four to ten volunteers working in the same area who meet for, say, an hour a week to identify, analyse and solve their work-related problems. One of their number is the group leader, and there could be an outsider, a facilitator, who can advise on procedural matters.
• Associated with quality circles is the compilation of most probable error lists. For example,
at IOE, Amanda might find that the annual maintenance contracts project is being delayed
because of errors in the requirements specifications.
20. Write a short note on Lessons learned reports
A lessons learned report is produced at the end of a project to record what went well and what went badly during the project, so that the experience gained can be passed on to future projects. Its value depends on the findings actually being disseminated and acted upon.
1) Test Planning: Test Planning consists of determining the relevant test strategies and
planning for any test bed that may be required. A test bed usually includes setting up the
hardware or simulator.
2) Test Case Execution and Result Checking: Each test case is run and the results are
compared with the expected results. A mismatch between the actual result and expected results
indicates a failure. The test cases for which the system fails are noted down for test reporting.
3) Test Reporting: When the test cases are run, the tester may raise issues, that is, report
discrepancies between the expected and the actual findings. A means of formally recording
these issues and their history is needed. A review body adjudicates these issues. The outcome
of this scrutiny would be one of the following:
• The issue is dismissed on the grounds that there has been a misunderstanding of a requirement
by the tester.
• The issue is identified as a fault which the developers need to correct - Where development
is being done by contractors, they would be expected to cover the cost of the correction.
• It is recognized that the software is behaving as specified, but the requirement originally
agreed is in fact incorrect.
• The issue is identified as a fault but is treated as an off-specification: it is decided that the application can be made operational with the error still in place.
4) Debugging: For each failure observed during testing, debugging is carried out to identify
the statements that are in error.
5) Defect Retesting: Once a defect has been dealt with by the development team, the corrected
code is retested by the testing team to check whether the defect has successfully been addressed.
Defect retest is also called resolution testing. The resolution tests are a subset of the complete
test suite (Fig: 13.10).
6) Regression Testing: Regression testing checks whether the unmodified functionalities still continue to work correctly. It is needed because a change introduced to correct an error could actually introduce errors in functionalities that were previously working correctly.
7) Test Closure: Once the system successfully passes all the tests, documents related to lessons learned, test results, logs, etc., are archived for use as a reference in future projects.
1) Testing is the most time-consuming and laborious of all software development activities. With the growing size of programs and the increased importance given to product quality, test automation is drawing considerable attention.
2) Test automation is automating one or some activities of the test process. This reduces human
effort and time which significantly increases the thoroughness of testing.
3) With automation, more sophisticated test case design techniques can be deployed. With the proper testing tools, automated test results are more reliable, and automation eliminates human errors during testing.
4) Every software product undergoes significant change over time. Each time the code changes, it needs to be tested to determine whether the changes induce any failures in the unchanged features. Thus the originally designed test suite needs to be run repeatedly each time the code changes. Automated testing tools can be used to repeatedly run the same set of test cases.
➢ Capture and Playback Tools: With this type of tool, the test cases are executed manually only once. During manual execution, the sequence and values of the various inputs, as well as the outputs produced, are recorded. Later, the tests can be automatically replayed and the results checked against the recorded outputs.
Disadvantage: Test maintenance can be costly when the unit under test changes, since some of the captured tests may become invalid.
➢ Automated Test Script Tool: Test Scripts are used to drive an automated test tool. The
scripts provide input to the unit under test and record the output. The testers employ a variety
of languages to express test scripts.
Advantage: Once the test script is debugged and verified, it can be rerun a large number of times easily and cheaply.
➢ Random Input Test Tools: In this type of an automatic testing tool, test values are
randomly generated to cover the input space of the unit under test. The outputs are ignored
because analyzing them would be extremely expensive.
Advantage: This is relatively easy and cost-effective for finding some types of defects.
Disadvantage: It is a very limited form of testing. It finds only the defects that crash the unit under test, not the majority of defects that do not crash the unit but simply produce incorrect results. A minimal harness sketch follows.
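A minimal random-input harness sketch (the unit under test is hypothetical, with a seeded defect): outputs are not checked; only crashes are reported:

    import random

    def unit_under_test(x):
        return 100 // (x - 7)  # hypothetical defect: crashes when x == 7

    for _ in range(10_000):
        x = random.randint(-1_000, 1_000)
        try:
            unit_under_test(x)
        except Exception as exc:   # a crash reveals a defect
            print(f"input {x} crashed the unit: {exc!r}")
            break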
➢ Hardware Reliability: Concerned with stability and consistent inter-failure times. Software
Reliability: Aims for growth, meaning an increase in inter-failure times as bugs are fixed.
➢ Hardware: Shows a "bathtub" curve where failure rate is initially high, decreases during the
useful life, and increases again as components wear out. Software: Reliability generally
improves over time as bugs are identified and fixed, leading to decreased failure rates
Figure 13.11(a): Illustrates the hardware product's failure rate over time, depicting the
"bathtub" curve.
Figure 13.11(b): Shows the software product's failure rate, indicating a decline in failure rate
over time due to bug fixes and improvements.
• POFOD measures the likelihood of system failure when a service request is made. For example, a POFOD of 0.001 means that 1 out of every 1000 service requests would result in a failure.
Availability:
• Measures how likely it is that the system will be available for use during a given period. Small sketches of both measures follow.
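Small sketches of both measures (all figures are hypothetical; availability here uses the common MTBF/(MTBF + MTTR) formulation):

    # POFOD: observed failures per service request.
    failures, requests = 2, 2_000
    pofod = failures / requests                 # -> 0.001, i.e., 1 failure per 1000 requests

    # Availability from mean time between failures and mean time to repair (hours).
    mtbf, mttr = 480.0, 2.0
    availability = mtbf / (mtbf + mttr) * 100   # -> about 99.59 percent
    print(pofod, f"{availability:.2f}%")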
If quality-related activities and requirements have been identified by the main planning process,
a separate quality plan may not be necessary.
When producing software for an external client, the client’s quality assurance staff might
require a dedicated quality plan to ensure the quality of the delivered products.
• A quality plan acts as a checklist to confirm that all quality issues have been addressed during
the planning process.
• Most of the content in a quality plan references other documents that detail specific quality
procedures and standards.
❖ Documentation to be produced
❖ Testing
❖ Training