
SEPM – MODULE 1

1. Define Software & Software Engineering. Why is it important? Explain the Software Process in Software Engineering.

Software is: (1) instructions (computer programs) that when executed provide desired features,
function, and performance; (2) data structures that enable the programs to adequately
manipulate information, and (3) descriptive information in both hard copy and virtual forms
that describes the operation and use of the programs.

Software Engineering:

1. By Fritz Bauer: Software engineering is the establishment and use of sound engineering
principles in order to obtain economically software that is reliable and works efficiently on real
machines.

2. IEEE [IEE93a]: The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software.

Importance:

Understanding the Problem Before Building a Solution

• Requirement Gathering: Involves collecting and analyzing the needs of various stakeholders to ensure the final product meets users' expectations.

• Requirement Analysis: Ensures clarity on what the software should achieve, preventing misunderstandings and scope creep.

Design as a Pivotal Activity

• Foundation for Architecture: Good design is crucial for creating a robust software
architecture.

• Complexity Management: Helps manage and reduce complexity, facilitating easier maintenance and scalability.

• Interaction Efficiency: Ensures that all system components work together seamlessly.

Quality and Maintainability

• High-Quality Software: Essential for reliability and efficiency, particularly in critical applications.

• Robustness and Reliability: Ensures the software can handle failures gracefully.

• Ease of Maintenance: Good design and development practices make future updates
and enhancements easier.

Complexity and Team Collaboration

• Managing Complexity: SE methodologies and tools help manage the complexity of modern software systems.

• Team Coordination: Defines processes, roles, and responsibilities to ensure smooth and efficient development.

Strategic and Tactical Decision Making

• Reliable Software: High-quality software provides accurate and reliable information for critical decisions.

• Trustworthy Systems: SE practices build software that can be trusted in strategic and
tactical operations.

Adaptability and Agility

• Tailored Processes: SE provides flexibility to adapt methods to specific team and project needs while maintaining discipline.

• Agile Practices: Promotes agile practices to respond to changes and new requirements
effectively.

Layered Technology Approach

• Quality Focus: SE emphasizes quality at its core, ensuring high standards throughout
the development process.

• Process Layer: Forms the foundation for managing projects and ensuring quality.

• Methods and Tools: Provide the technical expertise and automated support for
effective development.

Continuous Process Improvement

• Culture of Improvement: Encourages ongoing refinement of practices, drawing from philosophies like Total Quality Management and Six Sigma.

• Effective Approaches: Leads to more effective and efficient software engineering methods over time.

Software Process:

A process is a collection of activities, actions, and tasks that are performed when some work
product is to be created. An activity strives to achieve a broad objective (e.g., communication
with stakeholders) and is applied regardless of the application domain, size of the project,
complexity of the effort, or degree of rigor with which software engineering is to be applied.
An action (e.g., architectural design) encompasses a set of tasks that produce a major work
product (e.g., an architectural design model). A task focuses on a small, but well-defined
objective (e.g., conducting a unit test) that produces a tangible outcome.
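The task level can be made concrete: a single unit test is a small, well-defined task whose tangible outcome is a pass/fail result. A minimal sketch in Python (the `word_count` function and its expected behaviour are hypothetical examples, not from the source):

```python
import unittest

def word_count(text):
    """Hypothetical work product: count whitespace-separated words."""
    return len(text.split())

class TestWordCount(unittest.TestCase):
    """One task: a small, well-defined objective with a tangible pass/fail outcome."""
    def test_simple_sentence(self):
        self.assertEqual(word_count("software is developed not manufactured"), 5)

    def test_empty_text(self):
        self.assertEqual(word_count(""), 0)

# Running the tests produces the task's tangible outcome.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestWordCount)
)
```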

A process framework establishes the foundation for a complete software engineering process
by identifying a small number of framework activities that are applicable to all software
projects, regardless of their size or complexity.

1. Communication: Before any technical work can commence, it is critically important to communicate and collaborate with the customer and other stakeholders. The intent is to understand stakeholders’ objectives for the project and to gather requirements that help define software features and functions.

2. Planning: The planning activity creates a “map” that helps guide the team as it makes the journey. The map, called a software project plan, defines the software engineering work by describing the technical tasks to be conducted, the risks that are likely, the resources that will be required, the work products to be produced, and a work schedule.

3. Modelling: To better understand the problem and how it’s going to be solved, a software
engineer creates models to better understand software requirements and the design that will
achieve those requirements.

4. Construction: This activity combines code generation (either manual or automated) and the
testing that is required to uncover errors in the code.
5. Deployment: The software (can be an increment) is delivered to the customer who evaluates
the delivered product and provides feedback based on the evaluation.

2. Explain Waterfall and Incremental Development Process model with a neat block diagram. List its benefits and problems.

• The waterfall model, sometimes called the classic life cycle, suggests a systematic, sequential approach to software development that begins with customer specification of requirements and progresses through planning, modelling, construction, and deployment, culminating in ongoing support of the completed software (Figure 2.3).
• Problems with the waterfall model:
1. Real projects rarely follow the sequential flow that the model proposes.
2. It is often difficult for the customer to state all requirements explicitly.
3. The customer must have patience.
4. The linear nature of the classic life cycle leads to “blocking states,” in which some project team members must wait for other members of the team to complete dependent tasks. Time spent waiting can sometimes exceed the time spent on productive work.

Incremental:

• The incremental model delivers a series of releases, called increments that provide
progressively more functionality for the customer as each increment is delivered.
• The incremental model combines elements of linear and parallel process flows.
• For example, word-processing software developed using the incremental paradigm
might deliver basic file management, editing, and document production functions in the
first increment; more sophisticated editing and document production capabilities in the
second increment; spelling and grammar checking in the third increment; and advanced
page layout capability in the fourth increment.
• When an incremental model is used, the first increment is often a core product. That is,
basic requirements are addressed but many supplementary features remain undelivered.
• The plan addresses the modification of the core product to better meet the needs of the
customer and the delivery of additional features and functionality. This process is
repeated following the delivery of each increment, until the complete product is
produced.
• The incremental process model focuses on the delivery of an operational product with
each increment.

3. Write about the various types of specialized process models.

1 Component-Based Development

• The component-based development model incorporates many of the characteristics of the spiral model.
• It is evolutionary in nature, demanding an iterative approach to the creation of software.
• Commercial off-the-shelf (COTS) software components are pre-developed software
products created by third-party vendors. These components are designed to offer
specific functionalities and come with clearly defined interfaces, making them easy to
integrate into larger software systems being developed.
• The component-based development model includes these steps:
o 1. Investigate and assess available component-based products for the specific
application domain.
o 2. Consider potential challenges related to component integration.
o 3. Develop a software architecture that includes the identified components.
o 4. Integrate the components into the established architecture.
o 5. Conduct extensive testing to confirm proper functionality.
• The component-based development model encourages software reuse, which offers
software engineers various quantifiable benefits.
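The "clearly defined interfaces" idea behind component integration can be sketched in Python. This is an illustrative example only; `PaymentGateway`, `VendorXGateway`, and `checkout` are hypothetical names, and the vendor logic is a placeholder:

```python
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """Step 3: the architecture defines the interface a component must satisfy."""
    @abstractmethod
    def charge(self, amount_cents: int) -> bool: ...

class VendorXGateway(PaymentGateway):
    """Stand-in for a COTS component supplied by a third-party vendor."""
    def charge(self, amount_cents: int) -> bool:
        return amount_cents > 0  # placeholder for the vendor's real logic

def checkout(gateway: PaymentGateway, amount_cents: int) -> str:
    # Step 4: the component is integrated through its interface only,
    # so one vendor's component can be swapped for another's.
    return "paid" if gateway.charge(amount_cents) else "declined"

print(checkout(VendorXGateway(), 499))  # prints: paid
```

Because `checkout` depends only on the interface, step 5 (extensive testing) can run against any conforming component, including a test double.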

2 The Formal Methods Model

• The formal methods model includes a series of steps that produce a mathematical
specification of computer software. These methods use rigorous mathematical notation
to specify, develop, and verify computer-based systems.
• During the development process, formal methods provide a way to handle numerous
challenges that are challenging to address using alternative software engineering
methods. They aid in identifying and resolving problems like ambiguity,
incompleteness, and inconsistency with greater efficiency.
• When employed in the design phase, formal methods act as a foundation for program
verification, allowing the identification and correction of errors that might otherwise
remain unnoticed.
• Problems to be addressed:
o Creating formal models is presently a time-consuming and costly endeavour.
o Due to the limited number of software developers equipped with the requisite
expertise in applying formal methods, extensive training is necessary.
o Communicating the models to technically inexperienced clients poses
challenges.
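Full formal methods use mathematical notation and proof, but their flavour of precise specification can be hinted at with executable pre- and postconditions (a lightweight, design-by-contract style sketch, not a genuine formal method; `integer_sqrt` is a hypothetical example):

```python
def integer_sqrt(n: int) -> int:
    """Specification: requires n >= 0; ensures r*r <= n < (r+1)*(r+1)."""
    assert n >= 0, "precondition violated"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    # The postcondition states exactly what "integer square root" means,
    # removing the ambiguity an informal description might leave.
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r

print(integer_sqrt(10))  # 3
```

A formal method would prove the postcondition holds for all valid inputs; the assertions here only check it for each run.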

3 Aspect-Oriented Software Development

• AOSD defines "aspects" as representations of customer concerns that span across various system functions, features, and information.
• Aspect-oriented software development (AOSD), also known as aspect-oriented
programming (AOP), is a modern software engineering paradigm that offers a
structured approach and methodology for delineating, specifying, designing, and
building aspects.
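A concern that spans many functions (such as auditing) can be factored into one place. In Python, a decorator gives the flavour of an aspect, though real AOP frameworks are more general; `audit`, `open_account`, and `close_account` are hypothetical examples:

```python
import functools

def audit(func):
    """A cross-cutting 'aspect': auditing woven around many system functions."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"AUDIT: calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@audit
def open_account(name):   # one system function the concern spans
    return f"account for {name}"

@audit
def close_account(name):  # the same concern spans this one too
    return f"closed {name}"

print(open_account("alice"))
```

The auditing concern lives in one module instead of being scattered through every function it touches.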

4. What is the unique nature of software and web applications?

i. Network intensiveness: A WebApp resides on a network and must serve the needs of
a diverse community of clients. The network may enable worldwide access and
communication (i.e., the Internet) or more limited access and communication (e.g., a
corporate Intranet).
ii. Concurrency. A large number of users may access the WebApp at one time. In many
cases, the patterns of usage among end users will vary greatly.
iii. Unpredictable load. The number of users of the WebApp may vary by orders of
magnitude from day to day. One hundred users may show up on Monday; 10,000 may
use the system on Thursday.
iv. Performance. If a WebApp user must wait too long (for access, for server-side processing, for client-side formatting and display), he or she may decide to go elsewhere.
v. Availability. Although expectation of 100 percent availability is unreasonable, users of
popular WebApps often demand access on a 24/7/365 basis. Users in Australia or Asia
might demand access during times when traditional domestic software applications in
North America might be taken off-line for maintenance.
vi. Data driven. The primary function of many WebApps is to use hypermedia to present
text, graphics, audio, and video content to the end user. In addition, WebApps are
commonly used to access information that exists on databases that are not an integral
part of the Web-based environment (e.g., e-commerce or financial applications).
vii. Content sensitive. The quality and aesthetic nature of content remains an important
determinant of the quality of a WebApp.
viii. Continuous evolution. Unlike conventional application software that evolves over a
series of planned, chronologically spaced releases, Web applications evolve
continuously. It is not unusual for some WebApps (specifically, their content) to be
updated on a minute-by-minute schedule or for content to be independently computed
for each request.
ix. Immediacy. Although immediacy—the compelling need to get software to market
quickly—is a characteristic of many application domains, WebApps often exhibit a
time-to-market that can be a matter of a few days or weeks.
x. Security. Because WebApps are available via network access, it is difficult, if not
impossible, to limit the population of end users who may access the application. In
order to protect sensitive content and provide secure modes of data transmission, strong
security measures must be implemented throughout the infrastructure that supports a
WebApp and within the application itself.
xi. Aesthetics. An undeniable part of the appeal of a WebApp is its look and feel. When
an application has been designed to market or sell products or ideas, aesthetics may
have as much to do with success as technical design.
5. Explain in brief along with diagrams –

1) Evolutionary process model

• Evolutionary process models produce an increasingly more complete version of the software with each iteration.
• Business and product requirements often change as development proceeds; tight market
deadlines make completion of a comprehensive software product impossible.
• Evolutionary models are iterative. They are characterized in a manner that enables you to
develop increasingly more complete versions of the software.

Prototyping:

• When your customer has a legitimate need, but is clueless about the details, develop a
prototype as a first step.
• A customer defines a set of general objectives for software, but does not identify detailed
requirements for functions and features.
• A prototyping iteration is planned quickly, and modelling (in the form of a “quick design”)
occurs. A quick design focuses on a representation of those aspects of the software that will
be visible to end users (e.g., human interface layout or output display formats).
• The quick design leads to the construction of a prototype. The prototype is deployed and
evaluated by stakeholders, who provide feedback that is used to further refine requirements.
Iteration occurs as the prototype is tuned to satisfy the needs of various stakeholders.
• Ideally, the prototype serves as a mechanism for identifying software requirements.
• Problems associated with prototyping:
1. Stakeholders see what appears to be a working version of the software without considering overall software quality and long-term maintainability.
2. Implementation compromises are made in order to get a prototype working quickly; an inefficient algorithm or an inappropriate operating system or programming language might be used.
• If all the stakeholders agree that the prototype is built to serve as a mechanism for defining requirements, then prototyping can be an effective paradigm for software engineering.
2) Concurrent process model

• The concurrent model is often more appropriate for product engineering projects where
different engineering teams are involved. Figure 2.8 provides a schematic
representation of one software engineering activity within the modelling activity using
a concurrent modelling approach.
• Modelling activity may be in any one of the states noted at any given time. Similarly,
other activities, actions, or tasks (e.g., communication or construction) can be
represented in an analogous manner. All software engineering activities exist
concurrently but reside in different states.
• For example, assume that the communication activity has completed its first iteration and is currently in the awaiting changes state. The modelling activity (which was in the inactive state while initial communication was completed) now makes a transition into the under-development state. If, however, the customer indicates that changes in requirements must be made, the modelling activity moves from the under-development state into the awaiting changes state.
• A series of events is going to trigger transitions from state to state for each of the
software engineering activities, actions, or tasks. Concurrent modelling is applicable to
all types of software development and provides an accurate picture of the current state
of a project.
3) Spiral model

• The spiral model is an evolutionary software process model that couples the iterative
nature of prototyping with the controlled and systematic aspects of the waterfall model.
• The spiral model can be adapted to apply throughout the entire life cycle of an
application, from concept development to maintenance.
• As this evolutionary process begins, the software team performs activities that are
implied by a circuit around the spiral in a clockwise direction, beginning at the centre.
• Risk is analysed as each revolution is made.
• Project milestones are attained along the path of the spiral after each pass.
• The first circuit around the spiral might result in the development of a product
specification; subsequent passes around the spiral might be used to develop a prototype
and then progressively more sophisticated versions of the software.
• Each pass through the planning region results in adjustments to the project plan. Cost
and schedule are adjusted based on feedback derived from the customer after delivery.
• The spiral model is a realistic approach to the development of large-scale systems and
software.
• Features:
1. Risk-driven process model generator
2. Maintains the systematic stepwise approach, but incorporates it into an iterative framework
3. Guides multi-stakeholder concurrent engineering
4. Concurrent in nature
5. Cyclic approach
6. Incrementally growing
7. Helps ensure project milestones are met

6. Write about the various software myths and how it all starts.

Software myths—erroneous beliefs about software and the process that is used to build it—can
be traced to the earliest days of computing. Myths have a number of attributes that make them
insidious.

Management Myths: Managers with software responsibility, like managers in most disciplines, are often under pressure to maintain budgets, keep schedules from slipping, and improve quality. Like a drowning person who grasps at a straw, a software manager often grasps at belief in a software myth, if that belief will lessen the pressure (even temporarily).
Myth: A book of standards and procedures for building software provides everything needed.

• Reality: Such books often go unused, become outdated, or fail to reflect current
practices. Effective standards need to be known, used, and regularly updated to be
valuable.
Myth: Adding more programmers to a late project will speed it up.

• Reality: Known as Brooks' Law, adding people to a late project typically delays it
further due to the time needed for new team members to get up to speed and the resultant
communication overhead.
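The communication overhead behind Brooks' Law can be quantified: with n team members there are n(n-1)/2 possible pairwise communication paths. A quick illustrative calculation (the helper name is mine, not from the source):

```python
def comm_paths(n: int) -> int:
    # Each pair of team members is a potential communication path.
    return n * (n - 1) // 2

for team in (3, 6, 12):
    print(team, "people ->", comm_paths(team), "paths")
# Doubling a 6-person team to 12 raises paths from 15 to 66,
# so coordination cost grows much faster than headcount.
```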

Myth: Outsourcing the project allows us to relax and let the third party handle everything.

• Reality: Without strong internal management and control, outsourcing can lead to
increased difficulties and poor project outcomes.

Customer Myths: A customer who requests computer software may be a person at the next
desk, a technical group down the hall, the marketing/sales department, or an outside company
that has requested software under contract. In many cases, the customer believes myths about
software because software managers and practitioners do little to correct misinformation.
Myths lead to false expectations (by the customer) and, ultimately, dissatisfaction with the
developer.

Myth: A general statement of objectives is sufficient to start programming; details can be filled
in later.

• Reality: Ambiguous objectives often lead to project failure. Clear, detailed requirements are necessary and achieved through continuous communication.

Myth: Software requirements change frequently, but such changes are easy to accommodate
because software is flexible.

• Reality: While early changes have a minimal cost impact, changes introduced later in
the development process can cause significant disruption and require extensive
additional resources.

Practitioner’s Myths: Myths that are still believed by software practitioners have been fostered by over 50 years of programming culture. During the early days, programming was viewed as an art form. Old ways and attitudes die hard.

Myth: Once the program is written and works, the job is done.

• Reality: The majority of effort (60-80%) occurs after initial delivery, involving
maintenance, updates, and enhancements.

Myth: Quality can only be assessed once the program is running.


• Reality: Early and regular technical reviews are effective in identifying defects and
ensuring quality from the project's inception.

Myth: The only deliverable is the working program.

• Reality: Successful projects produce various work products, including models, documents, and plans, which guide development and support the software.

Myth: Software engineering creates unnecessary documentation and slows down the process.

• Reality: Software engineering focuses on quality. High quality reduces rework and
accelerates delivery.

How It All Starts:

Every software project begins with a business need, whether to fix a defect, adapt to changes,
extend functionality, or create something new. Initially, this need is often expressed informally,
like in casual conversations. However, as the project progresses, it becomes clear that software
will be central to its success, requiring careful planning, clear requirements, and robust
management to meet the customer's needs and market demands.

7. Define Software Engineering. Explain Software code of ethics.

Software Engineering:

1. By Fritz Bauer: Software engineering is the establishment and use of sound engineering
principles in order to obtain economically software that is reliable and works efficiently on real
machines.

2. IEEE [IEE93a]: The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software.

Code of ethics:

The ACM/IEEE-CS Software Engineering Code of Ethics and Professional Practice defines eight principles. Although each of these eight principles is equally important, an overriding theme appears: a software engineer should work in the public interest. On a personal level, a software engineer should abide by the following rules:

• Never steal data for personal gain.
• Never distribute or sell proprietary information obtained as part of your work on a software project.
• Never maliciously destroy or modify another person’s programs, files, or data.
• Never violate the privacy of an individual, a group, or an organization.
• Never hack into a system for sport or profit.
• Never create or promulgate a computer virus or worm.
• Never use computing technology to facilitate discrimination or harassment.

8. Explain the key challenges of Software Engineering.

1. A concerted effort should be made to understand the problem before a software solution is developed: When a new application or embedded system is to be built, many voices must be heard. And it sometimes seems that each of them has a slightly different idea of what software features and functions should be delivered.
2. Design becomes a pivotal activity: Sophisticated software that was once implemented in a
predictable, self-contained, computing environment is now embedded inside everything from
consumer electronics to medical devices to weapons systems. The complexity of these new
computer-based systems and products demands careful attention to the interactions of all
system elements.

3. Software should exhibit high quality: Individuals, businesses, and governments increasingly rely on software for strategic and tactical decision making as well as day-to-day operations and control. If the software fails, people and major enterprises can experience anything from minor inconvenience to catastrophic failures.

4. Software should be maintainable: As the perceived value of a specific application grows, the likelihood is that its user base and longevity will also grow. As its user base and time-in-use increase, demands for adaptation and enhancement will also grow.

9. Illustrate the Waterfall Model.

• The waterfall model, sometimes called the classic life cycle, suggests a systematic, sequential approach to software development that begins with customer specification of requirements and progresses through planning, modelling, construction, and deployment, culminating in ongoing support of the completed software (Figure 2.3).
• A variation in the representation of the waterfall model is called the V-model.
• The V-model illustrates how verification and validation actions are associated with earlier
engineering actions. Figure 2.4 depicts the V-model describing the relationship of quality
assurance actions to the actions associated with communication, modelling, and early
construction activities.
• As a software team moves down the left side of the V, basic problem requirements are
refined into progressively more detailed and technical representations of the problem and
its solution.
• Once code has been generated, the team moves up the right side of the V, essentially
performing a series of tests (quality assurance actions) that validate each of the models
created as the team moved down the left side.
• The V-model provides a way of visualizing how verification and validation actions are
applied to earlier engineering work.
• Problems with the waterfall model:
1. Real projects rarely follow the sequential flow that the model proposes.
2. It is often difficult for the customer to state all requirements explicitly.
3. The customer must have patience.
4. The linear nature of the classic life cycle leads to “blocking states,” in which some project team members must wait for other members of the team to complete dependent tasks. Time spent waiting can sometimes exceed the time spent on productive work.

10. Discuss the fundamental activities of Software Engineering.


A process framework establishes the foundation for a complete software engineering process
by identifying a small number of framework activities that are applicable to all software
projects, regardless of their size or complexity.

1. Communication: Before any technical work can commence, it is critically important to communicate and collaborate with the customer and other stakeholders. The intent is to understand stakeholders’ objectives for the project and to gather requirements that help define software features and functions.

2. Planning: The planning activity creates a “map” that helps guide the team as it makes the journey. The map, called a software project plan, defines the software engineering work by describing the technical tasks to be conducted, the risks that are likely, the resources that will be required, the work products to be produced, and a work schedule.

3. Modelling: To better understand the problem and how it’s going to be solved, a software
engineer creates models to better understand software requirements and the design that will
achieve those requirements.

4. Construction: This activity combines code generation (either manual or automated) and the
testing that is required to uncover errors in the code.

5. Deployment: The software (can be an increment) is delivered to the customer who evaluates
the delivered product and provides feedback based on the evaluation.

11. Explain the nature of software.


• Today, software takes on a dual role. It is a product, and at the same time, the vehicle
for delivering a product.
• As a product, it delivers the computing potential embodied by computer hardware or
by a network of computers that are accessible by local hardware. Whether it resides
within a mobile phone or operates inside a mainframe computer, software is an
information transformer - producing, managing, acquiring, modifying, displaying, or
transmitting information that can be as simple as a single bit or as complex as a
multimedia presentation derived from data acquired from dozens of independent
sources.
• As the vehicle used to deliver the product, software acts as the basis for the control of
the computer (operating systems), the communication of information (networks), and
the creation and control of other programs (software tools and environments). Software
delivers the most important product of our time - information. It transforms personal
data so that the data can be more useful in a local context; it manages business
information to enhance competitiveness; it provides a gateway to worldwide
information networks and provides the means for acquiring information in all of its
forms.

12. Define Software. Explain the characteristics of Software or Explain how software overcomes the limitations of hardware.
Software is: (1) instructions (computer programs) that when executed provide desired features,
function, and performance; (2) data structures that enable the programs to adequately
manipulate information, and (3) descriptive information in both hard copy and virtual forms
that describes the operation and use of the programs.

Characteristics:

1. Software is developed or engineered; it is not manufactured in the classical sense: Although many similarities exist between software and hardware, the manufacturing phase for hardware can introduce quality problems that do not exist for software.

2. Software doesn’t “wear out.”: Figure 1.1 depicts failure rate as a function of time for
hardware. The relationship, often called the “bathtub curve,” indicates that hardware exhibits
relatively high failure rates early in its life (these failures are often attributable to design or
manufacturing defects); defects are corrected and the failure rate drops to a steady-state level
for some period of time. As time passes, however, the failure rate rises again as hardware
components suffer from the cumulative effects of dust, vibration, abuse, temperature extremes,
and many other environmental maladies. Stated simply, the hardware begins to wear out.
Software is not susceptible to the environmental maladies that cause hardware to wear out. In
theory, therefore, the failure rate curve for software should take the form of the “idealized
curve” shown in Figure 1.2.

3. Although the industry is moving toward component-based construction, most software continues to be custom built: As an engineering discipline evolves, a collection of standard design components is created. Reusable components are created so that the engineer can concentrate on the truly innovative elements of a design, that is, the parts of the design that represent something new. A software component should be designed and implemented so that it can be reused in many different programs.

13. Explain different Software Application Domains.


Today, seven broad categories of computer software present continuing challenges for software
engineers:

i. System software - a collection of programs written to service other programs. Some system
software processes complex, but determinate, information structures (e.g., compilers, editors,
and file management utilities). Other systems applications process largely indeterminate data
(e.g., operating system components, drivers, networking software, telecommunications
processors).

In either case, the systems software area is characterized by heavy interaction with computer
hardware; heavy usage by multiple users; concurrent operation that requires scheduling,
resource sharing, and sophisticated process management; complex data structures; and multiple
external interfaces.

ii. Application software - stand-alone programs that solve a specific business need.
Applications in this area process business or technical data in a way that facilitates business
operations or management/technical decision making. e.g., point-of-sale transaction processing

In addition to conventional data processing applications, application software is used to control business functions in real time (e.g., real-time manufacturing process control).

iii. Engineering/scientific software - has been characterized by “number crunching” algorithms. Applications range from astronomy to volcanology, from automotive stress analysis to space shuttle orbital dynamics, and from molecular biology to automated manufacturing.

However, modern applications within the engineering/scientific area are moving away from
conventional numerical algorithms. Computer-aided design, system simulation, and other
interactive applications have begun to take on real-time and even system software
characteristics.

iv. Embedded software - resides within a product or system and is used to implement and
control features and functions for the end user and for the system itself. (e.g., key pad control
for a microwave oven)

Embedded software can perform limited and esoteric functions or provide significant function
and control capability (e.g., digital functions in an automobile such as fuel control, dashboard
displays, and braking systems).

v. Product-line software - designed to provide a specific capability for use by many different
customers. Product-line software can focus on a limited and esoteric marketplace (e.g.,
inventory control products) or address mass consumer markets (e.g., word processing,
spreadsheets).

vi. Web applications - called “WebApps,” this network-centric software category spans a wide
array of applications. In their simplest form, WebApps can be little more than a set of linked
hypertext files that present information using text and limited graphics.

However, as Web 2.0 emerges, WebApps are evolving into sophisticated computing
environments that not only provide stand-alone features, computing functions, and content to
the end user, but also are integrated with corporate databases and business applications.

vii. Artificial intelligence software - makes use of nonnumerical algorithms to solve complex
problems that are not vulnerable to computation or straightforward analysis. Applications
within this area include robotics, expert systems, pattern recognition (image and voice),
artificial neural networks, theorem proving, and game playing.

Challenges: The legacy to be left behind by this generation will ease the burden of future
software engineers. And yet, new challenges have appeared on the horizon:

Open-world computing - the rapid growth of wireless networking may soon lead to true
pervasive, distributed computing. The challenge for software engineers will be to develop
systems and application software that will allow mobile devices, personal computers, and
enterprise systems to communicate across vast networks.

Netsourcing - the World Wide Web is rapidly becoming a computing engine as well as a
content provider. The challenge for software engineers is to architect simple (e.g., personal
financial planning) and sophisticated applications that provide a benefit to targeted end-user
markets worldwide.

Open source - a growing trend that results in distribution of source code for systems
applications (e.g., operating systems, database, and development environments) so that many
people can contribute to its development. The challenge for software engineers is to build
source code that is self-descriptive, but more importantly, to develop techniques that will
enable both customers and developers to know what changes have been made and how those
changes manifest themselves within the software.

14. Write a short note on legacy software.


• Dayani-Fard and his colleagues describe legacy software as: Legacy software systems
. . . were developed decades ago and have been continually modified to meet changes
in business requirements and computing platforms. The proliferation of such systems
is causing headaches for large organizations who find them costly to maintain and risky
to evolve.
• Liu and his colleagues extend this description by noting that “many legacy systems
remain supportive to core business functions and are ‘indispensable’ to the business.”
Hence, legacy software is characterized by longevity and business criticality.
• Unfortunately, there is one additional characteristic that is present in legacy software -
poor quality.
• However, as time passes, legacy systems often evolve for one or more of the following
reasons:
o The software must be adapted to meet the needs of new computing environments or
technology.
o The software must be enhanced to implement new business requirements.
o The software must be extended to make it interoperable with other more modern
systems or databases.
o The software must be re-architected to make it viable within a network
environment.
• When these modes of evolution occur, a legacy system must be reengineered so that it
remains viable into the future. The goal of modern software engineering is to devise
methodologies that are founded on the notion of evolution.

15. Define Software Engineering and explain its layers.


Definitions:

1. By Fritz Bauer: Software engineering is the establishment and use of sound engineering
principles in order to obtain economically software that is reliable and works efficiently on real
machines.

2. IEEE [IEE93a]: The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software.

Software engineering encompasses a process, methods for managing and engineering software,
and tools.

• Referring to Figure 1.3, any engineering approach (including software engineering) must rest on an organizational commitment to quality.
• The bedrock that supports software engineering is a quality focus.
• The foundation for software engineering is the process layer. The software engineering
process is the glue that holds the technology layers together and enables rational and
timely development of computer software. Process defines a framework that must be
established for effective delivery of software engineering technology.
• Software engineering methods provide the technical how-to’s for building software.
Methods encompass a broad array of tasks that include communication, requirements
analysis, design modelling, program construction, testing, and support.
• Software engineering tools provide automated or semi-automated support for the
process and the methods.

16. Explain the elements of Software Process.


1. A process is a collection of activities, actions, and tasks that are performed when some work
product is to be created.

2. An activity strives to achieve a broad objective and is applied regardless of the application
domain, size of the project, complexity of the effort, or degree of rigor with which software
engineering is to be applied. (e.g., communication with stakeholders)

3. An action (e.g., architectural design) encompasses a set of tasks that produce a major work
product (e.g., an architectural design model).

4. A task focuses on a small, but well-defined objective that produces a tangible outcome. (e.g.,
conducting a unit test)
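The activity/action/task hierarchy described above can be pictured as nested data. The following is only an illustrative Python sketch; the activity and action names come from the examples in the text, while the individual task names are invented:

```python
# Activities contain actions; actions contain tasks (the smallest units
# that produce a tangible outcome). The structure below is illustrative.
process = {
    "communication with stakeholders": {        # activity
        "requirements gathering": [             # action
            "identify stakeholders",            # tasks (hypothetical)
            "conduct elicitation meeting",
        ],
    },
    "modeling": {
        "architectural design": [               # action producing a work product
            "define candidate architecture",
            "produce architectural design model",
        ],
    },
}

# Walk the hierarchy and summarize it.
for activity, actions in process.items():
    for action, tasks in actions.items():
        print(f"{activity} -> {action}: {len(tasks)} task(s)")
```

The point of the sketch is only the containment relationship: a framework activity is generic, while its actions and tasks become concrete for a given project.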

17. Explain umbrella activities.


Umbrella activities occur throughout the software process and focus primarily on project
management, tracking, and control.

1. Software project tracking and control - allows the software team to assess progress against
the project plan and take any necessary action to maintain the schedule.

2. Risk management - assesses risks that may affect the outcome of the project or the quality
of the product.

3. Software quality assurance - defines and conducts the activities required to ensure software
quality.

4. Technical reviews - assess software engineering work products in an effort to uncover and remove errors before they are propagated to the next activity.

5. Measurement - defines and collects process, project, and product measures that assist the
team in delivering software that meets stakeholders’ needs.

6. Software configuration management - manages the effects of change throughout the software process.

7. Reusability management - defines criteria for work product reuse and establishes
mechanisms to achieve reusable components.

8. Work product preparation and production - encompasses the activities required to create
work products such as models, documents, logs, forms, and lists.

18. With neat diagram explain generic process model. /Software Process
Framework.

• A process is defined as a collection of work activities, actions, and tasks that are
performed when some work product is to be created.
• Each of these activities, actions, and tasks reside within a framework or model that
defines their relationship with the process and with one another.
• The software process is represented schematically in Figure 2.1. Referring to the figure,
each framework activity is populated by a set of software engineering actions.
• Each software engineering action is defined by a task set that identifies the work tasks
that are to be completed, the work products that will be produced, the quality
assurance points that will be required, and the milestones that will be used to indicate
progress.
19. Explain the types of process flow in SE.
• Process flow describes how the framework activities and the actions and tasks that
occur within each framework activity are organized with respect to sequence and time.
• A linear process flow executes each of the five framework activities in sequence,
beginning with communication and culminating with deployment (Figure 2.2a).
• An iterative process flow repeats one or more of the activities before proceeding to the
next (Figure 2.2b).
• An evolutionary process flow executes the activities in a “circular” manner. Each circuit
through the five activities leads to a more complete version of the software (Figure
2.2c).
• A parallel process flow (Figure 2.2d) executes one or more activities in parallel with
other activities (e.g., modeling for one aspect of the software might be executed in
parallel with construction of another aspect of the software)
20. What is process pattern? Explain the template of process pattern.
A process pattern describes a process-related problem that is encountered during software
engineering work, identifies the environment in which the problem has been encountered, and
suggests one or more proven solutions to the problem.

Stated in more general terms, a process pattern provides you with a template a consistent
method for describing problem solutions within the context of the software process. By
combining patterns, a software team can solve problems and construct a process that best meets
the needs of a project.

Template for describing a process pattern:


• Pattern Name: The pattern is given a meaningful name describing it within the context of
the software process (e.g., TechnicalReviews).

• Forces: The environment in which the pattern is encountered and the issues that make the
problem visible and may affect its solution.

• Type: The pattern type is specified. There are 3 types of patterns:

1. Stage pattern - defines a problem associated with a framework activity for the process. An
example of a stage pattern might be EstablishingCommunication. This pattern would
incorporate the task pattern RequirementsGathering and others.

2. Task pattern - defines a problem associated with a software engineering action or work task
and relevant to successful software engineering practice (e.g., RequirementsGathering)

3. Phase pattern - define the sequence of framework activities that occurs within the process,
even when the overall flow of activities is iterative in nature. An example of a phase pattern
might be SpiralModel or Prototyping.

• Initial context. Describes the conditions under which the pattern applies.
• Problem. The specific problem to be solved by the pattern.
• Solution. Describes how to implement the pattern successfully.
• Resulting Context. Describes the conditions that will result once the pattern has been
successfully implemented.
• Related Patterns. Provide a list of all process patterns that are directly related to this one.
• Known Uses and Examples. Indicate the specific instances in which the pattern is
applicable.
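As an illustration only, the template slots above can be captured as a small data structure. The field names mirror the template; the TechnicalReviews example values are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProcessPattern:
    name: str                  # Pattern Name
    pattern_type: str          # "stage", "task", or "phase"
    forces: str                # environment and issues that make the problem visible
    initial_context: str       # conditions under which the pattern applies
    problem: str               # the specific problem to be solved
    solution: str              # how to implement the pattern successfully
    resulting_context: str     # conditions after successful application
    related_patterns: List[str] = field(default_factory=list)
    known_uses: List[str] = field(default_factory=list)

# Hypothetical instance of the TechnicalReviews task pattern named in the text.
review = ProcessPattern(
    name="TechnicalReviews",
    pattern_type="task",
    forces="Errors propagate when work products are not examined before handoff.",
    initial_context="A work product is complete and a review team is available.",
    problem="Defects in a work product reach later activities undetected.",
    solution="Convene a structured review; record and track defects to closure.",
    resulting_context="The work product has been examined and defects logged.",
    related_patterns=["RequirementsGathering"],
)
print(review.name, review.pattern_type)
```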

21. Prescriptive models

Prescriptive process models were introduced to bring structure and order to the complex and often chaotic nature of software development. Historically, these models have provided a useful framework, guiding software teams and helping them achieve consistent results.
However, software engineering and the products it generates continue to operate in a
delicate balance between order and chaos.

Nogueira and his colleagues describe this balance as the "edge of chaos," where too much
order can stifle creativity, while too much chaos can lead to disorganization. They argue
that while prescriptive models strive for structure, they may not always be suitable in an
environment that requires adaptability and change. These models define specific process
elements, such as activities, tasks, and quality assurance mechanisms, and prescribe a
predictable workflow. Yet, the challenge remains whether to adhere to these structured
models or adopt more flexible approaches that can better accommodate the dynamic nature
of software development.
22. Discuss the David Hooker’s seven principles of software engineering
practice
1. The Reason It All Exists: The primary purpose of a software system is to deliver value
to its users. Every decision should be aligned with this goal. If an aspect of the system
doesn’t add value, it should be reconsidered.

2. KISS (Keep It Simple, Stupid!): Simplicity in design is crucial. A simpler system is easier to understand and maintain, and less prone to errors. However, simplicity should not compromise essential features or quality.

3. Maintain the Vision: A clear vision is critical to the success of a software project.
Without it, the project risks becoming inconsistent and disjointed. An empowered
architect who maintains and enforces this vision can significantly enhance the project's
success.

4. What You Produce, Others Will Consume: Software is rarely used in isolation. It will
be maintained, documented, or expanded by others, so it’s important to design and
implement the system with this in mind.

5. Be Open to the Future: Software systems should be designed to adapt to change. By preparing for future scenarios and avoiding design limitations, a system can have a longer, more valuable lifespan.

6. Plan Ahead for Reuse: Reuse can save time and effort, but it requires careful planning.
Reusing code and designs can be beneficial, but achieving this goal requires forethought
at every stage of development.

7. Think!: Thoughtful consideration before taking action leads to better results. Clear
thinking helps avoid mistakes and provides valuable learning opportunities when things
do go wrong. Applying the first six principles effectively requires careful and deliberate
thought.
SEPM - MODULE 2

1. Discuss the importance of Requirement Engineering and list the Tasks involved in it.

Requirements engineering is the broad range of activities and methods that result in an understanding of requirements. From the standpoint of the software process, it is a major software engineering action that begins during the communication activity and extends into the modelling activity. It must be adapted to the needs of the project, the product, the people performing the work, and the process.

1. Inception: Informal discussions can unexpectedly lead to significant software projects. A project is typically initiated by identifying a business need or a new market opportunity, and involves business stakeholders such as managers, marketing professionals, and product managers who assess feasibility and define scope.

2. Elicitation: Elicitation faces challenges such as unclear system boundaries, users' uncertainty about their own needs, and requirements that change over time.

3. Elaboration: Elaboration refines use case scenarios to detail user interactions and to identify analysis classes with their attributes, services, and relationships, generating various diagrams along the way.

4. Negotiation: Negotiation involves prioritizing and resolving conflicts in requirements through stakeholder discussions, using an iterative approach to balance costs, risks, and satisfaction.

5. Specification: A specification can take various forms, including a written document, graphical representations, a formal mathematical model, use case scenarios, a prototype, or a combination of these.

6. Validation: The primary method for validating requirements is through technical review,
where a team of software engineers, customers, users, and stakeholders examine the
specification for errors, omissions, inconsistencies, conflicts, and impractical or unattainable
requirements.
7. Requirements Management: Requirements management involves tasks that enable the
project team to identify, control, and manage requirements and any changes to them
throughout the system's lifecycle, recognizing that requirements for computer-based systems
evolve over time.

2. Explain the activities and steps involved in negotiating Software Requirements.

The goal of negotiation is to create a project plan that fulfils stakeholders' requirements while
considering the real-world constraints (such as time, personnel, and budget) imposed on the
software team. Successful negotiations aim for a "win-win" outcome, where stakeholders
receive a system or product that meets most of their needs, and the software team works within
realistic and achievable budgets and deadlines.

Following activities are involved:

• Identification of the system or subsystem’s key stakeholders.


• Determination of the stakeholders’ “win conditions.”
• Negotiation of the stakeholders’ win conditions to reconcile them into a set of win-win
conditions for all concerned (including the software team).

Guidelines for Effective Negotiation:

1. Recognize that It’s Not a Competition: Successful negotiations require both parties to feel
they have achieved something. Understand that compromise is necessary.

2. Map Out a Strategy: Define your goals, understand the other party’s goals, and plan how
both can be achieved. Preparation is key to successful negotiation.

3. Listen Actively: Focus on what the other party is saying without formulating your response
simultaneously.

4. Focus on Interests, Not Positions: Avoid taking rigid positions. Instead, focus on
understanding and addressing the underlying interests and concerns of the other party to find
common ground.

5. Don’t Let It Get Personal: Keep the discussion focused on solving the problem at hand,
rather than on personal disagreements or conflicts.
6. Be Creative: Think outside the box to find innovative solutions that satisfy both parties when faced with a problem.

7. Be Ready to Commit: Once an agreement is reached, commit to it fully and move forward; failure to do so can undermine trust and delay progress.

3. Why is Requirement Elicitation difficult? Discuss the problems in Requirement Elicitation.

It certainly seems simple enough: ask the customer, the users, and others what the objectives for the system or product are, what is to be accomplished, how the system or product fits into the needs of the business, and finally, how the system or product is to be used on a day-to-day basis. But it isn't simple; it's very hard.

Christel and Kang [Cri92] identify a number of problems that are encountered as elicitation
occurs.

• Problems of scope: The boundary of the system is ill-defined, or the customers/users specify unnecessary technical detail that may confuse, rather than clarify, overall system objectives.
• Problems of understanding: The customers/users are not completely sure of what is
needed, have a poor understanding of the capabilities and limitations of their computing
environment, don’t have a full understanding of the problem domain, have trouble
communicating needs to the system engineer, omit information that is believed to be
“obvious,” specify requirements that conflict with the needs of other customers/users,
or specify requirements that are ambiguous or untestable.
• Problems of volatility: The requirements change over time

To help overcome these problems, you must approach requirements gathering in an organized
manner.

4. Illustrate the UML models that supplement the use cases.

1. Activity Diagram:

The UML activity diagram supplements the use case by providing a graphical representation of the flow of interaction within a specific scenario. Similar to a flowchart, an activity diagram uses rounded rectangles to imply a specific system function, arrows to represent flow through the system, decision diamonds to depict a branching decision (each arrow emanating from the diamond is labeled), and solid horizontal lines to indicate that parallel activities are occurring. An activity diagram for the ACS-DCV use case is shown in Figure 6.5. It should be noted that the activity diagram adds detail not directly mentioned (but implied) by the use case.

2. Swimlane Diagram:

The UML swimlane diagram is a useful variation of the activity diagram. It represents the flow of activities described by the use case and, at the same time, indicates which actor (if there are multiple actors involved in a specific use case) or analysis class has responsibility for the action described by an activity rectangle. Responsibilities are represented as parallel segments that divide the diagram vertically, like the lanes in a swimming pool. Referring to Figure 6.6, the activity diagram is rearranged so that activities associated with a particular analysis class fall inside the swimlane for that class. For example, the Interface class represents the user interface as seen by the homeowner. The activity diagram notes two prompts that are the responsibility of the interface - “prompt for re-entry” and “prompt for another view.” These prompts and the decisions associated with them fall within the Interface swimlane. However, arrows lead from that swimlane back to the Homeowner swimlane, where homeowner actions occur.

5. Write the UML activity diagrams for eliciting requirements.


6. Explain the different rules of thumb that should be followed when creating
the analysis model.

1. The model should focus on requirements that are visible within the problem or business
domain. The level of abstraction should be relatively high.

2. Each element of the requirements model should add to an overall understanding of software
requirements and provide insight into the information domain, function, and behaviour of the
system.

3. Delay consideration of infrastructure and other non-functional models until design.

4. Minimize coupling throughout the system.

5. Be certain that the requirements model provides value to all stakeholders.

6. Keep the model as simple as it can be.

7. With an example, describe the Class-Responsibility-Collaborator (CRC) modelling.
CRC modeling is a technique used in object-oriented design to identify and organize the classes
relevant to a system's requirements. It helps in defining the roles of different classes within the
system and how they interact with each other. CRC modeling uses index cards (either physical
or virtual) to represent classes and their responsibilities and collaborators.

Each index card is divided into three sections:

1. Class Name: The name of the class, written at the top of the card.

2. Responsibilities: The attributes and operations that the class is responsible for. These
are listed on the left side of the card.

3. Collaborators: Other classes that the class interacts with to fulfill its responsibilities.
These are listed on the right side of the card.

Example

Consider a simple home security system. Let's model the FloorPlan class using CRC modeling.

Class Name: FloorPlan

Responsibilities:

• Define the floor plan name/type.

• Manage floor plan positioning.

• Scale the floor plan for display.

• Incorporate walls, doors, and windows.


• Show the position of video cameras.

Collaborators:

• Wall: To manage walls within the floor plan.

• Camera: To show the position of video cameras.

Purpose and Usage

CRC modeling is particularly useful during the early stages of object-oriented design. It allows
teams to:

• Identify Classes: Helps in discovering the classes needed for the system.

• Allocate Responsibilities: Clarifies what each class will do (its responsibilities).

• Determine Collaborations: Identifies how classes will interact with one another to
achieve system functionality.

Advantages

• Simplicity: CRC cards provide a straightforward way to think about the design of a
system.

• Collaboration: Encourages discussion and collaboration among team members.

• Flexibility: Easy to modify as understanding of the system evolves.

CRC modeling is an effective and simple technique to identify classes, define their
responsibilities, and understand their interactions in an object-oriented system. It lays the
groundwork for more detailed design and implementation phases by providing a clear,
organized structure for the system’s components.
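The FloorPlan card above translates naturally into a small data structure. A minimal Python sketch (the class name, responsibilities, and collaborators come from the example; representing a card this way is an assumption, not part of the CRC technique itself):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CRCCard:
    class_name: str                                  # top of the card
    responsibilities: List[str] = field(default_factory=list)  # left side
    collaborators: List[str] = field(default_factory=list)     # right side

floor_plan = CRCCard(
    class_name="FloorPlan",
    responsibilities=[
        "Define the floor plan name/type",
        "Manage floor plan positioning",
        "Scale the floor plan for display",
        "Incorporate walls, doors, and windows",
        "Show the position of video cameras",
    ],
    collaborators=["Wall", "Camera"],
)

# Render the card in the three-section layout described above.
print(floor_plan.class_name)
for r in floor_plan.responsibilities:
    print("  -", r)
print("Collaborators:", ", ".join(floor_plan.collaborators))
```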

8. Explain collaborative requirements gathering.

• The objective is to recognize the problem, suggest components of the solution, discuss
various strategies, and outline an initial set of solution requirements, all within an
environment that supports achieving the objective.
• Basic guidelines
• Meetings are conducted and attended by both software engineers and other
stakeholders.
• Rules for preparation and participation are established.
• An agenda is suggested that is formal enough to cover all important points but
informal enough to encourage the free flow of ideas.
• A “facilitator” (can be a customer, a developer, or an outsider) controls the meeting.
• A “definition mechanism” (can be work sheets, flip charts, or wall stickers or an
electronic bulletin board, chat room, or virtual forum) is used.
• During inception, the developer and customers write a “product request.” A meeting location, time, and date are determined; a facilitator is appointed; and participants from the software team and other stakeholder groups are invited to join. The product request is shared with all attendees prior to the meeting.
• Prior to the meeting, each participant is asked to review the product request and create
several lists: one of objects within the environment surrounding the system, another of
objects the system will produce, and a third of objects the system will use to carry out
its functions.
• Additionally, participants should compile a list of services (processes or functions)
that interact with or manipulate these objects.
• Finally, they need to develop lists of constraints (such as cost, size, and business
rules) and performance criteria (such as speed and accuracy). The goal is to create an
agreed-upon list of objects, services, constraints, and performance criteria for the
system that will be developed.
• Each mini-specification is an elaboration of an object or service. The mini-specs are
shared with all stakeholders for discussion, where additions, deletions, and further
details are made. This process may reveal new objects, services, constraints, or
performance requirements that will be added to the initial lists.
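As a rough sketch of how a facilitator might merge the object lists that each participant prepares, the hypothetical snippet below de-duplicates per-participant lists and orders items by how many participants mentioned them (the item names are invented):

```python
from collections import Counter

def consolidate(lists):
    """Merge per-participant lists, ordering items by how many
    participants mentioned them (a rough proxy for consensus)."""
    counts = Counter(item for lst in lists for item in lst)
    return [item for item, _ in counts.most_common()]

participant_lists = [
    ["sensor", "control panel", "alarm"],               # participant 1
    ["control panel", "alarm", "monitoring service"],   # participant 2
    ["alarm", "sensor"],                                # participant 3
]

agreed = consolidate(participant_lists)
print(agreed)  # "alarm" comes first: mentioned by all three participants
```

In practice the agreed-upon list emerges from discussion rather than counting, but the sketch shows the shape of the consolidation step.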

9. Explain Quality Function Deployment

Quality function deployment (QFD) is a quality management technique that translates the
needs of the customer into technical requirements for software. QFD “concentrates on
maximizing customer satisfaction from the software engineering process” [Zul92]

QFD identifies three types of requirements:


1. Normal requirements. The objectives and goals that are stated for a product or system
during meetings with the customer. If these requirements are present, the customer is
satisfied. Examples of normal requirements might be requested types of graphical displays,
specific system functions, and defined levels of performance.

2. Expected requirements. These requirements are implicit to the product or system and may be so fundamental that the customer does not explicitly state them. Their absence will be a cause for significant dissatisfaction. Examples of expected requirements are: ease of human/machine interaction, overall operational correctness and reliability, and ease of software installation.

3. Exciting requirements. These features exceed the customer’s expectations and are highly
satisfying when included. For example, software for a new mobile phone comes with
standard features, but is coupled with a set of unexpected capabilities (e.g., multi-touch
screen, visual voice mail) that delight every user of the product.

QFD gathers requirements through customer interviews and observations, surveys, and
analysis of historical data (such as problem reports). This information is compiled into a
customer voice table, which is reviewed with the customer and other stakeholders. Various
diagrams, matrices, and evaluation methods are then employed to identify expected
requirements and try to uncover exciting requirements.

10. Write a short note on Elicitation Work Products

• The work products produced as a consequence of requirements elicitation will vary depending on the size of the system or product to be built. For most systems, the work products include:
• A statement of need and feasibility.
• A bounded statement of scope for the system or product.
• A list of customers, users, and other stakeholders who participated in requirements
elicitation.
• A description of the system’s technical environment.
• A list of requirements (preferably organized by function) and the domain constraints
that apply to each.
• A set of usage scenarios that provide insight into the use of the system or product
under different operating conditions.
• Any prototypes developed to better define requirements.
• Each of these work products is reviewed by all people who have participated in
requirements elicitation.

11. Write a note on establishing groundwork.

Stakeholder Collaboration:

• Ideally, software engineers and stakeholders work closely together. In reality, however, stakeholders may be distant, have limited technical knowledge, or hold conflicting opinions, all of which make requirements engineering challenging.

Identifying Stakeholders:

• Stakeholders are anyone who benefits from the system being developed. The process
begins with identifying and listing these stakeholders, which grows as more are
contacted.

Multiple Viewpoints:

• Different stakeholders have varied perspectives and priorities, often leading to
conflicting requirements. The challenge is to categorize and manage these diverse
inputs to achieve a consistent set of requirements.

Collaboration and Priority Points:

• Collaboration involves finding common ground among stakeholders and resolving
conflicts. A method like "priority points" allows stakeholders to vote on the
importance of different requirements, helping to prioritize them.
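The "priority points" scheme can be sketched in a few lines (the stakeholder names, requirement names, and point values below are invented purely for illustration): each stakeholder distributes a fixed budget of points across the candidate requirements, and summing the allocations yields a rough priority ordering.

```python
# Hypothetical illustration of "priority points": each stakeholder
# spreads a fixed budget of points across candidate requirements.
from collections import Counter

def rank_requirements(votes):
    """votes maps stakeholder -> {requirement: points allocated}.
    Returns (requirement, total points) pairs, highest total first."""
    totals = Counter()
    for allocation in votes.values():
        totals.update(allocation)
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

votes = {
    "homeowner": {"remote video": 5, "alarm": 3, "reports": 2},
    "installer": {"alarm": 6, "remote video": 2, "reports": 2},
}
print(rank_requirements(votes))
# -> [('alarm', 9), ('remote video', 7), ('reports', 4)]
```

The totals only suggest an ordering; conflicting votes still have to be reconciled through discussion, as the notes describe.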

Asking the Right Questions:

• Initial questions should be "context-free," focusing on understanding the stakeholders,
project goals, and benefits. These questions help identify stakeholders, understand the
problem, and evaluate the effectiveness of the communication process.

12. Write a short note on validating requirements.


As each element of the requirements model is created, it is examined for inconsistency,
omissions, and ambiguity.

The requirements represented by the model are prioritized by stakeholders and grouped for
implementation.

A review of the requirements model addresses following key questions: (CANCACFTRPP)

1. Consistency: Are requirements consistent with overall objectives?

2. Abstraction: Are requirements specified at the correct level of abstraction?

3. Necessity: Is each requirement really necessary, or is it an unneeded add-on?

4. Clarity: Is each requirement bounded and unambiguous?

5. Attribution: Does each requirement have a noted source?

6. Conflict: Do any requirements conflict with others?

7. Feasibility: Is each requirement achievable in the technical environment?

8. Testability: Is each requirement testable once implemented?

9. Reflection: Does the model reflect the intended information, function and behaviour?

10. Partitioning: Has the model been partitioned to reveal detailed information?

11. Patterns: Are requirements patterns used, validated and consistent with customer needs?

13. Illustrate Scenario Based Modelling with the SafeHome Surveillance example.

1. Definition: A scenario-based model represents specific interactions between users
(actors) and the system to achieve particular goals or functions. These scenarios are
depicted through use cases, activity diagrams, and sometimes, swimlane diagrams.

2. Use Cases:

o Use Case: The primary tool in scenario-based modeling, a use case describes a
sequence of actions that the system performs in response to an actor’s request.
Each use case captures a specific functionality of the system from the user’s
perspective.
o Components: A use case typically includes actors (who interact with the
system), a description of the interaction, preconditions (what must be true
before the use case starts), and postconditions (what is true after the use case
completes).

o Example: In a home surveillance system, a use case might describe how a
homeowner remotely accesses camera feeds via the internet.

3. Activity Diagrams:

o These diagrams visually represent the flow of control or data within a
scenario. They help in understanding the sequence of activities and decision
points in a process.

o Swimlane Diagrams: A specific type of activity diagram that divides
activities into "lanes," each representing a different actor or system
component, clarifying the responsibilities of each.

4. Importance:

o User-Centric: Scenario-based models focus on how users will actually use the
system, ensuring that the system’s design meets user needs.

o Communication: These models provide a clear and easily understandable
way to communicate requirements between stakeholders and developers.

o Validation: By modeling scenarios, stakeholders can validate whether the
proposed system behaviors align with their needs and expectations.

5. Development:

o Inception and Elicitation: The process begins with identifying stakeholders,
gathering requirements, and understanding the system's context.

o Refinement: Scenarios are refined through discussions with stakeholders to
cover alternative actions, error conditions, and exceptional situations, ensuring
robustness.

6. Example of a Use Case:


14. How can you develop an effective use case? Develop a UML use case
diagram for home security function.

• A use case is a contract that describes the system's behavior in response to a
stakeholder's request. It tells a story about how an end user interacts with the system
under specific conditions.

• The first step in creating a use case is to identify the "actors," which are roles that people
or devices play when interacting with the system. An actor is anything external to the
system that communicates with it.

• An actor represents a role rather than a specific person. For example, a single user might
play multiple roles (e.g., programmer, tester, monitor) that translate into different actors
within the use case.

• Primary actors interact directly with the system to achieve its main functions, while
secondary actors support the primary actors.

• Use cases are developed by answering specific questions about the actors and their
interactions with the system. Questions include identifying primary and secondary
actors, their goals, main tasks, potential exceptions, and variations in interactions.

Use Case Template:
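The template figure itself is not reproduced in these notes. Purely as an illustrative sketch (the field names are an assumption, not a prescribed standard), the information a use-case template typically records can be captured as a small data structure, here filled with the SafeHome camera-surveillance example discussed earlier:

```python
# Illustrative sketch only: fields a use-case template commonly records.
# The field names and example content are assumptions, not a fixed standard.
from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    primary_actor: str
    goal: str
    preconditions: list = field(default_factory=list)
    main_scenario: list = field(default_factory=list)   # numbered steps
    exceptions: list = field(default_factory=list)      # error conditions

uc = UseCase(
    name="Access camera surveillance via the Internet",
    primary_actor="homeowner",
    goal="View video from cameras placed throughout the house",
    preconditions=["System fully configured", "User ID and password obtained"],
    main_scenario=["Log on to the web site", "Select 'surveillance'",
                   "Pick a camera", "View video in a window"],
    exceptions=["Camera offline", "Authentication fails"],
)
print(uc.name)
```

Secondary actors, frequency of use, and open issues are other fields such templates often include.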


15. Elements of Requirements model
In requirements engineering, the requirements model serves as a critical tool for understanding
and documenting the needs of stakeholders. Different modelling methods may dictate the
specific elements used, but several generic elements are common to most requirements models.
These elements provide a comprehensive view of the system from various perspectives,
enhancing the chances of uncovering omissions, inconsistencies, and ambiguities.

1. Scenario-Based Elements: Scenario-based elements describe the system from the user's
perspective, often using use cases and corresponding diagrams. These elements are typically
the first part of the requirements model to be developed and serve as input for other modelling
elements.

• Use Cases: Detailed descriptions of user interactions with the system, capturing
functional requirements.
• Use-Case Diagrams: Visual representations of the interactions between actors (users or
other systems) and the system itself.
• Activity Diagrams: Show the flow of activities involved in a use case, as illustrated in
Figure 5.3.
2. Class-Based Elements: Class-based elements focus on the objects manipulated by the
system and their interactions.

• Class Diagrams: Represent classes (e.g., Sensor class in Figure 5.4) with their attributes
(e.g., name, type) and operations (e.g., identify, enable).
• Relationships and Interactions: Diagrams depicting how classes interact and collaborate
with each other.

3. Behavioral Elements: Behavioral elements model how the system behaves in response to
external stimuli and internal processes.

• State Diagrams: Show the states of a system and the transitions between these states
triggered by events. For example, a state diagram for the SafeHome control panel
software could depict modes like reading user input, processing input, and responding
to commands (Figure 5.5).
• Sequence Diagrams: Depict the sequence of messages exchanged between objects to
carry out a function.

4. Flow-Oriented Elements: Flow-oriented elements model how information flows through
the system, transforming inputs into outputs.
• Data Flow Diagrams (DFDs): Represent the flow of data within the system, showing
data sources, processes, data stores, and data destinations.
• Flowcharts: Illustrate the logical flow of operations within a process, showing steps,
decisions, and loops.

16. Domain Analysis

Domain analysis is a critical activity in software engineering that focuses on identifying
common patterns, classes, and reusable components within a specific application domain. This
process is not tied to any particular software project but instead serves as an ongoing effort to
create reusable assets that can be applied across multiple projects within the same domain.
Here is a breakdown of the key concepts and processes involved in domain analysis:

Key Concepts of Domain Analysis

1. Purpose:

o The primary goal of domain analysis is to identify and create reusable analysis
patterns and classes that can be applied to various projects within a specific
business domain. By doing so, the development process is expedited, time-to-
market is improved, and development costs are reduced.

2. Application Domain:

o An application domain refers to a specific area of business or industry where
similar software applications are developed. Examples include banking,
avionics, multimedia video games, and medical devices. Domain analysis
focuses on understanding and abstracting the common requirements, objects,
and patterns within this domain.

3. Reusability:

o Domain analysis emphasizes creating reusable assets, such as classes, objects,
subassemblies, and frameworks. These reusable components can be applied
across multiple projects, which helps in standardizing solutions and improving
efficiency.

4. Analysis Patterns:
o Analysis patterns are recurring solutions to common problems within a specific
domain. These patterns are identified through domain analysis and categorized
so they can be applied to new projects within the same domain.

Process of Domain Analysis

1. Sources of Domain Knowledge:

o Technical Literature: Research papers, books, and articles that provide
insights into common problems and solutions within the domain.

o Existing Applications: Analyzing current software applications to identify
reusable components and patterns.

o Customer Surveys: Gathering requirements and feedback from customers to
understand common needs across different projects.

o Expert Advice: Consulting domain experts who have deep knowledge and
experience in the domain.

o Current/Future Requirements: Considering the current and anticipated future
needs of projects within the domain.

2. Domain Analysis Activities:

o Identification: Recognizing common objects, classes, and patterns that can be
reused across multiple projects.

o Analysis: Analyzing the identified components to ensure they are broadly
applicable and reusable.

o Specification: Documenting the reusable components in a way that they can be
easily integrated into future projects.

3. Outputs of Domain Analysis:

o Domain Analysis Model: A comprehensive model that includes reusable
classes, patterns, functional models, class taxonomies, reuse standards, and
domain-specific languages. This model serves as a toolkit for developers
working on projects within the domain.

Benefits of Domain Analysis


• Efficiency: By reusing common components, development time is reduced, and teams
can focus on solving unique problems rather than reinventing the wheel.

• Consistency: Reusable components ensure a consistent approach across different
projects within the domain.

• Cost Reduction: Reuse leads to lower development costs as less time and effort are
spent on creating new solutions.

• Improved Quality: Reused components are typically well-tested and refined, leading
to higher quality software.

17. Data Modelling Concepts

1. Entity-Relationship Diagram (ERD)

• Definition: An ERD is a visual tool used in data modeling to represent all data objects
within a system, their relationships, and other relevant details.

• Purpose: It shows how data objects are interconnected and how they interact within an
application.

2. Data Objects

• Composite Information: Data objects are composed of multiple attributes. For
example, "dimensions" includes height, width, and depth, making it a data object.

• Forms of Data Objects: They can represent external entities (e.g., a person),
occurrences (e.g., an event like an alarm), roles (e.g., salespeople), organizational units
(e.g., departments), places (e.g., warehouses), or structures (e.g., files).

• Description and Representation: A data object's description includes the object itself
and its attributes. It can be represented in a table format where attributes are the
headings, and rows represent specific instances (e.g., a table of cars with attributes like
make, model, ID number).

3. Data Attributes

• Purpose: Attributes are used to name, describe, and sometimes reference data objects.

• Functions:

o Naming: Attributes can name an instance of a data object.


o Describing: Attributes describe characteristics of an instance.

o Referencing: Attributes can reference another instance in another table.

o Identifiers: One or more attributes act as an identifier (key) to find an instance
of the data object.

• Contextual Choice: The choice of attributes depends on the specific application. For
example, attributes in a DMV application might include make, model, and ID number,
while an automobile manufacturing control software might include interior code and
transmission type.

4. Relationships

• Definition: Relationships describe how data objects are connected to one another,
crucial for developing a comprehensive data model.

• Examples:

o Owns: A person owns a car.

o Insured to Drive: A person is insured to drive a car.

• Representation: In an ERD, relationships are typically shown with arrows to indicate
directionality and clarify the nature of the connections between data objects.

These concepts form the foundation of understanding how data is structured and interacted
with in software systems, particularly through the use of ERDs in the design and development
of databases and applications.
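A minimal code sketch of these ideas (the classes, attributes, and the "owns" relationship are invented for illustration and loosely follow the DMV-style example above): each data object becomes a record type, one attribute serves as the identifier, and a relationship is a reference from one object to instances of another.

```python
# Illustrative sketch of two data objects and an "owns" relationship.
# Attribute names loosely follow the DMV-style example; details assumed.
from dataclasses import dataclass

@dataclass
class Car:
    id_number: str          # identifier (key) attribute
    make: str               # descriptive attributes
    model: str

@dataclass
class Person:
    license_no: str         # identifier attribute
    name: str
    owns: list              # relationship: references to Car instances

car = Car(id_number="ABC123", make="Lexus", model="LS400")
owner = Person(license_no="D-0042", name="A. Homeowner", owns=[car])
print(owner.owns[0].make)   # follow the relationship from Person to Car
```

Note that which attributes appear depends on the application, exactly as the "Contextual Choice" point above says: a manufacturing system would carry different Car attributes than a licensing one.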
SEPM – MODULE 3

1. Discuss in detail of agile process model.

Agile process in the context of software development refers to the ability of a development
process to quickly adapt to changes and unpredictability. This concept, primarily discussed in
agile software methodologies, addresses several key assumptions and characteristics about
software projects:

1. Unpredictability of Requirements and Priorities: It is challenging to predict which
software requirements will remain constant and which will change over time. Customer
priorities can also shift unexpectedly as the project progresses.

2. Interleaving of Design and Construction: For many software projects, the design and
construction phases are not strictly sequential but are instead interleaved. This means that
design models are validated through construction activities as they are created, making it
difficult to determine the extent of design needed before beginning construction.

3. Unpredictability of Project Activities: Analysis, design, construction, and testing activities
are not as predictable from a planning perspective as one might hope. This unpredictability
requires a process that can manage changing conditions effectively.

Key Aspects of an Agile Process:

1. Incremental Adaptation: Rather than attempting to predict and plan for all changes upfront,
an agile process adapts incrementally. This means delivering software in small, manageable
increments that can be reviewed and adjusted based on customer feedback.

2. Customer Feedback: Frequent and ongoing feedback from customers is crucial. It helps
the development team make the necessary adaptations to the product, ensuring it meets the
evolving needs and priorities of the customers.

3. Iterative Development: An iterative approach involves breaking down the development
process into smaller cycles, or iterations, each resulting in a functional piece of software. This
allows for regular evaluation and feedback, fostering continual improvement.
4. Operational Prototypes: To facilitate customer feedback, operational prototypes or
portions of the operational system are delivered regularly. This enables customers to interact
with and evaluate the software, providing insights that guide further development.

Benefits of Agility:

• Reduced Cost of Change: By delivering software in increments and regularly
incorporating feedback, the cost of making changes is reduced. Changes are made
within smaller increments rather than requiring extensive rework of the entire system.
• Improved Customer Satisfaction: Frequent delivery of working software and regular
customer involvement ensure that the product meets customer needs and expectations
more closely.
• Enhanced Flexibility and Responsiveness: Agile processes allow teams to respond
quickly to changing requirements and priorities, maintaining progress even in the face
of uncertainty.

2. Infer the toolset of agile process.

The toolset of the agile process includes both technological and non-technological aids
designed to enhance team collaboration, communication, and overall project efficiency.

1. Social Tools:

• Hiring Practices: One of the social tools is the practice of assessing a prospective team
member’s fit through pair programming sessions with an existing team member. This
allows the team to evaluate the candidate’s skills and compatibility with the team
dynamics in real-time, ensuring that the right people are brought on board.

2. Collaboration and Communication Tools:

• Physical Proximity: Encouraging team members to work in close physical proximity
to enhance communication and collaboration.

• Whiteboards, Poster Sheets, Index Cards, Sticky Notes: These low-tech tools
facilitate active communication by allowing team members to visualize and manipulate
information during meetings or brainstorming sessions.
• Information Radiators: Passive communication tools such as flat panel displays that
show the overall status of different components of a project, enabling the team to stay
informed about progress without needing constant verbal updates.

3. Project Management Tools:

• Earned Value Charts and Graphs of Tests Created vs. Passed: These tools provide
a clear, visual representation of project progress, focusing on tangible outcomes rather
than traditional project management tools like Gantt charts.

• Time-Boxing and Pair Programming: These are process tools that help streamline
the work process and ensure efficiency. Time-boxing restricts tasks to a set timeframe,
while pair programming encourages collaborative coding practices.

4. Physical Tools and Environment Optimization:

• Efficient Meeting Areas: Creating environments conducive to productive meetings is
considered a tool in agile processes, as it directly impacts the quality of team
interactions.

• Electronic Whiteboards: These are physical devices that enable dynamic and
real-time collaboration, particularly useful for distributed teams.

• Collocated Teams: Encouraging teams to work in a shared physical space fosters better
collaboration and stronger team culture.

In the agile process, the term "tools" extends beyond software or digital tools to include any
social, physical, or process mechanisms that enhance the work environment, collaboration, and
communication among team members. The agile toolset is diverse, encompassing everything
from hiring practices and team dynamics to physical workspaces and visual management aids,
all aimed at improving the efficiency and quality of the final product. These tools are critical
to the success of agile teams as they align with the core principles of agility: collaboration,
responsiveness, and continuous improvement.

3. Define agility. Explain the core Principles of Agile process.

Agility in the context of software engineering refers to the ability of a software development
team to quickly and effectively respond to changes throughout the development process.

Principles:
1. Our highest priority is to satisfy the customer through early and continuous delivery of
valuable software.

2. Agile processes harness change for the customer’s competitive advantage.

3. Deliver working software frequently, from a couple of weeks to a couple of months, with a
preference to the shorter timescale.

4. Business people and developers must work together daily throughout the project.

5. Build projects around motivated individuals. Give them the environment and support they
need and trust them to get the job done.

6. The most efficient and effective method of conveying information to and within a
development team is face-to-face conversation.

7. Working software is the primary measure of progress.

8. Agile processes promote sustainable development. The sponsors, developers, and users
should be able to maintain a constant pace indefinitely.

9. Continuous attention to technical excellence and good design enhances agility.

10. Simplicity—the art of maximizing the amount of work not done—is essential.

11. The best architectures, requirements, and designs emerge from self–organizing teams.

12. At regular intervals, the team reflects on how to become more effective, then tunes and
adjusts its behaviour accordingly.

4. Explain any two agile process models other than XP that have been
proposed
i. Adaptive Software Development (ASD)

Adaptive Software Development (ASD), proposed by Jim Highsmith, is a technique for
building complex software and systems focusing on human collaboration and team
self-organization. Highsmith argues that an agile, adaptive development approach based on
collaboration is as much a source of order in complex interactions as discipline and
engineering.

The ASD life cycle consists of three phases: speculation, collaboration, and learning.
1. Speculation: The project is initiated and adaptive cycle planning is conducted. This
phase uses project initiation information—customer’s mission statement, project
constraints, and basic requirements—to define the set of release cycles (software
increments) needed for the project.
2. Collaboration: Motivated people work together in a way that multiplies their talent
and creative output beyond their absolute numbers. Collaboration involves
communication and teamwork, individual creativity, and above all, trust. Team
members must trust one another to criticize without animosity, assist without
resentment, work diligently, possess the necessary skills, and communicate problems
effectively.
3. Learning: ASD teams learn through focus groups, technical reviews, and project
postmortems. This phase emphasizes the dynamics of self-organizing teams,
interpersonal collaboration, and both individual and team learning, leading to a higher
likelihood of project success.

ASD promotes an environment where progress is as important as the adaptive cycle's success,
fostering a collaborative and learning-oriented approach to software development.


ii. SCRUM
Scrum is an agile software development method developed by Jeff Sutherland and his team in
the early 1990s. It aligns with agile principles and guides development through a framework
involving the following activities: requirements, analysis, design, evolution, and delivery.
These activities are structured into "sprints," adaptable work units defined and modified in real
time by the Scrum team. The Scrum process includes several key components and activities:

1. Backlog: A prioritized list of project requirements or features that provide business
value to the customer. Items can be added at any time, with the product manager
assessing and updating priorities as needed.
2. Sprints: Time-boxed work units (typically 30 days) required to achieve specific
requirements from the backlog. Changes are not introduced during the sprint, allowing
team members to work in a stable, short-term environment.
3. Scrum Meetings: Daily, short meetings (about 15 minutes) where team members
answer three key questions:
o What did you do since the last meeting?
o What obstacles are you encountering?
o What do you plan to accomplish by the next meeting?

Led by a Scrum master, these meetings help identify potential problems early and
promote "knowledge socialization."
4. Demos: At the end of each sprint, the software increment is demonstrated to the
customer for evaluation. The demo may not include all planned functionality, but
showcases what can be delivered within the established time-box.

Scrum emphasizes the use of software process patterns that have proven effective for projects
with tight timelines, changing requirements, and critical business needs. This method fosters a
collaborative and adaptive approach to software development, ensuring continuous
improvement and customer satisfaction.
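The backlog and sprint mechanics described above can be sketched as follows (item names, priorities, and the capacity number are invented): the product manager's priorities order the backlog, and a sprint freezes the top slice of it for the duration of its time-box.

```python
# Rough sketch of Scrum backlog/sprint selection; all data is invented.

backlog = [
    {"item": "remote camera access", "priority": 1},
    {"item": "monthly reports", "priority": 3},
    {"item": "alarm notification", "priority": 2},
]

def plan_sprint(backlog, capacity):
    """Select the highest-priority items for the sprint. In Scrum the
    selection is then frozen: no changes during the sprint itself."""
    ordered = sorted(backlog, key=lambda entry: entry["priority"])
    return [entry["item"] for entry in ordered[:capacity]]

sprint_scope = plan_sprint(backlog, capacity=2)
print(sprint_scope)
# -> ['remote camera access', 'alarm notification']
```

Anything not selected stays on the backlog, where the product manager can reprioritize it before the next sprint, matching the description above.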

iii. Dynamic Systems Development Method (DSDM)

The Dynamic Systems Development Method (DSDM) is an agile software development
approach designed to deliver systems under tight time constraints using incremental
prototyping within a controlled project environment. The DSDM philosophy aligns with a
modified Pareto principle, positing that 80 percent of an application can be delivered in 20
percent of the time required to complete the entire application. Each iteration in DSDM follows
this 80 percent rule, meaning only the necessary work for each increment is completed to move
forward, with remaining details addressed later as business requirements evolve or changes are
needed.

The DSDM life cycle consists of five activities, with the last three forming iterative cycles:

1. Feasibility Study: Establishes basic business requirements and constraints for the
application, assessing its viability as a project candidate.
2. Business Study: Defines the functional and information requirements needed for the
application to provide business value, along with the basic application architecture and
maintainability requirements.
3. Functional Model Iteration: Produces incremental prototypes to demonstrate
functionality for the customer, gathering additional requirements through user feedback
as the prototype is exercised.
4. Design and Build Iteration: Revisits and refines prototypes from the functional model
iteration to ensure they are engineered to provide operational business value for end
users. This cycle may occur concurrently with the functional model iteration.
5. Implementation: Deploys the latest software increment into the operational
environment. The increment may not be 100 percent complete, and changes may be
requested during this phase. Development work then continues by returning to the
functional model iteration activity.

DSDM emphasizes iterative development and incremental delivery, ensuring that the system
evolves based on user feedback and changing requirements, thus maximizing business value
and flexibility in the development process.

iv. Crystal

Alistair Cockburn and Jim Highsmith created the Crystal family of agile methods in order to
achieve a software development approach that puts a premium on “manoeuvrability” during
what Cockburn characterizes as “a resource limited, cooperative game of invention and
communication, with a primary goal of delivering useful, working software and a secondary
goal of setting up for the next game”

The Crystal family is actually a set of example agile processes that have been proven effective
for different types of projects. The intent is to allow agile teams to select the member of the
crystal family that is most appropriate for their project and environment.

V. Feature Driven Development (FDD)

Feature Driven Development (FDD) was originally conceived by Peter Coad and later extended
by Stephen Palmer and John Felsing to create an adaptive, agile process suitable for moderately
sized and larger software projects. FDD focuses on object-oriented software engineering and
incorporates several key principles and activities to manage complexity and ensure software
quality.

Key Principles and Activities of FDD:

1. Collaboration: Emphasizes teamwork and communication among FDD team
members.

2. Feature-based Decomposition: Manages problem and project complexity by breaking
down the system into small, client-valued functions (features), which are then
integrated into software increments.

3. Communication: Uses verbal, graphical, and text-based methods to convey technical
details.

Software Quality Assurance in FDD:

FDD emphasizes quality assurance through several practices:

• Incremental Development Strategy: Encourages small, manageable increments of
functionality.
• Design and Code Inspections: Regular reviews to ensure quality.

• Software Quality Assurance Audits: Periodic audits to maintain standards.

• Metrics Collection: Gathers data to measure progress and quality.

• Use of Patterns: Applies design patterns for consistent and effective analysis, design,
and construction.

Definition of a Feature:

In FDD, a feature is a client-valued function that can be implemented in two weeks or less.
This approach has several benefits:

• Ease of Description: Features are small and deliverable, making them easier for users
to describe and understand.

• Hierarchical Grouping: Features can be organized into business-related groups,
facilitating better project management.

• Regular Deliverables: Teams develop operational features every two weeks, providing
regular, incremental progress.

• Effective Inspections: Small features make design and code inspections more
manageable and effective.

• Feature-driven Planning: Project planning, scheduling, and tracking are based on the
feature hierarchy rather than arbitrary tasks.
vi. Lean Software Development (LSD)

Lean Software Development (LSD) has adapted the principles of lean manufacturing to the
world of software engineering. The lean principles that inspire the LSD process can be
summarized as: eliminate waste, build quality in, create knowledge, defer commitment, deliver
fast, respect people, and optimize the whole. Each of these principles can be adapted to the
software process. For example, eliminating waste within the context of an agile software
project can mean:

(1) adding no extraneous features or functions;

(2) assessing the cost and schedule impact of any newly requested requirement;

(3) removing any superfluous process steps;

(4) establishing mechanisms to improve the way team members find information; and

(5) ensuring that testing finds as many errors as possible.

5. Principles of Process and Practice


ANS: Principles that Guide Process:

Principle 1: Be agile. Whether the process model you choose is prescriptive or agile, the basic tenets of agile development should govern your approach.

• keep your technical approach as simple as possible.


• keep the work products you produce as concise(short) as possible.
• Make decisions locally whenever possible.

Principle 2: Focus on quality at every step. Every process activity, action, and task
should focus on the quality of the work product that has been produced.

Principle 3: Be ready to adapt. Adapt your approach to conditions imposed by the problem,
the people, and the project itself.

Principle 4: Build an effective team. Build a self-organizing team that has mutual trust and
respect.

Principle 5: Establish mechanisms for communication and coordination. Projects fail
because stakeholders fail to coordinate their efforts to create a successful end product.

Principle 6: Manage change. The methods must be established to manage the way changes
are requested, approved, and implemented.

Principle 7: Assess risk. Lots of things can go wrong as software is being developed.
Principle 8: Create work products that provide value for others. Create only those work
products that provide value for other process activities, actions and tasks.

Principles That Guide Practice:


Principle 1: Divide and Conquer

• Break large problems into smaller, manageable parts (modules).


• Each part should deliver distinct functionality.

Principle 2: Use Abstraction

• Simplify complex elements to communicate meaning effectively.


• Use concise overviews to represent complex systems.

Principle 3: Strive for Consistency

• Ensure consistency in requirements, design, code, and testing.


• Consistency makes the software easier to develop and maintain.

Principle 4: Focus on Information Transfer

• Emphasize efficient transfer of information between components.


• Examples: database to user, OS to application.

Principle 5: Effective Modularity

• Divide complex systems into well-defined modules.


• Modularity helps manage complexity and enhances maintainability.

Principle 6: Look for Patterns

• Use patterns to solve recurring problems in software development.


• Patterns provide proven solutions and best practices.
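As a small illustration of applying a recurring pattern, the sketch below uses the Strategy pattern in Python. All class and function names (FlatShipping, PercentShipping, Order) are invented for this example, not taken from any particular project.

```python
# Strategy pattern sketch: interchangeable shipping-cost algorithms sit
# behind a common interface, so Order never changes when a policy is added.
# All names here are illustrative.

class FlatShipping:
    """Charges a fixed fee regardless of order size."""
    def cost(self, order_total):
        return 5.0

class PercentShipping:
    """Charges a percentage of the order total."""
    def cost(self, order_total):
        return order_total * 0.10

class Order:
    def __init__(self, total, shipping_strategy):
        self.total = total
        self.shipping = shipping_strategy  # the interchangeable part

    def grand_total(self):
        return self.total + self.shipping.cost(self.total)

# The caller selects a policy; Order itself is untouched by the choice.
cheap = Order(100.0, FlatShipping())
pricey = Order(100.0, PercentShipping())
```

Adding a new shipping policy means adding one new class; Order and its callers stay unchanged, which is the kind of proven, reusable structure a pattern provides.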

Principle 7: Multiple Perspectives

• Examine problems and solutions from various perspectives.


• Different views can provide better understanding and solutions.

Principle 8: Maintainability

• Design software with future maintenance in mind.


• Software will need corrections, adaptations, and enhancements.

6. List and explain the practices of Industrial Extreme Programming (IXP).

1. Readiness Assessment
Before starting an IXP project, the organization needs to conduct a readiness assessment. This
assessment ensures:

1. An appropriate development environment is in place to support IXP.


2. The team comprises the right stakeholders.
3. The organization has a robust quality program and supports continuous improvement.
4. The organizational culture aligns with the values of an agile team.
5. The broader project community is appropriately populated.

2. Project Community

• Team members should be well-trained, adaptable, skilled, and suitable for a self-
organizing team.
• For large projects, the team concept evolves into a community. This community
includes technologists, customers, and various stakeholders (e.g., legal staff, quality
auditors, manufacturing, sales) who play important roles even if they are on the
periphery.
• Roles should be explicitly defined, and communication and coordination mechanisms
should be established.

3. Project Chartering

The IXP team evaluates the project to:

1. Determine the business justification.


2. Assess alignment with organizational goals and objectives.
3. Examine how the project complements, extends, or replaces existing systems or
processes.

4. Test-Driven Management

IXP projects require measurable criteria to assess project progress. This involves:

1. Establishing measurable "destinations."


2. Defining mechanisms to determine if these destinations have been reached.

5. Retrospectives

After delivering a software increment, the IXP team conducts retrospectives, which are
specialized technical reviews. These retrospectives:

1. Examine issues, events, and lessons learned across a software increment or the entire
release.
2. Aim to improve the IXP process.
6. Continuous Learning

Continuous learning is essential for process improvement. XP team members are encouraged
(and possibly incentivized) to learn new methods and techniques to enhance product quality.

In addition to the six new practices discussed, IXP modifies a number of existing XP practices.

• Story-driven development (SDD) insists that stories for acceptance tests be written
before a single line of code is generated.
• Domain-driven design (DDD) is an improvement on the “system metaphor” concept
used in XP. DDD suggests the evolutionary creation of a domain model that “accurately
represents how domain experts think about their subject”.
• Pairing extends the XP pair programming concept to include managers and other
stakeholders. The intent is to improve knowledge sharing among XP team members
who may not be directly involved in technical development.
• Iterative usability discourages front-loaded interface design in favour of usability
design that evolves as software increments are delivered and users’ interaction with the
software is studied.

7. Explain XP Process with diagram.

Extreme Programming (XP) follows an object-oriented approach and encompasses a set of
rules and practices within the context of four main framework activities: planning, design,
coding, and testing.

Planning
• The process begins with listening to understand the business context and gather
requirements. This leads to the creation of “user stories”, which describe the required
output, features, and functionality.
• Customers write user stories, prioritize them based on business value, and place each
story on an index card. The XP team assesses each story and estimates the development
effort in weeks. Stories requiring more than three weeks are split into smaller stories.
• Customers and developers work together to decide how to group stories into the next
release to be developed by the XP team. Once a basic commitment is made for a release,
the XP team orders the stories that will be developed in one of three ways:

(1) all stories will be implemented immediately (within a few weeks),

(2) the stories with highest value will be moved up in the schedule and implemented
first, or

(3) the riskiest stories will be moved up in the schedule and implemented first.

• Project Velocity: After the first release, project velocity (the number of stories
implemented) is computed to estimate delivery dates and manage project scope.
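The velocity calculation described above can be sketched in a few lines of Python; the story counts and backlog size below are hypothetical numbers chosen only to show the arithmetic.

```python
# Project velocity: stories completed per iteration, used to forecast
# how many more iterations the remaining backlog will take.
import math

def project_velocity(stories_completed, iterations_elapsed):
    """Average number of user stories implemented per iteration."""
    return stories_completed / iterations_elapsed

def iterations_remaining(backlog_size, velocity):
    """Whole iterations still needed to finish the remaining stories."""
    return math.ceil(backlog_size / velocity)

# Hypothetical figures after the first release:
velocity = project_velocity(stories_completed=12, iterations_elapsed=3)
remaining = iterations_remaining(backlog_size=18, velocity=velocity)
```

With 12 stories done in 3 iterations, velocity is 4 stories per iteration, so a backlog of 18 stories needs about 5 more iterations, which the team can translate into a delivery date.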

Design

• XP design emphasizes simplicity (Keep It Simple principle) and uses CRC (Class-
Responsibility-Collaborator) cards to organize object-oriented classes relevant to the
current increment.
• For challenging design problems, spike solutions (prototypes) are created to reduce risk
and validate estimates.
• Refactoring: Refactoring is the process of changing a software system in such a way
that it does not alter the external behaviour of the code yet improves its internal
structure. Refactoring is applied continuously; the design is considered transient and
can be modified at any time.
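A minimal before/after refactoring sketch (the pricing function and its discount rule are invented for illustration): the external behaviour is identical, but the refactored version names its constants and removes duplicated expressions.

```python
# Before refactoring: duplicated expressions and a magic number obscure intent.
def price_before(quantity, unit_price):
    if quantity > 10:
        return quantity * unit_price - quantity * unit_price * 0.1
    return quantity * unit_price

# After refactoring: same external behaviour, clearer internal structure.
BULK_THRESHOLD = 10
BULK_DISCOUNT = 0.1

def price_after(quantity, unit_price):
    subtotal = quantity * unit_price
    if quantity > BULK_THRESHOLD:
        subtotal -= subtotal * BULK_DISCOUNT
    return subtotal
```

Both functions return the same result for every input, which is the defining property of a refactoring; only readability and maintainability have changed.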

Coding

• Before coding, unit tests are created to ensure each story's requirements are met. This
focuses the developer on essential functionality.
• Pair Programming: Two programmers work together at one workstation to write code.
This enhances problem-solving, real-time quality assurance, and adherence to coding
standards.
• Continuous Integration: Code is integrated frequently (often daily) to avoid
compatibility issues and enable early error detection.
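The test-first idea above can be sketched as follows, assuming a hypothetical user story such as "a username is valid if it is 3 to 12 lowercase letters": the unit test is written before the code it exercises.

```python
# Test-first sketch for an invented user story. The unit test is drafted
# first, capturing the story's acceptance criteria...
import unittest

class TestUsernameStory(unittest.TestCase):
    def test_accepts_short_lowercase_name(self):
        self.assertTrue(is_valid_username("ana"))

    def test_rejects_name_that_is_too_short(self):
        self.assertFalse(is_valid_username("ab"))

    def test_rejects_uppercase_letters(self):
        self.assertFalse(is_valid_username("Anusha"))

# ...then just enough code is written to make those tests pass.
def is_valid_username(name):
    return 3 <= len(name) <= 12 and name.isalpha() and name.islower()
```

Writing the test first keeps the developer focused on exactly what the story requires and leaves behind an automated regression test for continuous integration.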

Testing
• Automated unit tests are run frequently to support regression testing and ensure code
modifications do not introduce new errors.
• Integration testing occurs regularly, providing continuous progress indications and
early problem detection.
• Customer-specified acceptance tests validate overall system features and functionality,
ensuring the software meets user requirements.

8. Explain human traits required for XP programming.

i. Competence: Agile teams require members with innate talent, specific software-
related skills, and knowledge of the chosen process. While skills can be taught, a
baseline competence is essential for effective execution.
ii. Common Focus: Despite diverse roles and skills, all team members must share a
singular goal: delivering working software increments to customers as promised.
iii. Collaboration: Agile software development thrives on effective communication and
collaboration. Team members must actively communicate with each other and
stakeholders, creating and using information that drives business value.
iv. Decision-Making Ability: Agile teams operate best when they have autonomy over
technical and project decisions. Empowering teams to make decisions fosters
ownership and commitment to project success.
v. Fuzzy Problem-Solving Ability: Agile teams face ambiguity and change regularly.
They must be adaptable and capable of addressing evolving problems and requirements
flexibly. Learning from each problem-solving activity contributes to overall project
success.
vi. Mutual Trust and Respect: Trust and respect among team members are critical. A
"jelled" team is cohesive and operates collaboratively, leveraging collective strengths
for superior outcomes.
vii. Self-Organization: In the context of agile development, self-organization implies three
things:
(1) the agile team organizes itself for the work to be done,
(2) the team organizes the process to best accommodate its local environment,
(3) the team organizes the work schedule to best achieve delivery of the software
increment.

Self-organization promotes morale and enhances collaboration within the team.

9. Effective communication is among the most challenging activities that you will confront.
Justify this statement by discussing the principles that apply for communication within a
software project.
Principle 1. Listen: Focus on understanding the speaker's words without prematurely
formulating a response. Ask for clarification when needed, but avoid interruptions or negative
reactions.
Principle 2. Prepare before you communicate: Understand the problem and relevant
business jargon before discussions. If leading a meeting, prepare an agenda in advance.

Principle 3. Someone should facilitate the activity: A facilitator should guide the
communication, mediate conflicts, and ensure productive discussion.

Principle 4. Face-to-face communication is best: Face-to-face interactions are most
effective, especially when supplemented with visual aids like drawings or documents.

Principle 5. Take notes and document decisions: Keep detailed notes of important points
and decisions to avoid any misunderstandings later.

Principle 6. Strive for collaboration: Encourage collaboration and consensus to leverage
team knowledge and build trust.

Principle 7. Stay focused: modularize your discussion: Keep discussions on-topic and
modular, addressing one issue at a time.

Principle 8. If something is unclear, draw a picture: Use sketches or drawings to clarify
complex points when verbal communication isn't sufficient.

Principle 9. (a) Once you agree to something, move on. (b) If you can’t agree to something,
move on. (c) If a feature or function is unclear and cannot be clarified at the moment, move on.

Principle 10. Negotiation is not a contest or a game. It works best when both parties win:
Approach negotiation as a cooperative process, aiming for a win-win outcome for all parties
involved.

10. What is Agility? Explain Agility with the cost of change with Diagram.

• In software development, it's widely accepted that the cost of changes increases
nonlinearly as a project progresses.
• Early changes during requirements gathering are relatively low-cost and easy to
implement, but as the project advances, especially into later stages like validation
testing, the cost and complexity of changes escalate significantly. This is because
changes at later stages often require major modifications to the software's architecture,
components, and tests, leading to substantial time and cost implications.
• Agile methodologies aim to "flatten" this cost curve by enabling incremental delivery
and incorporating practices like continuous unit testing and pair programming. These
practices allow teams to accommodate changes even late in the project with reduced
cost and time impacts.
• While the extent of this cost reduction is still debated, evidence suggests that agile
processes can significantly mitigate the high costs traditionally associated with late-
stage changes in software development.

11. Describe briefly the design modelling principles that guide the respective
framework activity
Principle 1: Traceability to Requirements: Ensure that every element of the design model is
traceable back to the requirements model, which includes the problem's information domain,
user functions, system behaviour, and requirements classes.

Principle 2: Consider System Architecture: Begin design with architectural considerations,
as architecture influences interfaces, data structures, program control flow, testing, and
maintainability. Only address component-level issues after establishing the architecture.

Principle 3: Data Design is Crucial: Treat data design as critically important as processing
functions. A well-structured data design simplifies program flow, facilitates component
implementation, and improves processing efficiency.

Principle 4: Design Interfaces Carefully: Design both internal and external interfaces with
care to ensure efficient data flow, minimize error propagation, and simplify integration and
testing.

Principle 5: User Interface Design: Tailor the user interface to meet end-user needs with an
emphasis on ease of use, as a poorly designed interface can detract from the software's
perceived quality.

Principle 6: Functional Independence: Design components to be functionally independent,
focusing each component on a single function or subfunction to ensure cohesion and clarity.

Principle 7: Loose Coupling: Maintain loose coupling between components and with the
external environment to reduce error propagation and enhance maintainability.

Principle 8: Understandable Design Models: Create design representations that are easily
understandable to effectively communicate with those involved in coding, testing, and future
maintenance.
Principle 9: Iterative Design Development: Develop the design iteratively, refining it with
each iteration and aiming for simplicity as the design evolves.

12. Modelling Principles:


Principle 1. The primary goal of the software team is to build software, not create models.
Agility means getting software to the customer in the fastest possible time. Models that make
this happen are worth creating, but models that slow the process down or provide little new
insight should be avoided.

Principle 2. Travel light: Create only the essential models needed to facilitate construction.
Excessive modelling takes time and effort that could be better spent on coding and testing.

Principle 3. Strive to produce the simplest model that will describe the problem or the
software. Don’t overbuild the software. By keeping models simple, the resultant software will
also be simple. The result is software that is easier to integrate, easier to test, and easier to
maintain. In addition, simple models are easier for members of the software team to understand
and critique, resulting in an ongoing form of feedback that optimizes the end result.

Principle 4. Build models in a way that makes them amenable to change: However, don't
neglect thoroughness, especially in requirements modelling, as it forms the foundation for
accurate design.

Principle 5. Be able to state an explicit purpose for each model that is created. Every time
you create a model, ask yourself why you’re doing so. If you can’t provide solid justification
for the existence of the model, don’t spend time on it.

Principle 6. Adapt the models you develop to the system at hand. It may be necessary to
adapt model notation or rules to the application; for example, a video game application might
require a different modelling technique than real-time, embedded software that controls an
automobile engine.

Principle 7. Try to build useful models, but forget about building perfect models. When
building requirements and design models, a software engineer reaches a point of diminishing
returns. That is, the effort required to make the model absolutely complete and internally
consistent is not worth the benefits of these properties.

Principle 8. Don’t become dogmatic about the syntax of the model. If a model communicates
content successfully, representation is secondary. Although everyone on a software team
should try to use consistent notation during modelling, the most important characteristic of
the model is to communicate information that enables the next software engineering task. If a
model does this successfully, incorrect syntax can be forgiven.

Principle 9. If your instincts tell you a model isn’t right even though it seems okay on
paper, you probably have reason to be concerned. If you are an experienced software
engineer, trust your instincts. Software work teaches many lessons—some of them on a
subconscious level. If something tells you that a design model is doomed to fail, you have
reason to spend additional time examining the model or developing a different one.

Principle 10. Get feedback as soon as you can. Every model should be reviewed by members
of the software team. The intent of these reviews is to provide feedback that can be used to
correct modelling mistakes, change misinterpretations, and add features or functions that were
inadvertently omitted.

13. Write a note on Software Engineering Knowledge.


Steve McConnell's editorial, published in IEEE Software in 1999, highlights the distinction
between two types of knowledge in software development: technology-related knowledge and
software engineering principles.

Technology-Related Knowledge: This type of knowledge includes specific programming
languages, tools, and platforms like Java, Perl, C++, Linux, and Windows NT. McConnell
notes that this knowledge has a "half-life" of about three years, meaning that within three years,
half of what a developer knows in this domain may become obsolete due to rapid technological
advancements.

Software Engineering Principles: In contrast, McConnell emphasizes the enduring value of
software engineering principles. These are the foundational ideas that guide the work of
software engineers and have a much longer shelf life than specific technologies. According to
McConnell, these principles do not become obsolete as quickly and are likely to serve a
professional programmer throughout their career.

McConnell also argues that by the year 2000, a "stable core" of software engineering
knowledge had emerged, representing about 75% of what is needed to develop complex
systems. This stable core consists of fundamental principles that underlie software engineering
practices, providing a solid foundation for applying and evaluating software engineering
models, methods, and tools.

14. Planning Principles

Principle 1: Understand the Scope: Clearly define the project scope to establish a clear
destination for the software team, guiding all planning and execution efforts.

Principle 2: Involve Stakeholders: Engage stakeholders in the planning process to define
priorities, constraints, and negotiate project-related issues such as delivery order and timelines.

Principle 3: Recognize Iterative Planning: Understand that planning is iterative and must
adapt to changes as work progresses. Replan after each software increment based on user
feedback and project developments.
Principle 4: Base Estimates on Known Information: Provide estimates for effort, cost, and
duration based on current knowledge. Reliable estimates depend on having accurate and clear
information.

Principle 5: Consider Risks in Planning: Identify high-impact, high-probability risks and
develop contingency plans. Adjust the project plan and schedule to accommodate potential
risks.

Principle 6: Be Realistic: Acknowledge that factors such as human error, communication
issues, and inevitable changes can impact the project. Build these realities into the project plan.

Principle 7: Adjust Granularity: Adapt the level of detail in the project plan according to the
time frame. Use high granularity for near-term tasks and lower granularity for long-term tasks,
as details become less certain over time.

Principle 8: Define Quality Assurance: Specify methods for ensuring quality in the plan, such
as scheduling technical reviews or using pair programming, to maintain high standards
throughout the project.

Principle 9: Accommodate Change: Outline how changes will be managed, including
procedures for customer requests, immediate implementation, and impact assessment of
changes.

Principle 10: Track and Adjust the Plan: Monitor progress frequently, ideally daily, to
identify and address issues promptly. Adjust the plan as needed to stay on track and manage
any slippage.

15. Requirement Modelling Principles

Principle 1: Information Domain Representation: Understand and represent the information
domain, which includes data flowing into, out of, and within the system, as well as persistent
data stores.

Principle 2: Function Definition: Clearly define the software's functions, which provide value
to end users and internal support, ranging from general purpose to detailed processing tasks.

Principle 3: Behaviour Representation: Represent the software's behaviour in response to
external events, driven by interactions with the environment.

Principle 4: Partitioning: Use a divide-and-conquer approach by partitioning complex
problems into manageable subproblems in a layered manner.
Principle 5: Move from Essence to Implementation: Begin with the essential problem from
the user's perspective, then gradually move toward implementation details in the design phase.

16. Coding Principles

Coding Principles. The principles that guide the coding task are closely aligned with
programming style, programming languages, and programming methods. However, there are
a number of fundamental principles that can be stated:

Preparation principles: Before you write one line of code, be sure you

• Understand the problem you’re trying to solve.

• Understand basic design principles and concepts.

• Pick a programming language that meets the needs of the software to be built and the
environment in which it will operate.

• Select a programming environment that provides tools that will make your work easier.

• Create a set of unit tests that will be applied once the component you code is completed.

Programming principles: As you begin writing code, be sure you

• Constrain your algorithms by following structured programming [Boh00] practice.

• Consider the use of pair programming.

• Select data structures that will meet the needs of the design.

• Understand the software architecture and create interfaces that are consistent with it.

• Keep conditional logic as simple as possible.

• Create nested loops in a way that makes them easily testable.

• Select meaningful variable names and follow other local coding standards.

• Write code that is self-documenting.

• Create a visual layout (e.g., indentation and blank lines) that aids understanding.

Validation Principles: After you’ve completed your first coding pass, be sure you

• Conduct a code walkthrough when appropriate.

• Perform unit tests and correct errors you’ve uncovered.

• Refactor the code.
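Several of these principles (meaningful variable names, simple conditional logic, self-documenting code) can be seen side by side in a small sketch; the discount function below is invented purely for illustration.

```python
# Hard to read: cryptic names and nested conditionals.
def f(a, b):
    if a:
        if b > 100:
            return b * 0.9
        else:
            return b
    else:
        return b

# Same behaviour, rewritten to follow the principles above:
# meaningful names, flattened conditional logic, self-documenting structure.
LARGE_ORDER_LIMIT = 100
MEMBER_DISCOUNT = 0.9

def discounted_total(is_member, order_total):
    """Members get a discount on orders above the large-order limit."""
    qualifies = is_member and order_total > LARGE_ORDER_LIMIT
    return order_total * MEMBER_DISCOUNT if qualifies else order_total
```

The second version needs no external documentation: the constants and names state the business rule directly, and the single condition is easy to test.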

17. Testing Principles:


Principle 1: Traceability to Requirements: All tests should be directly traceable to
customer requirements, as the most critical errors are those that prevent the software from
meeting its requirements.

Principle 2: Early Test Planning: Tests should be planned early, ideally after the
requirements model is complete, and before code generation begins, to ensure thorough
preparation.

Principle 3: Pareto Principle in Testing: The Pareto principle suggests that 80% of errors
are likely found in 20% of the components, so identifying and thoroughly testing these
components is crucial.

Principle 4: Small to Large Testing: Testing should start with individual components ("in
the small") and progressively expand to integrated clusters and the entire system ("in the
large").

Principle 5: Exhaustive Testing is Impossible: Due to the vast number of possible path
combinations, exhaustive testing is unfeasible, but adequate coverage of program logic and
conditions can be achieved.
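Principle 1 (traceability) is often made concrete by tagging each test with the requirement it verifies. The sketch below shows one possible way to do this in Python; the requirement IDs (REQ-017, REQ-018) and the tax function are invented for the example.

```python
# Each test records which customer requirement it traces back to, so a
# failure can be mapped directly to an unmet requirement.

def requirement(req_id):
    """Decorator attaching a requirement ID to a test function."""
    def tag(test_fn):
        test_fn.req_id = req_id
        return test_fn
    return tag

def apply_tax(amount, rate):  # hypothetical unit under test
    return round(amount * (1 + rate), 2)

@requirement("REQ-017")
def test_tax_is_added_to_amount():
    assert apply_tax(100.0, 0.18) == 118.0

@requirement("REQ-018")
def test_zero_rate_leaves_amount_unchanged():
    assert apply_tax(50.0, 0.0) == 50.0

def run_traced_tests(tests):
    """Run tests and report pass/fail keyed by requirement ID."""
    results = {}
    for t in tests:
        try:
            t()
            results[t.req_id] = "pass"
        except AssertionError:
            results[t.req_id] = "fail"
    return results

report = run_traced_tests([test_tax_is_added_to_amount,
                           test_zero_rate_leaves_amount_unchanged])
```

A report keyed by requirement ID lets the team see at a glance which customer requirements are covered and which are failing.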

18. Deployment Principles:

Principle 1: Manage Customer Expectations: Ensure customer expectations align with
what the team can deliver to avoid disappointment and maintain positive feedback and team
morale.

Principle 2: Assemble and Test a Complete Delivery Package: Before delivery, compile
all software, support files, and documentation into a complete package and thoroughly beta-
test it across various computing environments.

Principle 3: Establish a Support Regime: Set up a robust support system before delivery
to provide timely and accurate assistance to end users, ensuring customer satisfaction.

Principle 4: Provide Instructional Materials: Deliver appropriate instructional materials,
including training aids, troubleshooting guidelines, and updates on new software
increments, to help users effectively use the software.

Principle 5: Fix Bugs Before Delivery: Prioritize fixing bugs before delivering software,
even under time pressure, as delivering a high-quality product late is better than delivering
a buggy product on time.
SEPM – MODULE 4

1. What is project management? What is the importance of project management?
Project management includes planning, executing, and overseeing projects to ensure they are
completed on time, within budget, and meet specified requirements. The importance of project
management is its role in managing resources effectively, mitigating risks, ensuring quality
control, and maintaining clear communication among stakeholders.

Importance:

1. Financial Stakes: ICT (Information and communication technology) projects often
involve significant financial investments. Mismanagement can lead to overspending
and reduced funds for other essential services, such as healthcare.

2. Project Success Rates: Many projects fail due to poor management. For example, the
Standish Group’s analysis found that only a third of projects were successful, with
many being late or over budget.

3. Skill and Approach: Effective project management requires specific skills and a
proven approach to managing projects and risks. The National Audit Office in the UK
identified a lack of these skills as a key factor in project failures.

In summary, project management is essential for guiding projects to successful completion,
ensuring efficient use of resources, and minimizing risks and failures.

2. What are the different activities covered by software project management?
The Feasibility Study: This is an investigation into whether a prospective project is worth
starting, that is, whether it has a valid business case. Information is gathered about the requirements of the
proposed application. The probable developmental and operational costs, along with the value
of the benefits of the new system, are estimated. The study could be part of a strategic planning
exercise examining and prioritizing a range of potential software developments.

Planning: If the feasibility study produces results which indicate that the prospective project
appears viable, planning of the project can take place. However, for a large project, we would
not do all our detailed planning right at the beginning. We would formulate an outline plan for
the whole project and a detailed one for the first stage. More detailed planning of the later
stages would be done as they approached. This is because we would have more detailed and
accurate information upon which to base our plans nearer to the start of the later stages.
3. Write about traditional versus modern project management practices.
1. Planning Incremental Delivery

• Traditional Practice: In traditional project management, projects were planned with
long-term completion goals, and extensive planning was done upfront before the actual
execution began. The entire project was mapped out, and changes during execution
were discouraged.

• Modern Practice: Modern approaches advocate for incremental delivery, where the
project is divided into smaller, manageable increments. This allows for regular updates
and adjustments based on ongoing customer feedback, making the project more
adaptable to changing requirements.

2. Quality Management

• Traditional Practice: Quality management in traditional practices was more reactive,
with quality checks often performed at the end of the development process. The focus
was on meeting predefined specifications.
• Modern Practice: Modern project management involves continuous quality
assessment throughout the project lifecycle. This proactive approach ensures that
quality is built into the product at every stage, with frequent evaluations to maintain
high standards.

3. Change Management

• Traditional Practice: Traditionally, changes to project requirements were minimal and
typically "frozen" after initial approval. The project followed a rigid path with little
room for modifications once development had started.

• Modern Practice: In modern project management, change is an integral part of the
process. Customer feedback is actively sought, and changes are incorporated
incrementally. This flexibility allows for continuous improvement and adaptation,
ensuring that the final product meets evolving customer needs.

4. Requirements Management

• Traditional Practice: In older methodologies, requirements were defined and agreed
upon before the project started, with minimal changes allowed during development.
This often led to a mismatch between customer expectations and the delivered product.

• Modern Practice: Modern practices involve continuous interaction with customers to
refine and adjust requirements throughout the project. This dynamic approach ensures
that the project remains aligned with customer needs and reduces the risk of delivering
a product that is out of sync with their expectations.

5. Release Management

• Traditional Practice: Traditional release management involved fewer releases,
typically at the end of the project, after all development was completed. The focus was
on delivering a final, complete product.

• Modern Practice: Modern release management supports frequent releases throughout
the project lifecycle. By releasing smaller, functional components regularly, teams can
respond quickly to feedback and ensure that the software evolves in line with customer
expectations.

6. Risk Management

• Traditional Practice: Risk management was often less emphasized in traditional
practices, with risks being identified and mitigated only after they occurred, leading to
potential project delays or failures.

• Modern Practice: Modern project management places a strong emphasis on proactive
risk management. Risks are identified early, continuously assessed, and mitigated
through strategic planning, ensuring that the project remains on track.
7. Scope Management

• Traditional Practice: In traditional scope management, the project scope was defined
at the outset and adhered to strictly, with little room for changes. This often led to
"scope creep" when changes were necessary but not properly managed.

• Modern Practice: Modern scope management is more flexible, allowing for
adjustments to the project scope based on ongoing feedback and changing
requirements. However, it also requires vigilant management to avoid unnecessary or
ornamental changes that don't add value, often referred to as "gold plating" or "scope
creep."

4. What is the project management life cycle? Describe each phase of it in detail.


1. Project Initiation: The project initiation phase starts with project concept development.
During concept development the different characteristics of the software to be developed are
thoroughly understood, including the scope of the project, the project constraints, the
cost that would be incurred, and the benefits that would accrue. Based on this understanding, a
feasibility study is undertaken to determine whether the project would be financially and technically
feasible. Based on the feasibility study, the business case is developed. Once the top management
agrees to the business case, the project manager is appointed, the project charter is written, and
finally the project team is formed. This sets the stage for the manager to begin the project planning
phase.

W5HH Principle: Barry Boehm summarized the questions that need to be asked and
answered in order to have an understanding of these project characteristics:

➢ Why is the software being built?

➢ What will be done?

➢ When will it be done?

➢ Who is responsible for a function?

➢ Where are they organizationally located?

➢ How will the job be done technically and managerially?

➢ How much of each resource is needed?

2. Project Bidding: Once the top management is convinced by the business case, the project
charter is developed. For some categories of projects, it may be necessary to have formal
bidding process to select suitable vendor based on some cost-performance criteria. The
different types of bidding techniques are:
• Request for quotation (RFQ): An organization advertises an RFQ if it has a good
understanding of the project and the possible solutions.
• Request for Proposal (RFP): An organization has a reasonable understanding of the
problem to be solved; however, it does not have a good grasp of the solution aspects, i.e.,
it may not have sufficient knowledge about the different features to be implemented. The
purpose of an RFP is to get an understanding of the alternative solutions that can
be deployed, not vendor selection. Based on the RFP process, the requesting
organization can form a clear idea of the project solution required, based on which it
can form a statement of work (SOW) for requesting RFQs from the vendors.
• Request for Information (RFI): An organization soliciting bids may publish an RFI.
Based on the vendor response to the RFI, the organization can assess the competencies
of the vendors and shortlist the vendors who can bid for the work.

3. Project Planning: During the project planning the project manager carries out several
processes and creates the following documents:

• Project plan: This document identifies the project tasks and a schedule for
the project tasks that assigns project resources and time frames to the tasks.
• Resource Plan: It lists the resources, manpower and equipment that would be required
to execute the project.
• Financial Plan: It documents the plan for manpower, equipment and other costs.
• Quality Plan: Plan of quality targets and control plans are included in this document.
• Risk Plan: This document lists the identification of the potential risks, their
prioritization and a plan for the actions that would be taken to contain the different risks.

4. Project Execution: In this phase the tasks are executed as per the project plan developed
during the planning phase. Quality of the deliverables is ensured through execution of proper
processes. Once all the deliverables are produced and accepted by the customer, the project
execution phase completes and the project closure phase starts.

5. Project Closure: Project closure involves completing the release of all the required
deliverables to the customer along with the necessary documentation. All the Project resources
are released and supply agreements with the vendors are terminated and all the pending
payments are completed. Finally, a post-implementation review is undertaken to analyse the
project performance and to list the lessons for use in future projects.

5. Define project success and failure.


Project Success:

A project is generally considered successful if it meets its project objectives, which typically
include:
1. Delivering the Agreed Functionality: The project meets the functional requirements
and specifications as agreed upon at the outset.

2. Achieving the Required Level of Quality: The final product is of the quality expected
and required by stakeholders.

3. Being Completed on Time: The project is delivered within the agreed timeframe.

4. Being Completed Within Budget: The project does not exceed the allocated financial
resources.

However, success in business terms goes beyond meeting these objectives. A project is
successful in business terms if the value of the benefits generated by the project exceeds the
costs incurred.
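This comparison can be sketched as a simple net-benefit calculation. The figures below are hypothetical, purely to illustrate the idea that business success means benefits exceed costs:

```python
# Hypothetical sketch: a project is a business success when the value
# of the benefits generated exceeds the costs incurred.
def net_benefit(benefits, costs):
    """Return total benefit minus total cost."""
    return sum(benefits) - sum(costs)

# Example: development cost plus two years of support costs,
# against two years of estimated revenue/savings (all figures invented).
costs = [120_000, 15_000, 15_000]
benefits = [90_000, 110_000]

nb = net_benefit(benefits, costs)
print("Net benefit:", nb)           # 50000
print("Business success:", nb > 0)  # True
```

Note that this captures only the financial side; as discussed below, a project can meet its delivery objectives yet still fail this business test, or vice versa.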

Project Failure:

A project can be considered a failure if:

1. It Does Not Meet Project Objectives: It fails to deliver the agreed functionality, does
not meet the required quality, is late, or exceeds the budget.

2. It Fails in Business Terms: Even if the project meets its technical objectives, it may
still be a failure if it does not provide the expected business benefits. For example, a
product might be delivered on time and within budget, but if it fails to attract customers
or generate revenue, it is a business failure.

• A project might be successful on delivery but later become a business failure if it does not
continue to generate value or if the market changes.

• Conversely, a project could be delayed and over budget, but if its deliverables generate
significant long-term benefits, it might be considered a success over time.

• The distinction between project objectives and business success is crucial. Project
managers often have control over project costs but less control over the external factors that
influence the business success of the project deliverables.

• Reducing the gap between project success and business success can involve considering
broader business issues, such as market research, customer feedback, and risk management,
during the project planning and execution phases.

• Long-term benefits such as technical expertise, reusable code, and strong customer
relationships can contribute to the success of future projects, even if the immediate project
faces challenges.

6. Write about some ways of categorizing software projects.


Changes to the Characteristics of Software Projects

• Code Reusability: In the past, software development required writing code from
scratch with no reusability options. Today, almost every programming language
supports code reusability, allowing developers to customize and extend existing
code efficiently.
• Project Duration: Historically, software projects could span multiple years. Now,
project durations have significantly reduced to only a few months due to
advancements in development methodologies and tools.

Compulsory versus Voluntary Users

• Compulsory Systems: These are systems that users are required to use to perform
their tasks, such as an order processing system in an organization.
• Voluntary Systems: These systems are used at the user's discretion, such as
computer games, where requirements are often less precise and depend on
developer creativity, market surveys, and prototype evaluations.

Information Systems versus Embedded Systems

• Information Systems: These systems enable staff to carry out office processes,
such as a stock control system used to manage inventory.
• Embedded Systems: These control machines or processes, such as an air
conditioning system in a building. Some systems may combine elements of both,
like a stock control system that also manages an automated warehouse.

Software Products versus Services

• Software Product Development: Involves developing software for a broad
audience with general customer requirements in mind, such as Microsoft Windows
or Oracle's database management systems.
• Software Services: Involves a range of activities such as customization,
outsourcing, maintenance, testing, and consultancy, tailored to specific client needs
and objectives.
Outsourced Projects

• Expertise Deficiency: Companies may outsource parts of a project when they lack
the necessary expertise to develop certain components internally.
• Cost-Effectiveness: Outsourcing can be a cost-effective solution, allowing
companies to leverage specialized skills and resources from external providers.

Objectives-Driven Development

• Objective Identification: The first stage of many software projects involves an
objectives-driven phase, where the need for a new software system is identified and
recommendations are made.
• Software Creation: The next stage involves the actual development of the software
product based on the objectives and recommendations from the first stage.

7. Interpret the Contract Management in Project Management.

8. Write short notes on:

i) SMART objectives
ii) Management control with project control cycle.

Management control in a project context involves setting objectives for a system and
continuously monitoring its performance to ensure it aligns with the set objectives. The process
is dynamic, requiring constant adjustments and updates based on the ongoing circumstances
and challenges that arise during the project's execution.

Project Control Cycle:

The project control cycle typically includes the following stages:

1. Setting Objectives:

o Establishing clear, measurable objectives that the project needs to achieve.

o For the ICT project, the objective is to replace paper-based records with a
centrally organized database, ensuring that the system is fully operational once
all records are transferred.

2. Data Collection:

o Definition: Gathering raw data related to the project’s progress and other
critical parameters.

o Data needs to be collected about the percentage of records processed, average
documents processed per day per person, and the estimated completion date for
record transfers. Simple data like "location X has processed 2000 documents"
is insufficient for effective management control.

3. Processing and Analysis:

o Definition: Transforming raw data into useful information that can be used to
evaluate the project's progress.

o The collected data should be analyzed to provide insights into the actual
performance against planned targets, such as comparing the estimated
completion date with the overall project timeline.

4. Decision-Making:

o Definition: Based on the analyzed data, decisions are made to keep the project
on track or adjust plans as necessary.

o If the analysis shows that some branches are behind in transferring details,
management may need to make decisions about reallocating resources, such as
moving staff temporarily to assist in data transfer.

5. Implementation of Decisions:
o Definition: Taking corrective actions based on the decisions made to ensure the
project remains aligned with its objectives.

o Actions might include adjusting resource allocation or updating procedures to


accelerate data transfer.

6. Review and Feedback:

o Definition: Continuously monitoring the outcomes of implemented decisions


and making further adjustments if needed.

o The process is iterative, and progress should be regularly reviewed, ensuring


that the project adapts to any new challenges or changes in circumstances.
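The data-collection and analysis stages above can be sketched as a small calculation. The function name and all figures below are hypothetical; they illustrate how an estimated completion date for the record-transfer activity might be derived from the processing rate:

```python
# Minimal sketch of the analysis step in the project control cycle:
# estimate the completion date of the record transfer from the number
# of records remaining and the average processing rate (invented figures).
from datetime import date, timedelta

def estimated_completion(start, records_remaining, docs_per_day_per_person, staff):
    """Estimate the finish date for the record-transfer activity."""
    daily_rate = docs_per_day_per_person * staff
    days_needed = -(-records_remaining // daily_rate)  # ceiling division
    return start + timedelta(days=days_needed)

finish = estimated_completion(date(2024, 1, 1),
                              records_remaining=10_000,
                              docs_per_day_per_person=50,
                              staff=4)
print(finish)  # 2024-02-20
```

Comparing this estimate against the planned completion date is exactly the kind of information the decision-making stage needs, e.g. to decide whether staff should be reallocated.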

10. Explain the software development life cycle with a block diagram

Requirements Analysis: This starts with requirement elicitation or requirement gathering,
which establishes what the users require of the system that the project is to implement. Some
work along these lines will almost certainly have been carried out when the project was
evaluated, but now the original information obtained needs to be updated and supplemented.
Design: A design has to be drawn up which meets the specification. This design will be in two
stages. One will be the external or user design concerned with the external appearance of the
application. The other produces the physical design which tackles the way that the data and
software procedures are to be structured internally.

➢ Architecture Design: This maps the requirements to the components of the system
that is to be built. At the system level, decisions will need to be made about which
processes in the new system will be carried out by the user and which can be
computerized. This design of the system architecture thus forms an input to the
development of the software requirements. A second architecture design process then
takes place which maps the software requirements to software components.

➢ Detailed Design: Each software component is made up of a number of software
units that can be separately coded and tested. The detailed design of these units is
carried out separately.

Coding: This may refer to writing code in a procedural language or an object-oriented language
or could refer to the use of an application-builder. Even where software is not being built from
scratch, some modification to the base package could be required to meet the needs of the new
application.

Testing (Verification and Validation): Whether software is developed specially for the
current application or not, careful testing will be needed to check that the proposed system
meets its requirements.

• Integration: The individual components are collected together and tested to see if they
meet the overall requirements. Integration could be at the level of software where
different software components are combined, or at the level of the system as a whole
where the software and other components of the system such as the hardware platforms
and networks and the user procedures are brought together.
• Qualification Testing: The system, including the software components, has to be
tested carefully to ensure that all the requirements have been fulfilled.

Implementation/Installation: Some system development practitioners refer to the whole of
the project after design as ‘implementation’ (that is, the implementation of the design) while
others insist that the term refers to the installation of the system after the software has been
developed.

Acceptance Support: Once the system has been implemented there is a continuing need for
the correction of any errors that may have crept into the system and for extensions and
improvements to the system. Maintenance and support activities may be seen as a series of
minor software projects.
11. List the characteristics of projects and show the differences between
Contract management and project management

12. Elucidate the concepts in activity planning in software project management.
A plan for an activity must be based on some idea of a method of work. For example, if you
were asked to test some software, you may know nothing about the software to be tested, but
you could assume that you would need to:

• Analyze the requirements for the software.

• Devise and write test cases that will check that each requirement has been satisfied.

• Create test scripts and expected results for each test case.

• Compare the actual results and the expected results and identify discrepancies.
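The last two steps above, running test cases and identifying discrepancies between actual and expected results, can be sketched as follows. The `add` function and the test cases are hypothetical stand-ins for the software under test:

```python
# Minimal sketch: run each test case, compare actual against expected
# results, and report any discrepancies (the function under test is invented).
def add(a, b):
    return a + b

test_cases = [
    {"inputs": (2, 3), "expected": 5},
    {"inputs": (-1, 1), "expected": 0},
    {"inputs": (0, 0), "expected": 0},
]

def run_tests(func, cases):
    """Return a list of (inputs, expected, actual) for failing cases."""
    discrepancies = []
    for case in cases:
        actual = func(*case["inputs"])
        if actual != case["expected"]:
            discrepancies.append((case["inputs"], case["expected"], actual))
    return discrepancies

print(run_tests(add, test_cases))  # [] means no discrepancies
```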

Methodology: While a method relates to a type of activity in general, a plan takes that method
(and perhaps others) and converts it to real activities, identifying for each activity:

• Its start and end dates.

• Who will carry it out.

• What tools and materials - including information - will be needed.

13. Explain with a block diagram how a project management life cycle (PMC) drives a software development life cycle.
14. List the different types of stakeholders responsible for successful
completion of software project.
Stakeholders are the people who have a stake or interest in the project.

Categories of Stakeholders:

1. Internal to the Project Team: Under direct managerial control of the project leader.
2. External to the Project Team but within the Same Organization: For example, users
assisting with system testing; requires negotiated commitment.
3. External to Both the Project Team and the Organization: Includes customers or
users benefiting from the system and contractors working on the project; relationships
based on contracts.
• Different stakeholders have different objectives that need to be recognized and reconciled
by the project leader (e.g., ease of use for end-users vs. staff savings for managers).
• Theory W: Proposed by Boehm and Ross, where the project manager aims to create win-
win situations for all parties involved.
• Important stakeholder groups can sometimes be missed, especially in unfamiliar business
contexts.
• Communication Plan: Recommended practice is to create a communication plan at the
start of a project to coordinate stakeholder efforts effectively.

15. List the activities involved in management and explain principal project
management process.
Management in the context of software project management involves several key activities:

1. Planning: Deciding what is to be done.

2. Organizing: Making arrangements.

3. Staffing: Selecting the right people for the job.

4. Directing: Giving instructions.

5. Monitoring: Checking on progress.

6. Controlling: Taking action to remedy hold-ups.


7. Innovating: Coming up with new solutions.

8. Representing: Liaising with clients, users, developers, suppliers, and other stakeholders.

Principal Project Management Process

• The project management process is iterative, meaning plans are revised as more
information becomes available.

• Accurate estimation of cost, duration, and effort is crucial for effective planning and
execution.

• Risk management is an essential component of successful project management.

• Monitoring and control are ongoing activities throughout the project lifecycle.

Project Initiation:

• Initial planning is conducted to establish a project's foundation.

Project Planning:

• Detailed planning occurs, including estimation, scheduling, staffing, risk management,
and creating other essential plans.
• Estimation: Determining project cost, duration, and effort.
• Scheduling: Developing manpower and resource schedules.
• Staffing: Creating staffing plans and organizing the team.
• Risk Management: Identifying, analyzing, and planning for potential risks.
• Miscellaneous Planning: Developing quality assurance, configuration management,
and other necessary plans.

Project Execution:
• The project is implemented according to the plan, with ongoing monitoring and control
to ensure it stays on track.
• Monitoring: Tracking project progress.
• Control: Taking corrective actions to keep the project on track.

Project Closing:

• All project activities are completed, and contracts are formally closed.

• Completion of Activities: Finalizing all project tasks.

• Contract Closure: Formally ending all contractual obligations.

16. Write a short note on Business Case


A business case is a document that justifies the investment in a project or initiative.

It outlines the expected benefits, costs, and risks, providing a clear rationale for proceeding.

Key Components of a Business Case

• Problem or opportunity: Clearly defines the issue the project aims to address.

• Objectives: Specifies the goals and desired outcomes of the project.

• Alternatives: Explores different approaches to solving the problem.

• Costs and benefits: Quantifies the financial implications and expected returns.

• Risks: Identifies potential challenges and mitigation strategies.

Importance of a Business Case

• Secures funding: Convinces stakeholders to allocate resources.

• Aligns with strategic goals: Ensures the project contributes to overall business
objectives.

• Decision-making: Provides a structured approach to evaluating project viability.

• Risk management: Helps identify and address potential issues.

17. Write a short note on Project Charter


A project charter is a formal document that authorizes the initiation of a project. It serves as a
high-level blueprint, outlining the project's purpose, objectives, scope, stakeholders, and initial
resources. While it's not as detailed as a project plan, the charter provides a clear direction for
the project team.

Importance of a Project Charter

• Ensures that the project team is aligned with the project's goals.

• Clearly defines roles and responsibilities for stakeholders.

• Provides a basis for allocating necessary resources.

• Serves as a reference point for making project decisions.

• Helps identify potential risks early in the project lifecycle.

18. What is a Project? Explain the activities that benefit from project
management. List the characteristics that distinguish projects.
A project is a temporary endeavour undertaken to create a unique product, service, or result. It
has a defined beginning and end, and requires the organized application of resources and
activities to achieve specific objectives.

Activities Benefiting from Project Management

Project management is most beneficial for activities that fall between routine jobs and
exploratory projects. These activities share characteristics of both, requiring a degree of
planning and control, but also involving a certain level of uncertainty and novelty.

Examples of such activities include:

• Product development: Creating new products or services involves a mix of planning
and innovation.

• System implementation: Installing new software or hardware systems often requires
careful planning and adaptation.

• Construction projects: Building structures involves both routine tasks and unexpected
challenges.
• Research and development: Exploratory work combined with structured
experimentation benefits from project management.

Essentially, any activity that is complex, has a clear beginning and end, and involves multiple
interconnected tasks is a potential candidate for project management. By applying project
management principles, organizations can improve efficiency, reduce risks, and increase the
likelihood of successful project outcomes.

The following characteristics distinguish projects:

• non-routine tasks are involved;


• planning is required;
• specific objectives are to be met or a specified product is to be created;
• the project has a predetermined time span;
• work is carried out for someone other than yourself;
• work involves several specialisms;
• people are formed into a temporary work group to carry out the task;
• work is carried out in several phases;
• the resources that are available for use on the project are constrained;
• The project is large or complex.
SEPM – MODULE 5

1. The place of software quality in project planning.


1. Select Project (Step 0)

o Begin by choosing the project to be undertaken. This step involves deciding on
a project that aligns with organizational goals and strategic objectives.

2. Identify Project Scope and Objectives (Step 1)

o Clearly define the project's scope and its objectives. Determine what the project
aims to achieve and the boundaries within which it will operate.

3. Identify Project Infrastructure (Step 2)

o Establish the necessary infrastructure required to support the project. This
includes tools, technologies, resources, and team structures.

4. Analyze Project Characteristics (Step 3)

o Examine the characteristics of the project. This involves understanding the
project's requirements, complexity, and potential challenges.

5. Identify the Products and Activities (Step 4)

o Identify the products and the activities required to produce them. This step
involves breaking down the project into manageable tasks and defining what
needs to be done.

6. Estimate Effort for Each Activity (Step 5)

o Estimate the effort required to complete each identified activity. This includes
assessing the time, resources, and cost involved in each task.

7. Identify Activity Risks (Step 6)

o Identify the potential risks associated with each activity. Consider what could
go wrong and the impact these risks may have on the project's success.

8. Allocate Resources (Step 7)

o Allocate the necessary resources to each activity. This includes assigning team
members, budget, and tools needed to complete the tasks.

9. Review/Publicize Plan (Step 8)


o Review the project plan to ensure its completeness and accuracy. Publicize the
plan to all stakeholders to ensure everyone is aware of the project's objectives,
activities, and timeline.

10. Execute Plan (Step 9)

o Implement the project plan by carrying out the defined activities. Monitor
progress and make adjustments as necessary to stay on track.

11. Lower-Level Planning (Step 10)

o Engage in more detailed planning for lower-level tasks as the project progresses.
This involves refining activities, updating estimates, and continually assessing
risks.

12. Review (Feedback Loop)

o Continually review the project at each stage to ensure quality and alignment
with objectives. This involves feedback loops where you revisit previous steps
to refine and improve the plan.

This step-wise process ensures a structured approach to project management, emphasizing
thorough planning, risk management, and resource allocation to achieve successful project
outcomes.
2. Importance of Software Quality
Nowadays, quality is an important aspect of every organization. Good quality software is the
requirement of all users. There are several reasons why software quality is important; the most
important among them are described below:

Increasingly criticality of software:

➢ The final customer or user is naturally anxious about the general quality of the software,
especially about its reliability.

➢ They are concerned about safety because of their dependence on the software; systems such
as aircraft control systems are safety-critical.

Earlier detection of errors during development:

➢ As software is developed through a number of phases, the output of one phase is given as input
to the next. So, if an error in an initial phase is not found, then at a later stage it is difficult
to fix that error and the cost involved is greater.

The intangibility of software:

➢ Difficulty in verifying the satisfactory completion of project tasks.

➢ Tangibility is achieved by requiring developers to produce "deliverables" that can be
examined for quality.

Accumulating errors during software development:

➢ Errors in earlier steps can propagate and accumulate in later steps.

➢ Errors found later in the project are more expensive to fix.

➢ The unknown number of errors makes the debugging phase difficult to control.

3. Explain the different levels of Capability process models.


1. SEI Capability Maturity Model (SEI CMM)

The SEI Capability Maturity Model (CMM) is a framework developed by the Software
Engineering Institute (SEI) to assess and improve the maturity of software development
processes within organizations. It categorizes organizations into five maturity levels based on
their process capabilities and practices:

SEI CMM Levels:

1. Level 1: Initial

Characteristics:

❖ Chaotic and ad hoc development processes.


❖ Lack of defined processes or management practices.

❖ Relies heavily on individual heroics to complete projects.

Outcome:

❖ Project success depends largely on the capabilities of individual team members.

❖ High risk of project failure or delays.

2. Level 2: Repeatable

Characteristics:

❖ Basic project management practices like planning and tracking costs/schedules are in place.

❖ Processes are somewhat documented and understood by the team.

Outcome:

❖ Organizations can repeat successful practices on similar projects.

❖ Improved project consistency and some level of predictability.

3. Level 3: Defined

Characteristics:

❖ Processes for both management and development activities are defined and documented.

❖ Roles and responsibilities are clear across the organization.

❖ Training programs are implemented to build employee capabilities.

❖ Systematic reviews are conducted to identify and fix errors early.

Outcome:

❖ Consistent and standardized processes across the organization.

❖ Better management of project risks and quality.

4. Level 4: Managed

Characteristics:

❖ Processes are quantitatively managed using metrics.

❖ Quality goals are set and measured against project outcomes.

❖ Process metrics are used to improve project performance.

Outcome:

❖ Focus on managing and optimizing processes to meet quality and performance goals.

❖ Continuous monitoring and improvement of project execution.


5. Level 5: Optimizing

Characteristics:

❖ Continuous process improvement is ingrained in the organization's culture.

❖ Process metrics are analysed to identify areas for improvement.

❖ Lessons learned from projects are used to refine and enhance processes.

❖ Innovation and adoption of new technologies are actively pursued.

Outcome:

❖ Continuous innovation and improvement in processes.

❖ High adaptability to change and efficiency in handling new challenges.

❖ Leading edge in technology adoption and process optimization.

Use of SEI CMM:

1) Capability Evaluation: Used by contract awarding authorities (like the US DoD) to assess
potential contractors' capabilities to predict performance if awarded a contract.

2) Process Assessment: Internally used by organizations to improve their own process
capabilities through assessment and recommendations for improvement.

2. CMMI (Capability Maturity Model Integration):

Initial (Level 1)

• Key Process Areas: Not applicable.


• Description: Processes are unpredictable and poorly controlled. The organization often
reacts to problems as they occur.

Managed (Level 2 )

• Key Process Areas: Requirements management, project planning and monitoring,
supplier agreement management, measurement and analysis, process and product
quality assurance, configuration management.
• Description: Processes are planned and executed in accordance with policy; projects
are managed and ensure that processes are performed as planned.

Defined (Level 3)

• Key Process Areas: Requirements development, technical solution, product
integration, verification, validation, organizational process focus and definition,
training, integrated project management, risk management, integrated teaming,
integrated supplier management, decision analysis and resolution, organizational
environment for integration.
• Description: Processes are well characterized and understood, and are described in
standards, procedures, tools, and methods. The organization’s set of standard processes
is established and improved over time.

Quantitatively Managed (Level 4)

• Key Process Areas: Organizational process performance, quantitative project
management.
• Description: The organization and projects establish quantitative objectives for quality
and process performance and use them as criteria in managing projects. Processes are
controlled using statistical and other quantitative techniques.

Optimizing (Level 5)

• Key Process Areas: Organizational innovation and deployment, causal analysis and
resolution.
• Description: The focus is on continuous process improvement. The organization
continually improves its processes based on a quantitative understanding of the
common causes of variation inherent in processes.

Benefits of CMMI

❖ Broad Applicability: CMMI's abstract nature allows it to be applied not only to software
development but also to various other disciplines and industries.

❖ Consistency and Integration: Provides a unified framework for improving processes,
reducing redundancy, and promoting consistency across organizational practices.

❖ Continuous Improvement: Encourages organizations to continuously assess and refine
their processes to achieve higher levels of maturity and performance.

4. Explain Quality Management Systems with Principles of BS EN ISO 9001:2000
ANS: Principles of BS EN ISO 9001:2000:

1. Customer Focus: Understanding and meeting customer requirements to enhance
satisfaction.

2. Leadership: Providing unity of purpose and direction for achieving quality objectives.

3. Involvement of People: Engaging employees at all levels to contribute effectively to the
QMS.

4. Process Approach: Focusing on individual processes that create products or deliver
services. Managing these processes as a system to achieve organizational objectives.

5. Continuous Improvement: Continually enhancing the effectiveness of processes based on
objective measurements and analysis.

6. Factual Approach to Decision Making: Making decisions based on analysis of data and
information.

7. Mutually Beneficial Supplier Relationships: Building and maintaining good relationships
with suppliers to enhance capabilities and performance.

5. Write about Software Quality


Quality is a rather vague term and we need to define carefully what we mean by it. For any
software system there should be three specifications:

• A functional specification describing what the system is to do

• A quality specification concerned with how well the functions are to operate

• A resource specification concerned with how much is to be spent on the system.

External and Internal Qualities:

External Qualities: Reflect the user's view, such as usability.

Internal Qualities: Known to developers, such as well-structured code, which may enhance
reliability.

Measuring Quality:

Necessity of Measurement: To judge if a system meets quality requirements, its qualities must
be measurable.

Good Measure: Relates the number of units to the maximum possible (e.g., faults per thousand
lines of code).

Clarification Through Measurement: Helps to define and communicate what quality really
means, effectively answering "how do we know when we have been successful?"
Direct vs. Indirect Measures:

Direct Measurement: Measures the quality itself (e.g., faults per thousand lines of code).

Indirect Measurement: Measures an indicator of the quality (e.g., number of user inquiries at a
help desk as an indicator of usability).

Setting Targets:

Impact on Project Team: Quality measurements set targets for team members.

Meaningful Improvement: Ensure that improvements in measured quality are meaningful.

Example: Counting errors found in program inspections may not be meaningful if errors are
allowed to pass to the inspection stage rather than being eradicated earlier.
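As a simple illustration of the direct measure mentioned above, defect density (faults per thousand lines of code) can be sketched as below; the figures used are hypothetical:

```python
def defect_density(faults, lines_of_code):
    """Direct measure of quality: faults per thousand lines of code (KLOC)."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return faults / (lines_of_code / 1000)

# Example: 18 faults found in a 12,000-line module
print(defect_density(18, 12000))  # 1.5 faults per KLOC
```

A falling defect density across releases would be one measurable target a project team could be set.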

Drafting a Quality Specification for Software Products

Measurements Applicable to Quality Characteristics in Software

6. Software Quality Models:


i. Garvin’s quality model:
ii. McCall’s quality model:

iii. Dromey’s Model:


iv. Boehm’s quality model:
7. Write about ISO 9126
ISO 9126 standards were first introduced in 1991 to tackle the question of the definition of
software quality. The original 13-page document was designed as a foundation upon which
further more detailed standard could be built. ISO9126 documents are now very lengthy.

Motivation might be-

• Acquirers who are obtaining software from external suppliers

• Developers who are building a software product

• Independent evaluators who are assessing the quality of a software product, not for
themselves but for a community of users.

ISO 9126 also introduces another set of quality elements – quality in use – for which the
following elements have been identified: effectiveness, productivity, safety and satisfaction.

8. Explain the six major external software quality characteristics identified by ISO 9126.
1. Functionality:

• Definition: The functions that a software product provides to satisfy user needs.
• Sub-characteristics: Suitability, accuracy, interoperability, security, compliance.
• ‘Functionality Compliance’ refers to the degree to which the software adheres to
application-related standard or legal requirements. Typically, these could be auditing
requirement. ‘Interoperability’ refers to the ability of software to interact with others.

2. Reliability:
• Definition: The capability of the software to maintain its level of performance under
stated conditions.
• Sub-characteristics: Maturity, fault tolerance, recoverability.
• Maturity refers to the frequency of failures due to faults in the software: the more faults
are identified, the more chances there are to remove them. Recoverability describes the
ability to restore the level of performance and recover the data affected by a failure.

3. Usability:

• Definition: The effort needed to use the software.


• Sub-characteristics: Understandability, learnability, operability, attractiveness.
• Understandability seems a straightforward quality to grasp, although the formal
definition – the attributes that bear on the users' effort for recognizing the logical
concept and its applicability – actually makes it less clear.
• Learnability has been distinguished from operability. A software tool might be easy to
learn but time-consuming to use, say because it uses a large number of nested menus. This
trade-off is acceptable for a package that is used only intermittently, but not where the
system is used for several hours each day by the end user. In the latter case, learnability
has been incorporated at the expense of operability.

4. Efficiency:

• Definition: The ability to use resources in relation to the amount of work done.
• Sub-characteristics: Time behaviour, resource utilization.
5. Maintainability:

• Definition: The effort needed to make changes to the software.


• Sub-characteristics: Analysability, modifiability, testability.
• Analysability is the quality that McCall called diagnosability: the ease with which the
cause of a failure can be determined. Changeability is the quality that others call
flexibility; the latter name unfortunately implies that the suppliers of the software are
always changing it. Stability means that there is a low risk of a modification to the
software having unexpected effects.

6. Portability:

• Definition: The ability of the software to be transferred from one environment to


another.
• Sub-characteristics: Adaptability, installability, co-existence.
• Portability compliance relates to those standards that have a bearing on portability.
Replaceability refers to the factors that give upward compatibility between old software
components and the new ones. 'Coexistence' refers to the ability of the software to share
resources with other software components; unlike 'interoperability', no direct data
passing is necessarily involved

9. List the guidelines given by ISO 9126 for the use of the quality
characteristics.

4. Identify the relevant internal measurements and the intermediate products in which they
appear.

• Identify and track internal measurements such as cyclomatic complexity, code coverage,
defect density, etc.

• Relate these measurements to intermediate products like source code, test cases, and
documentation.

5. Overall assessment of product quality: To what extent is it possible to combine ratings for
different quality characteristics into a single overall rating for the software?

• Use weighted quality scores to assess overall product quality.

• Focus on key quality requirements and address potential weaknesses early to avoid the need
for an overall quality rating later.
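The weighted quality score mentioned in item 5 can be sketched as below; the characteristics, ratings and weights shown are hypothetical examples, not prescribed by ISO 9126:

```python
def weighted_quality_score(ratings, weights):
    """Combine per-characteristic quality ratings into a single overall score,
    weighting each characteristic by its relative importance to the project."""
    assert set(ratings) == set(weights), "every rating needs a weight"
    total_weight = sum(weights.values())
    return sum(ratings[c] * weights[c] for c in ratings) / total_weight

ratings = {"functionality": 8, "reliability": 6, "usability": 7}  # on a 0-10 scale
weights = {"functionality": 3, "reliability": 2, "usability": 1}
print(round(weighted_quality_score(ratings, weights), 2))  # 7.17
```

As the guideline notes, such a single figure can mask a weak characteristic, so individual ratings should still be examined.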

10. Explain Product and Process Metrics.


Users assess the quality of a software product based on its external attributes, whereas during
development, the developers assess the product’s quality based on various internal attributes.
The internal attributes may measure either some aspects of product or of the development
process (called process metrics).
11. Explain product v/s process quality management.
12. ISO 15504 process assessment
ISO/IEC 15504, also known as SPICE (Software Process Improvement and Capability
Determination), is a standard for assessing and improving software development processes.
When assessors are judging the degree to which a process attribute is being fulfilled, they
allocate one of the following scores:

1. Process Definition (PD):

▪ Evidence: A section in the procedures manual that outlines the steps, roles, and
responsibilities for conducting requirements analysis.

▪ Assessment: Assessors would review the documented procedures to ensure they clearly define
how requirements analysis is to be conducted. This indicates that the process is defined (3.1 in
Table 13.5).

2. Process Deployment (PR):

▪ Evidence: Control documents or records showing that the documented requirements analysis
process has been used and followed in actual projects.

▪ Assessment: Assessors would look for signed-off control documents at each step of the
requirements analysis process, indicating that the defined process is being implemented and
deployed effectively (3.2 in Table 13.5).

13. Explain implementation of process improvement


Implementing process improvement in UVW, especially in the context of software
development for machine tool equipment, involves addressing several key challenges
identified within the organization.

Here’s a structured approach, drawing from CMMI principles, to address these issues and
improve process maturity:

Identified Issues at UVW

1. Resource Overcommitment:

Issue: Lack of proper liaison between the Head of Software Engineering and Project Engineers
leads to resource overcommitment across new systems and maintenance tasks simultaneously.

Impact: Delays in software deliveries due to stretched resources.

2. Requirements Volatility:
Issue: Initial testing of prototypes often reveals major new requirements.

Impact: Scope creep and changes lead to rework and delays.

3. Change Control Challenges:

Issue: Lack of proper change control results in increased demands for software development
beyond original plans.

Impact: Increased workload and project delays.

4. Delayed System Testing:

Issue: Completion of system testing is delayed due to a high volume of bug fixes.

Impact: Delays in product release and customer shipment.

Steps for Process Improvement

1. Formal Planning and Control

Objective: Introduce structured planning and control mechanisms to assess and distribute
workloads effectively.

Actions:

❖ Implement formal project planning processes where software requirements are mapped to
planned work packages.

❖ Define clear milestones and deliverables, ensuring alignment with both hardware and
software development phases.

❖ Monitor project progress against plans to identify emerging issues early.

Expected Outcomes:

❖ Improved visibility into project status and resource utilization.

❖ Early identification of potential bottlenecks or deviations from planned schedules.

❖ Enable better resource allocation and management across different projects.

2. Change Control Procedures

Objective: Establish robust change control procedures to manage and prioritize system changes
effectively.

Actions:

❖ Define a formal change request process with clear documentation and approval workflows.
❖ Ensure communication channels between development teams, testing groups, and project
stakeholders are streamlined for change notifications.

❖ Implement impact assessment mechanisms to evaluate the effects of changes on project


timelines and resources.

Expected Outcomes:

❖ Reduced scope creep and unplanned changes disrupting project schedules.

❖ Enhanced control over system modifications, minimizing delays and rework.

3. Enhanced Testing and Validation

Objective: Improve testing and validation processes to reduce delays in system testing and bug
fixes.

Actions:

❖ Strengthen collaboration between development and testing teams to ensure comprehensive


test coverage early in the development lifecycle.

❖ Implement automated testing frameworks where feasible to expedite testing cycles.

❖ Foster a culture of quality assurance and proactive bug identification throughout the
development phases.

Expected Outcomes:

❖ Faster turnaround in identifying and resolving bugs during testing.

❖ Timely completion of system testing phases, enabling on-time product releases.

Moving Towards Process Maturity Levels

Level 1 to Level 2 Transition:

Focus: Transition from ad-hoc, chaotic practices to defined processes with formal planning and
control mechanisms.

Benefits: Improved predictability in project outcomes, better resource management, and


reduced project risks.
The next step would be to identify the processes involved in each stage of the development life
cycle. As in Fig 13.6. The steps of defining procedures for each development task and ensuring
that they are actually carried out help to bring an organization up to Level 3.
14. Explain PSP.
PSP is based on the work of Watts Humphrey. PSP is suitable for individual use. PSP is a
framework that helps engineers to measure and improve the way they work .It helps in
developing personal skills and methods by estimating, planning, and tracking performance
against plans, and provides a defined process which can be tuned by individuals.

Time Management: PSP advocates that developers should track the way they spend time. The
actual time spent on a task should be measured with the help of a stop-clock to get an objective
picture of the time spent. An engineer should measure the time he spends for various
development activities such as designing, writing code, testing etc.

PSP Planning: Individuals must plan their project. The developers must estimate the
maximum, minimum, and average LOC required for the product. They record the plan data
in a project plan summary.

The PSP is schematically shown in Figure 13.7. An individual developer must plan the
personal activities and make basic plans before starting the development work. While
carrying out the activities of the different phases of software development, the individual
developer must record the log data using time measurement.

During post implementation project review, the developer can compare the log data with the
initial plan to achieve better planning in the future projects, to improve his process etc. The
four maturity levels of PSP have schematically been shown in Fig 13.8. The activities that the
developer must perform for achieving a higher level of maturity have also been annotated on
the diagram.
15. Explain Six Sigma method.
• Motorola, USA, initially developed the six-sigma method in the early 1980s. The
purpose of six sigma is to develop processes to do things better, faster, and at a lower
cost.
• Six sigma becomes applicable to any activity that is concerned with cost, timeliness,
and quality of results. Therefore, it is applicable to virtually every industry.
• Six sigma seeks to improve the quality of process outputs by identifying and removing
the causes of defects and minimizing variability in the use of process.
• Six sigma is essentially a disciplined, data-driven approach to eliminate defects in any
process. The statistical representation of six sigma describes quantitatively how a
process is performing. To achieve six sigma, a process must not produce more than 3.4
defects per million defect opportunities.
• A six-sigma defect is defined as any system behavior that is not as per customer
specifications. Total number of six sigma defect opportunities is then the total number
of chances for committing an error. Sigma of a process can easily be calculated using a
six-sigma calculator.
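The DPMO figure and sigma level described above can be sketched as below. This uses the conventional 1.5-sigma long-term shift assumed in six sigma practice; the sample defect counts are hypothetical:

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value):
    """Sigma level of a process, applying the conventional 1.5-sigma shift."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

# 17 defects across 1000 units, each with 5 defect opportunities
print(dpmo(17, 1000, 5))          # 3400.0 DPMO
print(round(sigma_level(3.4), 2)) # ~6.0 -- the six sigma target of 3.4 DPMO
```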
16. Explain the themes that emerge in discussion of software quality.

17. Write a note on Inspection in terms of enhancing software quality.


Inspections are critical in ensuring quality at various development stages, not just in coding but
also in documentation and test case creation.

It is very effective way of removing superficial errors from a piece of work.

• It motivates developers to produce better structured and self-explanatory software.


• It helps spread good programming practice as the participants discuss the advantages
and disadvantages of specific piece of code.
• It enhances team spirit.
• Techniques like Fagan inspections, pioneered by IBM, formalize the review process
with trained moderators leading discussions to identify defects and improve quality.

Benefits of Inspections:

▪ Inspections are noted for their effectiveness in eliminating superficial errors, motivating
developers to write better-structured code, and fostering team collaboration and spirit.

▪ They also facilitate the dissemination of good programming practices and improve overall
software quality by involving stakeholders from different stages of development.

The general principles behind Fagan method

• Inspections are carried out on all major deliverables.


• All types of defects are noted.

• Inspection can be carried out by colleagues at all levels except the very top.

• Inspection can be carried using a predefined set of steps.

• Inspection meeting does not last for more than two hours.

• The inspection is led by a moderator who has had specific training in the techniques.

• The participants have defined rules.

• Checklist are used to assist the fault-finding process.

• Material is inspected at an optimal rate of about 100 lines an hour.

• Statistics are maintained so that the effectiveness of the inspection process can be monitored

18. Explain Structured programming and clean room software development


The late 1960s marked a pivotal period in software engineering where the complexity of
software systems began to outstrip the capacity of human understanding and testing
capabilities. Here are the key developments and concepts that emerged during this time:

1. Complexity and Human Limitations:

▪ Software systems were becoming increasingly complex, making it impractical to test every
possible input combination comprehensively.

▪ Edsger Dijkstra and others argued that testing could only demonstrate the presence of errors,
not their absence, leading to uncertainty about software correctness.

2. Structured Programming:

▪ To manage complexity, structured programming advocated breaking down software into


manageable components.

▪ Each component was designed to be self-contained with clear entry and exit points,
facilitating easier understanding and validation by human programmers.

3. Clean-Room Software Development:

▪ Developed by Harlan Mills and others at IBM, clean-room software development introduced
a rigorous methodology to ensure software reliability.

▪ It involved three separate teams:

➢ Specification Team: Gathers user requirements and usage profiles.

➢ Development Team: Implements the code without conducting machine testing; focuses on
formal verification using mathematical techniques.
➢ Certification Team: Conducts testing to validate the software, using statistical models to
determine acceptable failure rates.

4. Incremental Development:

▪ Systems were developed incrementally, ensuring that each increment was capable of
operational use by end-users.

▪ This approach avoided the pitfalls of iterative debugging and ad-hoc modifications, which
could compromise software reliability.

5. Verification and Validation:

▪ Clean-room development emphasized rigorous verification at the development stage rather


than relying on extensive testing to identify and fix errors.

▪ The certification team's testing was thorough and continued until statistical models showed
that the software failure rates were acceptably low.

Formal methods

• Clean-room development, uses mathematical verification techniques. These techniques use


unambiguous, mathematically based, specification language of which Z and VDM are
examples. They are used to define preconditions and postconditions for each procedure.

• Precondition define the allowable states, before processing, of the data items upon which a
procedure is to work.

• Post condition define the state of those data items after processing. The mathematical notation
should ensure that such a specification is precise and unambiguous.
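The precondition/postcondition idea can be illustrated informally with runtime assertions rather than a formal notation such as Z or VDM; the withdraw operation below is a hypothetical example:

```python
def withdraw(balance, amount):
    """Sketch of a procedure specified by pre- and postconditions."""
    # Precondition: the allowable state of the data before processing
    assert amount > 0 and amount <= balance, "precondition violated"
    new_balance = balance - amount
    # Postcondition: the required state of the data after processing
    assert new_balance == balance - amount and new_balance >= 0
    return new_balance

print(withdraw(100, 30))  # 70
```

In a formal specification the conditions would be stated mathematically and verified before execution, rather than checked at run time as here.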

19. Write a note on Software quality circles (SWQC)


▪ SWQCs are adapted from Japanese quality practices to improve software development
processes by reducing errors.

• Staff are involved in the identification of sources of errors through the formation of quality
circle. These can be set up in all departments of an organizations including those producing
software where they are known as software quality circle (SWQC).

• A quality circle is a group of four to ten volunteers working in the same area who meet for,
say, an hour a week to identify, analyse and solve their work-related problems. One of their
number is the group leader, and there may be an outsider, a facilitator, who can advise on
procedural matters.

• Associated with quality circles is the compilation of most probable error lists. For example,
at IOE, Amanda might find that the annual maintenance contracts project is being delayed
because of errors in the requirements specifications.
20. Write a short note on Lessons learnt reports

21. Verification vs Validation

22. Explain Test case design


23. Explain Testing activities
Testing involves performing the following main activities:

1) Test Planning: Test Planning consists of determining the relevant test strategies and
planning for any test bed that may be required. A test bed usually includes setting up the
hardware or simulator.

2) Test Case Execution and Result Checking: Each test case is run and the results are
compared with the expected results. A mismatch between the actual result and expected results
indicates a failure. The test cases for which the system fails are noted down for test reporting.

3) Test Reporting: When the test cases are run, the tester may raise issues, that is, report
discrepancies between the expected and the actual findings. A means of formally recording
these issues and their history is needed. A review body adjudicates these issues. The outcome
of this scrutiny would be one of the following:

• The issue is dismissed on the grounds that there has been a misunderstanding of a requirement
by the tester.

• The issue is identified as a fault which the developers need to correct. Where development
is being done by contractors, they would be expected to cover the cost of the correction.

• It is recognized that the software is behaving as specified, but the requirement originally
agreed is in fact incorrect.

• The issue is identified as a fault but is treated as an off-specification – it is decided that the
application can be made operational with the error still in place.

4) Debugging: For each failure observed during testing, debugging is carried out to identify
the statements that are in error.
5) Defect Retesting: Once a defect has been dealt with by the development team, the corrected
code is retested by the testing team to check whether the defect has successfully been addressed.
Defect retest is also called resolution testing. The resolution tests are a subset of the complete
test suite (Fig: 13.10).

6) Regression Testing: Regression testing checks whether the unmodified functionalities still
continue to work correctly. Thus, whenever a defect is corrected and the change is incorporated
in the program code, the change introduced to correct an error could actually introduce errors
in functionalities that were previously working correctly.

7) Test Closure: Once the system successfully passes all the tests, documents related to lessons
learned, results, logs etc., are archived for use as a reference in future projects.

24. Explain Automation testing


Test Automation:

1) Testing is most time consuming and laborious of all software development. With the
growing size of programs and the increased importance being given to product quality, test
automation is drawing attention.

2) Test automation is automating one or some activities of the test process. This reduces human
effort and time which significantly increases the thoroughness of testing.

3) With automation, more sophisticated test case design techniques can be deployed. By using
the proper testing tools automated test results are more reliable and eliminates human errors
during testing.
4) Every software product undergoes significant change overtime. Each time the code changes,
it needs to be tested whether the changes induce any failures in the unchanged features. Thus
the originally designed test suite need to be run repeatedly each time the code changes.
Automated testing tools can be used in repeatedly running the same set of test cases.
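A minimal sketch of such an automated, repeatable suite using Python's unittest module; net_price is a hypothetical unit under test whose code changes over time:

```python
import unittest

# Hypothetical unit under test: a price calculator that changes between releases.
def net_price(gross, discount):
    if not 0 <= discount <= 1:
        raise ValueError("discount must be between 0 and 1")
    return round(gross * (1 - discount), 2)

class RegressionSuite(unittest.TestCase):
    """The same suite is re-run automatically each time the code changes."""

    def test_no_discount(self):
        self.assertEqual(net_price(200.0, 0), 200.0)

    def test_half_discount(self):
        self.assertEqual(net_price(100.0, 0.5), 50.0)

    def test_invalid_discount_rejected(self):
        with self.assertRaises(ValueError):
            net_price(100.0, 1.5)

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite)
    unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the suite is cheap to rerun, it can be executed after every change, which is exactly the repeated regression testing the text describes.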

Types of Automated Testing Tools

➢ Capture and Playback Tools: In this type of tools, the test cases are executed manually
only once. During manual execution, the sequence and values of various inputs as the outputs
produced are recorded. Later, the test can be automatically replayed and the results are checked
against the recorded output.

Advantage: This tool is useful for regression testing.

Disadvantage: Test maintenance can be costly when the unit under test changes, since some of
the captured tests may become invalid.

➢ Automated Test Script Tool: Test Scripts are used to drive an automated test tool. The
scripts provide input to the unit under test and record the output. The testers employ a variety
of languages to express test scripts.

Advantage: Once the test script is debugged and verified, it can be rerun a large number of
times easily and cheaply.

Disadvantage: Debugging test scripts to ensure accuracy requires significant effort.

➢ Random Input Test Tools: In this type of an automatic testing tool, test values are
randomly generated to cover the input space of the unit under test. The outputs are ignored
because analyzing them would be extremely expensive.

Advantage: This is relatively easy and cost-effective for finding some types of defects.

Disadvantage: It is a very limited form of testing. It finds only the defects that crash the unit
under test, not the majority of defects that do not crash it but simply produce incorrect results.

➢ Model-Based Test Tools: A model is a simplified representation of program. These models


can either be structural models or behavioral models. Examples of behavioral models are state
models and activity models. A state model-based testing generates tests that adequately cover
the state space described by the model.
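The random-input approach described above can be sketched as below; clamp is a hypothetical unit under test, and only a cheap invariant is checked rather than full expected outputs, since analysing outputs in detail is what this tool type deliberately avoids:

```python
import random

# Hypothetical unit under test: clamp a sensor reading into its valid range.
def clamp(value, low=0, high=100):
    return max(low, min(high, value))

# Random-input test: sweep the input space and check only that the unit
# does not crash and that a trivial invariant holds -- no full oracle.
random.seed(42)  # fixed seed so the run is reproducible
for _ in range(10_000):
    v = random.uniform(-1e6, 1e6)
    result = clamp(v)
    assert 0 <= result <= 100  # cheap invariant, not detailed output checking
print("10000 random inputs survived")
```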

25. Estimation of latent errors


26. Define Software reliability and explain its importance and challenges
• The reliability of a software product denotes trustworthiness or dependability.
• It can be defined as the probability of its working correctly over a given period of time.
Software product having a large number of defects is unreliable. Reliability of the
system will improve if the number of defects in it is reduced.
• Reliability is observer-dependent: it depends on the relative frequency with which
different users invoke the functionalities of a system. It is possible that because of
different usage patterns of the available functionalities of software, a bug which
frequently shows up for one user, may not show up at all for another user, or may show
up very infrequently.
• Reliability of the software keeps on improving with time during the testing and
operational phases as defects are identified and repaired. The growth of reliability over
the testing and operational phases can be modelled using a mathematical expression
called Reliability Growth Model (RGM).
• RGMs help predict reliability levels during the testing phase and determine when
testing can be stopped.
• Challenges in software reliability, which make it harder to measure than hardware
reliability:
• Dependence on the specific location of bugs
• Observer-dependent nature of reliability
• Continuous improvement as errors are detected and corrected.
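As one illustration of a reliability growth model, the sketch below uses the Goel–Okumoto model, in which the expected number of failures found grows towards a limit as testing proceeds; the parameter values are hypothetical:

```python
import math

def expected_failures(t, a, b):
    """Goel-Okumoto RGM: expected cumulative failures observed by time t,
    where a is the total expected number of failures and b the detection rate."""
    return a * (1 - math.exp(-b * t))

def failure_intensity(t, a, b):
    """Instantaneous failure rate, which falls as defects are removed."""
    return a * b * math.exp(-b * t)

# Hypothetical: a = 100 latent defects, b = 0.05 per day of testing
print(round(expected_failures(30, 100, 0.05), 1))  # ~77.7 failures by day 30
print(failure_intensity(0, 100, 0.05))             # 5.0 failures/day at the start
```

Fitting such a curve to observed failure data is what lets a project predict reliability and decide when testing can stop.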

27. Hardware vs Software Reliability


➢ Hardware failures typically result from wear and tear, whereas software failures are due to
bugs.

➢ Hardware failure often requires replacement or repair of physical components. Software


failures need bug fixes in the code, which can affect reliability positively or negatively.

➢ Hardware Reliability: Concerned with stability and consistent inter-failure times. Software
Reliability: Aims for growth, meaning an increase in inter-failure times as bugs are fixed.

➢ Hardware: Shows a "bathtub" curve where failure rate is initially high, decreases during the
useful life, and increases again as components wear out. Software: Reliability generally
improves over time as bugs are identified and fixed, leading to decreased failure rates

Figure 13.11(a): Illustrates the hardware product's failure rate over time, depicting the
"bathtub" curve.

Figure 13.11(b): Shows the software product's failure rate, indicating a decline in failure rate
over time due to bug fixes and improvements.

28. What are Reliability Metrics. List and Explain


Probability of Failure on Demand (POFOD):

• POFOD measures likelihood of system failure when a service request is made. For example,
POFOD of 0.001 means that 1 out of every 1000 service requests would result in a failure.

• Suitable for systems not required to run continuously.

Availability:

• Measures how likely the system is available for use during a given period.

• Takes into account both failure occurrences and repair times.
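A minimal sketch of computing these two metrics; the MTTF/MTTR form of availability is one common formulation, and the sample figures are hypothetical:

```python
def pofod(failed_requests, total_requests):
    """Probability of failure on demand: fraction of service requests that fail."""
    return failed_requests / total_requests

def availability(mttf_hours, mttr_hours):
    """Fraction of time the system is usable: MTTF / (MTTF + MTTR),
    accounting for both failure occurrences and repair times."""
    return mttf_hours / (mttf_hours + mttr_hours)

print(pofod(1, 1000))        # 0.001 -> 1 failure per 1000 demands
print(availability(980, 20)) # 0.98  -> available 98% of the time
```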

29. Brief about Types of Software Failures


30. Reliability growth models
31. Quality plans
Organizations produce quality plans for each project to show how standard quality procedures
and standards from the organization's quality manual will be applied to the project.

If quality-related activities and requirements have been identified by the main planning process,
a separate quality plan may not be necessary.

When producing software for an external client, the client’s quality assurance staff might
require a dedicated quality plan to ensure the quality of the delivered products.

Function of a Quality Plan:

• A quality plan acts as a checklist to confirm that all quality issues have been addressed during
the planning process.
• Most of the content in a quality plan references other documents that detail specific quality
procedures and standards.

Components of a Quality Plan

A quality plan might include:

❖ Purpose and scope of the plan

❖ List of references to other documents

❖ Management arrangements: Including organization, tasks, and responsibilities

❖ Documentation to be produced

❖ Standards, practices, and conventions

❖ Reviews and audits

❖ Testing

❖ Problem reporting and corrective action

❖ Tools, techniques, and methodologies

❖ Code, media, and supplier control

❖ Records collection, maintenance, and retention

❖ Training

❖ Risk management: Methods of risk management to be used

32. List and Explain the Techniques to enhance Software Quality


Techniques to enhance software quality:

1. Inspection (Refer Q. 17)

2. Structured Programming and clean room software development (Refer Q. 18)

3. SWQC (Refer Q. 19)

4. Lessons learnt reports (Refer Q. 20)
