Software Engineering

The document provides detailed example questions and answers for an SPPU exam on Software Engineering, covering topics such as Software Engineering definitions, layered technology, SDLC models (Waterfall and Spiral), Requirement Engineering, and Use Case and Class diagrams. It emphasizes the importance of quality in software development and outlines key phases of Requirement Engineering along with the differences between Functional and Non-Functional Requirements. Additionally, it explains the purpose and components of Use Case and Class diagrams with examples.


Here are some example questions and detailed answers for each unit, framed for an 8-9 mark evaluation from an SPPU exam perspective. Remember to include diagrams where appropriate in your actual exam answers.

Unit 1: Overview of Software Engineering


Question 1: What is Software Engineering? Explain the layered
technology of software engineering. Discuss any two Software
Development Life Cycle (SDLC) models in detail with their respective
advantages and disadvantages. (8 Marks)

Answer:

Software Engineering (SE) is a discipline that involves the application of systematic, disciplined, and quantifiable approaches to the development, operation, and maintenance of software. It aims to produce high-quality software that meets user needs, is delivered on time, and stays within budget, addressing the complexities inherent in software development.

Layered Technology of Software Engineering: Software engineering can be viewed as a layered technology, with quality as the foundation:

1.​ A Quality Focus (Foundation): The bedrock of software engineering is a commitment to quality. This includes process quality (how the software is developed) and product quality (the characteristics of the software itself, like reliability, usability, and efficiency). Total Quality Management (TQM) and similar philosophies emphasize continuous process improvement.
2.​ Process (Layer 1): This layer provides the framework for software development.
It defines the sequence of activities, deliverables, and control mechanisms. Key
Process Areas (KPAs) in models like CMMI specify what needs to be done.
Examples include project planning, requirements management, and configuration
management. A process provides the context for applying methods and tools.
3.​ Methods (Layer 2): This layer provides the "how-to's" for building software. It
encompasses a broad array of tasks such as requirements analysis, design,
coding, testing, and maintenance. Examples include object-oriented analysis,
structured design, various testing techniques (e.g., black-box, white-box), and
agile methods.
4.​ Tools (Layer 3): This layer provides automated or semi-automated support for
the process and methods. Tools help improve productivity and quality. Examples
include:
○​ CASE (Computer-Aided Software Engineering) tools
○​ Integrated Development Environments (IDEs)
○​ Testing tools (e.g., Selenium, JUnit)
○​ Version control systems (e.g., Git)
○​ Project management tools (e.g., Jira, MS Project)

(Diagram: A simple layered diagram showing Quality at the base, then Process,
Methods, and Tools stacked on top would be beneficial here.)

Software Development Life Cycle (SDLC) Models:

An SDLC is a conceptual framework describing the stages involved in an information system development project, from an initial feasibility study through to maintenance of the completed application.

1. Waterfall Model: The Waterfall Model is a sequential design process, often used in
traditional software development, in which progress is seen as flowing steadily
downwards (like a waterfall) through the phases of conception, initiation, analysis,
design, construction, testing, deployment, and maintenance.

●​ Phases:​

○​ Requirement Analysis and Specification: All requirements are gathered and documented.
○​ System Design: The system architecture and high-level design are
created.
○​ Implementation: Code is written based on the design.
○​ Testing: The system is tested to find and fix defects.
○​ Deployment: The system is released to users.
○​ Maintenance: Ongoing support and enhancements are provided.
●​ Advantages:​

○​ Simple and easy to understand and use.
○​ Phases are processed and completed one at a time, making it easy to manage.
○​ Works well for projects where requirements are very well understood and fixed.
○​ Clear deliverables and review processes for each phase.
●​ Disadvantages:​

○​ Inflexible; difficult to accommodate changes once a phase is complete.
○​ Working software is not produced until late in the life cycle.
○​ High risk and uncertainty; if requirements are misunderstood, it is costly to fix them later.
○​ Not suitable for complex or object-oriented projects, or projects with changing requirements.

(Diagram: A diagram showing the sequential flow of phases in the Waterfall model is essential.)

2. Spiral Model: The Spiral Model is a risk-driven process model generator for software
projects. It combines the iterative nature of prototyping with the controlled and
systematic aspects of the Waterfall model. It is typically used for large, expensive, and
complicated projects.

●​ Phases (per spiral/iteration):​

○​ Planning: Determine objectives, alternatives, and constraints for the iteration.
○​ Risk Analysis: Identify and analyze risks; develop strategies to mitigate them. Prototypes may be built here.
○​ Engineering (Development & Test): Develop and test the next level of the product.
○​ Evaluation (Customer Evaluation): Assess the results of the iteration and plan for the next spiral.
●​ Advantages:​

○​ High amount of risk analysis, making it suitable for high-risk projects.
○​ Good for large and complex projects.
○​ Allows for changes and addition of functionality at later phases.
○​ Working software is produced early in the life cycle in the form of prototypes.
○​ Strong focus on quality through iterative refinement.
●​ Disadvantages:​

○​ Can be an expensive model to use, as it involves multiple iterations and risk analysis.
○​ Risk analysis requires highly specific expertise.
○​ The process is complex and not suitable for small projects.
○​ Success depends heavily on the risk analysis phase.
○​ If risks are not identified properly, the project can run into trouble.

(Diagram: A spiral diagram showing the four quadrants (Planning, Risk Analysis,
Engineering, Evaluation) and the iterative progression is crucial.)

Question 2: What is Requirement Engineering? Explain its key phases. Differentiate between Functional and Non-Functional Requirements with suitable examples. (8 Marks)

Answer:

Requirement Engineering (RE) is the process of defining, documenting, and maintaining requirements for a software system. It is a critical early stage in the software development life cycle, as errors or omissions in requirements can lead to significant problems and increased costs later in the project. The goal of RE is to ensure that the developed system meets the needs and expectations of its stakeholders.

Key Phases of Requirement Engineering:

1.​ Elicitation (Gathering):​

○​ Objective: To discover and gather requirements from all stakeholders (users, customers, domain experts, etc.).
○​ Techniques: Interviews, questionnaires, workshops (e.g., JAD sessions),
brainstorming, observation, document analysis (studying existing
systems), prototyping.
○​ Challenge: Stakeholders may have conflicting requirements, unstated
assumptions, or difficulty articulating their needs.
2.​ Analysis and Negotiation:​

○​ Objective: To refine, classify, and structure the elicited requirements. This involves identifying inconsistencies, ambiguities, and incompleteness.
○​ Activities: Building models (e.g., use cases, data models), prioritizing
requirements, resolving conflicts through negotiation among stakeholders.
○​ Output: A clearer, more organized set of requirements.
3.​ Specification (Documentation):​

○​ Objective: To formally document the agreed-upon requirements in a clear, concise, and unambiguous manner.
○​ Tool: Software Requirement Specification (SRS) document. The SRS
serves as a contract between the development team and the customer.
○​ Characteristics of good SRS: Correct, complete, consistent, verifiable,
traceable, modifiable, unambiguous.
4.​ Validation and Verification:​

○​ Objective: To ensure that the specified requirements are correct, complete, and accurately reflect the stakeholders' needs (validation: "Are we building the right product?") and that the requirements are well-defined and testable (verification: "Are we building the product right?").
○​ Techniques: Reviews, walkthroughs, inspections, prototyping, traceability
analysis.
○​ Goal: To catch errors before design and development begin.
5.​ Management:​

○​ Objective: To manage changes to requirements throughout the project lifecycle. Requirements are rarely static.
○​ Activities: Establishing a baseline, change control process (evaluating
impact, approving/rejecting changes), maintaining traceability of
requirements.

(Diagram: A flowchart illustrating these phases (Elicitation -> Analysis -> Specification -> Validation -> Management, with feedback loops) would be beneficial.)

Functional vs. Non-Functional Requirements:

Functional Requirements (FRs):

●​ Definition: Describe what the system should do – the services or functions the
software must provide. They specify the inputs, behavior, and outputs of the
system.
●​ Focus: System features and user tasks.
●​ Examples:
○​ The system shall allow a user to register with a unique username and
password.
○​ The system shall calculate the total amount for an order, including taxes
and shipping.
○​ A librarian shall be able to add new books to the catalog.
○​ The system shall generate a monthly sales report.
○​ Users must be able to search for products by name or category.
Non-Functional Requirements (NFRs):

●​ Definition: Describe how the system should perform its functions. They specify
the quality attributes or constraints of the system. They do not change the core
functionality but define the quality of service.
●​ Focus: System properties like performance, reliability, usability, security,
maintainability, portability.
●​ Examples:
○​ Performance: The system login process shall take no more than 2
seconds.
○​ Performance: The system shall be able to handle 1000 concurrent users
without performance degradation.
○​ Usability: The user interface shall be intuitive enough for a novice user to
learn basic operations within 10 minutes.
○​ Security: All user passwords shall be stored in an encrypted format.
○​ Reliability: The system shall have an uptime of 99.9%.
○​ Maintainability: The code shall follow specified coding standards to
ensure ease of modification.
○​ Portability: The application shall be deployable on both Windows and
Linux servers.

Key Differences Summarized:

| Feature | Functional Requirements | Non-Functional Requirements |
| :--- | :--- | :--- |
| Defines | What the system does | How the system does it (quality attributes) |
| Nature | Specific tasks, services, behaviors | Constraints, qualities, properties |
| Verifiability | Often directly testable (e.g., a function works) | Can be harder to test directly (e.g., "user-friendly"), but often testable via specific metrics |
| Source | User needs, business rules | Standards, policies, system-wide qualities |
| Impact of absence | System fails to perform a required task | System may work but be unusable, slow, insecure, or unreliable |
Understanding both FRs and NFRs is crucial for developing software that not only
works correctly but also satisfies user expectations regarding quality and performance.
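
To illustrate how a quantified NFR can be verified via a specific metric, here is a minimal Python sketch of an automated check for the "login shall take no more than 2 seconds" example above. The login() function is a hypothetical stand-in for the real operation being measured, not code from any particular system:

```python
import time

def login(username: str, password: str) -> bool:
    """Hypothetical placeholder for the real login operation under test."""
    time.sleep(0.5)  # simulate the work of authenticating
    return True

def test_login_meets_response_time_nfr():
    start = time.perf_counter()
    assert login("demo_user", "secret") is True       # functional requirement: login works
    elapsed = time.perf_counter() - start
    assert elapsed <= 2.0, f"Login took {elapsed:.2f}s, exceeding the 2-second NFR"

if __name__ == "__main__":
    test_login_meets_response_time_nfr()
    print("NFR check passed")
```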

Unit 2: System Analysis and Modeling


Question 1: Explain Use Case diagrams and Class diagrams with their
purpose, components, notations, and relationships. Provide a simple
example for each. (9 Marks)

Answer:

Use Case Diagrams

●​ Purpose: Use Case diagrams are part of UML (Unified Modeling Language) and
are used to represent the system's functionality from an external user's
perspective. They depict the interactions between users (actors) and the system
to achieve specific goals. They are excellent for defining the scope of a system
and for communicating the system's intended behavior to stakeholders.​

●​ Components and Notations:​

○​ Actor: Represents a role played by a user or another system that interacts with the subject (system).
■​ Notation: Stick figure.
■​ Example: Customer, Administrator, Payment Gateway.
○​ Use Case: Represents a specific, discrete piece of functionality that the
system provides to achieve a goal for an actor.
■​ Notation: Oval or ellipse with the use case name inside.
■​ Example: Login, Register User, Place Order, Generate
Report.
○​ System Boundary: A rectangle that encloses all the use cases of the
system, separating the system from the external actors.
■​ Notation: A large rectangle with the system name (optional).
○​ Relationships:
■​ Association: Represents communication between an actor and a
use case. Shows that an actor participates in a use case.
■​ Notation: Solid line connecting an actor and a use case.
■​ Include (<<include>>): A relationship where one use case (the
base use case) incorporates the behavior of another use case (the
included use case). The included use case is essential for the base
use case to complete.
■​ Notation: Dashed arrow from the base use case to the
included use case, stereotyped with <<include>>.
■​ Extend (<<extend>>): A relationship where one use case (the
extending use case) provides optional behavior that can be added
to another use case (the extended or base use case) at a specific
extension point, under certain conditions.
■​ Notation: Dashed arrow from the extending use case to the
base use case, stereotyped with <<extend>>.
■​ Generalization: A relationship between a more general use
case/actor and a more specific use case/actor. The specific
element inherits and may add to or override the behavior of the
general element.
■​ Notation: Solid line with a hollow arrowhead pointing from
the specific element to the general element.
●​ Example: Simple Online Shopping System (Diagram: A Use Case Diagram
should be drawn here.)​

○​ Actors: Customer, Administrator


○​ Use Cases for Customer: View Products, Add to Cart, Checkout,
Login
○​ Use Cases for Administrator: Manage Products, View Orders,
Login
○​ Relationships:
■​ Customer is associated with View Products, Add to Cart,
Checkout.
■​ Administrator is associated with Manage Products, View
Orders.
■​ Checkout might <<include>> Login (if login is mandatory for
checkout).
■​ View Products might be <<extend>>ed by View Product
Reviews (optional functionality).

Class Diagrams
●​ Purpose: Class diagrams are static structure diagrams in UML that describe the
structure of a system by showing its classes, their attributes (data), operations
(methods), and the relationships among objects. They are fundamental to
object-oriented modeling and design.​

●​ Components and Notations:​

○​ Class: A blueprint for creating objects. It defines properties and behaviors common to all objects of that type.
■​ Notation: Rectangle divided into three compartments:
■​ Top: Class Name (bold, centered)
■​ Middle: Attributes (e.g., - attributeName: dataType)
■​ Bottom: Operations/Methods (e.g., +
methodName(parameters): returnType)
■​ Visibility: + (public), - (private), # (protected), ~ (package).
○​ Attribute: A named property of a class that describes a data value held by
each object of that class.
○​ Operation (Method): A function or procedure that can be performed by
objects of the class.
○​ Relationships:
■​ Association: A semantic relationship between two or more classes
that specifies connections among their instances.
■​ Notation: Solid line between classes. Can have multiplicity
(e.g., 1, *, 0..1, 1..*) and role names.
■​ Aggregation (Shared Association): A "has-a" relationship where
one class (the whole) is composed of other classes (the parts), but
the parts can exist independently of the whole.
■​ Notation: Solid line with an open (hollow) diamond at the
"whole" class end.
■​ Composition (Composite Aggregation): A strong "has-a"
relationship where one class (the whole) is composed of other
classes (the parts), and the parts cannot exist independently of the
whole (strong lifecycle dependency).
■​ Notation: Solid line with a filled diamond at the "whole" class
end.
■​ Generalization/Inheritance: An "is-a" relationship where one class
(the subclass or child) inherits attributes and operations from
another class (the superclass or parent).
■​ Notation: Solid line with a hollow arrowhead pointing from
the child class to the parent class.
■​ Dependency: A relationship where one class (the client) depends
on another class (the supplier) because it uses it. A change in the
supplier may affect the client.
■​ Notation: Dashed arrow pointing from the client to the
supplier.
■​ Realization/Implementation: A relationship where one model
element (e.g., a class) implements the behavior specified by
another model element (e.g., an interface).
■​ Notation: Dashed line with a hollow arrowhead pointing
from the implementing class to the interface.
●​ Example: Simple University System (Diagram: A Class Diagram should be drawn here; a brief code sketch of these classes also follows this example.)

○​ Classes: Student, Course, Professor, Department


○​ Attributes/Operations (Examples):
■​ Student: - studentId: int, - name: String, +
enrollCourse(course: Course)
■​ Course: - courseCode: String, - courseName: String, +
addStudent(student: Student)
■​ Professor: - staffId: int, - name: String, +
teachCourse(course: Course)
■​ Department: - deptName: String, +
offerCourse(course: Course)
○​ Relationships:
■​ Student enrolls in Course (Many-to-Many Association: * on both
ends, possibly with an association class like Enrollment).
■​ Professor teaches Course (One-to-Many Association: 1
Professor to * Courses).
■​ Department offers Course (One-to-Many Aggregation:
Department "has" Courses).
■​ If UndergraduateStudent and GraduateStudent are special
types of Student, they would have a Generalization relationship
with Student.
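
As noted above, the following is a brief Python sketch of how the university example could map to code. The class and attribute names follow the example; the method bodies, the GraduateStudent subclass, and the sample data are illustrative assumptions showing association, aggregation, and generalization, not a prescribed design:

```python
class Student:
    def __init__(self, student_id: int, name: str):
        self.student_id = student_id
        self.name = name
        self.courses: list["Course"] = []           # many-to-many association with Course

    def enroll_course(self, course: "Course") -> None:
        self.courses.append(course)
        course.add_student(self)


class GraduateStudent(Student):                      # generalization ("is-a" Student)
    pass


class Course:
    def __init__(self, course_code: str, course_name: str):
        self.course_code = course_code
        self.course_name = course_name
        self.students: list[Student] = []
        self.professor: "Professor | None" = None    # one Professor teaches many Courses

    def add_student(self, student: Student) -> None:
        if student not in self.students:
            self.students.append(student)


class Professor:
    def __init__(self, staff_id: int, name: str):
        self.staff_id = staff_id
        self.name = name

    def teach_course(self, course: Course) -> None:
        course.professor = self


class Department:
    """Aggregation: a Department 'has' Courses, but Courses can outlive it."""
    def __init__(self, dept_name: str):
        self.dept_name = dept_name
        self.courses: list[Course] = []

    def offer_course(self, course: Course) -> None:
        self.courses.append(course)


if __name__ == "__main__":
    # Hypothetical sample data for demonstration only.
    cs = Department("Computer Science")
    se101 = Course("SE101", "Software Engineering")
    cs.offer_course(se101)
    Professor(1, "Dr. Rao").teach_course(se101)
    Student(10, "Alice").enroll_course(se101)
    print([s.name for s in se101.students])  # ['Alice']
```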

Class diagrams and Use Case diagrams provide different but complementary views of a
system. Use cases focus on behavior from the user's view, while class diagrams focus
on the static structure of the system that enables this behavior.
Unit 3: Fundamentals of Project Management
Question 1: Explain the COCOMO-I (Constructive Cost Model) for
software cost estimation. Discuss its different modes and the
parameters involved in its basic form. What are cost drivers in
Intermediate COCOMO? (9 Marks)

Answer:

COCOMO-I (Constructive Cost Model - Version I)

The Constructive Cost Model (COCOMO) is an algorithmic software cost estimation model developed by Barry Boehm. COCOMO-I, the first version, is a widely used model that predicts software development effort and schedule based primarily on the estimated size of the software project in Kilo Lines of Code (KLOC).

Purpose: To provide a quantitative estimate of:

1.​ Effort: The amount of labor required to develop the software, usually measured
in Person-Months (PM).
2.​ Development Time (Schedule): The calendar time required to complete the
project, usually measured in Months (M).

Different Modes of COCOMO-I:

COCOMO-I defines three modes of software development projects, which affect the
constants used in its estimation formulas. These modes are based on the
characteristics of the project, development team, and environment:

1.​ Organic Mode:​

○​ Characteristics: Relatively small, simple projects. The development team is small, experienced, and works in a familiar, stable environment. Requirements are well-understood and not very stringent. Less innovation is typically involved.
○​ Examples: Simple business applications, small scientific programs,
familiar utilities.
2.​ Semi-detached Mode:​
○​ Characteristics: Intermediate in size and complexity. The development
team may have a mix of experienced and inexperienced staff.
Requirements may include a mixture of rigid and less defined
specifications. The project might involve some unfamiliarity with the
application area or development environment.
○​ Examples: New operating systems, database management systems,
complex inventory systems.
3.​ Embedded Mode:​

○​ Characteristics: Projects with tight, inflexible constraints (hardware, software, operational). The software is often part of a larger, complex system (e.g., embedded in hardware). High reliability and performance are critical. The development environment is often challenging, and innovation is common.
○​ Examples: Avionics software, real-time control systems, complex banking
systems.

Parameters and Formulas in Basic COCOMO-I:

Basic COCOMO-I provides a quick, early estimate.

●​ Input Parameter:​

○​ KLOC (Kilo Lines of Code): The estimated size of the software product
in thousands of delivered source instructions.
●​ Effort Estimation Formula: Effort = a × (KLOC)^b (in Person-Months)
●​ Development Time (Schedule) Estimation Formula: TDEV = c × (Effort)^d (in Months)
●​ Constants (a, b, c, d): These constants vary depending on the project mode (a worked example follows the table):

| Project Mode | a | b | c | d |
| :--- | :--- | :--- | :--- | :--- |
| Organic | 2.4 | 1.05 | 2.5 | 0.38 |
| Semi-detached | 3.0 | 1.12 | 2.5 | 0.35 |
| Embedded | 3.6 | 1.20 | 2.5 | 0.32 |
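
As an illustration of how the Basic COCOMO formulas and constants above are applied, here is a minimal Python sketch; the 32 KLOC organic-mode project used as input is a made-up example, not a value from the text:

```python
# Basic COCOMO-I: Effort = a * (KLOC ** b)   (person-months)
#                 TDEV   = c * (Effort ** d) (months)

MODES = {
    # mode: (a, b, c, d)
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str = "organic") -> tuple[float, float]:
    """Return (effort in person-months, development time in months)."""
    a, b, c, d = MODES[mode]
    effort = a * kloc ** b
    tdev = c * effort ** d
    return effort, tdev

if __name__ == "__main__":
    # Hypothetical 32 KLOC organic-mode project.
    effort, tdev = basic_cocomo(32, "organic")
    print(f"Effort ~ {effort:.1f} person-months, schedule ~ {tdev:.1f} months")
    print(f"Average staffing ~ {effort / tdev:.1f} persons")
```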

Intermediate COCOMO-I and Cost Drivers:

Intermediate COCOMO extends Basic COCOMO by introducing a set of 15 "Cost Drivers" that account for various attributes of the software product, hardware, personnel, and project. These drivers are used to adjust the nominal effort estimate obtained from the Basic COCOMO formula for a more accurate prediction.

The formula for effort in Intermediate COCOMO is: Effort = a × (KLOC)^b × EAF (in Person-Months)

where EAF (Effort Adjustment Factor) is calculated by multiplying the effort multipliers associated with each of the 15 cost drivers. Each cost driver is rated on a scale (e.g., Very Low, Low, Nominal, High, Very High, Extra High), and each rating has a corresponding effort multiplier. A small computational sketch follows the list of cost drivers below.

Categories of Cost Drivers:

1.​ Product Attributes:


○​ RELY: Required Software Reliability
○​ DATA: Database Size
○​ CPLX: Product Complexity
2.​ Hardware Attributes:
○​ TIME: Execution Time Constraint
○​ STOR: Main Storage Constraint
○​ VIRT: Virtual Machine Volatility
○​ TURN: Computer Turnaround Time
3.​ Personnel Attributes:
○​ ACAP: Analyst Capability
○​ AEXP: Applications Experience
○​ PCAP: Programmer Capability
○​ VEXP: Virtual Machine Experience
○​ LEXP: Programming Language Experience
4.​ Project Attributes:
○​ MODP: Modern Programming Practices
○​ TOOL: Use of Software Tools
○​ SCED: Required Development Schedule
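
As mentioned above, the following Python sketch shows how the EAF and the Intermediate COCOMO effort are computed. The multiplier values and ratings chosen here are illustrative assumptions for a hypothetical project, not figures from this document; drivers rated Nominal contribute a multiplier of 1.0 and are omitted:

```python
import math

def effort_adjustment_factor(multipliers: dict[str, float]) -> float:
    """EAF is the product of the effort multipliers chosen for the cost drivers."""
    return math.prod(multipliers.values())

# Illustrative (assumed) multiplier choices for a hypothetical project.
selected_multipliers = {
    "RELY": 1.15,  # high required reliability (assumed multiplier)
    "CPLX": 1.30,  # very high product complexity (assumed multiplier)
    "ACAP": 0.86,  # high analyst capability (assumed multiplier)
    "TOOL": 0.91,  # good use of software tools (assumed multiplier)
}

a, b = 3.0, 1.12   # semi-detached mode constants from Basic COCOMO
kloc = 50          # hypothetical project size

eaf = effort_adjustment_factor(selected_multipliers)
effort = a * kloc ** b * eaf                  # Effort = a * KLOC^b * EAF
print(f"EAF = {eaf:.2f}, Effort ~ {effort:.0f} person-months")
```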

Significance: COCOMO-I, especially the Intermediate version, provides a structured way to estimate software project costs and schedules by considering various influencing factors. While it has limitations (e.g., reliance on KLOC, which is hard to estimate early), it has been a foundational model in software project management and has paved the way for more advanced models like COCOMO II.
Unit 4: Agile Project Management Framework
Question 1: What is Agile Methodology? Explain the Agile Manifesto
(values and principles). How do Agile models differ from traditional
Waterfall models? (9 Marks)

Answer:

Agile Methodology

Agile methodology is an iterative and incremental approach to software development


and project management. It emphasizes flexibility, collaboration, customer feedback,
and rapid delivery of functional software. Instead of detailed upfront planning for the
entire project, Agile focuses on breaking down the project into smaller, manageable
cycles or iterations (often called Sprints in Scrum). Each iteration typically results in a
working product increment, allowing teams to adapt to changing requirements and
deliver value continuously.

The Agile Manifesto (2001):

The Agile Manifesto was created by a group of software developers to define a better
way of developing software. It is based on four core values and twelve supporting
principles.

Core Values of the Agile Manifesto:

1.​ Individuals and interactions over processes and tools:


○​ Valuing people and their ability to collaborate effectively is seen as more
critical than rigidly adhering to predefined processes or relying solely on
tools.
2.​ Working software over comprehensive documentation:
○​ While documentation has its place, the primary measure of progress is
functional software that meets user needs. Agile prioritizes delivering
working increments over producing extensive documentation that might
not be used.
3.​ Customer collaboration over contract negotiation:
○​ Agile encourages continuous engagement and collaboration with the
customer throughout the development process to ensure the final product
meets their expectations, rather than relying solely on initial contractual
agreements.
4.​ Responding to change over following a plan:
○​ Agile methodologies embrace change. They recognize that requirements
can evolve, and it's more important to be able to adapt to these changes
than to strictly follow an initial, potentially outdated, plan.

The Twelve Principles Behind the Agile Manifesto:

These principles further elaborate on the agile philosophy:

1.​ Our highest priority is to satisfy the customer through early and continuous
delivery of valuable software.
2.​ Welcome changing requirements, even late in development. Agile processes
harness change for the customer's competitive advantage.
3.​ Deliver working software frequently, from a couple of weeks to a couple of
months, with a preference to the shorter timescale.
4.​ Business people and developers must work together daily throughout the project.
5.​ Build projects around motivated individuals. Give them the environment and
support they need, and trust them to get the job done.
6.​ The most efficient and effective method of conveying information to and within a
development team is face-to-face conversation.
7.​ Working software is the primary measure of progress.
8.​ Agile processes promote sustainable development. The sponsors, developers,
and users should be able to maintain a constant pace indefinitely.
9.​ Continuous attention to technical excellence and good design enhances agility.
10.​Simplicity—the art of maximizing the amount of work not done—is essential.
11.​The best architectures, requirements, and designs emerge from self-organizing
teams.
12.​At regular intervals, the team reflects on how to become more effective, then
tunes and adjusts its behavior accordingly.

Differences between Agile Models and Traditional Waterfall Models:

| Feature | Agile Models | Traditional Waterfall Model |
| :--- | :--- | :--- |
| Approach | Iterative and incremental | Linear and sequential |
| Planning | Adaptive planning; detailed for the current iteration | Extensive upfront planning for the entire project |
| Requirements | Evolve; change is welcomed | Fixed and defined upfront; change is discouraged |
| Delivery | Frequent, small releases of working software | Single, large release at the end of the project |
| Customer Involvement | High and continuous collaboration | Limited, primarily during requirements and acceptance |
| Documentation | Minimal, "just enough"; focus on working software | Comprehensive and detailed documentation |
| Team Structure | Self-organizing, cross-functional teams | Hierarchical, with specialized roles |
| Risk Management | Addressed in each iteration; early risk discovery | Primarily addressed during initial planning phases |
| Flexibility | Highly adaptable to change | Rigid and resistant to change |
| Testing | Continuous throughout each iteration | A distinct phase towards the end of the project |
| Feedback Loop | Short and frequent; allows quick adjustments | Long; feedback often received late |
| Example Methods | Scrum, Kanban, XP, Lean | Waterfall |

(Diagram: A side-by-side comparison table or a visual showing the cyclical nature of Agile vs. the linear flow of Waterfall would be effective.)

In essence, Agile is suited for projects where requirements are expected to change or
are not fully known upfront, and where rapid delivery of value is critical. Waterfall is
more appropriate for projects with stable, well-understood requirements and where a
sequential approach is feasible.

Unit 5: Implementation with Agile Tools


Question 1: What is Continuous Integration (CI)? Explain its key
principles, benefits, and the typical workflow. Mention some popular
tools used for CI. (8 Marks)
Answer:

Continuous Integration (CI)

Continuous Integration is a software development practice where members of a team integrate their work frequently; usually each person integrates at least daily, leading to multiple integrations per day. Each integration is then verified by an automated build (including automated tests) to detect integration errors as quickly as possible. The primary goal of CI is to prevent integration problems, often referred to as "integration hell," by integrating early and often.

Key Principles of Continuous Integration:

1.​ Maintain a Single Source Repository: All source code, build scripts, and
relevant artifacts are stored in a version control system (e.g., Git, SVN)
accessible to all team members. This is the single source of truth.
2.​ Automate the Build: The process of compiling code, linking libraries, and
creating executable software or deployable units should be automated. This
ensures consistency and reduces manual errors.
3.​ Make the Build Self-Testing: The automated build should include a
comprehensive suite of automated tests (unit tests, integration tests) that verify
the correctness of the software. A build is considered successful only if all tests
pass.
4.​ Everyone Commits to the Mainline (or Main Branch) Every Day: Developers
should commit their changes to the central repository frequently, at least once a
day. This minimizes the divergence between individual workspaces and the
mainline.
5.​ Every Commit Should Trigger an Automated Build and Test: As soon as
code is committed, the CI server should automatically trigger a build and run all
associated tests.
6.​ Keep the Build and Test Process Fast: Builds and tests should execute quickly
so that developers get fast feedback. If it takes too long, developers might be
less inclined to wait or commit frequently.
7.​ Test in a Clone of the Production Environment: Automated tests should
ideally run in an environment that closely mimics the production environment to
catch environment-specific issues early.
8.​ Make it Easy for Anyone to Get the Latest Executable Version: The latest
successfully built and tested version of the software should be easily accessible
to developers, testers, and stakeholders.
9.​ Everyone Can See the Results of the Latest Build: Build status (success,
failure, test results) should be visible to the entire team (e.g., via dashboards,
email notifications, CI server interface). This promotes transparency and
collective ownership of build health.
10.​Automate Deployment (Often part of Continuous Delivery/Deployment -
CD): While CI focuses on integration and testing, it's a prerequisite for CD, where
successful builds are automatically deployed to staging or production
environments.

Benefits of Continuous Integration:

●​ Early Bug Detection and Prevention: Integration issues and bugs are found
quickly after they are introduced, making them easier and cheaper to fix.
●​ Reduced Integration Problems: Frequent integration minimizes the scope of
changes, reducing the complexity of merging and resolving conflicts.
●​ Improved Code Quality: Automated testing ensures a baseline quality and
helps maintain code health. Constant feedback encourages developers to write
better, testable code.
●​ Faster Release Cycles: Automation and reliable builds enable more frequent
and predictable releases of software.
●​ Increased Developer Productivity: Developers spend less time debugging
integration issues and more time developing features. Automated processes free
up developer time.
●​ Improved Team Collaboration and Communication: CI fosters a shared
responsibility for the codebase and build stability.
●​ Greater Confidence in the Software: Knowing that every change is
automatically built and tested increases confidence in the software's stability and
readiness for deployment.
●​ Reduced Risk: Issues are identified and addressed continuously, reducing the
risk of major failures late in the development cycle or after release.

Typical CI Workflow:

1.​ Developer Commits Code: A developer makes changes to their local codebase
and commits them to the central version control repository (e.g., pushes to a Git
branch).
2.​ CI Server Detects Change: The CI server continuously monitors the repository.
Upon detecting a new commit, it triggers the CI pipeline.
3.​ Automated Build: The CI server pulls the latest code and executes the
automated build script. This typically involves:
○​ Compiling source code.
○​ Running static code analysis.
○​ Creating build artifacts (e.g., executables, libraries, packages).
4.​ Automated Testing: After a successful build, the CI server runs automated
tests:
○​ Unit tests.
○​ Integration tests.
○​ (Optionally) Other tests like UI tests, performance tests (can be part of a
later stage in CD).
5.​ Report Results: The CI server reports the status of the build and tests.
○​ If successful: The build is marked as good. Artifacts may be stored.
Notifications are sent.
○​ If failed (build error or test failure): The build is marked as broken. The
team is notified immediately to fix the issue. The mantra is often "fix the
broken build first."
6.​ (Optional) Deploy Artifact: If the build and all tests pass, the artifact might be
automatically deployed to a testing, staging, or even production environment (this
extends into Continuous Delivery/Deployment).

(Diagram: A flowchart showing the CI workflow: Developer Commits -> VCS -> CI
Server (Pulls code, Builds, Tests) -> Feedback/Notification -> (Optional) Deploy.
This would be very helpful.)
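
To make the workflow above concrete, here is a minimal, simplified Python sketch of a polling CI loop, for illustration only. The build and test commands (make build, pytest -q), the repository directory, and the polling interval are placeholder assumptions; a real CI server such as Jenkins or GitHub Actions would typically react to commits via webhooks and provide far richer reporting:

```python
import subprocess
import time

# Placeholder pipeline stages; substitute your project's real commands.
PIPELINE = [
    ("pull latest code", ["git", "pull"]),
    ("build",            ["make", "build"]),   # assumed build target
    ("run tests",        ["pytest", "-q"]),    # assumed test runner
]
REPO_DIR = "."  # assumed to be a checked-out working copy

def run_pipeline() -> bool:
    """Run each stage in order; stop and report failure on the first error."""
    for stage, cmd in PIPELINE:
        print(f"[CI] {stage}: {' '.join(cmd)}")
        result = subprocess.run(cmd, cwd=REPO_DIR)
        if result.returncode != 0:
            print(f"[CI] BUILD BROKEN at stage '{stage}' -- notify the team")
            return False
    print("[CI] Build and tests passed -- latest executable is available")
    return True

if __name__ == "__main__":
    while True:            # a real CI server would use webhooks, not polling
        run_pipeline()
        time.sleep(300)    # check again in five minutes
```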

Popular CI Tools:

●​ Jenkins: A widely used, open-source automation server with a vast ecosystem of plugins.
●​ GitLab CI/CD: Integrated into the GitLab platform, offering powerful CI/CD
capabilities.
●​ GitHub Actions: CI/CD built into GitHub, allowing automation directly within
repositories.
●​ CircleCI: A cloud-based CI/CD platform known for its speed and flexibility.
●​ Travis CI: A hosted CI service, popular for open-source projects, integrates well
with GitHub.
●​ Azure Pipelines (part of Azure DevOps): Microsoft's CI/CD solution.
●​ TeamCity: A powerful CI/CD server from JetBrains.

Continuous Integration is a cornerstone of modern Agile and DevOps practices, enabling teams to deliver high-quality software more reliably and frequently.

I hope these example questions and detailed answers help you prepare for your SPPU
exams! Remember to adapt them and prepare other topics with similar depth. Good
luck!
