SEPM Short Notes
2. Software Process
Definition: A set of activities and steps for developing software systematically.
Common Models:
o Waterfall Model: Sequential approach; each phase must be completed before
the next begins.
o Agile Model: Iterative and incremental; focuses on flexibility and customer
feedback.
o Spiral Model: Combines iterative development with risk assessment, suitable
for large, complex projects.
o V-Model: Verification and validation-focused; each development phase has a
corresponding testing phase.
Purpose: Ensures consistency, quality, and predictability in software development.
5. Project Scheduling
Definition: Arranging tasks and resources along a timeline to ensure the project
finishes on time.
Key Components:
o Task Breakdown: Divides project into smaller, manageable tasks.
o Resource Allocation: Assigns resources (team members, tools) to tasks.
o Timeline Creation: Defines start and end dates for each task.
Scheduling Tools: Gantt charts, PERT (Program Evaluation and Review Technique)
charts, and Critical Path Method (CPM).
Earned Value Analysis (EVA):
o Purpose: Measures project performance by comparing planned vs. actual
progress.
o Metrics:
Planned Value (PV): Budgeted cost for scheduled work.
Actual Cost (AC): Actual cost incurred.
Earned Value (EV): Budgeted cost of work actually performed.
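From these three base measures the standard indicators follow: Schedule Variance SV = EV - PV, Cost Variance CV = EV - AC, SPI = EV / PV, and CPI = EV / AC. A minimal Python sketch (the figures are invented purely for illustration):

# Illustrative only: the project figures below are invented.
def earned_value_metrics(pv, ac, ev):
    """Derive the standard EVA indicators from PV, AC and EV."""
    return {
        "schedule_variance": ev - pv,   # SV > 0 means ahead of schedule
        "cost_variance": ev - ac,       # CV > 0 means under budget
        "spi": ev / pv,                 # Schedule Performance Index
        "cpi": ev / ac,                 # Cost Performance Index
    }

if __name__ == "__main__":
    # e.g. work worth 40k was planned, 45k was spent, 35k worth was completed
    print(earned_value_metrics(pv=40_000, ac=45_000, ev=35_000))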
6. Risk Management
Definition: Systematic process to identify, assess, and mitigate risks that could
negatively impact the project.
Key Steps:
o Risk Identification: List potential risks (e.g., technical, financial, schedule
risks).
o Risk Analysis: Assess probability and impact of each risk.
o Risk Prioritization: Rank risks by severity and likelihood.
o Risk Mitigation Strategies: Develop strategies to reduce or avoid high-
impact risks.
o Risk Monitoring: Continuously review risks throughout the project and
update mitigation plans.
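Risk analysis and prioritization are often quantified as risk exposure = probability x impact. A minimal Python sketch with a hypothetical risk register (names and figures are invented):

# Hypothetical risk register: probabilities and cost impacts are invented.
risks = [
    {"name": "Key developer leaves", "probability": 0.3, "impact": 50_000},
    {"name": "Requirements change late", "probability": 0.6, "impact": 20_000},
    {"name": "Third-party API deprecated", "probability": 0.1, "impact": 80_000},
]

# Risk exposure = probability x impact; rank the highest exposure first.
for risk in sorted(risks, key=lambda r: r["probability"] * r["impact"], reverse=True):
    exposure = risk["probability"] * risk["impact"]
    print(f'{risk["name"]}: exposure = {exposure:,.0f}')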
1. Software Requirements
Definition: Specifications of what a software system should accomplish to satisfy
user needs.
Types of Requirements:
o Functional Requirements:
Define specific behaviors or functions of the system.
Describe what the system should do (e.g., process user input, generate
reports).
Examples: Login authentication, payment processing, data retrieval.
o Non-Functional Requirements:
Specify system attributes or qualities that affect user experience.
Describe how the system performs functions (e.g., performance,
security, usability).
Examples: Response time, data encryption, user interface design.
User Requirements:
o High-level descriptions of what users expect from the system.
o Written in non-technical language to be understandable by stakeholders.
o Examples: “The system should allow users to update profile information.”
System Requirements:
o Detailed and technical specifications of system functions, covering all
hardware and software.
o Include both functional and non-functional requirements.
o Serve as a basis for system design and implementation.
Software Requirements Specification (SRS):
o Comprehensive document describing all software requirements.
o Serves as a contract between stakeholders and developers.
o Typically includes purpose, scope, functional and non-functional
requirements, system features, and constraints.
3. Classical Analysis
Definition: Traditional techniques used to analyze and model system requirements
before design.
Approaches:
o Structured System Analysis:
Emphasizes a systematic, top-down approach to model system
requirements.
Uses Data Flow Diagrams (DFDs), Entity-Relationship Diagrams
(ERDs), and process modeling.
Aims to represent functional requirements clearly and logically.
o Petri Nets:
A mathematical modeling tool used for describing and analyzing
concurrent processes.
Represented as a graphical model with places, transitions, and tokens.
Useful for modeling complex workflows, synchronizations, and
parallel processing.
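A minimal Python sketch of the places/transitions/tokens idea; the two-input net and its marking are invented for illustration:

# Minimal Petri-net sketch: a transition fires when every input place has a token.
marking = {"ready": 1, "resource": 1, "done": 0}          # tokens per place (invented net)
transition = {"inputs": ["ready", "resource"], "outputs": ["done"]}

def fire(marking, t):
    """Fire transition t if enabled: consume one token per input, produce one per output."""
    if all(marking[p] >= 1 for p in t["inputs"]):
        for p in t["inputs"]:
            marking[p] -= 1
        for p in t["outputs"]:
            marking[p] += 1
        return True
    return False

fire(marking, transition)
print(marking)   # {'ready': 0, 'resource': 0, 'done': 1}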
4. Data Dictionary
Definition: Centralized repository that stores definitions and descriptions of data
elements.
Purpose: Ensure consistency and clarity in the use of data across the system.
Contents:
o Definitions of data entities, attributes, and relationships.
o Data types, formats, allowed values, and default values.
o Descriptions of data flows, storage, and processes.
Importance: Helps maintain a clear and organized structure, supporting both
developers and stakeholders in understanding data requirements.
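A sketch of what a single entry might hold, using a hypothetical customer_email attribute; the field names are illustrative, not a fixed standard:

# One hypothetical data-dictionary entry for a "customer_email" attribute.
customer_email_entry = {
    "name": "customer_email",
    "entity": "Customer",
    "data_type": "string",
    "format": "RFC 5322 e-mail address",
    "allowed_values": "any syntactically valid address",
    "default_value": None,
    "description": "Primary contact address; used by the notification process.",
    "used_in_flows": ["Registration", "Password reset"],
}
print(customer_email_entry["description"])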
1. Design Process
Definition: Structured approach to translating software requirements into an
architecture and detailed design.
Purpose: To create a blueprint that guides the implementation of the software.
Phases of Design Process:
o Architectural Design (high-level structure)
o Interface Design (interaction between components)
o Component-Level Design (detailed specification for each component)
o Data Design (structure for storing and accessing data)
2. Design Concepts
Abstraction: Focusing on essential, high-level behavior while hiding the complex underlying details.
Modularity: Dividing the system into separate, manageable modules or components.
Cohesion: Degree to which the responsibilities of a module are related; high cohesion
is preferred.
Coupling: Degree of dependency between modules; low coupling is ideal.
Encapsulation: Keeping internal details of modules hidden from other parts of the
system.
Separation of Concerns: Dividing a system into distinct sections, each handling a
specific concern or functionality.
Information Hiding: Hiding implementation details from other parts of the system to
minimize impact of changes.
Refinement: Gradual elaboration of the design, from high-level abstraction down to implementation-level detail.
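A small Python sketch of how encapsulation, high cohesion, and low coupling can look in code; the classes are invented for illustration:

# Invented example: a cohesive, encapsulated module with low coupling.
class Account:
    """High cohesion: everything here concerns one account's balance."""
    def __init__(self, balance=0):
        self._balance = balance            # encapsulation: internal state kept private

    def deposit(self, amount):
        self._balance += amount

    def balance(self):
        return self._balance

class StatementPrinter:
    """Low coupling: depends only on Account's public interface, not its internals."""
    def print_statement(self, account):
        print(f"Current balance: {account.balance()}")

StatementPrinter().print_statement(Account(100))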
3. Design Model
Definition: Blueprint that represents various aspects of the system and guides
development.
Elements of the Design Model:
o Data Design: Organizes data structures and how data flows through the
system.
o Architectural Design: Defines the overall system structure, components, and
their relationships.
o Interface Design: Specifies interactions between components, modules, or
user interfaces.
o Component-Level Design: Details each component’s internal logic, class
design, and interactions.
4. Design Heuristics
Definition: Practical guidelines or "rules of thumb" to improve design quality.
Examples of Design Heuristics:
o Minimize complexity: Aim for simple, clear design structures.
o Increase modularity: Use modules to isolate functionality and simplify
maintenance.
o Favor high cohesion and low coupling: Promote modular, independent
components.
o Anticipate change: Design with future modifications in mind to reduce
rework.
o Use patterns: Apply proven design patterns where applicable to solve
common problems.
5. Architectural Design
Purpose: Establish a high-level framework for the system, defining major
components and their interactions.
Architectural Styles:
o Layered Architecture: Organizes the system into layers, each with specific
functionality (e.g., presentation, business logic, data access).
o Client-Server: Divides the system into clients (requesters) and servers (providers),
typically used for networked applications.
o Microservices: Structures the system as small, independent services that
communicate over a network.
o Event-Driven: Based on producing and consuming events for real-time
responses (e.g., UI, sensor-based applications).
o Pipe-and-Filter: Uses data flow through components (filters) that transform
data as it flows through a pipeline.
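A minimal Python sketch of the pipe-and-filter style, chaining three invented filters over a stream of text lines:

# Pipe-and-filter sketch: each filter transforms the data stream and passes it on.
def strip_blanks(lines):
    return (line for line in lines if line.strip())

def to_upper(lines):
    return (line.upper() for line in lines)

def number(lines):
    return (f"{i}: {line}" for i, line in enumerate(lines, start=1))

pipeline = number(to_upper(strip_blanks(["hello", "", "world"])))
for line in pipeline:
    print(line)        # 1: HELLO / 2: WORLD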
Architectural Mapping using Data Flow:
o Mapping data flow diagrams (DFDs) into the architecture to establish
communication paths.
o Determines how data flows from input to output across architectural layers or
modules.
7. Component-Level Design
Definition: Detailed design of each module or component, focusing on individual
functionality and internal logic.
Component Types:
o Class-Based Components: Used in object-oriented programming; each class
represents a component with data and behavior.
Design Aspects: Define class responsibilities, attributes, and methods;
establish relationships between classes (e.g., inheritance, associations).
o Traditional Components: Function-based components or modules,
commonly used in procedural programming.
Design Aspects: Specify functionality, inputs, outputs, and control
structures within each component.
Designing Components:
o Define responsibilities: Clearly outline the component’s purpose.
o Specify interfaces: Define input, output, and interactions with other
components.
o Minimize dependencies: Reduce direct coupling to enhance modularity and
reusability.
o Error handling and exceptions: Address how errors within components will
be managed.
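A class-based component sketch along these lines (explicit interface, single responsibility, internal error handling); the names are hypothetical:

# Hypothetical class-based component: explicit interface, single responsibility,
# and error handling kept inside the component.
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """Interface other components depend on (keeps coupling low)."""
    @abstractmethod
    def charge(self, amount: float) -> bool: ...

class DummyGateway(PaymentGateway):
    """One concrete component; responsibility: process a single charge."""
    def charge(self, amount: float) -> bool:
        if amount <= 0:
            raise ValueError("amount must be positive")   # error handling
        return True

print(DummyGateway().charge(25.0))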
1. Software Testing Fundamentals
Purpose: Identify and resolve defects in software to ensure it meets requirements and
works reliably.
Goals of Testing:
o Detect errors and bugs.
o Ensure functionality aligns with requirements.
o Validate performance, usability, and security.
Internal vs. External Views of Testing:
o Internal (White Box Testing): Focuses on internal code structure and logic.
o External (Black Box Testing): Examines software from a user’s perspective,
without looking at internal code.
4. Regression Testing
Purpose: Ensure that recent changes or additions to the codebase have not introduced
new defects.
Key Features:
o Re-testing previously successful test cases after updates.
o Critical for maintaining stability in large and frequently updated systems.
Types: Full, partial, and selective regression, depending on the extent of code
changes.
5. Validation Testing
Definition: Confirms that the software meets user requirements and performs as
expected in a real-world environment.
Objectives: Validate that the product fulfills all functional and non-functional
requirements.
Techniques:
o Acceptance testing (e.g., User Acceptance Testing - UAT).
o Beta testing with end-users.
6. System Testing
Definition: Comprehensive testing of the complete system to evaluate its compliance
with requirements.
Types of System Testing:
o Performance Testing: Assesses speed, scalability, and response times.
o Security Testing: Identifies vulnerabilities and security gaps.
o Usability Testing: Evaluates user-friendliness and accessibility.
o Compatibility Testing: Verifies functionality across different platforms,
devices, and configurations.
o Recovery Testing: Confirms the system’s ability to recover from failures.
7. Debugging Techniques
Definition: Process of identifying, analyzing, and fixing bugs or defects in software.
Common Techniques:
o Code Tracing: Manually following the code execution to find errors.
o Breakpoints and Stepping: Use breakpoints in an IDE to pause and inspect
the program state.
o Automated Debugging Tools: Tools like GDB, Visual Studio Debugger for
efficient debugging.
Error Logging and Reporting: Logs provide detailed insights to help locate issues.
Root Cause Analysis: Identifying the fundamental source of a defect to prevent
recurrence.
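A minimal Python sketch of error logging during debugging; the failing function is invented, and breakpoint() marks where interactive inspection could start:

# Sketch of error logging while debugging an invented function.
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(funcName)s: %(message)s")
log = logging.getLogger(__name__)

def divide(a, b):
    log.debug("divide called with a=%s, b=%s", a, b)   # trace the inputs
    try:
        return a / b
    except ZeroDivisionError:
        log.exception("division failed")                # full traceback goes to the log
        return None

divide(10, 0)
# breakpoint() could be placed inside divide() to pause and inspect state interactively.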
8. Coding Practices
Best Practices in Coding:
o Consistent Naming Conventions: Use clear and consistent variable, function,
and class names.
o Code Readability: Write clean, well-commented, and organized code.
o Error Handling: Implement exception handling for managing errors
gracefully.
o Version Control: Use systems like Git to manage changes and track versions.
Refactoring:
o Definition: Improving the internal structure of code without changing its
external behavior.
o Purpose: Enhance code readability, reduce complexity, and make
maintenance easier.
o Techniques: Simplify code, remove redundancy, optimize algorithms.
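A before/after refactoring sketch on an invented function: duplication removed, external behaviour unchanged:

# Before: duplicated logic and a redundant branch.
def shipping_cost_before(weight, express):
    if express:
        cost = weight * 2.0
        cost = cost + 5.0
        return cost
    else:
        cost = weight * 2.0
        return cost

# After: same external behaviour, duplication removed.
def shipping_cost_after(weight, express):
    cost = weight * 2.0
    return cost + 5.0 if express else cost

assert shipping_cost_before(3, True) == shipping_cost_after(3, True)
assert shipping_cost_before(3, False) == shipping_cost_after(3, False)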
9. Unit Testing
Definition: Testing individual components or modules to verify they function
correctly in isolation.
Characteristics:
o Usually conducted by developers.
o Aims for fast identification and fixing of small errors.
o Common tools: JUnit (Java), NUnit (.NET), Mocha (JavaScript).
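A minimal example using Python's built-in unittest module (analogous in spirit to the tools listed above); the function under test is invented:

# Minimal unit test with Python's built-in unittest framework.
import unittest

def add(a, b):          # invented unit under test
    return a + b

class AddTests(unittest.TestCase):
    def test_adds_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()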
2. Planning
Project Plan:
o Purpose: A document outlining project goals, scope, timeline, resources, and
deliverables.
o Components: Scope, objectives, resource allocation, timelines, milestones,
risk management, and communication plans.
Planning Process:
o Steps: Define project scope, determine resources, set timelines, allocate tasks,
and establish budget.
o Involves: Stakeholders to ensure alignment with project objectives.
Request for Proposal (RFP):
o Definition: A document outlining project requirements, used to solicit bids
from external vendors.
o Purpose: Helps organizations evaluate potential vendors based on expertise,
cost, and proposed solutions.
3. Risk Management
Identification: Process of identifying potential project risks (e.g., technical,
budgetary, resource-related).
Projection: Evaluating the likelihood and impact of each risk on the project.
RMMM (Risk Mitigation, Monitoring, and Management):
o Mitigation: Steps to reduce or eliminate risks.
o Monitoring: Tracking risk status and early warning signs throughout the
project.
o Management: Taking corrective actions when risks materialize to minimize
impact on project objectives.