SEPM Short Notes

1. Introduction to Software Engineering

 Definition: Structured approach to software development focusing on quality,
efficiency, and maintenance.
 Goals:
o Build software that meets user requirements.
o Ensure reliability, maintainability, and performance.
 Key Phases:
o Requirements Gathering: Understand what the user needs.
o Design: Plan architecture and components of the software.
o Implementation: Write the actual code.
o Testing: Verify and validate functionality and performance.
o Maintenance: Update and improve software post-deployment.

2. Software Process
 Definition: A set of activities and steps for developing software systematically.
 Common Models:
o Waterfall Model: Sequential approach; each phase must be completed before
the next begins.
o Agile Model: Iterative and incremental; focuses on flexibility and customer
feedback.
o Spiral Model: Combines iterative development with risk assessment, suitable
for large, complex projects.
o V-Model: Verification and validation-focused; each development phase has a
corresponding testing phase.
 Purpose: Ensures consistency, quality, and predictability in software development.

3. Prescriptive and Specialized Process Models


 Prescriptive Models: Prescribe a defined set of process activities and a workflow
for carrying them out.
o Examples: Waterfall (linear structure), Agile (flexible and iterative).
 Specialized Process Models: Adapted for specific domains or project requirements.
o Component-Based Development (CBD): Focuses on reusing pre-built
components to speed up development.
o Formal Methods Model: Uses rigorous mathematical models to specify and
verify software, often used in safety-critical systems.
o Product-Line Software Engineering: Emphasizes the reuse of software
assets within a product line for efficiency.

4. Software Project Management


 Purpose: Planning, executing, monitoring, and controlling software projects to
achieve goals within constraints.
 Key Aspects:
o Project Estimation: Predict time, cost, and resources required.
o Project Scheduling: Organize tasks to ensure timely completion.
o Risk Management: Identify and manage potential project risks.
o Performance Monitoring: Use metrics and methods to track project progress.

Estimation Techniques
 LOC (Lines of Code):
o Estimates effort based on the number of lines in the codebase.
o Suitable for traditional projects with detailed technical specs.
 FP (Function Point):
o Estimates based on functional user requirements (e.g., inputs, outputs,
inquiries).
o Language-independent and useful for business-oriented projects.
 COCOMO Model (Constructive Cost Model):
o Uses mathematical formulas to estimate time and cost based on project size
and complexity.
o Variants:
 Basic COCOMO: Quick, rough estimates.
 Intermediate COCOMO: Considers more project attributes for
accurate estimates.
 Detailed COCOMO: Includes all project phases and subcomponents
for a comprehensive estimate.
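
A worked sketch of the Basic COCOMO equations for an organic-mode project. The
coefficients below are the published organic-mode values; the 32 KLOC size is a
hypothetical figure chosen for illustration:

# Basic COCOMO, organic mode: effort E = a * KLOC^b (person-months),
# duration D = c * E^d (months). The size estimate is assumed.
a, b, c, d = 2.4, 1.05, 2.5, 0.38

kloc = 32.0                     # estimated size in thousands of LOC (assumed)
effort = a * kloc ** b          # ~91 person-months
duration = c * effort ** d      # ~14 months
staff = effort / duration       # average team size, ~6.6 people
print(f"{effort:.1f} PM over {duration:.1f} months with ~{staff:.1f} people")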

5. Project Scheduling
 Definition: Arranging tasks and resources along a timeline to ensure the project
finishes on time.
 Key Components:
o Task Breakdown: Divides project into smaller, manageable tasks.
o Resource Allocation: Assign resources (team members, tools) to tasks.
o Timeline Creation: Define start and end dates for each task.
 Scheduling Tools: Gantt charts, PERT (Program Evaluation and Review Technique)
charts, and Critical Path Method (CPM).
 Earned Value Analysis (EVA):
o Purpose: Measures project performance by comparing planned vs. actual
progress.
o Metrics:
 Planned Value (PV): Budgeted cost for scheduled work.
 Actual Cost (AC): Actual cost incurred.
 Earned Value (EV): Budgeted cost of work actually performed.
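
From these three metrics the standard variance and index figures follow directly;
a quick sketch with hypothetical numbers:

# Earned value indicators (all monetary values are illustrative).
pv, ev, ac = 100_000.0, 80_000.0, 90_000.0

sv  = ev - pv    # schedule variance: -20,000 (work is behind plan)
cv  = ev - ac    # cost variance:     -10,000 (work cost more than budgeted)
spi = ev / pv    # schedule performance index: 0.80 (< 1 means behind schedule)
cpi = ev / ac    # cost performance index:     ~0.89 (< 1 means over budget)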

6. Risk Management
 Definition: Systematic process to identify, assess, and mitigate risks that could
negatively impact the project.
 Key Steps:
o Risk Identification: List potential risks (e.g., technical, financial, schedule
risks).
o Risk Analysis: Assess probability and impact of each risk.
o Risk Prioritization: Rank risks by severity and likelihood.
o Risk Mitigation Strategies: Develop strategies to reduce or avoid high-
impact risks.
o Risk Monitoring: Continuously review risks throughout the project and
update mitigation plans.

1. Software Requirements
 Definition: Specifications of what a software system should accomplish to satisfy
user needs.
 Types of Requirements:
o Functional Requirements:
 Define specific behaviors or functions of the system.
 Describe what the system should do (e.g., process user input, generate
reports).
 Examples: Login authentication, payment processing, data retrieval.
o Non-Functional Requirements:
 Specify system attributes or qualities that affect user experience.
 Describe how the system performs functions (e.g., performance,
security, usability).
 Examples: Response time, data encryption, user interface design.
 User Requirements:
o High-level descriptions of what users expect from the system.
o Written in non-technical language to be understandable by stakeholders.
o Examples: “The system should allow users to update profile information.”
 System Requirements:
o Detailed and technical specifications of system functions, covering all
hardware and software.
o Include both functional and non-functional requirements.
o Serve as a basis for system design and implementation.
 Software Requirements Specification (SRS):
o Comprehensive document describing all software requirements.
o Serves as a contract between stakeholders and developers.
o Typically includes purpose, scope, functional and non-functional
requirements, system features, and constraints.

2. Requirements Engineering Process


 Purpose: Systematic approach to gathering, analyzing, documenting, and managing
software requirements.
 Key Phases:
o Feasibility Studies:
 Assess if the project is technically, financially, and operationally
feasible.
 Determines if the requirements are realistic and achievable within
constraints.
 Considers risks, costs, timelines, and benefits.
o Requirements Elicitation and Analysis:
 Process of collecting and understanding user and system needs.
 Techniques: Interviews, surveys, observation, brainstorming, and
prototyping.
 Goal: Clarify ambiguous requirements and address conflicting needs.
 Outputs: Detailed requirement specifications and initial SRS.
o Requirements Validation:
 Ensures that requirements accurately represent stakeholders’ needs.
 Methods: Reviews, prototyping, model validation, and testing.
 Goal: Detect issues early to avoid costly errors later in development.
o Requirements Management:
 Ongoing process to handle changes in requirements throughout the
project.
 Activities: Version control, traceability, change management.
 Goal: Maintain consistency and avoid scope creep.

3. Classical Analysis
 Definition: Traditional techniques used to analyze and model system requirements
before design.
 Approaches:
o Structured System Analysis:
 Emphasizes a systematic, top-down approach to model system
requirements.
 Uses Data Flow Diagrams (DFDs), Entity-Relationship Diagrams
(ERDs), and process modeling.
 Aims to represent functional requirements clearly and logically.
o Petri Nets:
 A mathematical modeling tool used for describing and analyzing
concurrent processes.
 Represented as a graphical model with places, transitions, and tokens.
 Useful for modeling complex workflows, synchronizations, and
parallel processing.
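
A minimal sketch of how a Petri net executes: a transition is enabled when every
input place holds a token, and firing it moves tokens from inputs to outputs. The
places and transitions below model a hypothetical task that needs a shared resource:

# Marking: number of tokens currently in each place.
marking = {"ready": 1, "resource": 1, "running": 0, "done": 0}

transitions = {
    "start":  {"inputs": ["ready", "resource"], "outputs": ["running"]},
    "finish": {"inputs": ["running"], "outputs": ["done", "resource"]},
}

def enabled(t):
    return all(marking[p] >= 1 for p in transitions[t]["inputs"])

def fire(t):
    assert enabled(t), f"transition {t} is not enabled"
    for p in transitions[t]["inputs"]:
        marking[p] -= 1          # consume input tokens
    for p in transitions[t]["outputs"]:
        marking[p] += 1          # produce output tokens

fire("start")    # takes the resource and begins running
fire("finish")   # completes the task and releases the resource
print(marking)   # {'ready': 0, 'resource': 1, 'running': 0, 'done': 1}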

4. Data Dictionary
 Definition: Centralized repository that stores definitions and descriptions of data
elements.
 Purpose: Ensure consistency and clarity in the use of data across the system.
 Contents:
o Definitions of data entities, attributes, and relationships.
o Data types, formats, allowed values, and default values.
o Descriptions of data flows, storage, and processes.
 Importance: Helps maintain a clear and organized structure, supporting both
developers and stakeholders in understanding data requirements.
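
A sketch of what a single data dictionary entry might record; all field names and
values here are invented for illustration:

# One hypothetical entry describing the "customer_email" attribute.
customer_email_entry = {
    "name": "customer_email",
    "entity": "Customer",
    "type": "string",
    "format": "RFC 5322 e-mail address",
    "max_length": 254,
    "default": None,
    "description": "Primary contact address, used for login and notifications.",
    "appears_in": ["Registration form (input)", "Invoice (output)"],
}
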
1. Design Process
 Definition: Structured approach to translating software requirements into an
architecture and detailed design.
 Purpose: To create a blueprint that guides the implementation of the software.
 Phases of Design Process:
o Architectural Design (high-level structure)
o Interface Design (interaction between components)
o Component-Level Design (detailed specification for each component)
o Data Design (structure for storing and accessing data)

2. Design Concepts
 Abstraction: Focus on essential behavior at a high level, hiding the complex
underlying mechanics.
 Modularity: Dividing the system into separate, manageable modules or components.
 Cohesion: Degree to which the responsibilities of a module are related; high cohesion
is preferred.
 Coupling: Degree of dependency between modules; low coupling is ideal (see the
sketch after this list).
 Encapsulation: Keeping internal details of modules hidden from other parts of the
system.
 Separation of Concerns: Dividing a system into distinct sections, each handling a
specific concern or functionality.
 Information Hiding: Hiding implementation details from other parts of the system to
minimize impact of changes.
 Refinement: Gradual detailing and refinement of design from high-level abstraction
to implementation-level details.
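
A small sketch of cohesion and coupling in code: each class below has one focused
responsibility, and the report generator depends only on a narrow injected
interface rather than on storage details. All names are hypothetical:

class SalesRepository:
    """Single responsibility: retrieving sales records (high cohesion)."""
    def __init__(self, records):
        self._records = records

    def fetch_sales(self):
        return list(self._records)

class ReportGenerator:
    """Single responsibility: formatting a report (high cohesion)."""
    def __init__(self, fetch_sales):
        self._fetch_sales = fetch_sales   # injected dependency (low coupling)

    def build(self):
        return f"Total sales: {sum(self._fetch_sales())}"

repo = SalesRepository([120, 80, 200])
print(ReportGenerator(repo.fetch_sales).build())   # Total sales: 400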

3. Design Model
 Definition: Blueprint that represents various aspects of the system and guides
development.
 Elements of the Design Model:
o Data Design: Organizes data structures and how data flows through the
system.
o Architectural Design: Defines the overall system structure, components, and
their relationships.
o Interface Design: Specifies interactions between components, modules, or
user interfaces.
o Component-Level Design: Details each component’s internal logic, class
design, and interactions.

4. Design Heuristics
 Definition: Practical guidelines or "rules of thumb" to improve design quality.
 Examples of Design Heuristics:
o Minimize complexity: Aim for simple, clear design structures.
o Increase modularity: Use modules to isolate functionality and simplify
maintenance.
o Favor high cohesion and low coupling: Promote modular, independent
components.
o Anticipate change: Design with future modifications in mind to reduce
rework.
o Use patterns: Apply proven design patterns where applicable to solve
common problems.

5. Architectural Design
 Purpose: Establish a high-level framework for the system, defining major
components and their interactions.
 Architectural Styles:
o Layered Architecture: Organizes the system into layers, each with specific
functionality (e.g., presentation, business logic, data access).
o Client-Server: Divides system into clients (requesters) and servers (providers)
to manage networked applications.
o Microservices: Structures the system as small, independent services that
communicate over a network.
o Event-Driven: Based on producing and consuming events for real-time
responses (e.g., UI, sensor-based applications).
o Pipe-and-Filter: Passes data through a chain of components (filters) that each
transform it as it moves along the pipeline (see the sketch below).
 Architectural Mapping using Data Flow:
o Mapping data flow diagrams (DFDs) into the architecture to establish
communication paths.
o Determines how data flows from input to output across architectural layers or
modules.
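
A minimal pipe-and-filter sketch: each filter independently transforms the data
stream, and the pipeline simply chains them. The filters here are invented examples:

def strip_blanks(lines):              # filter: drop empty lines
    return (ln for ln in lines if ln.strip())

def normalize(lines):                 # filter: trim and lowercase
    return (ln.strip().lower() for ln in lines)

def number(lines):                    # filter: prefix line numbers
    return (f"{i}: {ln}" for i, ln in enumerate(lines, 1))

def pipeline(source, *filters):       # the "pipes" connecting the filters
    for f in filters:
        source = f(source)
    return source

raw = ["  Hello ", "", "WORLD"]
print(list(pipeline(raw, strip_blanks, normalize, number)))
# ['1: hello', '2: world']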

6. User Interface (UI) Design


 Purpose: Create a user-friendly interface that allows users to interact effectively with
the system.
 Key Phases:
o Interface Analysis: Identify user needs, tasks, and interaction patterns.
 Techniques: User interviews, task analysis, and use case scenarios.
o Interface Design: Define how users will interact with the system through
screens, navigation, and feedback mechanisms.
 Goals: Ensure intuitiveness, accessibility, consistency, and
responsiveness.
o Prototyping: Create wireframes or prototypes to visualize and test the UI
design.
o Usability Testing: Evaluate the design with real users to identify and resolve
usability issues.

7. Component-Level Design
 Definition: Detailed design of each module or component, focusing on individual
functionality and internal logic.
 Component Types:
o Class-Based Components: Used in object-oriented programming; each class
represents a component with data and behavior.
 Design Aspects: Define class responsibilities, attributes, and methods;
establish relationships between classes (e.g., inheritance, associations).
o Traditional Components: Function-based components or modules,
commonly used in procedural programming.
 Design Aspects: Specify functionality, inputs, outputs, and control
structures within each component.
 Designing Components:
o Define responsibilities: Clearly outline the component’s purpose.
o Specify interfaces: Define input, output, and interactions with other
components.
o Minimize dependencies: Reduce direct coupling to enhance modularity and
reusability.
o Error handling and exceptions: Address how errors within components will
be managed.

1. Software Testing Fundamentals
 Purpose: Identify and resolve defects in software to ensure it meets requirements and
works reliably.
 Goals of Testing:
o Detect errors and bugs.
o Ensure functionality aligns with requirements.
o Validate performance, usability, and security.
 Internal vs. External Views of Testing:
o Internal (White Box Testing): Focuses on internal code structure and logic.
o External (Black Box Testing): Examines software from a user’s perspective,
without looking at internal code.

2. White Box Testing


 Definition: Testing approach that involves examining the internal structure of the
code.
 Techniques:
o Basis Path Testing:
 Ensures each line of code executes at least once.
 Identifies independent paths through the program based on control
flow.
 Commonly uses Cyclomatic Complexity, a metric that gives the number of
independent paths (see the sketch after this list).
o Control Structure Testing:
 Tests different control structures within the code, such as loops,
conditionals, and branches.
 Types:
 Condition Testing: Validates the logical conditions in the code.
 Loop Testing: Focuses on testing loops for boundary values
and conditions.
 Branch Testing: Ensures that every possible branch or decision
is executed.
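
Cyclomatic complexity can be computed as V(G) = E - N + 2 (edges minus nodes of
the control-flow graph, plus 2), or equivalently as the number of decision points
plus one. A hypothetical example:

def classify(score):
    if score < 0:            # decision 1
        return "invalid"
    elif score < 50:         # decision 2
        return "fail"
    elif score < 80:         # decision 3
        return "pass"
    return "distinction"

# 3 decision points => V(G) = 3 + 1 = 4, so basis path testing needs a test
# case for each of the 4 independent paths (one per return statement above).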

3. Black Box Testing


 Definition: Testing method that focuses on software functionality without looking at
internal code.
 Techniques:
o Equivalence Partitioning: Divides input data into partitions where all data
points are expected to yield similar results.
o Boundary Value Analysis: Tests values at the edges of each partition to
identify boundary defects (see the sketch after this list).
o Decision Table Testing: Uses a table to represent combinations of inputs and
expected outputs.
o State Transition Testing: Evaluates how software behaves across various
states based on inputs and triggers.
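
A sketch of equivalence partitioning and boundary value analysis against a
hypothetical eligibility check for ages 18 through 65:

def is_eligible(age):                 # invented system under test
    return 18 <= age <= 65

# Equivalence partitioning: one representative per partition.
assert is_eligible(10) is False       # "too young" partition
assert is_eligible(40) is True        # valid partition
assert is_eligible(70) is False       # "too old" partition

# Boundary value analysis: probe at and just outside each edge.
for age, expected in [(17, False), (18, True), (65, True), (66, False)]:
    assert is_eligible(age) is expected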

4. Regression Testing
 Purpose: Ensure that recent changes or additions to the codebase have not introduced
new defects.
 Key Features:
o Re-testing previously successful test cases after updates.
o Critical for maintaining stability in large and frequently updated systems.
 Types: Full, partial, and selective regression, depending on the extent of code
changes.

5. Validation Testing
 Definition: Confirms that the software meets user requirements and performs as
expected in a real-world environment.
 Objectives: Validate that the product fulfills all functional and non-functional
requirements.
 Techniques:
o Acceptance testing (e.g., User Acceptance Testing - UAT).
o Beta testing with end-users.

6. System Testing
 Definition: Comprehensive testing of the complete system to evaluate its compliance
with requirements.
 Types of System Testing:
o Performance Testing: Assesses speed, scalability, and response times.
o Security Testing: Identifies vulnerabilities and security gaps.
o Usability Testing: Evaluates user-friendliness and accessibility.
o Compatibility Testing: Verifies functionality across different platforms,
devices, and configurations.
o Recovery Testing: Confirms the system’s ability to recover from failures.

7. Debugging Techniques
 Definition: Process of identifying, analyzing, and fixing bugs or defects in software.
 Common Techniques:
o Code Tracing: Manually following the code execution to find errors.
o Breakpoints and Stepping: Use breakpoints in an IDE to pause and inspect
the program state (see the sketch after this list).
o Automated Debugging Tools: Tools like GDB, Visual Studio Debugger for
efficient debugging.
 Error Logging and Reporting: Logs provide detailed insights to help locate issues.
 Root Cause Analysis: Identifying the fundamental source of a defect to prevent
recurrence.
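
In Python, for instance, the built-in breakpoint() call drops execution into the
pdb debugger, and the standard logging module covers the error logging described
above. The failing function is invented:

import logging

logging.basicConfig(level=logging.DEBUG,
                    format="%(asctime)s %(levelname)s %(message)s")

def average(values):                  # hypothetical function under investigation
    logging.debug("average() called with %r", values)
    # breakpoint()                    # uncomment to pause here and inspect state
    if not values:
        logging.error("average() received an empty list")
        raise ValueError("values must be non-empty")
    return sum(values) / len(values)

average([2, 4, 6])                    # logs the call and returns 4.0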

8. Coding Practices
 Best Practices in Coding:
o Consistent Naming Conventions: Use clear and consistent variable, function,
and class names.
o Code Readability: Write clean, well-commented, and organized code.
o Error Handling: Implement exception handling for managing errors
gracefully.
o Version Control: Use systems like Git to manage changes and track versions.
 Refactoring:
o Definition: Improving the internal structure of code without changing its
external behavior.
o Purpose: Enhance code readability, reduce complexity, and make
maintenance easier.
o Techniques: Simplify code, remove redundancy, optimize algorithms.
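
A small before/after refactoring sketch: external behavior is unchanged while
duplication and magic numbers are removed. The pricing rule is invented:

# Before: duplicated expression and unexplained constants.
def price_with_tax_v1(price, is_member):
    if is_member:
        return price * 0.9 + price * 0.9 * 0.08
    return price + price * 0.08

# After: named constants and a single code path; same external behavior.
TAX_RATE = 0.08
MEMBER_DISCOUNT = 0.10

def price_with_tax(price, is_member):
    if is_member:
        price *= 1 - MEMBER_DISCOUNT
    return price * (1 + TAX_RATE)

assert abs(price_with_tax(100, True) - price_with_tax_v1(100, True)) < 1e-9
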
9. Unit Testing
 Definition: Testing individual components or modules to verify they function
correctly in isolation.
 Characteristics:
o Usually conducted by developers.
o Aims for fast identification and fixing of small errors.
o Common tools: JUnit, NUnit, Mocha for different programming languages.
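
A minimal example in the same spirit with Python's built-in unittest; the slugify
function is an invented unit under test:

import unittest

def slugify(title):
    return "-".join(title.lower().split())

class SlugifyTests(unittest.TestCase):
    def test_lowercases_and_joins(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_extra_spaces(self):
        self.assertEqual(slugify("  A   B  "), "a-b")

if __name__ == "__main__":
    unittest.main()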

10. Integration Testing


 Definition: Testing the interaction between integrated modules to ensure they work
together as expected.
 Types:
o Big Bang Integration: Integrate all modules at once and test.
o Incremental Integration: Gradually integrate modules in phases (Top-Down,
Bottom-Up, Sandwich).
 Objective: Detect interface issues and ensure smooth data flow between modules.

11. Software Implementation


 Definition: Process of translating the design and specifications into executable code.
 Key Phases:
o Coding: Writing code based on design documentation.
o Compilation and Build: Converting code into machine-executable form.
o Deployment: Installing and configuring the software in a real environment.
o Post-Deployment Support: Maintenance and troubleshooting after release.
 Best Practices: Follow coding standards, conduct regular code reviews, and use
automated build tools.

1. Estimation Techniques
 FP (Function Point) Based Estimation:
o Definition: Measures functionality from the user’s perspective, estimating
project size based on user interactions.
o Components: Includes inputs, outputs, user inquiries, files, and interfaces.
o Calculation: Uses weighted function counts adjusted by complexity factors to
determine function points (see the sketch after this list).
o Use: Common for assessing size and complexity, especially in business
applications.
 LOC (Lines of Code) Based Estimation:
o Definition: Measures project size by counting lines of source code.
o Calculation: Estimates are made from historical data on productivity per
line of code (see the sketch after this list).
o Limitations: Not ideal for early stages when detailed coding isn’t known;
more accurate after design phase.
 Make/Buy Decision:
o Purpose: Determines whether to develop software in-house or buy a pre-
existing solution.
o Factors to Consider: Cost, time, resource availability, customization needs,
and support requirements.
o Outcome: Balances benefits, cost-effectiveness, and project timelines in
choosing the best approach.
 COCOMO II (Constructive Cost Model):
o Purpose: A model to estimate cost, effort, and time for software development.
o Background: The original COCOMO classified projects as organic, semi-detached,
and embedded, each with its own coefficients; COCOMO II replaces these modes with
scale factors and cost drivers.
o Calculation: Considers factors such as software size, complexity, team
experience, and technology.
o Versions: COCOMO II provides the Application Composition Model (for prototyping),
the Early Design Model (early, coarse estimates), and the Post-Architecture Model
(detailed estimates once the architecture is known).
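
Worked sketches of the FP and LOC arithmetic above. Every count, rating total, and
rate is a hypothetical example; the weights are the usual average-complexity values:

# Function points: weighted counts scaled by the value adjustment factor.
counts  = {"inputs": 20, "outputs": 15, "inquiries": 10, "files": 5, "interfaces": 4}
weights = {"inputs": 4,  "outputs": 5,  "inquiries": 4,  "files": 10, "interfaces": 7}
ufp = sum(counts[k] * weights[k] for k in counts)   # unadjusted FP = 273

vaf = 0.65 + 0.01 * 42     # 14 general system characteristics rated 0-5, sum assumed 42
fp  = ufp * vaf            # ~292 adjusted function points

# LOC-based estimate: size divided by historical productivity.
estimated_loc = 33_000                       # assumed project size
loc_per_pm    = 620                          # assumed LOC per person-month
effort_pm = estimated_loc / loc_per_pm       # ~53 person-months
cost      = effort_pm * 8_000                # at an assumed $8,000/PM: ~$426,000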

2. Planning
 Project Plan:
o Purpose: A document outlining project goals, scope, timeline, resources, and
deliverables.
o Components: Scope, objectives, resource allocation, timelines, milestones,
risk management, and communication plans.
 Planning Process:
o Steps: Define project scope, determine resources, set timelines, allocate tasks,
and establish budget.
o Involves: Stakeholders to ensure alignment with project objectives.
 Request for Proposal (RFP):
o Definition: A document outlining project requirements, used to solicit bids
from external vendors.
o Purpose: Helps organizations evaluate potential vendors based on expertise,
cost, and proposed solutions.

3. Risk Management
 Identification: Process of identifying potential project risks (e.g., technical,
budgetary, resource-related).
 Projection: Evaluating the likelihood and impact of each risk on the project.
 RMMM (Risk Mitigation, Monitoring, and Management):
o Mitigation: Steps to reduce or eliminate risks.
o Monitoring: Tracking risk status and early warning signs throughout the
project.
o Management: Taking corrective actions when risks materialize to minimize
impact on project objectives.

4. Scheduling and Tracking


 Relationship Between People and Effort:
o More people don’t always mean faster results due to coordination and
communication overhead.
o Brooks’ Law: Adding people to a delayed project often further delays it.
 Task Set & Network:
o Task Set: A collection of project activities to achieve specific goals.
o Task Network (Dependency Diagram): A visual representation of task
dependencies, often displayed as a flowchart (see the sketch at the end of this
section).
 Scheduling: Allocating time frames for each task based on dependencies, resource
availability, and deadlines.
o Tools: Gantt charts, PERT charts for visualization.
 Earned Value Analysis (EVA):
o Definition: A method to assess project performance and progress.
o Metrics: Compares planned value (PV), earned value (EV), and actual cost
(AC) to determine project health.
o Purpose: Helps measure cost efficiency and schedule adherence.
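
A compact sketch of the forward pass over a task network like the one described
above: each task's earliest finish is the latest finish among its predecessors
plus its own duration. Task names and durations are invented:

# Task network: task -> (duration in days, predecessor tasks).
tasks = {
    "spec":   (3, []),
    "design": (5, ["spec"]),
    "code":   (8, ["design"]),
    "docs":   (6, ["design"]),        # runs in parallel with coding
    "test":   (4, ["code"]),
}

finish = {}
def earliest_finish(name):
    if name not in finish:            # memoized recursion over the dependency graph
        duration, preds = tasks[name]
        start = max((earliest_finish(p) for p in preds), default=0)
        finish[name] = start + duration
    return finish[name]

print(max(earliest_finish(t) for t in tasks))   # 20 days, via spec-design-code-test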

5. Process and Project Metrics


 Definition: Quantitative measures used to monitor, control, and improve software
development processes and outcomes.
 Examples of Project Metrics:
o Effort: Total time invested in development.
o Cost: Overall project expenditure.
o Schedule Variance: Difference between planned and actual schedules.
o Defect Density: Number of defects per unit size of software.
 Examples of Process Metrics:
o Productivity: Lines of code or function points per developer per unit of time
(e.g., per person-month).
o Quality: Number of defects found post-release.
o Cycle Time: Average time required to complete specific development
activities.

6. Recent Trends in Software Engineering


 Agile Methodology:
o Definition: Iterative and incremental approach that focuses on flexibility,
collaboration, and customer feedback.
o Key Principles: Adaptive planning, early delivery, continuous improvement.
o Benefits: Responds to change quickly, frequent delivery, and improves
customer satisfaction.
 Scrum:
o Definition: A framework within Agile that organizes work into sprints (time-
boxed iterations).
o Roles: Product Owner (defines requirements), Scrum Master (facilitates
process), Development Team (delivers increments).
o Ceremonies: Sprint Planning, Daily Standups, Sprint Review, and
Retrospective.
 Pair Programming:
o Definition: A development practice where two programmers work together at
one workstation.
o Roles: One programmer (driver) writes the code, while the other
(observer/navigator) reviews it.
o Benefits: Increases code quality, knowledge sharing, and collaborative
problem-solving.
