Computer Science and Information Technology
Digital Signal Processing (DSP): FFT & Filter Design
Introduction
Digital Signal Processing (DSP) refers to the manipulation of signals using digital
techniques to improve their quality or extract useful information. Two important
concepts in DSP are the Fast Fourier Transform (FFT) and Filter Design,
which play crucial roles in frequency analysis and signal enhancement.
Filter Design
Definition and Purpose
Filters are essential in DSP for removing unwanted components from a signal or
extracting desired information. Digital filters are classified into Finite Impulse
Response (FIR) and Infinite Impulse Response (IIR) filters based on their
impulse response characteristics.
Types of Digital Filters
1. Low-Pass Filter (LPF): Allows low-frequency signals to pass while
attenuating high-frequency components.
2. High-Pass Filter (HPF): Allows high-frequency signals to pass while
blocking low-frequency signals.
3. Band-Pass Filter (BPF): Allows a specific range of frequencies to pass
while blocking others.
4. Band-Stop Filter (BSF): Blocks a specific frequency range while allowing
others to pass.
Filter Implementation Techniques
1. Finite Impulse Response (FIR) Filters:
o Have a finite duration impulse response.
o Are commonly designed by applying a window function to an ideal impulse response.
3. Blackman Window:
o Offers better side lobe suppression at the cost of a wider main
lobe.
o Defined as: w(n) = 0.42 − 0.5 cos(2πn/(N−1)) + 0.08 cos(4πn/(N−1)), for 0 ≤ n ≤ N−1.
o Used in applications requiring high stopband attenuation,
such as speech processing.
Each windowing technique affects the frequency response of the filter,
making the choice dependent on the application's requirements
for resolution and attenuation.
2. Infinite Impulse Response (IIR) Filters:
o Have an infinite duration impulse response.
o Use feedback loops and require fewer coefficients than FIR filters.
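The FIR behaviour described above can be sketched as a direct-form convolution, y[n] = Σ h[k]·x[n−k]. This is a minimal illustration; the 3-tap moving-average coefficients are our own choice (a crude low-pass filter), not taken from the notes.

```java
// Minimal sketch of a direct-form FIR filter: y[n] = sum_k h[k] * x[n-k].
// The moving-average coefficients below are illustrative only.
public class FirFilter {
    static double[] fir(double[] x, double[] h) {
        double[] y = new double[x.length];
        for (int n = 0; n < x.length; n++) {
            double acc = 0.0;
            for (int k = 0; k < h.length && k <= n; k++) {
                acc += h[k] * x[n - k];  // convolve input with impulse response
            }
            y[n] = acc;
        }
        return y;
    }

    public static void main(String[] args) {
        // 3-tap moving average acts as a simple low-pass filter
        double[] h = {1.0 / 3, 1.0 / 3, 1.0 / 3};
        double[] x = {3, 3, 3, 3, 3};
        System.out.println(java.util.Arrays.toString(fir(x, h)));
    }
}
```

Because the impulse response h has finite length, the output depends only on a finite window of past inputs, which is exactly the FIR property described above.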
Conclusion
The Fast Fourier Transform (FFT) and Filter Design are fundamental
concepts in Digital Signal Processing (DSP). While FFT helps analyze signals in
the frequency domain, digital filters modify signals to improve quality or extract
useful information. Their applications span across various fields including audio
processing, communications, biomedical engineering, and radar systems.
Understanding these concepts is essential for designing efficient signal
processing solutions.
Computer Networks: Data Communication Systems & Applications
Data Communication Systems
Data communication refers to the process of transferring digital or analog
data between devices through a communication medium. A well-structured
data communication system ensures efficient, secure, and reliable data transfer.
Components of a Data Communication System:
1. Source (Sender): The device or application generating the data (e.g.,
computer, sensor).
2. Transmitter: Converts data into a suitable signal for transmission (e.g.,
modem, network adapter).
3. Transmission Medium: The channel through which data travels (e.g.,
wired – coaxial, fiber optic; wireless – radio waves, infrared).
4. Receiver: The destination device that receives the data.
5. Destination: The end system where the received data is processed or
stored.
Modes of Data Transmission:
1. Simplex: One-way communication (e.g., TV broadcasting).
2. Half-Duplex: Data flows in both directions, but only one direction at a
time (e.g., walkie-talkies).
3. Full-Duplex: Simultaneous two-way communication (e.g., telephone
conversations).
Types of Data Transmission:
Serial Transmission: Data is transmitted bit-by-bit sequentially over a
single channel (e.g., USB, RS-232).
Parallel Transmission: Multiple bits are transmitted simultaneously over
multiple channels (used in internal computer buses).
Transmission Techniques:
1. Synchronous Transmission: Data is sent in continuous streams with
synchronization between sender and receiver (e.g., Ethernet).
2. Asynchronous Transmission: Data is sent character by character, framed
with start and stop bits, so sender and receiver need no shared clock (e.g.,
keyboard input).
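The start/stop-bit framing described above can be sketched in code. This is a hypothetical illustration (one start bit, 8 data bits sent LSB first, one stop bit); real serial formats also vary parity and stop-bit count.

```java
// Illustrative asynchronous framing: start bit (0), 8 data bits
// (LSB first), stop bit (1). The format is a common convention,
// used here only as an example.
public class AsyncFrame {
    static int[] frame(int dataByte) {
        int[] bits = new int[10];
        bits[0] = 0;                            // start bit
        for (int i = 0; i < 8; i++) {
            bits[1 + i] = (dataByte >> i) & 1;  // data bits, LSB first
        }
        bits[9] = 1;                            // stop bit
        return bits;
    }

    public static void main(String[] args) {
        // 'A' = 0x41 = 0100 0001
        System.out.println(java.util.Arrays.toString(frame('A')));
    }
}
```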
1. Relational Model
The Relational Model organizes data into tables (relations), where each table
consists of rows (tuples) and columns (attributes).
Key Concepts:
o Primary Key: Unique identifier for each record.
2. Database Design
Database design ensures an efficient and optimized structure for storing and
managing data. It involves:
1. Conceptual Design: Entity-Relationship (ER) modeling.
2. Logical Design: Defining tables, relationships, and constraints.
3. Physical Design: Optimizing storage and indexing.
Normalization Stages:
1NF: Ensures atomic column values and removes repeating groups.
2NF: Removes partial dependencies.
3NF: Removes transitive dependencies.
BCNF: Ensures every determinant is a candidate key.
3. Implementation Techniques
DBMSs use different implementation techniques to optimize storage and
retrieval of data:
Indexing: Speeds up search operations.
Hashing: Directly maps keys to memory locations.
Transactions: Ensure Atomicity, Consistency, Isolation, and
Durability (the ACID properties).
Concurrency Control: Prevents data conflicts in multi-user
environments.
4. Distributed Databases
A Distributed Database stores data across multiple locations and allows
access from different networked systems.
Types:
1. Homogeneous Distributed DB: Same DBMS across all nodes.
2. Heterogeneous Distributed DB: Different DBMSs across nodes.
Advantages:
✔ Increased availability and reliability.
✔ Supports parallel processing for faster queries.
✔ Ensures fault tolerance in case of system failures.
Conclusion:
A DBMS provides an organized way of managing data, ensuring efficiency,
security, and integrity. Advancements in distributed databases, object-
oriented models, and data mining have made databases more powerful in
handling large-scale applications in various domains like finance,
healthcare, and e-commerce.
1. Waterfall Model
The Waterfall Model follows a sequential approach, where each phase
must be completed before moving to the next.
Phases:
1. Requirement Analysis – Gathering and defining requirements.
2. System Design – Planning architecture and system components.
3. Implementation – Coding and unit testing.
4. Integration & Testing – System testing to identify defects.
5. Deployment – Delivering the final product.
6. Maintenance – Bug fixes and enhancements.
Advantages:
✔ Simple and easy to understand.
✔ Best suited for well-defined projects with clear requirements.
Disadvantages:
✖ Not flexible for changes.
✖ Late testing phase may lead to costly fixes.
2. Agile Model
The Agile Model is an iterative and incremental approach that focuses
on flexibility, collaboration, and customer feedback.
Key Features:
Development occurs in short cycles (sprints).
Continuous customer involvement.
Uses frameworks like Scrum and Kanban.
Advantages:
✔ Rapid delivery of working software.
✔ Adaptable to changing requirements.
✔ Encourages collaboration and continuous improvement.
Disadvantages:
✖ Requires high customer involvement.
✖ Not ideal for projects with fixed scope and budget.
3. Spiral Model
The Spiral Model combines Waterfall and Prototyping approaches,
focusing on risk management.
Phases in Each Spiral Cycle:
1. Planning – Defining objectives and identifying risks.
2. Risk Analysis – Evaluating potential project risks.
3. Engineering – Developing and testing the prototype.
4. Evaluation – Reviewing and refining the system.
Advantages:
✔ Best for complex and high-risk projects.
✔ Allows for continuous risk assessment and early error detection.
Disadvantages:
✖ Expensive due to frequent risk evaluations.
✖ Requires skilled professionals for risk assessment.
4. V-Model
Advantages:
✔ Detects defects early.
✔ Best suited for critical systems (healthcare, aviation, banking,
etc.).
Disadvantages:
✖ Rigid and does not handle changing requirements well.
✖ Higher initial planning and documentation effort.
Conclusion
Each SDLC model has its strengths and weaknesses, and the choice
depends on project complexity, risk factors, customer involvement,
and flexibility requirements.
1. Encapsulation
Encapsulation is the process of hiding the internal details of an object
and restricting direct access to its data.
Key Features:
Data is hidden using private or protected access modifiers.
Methods provide controlled access to data (getters and setters).
Example (Java):
class Student {
    private String name; // Private variable

    // Getter method
    public String getName() {
        return name;
    }

    // Setter method
    public void setName(String newName) {
        name = newName;
    }
}
Advantages:
✔ Prevents unauthorized data access and modification.
✔ Increases security and code maintainability.
2. Inheritance
Inheritance allows a child class (subclass) to acquire properties and
behaviors from a parent class (superclass).
Types of Inheritance:
Single Inheritance: One parent, one child.
Multiple Inheritance (Supported in C++): A child inherits from
multiple parents.
Multilevel Inheritance: A class inherits from another derived class.
Hierarchical Inheritance: Multiple classes inherit from one parent.
Example (Java):
class Animal {
    void makeSound() {
        System.out.println("Animal makes a sound");
    }
}

class Dog extends Animal { // Dog inherits makeSound() from Animal
    void bark() { System.out.println("Dog barks"); }
}
3. Polymorphism
Polymorphism allows objects to be treated as instances of their parent
class while behaving differently based on their actual type.
Types of Polymorphism:
Compile-time (Method Overloading): Multiple methods with the same
name but different parameters.
Runtime (Method Overriding): A subclass provides a different
implementation of a method.
Example (Java - Method Overloading):
class MathOperations {
    int add(int a, int b) {
        return a + b;
    }

    double add(double a, double b) { // Same name, different parameter types
        return a + b;
    }
}
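Runtime polymorphism (method overriding) can be sketched in the same way. The Shape/Circle names below are illustrative, not from the notes; the point is that the call is dispatched by the object's actual type, not the reference type.

```java
// Minimal sketch of runtime polymorphism (method overriding).
class Shape {
    String describe() { return "generic shape"; }
}

class Circle extends Shape {
    @Override
    String describe() { return "circle"; } // overrides the parent version
}

public class OverridingDemo {
    public static void main(String[] args) {
        Shape s = new Circle();           // parent reference, child object
        System.out.println(s.describe()); // dispatched to Circle at runtime
    }
}
```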
4. Abstraction
Abstraction is the concept of hiding implementation details while
exposing only necessary features.
Implementation:
Abstract Classes – Can have both implemented and abstract
methods.
Interfaces – Only method signatures, no implementation (in Java before
Java 8).
Example (Java - Abstract Class):
abstract class Vehicle {
    abstract void start(); // Abstract method (no body)

    void fuel() {
        System.out.println("Filling fuel");
    }
}
Conclusion
The four OOP principles – Encapsulation, Inheritance, Polymorphism,
and Abstraction – provide a strong foundation for building scalable and
maintainable software systems.
4. Software Design
Modular Design: Breaking software into smaller, manageable modules.
Architectural Design: Defines software structure using patterns like MVC
(Model-View-Controller).
User Interface Design: Ensuring usability and accessibility.
Real-Time Software Design: Focuses on time-constrained applications
like embedded systems.
System Design: Defines overall system architecture and data flow.
Data Acquisition System: Systems used for collecting and analyzing
data in real-time applications.
1. Unit Testing
Unit Testing focuses on testing individual components (functions,
methods, or modules) of a software application.
Key Aspects:
Performed by developers during development.
Uses test cases to verify correctness.
Usually automated using frameworks like JUnit (Java), pytest
(Python).
Example (Java - JUnit Test Case):
import static org.junit.Assert.*;
import org.junit.Test;

public class CalculatorTest {
    @Test
    public void testAddition() {
        assertEquals(5, 2 + 3); // Passes when the result equals 5
    }
}
2. Integration Testing
Integration Testing checks how different modules interact with each
other after unit testing.
Types of Integration Testing:
Top-Down: Testing from higher-level modules to lower ones.
Bottom-Up: Testing lower modules first, then integrating higher ones.
Big Bang: All modules tested at once after unit testing.
Incremental: Modules integrated and tested step by step.
Example:
If a login module interacts with a database module, integration testing
ensures:
✔ The login module correctly retrieves user credentials.
✔ The database module responds as expected.
Advantages:
✔ Identifies interface issues between components.
✔ Ensures smooth data flow between modules.
3. System Testing
System Testing validates the entire system against functional and non-
functional requirements.
Key Features:
Performed after integration testing.
Ensures the software meets business requirements.
Includes functional, performance, security, and usability testing.
Example Tests:
✔ Verifying whether an e-commerce website correctly processes orders.
✔ Checking if a banking system properly handles transactions.
Advantages:
✔ Detects issues before deployment.
✔ Ensures the entire system works correctly.
4. User Acceptance Testing (UAT)
User Acceptance Testing (UAT) is the final phase, where end users
validate if the software meets their needs.
Key Features:
Conducted by actual users or clients.
Focuses on real-world scenarios.
Ensures user satisfaction before production release.
Example:
✔ Testing a payment gateway with real transactions before launch.
✔ Validating a hospital management system for actual usage.
Advantages:
✔ Confirms that software is ready for deployment.
✔ Reduces post-release issues and customer complaints.
Conclusion
Each level of testing plays a crucial role in software quality:
Unit Testing ensures individual modules work correctly.
Integration Testing checks interactions between components.
System Testing verifies the complete system.
UAT ensures real-world usability before release.
Defects and Test Case Design Strategies: Ensuring robust test cases
for finding bugs.
Software Quality & Reusability: Promoting modularity and efficiency.
Conclusion
Software engineering methodologies ensure that software is developed
efficiently, with high quality and maintainability. The use of requirement
management, testing, and project management techniques helps in
building reliable and scalable software systems.
1. Intelligent Agents
An intelligent agent perceives its environment through sensors and acts on it
through actuators to achieve specific goals.
Types of Intelligent Agents:
Simple Reflex Agents: Act based on current perception (e.g.,
thermostat).
Model-Based Agents: Maintain an internal state to handle dynamic
environments.
Goal-Based Agents: Decide actions based on predefined goals.
Utility-Based Agents: Optimize actions for maximum benefit.
Example:
✔ A self-driving car perceives traffic and makes driving decisions.
2. Search Strategies in AI
Search is fundamental to AI problem-solving, used in pathfinding, decision
trees, and optimization.
Types of Search Strategies:
Uninformed Search: No additional information (e.g., BFS, DFS).
Informed Search: Uses heuristics for better performance (e.g., A*,
Greedy Search).
Example:
✔ A GPS navigation system finds the shortest path using A* search.
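The uninformed strategies above can be illustrated with breadth-first search over an adjacency-list graph. This is a minimal sketch; the node names and graph shape are our own example.

```java
import java.util.*;

// Minimal sketch of breadth-first search (an uninformed strategy).
// The graph below is illustrative.
public class BfsDemo {
    static List<String> bfs(Map<String, List<String>> graph, String start) {
        List<String> visited = new ArrayList<>();
        Queue<String> queue = new ArrayDeque<>();
        Set<String> seen = new HashSet<>();
        queue.add(start);
        seen.add(start);
        while (!queue.isEmpty()) {
            String node = queue.poll();   // FIFO order = explore level by level
            visited.add(node);
            for (String next : graph.getOrDefault(node, List.of())) {
                if (seen.add(next)) queue.add(next);
            }
        }
        return visited;
    }

    public static void main(String[] args) {
        Map<String, List<String>> g = Map.of(
            "A", List.of("B", "C"),
            "B", List.of("D"),
            "C", List.of("D"));
        System.out.println(bfs(g, "A")); // A first, then its neighbours, then D
    }
}
```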
3. Knowledge Representation
AI systems store and process knowledge using structured models.
Types of Knowledge Representation:
Semantic Networks: Graph-based relationships between concepts.
Frames: Data structures storing related attributes.
Logical Representations: Predicate logic for reasoning.
Ontologies: Define relationships between entities.
Example:
✔ Chatbots use ontologies to understand user queries.
4. Learning in AI
AI systems learn from data to improve performance.
Types of Machine Learning:
Supervised Learning: Uses labeled data (e.g., Spam detection).
Unsupervised Learning: Finds hidden patterns (e.g., Customer
segmentation).
Reinforcement Learning: AI learns via rewards and penalties (e.g.,
Game AI).
Example:
✔ Face recognition systems use supervised learning for identification.
5. Applications of AI
AI is widely used across industries:
Healthcare: AI-assisted diagnosis, drug discovery.
Finance: Fraud detection, algorithmic trading.
Autonomous Systems: Self-driving cars, drones.
Natural Language Processing (NLP): Chatbots, virtual assistants.
Robotics: AI-powered robots for automation.
Example:
✔ Google Assistant uses NLP and AI to process voice commands.
Conclusion
AI is transforming industries through intelligent agents, search strategies,
knowledge representation, learning, and applications. It enables
automation, enhances decision-making, and improves efficiency across multiple
domains.
Mobile Computing
Mobile computing enables wireless data transmission and remote
communication, allowing users to access computing resources anytime,
anywhere. It involves wireless communication fundamentals,
telecommunication systems, and wireless networks.
2. Telecommunication Systems
Telecommunication systems enable long-distance communication through a
structured network.
Components:
✔ Base Station: Acts as an access point for mobile devices.
✔ Switching Center: Routes calls/data between networks.
✔ Mobile Device: End-user device for communication (smartphones, tablets).
Example:
✔ GSM (Global System for Mobile Communication) is a telecommunication
system for mobile networks.
3. Wireless Networks
Wireless networks allow devices to communicate without wired connections.
Types of Wireless Networks:
✔ WLAN (Wireless Local Area Network): Short-range wireless
communication (e.g., Wi-Fi).
✔ WPAN (Wireless Personal Area Network): Very short-range, connecting
personal devices (e.g., Bluetooth).
✔ WMAN (Wireless Metropolitan Area Network): Covers a city or large area
(e.g., WiMAX).
✔ WWAN (Wireless Wide Area Network): Covers large geographical areas
(e.g., 4G, 5G).
Example:
✔ Wi-Fi networks allow laptops and smartphones to connect to the internet
wirelessly.
Conclusion
Mobile computing enables seamless communication via wireless technologies,
telecommunication systems, and wireless networks. It powers
smartphones, IoT devices, and real-time data sharing, making computing more
accessible and efficient.
Security in Computing
Security in computing focuses on protecting data, systems, and networks
from unauthorized access, attacks, and threats. It includes program security,
OS security, database & network security, scientific computing,
information coding techniques, cryptography, and network security.
1. Program Security
Program security ensures that software applications are protected against
vulnerabilities that could be exploited by attackers.
Key Aspects:
✔ Buffer Overflow Attacks: Occur when data exceeds buffer limits, leading to
memory corruption.
✔ Malware: Includes viruses, worms, trojans, ransomware.
✔ Secure Coding Practices: Prevents security flaws by following best
programming practices.
Example:
✔ A SQL injection attack exploits vulnerabilities in input validation to
manipulate databases.
Conclusion
Security in computing safeguards data and systems using program security,
OS security, database & network security, cryptography, and secure
coding techniques. With increasing cyber threats, robust security measures are
essential for modern computing environments.
1. Random Processes
A random process is a collection of random variables that evolve over time. It
models uncertainty in systems like communication networks and
manufacturing.
Types of Random Processes:
✔ Stationary Process: Statistical properties do not change over time.
✔ Markov Process: Future states depend only on the current state, not past
history.
✔ Poisson Process: Models the occurrence of random events (e.g., call arrivals
in a telecom system).
Example:
✔ Packet arrival in a network follows a Poisson Process.
2. Probability Distributions
Probability distributions describe the likelihood of outcomes in random
experiments.
✔ Discrete Distributions: Used for countable outcomes (e.g., Binomial,
Poisson).
✔ Continuous Distributions: Used for measurable quantities (e.g., Normal,
Exponential).
Example:
✔ Gaussian (Normal) Distribution is used in machine learning models for
data analysis.
3. Queuing Models and Simulation
Queuing models analyze waiting lines in systems like customer service, traffic,
and network servers.
Queuing System Components:
✔ Arrival Process: Customers arrive randomly (e.g., Poisson arrivals).
✔ Service Process: Service times are usually Exponential.
✔ Number of Servers: Single or multiple servers.
Common Models:
✔ M/M/1 Queue: Single server, Poisson arrivals, exponential service time.
✔ M/M/c Queue: Multiple servers, Poisson arrivals, exponential service time.
Example:
✔ Call centers use queuing models to optimize customer wait times.
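The M/M/1 model above has well-known closed-form steady-state results, which can be sketched directly. The formulas used here are the standard ones (utilization ρ = λ/μ, average number in system L = ρ/(1−ρ), average time in system W = 1/(μ−λ)); the numeric rates are an illustrative example.

```java
// Standard M/M/1 steady-state formulas (lambda = arrival rate,
// mu = service rate); valid only when rho = lambda/mu < 1.
public class MM1Queue {
    static double utilization(double lambda, double mu) {
        return lambda / mu;               // rho
    }

    static double avgInSystem(double lambda, double mu) {
        double rho = lambda / mu;
        return rho / (1 - rho);           // L = rho / (1 - rho)
    }

    static double avgTimeInSystem(double lambda, double mu) {
        return 1.0 / (mu - lambda);       // W = 1 / (mu - lambda)
    }

    public static void main(String[] args) {
        double lambda = 2.0, mu = 4.0;    // e.g. 2 arrivals/min, 4 served/min
        System.out.println(utilization(lambda, mu));     // 0.5
        System.out.println(avgInSystem(lambda, mu));     // 1 customer on average
        System.out.println(avgTimeInSystem(lambda, mu)); // 0.5 time units
    }
}
```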
4. Hypothesis Testing
Hypothesis testing is used in statistical inference to validate claims about a
population.
Key Steps:
✔ Null Hypothesis (H₀): Assumes no effect or difference.
✔ Alternative Hypothesis (H₁): Assumes a significant effect.
✔ Test Statistic: Used to decide whether to reject H₀ (e.g., t-test, chi-square
test).
✔ Significance Level (α): Typically 5% (0.05).
Example:
✔ A/B testing in marketing uses hypothesis testing to determine the better
strategy.
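The test-statistic step above can be made concrete with a one-sample t statistic, t = (x̄ − μ₀) / (s / √n). The sample values below are made up for the sketch; a real test would then compare t against a critical value for the chosen α.

```java
// Illustrative one-sample t statistic: t = (mean - mu0) / (s / sqrt(n)).
// The sample data is invented for this sketch.
public class TTest {
    static double tStatistic(double[] sample, double mu0) {
        int n = sample.length;
        double mean = 0;
        for (double x : sample) mean += x;
        mean /= n;
        double ss = 0;
        for (double x : sample) ss += (x - mean) * (x - mean);
        double s = Math.sqrt(ss / (n - 1));       // sample standard deviation
        return (mean - mu0) / (s / Math.sqrt(n)); // standardized difference
    }

    public static void main(String[] args) {
        double[] sample = {5, 6, 7, 8, 9};        // mean 7, s = sqrt(2.5)
        System.out.println(tStatistic(sample, 6.5));
    }
}
```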
2. Graph Theory
Graph theory is used in networking, social media analysis, AI, and
algorithms.
Graph Representation:
✔ Graph (G): A collection of nodes (vertices) and edges (connections).
✔ Adjacency Matrix/List: Represents the structure of a graph.
Types of Graphs:
✔ Directed Graph (Digraph): Edges have a direction (e.g., web links).
✔ Undirected Graph: Edges have no direction (e.g., social networks).
✔ Weighted Graph: Edges have weights (e.g., road networks with distances).
Graph Algorithms:
✔ Dijkstra’s Algorithm: Finds the shortest path in a weighted graph.
✔ Kruskal’s Algorithm: Finds the minimum spanning tree (MST).
✔ DFS & BFS: Used in searching, AI, and pathfinding problems.
Example:
✔ Google Maps uses Dijkstra’s Algorithm to find the fastest route.
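Dijkstra's algorithm, named above, can be sketched on a small adjacency matrix. This is a minimal O(V²) version for illustration (the graph and the convention that 0 means "no edge" are our own choices); production code would use a priority queue.

```java
import java.util.*;

// Sketch of Dijkstra's algorithm (non-negative edge weights) on an
// adjacency matrix; 0 off the diagonal means "no edge" here.
public class DijkstraDemo {
    static int[] dijkstra(int[][] w, int src) {
        int n = w.length;
        int[] dist = new int[n];
        boolean[] done = new boolean[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[src] = 0;
        for (int i = 0; i < n; i++) {
            int u = -1;
            for (int v = 0; v < n; v++)      // pick closest unfinished node
                if (!done[v] && (u == -1 || dist[v] < dist[u])) u = v;
            if (dist[u] == Integer.MAX_VALUE) break;
            done[u] = true;
            for (int v = 0; v < n; v++)      // relax edges out of u
                if (w[u][v] > 0 && dist[u] + w[u][v] < dist[v])
                    dist[v] = dist[u] + w[u][v];
        }
        return dist;
    }

    public static void main(String[] args) {
        int[][] w = {
            {0, 4, 1, 0},
            {4, 0, 2, 5},
            {1, 2, 0, 8},
            {0, 5, 8, 0}};
        System.out.println(Arrays.toString(dijkstra(w, 0))); // [0, 3, 1, 8]
    }
}
```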
Conclusion
Formal Languages & Automata Theory help in compiler design, NLP, and AI,
while Graph Theory is essential for networking, optimization, and search
algorithms.
Compiler Design
Compiler design is a crucial aspect of computer science that deals with
converting high-level programming languages into machine code. A compiler
performs this transformation through multiple stages, ensuring that the code is
optimized, error-free, and efficient for execution.
1. Phases of a Compiler
A compiler works in six main phases, grouped under two categories:
A. Analysis Phase (Front-End)
✔ Lexical Analysis:
Converts a sequence of characters (source code) into tokens.
Example: int x = 10; is broken into tokens: int, x, =, 10, ;
✔ Syntax Analysis (Parsing):
Checks the syntax based on grammar rules.
Example: Detects missing semicolons or unmatched brackets.
✔ Semantic Analysis:
Ensures logical correctness (e.g., type checking, undeclared variables).
Example: Prevents assigning a float value to an integer variable.
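The lexical-analysis step above can be sketched with a toy tokenizer for the `int x = 10;` example. Real lexers are built from regular expressions and finite automata; this string-splitting version is purely illustrative.

```java
import java.util.*;

// Toy sketch of lexical analysis: splitting "int x = 10;" into tokens.
// Real compilers use finite-automaton-based lexers; this is illustrative.
public class Lexer {
    static List<String> tokenize(String src) {
        List<String> tokens = new ArrayList<>();
        // pad punctuation with spaces, then split on whitespace
        String spaced = src.replaceAll("([=;(){}])", " $1 ");
        for (String t : spaced.trim().split("\\s+")) tokens.add(t);
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(tokenize("int x = 10;")); // [int, x, =, 10, ;]
    }
}
```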
3. Code Generation
The final phase of the compiler translates optimized code into machine-level
instructions.
Steps in Code Generation:
✔ Instruction Selection: Choosing the best CPU instructions for efficiency.
✔ Register Allocation: Assigning frequently used variables to registers
instead of memory.
✔ Code Scheduling: Reordering instructions for parallel execution.
Example:
High-Level Code:
A = B + C;
Assembly Code (for a hypothetical CPU):
LOAD R1, B
ADD R1, C
STORE A, R1
Conclusion
Compiler design plays a crucial role in software development by ensuring that
programs are efficient, optimized, and error-free. The different paradigms
in programming provide flexibility to solve complex problems effectively.
1. Process Management
A process is an executing program, and process management involves
scheduling, synchronization, and resource allocation.
Key Concepts:
✔ Process States: New → Ready → Running → Waiting → Terminated
✔ Process Scheduling:
Long-term scheduler: Selects processes to enter memory.
Short-term scheduler: Allocates CPU to ready processes.
Medium-term scheduler: Swaps processes in/out of memory.
✔ Inter-Process Communication (IPC): Mechanisms like shared
memory and message passing.
Example:
When running multiple applications like Chrome and MS Word, the OS
schedules CPU time for each process.
2. Storage Management
Storage management ensures efficient use of primary (RAM) and secondary
(HDD/SSD) memory.
Memory Management Techniques:
✔ Paging: Divides memory into fixed-size pages, reducing fragmentation.
✔ Segmentation: Divides memory logically (e.g., code, data, stack).
✔ Virtual Memory: Uses swap space on the disk when RAM is full.
File Systems:
✔ Types: FAT32, NTFS, ext4
✔ Operations: Creation, deletion, read/write, and access control.
Example:
When a program runs out of RAM, the OS uses virtual memory to store inactive
pages on the hard disk.
3. I/O Systems
I/O management handles interaction between the CPU and peripheral devices
(keyboard, mouse, printer, etc.).
✔ Device Drivers: Software that allows OS to communicate with hardware.
✔ Interrupt Handling: Notifies CPU about I/O events (e.g., keyboard input).
Example:
When you print a document, the OS sends data to the printer driver, which
converts it into a format the printer understands.
6. Conclusion
Operating systems manage processes, memory, storage, and I/O devices to
ensure efficient system operation. System software like assemblers, linkers,
loaders, and macro processors plays a crucial role in program execution.
Distributed Systems
A distributed system is a network of independent computers that work
together as a single system. These computers communicate over a network,
sharing resources and tasks to achieve a common goal.
1. Communication and Distributed Environment
Distributed systems rely on network communication to exchange data.
✔ Message Passing: Nodes send and receive messages using protocols like
TCP/IP.
✔ Remote Procedure Call (RPC): Allows a program to execute a function on a
remote machine.
✔ Middleware: Software that manages communication between distributed
components (e.g., CORBA, RMI).
Example:
Google Drive allows multiple users to edit documents in real time, using a
distributed system.
7. Conclusion
Distributed systems improve scalability, fault tolerance, and resource
sharing by distributing tasks across multiple computers. They enable cloud
computing, large-scale applications, and efficient communication.
Merge (sorted result): [2, 3, 5, 6, 8]
Fibonacci sequence:
n    Fibonacci(n)
0    0
1    1
2    1
3    2
4    3
5    5
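The table above can be reproduced with a simple iterative computation (F(0) = 0, F(1) = 1):

```java
// Iterative Fibonacci matching the table above: F(0) = 0, F(1) = 1.
public class Fib {
    static long fib(int n) {
        long a = 0, b = 1;
        for (int i = 0; i < n; i++) {
            long next = a + b;
            a = b;          // slide the (F(i), F(i+1)) window forward
            b = next;
        }
        return a;
    }

    public static void main(String[] args) {
        for (int n : new int[]{0, 1, 2, 3, 4, 5})
            System.out.println("F(" + n + ") = " + fib(n));
    }
}
```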
Hashing example (table size 7, h(k) = k mod 7):
Key    h(k)          Slot
10     10 % 7 = 3    3
22     22 % 7 = 1    1
31     31 % 7 = 3    Chaining (collision) at 3
4      4 % 7 = 4     4
15     15 % 7 = 1    Chaining (collision) at 1
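The chaining behaviour in the example above can be sketched directly: each slot holds a list, and colliding keys are appended to the same list.

```java
import java.util.*;

// Sketch of hashing with chaining, matching the worked example above:
// table size 7, h(k) = k mod 7.
public class HashChaining {
    static List<List<Integer>> insertAll(int[] keys, int size) {
        List<List<Integer>> table = new ArrayList<>();
        for (int i = 0; i < size; i++) table.add(new ArrayList<>());
        for (int k : keys) {
            table.get(k % size).add(k); // collisions chain in the same slot
        }
        return table;
    }

    public static void main(String[] args) {
        List<List<Integer>> t = insertAll(new int[]{10, 22, 31, 4, 15}, 7);
        System.out.println(t); // slot 1 -> [22, 15], slot 3 -> [10, 31]
    }
}
```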
Shortest-path result (source A):
Node    Distance      Predecessor
A       0             -
B       4             A
C       2             A
D       5 (via C)     C
Conclusion
Efficient problem-solving techniques, sorting, trees, graphs, hashing,
and heap structures are fundamental for computer science applications. These
are used in databases, networking, AI, and OS scheduling.
Item    Weight    Value
1       2         12
2       1         10
3       3         20
4       2         15

Knapsack Capacity = 5

Items / Weight      0    1    2    3    4    5
No items            0    0    0    0    0    0
1 (W=2, V=12)       0    0    12   12   12   12
2 (W=1, V=10)       0    10   12   22   22   22
3 (W=3, V=20)       0    10   12   22   30   32
4 (W=2, V=15)       0    10   15   25   30   37

dp[i][w] = max(dp[i-1][w], dp[i-1][w - wt[i]] + val[i])
✔ Time Complexity: O(nW)
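The DP table above can be reproduced by implementing the recurrence directly (items and capacity as given in the example):

```java
// 0/1 knapsack DP for the worked example above (capacity 5, four items),
// using dp[i][w] = max(dp[i-1][w], dp[i-1][w - wt[i]] + val[i]).
public class Knapsack {
    static int solve(int[] wt, int[] val, int W) {
        int n = wt.length;
        int[][] dp = new int[n + 1][W + 1];
        for (int i = 1; i <= n; i++) {
            for (int w = 0; w <= W; w++) {
                dp[i][w] = dp[i - 1][w];            // skip item i
                if (wt[i - 1] <= w)                 // or take item i
                    dp[i][w] = Math.max(dp[i][w],
                        dp[i - 1][w - wt[i - 1]] + val[i - 1]);
            }
        }
        return dp[n][W];
    }

    public static void main(String[] args) {
        int[] wt = {2, 1, 3, 2}, val = {12, 10, 20, 15};
        System.out.println(solve(wt, val, 5)); // 37, the final table entry
    }
}
```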
Activity    Start    Finish
A1          1        3
A2          2        5
A3          4        6
A4          6        8
A5          5        7
A6          8        9
✔ Greedy Approach:
1. Sort by finish time: (A1, A2, A3, A5, A4, A6)
2. Select A1 → A3 → A4 → A6
✔ Maximum Activities Selected = 4
✔ Time Complexity: O(n log n)
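The greedy approach above can be sketched directly: sort by finish time, then repeatedly pick the first activity whose start is not before the last selected finish.

```java
import java.util.*;

// Greedy activity selection for the example above;
// each row of acts is {start, finish}.
public class ActivitySelection {
    static int maxActivities(int[][] acts) {
        Arrays.sort(acts, Comparator.comparingInt(a -> a[1])); // by finish time
        int count = 0, lastFinish = Integer.MIN_VALUE;
        for (int[] a : acts) {
            if (a[0] >= lastFinish) {  // compatible with the last pick
                count++;
                lastFinish = a[1];
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int[][] acts = {{1, 3}, {2, 5}, {4, 6}, {6, 8}, {5, 7}, {8, 9}};
        System.out.println(maxActivities(acts)); // 4 (A1, A3, A4, A6)
    }
}
```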
Distance matrix (symmetric, e.g., for the Travelling Salesman Problem):
      A     B     C     D
A     0     10    15    20
B     10    0     35    25
C     15    35    0     30
D     20    25    30    0
Weighted graph adjacency matrix (∞ = no direct edge):
      A     B     C     D     E
A     0     10    3     ∞     ∞
B     10    0     1     2     ∞
C     3     1     0     8     2
D     ∞     2     8     0     4
E     ∞     ∞     2     4     0
Graph with negative edge weights (e.g., for Bellman-Ford):
      A     B     C     D
A     0     -1    4     ∞
B     ∞     0     3     2
C     ∞     ∞     0     -2
D     ∞     1     ∞     0
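Bellman-Ford, mentioned in the conclusion below, handles exactly this kind of negative-weight graph. The sketch assumes the matrix above encodes directed edges A→B = -1, A→C = 4, B→C = 3, B→D = 2, C→D = -2, D→B = 1, with ∞ represented by a large sentinel value.

```java
import java.util.*;

// Sketch of Bellman-Ford (handles negative edge weights, no negative
// cycles), run from source A on the matrix above.
public class BellmanFord {
    static final int INF = Integer.MAX_VALUE / 2; // sentinel for "no edge"

    static int[] shortestFrom(int[][] w, int src) {
        int n = w.length;
        int[] dist = new int[n];
        Arrays.fill(dist, INF);
        dist[src] = 0;
        for (int pass = 0; pass < n - 1; pass++)  // relax all edges n-1 times
            for (int u = 0; u < n; u++)
                for (int v = 0; v < n; v++)
                    if (w[u][v] < INF && dist[u] < INF
                            && dist[u] + w[u][v] < dist[v])
                        dist[v] = dist[u] + w[u][v];
        return dist;
    }

    public static void main(String[] args) {
        int I = INF;
        int[][] w = {        // rows/cols: A, B, C, D
            {0, -1, 4, I},
            {I,  0, 3, 2},
            {I,  I, 0, -2},
            {I,  1, I, 0}};
        System.out.println(Arrays.toString(shortestFrom(w, 0))); // [0, -1, 2, 0]
    }
}
```

Note the path A→B→C→D costs -1 + 3 - 2 = 0, which beats the direct A→B→D cost of 1; this is the kind of improvement negative edges allow.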
Conclusion
📌 Dynamic Programming: Used in Knapsack, LCS, Floyd Warshall
📌 Greedy Algorithm: Used in Activity Selection, Prim’s Algorithm
📌 NP-Complete Problems: TSP, Vertex Cover Approximation
📌 Shortest Path Algorithms: Dijkstra (No Negatives), Bellman-Ford
(Handles Negatives)