
Introduction to Event Driven

Programming using JavaFX


(Building Interactive Java Applications)
What is Event-Driven Programming?
Definition:

● Event-Driven Programming is a programming paradigm where the flow of the program is determined by events like user actions (mouse clicks, key presses) or system-generated events.
● The program waits for events to occur, and event listeners handle them when they happen.

Key Points:

● Commonly used in GUI applications.


● Allows programs to respond to real-time user input.
Introduction to JavaFX
What is JavaFX?

● JavaFX is a rich client application platform for Java that supports GUI development.
● It provides a lightweight and powerful framework for developing desktop
applications with interactive UIs.

Key Features:

● Built-in components: Buttons, TextFields, Charts, etc.


● Scene Graph: Organizes graphical elements in a hierarchical structure.
● Event Handling: Supports event-driven programming through event listeners.
Event Handling in JavaFX
How Events Work:

● JavaFX uses an event-driven architecture where user actions generate events.


● The system listens for events and triggers event handlers to respond.

Key Elements:

1. Event Source: The component that generates the event (e.g., a button).
2. Event Object: Encapsulates information about the event (e.g., mouse click, key press).
3. Event Handler: The code that processes the event.
JavaFX Event Types
Common event types in JavaFX include:

1. Mouse Events: Triggered by mouse actions like clicks, dragging, etc.


○ onMouseClicked(), onMouseDragged(), etc.
2. Keyboard Events: Triggered by key presses and releases.
○ onKeyPressed(), onKeyReleased(), etc.
3. Action Events: Typically triggered by buttons, menu items, or other UI controls.
○ onAction()
Handling Events in JavaFX
In JavaFX, events are handled using event listeners that define how the application reacts when
an event occurs.

Steps to Handle an Event:


1. Create a UI Component: For example, a button.
2. Attach an Event Handler: Define how the event will be handled using lambda expressions
or inner classes.
3. Define Event Logic: Implement the behavior when the event is triggered.

Example:

Button btn = new Button("Click Me!");
btn.setOnAction(event -> {
    System.out.println("Button clicked!");
});
Example of Event Handling in JavaFX
Example: Button Click Event
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.stage.Stage;

public class EventExample extends Application {

    @Override
    public void start(Stage primaryStage) {
        Button btn = new Button("Click Me!");

        // Event handler for button click
        btn.setOnAction(event -> {
            System.out.println("Button was clicked!");
        });

        Scene scene = new Scene(btn, 200, 100);
        primaryStage.setScene(scene);
        primaryStage.setTitle("Event-Driven Example");
        primaryStage.show();
    }

    public static void main(String[] args) {
        launch(args); // standard JavaFX entry point
    }
}
Key Points:

● The button "Click Me!" triggers an event handler when clicked.


● The event handler outputs a message to the console.
Lambda Expressions for Event Handling
In JavaFX, lambda expressions are commonly used to handle events in a more concise way.
Instead of writing an anonymous inner class, we can use a lambda to directly define the
action.

Syntax:

button.setOnAction(e -> System.out.println("Button clicked!"));

Benefits of Lambda Expressions:

● Makes code shorter and more readable.


● Ideal for simpler event-handling tasks.
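For comparison, here is the same handler written as an anonymous inner class, the pre-lambda style the lambda replaces (assuming the usual javafx.event.ActionEvent and javafx.event.EventHandler imports):

// Pre-Java 8 equivalent using an anonymous inner class
button.setOnAction(new EventHandler<ActionEvent>() {
    @Override
    public void handle(ActionEvent e) {
        System.out.println("Button clicked!");
    }
});

The lambda version expresses exactly the same handler in one line, which is why it is preferred for simple actions.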
Using Event Filters in JavaFX
Event Filters are used in JavaFX to handle events before they reach the intended target.
This allows you to intercept and modify events.
How it Works:
● Add an Event Filter to the parent node to catch events that happen in child nodes.
● Useful for tasks like preventing a certain event from reaching a control or logging all
events.
Example:
root.addEventFilter(MouseEvent.MOUSE_CLICKED, event -> {
System.out.println("Mouse Clicked: " + event.getSceneX() + ", " + event.getSceneY());
});
JavaFX Scene Graph and Event Propagation
Scene Graph is a hierarchical structure where nodes represent UI components in JavaFX.

● Events propagate through the Scene Graph from source to target nodes via two
phases:
1. Capturing Phase: The event moves down the Scene Graph.
2. Bubbling Phase: The event moves back up the Scene Graph.

Event Propagation Process:

● Events pass through parent-child relationships in the Scene Graph, allowing custom
handling at multiple levels.
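A minimal sketch of the two phases on the same parent node (assuming a root node and the javafx.scene.input.MouseEvent import): the filter fires during the capturing phase, before the handler fires during the bubbling phase:

// Capturing phase: the filter on the parent sees the event first
root.addEventFilter(MouseEvent.MOUSE_CLICKED,
        e -> System.out.println("Filter (capturing phase)"));

// Bubbling phase: the handler on the parent runs after the target
root.addEventHandler(MouseEvent.MOUSE_CLICKED,
        e -> System.out.println("Handler (bubbling phase)"));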
Example: Mouse Event Handling
Handling mouse events like click, hover, and dragging using event listeners.
Example:
Rectangle rect = new Rectangle(100, 100);
rect.setOnMouseEntered(event -> {
System.out.println("Mouse entered rectangle!");
});
rect.setOnMouseExited(event -> {
System.out.println("Mouse exited rectangle!");
});
Key Points:
● onMouseEntered and onMouseExited handle when the mouse enters or leaves the
node area.
● Useful for interactive UI feedback like highlighting elements on hover.
Use of JavaFX Properties and Bindings
Properties in JavaFX represent an observable value.
Bindings allow you to automatically update UI components based on property changes.
Example:
DoubleProperty width = new SimpleDoubleProperty(100);
Rectangle rect = new Rectangle();
rect.widthProperty().bind(width);
Key Points:
● Event-driven changes in properties (like window resizing) can automatically update
the UI.
● Useful for responsive designs and dynamic UIs.
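Continuing the sketch above, updating the property propagates to the bound node automatically, and properties can also be observed directly:

width.set(250); // the bound rectangle's width becomes 250 automatically

// Observing property changes with a listener
width.addListener((obs, oldVal, newVal) ->
        System.out.println("Width changed: " + oldVal + " -> " + newVal));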
Best Practices in Event-Driven Programming
1. Keep Event Handlers Simple: Handlers should be lightweight and focus on specific
tasks.
2. Avoid Business Logic in Handlers: Use handlers to trigger business logic, but keep
complex logic outside.
3. Use Lambda Expressions: Simplify event-handling code with lambdas.
4. Event Bubbling & Filtering: Leverage bubbling and filtering to handle events
efficiently.
Key Points:
● Ensures that the event-driven system remains maintainable and efficient.
Conclusion
Summary:

● JavaFX supports event-driven programming with robust event handling for GUIs.
● Event listeners and handlers define how the application responds to user actions.
● Use JavaFX properties and bindings for dynamic interactions.
Introduction to
Multithreading in Java
(Enhancing Performance Through Concurrency)
What is Multithreading?
Definition:

● Multithreading is the capability of a program to execute multiple threads concurrently.
● A thread is the smallest unit of a program that can be executed independently.

Key Points:

● Multithreading allows a program to perform multiple tasks simultaneously.


● Threads run within the same process, sharing resources like memory.
Why Use Multithreading?
Benefits of Multithreading:

1. Increased Performance: Allows multiple operations to be executed concurrently, improving application responsiveness.
2. Efficient Resource Utilization: Threads share memory and resources within the same
process, making efficient use of system resources.
3. Parallelism: Ideal for multi-core processors, where tasks can be divided across
multiple cores.
4. Better User Experience: In GUI applications, multithreading ensures that the UI
remains responsive during long-running tasks.
Threads vs Processes
Processes:

● A process is a self-contained execution environment with its own memory space.


● Switching between processes is resource-intensive.

Threads:

● A thread is a smaller unit of a process that shares memory and resources with other
threads in the same process.
● Context switching between threads is faster than between processes.

Key Points:

● Threads are lightweight compared to processes.


● Threads in the same process can easily communicate and share data.
Creating Threads in Java
In Java, there are two main ways to create threads:

1. Extending the Thread class


2. Implementing the Runnable interface
Method 1 - Extending the ‘Thread’ Class
Steps to Create a Thread by Extending Thread:
1. Create a class that extends Thread.
2. Override the run() method.
3. Create an instance of the class and call start() to execute the thread.
Example:
class MyThread extends Thread {
    public void run() {
        System.out.println("Thread is running.");
    }
}

public class Main {
    public static void main(String[] args) {
        MyThread t1 = new MyThread();
        t1.start(); // start() schedules the thread and invokes run()
    }
}
Method 2 - Implementing the ‘Runnable’ Interface
Steps to Create a Thread by Implementing Runnable:
1. Create a class that implements Runnable.
2. Override the run() method.
3. Create a Thread object and pass the Runnable object to its constructor.
4. Call start() to execute the thread.
Example:
class MyRunnable implements Runnable {
    public void run() {
        System.out.println("Runnable thread is running.");
    }
}

public class Main {
    public static void main(String[] args) {
        MyRunnable runnable = new MyRunnable();
        Thread thread = new Thread(runnable); // pass the Runnable to a Thread
        thread.start();
    }
}
The ‘Thread’ Lifecycle
A thread in Java goes through several states:

1. New: Thread is created but not yet started.


2. Runnable: Thread is ready to run and is waiting for CPU time.
3. Running: Thread is executing.
4. Blocked/Waiting: Thread is paused, waiting for a resource or event.
5. Terminated: Thread has finished executing.
(Diagram: thread lifecycle — New → Runnable → Running → Blocked/Waiting → Terminated.)
Thread Methods
Java provides several methods to control thread execution:

1. start(): Starts the thread.


2. run(): Contains the code to be executed in the thread.
3. sleep(ms): Causes the thread to pause for a specified time.
4. join(): Waits for a thread to die.
5. interrupt(): Interrupts a sleeping or waiting thread.
6. isAlive(): Checks if the thread is still running.
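A small sketch (illustrative, not from the slides) combining sleep(), isAlive(), and join():

public class ThreadMethodsDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(500); // pause the worker for 500 ms
                System.out.println("Worker finished.");
            } catch (InterruptedException e) {
                System.out.println("Worker was interrupted."); // interrupt() lands here
            }
        });

        worker.start();                                   // move to Runnable
        System.out.println("Alive? " + worker.isAlive()); // true while running
        worker.join();                                    // wait for the worker to die
        System.out.println("Alive? " + worker.isAlive()); // false after termination
    }
}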
Multithreading Example in Java
Example: Using Two Threads

class MyRunnable implements Runnable {
    private String name;

    public MyRunnable(String name) {
        this.name = name;
    }

    public void run() {
        for (int i = 0; i < 5; i++) {
            System.out.println(name + " is running: " + i);
        }
    }
}

public class Main {
    public static void main(String[] args) {
        Thread t1 = new Thread(new MyRunnable("Thread 1"));
        Thread t2 = new Thread(new MyRunnable("Thread 2"));
        t1.start();
        t2.start();
    }
}
Key Points:

● Two threads (t1 and t2) run concurrently, each executing the run() method.
● They share CPU time to execute tasks.
Synchronization in Multithreading
When multiple threads access shared resources, it can lead to issues like race conditions
and inconsistent data.
Synchronization ensures that only one thread can access a critical section of code at a time.
Example:
public synchronized void syncMethod() {
// critical section
}
Key Points:
● Use synchronized keyword to ensure that only one thread can access a method at a
time.
● Prevents data corruption and ensures thread safety.
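As a concrete sketch (a hypothetical shared counter), synchronization keeps concurrent increments from being lost:

class Counter {
    private int count = 0;

    // Only one thread at a time can execute this on a given Counter,
    // so the read-modify-write of count++ cannot interleave.
    public synchronized void increment() {
        count++;
    }

    public synchronized int getCount() {
        return count;
    }
}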
Inter-Thread Communication
Java provides methods for threads to communicate with each other:
● wait(): Causes a thread to wait until another thread invokes notify() or notifyAll().
● notify(): Wakes up a single waiting thread.
● notifyAll(): Wakes up all waiting threads.
Example:

// In the waiting thread:
synchronized (obj) {
    obj.wait();   // releases the lock and waits until notified
}

// In another thread:
synchronized (obj) {
    obj.notify(); // wakes up one thread waiting on obj
}
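In practice, wait() and notify() are called from different threads on the same lock. A minimal sketch with a hypothetical SharedFlag class, using the standard wait-in-a-loop idiom:

class SharedFlag {
    private boolean ready = false;

    // Called by the waiting thread
    public synchronized void awaitReady() throws InterruptedException {
        while (!ready) {
            wait(); // releases the lock and sleeps until notified
        }
    }

    // Called by another thread
    public synchronized void markReady() {
        ready = true;
        notify(); // wakes up one thread waiting on this object
    }
}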
Deadlocks in Multithreading
What is a Deadlock?

● A deadlock occurs when two or more threads are blocked forever, each waiting for the
other to release a resource.

Example:

● Thread A holds resource X and waits for resource Y, while Thread B holds resource Y
and waits for resource X.
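The scenario above can be sketched in a few lines (illustrative only; running it will simply hang):

public class DeadlockDemo {
    public static void main(String[] args) {
        Object lockX = new Object();
        Object lockY = new Object();

        Thread a = new Thread(() -> {
            synchronized (lockX) {           // A holds X
                sleepQuietly(100);
                synchronized (lockY) { }     // ...and waits forever for Y
            }
        });

        Thread b = new Thread(() -> {
            synchronized (lockY) {           // B holds Y
                sleepQuietly(100);
                synchronized (lockX) { }     // ...and waits forever for X
            }
        });

        a.start();
        b.start(); // both threads block on each other: deadlock
    }

    static void sleepQuietly(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }
}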

How to Avoid Deadlocks:

● Use lock hierarchy: Always acquire resources in a fixed order.


● Use timeouts when acquiring locks.
Best Practices for Multithreading
1. Minimize Shared Data: Reduce the need for synchronization by keeping shared data to
a minimum.
2. Use Thread Pools: Avoid creating too many threads by using the Executor
framework.
3. Avoid Deadlocks: Always be careful when acquiring multiple locks.
4. Handle InterruptedException: Properly handle thread interruptions for smooth
execution.

Key Points:

● Follow best practices to write efficient and safe multithreaded code.


Conclusion
Summary:

● Multithreading enhances performance by executing multiple tasks concurrently.


● Java provides flexible ways to create and manage threads through Thread and
Runnable.
● Use synchronization and inter-thread communication techniques to manage shared
resources safely.
Amdahl's Law
(Understanding the Limits of Parallel Processing)
What is Amdahl's Law?
Definition:

● Amdahl’s Law is a formula that predicts the theoretical maximum speedup of a task
using parallel processing, based on the proportion of the task that can be parallelized.
● Named after Gene Amdahl, who formulated the law in 1967.

Key Insight:

● No matter how many processors are added, the performance improvement is limited
by the portion of the task that cannot be parallelized.
Formula for Amdahl's Law
Formula:

S = 1 / ((1 - P) + P / N)

Where:

● S = Maximum speedup.
● P = Fraction of the program that can be parallelized.
● N = Number of processors or cores.
● (1 - P) = Fraction of the program that is serial (cannot be parallelized).

Key Points:

● As N (number of processors) increases, the effect of the serial portion becomes more
significant in limiting the speedup.
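The formula translates directly into a small helper method (an illustrative sketch, not part of the original slides):

// Amdahl's Law: S = 1 / ((1 - P) + P / N)
static double amdahlSpeedup(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

// amdahlSpeedup(0.7, 4)    -> ~2.11 (matches the worked example below)
// amdahlSpeedup(0.7, 1000) -> approaches 1 / 0.3, i.e. ~3.33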
Explanation of Components
● P (Parallelizable Portion):
The part of the task that can be executed concurrently across multiple processors.
● 1 - P (Serial Portion):
The portion of the task that must be executed sequentially, regardless of how many
processors are available.
● N (Number of Processors):
The number of processors or cores available to execute the parallelizable portion.

Key Insight:

● The larger the serial portion of a task, the less effective parallel processing will be.
Visualization of Amdahl's Law
Graph: Speedup vs Number of Processors

● X-axis: Number of processors (N).


● Y-axis: Speedup (S).
● Curve: Shows diminishing returns as the number of processors increases.

Key Observation:

● Speedup increases as processors are added, but eventually levels off due to the serial
portion limiting further improvements.
Example of Amdahl's Law
● A program has 70% of its tasks that can be parallelized (P = 0.7) and 30% that must
run sequentially.

Speedup with 4 processors:

S = 1 / ((1 - 0.7) + 0.7 / 4) = 1 / (0.3 + 0.175) = 1 / 0.475 ≈ 2.11

Interpretation:

● With 4 processors, the maximum speedup is about 2.11x, not 4x, due to the serial portion.
Impact of Serial Portion on Speedup
Key Insight:

● Even if you use an infinite number of processors, the maximum speedup is limited by
the serial portion of the program.

Maximum Speedup Formula:

S_max = 1 / (1 - P)   (the limit of Amdahl's Law as N → ∞)

For P = 0.7 (70% parallelizable), the theoretical maximum speedup is:

S_max = 1 / (1 - 0.7) = 1 / 0.3 ≈ 3.33x
Diminishing Returns with More Processors
As the number of processors increases:

● Parallel portion speeds up, but the serial portion becomes a bottleneck.
● Adding more processors beyond a certain point yields diminishing returns.

Key Concept:

● Performance gains from adding processors flatten as the impact of the serial portion
dominates.
Amdahl's Law in Practice
Applications:

● High-Performance Computing (HPC): Used to assess the feasibility of speeding up computations by adding more processors.
● Parallel Computing: Helps developers understand the limitations of improving
performance through parallelization.

Real-World Example:

● In data processing applications, if a task like reading from disk cannot be parallelized,
adding more processors will not significantly speed up the overall process.
Limitations of Amdahl's Law
Assumes Fixed Workload: Amdahl's Law does not account for changes in the problem size,
where more processors could handle larger workloads.

Ignores Overhead: It doesn't account for communication and synchronization overhead between processors.

No Dynamic Scaling: The law assumes the parallel and serial portions are fixed, which may
not be true in dynamic environments.
Gustafson's Law as a Counterpoint
Gustafson’s Law complements Amdahl's Law by considering that the size of the problem
can increase as the number of processors increases.

● It argues that more processors allow you to handle larger problems rather than just
completing a fixed task faster.

Key Insight:

● Scaling up the workload can make parallelization more effective.


Conclusion
Summary:

● Amdahl's Law provides a useful framework for understanding the limitations of parallel processing.
● It shows that adding more processors only improves performance up to the point
where the serial portion dominates.
● While useful, it has limitations in real-world scenarios where overhead and dynamic scaling come into play.
Understanding Speedup in
Parallel Computing
(Measuring Performance Gains in Computational Tasks)
What is Speedup?
Definition:

● Speedup is a metric used in parallel computing to quantify the performance improvement when a task is executed on multiple processors compared to a single processor.

Formula:

S = T(1) / T(N)

Where:

● T(1) = execution time on a single processor.
● T(N) = execution time on N processors.
● N = number of processors or cores used.


Key Concept of Speedup
● Goal: To measure how much faster a program runs as the number of processors
increases.
● Baseline: The performance of a program running on a single processor is considered
the baseline for comparison.

Key Points:

● S = 1: No speedup, same performance.


● S > 1: Indicates a performance improvement.
● S < 1: Indicates a performance degradation (rare).
Ideal Speedup vs. Real Speedup
Ideal Speedup:

● When the program scales perfectly with the number of processors.


● If a program runs in half the time on 2 processors, the ideal speedup is 2x.

Real Speedup:

● In practice, speedup is often less than ideal due to factors like overhead,
communication delays, and the serial portion of the task.
Formula for Ideal Speedup
Ideal Speedup Formula:

S = N

Where N is the number of processors.

● If you have 4 processors, the ideal speedup would be 4x.

Example:

● A task takes 10 seconds on 1 processor. On 4 processors, it should ideally take 10 / 4 = 2.5 seconds, giving S = 10 / 2.5 = 4.
Realistic Speedup (Amdahl’s Law)
In reality, speedup is limited by the portion of the task that cannot be parallelized.

Amdahl's Law:

S = 1 / ((1 - P) + P / N)

Where:

● S = Maximum speedup.
● P = Fraction of the program that can be parallelized (the parallelizable portion).
● N = Number of processors or cores.
● (1 - P) = Fraction of the program that is serial (the portion that cannot be parallelized).
Speedup Example Using Amdahl's Law
Example:

● 60% of a program can be parallelized (P = 0.6).
● Using 4 processors, the maximum speedup is:

S = 1 / ((1 - 0.6) + 0.6 / 4) = 1 / (0.4 + 0.15) = 1 / 0.55 ≈ 1.82

Key Insight:

● Even with 4 processors, the speedup is only about 1.82x due to the serial portion of the task.
Superlinear Speedup
Definition:

● Superlinear speedup occurs when the speedup is greater than the number of
processors used (i.e., S > N).

Causes of Superlinear Speedup:

1. Cache Effects: More processors can lead to better cache utilization, reducing memory
access times.
2. Algorithmic Changes: Parallel execution might expose optimizations that improve
performance beyond just parallelization.

Key Insight:

● Superlinear speedup is rare but can happen under special circumstances.


Factors Affecting Speedup
1. Parallelizable Portion (P):
○ The greater the portion of the task that can be parallelized, the higher the
speedup.
2. Communication Overhead:
○ Overhead associated with communication between processors can reduce
speedup.
3. Load Balancing:
○ If work is not evenly distributed across processors, some may sit idle, reducing
efficiency.
Speedup and Scalability
Scalability:

● Refers to how well a program can maintain or increase its speedup as more processors
are added.

Types of Scalability:

● Strong Scaling: Speedup is measured while keeping the problem size constant and
increasing the number of processors.
● Weak Scaling: Speedup is measured while increasing both the problem size and the
number of processors proportionally.
Visualization of Speedup
Graph: Speedup vs Number of Processors

● X-axis: Number of processors (N).


● Y-axis: Speedup (S).

Key Observation:

● The speedup curve increases with more processors but levels off due to overhead and
the serial portion of the task.
Real-World Examples of Speedup
Example 1:

● Matrix Multiplication: A task where most of the work can be parallelized, leading to
significant speedup on multiple processors.

Example 2:

● Image Processing: Applying filters to an image can be parallelized on a pixel-by-pixel basis, allowing for high speedup with multiple processors.
Limitations of Speedup
Diminishing Returns:

● As the number of processors increases, the speedup eventually levels off due to the
serial portion of the task.

Overhead:

● Communication and synchronization between processors introduce overhead, reducing the effective speedup.
Conclusion
Summary:

● Speedup is a key measure of how well a task benefits from parallel processing.
● Ideal speedup is often unachievable due to serial portions of the task and overhead.
● Amdahl’s Law provides a framework to understand the limits of speedup.
Understanding Parallel
Efficiency
(Maximizing Performance in Parallel Computing)
What is Parallel Efficiency?
Definition:

● Parallel Efficiency is a metric that measures how effectively multiple processors are
utilized in parallel computing.
● It represents the ratio of achieved speedup to the number of processors used.

Formula:

E = S / N
Key Concept of Parallel Efficiency
● Efficiency (E): Indicates how well the computational workload is divided among the
processors.
● E = 1 (or 100%): Ideal efficiency, meaning perfect usage of all processors.
● E < 1 (or < 100%): Suboptimal efficiency due to overheads or imbalance in workload.
Formula Breakdown

E = S / N

Where:

● S = Speedup (performance gain from using multiple processors).


● N = Number of processors.

Key Insight:

● Parallel efficiency indicates whether adding more processors is providing proportional performance improvement.
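As a quick sketch (an illustrative helper, not from the slides), the formula translates directly to code:

// Parallel efficiency from a measured speedup
static double efficiency(double speedup, int processors) {
    return speedup / processors;
}

// efficiency(5, 8) -> 0.625, i.e. 62.5% (see the worked example later in this section)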
Ideal Parallel Efficiency
In an ideal scenario:

● If the speedup is equal to the number of processors (i.e., S = N), then E = 1 or 100%
efficiency.

Example:

● If using 4 processors gives a speedup of 4, the efficiency is:

E = 4 / 4 = 1 (100%)
Real-World Parallel Efficiency
In reality, parallel efficiency is less than 100% due to:

1. Overhead: Communication, synchronization, and task management across processors.


2. Non-parallelizable portions: Tasks that cannot be divided between processors (serial
portions).
3. Load imbalance: When work is not evenly distributed among processors, causing
some to be underutilized.

Key Insight:

● As more processors are added, the overhead often increases, reducing efficiency.
Example of Parallel Efficiency
Scenario:

● A task achieves a speedup of 5 on 8 processors.

Efficiency Calculation:

E = S / N = 5 / 8 = 0.625 (62.5%)
Interpretation:

● Only 62.5% of the processors’ potential is being effectively utilized.


Factors Affecting Parallel Efficiency
Task Granularity:
● Finer tasks are easier to divide across processors, leading to better efficiency.
Communication Overhead:
● Excessive communication between processors reduces overall efficiency.
Synchronization Delays:
● When processors need to synchronize frequently, the idle time increases, lowering
efficiency.
Load Imbalance:
● Uneven distribution of work results in some processors finishing earlier and staying
idle.
Parallel Efficiency with Amdahl's Law
Amdahl's Law limits the speedup, and therefore efficiency, based on the serial portion of a
task.

Amdahl's Law:

S = 1 / ((1 - P) + P / N)
Where P is the parallelizable portion and N is the number of processors.

Key Insight:

● As N increases, the serial portion (1 - P) becomes a bottleneck, reducing efficiency.


Visualization of Parallel Efficiency
Graph: Efficiency vs Number of Processors

● X-axis: Number of processors (N).


● Y-axis: Efficiency (E).

Key Observation:

● Efficiency decreases as the number of processors increases due to communication overhead and serial portions of the task.
Improving Parallel Efficiency
1. Minimize Communication Overhead:
○ Reduce the amount of data shared between processors to avoid delays.
2. Optimize Load Balancing:
○ Ensure that all processors have an even distribution of work to prevent idle time.
3. Increase Parallelizable Portion (P):
○ Break tasks down further to maximize the portion that can be executed in
parallel.
4. Use Efficient Algorithms:
○ Choose algorithms designed for parallelism to reduce overhead and
synchronization needs.
Strong and Weak Scaling
Strong Scaling:

● Measures efficiency when the problem size stays constant and the number of
processors increases.
● Efficiency tends to decrease as more processors are added due to overhead.

Weak Scaling:

● Measures efficiency when the problem size increases proportionally with the number
of processors.
● Efficiency can remain more consistent if the workload increases with the processor
count.
Real-World Example: Parallel Efficiency in Matrix Multiplication
Matrix Multiplication Example:

● Parallelizing a matrix multiplication task using 16 processors achieves a speedup of 12.

Efficiency Calculation:

E = S / N = 12 / 16 = 0.75 (75%)
Key Insight:

● Although adding more processors improves speed, the efficiency is only 75% due to
communication overhead and non-parallelizable portions.
Conclusion
Summary:

● Parallel Efficiency measures how effectively multiple processors are utilized in parallel computing.
● Ideal efficiency is 100%, but real-world factors like overhead, load imbalance, and
serial portions reduce efficiency.
● Optimization techniques can improve efficiency but are limited by Amdahl's Law.
Thread Creation in Java
(Using the Runnable Interface and Thread Class)
Introduction to Threads
● What is a Thread?
A thread is a lightweight process that allows a program to perform multiple tasks
simultaneously.
● Multithreading:
In Java, multithreading enables the concurrent execution of two or more parts of a
program to improve performance.

Key Point:

● Java provides two main ways to create threads: using the Runnable interface and the
Thread class.
Methods of Creating Threads in Java
1. Using the Runnable Interface
2. Extending the Thread Class

Key Difference:

● Runnable Interface separates the task from the thread itself, promoting loose
coupling and reusability.
● Thread Class binds the task and thread execution together.
Creating Threads Using the Runnable Interface
Step-by-Step Process:

1. Create a class that implements the Runnable interface.


2. Override the run() method to define the task to be executed by the thread.
3. Create an instance of the class.
4. Pass the instance to a Thread object.
5. Start the thread using the start() method.

Code Example:

class MyRunnable implements Runnable {
    public void run() {
        System.out.println("Thread is running using Runnable interface.");
    }
}

public class Main {
    public static void main(String[] args) {
        MyRunnable myRunnable = new MyRunnable();
        Thread thread = new Thread(myRunnable);
        thread.start(); // Starts the thread
    }
}
Creating Threads by Extending the Thread Class
Step-by-Step Process:
1. Create a class that extends the Thread class.
2. Override the run() method to define the task to be executed by the thread.
3. Create an instance of the class.
4. Start the thread using the start() method.
Code Example:
class MyThread extends Thread {
    public void run() {
        System.out.println("Thread is running by extending Thread class.");
    }
}

public class Main {
    public static void main(String[] args) {
        MyThread myThread = new MyThread();
        myThread.start(); // Starts the thread
    }
}
Difference Between Runnable and Thread

● Runnable implements the Runnable interface; Thread extends the Thread class.
● Runnable allows the class to extend another class; Thread cannot extend other classes, since Java supports single inheritance.
● Runnable promotes better decoupling; Thread directly ties task execution to the thread.
● Runnable is preferred when the class needs to perform tasks other than just threading; Thread is suitable for simple thread execution.
Thread Life Cycle
● New: The thread is created but not yet started.
● Runnable: After calling the start() method, the thread is ready to run when scheduled
by the OS.
● Running: The thread is actively executing in the run() method.
● Blocked: The thread is waiting for resources or I/O.
● Terminated: The thread has completed execution.

Key Methods:

● start(): Moves the thread to the Runnable state.


● run(): Contains the logic for the thread's task.
● join(): Waits for a thread to complete.
● sleep(): Pauses thread execution for a given time.
Runnable Example in a Real-World Scenario
Problem: Running two tasks concurrently — downloading a file and processing it.
Runnable Solution:
class DownloadTask implements Runnable {
    public void run() {
        System.out.println("Downloading file...");
        // Simulate file download
    }
}

class ProcessTask implements Runnable {
    public void run() {
        System.out.println("Processing file...");
        // Simulate file processing
    }
}

public class Main {
    public static void main(String[] args) {
        Thread downloadThread = new Thread(new DownloadTask());
        Thread processThread = new Thread(new ProcessTask());

        downloadThread.start();
        processThread.start();
    }
}
Advantages of Runnable Interface
Separation of Concerns: Allows you to separate the task logic from the thread
management.

Multiple Inheritance: A class can implement multiple interfaces, while it can only extend
one class.

Reusability: The same Runnable task can be reused across multiple threads.
When to Use Thread Class
1. Simple Threading Needs: If the class only handles thread execution and nothing else,
extending the Thread class can be simpler.
2. Small Tasks: For lightweight tasks, extending the Thread class can save extra code.

Key Limitation:

● Lack of flexibility as extending Thread prevents extending other classes.


Conclusion
● Runnable Interface: Best for separation of concerns and flexibility, especially when
the class needs to perform additional tasks.
● Thread Class: Suitable for simple, direct thread execution where extending another
class is unnecessary.

Final Thought:

● Best Practice: Use the Runnable interface for complex, scalable applications as it
allows more flexible design.
Multithreaded Client-Server
Application in Java
(Building Efficient Networked Systems)
Introduction to Client-Server Architecture
Client-Server Model:
A client-server architecture consists of a server that provides resources and services, and
clients that request those services.

Single-threaded vs. Multithreaded:


In a single-threaded server, only one client is served at a time, while a multithreaded
server can handle multiple clients concurrently.
What is a Multithreaded Client-Server Application?
Multithreaded Client-Server Application:

● A server that creates a new thread to handle each client request, allowing multiple
clients to interact with the server simultaneously.

Key Features:

● Concurrency: Multiple clients are handled at the same time.


● Thread Isolation: Each client interaction runs in its own thread, independent of others.
Components of a Multithreaded Client-Server Application
Client:

● Sends requests to the server and receives responses.


● Connects to the server using sockets.

Server:

● Listens for client requests on a specific port.


● For each incoming client, spawns a new thread to handle communication.

Threads:

● Each client request is processed by an individual thread on the server, ensuring parallel processing of requests.
Server-Side Code Structure
Steps to Create a Multithreaded Server:
1. Create a Server Socket that listens for client connections.
2. Accept client connections and spawn a new thread for each client.
3. Handle client requests in each thread using input/output streams.
Code Example:
import java.io.*;
import java.net.*;

class ClientHandler extends Thread {
    private Socket clientSocket;

    public ClientHandler(Socket socket) {
        this.clientSocket = socket;
    }

    public void run() {
        try {
            InputStream input = clientSocket.getInputStream();
            BufferedReader reader = new BufferedReader(new InputStreamReader(input));
            OutputStream output = clientSocket.getOutputStream();
            PrintWriter writer = new PrintWriter(output, true);

            String clientMessage;
            while ((clientMessage = reader.readLine()) != null) {
                System.out.println("Client: " + clientMessage);
                writer.println("Server: " + clientMessage); // Echo message back to client
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

public class MultiThreadedServer {
    public static void main(String[] args) {
        try (ServerSocket serverSocket = new ServerSocket(5000)) {
            System.out.println("Server is listening on port 5000");

            while (true) {
                Socket clientSocket = serverSocket.accept();
                System.out.println("New client connected");

                // Create a new thread to handle client requests
                new ClientHandler(clientSocket).start();
            }
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
}
Client-Side Code Structure
Steps to Create a Client:
1. Connect to the server using a socket.
2. Send requests to the server through the socket's output stream.
3. Receive responses from the server through the input stream.
Code Example:
import java.io.*;
import java.net.*;

public class Client {
    public static void main(String[] args) {
        String hostname = "localhost";
        int port = 5000;

        try (Socket socket = new Socket(hostname, port)) {
            OutputStream output = socket.getOutputStream();
            PrintWriter writer = new PrintWriter(output, true);

            InputStream input = socket.getInputStream();
            BufferedReader reader = new BufferedReader(new InputStreamReader(input));

            BufferedReader consoleReader = new BufferedReader(new InputStreamReader(System.in));
            String message;

            while (true) {
                System.out.print("Enter message: ");
                message = consoleReader.readLine();
                writer.println(message);

                String serverResponse = reader.readLine();
                System.out.println(serverResponse);
            }
        } catch (UnknownHostException ex) {
            System.out.println("Server not found: " + ex.getMessage());
        } catch (IOException ex) {
            System.out.println("I/O error: " + ex.getMessage());
        }
    }
}
Key Components of Multithreaded Server
Server Socket:
● Listens for client connections on a specific port.
Client Socket:
● Each thread uses a separate socket for client-server communication.
Input and Output Streams:
● InputStream to read data from the client.
● OutputStream to send data to the client.
Thread Pool (Optional):
● Instead of creating new threads on the fly, a thread pool can be used to manage a
fixed number of reusable threads.
Benefits of Multithreading in Client-Server Applications
Concurrency:
● Multiple clients can interact with the server simultaneously without waiting for each
other.
Scalability:
● Can handle a large number of clients by utilizing multiple threads to process requests
in parallel.
Improved Performance:
● Avoids bottlenecks associated with sequential processing of client requests.
Resource Efficiency:
● Threads share the same process memory space, which reduces resource overhead
compared to creating new processes.
Challenges in Multithreaded Client-Server Applications
Synchronization Issues:

● Shared resources may lead to race conditions and data inconsistency if not properly
synchronized.
Deadlocks:

● Threads may block each other, resulting in a deadlock if proper care is not taken in resource
allocation.
Scalability Limits:

● Creating a large number of threads may overwhelm the system, leading to performance
degradation.

Error Handling:
● Proper error handling for socket exceptions and thread interruptions is necessary to ensure
reliability.
Improving Efficiency with Thread Pools
Thread Pools:
Instead of creating new threads for every client, use a thread pool to manage a fixed number of threads for
better resource management.

ExecutorService Example:

import java.util.concurrent.*;

ExecutorService pool = Executors.newFixedThreadPool(10);

while (true) {
    Socket clientSocket = serverSocket.accept();
    pool.execute(new ClientHandler(clientSocket)); // reuse a pooled thread per client
}
Advantages:
● Reduces overhead of creating and destroying threads.
● Better control over the number of concurrent threads.
Real-World Use Cases of Multithreaded Servers
1. Web Servers:
○ Handling multiple HTTP requests from different users concurrently (e.g., Apache,
Nginx).
2. Chat Applications:
○ Allowing real-time communication between multiple users in a chat room.
3. Game Servers:
○ Handling multiple players interacting with a game world in real time.
4. File Servers:
○ Serving file upload/download requests from multiple users simultaneously.
Conclusion
Multithreaded Client-Server Applications enable handling multiple client requests
concurrently, improving scalability and performance.

Java provides easy mechanisms to implement such applications using sockets, threads, and
executor services.

Proper care must be taken to avoid synchronization issues and performance bottlenecks.
Understanding Thread Pool
in Java
(ExecutorService and ForkJoinPool)
Introduction to Thread Pools
● What is a Thread Pool?
A thread pool is a pool of pre-created threads that are reused to execute multiple
tasks, instead of creating and destroying a thread for each task.

Benefits:

● Reduced Overhead: Avoids frequent thread creation and destruction.


● Improved Performance: Threads are reused, leading to faster task execution.
● Resource Management: Limits the number of concurrent threads to prevent system
overload.
Why Use a Thread Pool?
1. Performance Optimization:
○ Thread pools minimize the overhead associated with creating new threads.
2. Resource Efficiency:
○ By reusing threads, system resources like memory and CPU are used more
efficiently.
3. Controlled Concurrency:
○ Thread pools limit the number of concurrent tasks, preventing system overload by
controlling the thread count.
4. Better Task Management:
○ Tasks are queued and executed as threads become available, providing orderly
processing.
Executor Framework Overview
● Executor Framework: A high-level API that provides a mechanism to manage and
control threads, task execution, and scheduling.

Key Interfaces:

1. Executor: A simple interface that can execute submitted tasks.


2. ExecutorService: Provides methods to manage lifecycle and execution of tasks.
3. ScheduledExecutorService: Extends ExecutorService to handle tasks that are
scheduled to run at fixed intervals.
What is ExecutorService?
ExecutorService: A more advanced interface in the Executor framework that manages and
controls thread execution and termination.

Key Features:

● Manages thread pools.


● Provides methods to submit tasks, shutdown, and await termination.
● Allows asynchronous execution of tasks.

Key Methods:

● submit(): Submits a task for execution.


● shutdown(): Initiates an orderly shutdown of the pool.
● invokeAll(): Executes a collection of tasks.
Creating a Thread Pool Using ExecutorService
Example: Fixed-size thread pool using ExecutorService.
import java.util.concurrent.*;

public class ThreadPoolExample {
    public static void main(String[] args) {
        // Create a thread pool with 5 threads
        ExecutorService executor = Executors.newFixedThreadPool(5);

        // Submit tasks to the pool
        for (int i = 0; i < 10; i++) {
            Runnable task = new Task(i);
            executor.submit(task);
        }

        // Shutdown the executor
        executor.shutdown();
    }
}

class Task implements Runnable {
    private int taskId;

    public Task(int id) {
        this.taskId = id;
    }

    @Override
    public void run() {
        System.out.println("Executing task " + taskId + " by " +
                Thread.currentThread().getName());
    }
}
Types of ExecutorService Implementations
1. FixedThreadPool:
○ A pool with a fixed number of threads.
○ Example: Executors.newFixedThreadPool(4)
2. CachedThreadPool:
○ A pool that creates new threads as needed and reuses old ones.
○ Example: Executors.newCachedThreadPool()
3. SingleThreadExecutor:
○ A pool with only one thread, useful for sequential task execution.
○ Example: Executors.newSingleThreadExecutor()
4. ScheduledThreadPool:
○ A pool that allows scheduling tasks at fixed intervals.
○ Example: Executors.newScheduledThreadPool(3)
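For instance, a scheduled pool can run a recurring task (a sketch; the heartbeat task and one-second interval are illustrative):

import java.util.concurrent.*;

ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(3);

// Run a heartbeat task immediately, then once every second
scheduler.scheduleAtFixedRate(
        () -> System.out.println("Heartbeat from " + Thread.currentThread().getName()),
        0, 1, TimeUnit.SECONDS);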
ForkJoinPool Overview
ForkJoinPool:

● A specialized thread pool designed for parallel processing tasks that can be broken
down into smaller sub-tasks (recursive tasks).

Key Concept:

● Fork/Join Framework: Tasks are recursively divided (forked) into smaller sub-tasks and
then combined (joined) after execution.

Use Case:

● Ideal for divide-and-conquer algorithms, like parallel sorting or computing large sums.
ForkJoinPool in Action
Example: Parallel sum calculation using ForkJoinPool.
import java.util.concurrent.*;

class SumTask extends RecursiveTask<Integer> {
    private int[] numbers;
    private int start, end;
    private static final int THRESHOLD = 10;

    public SumTask(int[] numbers, int start, int end) {
        this.numbers = numbers;
        this.start = start;
        this.end = end;
    }

    @Override
    protected Integer compute() {
        if (end - start <= THRESHOLD) {
            // Small enough: sum the range sequentially
            int sum = 0;
            for (int i = start; i < end; i++) {
                sum += numbers[i];
            }
            return sum;
        } else {
            // Split the range in half and process the halves in parallel
            int middle = (start + end) / 2;
            SumTask leftTask = new SumTask(numbers, start, middle);
            SumTask rightTask = new SumTask(numbers, middle, end);

            leftTask.fork();                        // run the left half asynchronously
            int rightResult = rightTask.compute();  // compute the right half in this thread
            int leftResult = leftTask.join();       // wait for the left half

            return leftResult + rightResult;
        }
    }
}

public class ForkJoinExample {
    public static void main(String[] args) {
        ForkJoinPool pool = new ForkJoinPool();
        int[] numbers = new int[1000];
        java.util.Arrays.fill(numbers, 1); // initialize array (e.g., all 1s, so the expected sum is 1000)
        SumTask task = new SumTask(numbers, 0, numbers.length);
        int result = pool.invoke(task);
        System.out.println("Sum: " + result);
    }
}
ForkJoinPool vs ExecutorService

● ForkJoinPool is suitable for recursive, divide-and-conquer tasks; ExecutorService is suitable for simple tasks without complex division.
● ForkJoinPool is efficient for parallel processing of large tasks; ExecutorService is general-purpose for managing task execution.
● ForkJoinPool uses a work-stealing algorithm for dynamic load balancing; ExecutorService does not perform automatic task division or balancing.
● ForkJoinPool is best for tasks that can be broken down into sub-tasks; ExecutorService is best for executing independent tasks concurrently.
Advantages of Using Thread Pools
Optimized Resource Usage:

● Reusing threads reduces the overhead of creating/destroying threads.

Scalability:

● Thread pools can handle a large number of tasks concurrently without overwhelming
the system.
Task Queuing:

● Tasks are queued and executed as threads become available, ensuring orderly
execution.

Improved Performance:

● Reduced thread lifecycle management and reuse lead to better performance.


Common Challenges in Thread Pools
Deadlocks:
● Improper synchronization between threads can cause deadlocks, leading to hanging
tasks.
Thread Starvation:
● If too many long-running tasks are submitted, it can prevent other tasks from executing.
Task Rejection:
● Once the pool reaches its capacity, new tasks may be rejected unless properly handled
(e.g., via RejectedExecutionHandler).
Overhead with Large Pools:
● Having too many threads in the pool may lead to contention for system resources,
degrading performance.
Conclusion
ExecutorService: Ideal for general-purpose task execution with flexible thread pool
management.

ForkJoinPool: Best for recursive, parallelizable tasks like divide-and-conquer algorithms.

Thread pools help manage and optimize thread creation, resource usage, and task
execution in a scalable and efficient manner.
Parallel Performance
Analysis by Controlling Task
Granularity
(Optimizing Parallel Computing Performance)
Introduction to Parallel Performance
● Parallel Computing:
The process of breaking down large problems into smaller sub-tasks and executing
them concurrently to improve performance.
● Key Challenge:
Balancing the granularity of tasks to maximize parallel efficiency while minimizing
overhead.

Task Granularity:

● Refers to the size of the tasks or work units. It is a critical factor in determining the
performance of parallel programs.
What is Task Granularity?
Fine-Grained Tasks:
● Small tasks that require less computational work.
● Advantage: Greater parallelism potential.
● Disadvantage: High communication and synchronization overhead.
Coarse-Grained Tasks:
● Larger tasks with more computational work per task.
● Advantage: Less overhead in communication and synchronization.
● Disadvantage: Less parallelism, risk of load imbalance.
The Importance of Controlling Task Granularity
● Optimal Task Granularity ensures a balance between task execution time and
parallelism.
Key Considerations:
1. Task Creation Overhead:
Creating and managing smaller tasks can increase overhead.
2. Communication Overhead:
Fine-grained tasks may require more frequent communication between
processes/threads.
3. Load Balancing:
Fine-grained tasks are easier to distribute evenly across processing units.
4. Synchronization Costs:
More tasks can lead to more frequent synchronization, which might slow down
execution.
Balancing Granularity for Performance
Fine-Grained:

● Pros:
○ High degree of parallelism.
○ Better load distribution.
● Cons:
○ Increased communication overhead.
○ High task management and synchronization costs.
Coarse-Grained:

● Pros:
○ Reduced synchronization and communication overhead.
○ Simpler task management.
● Cons:
○ Limited parallelism potential.
○ Risk of uneven load distribution (load imbalance).
How Granularity Affects Parallel Performance
Scenarios:

1. Too Fine Granularity:


○ High overhead for task creation and communication, leading to poor performance
despite more parallelism.
2. Too Coarse Granularity:
○ Under-utilization of processing units, leading to idle processors and reduced
performance.

Goal:

● Find the sweet spot between fine-grained and coarse-grained tasks where the system
can maximize resource utilization while minimizing overhead.
Task Granularity and Amdahl's Law
Amdahl’s Law:

● Describes the potential speedup of a parallel program, where the degree of speedup is
limited by the sequential portion of the task.

Impact on Granularity:

● Increasing the parallel portion of the task improves speedup, but very fine-grained
tasks might result in higher overhead, reducing the benefits predicted by Amdahl’s
Law.
Example: Parallel Sorting Algorithm
Fine-Grained Approach:

● Divide the sorting task into very small sub-arrays. Each thread works on a small
portion of the array.
● Outcome:
○ High degree of parallelism but communication and synchronization overhead
dominate performance.

Coarse-Grained Approach:

● Divide the array into large chunks. Each thread sorts a large portion independently.
● Outcome:
○ Less overhead but potential for load imbalance between threads.
Performance Metrics in Parallel Analysis
1. Speedup:
○ Measures the performance improvement when running a task in parallel vs.
sequential.
2. Parallel Efficiency:
○ Ratio of speedup to the number of processing units.
○ Efficiency decreases with too fine or too coarse granularity.
3. Scalability:
○ How well the performance improves as more threads or processors are added.
○ Granularity plays a key role in determining how scalable an application is.
Optimizing Granularity Using Recursive Task Splitting
● ForkJoin Framework:
○ Utilizes recursive task splitting to control granularity dynamically during
execution.
● Strategy:
○ Split tasks recursively until they reach an optimal size for parallel execution.
○ Ensures the right balance of fine-grained and coarse-grained tasks.
Code Example:
import java.util.concurrent.RecursiveTask;

class ParallelTask extends RecursiveTask<Integer> {
    private int[] data;
    private int start, end;
    private static final int THRESHOLD = 100;

    public ParallelTask(int[] data, int start, int end) {
        this.data = data;
        this.start = start;
        this.end = end;
    }

    @Override
    protected Integer compute() {
        if (end - start <= THRESHOLD) {
            // Below the threshold: cheaper to process sequentially
            return processSequentially();
        } else {
            // Above the threshold: split recursively to control granularity
            int mid = (start + end) / 2;
            ParallelTask leftTask = new ParallelTask(data, start, mid);
            ParallelTask rightTask = new ParallelTask(data, mid, end);

            leftTask.fork();
            int rightResult = rightTask.compute();
            int leftResult = leftTask.join();

            return leftResult + rightResult;
        }
    }

    // Sequential base case (illustrative: summing the range)
    private Integer processSequentially() {
        int sum = 0;
        for (int i = start; i < end; i++) {
            sum += data[i];
        }
        return sum;
    }
}
Techniques to Control Task Granularity
1. Dynamic Task Splitting:
○ Automatically split tasks as they are processed to maintain an optimal granularity
level.
2. Thread Pool Management:
○ Control the number of active threads to match task granularity for better load
balancing.
3. Thresholding:
○ Set a threshold to determine when tasks should be processed sequentially vs.
parallel, based on task size.
4. Work-Stealing Algorithm:
○ Threads that finish early can "steal" work from others, improving load balancing
with finer-grained tasks.
Real-World Application of Task Granularity
Parallel Data Processing:
● In big data analytics, the data is divided into smaller chunks (fine-grained) but
ensuring tasks are large enough to justify overhead (coarse-grained).
Image Processing:
● Dividing the image into fine-grained tiles ensures parallelism, but controlling task size
is critical to avoiding excessive task management overhead.
Scientific Simulations:
● Fine control over task size ensures efficient parallel computation across large datasets.
Best Practices for Controlling Task Granularity
1. Measure Overhead:
○ Analyze the overhead associated with task creation, communication, and
synchronization.
2. Adjust Task Size Dynamically:
○ Use frameworks like ForkJoinPool to dynamically adjust task size at runtime.
3. Experiment with Thresholds:
○ Tune the threshold for task splitting to find the best balance for the problem at
hand.
4. Profile and Analyze Performance:
○ Regularly profile performance to adjust granularity based on empirical data.
Conclusion
Task Granularity is crucial in achieving optimal parallel performance.

Fine-grained tasks provide greater parallelism, but risk high overhead; coarse-grained tasks
minimize overhead but can suffer from load imbalance.

Balancing granularity is key to maximizing speedup, efficiency, and scalability in parallel programs.
