Advanced Programming Unit 4
Key Points:
● JavaFX is a rich client application platform for Java that supports GUI development.
● It provides a lightweight and powerful framework for developing desktop
applications with interactive UIs.
Key Features:
● Rich set of built-in UI controls, layouts, and a scene graph.
● Styling with CSS and declarative UI layout with FXML.
● Built-in support for animation, media, and 2D/3D graphics.
Event Handling in JavaFX
Key Elements:
1. Event Source: The component that generates the event (e.g., a button).
2. Event Object: Encapsulates information about the event (e.g., mouse click, key press).
3. Event Handler: The code that processes the event.
JavaFX Event Types
Common event types in JavaFX include:
● ActionEvent: fired by controls such as buttons.
● MouseEvent: clicks, movement, enter/exit, and drags.
● KeyEvent: key presses and releases.
● WindowEvent: window showing, hiding, and closing.
Example:
btn.setOnAction(event -> {
    System.out.println("Button clicked!");
});
Example of Event Handling in JavaFX
Example: Button Click Event
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.stage.Stage;
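A minimal completion of this example under those imports (the class name ButtonClickDemo is assumed, not from the original):
public class ButtonClickDemo extends Application {
    @Override
    public void start(Stage primaryStage) {
        Button btn = new Button("Click Me");
        btn.setOnAction(event -> System.out.println("Button clicked!"));
        primaryStage.setScene(new Scene(btn, 200, 100));
        primaryStage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}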
Event Propagation in JavaFX
● Events propagate through the Scene Graph from source to target nodes via two
phases:
1. Capturing Phase: The event moves down the Scene Graph.
2. Bubbling Phase: The event moves back up the Scene Graph.
● Events pass through parent-child relationships in the Scene Graph, allowing custom
handling at multiple levels.
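As a brief sketch of the two phases (assuming a container node named root and javafx.scene.input.MouseEvent imported), an event filter sees the capturing phase while an event handler sees the bubbling phase:
// Runs during the capturing phase, before the event reaches its target.
root.addEventFilter(MouseEvent.MOUSE_CLICKED,
        e -> System.out.println("Capturing: root saw the click first"));

// Runs during the bubbling phase, after the target has handled the event.
root.addEventHandler(MouseEvent.MOUSE_CLICKED,
        e -> System.out.println("Bubbling: click bubbled back to root"));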
Example: Mouse Event Handling
Handling mouse events like click, hover, and dragging using event listeners.
Example:
Rectangle rect = new Rectangle(100, 100);

rect.setOnMouseEntered(event -> {
    System.out.println("Mouse entered rectangle!");
});

rect.setOnMouseExited(event -> {
    System.out.println("Mouse exited rectangle!");
});
Key Points:
● onMouseEntered and onMouseExited handle when the mouse enters or leaves the
node area.
● Useful for interactive UI feedback like highlighting elements on hover.
Use of JavaFX Properties and Bindings
Properties in JavaFX represent an observable value.
Bindings allow you to automatically update UI components based on property changes.
Example:
DoubleProperty width = new SimpleDoubleProperty(100);
Rectangle rect = new Rectangle();
rect.widthProperty().bind(width);
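Once bound, changing the property updates the UI automatically, for example:
width.set(200); // the rectangle's width becomes 200 through the binding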
Key Points:
● Event-driven changes in properties (like window resizing) can automatically update
the UI.
● Useful for responsive designs and dynamic UIs.
Best Practices in Event-Driven Programming
1. Keep Event Handlers Simple: Handlers should be lightweight and focus on specific
tasks.
2. Avoid Business Logic in Handlers: Use handlers to trigger business logic, but keep
complex logic outside.
3. Use Lambda Expressions: Simplify event-handling code with lambdas.
4. Event Bubbling & Filtering: Leverage bubbling and filtering to handle events
efficiently.
Key Points:
● Ensures that the event-driven system remains maintainable and efficient.
Conclusion
Summary:
● JavaFX supports event-driven programming with robust event handling for GUIs.
● Event listeners and handlers define how the application responds to user actions.
● Use JavaFX properties and bindings for dynamic interactions.
Introduction to
Multithreading in Java
(Enhancing Performance Through Concurrency)
What is Multithreading?
Definition:
● Multithreading is the ability of a program to run two or more threads concurrently within a single process, allowing multiple tasks to make progress at the same time.
Threads:
● A thread is a smaller unit of a process that shares memory and resources with other
threads in the same process.
● Context switching between threads is faster than between processes.
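Example: Running Two Threads
A minimal sketch (the MyThread class and ThreadDemo driver below are assumed, not from the original):
class MyThread extends Thread {
    @Override
    public void run() {
        for (int i = 0; i < 3; i++) {
            System.out.println(Thread.currentThread().getName() + " running");
        }
    }
}

public class ThreadDemo {
    public static void main(String[] args) {
        MyThread t1 = new MyThread();
        MyThread t2 = new MyThread();
        t1.start(); // schedules t1; the JVM invokes run() on a new thread
        t2.start(); // t2 runs concurrently with t1
    }
}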
Key Points:
● Two threads (t1 and t2) run concurrently, each executing the run() method.
● They share CPU time to execute tasks.
Synchronization in Multithreading
When multiple threads access shared resources, it can lead to issues like race conditions
and inconsistent data.
Synchronization ensures that only one thread can access a critical section of code at a time.
Example:
public synchronized void syncMethod() {
    // critical section: only one thread at a time
}
Key Points:
● Use synchronized keyword to ensure that only one thread can access a method at a
time.
● Prevents data corruption and ensures thread safety.
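As an illustration, a hypothetical Counter class (not from the original) whose increment is made thread-safe with synchronized:
class Counter {
    private int count = 0;

    // Only one thread at a time may execute this method on a given Counter,
    // so the read-modify-write of count++ cannot be interleaved.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}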
Inter-Thread Communication
Java provides methods for threads to communicate with each other:
● wait(): Causes a thread to wait until another thread invokes notify() or notifyAll().
● notify(): Wakes up a single waiting thread.
● notifyAll(): Wakes up all waiting threads.
Example:
// In the thread that waits:
synchronized (obj) {
    obj.wait();    // releases the lock on obj and blocks until notified
}

// In another thread, to wake it up:
synchronized (obj) {
    obj.notify();  // wakes one thread currently waiting on obj
}
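A common pattern built on these methods is the guarded wait; a minimal producer/consumer sketch (the MessageBox class is hypothetical):
class MessageBox {
    private String message;
    private boolean ready = false;

    public synchronized void put(String msg) throws InterruptedException {
        while (ready) wait();   // wait until the previous message is taken
        message = msg;
        ready = true;
        notifyAll();            // wake any waiting consumer
    }

    public synchronized String take() throws InterruptedException {
        while (!ready) wait();  // wait until a message is available
        ready = false;
        notifyAll();            // wake any waiting producer
        return message;
    }
}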
Deadlocks in Multithreading
What is a Deadlock?
● A deadlock occurs when two or more threads are blocked forever, each waiting for the
other to release a resource.
Example:
● Thread A holds resource X and waits for resource Y, while Thread B holds resource Y
and waits for resource X.
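A minimal sketch of that scenario (the lock objects lockX and lockY are assumed); running it almost always hangs with both threads blocked:
final Object lockX = new Object();
final Object lockY = new Object();

new Thread(() -> {                 // Thread A
    synchronized (lockX) {
        try { Thread.sleep(100); } catch (InterruptedException ignored) {}
        synchronized (lockY) { }   // blocks forever: B holds lockY
    }
}).start();

new Thread(() -> {                 // Thread B
    synchronized (lockY) {
        try { Thread.sleep(100); } catch (InterruptedException ignored) {}
        synchronized (lockX) { }   // blocks forever: A holds lockX
    }
}).start();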
Amdahl's Law
(Limits of Parallel Speedup)
Key Points:
● Amdahl’s Law is a formula that predicts the theoretical maximum speedup of a task
using parallel processing, based on the proportion of the task that can be parallelized.
● Named after Gene Amdahl, who formulated the law in 1967.
Key Insight:
● No matter how many processors are added, the performance improvement is limited
by the portion of the task that cannot be parallelized.
Formula for Amdahl's Law
Formula:
S = 1 / ((1 - P) + P / N)
Where:
● S = Maximum speedup.
● P = Fraction of the program that can be parallelized.
● N = Number of processors or cores.
● (1 - P) = Fraction of the program that is serial (cannot be parallelized).
Key Points:
● As N (number of processors) increases, the effect of the serial portion becomes more
significant in limiting the speedup.
Explanation of Components
● P (Parallelizable Portion):
The part of the task that can be executed concurrently across multiple processors.
● 1 - P (Serial Portion):
The portion of the task that must be executed sequentially, regardless of how many
processors are available.
● N (Number of Processors):
The number of processors or cores available to execute the parallelizable portion.
Key Insight:
● The larger the serial portion of a task, the less effective parallel processing will be.
Visualization of Amdahl's Law
Graph: Speedup vs Number of Processors
Key Observation:
● Speedup increases as processors are added, but eventually levels off due to the serial
portion limiting further improvements.
Example of Amdahl's Law
● A program has 70% of its tasks that can be parallelized (P = 0.7) and 30% that must run sequentially (1 - P = 0.3).
● With N = 4 processors: S = 1 / (0.3 + 0.7 / 4) = 1 / 0.475 ≈ 2.11
Interpretation:
● With 4 processors, the maximum speedup is 2.11x, not 4x, due to the serial portion.
Impact of Serial Portion on Speedup
Key Insight:
● Even with an infinite number of processors, the maximum speedup is limited by the serial portion: as N grows, S approaches 1 / (1 - P). For P = 0.7, the speedup can never exceed 1 / 0.3 ≈ 3.33x.
● Parallel portion speeds up, but the serial portion becomes a bottleneck.
● Adding more processors beyond a certain point yields diminishing returns.
Key Concept:
● Performance gains from adding processors flatten as the impact of the serial portion
dominates.
Amdahl's Law in Practice
Applications:
● Guides hardware decisions: how many processors are worth adding for a given workload.
● Shows when optimizing the serial portion pays off more than adding processors.
Real-World Example:
● In data processing applications, if a task like reading from disk cannot be parallelized,
adding more processors will not significantly speed up the overall process.
Limitations of Amdahl's Law
Assumes Fixed Workload: Amdahl's Law does not account for changes in the problem size,
where more processors could handle larger workloads.
No Dynamic Scaling: The law assumes the parallel and serial portions are fixed, which may
not be true in dynamic environments.
Gustafson's Law as a Counterpoint
Gustafson’s Law complements Amdahl's Law by considering that the size of the problem
can increase as the number of processors increases.
● It argues that more processors allow you to handle larger problems rather than just
completing a fixed task faster.
Key Insight:
● When the workload grows with the processor count, the achievable (scaled) speedup grows nearly linearly with N.
Formula:
S = (1 - P) + P × N
Where:
● S = Scaled speedup.
● P = Fraction of the scaled workload that can be parallelized.
● N = Number of processors.
Speedup in Parallel Computing
Key Points:
Real Speedup:
● In practice, speedup is often less than ideal due to factors like overhead,
communication delays, and the serial portion of the task.
Formula for Ideal Speedup
Ideal Speedup Formula:
S = N
Where N is the number of processors.
● If you have 4 processors, ideal speedup would be 4x.
Example:
● A task takes 10 seconds on 1 processor. On 4 processors it should ideally take 10 / 4 = 2.5 seconds, for a speedup of S = 10 / 2.5 = 4x.
Realistic Speedup (Amdahl’s Law)
In reality, speedup is limited by the portion of the task that cannot be parallelized.
Amdahl’s Law:
S = 1 / ((1 - P) + P / N)
Where:
● S = Maximum speedup.
● P = Fraction of the program that can be parallelized (parallelizable portion).
● N = Number of processors or cores.
● (1 - P) = Fraction of the program that is serial (cannot be parallelized).
Speedup Example Using Amdahl's Law
Example:
● Suppose P = 0.6 (60% parallelizable) and N = 4 processors:
S = 1 / ((1 - 0.6) + 0.6 / 4) = 1 / (0.4 + 0.15) = 1 / 0.55 ≈ 1.82
Key Insight:
● Even with 4 processors, the speedup is only 1.82x due to the serial portion of the task.
Superlinear Speedup
Definition:
● Superlinear speedup occurs when the speedup is greater than the number of
processors used (i.e., S > N).
Causes:
1. Cache Effects: More processors can lead to better cache utilization, reducing memory
access times.
2. Algorithmic Changes: Parallel execution might expose optimizations that improve
performance beyond just parallelization.
Scalability
Definition:
● Refers to how well a program can maintain or increase its speedup as more processors
are added.
Types of Scalability:
● Strong Scaling: Speedup is measured while keeping the problem size constant and
increasing the number of processors.
● Weak Scaling: Speedup is measured while increasing both the problem size and the
number of processors proportionally.
Visualization of Speedup
Graph: Speedup vs Number of Processors
Key Observation:
● The speedup curve increases with more processors but levels off due to overhead and
the serial portion of the task.
Real-World Examples of Speedup
Example 1:
● Matrix Multiplication: A task where most of the work can be parallelized, leading to
significant speedup on multiple processors.
Key Observation:
● As the number of processors increases, the speedup eventually levels off due to the serial portion of the task.
Conclusion
Summary:
● Speedup is a key measure of how well a task benefits from parallel processing.
● Ideal speedup is often unachievable due to serial portions of the task and overhead.
● Amdahl’s Law provides a framework to understand the limits of speedup.
Understanding Parallel
Efficiency
(Maximizing Performance in Parallel Computing)
What is Parallel Efficiency?
Definition:
● Parallel Efficiency is a metric that measures how effectively multiple processors are
utilized in parallel computing.
● It represents the ratio of achieved speedup to the number of processors used.
Formula:
E = S / N
Key Concept of Parallel Efficiency
● Efficiency (E): Indicates how well the computational workload is divided among the
processors.
● E = 1 (or 100%): Ideal efficiency, meaning perfect usage of all processors.
● E < 1 (or < 100%): Suboptimal efficiency due to overheads or imbalance in workload.
Formula Breakdown
E = S / N
Where:
● E = Parallel efficiency (often expressed as a percentage).
● S = Speedup achieved with N processors.
● N = Number of processors used.
Key Insight:
● If the speedup is equal to the number of processors (i.e., S = N), then E = 1 or 100%
efficiency.
Example:
● A speedup of S = 3.5 on N = 4 processors gives E = 3.5 / 4 = 0.875 = 87.5%.
Key Insight:
● As more processors are added, the overhead often increases, reducing efficiency.
Example of Parallel Efficiency
Scenario:
● A task that takes 100 seconds sequentially completes in 25 seconds on 5 processors, so S = 100 / 25 = 4.
Efficiency Calculation:
● E = S / N = 4 / 5 = 0.8 = 80%
Interpretation:
● 80% of the available processing capacity performs useful work; the remaining 20% is lost to overhead and the serial portion.
Strong and Weak Scaling
Strong Scaling:
● Measures efficiency when the problem size stays constant and the number of
processors increases.
● Efficiency tends to decrease as more processors are added due to overhead.
Weak Scaling:
● Measures efficiency when the problem size increases proportionally with the number
of processors.
● Efficiency can remain more consistent if the workload increases with the processor
count.
Real-World Example: Parallel Efficiency in Matrix Multiplication
Matrix Multiplication Example:
● Suppose multiplying two large matrices on 8 processors yields a measured speedup of 6x.
Efficiency Calculation:
● E = S / N = 6 / 8 = 0.75 = 75%
Key Insight:
● Although adding more processors improves speed, the efficiency is only 75% due to
communication overhead and non-parallelizable portions.
Conclusion
Summary:
● Parallel efficiency (E = S / N) measures how effectively processors are utilized.
● Overhead, load imbalance, and serial portions keep E below 100%, and E typically falls as more processors are added.
Creating Threads in Java
(Runnable Interface vs. Thread Class)
Key Point:
● Java provides two main ways to create threads: using the Runnable interface and the
Thread class.
Methods of Creating Threads in Java
1. Using the Runnable Interface
2. Extending the Thread Class
Key Difference:
● Runnable Interface separates the task from the thread itself, promoting loose
coupling and reusability.
● Thread Class binds the task and thread execution together.
Creating Threads Using the Runnable Interface
Step-by-Step Process:
1. Create a class that implements the Runnable interface and override its run() method.
2. Pass an instance of that class to a Thread constructor.
3. Call start() on the Thread object to begin execution.
Code Example:
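A minimal sketch of these three steps (the class names MyTask and RunnableDemo are assumed):
class MyTask implements Runnable {
    @Override
    public void run() {
        System.out.println("Task running in " + Thread.currentThread().getName());
    }
}

public class RunnableDemo {
    public static void main(String[] args) {
        Thread t = new Thread(new MyTask()); // the task is decoupled from the thread
        t.start();                           // begins concurrent execution of run()
    }
}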
Runnable Interface vs. Thread Class
● Runnable Interface: Allows the class to extend another class. Preferred when the class needs to perform tasks other than just threading.
● Thread Class: Cannot extend other classes, since Java supports single inheritance. Suitable for simple thread execution.
Thread Life Cycle
● New: The thread is created but not yet started.
● Runnable: After calling the start() method, the thread is ready to run when scheduled
by the OS.
● Running: The thread is actively executing in the run() method.
● Blocked: The thread is waiting for resources or I/O.
● Terminated: The thread has completed execution.
Key Methods:
● start(): moves a New thread to Runnable.
● run(): contains the code the thread executes.
● sleep(ms): pauses the current thread for the given time.
● join(): waits for another thread to terminate.
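Example: a minimal sketch with two worker threads (the download and process tasks are hypothetical placeholders):
public class FileDemo {
    public static void main(String[] args) {
        Thread downloadThread = new Thread(() -> System.out.println("Downloading file..."));
        Thread processThread = new Thread(() -> System.out.println("Processing file..."));

        downloadThread.start(); // New -> Runnable; the JVM invokes run() when scheduled
        processThread.start();  // runs concurrently with downloadThread
    }
}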
Advantages of Runnable Interface
Separation of Concerns: Allows you to separate the task logic from the thread
management.
Multiple Inheritance: A class can implement multiple interfaces, while it can only extend
one class.
Reusability: The same Runnable task can be reused across multiple threads.
When to Use Thread Class
1. Simple Threading Needs: If the class only handles thread execution and nothing else,
extending the Thread class can be simpler.
2. Small Tasks: For lightweight tasks, extending the Thread class can save extra code.
Key Limitation:
● A class that extends Thread cannot extend any other class, since Java supports only single inheritance.
Final Thought:
● Best Practice: Use the Runnable interface for complex, scalable applications as it
allows more flexible design.
Multithreaded Client-Server
Application in Java
(Building Efficient Networked Systems)
Introduction to Client-Server Architecture
Client-Server Model:
A client-server architecture consists of a server that provides resources and services, and
clients that request those services.
Multithreaded Server:
● A server that creates a new thread to handle each client request, allowing multiple clients to interact with the server simultaneously.
Key Features:
● Server: Listens on a port and accepts incoming client connections.
● Threads: Each accepted connection is handled by its own thread, so clients are served independently and concurrently.
Server-Side Code Structure
Code Example:
import java.io.*;
import java.net.*;

// Handles one client connection on its own thread.
class ClientHandler extends Thread {
    private final Socket socket;
    ClientHandler(Socket socket) { this.socket = socket; }

    public void run() {
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
             PrintWriter writer = new PrintWriter(socket.getOutputStream(), true)) {
            String clientMessage;
            while ((clientMessage = reader.readLine()) != null) {
                System.out.println("Client: " + clientMessage);
                writer.println("Server: " + clientMessage); // Echo message back to client
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

// In main: accept loop, one handler thread per client (port 5000 is an example).
try (ServerSocket serverSocket = new ServerSocket(5000)) {
    while (true) {
        Socket clientSocket = serverSocket.accept();
        new ClientHandler(clientSocket).start();
    }
} catch (IOException ex) {
    ex.printStackTrace();
}
Client-Side Code Structure
Steps to Create a Client:
1. Connect to the server using a socket.
2. Send requests to the server through the socket's output stream.
3. Receive responses from the server through the input stream.
Code Example:
import java.io.*;
import java.net.*;

try (Socket socket = new Socket("localhost", 5000); // same example port as the server
     PrintWriter writer = new PrintWriter(socket.getOutputStream(), true);
     BufferedReader reader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
     BufferedReader consoleReader = new BufferedReader(new InputStreamReader(System.in))) {
    String message;
    while (true) {
        System.out.print("Enter message: ");
        message = consoleReader.readLine();
        writer.println(message);
        System.out.println(reader.readLine()); // print the server's reply
    }
} catch (IOException e) {
    e.printStackTrace();
}

Challenges in Multithreaded Client-Server Applications
Race Conditions:
● Shared resources may lead to race conditions and data inconsistency if not properly synchronized.
Deadlocks:
● Threads may block each other, resulting in a deadlock if proper care is not taken in resource
allocation.
Scalability Limits:
● Creating a large number of threads may overwhelm the system, leading to performance
degradation.
Error Handling:
● Proper error handling for socket exceptions and thread interruptions is necessary to ensure
reliability.
Improving Efficiency with Thread Pools
Thread Pools:
Instead of creating new threads for every client, use a thread pool to manage a fixed number of threads for
better resource management.
ExecutorService Example:
import java.util.concurrent.*;

ExecutorService pool = Executors.newFixedThreadPool(10); // pool size 10 is an example

try (ServerSocket serverSocket = new ServerSocket(5000)) {
    while (true) {
        Socket clientSocket = serverSocket.accept();
        pool.execute(new ClientHandler(clientSocket)); // reuse pooled threads instead of creating new ones
    }
}
Advantages:
● Reduces overhead of creating and destroying threads.
● Better control over the number of concurrent threads.
Real-World Use Cases of Multithreaded Servers
1. Web Servers:
○ Handling multiple HTTP requests from different users concurrently (e.g., Apache,
Nginx).
2. Chat Applications:
○ Allowing real-time communication between multiple users in a chat room.
3. Game Servers:
○ Handling multiple players interacting with a game world in real time.
4. File Servers:
○ Serving file upload/download requests from multiple users simultaneously.
Conclusion
Multithreaded Client-Server Applications enable handling multiple client requests
concurrently, improving scalability and performance.
Java provides easy mechanisms to implement such applications using sockets, threads, and
executor services.
Proper care must be taken to avoid synchronization issues and performance bottlenecks.
Understanding Thread Pool
in Java
(ExecutorService and ForkJoinPool)
Introduction to Thread Pools
● What is a Thread Pool?
A thread pool is a pool of pre-created threads that are reused to execute multiple
tasks, instead of creating and destroying a thread for each task.
Benefits:
● Avoids the overhead of creating and destroying a thread per task.
● Caps the number of concurrent threads, protecting system resources.
Key Interfaces:
● Executor, ExecutorService, and ScheduledExecutorService in java.util.concurrent.
Key Features:
● Accepts tasks as Runnable or Callable objects and runs them on pooled worker threads.
● Manages the pool's life cycle (shutdown(), awaitTermination()).
Key Methods:
executor.submit(task);  // hand a task to the pool
executor.shutdown();    // stop accepting new tasks; queued tasks still finish
Code Example:
// A task class for use with the pool (the name Task is assumed).
class Task implements Runnable {
    private final int taskId;
    Task(int taskId) { this.taskId = taskId; }

    @Override
    public void run() {
        System.out.println("Executing task " + taskId + " by "
                + Thread.currentThread().getName());
    }
}
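A possible way to drive this Task with a pool (the pool size of 4 and task count of 8 are arbitrary choices):
import java.util.concurrent.*;

ExecutorService executor = Executors.newFixedThreadPool(4);

for (int i = 1; i <= 8; i++) {
    executor.submit(new Task(i)); // at most 4 tasks run at once; the rest queue
}
executor.shutdown(); // no new tasks accepted; queued tasks still complete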
Types of ExecutorService Implementations
1. FixedThreadPool:
○ A pool with a fixed number of threads.
○ Example: Executors.newFixedThreadPool(4)
2. CachedThreadPool:
○ A pool that creates new threads as needed and reuses old ones.
○ Example: Executors.newCachedThreadPool()
3. SingleThreadExecutor:
○ A pool with only one thread, useful for sequential task execution.
○ Example: Executors.newSingleThreadExecutor()
4. ScheduledThreadPool:
○ A pool that allows scheduling tasks at fixed intervals.
○ Example: Executors.newScheduledThreadPool(3)
ForkJoinPool Overview
ForkJoinPool:
● A specialized thread pool designed for parallel processing tasks that can be broken
down into smaller sub-tasks (recursive tasks).
Key Concept:
● Fork/Join Framework: Tasks are recursively divided (forked) into smaller sub-tasks and
then combined (joined) after execution.
Use Case:
● Divide-and-conquer computations, such as summing a large array; the SumTask sketch below splits the array until chunks are small enough to sum directly.
Example:
import java.util.concurrent.*;

// Recursive task that sums numbers[start..end); the field names and the
// THRESHOLD value of 1000 are assumed for illustration.
class SumTask extends RecursiveTask<Integer> {
    private final int[] numbers;
    private final int start, end;
    private static final int THRESHOLD = 1000;

    SumTask(int[] numbers, int start, int end) {
        this.numbers = numbers;
        this.start = start;
        this.end = end;
    }

    @Override
    protected Integer compute() {
        if (end - start <= THRESHOLD) {
            int sum = 0;
            for (int i = start; i < end; i++) {
                sum += numbers[i];
            }
            return sum;
        } else {
            int middle = (start + end) / 2;
            SumTask leftTask = new SumTask(numbers, start, middle);
            SumTask rightTask = new SumTask(numbers, middle, end);
            leftTask.fork();                       // run the left half asynchronously
            int rightResult = rightTask.compute(); // compute the right half in this thread
            int leftResult = leftTask.join();      // wait for the forked left half
            return leftResult + rightResult;
        }
    }
}
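Invoking the task (a hypothetical SumDemo driver; the array contents are arbitrary):
import java.util.concurrent.*;

public class SumDemo {
    public static void main(String[] args) {
        int[] numbers = new int[1_000_000];
        java.util.Arrays.fill(numbers, 1);

        ForkJoinPool pool = ForkJoinPool.commonPool();
        int total = pool.invoke(new SumTask(numbers, 0, numbers.length));
        System.out.println("Sum = " + total); // prints 1000000
    }
}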
ForkJoinPool vs. ExecutorService
● ForkJoinPool: Suitable for recursive, divide-and-conquer tasks. Uses a work-stealing algorithm for dynamic load balancing. Best for tasks that can be broken down into sub-tasks.
● ExecutorService: Suitable for simple tasks without complex division. Does not perform automatic task division or balancing. Best for executing independent tasks concurrently.
Advantages of Using Thread Pools
Optimized Resource Usage:
● Threads are reused, avoiding the cost of creating and destroying a thread for every task.
Scalability:
● Thread pools can handle a large number of tasks concurrently without overwhelming
the system.
Task Queuing:
● Tasks are queued and executed as threads become available, ensuring orderly
execution.
Improved Performance:
● Thread pools help manage and optimize thread creation, resource usage, and task execution in a scalable and efficient manner.
Parallel Performance
Analysis by Controlling Task
Granularity
(Optimizing Parallel Computing Performance)
Introduction to Parallel Performance
● Parallel Computing:
The process of breaking down large problems into smaller sub-tasks and executing
them concurrently to improve performance.
● Key Challenge:
Balancing the granularity of tasks to maximize parallel efficiency while minimizing
overhead.
Task Granularity:
● Refers to the size of the tasks or work units. It is a critical factor in determining the
performance of parallel programs.
What is Task Granularity?
Fine-Grained Tasks:
● Small tasks that require less computational work.
● Advantage: Greater parallelism potential.
● Disadvantage: High communication and synchronization overhead.
Coarse-Grained Tasks:
● Larger tasks with more computational work per task.
● Advantage: Less overhead in communication and synchronization.
● Disadvantage: Less parallelism, risk of load imbalance.
The Importance of Controlling Task Granularity
● Optimal Task Granularity ensures a balance between task execution time and
parallelism.
Key Considerations:
1. Task Creation Overhead:
Creating and managing smaller tasks can increase overhead.
2. Communication Overhead:
Fine-grained tasks may require more frequent communication between
processes/threads.
3. Load Balancing:
Fine-grained tasks are easier to distribute evenly across processing units.
4. Synchronization Costs:
More tasks can lead to more frequent synchronization, which might slow down
execution.
Balancing Granularity for Performance
Fine-Grained:
● Pros:
○ High degree of parallelism.
○ Better load distribution.
● Cons:
○ Increased communication overhead.
○ High task management and synchronization costs.
Coarse-Grained:
● Pros:
○ Reduced synchronization and communication overhead.
○ Simpler task management.
● Cons:
○ Limited parallelism potential.
○ Risk of uneven load distribution (load imbalance).
How Granularity Affects Parallel Performance
Scenarios:
● Tasks too fine-grained: scheduling, communication, and synchronization overhead dominate the useful work.
● Tasks too coarse-grained: some processors sit idle while others finish large chunks (load imbalance).
Goal:
● Find the sweet spot between fine-grained and coarse-grained tasks where the system
can maximize resource utilization while minimizing overhead.
Task Granularity and Amdahl's Law
Amdahl’s Law:
● Describes the potential speedup of a parallel program, where the degree of speedup is
limited by the sequential portion of the task.
Impact on Granularity:
● Increasing the parallel portion of the task improves speedup, but very fine-grained
tasks might result in higher overhead, reducing the benefits predicted by Amdahl’s
Law.
Example: Parallel Sorting Algorithm
Fine-Grained Approach:
● Divide the sorting task into very small sub-arrays. Each thread works on a small
portion of the array.
● Outcome:
○ High degree of parallelism but communication and synchronization overhead
dominate performance.
Coarse-Grained Approach:
● Divide the array into large chunks. Each thread sorts a large portion independently.
● Outcome:
○ Less overhead but potential for load imbalance between threads.
Performance Metrics in Parallel Analysis
1. Speedup:
○ Measures the performance improvement when running a task in parallel vs.
sequential.
2. Parallel Efficiency:
○ Ratio of speedup to the number of processing units.
○ Efficiency decreases with too fine or too coarse granularity.
3. Scalability:
○ How well the performance improves as more threads or processors are added.
○ Granularity plays a key role in determining how scalable an application is.
Optimizing Granularity Using Recursive Task Splitting
● ForkJoin Framework:
○ Utilizes recursive task splitting to control granularity dynamically during
execution.
● Strategy:
○ Split tasks recursively until they reach an optimal size for parallel execution.
○ Ensures the right balance of fine-grained and coarse-grained tasks.
Code Example:
import java.util.concurrent.*;

class ParallelTask extends RecursiveTask<Integer> {
    private int[] data;
    private int start, end;
    private static final int THRESHOLD = 100; // tune this to control granularity

    ParallelTask(int[] data, int start, int end) {
        this.data = data; this.start = start; this.end = end;
    }

    // Base case: the chunk is small enough to process directly
    // (assumed here to sum the chunk, matching the recombination below).
    private int processSequentially() {
        int sum = 0;
        for (int i = start; i < end; i++) sum += data[i];
        return sum;
    }

    @Override
    protected Integer compute() {
        if (end - start <= THRESHOLD) {
            return processSequentially();
        } else {
            int mid = (start + end) / 2;
            ParallelTask leftTask = new ParallelTask(data, start, mid);
            ParallelTask rightTask = new ParallelTask(data, mid, end);
            leftTask.fork();                       // left half runs asynchronously
            int rightResult = rightTask.compute(); // right half in the current thread
            int leftResult = leftTask.join();      // wait for the forked half
            return leftResult + rightResult;
        }
    }
}
Key Takeaway:
● Fine-grained tasks provide greater parallelism but risk high overhead; coarse-grained tasks minimize overhead but can suffer from load imbalance.