PPL Unit - IV
UNIT – IV
Each task in a program can support one thread of control. Tasks are sometimes
called processes.
In some languages, for example Java and C#, certain methods serve as tasks. Such
methods are executed in objects called threads.
Second, when a program unit invokes a task, in some cases it need not wait for the
task to complete its execution before continuing its own execution.
Third, when the execution of a task is completed, control may or may not return to
the unit that started that execution.
Lightweight tasks can be more efficient than heavyweight tasks, because less effort is
required to manage their execution.
A task can communicate with other tasks through shared nonlocal variables, through
message passing, or through parameters.
If a task does not communicate with or affect the execution of any other task in the
program in any way, it is said to be disjoint.
Because tasks often work together to create simulations or solve problems and
therefore are not disjoint, they must use some form of communication to either
synchronize their executions or share data or both.
Two kinds of synchronization are required when tasks share data: cooperation and
competition.
Competition synchronization is required between two tasks when both require the
use of some resource that cannot be simultaneously used.
Produced data are usually placed in a storage buffer by the producing unit and
removed from that buffer by the consuming unit.
The sequence of stores to and removals from the buffer must be synchronized.
The consumer unit must not be allowed to take data from the buffer if the buffer is
empty.
The producer unit cannot be allowed to place new data in the buffer if the buffer is
full.
This is a problem of cooperation synchronization because the users of the shared data
structure must cooperate if the buffer is to be used correctly.
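As a sketch of this cooperation synchronization (class and variable names here are illustrative, not from the original text), Java's java.util.concurrent.BlockingQueue provides exactly the bounded-buffer behavior described above: the producer blocks when the buffer is full and the consumer blocks when it is empty.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    // Bounded buffer of capacity 3: put() blocks when the buffer is full and
    // take() blocks when it is empty, so cooperation synchronization is built in.
    static final BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(3);

    static List<Integer> runOnce() throws InterruptedException {
        List<Integer> consumed = new ArrayList<>();
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    buffer.put(i);               // waits if the buffer is full
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    consumed.add(buffer.take()); // waits if the buffer is empty
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        return consumed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("consumed: " + runOnce());
    }
}
```

Because the queue does the waiting, neither unit needs explicit signaling code; the buffer itself enforces the "not from empty, not into full" rules stated above.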
Competition synchronization prevents two tasks from accessing a shared data
structure at exactly the same time
To clarify the competition problem, consider the following scenario: suppose
task A has the statement TOTAL = TOTAL + 1 and task B has the statement
TOTAL = TOTAL * 2, with TOTAL initially holding the value 3.
At the machine language level, each task may accomplish its operation
on TOTAL with the following three-step process:
1. Fetch the value of TOTAL.
2. Perform the arithmetic operation (add 1 for task A, multiply by 2 for task B).
3. Store the result back into TOTAL.
Now suppose the two tasks interleave as follows: task A fetches TOTAL (3);
before A stores its result, task B also fetches TOTAL (still 3); task A adds 1
and stores 4; task B multiplies its fetched value by 2 and stores 6. Over time,
the value of TOTAL is 3, then 4, then 6. Task A's update has been lost: neither
correct sequential result (8 for A then B, 7 for B then A) is produced.
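Competition synchronization prevents such a lost update by making the fetch-operate-store sequence mutually exclusive. The following Java sketch (class and field names are illustrative; both tasks increment here so the expected final value is deterministic) guards the shared variable with a synchronized block, so the two tasks can never interleave inside the critical section:

```java
public class RaceDemo {
    static int total = 0;
    static final Object lock = new Object();

    static void add(int times) {
        for (int i = 0; i < times; i++) {
            synchronized (lock) {    // guard: only one task at a time may enter
                total = total + 1;   // fetch TOTAL, add 1, store TOTAL
            }
        }
    }

    static int run(int perThread) throws InterruptedException {
        total = 0;
        Thread a = new Thread(() -> add(perThread));
        Thread b = new Thread(() -> add(perThread));
        a.start(); b.start();
        a.join(); b.join();
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        // With the lock, the result is always 2 * perThread; without the
        // synchronized block, some increments would usually be lost.
        System.out.println("total = " + run(100_000));
    }
}
```

Removing the synchronized block reintroduces exactly the interleaving problem shown in the TOTAL scenario above.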
scheduler manages the sharing of processors among the tasks. If there were never
any interruptions and tasks all had the same priority,
the scheduler could simply give each task a time slice, such as 0.1 second, and when
a task’s turn came, the scheduler could let it execute on a processor for that amount
of time.
there are several events that complicate this, for example, task delays for
synchronization and for input or output operations.
Because input and output operations are very slow relative to the processor’s speed,
a task is not allowed to keep a processor while it waits for completion of such an
operation.
Tasks can be in several different states:
New: A task is in the new state when it has been created but has not yet begun its
execution.
Ready: A ready task is ready to run but is not currently running. Either it has not
yet been given processor time by the scheduler, or it had run previously but was
blocked in one of the ways described below.
Tasks that are ready to run are stored in a queue that is often called the task ready
queue.
Running: A running task is one that is currently executing; that is, it has a processor
and its code is being executed.
Blocked: A task that is blocked has been running, but that execution was interrupted
by one of several different events, the most common of which is an input or output
operation.
In addition to input and output, some languages provide operations for the user
program to specify that a task is to be blocked
Dead: A dead task is no longer active in any sense. A task dies when its execution is
completed or it is explicitly killed by the program.
Semaphores:
Semaphore is a simple mechanism that can be used to provide synchronization of
tasks.
In an effort to provide competition synchronization through mutually exclusive
access to shared data structures
Semaphores can also be used to provide cooperation synchronization.
To provide limited access to a data structure, guards can be placed around the code
that accesses the structure.
A guard is a linguistic device that allows the guarded code to be executed only when
a specified condition is true.
A guard can be used to allow only one task to access a shared data structure at a
time.
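As a sketch of a semaphore used as a guard (the class name SemaphoreGuard and the helper methods are illustrative), Java's java.util.concurrent.Semaphore with a single permit provides mutually exclusive access to a shared data structure: acquire() blocks until the permit is free, and release() returns it.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Semaphore;

public class SemaphoreGuard {
    static final Semaphore access = new Semaphore(1); // one permit: a binary semaphore
    static final List<Integer> shared = new ArrayList<>();

    static void append(int from, int count) {
        for (int i = 0; i < count; i++) {
            try {
                access.acquire();      // wait: blocks until the permit is free
                shared.add(from);      // guarded access to the shared structure
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            } finally {
                access.release();      // signal: return the permit
            }
        }
    }

    static int run(int count) throws InterruptedException {
        shared.clear();
        Thread t1 = new Thread(() -> append(1, count));
        Thread t2 = new Thread(() -> append(2, count));
        t1.start(); t2.start();
        t1.join(); t2.join();
        return shared.size();
    }

    public static void main(String[] args) throws InterruptedException {
        // Every add happens inside the guard, so no update is lost.
        System.out.println("entries = " + run(10_000));
    }
}
```

The guarded section is the code between acquire() and release(); it executes only when the semaphore's condition (a free permit) holds, which is exactly the guard idea described above.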
Symmetric Naming

    Process P (Sender)              Process Q (Receiver)
    ...                             ...
    send(Q, message);               receive(P, &message);
    ...                             ...

Asymmetric Naming

    Process P (Sender)              Process Q (Receiver)
    ...                             ...
    send(Q, message);               receive(&message);
    ...                             ...
Here the receiver will accept a message from any sender. The sender can pass its
own id inside the message if it wants.
Synchronous Communication
P and Q have to wait for each other (one blocks until the other is ready).
Asynchronous Communication
Underlying system buffers the messages so P and Q don't have to wait for each
other. This is less efficient due to the overhead of managing the buffer.
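The difference can be sketched with two of Java's queue classes (the class MessagePassing and its method names are illustrative): LinkedBlockingQueue buffers messages, so an asynchronous send succeeds immediately, while SynchronousQueue has no buffer, so a synchronous hand-off succeeds only when a receiver is already waiting.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

public class MessagePassing {
    // Asynchronous send: the underlying buffer accepts the message at once,
    // so the sender does not have to wait for a receiver.
    static boolean asyncSend() {
        LinkedBlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
        return mailbox.offer("hello");   // buffered immediately
    }

    // Synchronous send: there is no buffer, so a hand-off succeeds only if a
    // receiver is already waiting; otherwise the sender would have to block.
    static boolean syncSendNoReceiver() {
        SynchronousQueue<String> channel = new SynchronousQueue<>();
        return channel.offer("hello");   // fails: nobody is waiting to receive
    }

    public static void main(String[] args) {
        System.out.println("asynchronous send accepted: " + asyncSend());
        System.out.println("synchronous send with no receiver: " + syncSendNoReceiver());
    }
}
```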
Client-Server Communication
A system in which the receiver (server) does care about the sender and is normally
responsible for sending a reply message.
Rendezvous
The client blocks while the server computes and sends back the reply.
If the computation may take a long time, and the client has useful work to do, the
rendezvous action should be kept short and some other way can be found to get the
result back to the client.
Note that because the client blocks while the response is sent back, we say message
exchange is atomic.
Callback
The client calls the server with the input data and its id and doesn't block. Sometime
later, the server will call the client and pass the result.
Receipt
The client calls the server, passing in the input data and receiving (immediately) a
receipt from the server; later, the client contacts the server again, using the receipt to
get the result. (Supposedly, dry cleaners and photo developers work this way.)
Mailbox
The client gives the server the address of the mailbox where it wants the result to go;
the client retrieves the result later.
A mailbox can also be used to send the input data to the server:
Relay
A relay implements a fully asynchronous, no-wait send, and is used only when no
reply is necessary. The relay blocks "on behalf of" the client.
Buffer
A thread runs inside a process, and there is context switching between threads.
Multiple processes can run in the OS, and each process can again have multiple
threads running simultaneously. The multithreading concept is popularly applied in
games, animation, etc.
1. New State
By default, a Thread will be in the new state; in this state, the code has not yet
been run and the execution process has not been initiated.
2. Active State
A Thread that is in the new state gets transferred to the Active state when it
invokes the start() method. This Active state contains two sub-states:
Runnable State: In this state, the thread is ready to run at any given
time, and it is the job of the Thread Scheduler to allocate processor time
to the threads held in the runnable state. A multithreaded program shares
slices of time among its threads; each thread runs for a short span of
time and then waits in the runnable state for its next scheduled time
slice.
Running State: When the thread receives CPU time allocated by the Thread
Scheduler, it transfers from the "Runnable" state to the "Running" state;
after its time slice expires, it moves back to the "Runnable" state and
waits for its next time slice.
3. Waiting/Blocked State
If a Thread is inactive but on a temporary time, then either it is at waiting or
blocked state, for example, if there are two threads, T1 and T2 where T1 need to
communicate to the camera and other thread T2 already using a camera to scan then
T1 waits until T2 Thread completes its work, at this state T1 is parked in waiting
for the state, and in another scenario, the user called two Threads T2 and T3 with
the same functionality and both had same time slice given by Thread Scheduler
then both Threads T1, T2 is in a blocked state. When there are multiple threads
parked in Blocked/Waiting state Thread Scheduler clears Queue by rejecting
unwanted Threads and allocating CPU on a priority basis.
4. Timed Waiting State
Sometimes the longer duration of waiting for threads causes starvation, if we take
an example like there are two threads T1, T2 waiting for CPU and T1 is undergoing
Critical Coding operation and if it does not exit CPU until its operation gets
executed then T2 will be exposed to longer waiting with undetermined certainty, In
order to avoid this starvation situation, we had Timed Waiting for the state to avoid
that kind of scenario as in Timed Waiting, each thread has a time period for which
sleep() method is invoked and after the time expires the Threads starts executing its
task.
5. Terminated State
A thread will be in the Terminated state for one of the following reasons:
A thread terminates normally when it finishes its task.
Sometimes threads are terminated due to unusual events such as
segmentation faults, exceptions, etc.; this kind of termination is
called abnormal termination.
A terminated Thread means it is dead and no longer available.
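The life cycle above can be observed directly through Thread.getState(). The following sketch (class name and timing constants are illustrative) shows a worker thread moving from NEW, through TIMED_WAITING while it sleeps, to TERMINATED:

```java
public class LifecycleDemo {
    static String[] observeStates() throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(200);               // TIMED_WAITING while asleep
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        String before = worker.getState().name(); // NEW: created, not yet started
        worker.start();                           // enters the active states

        // Poll briefly until the worker reaches its sleep() call.
        Thread.State s = worker.getState();
        for (int i = 0; i < 200 && s != Thread.State.TIMED_WAITING; i++) {
            Thread.sleep(5);
            s = worker.getState();
        }
        String during = s.name();                 // TIMED_WAITING

        worker.join();                            // wait for the thread to die
        String after = worker.getState().name();  // TERMINATED
        return new String[] { before, during, after };
    }

    public static void main(String[] args) throws InterruptedException {
        for (String state : observeStates()) System.out.println(state);
    }
}
```

Note that Java reports RUNNABLE for both of the "runnable" and "running" sub-states described above; the JVM does not distinguish them.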
A = someCalculation() + anotherCalculation();
In C, either of these routines may have side effects, so they must be executed
sequentially.
In a pure functional language, however, they cannot have side effects, which means
they may be executed in either order.
In fact, in a language with lazy evaluation like Haskell, they may not be evaluated
until the value of A is actually used.
Since they can be executed in either order, they can also be executed concurrently.
If both are very small functions, there is not much benefit in this, but if they are
time-consuming computations, the benefit can be large.
In a functional language, this transformation is trivial for the compiler to do. The
difficult part is deciding when to do it. Running every function call in parallel
will almost certainly be slower, even on a massively parallel machine, than running
them all sequentially, simply because of the overhead involved in creating, or
communicating with, all of the threads.
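The transformation can be sketched in Java with CompletableFuture (the two calculation methods are hypothetical stand-ins for the side-effect-free functions in the example): each operand of the addition is evaluated on its own thread, and the results are combined afterwards.

```java
import java.util.concurrent.CompletableFuture;

public class ParallelSum {
    // Hypothetical independent computations with no side effects.
    static int someCalculation()    { return 2 + 2; }
    static int anotherCalculation() { return 3 * 3; }

    static int parallelAdd() {
        // Each call may run on its own thread; since neither has side
        // effects, the order of evaluation cannot matter.
        CompletableFuture<Integer> x =
            CompletableFuture.supplyAsync(ParallelSum::someCalculation);
        CompletableFuture<Integer> y =
            CompletableFuture.supplyAsync(ParallelSum::anotherCalculation);
        return x.join() + y.join();   // A = x + y
    }

    public static void main(String[] args) {
        System.out.println("A = " + parallelAdd());
    }
}
```

For two tiny functions like these, the thread overhead outweighs any gain, which is exactly the trade-off described above; the pattern pays off only when both operands are expensive.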
Synchronization is a mechanism that controls the order in which tasks execute. Two
kinds of synchronization are required when tasks share data: cooperation and
competition. Cooperation synchronization is required between task A and task B when
task A must wait for task B to complete some specific activity before task A can
continue its execution. Competition synchronization is required between two tasks
when both require the use of some resource that cannot be simultaneously used.
Exception Handling and Event Handling:
In this tutorial, we will learn about Java exceptions, their types, and the
difference between checked and unchecked exceptions.
The core advantage of exception handling is to maintain the normal flow of the
application. An exception normally disrupts the normal flow of the application; that
is why we need to handle exceptions. Let's consider a scenario:
1. statement 1;
2. statement 2;
3. statement 3;
4. statement 4;
5. statement 5;//exception occurs
6. statement 6;
7. statement 7;
8. statement 8;
9. statement 9;
10.statement 10;
Suppose an exception occurs at statement 5; then the rest of the code (statements 6
to 10) will not be executed. If we perform exception handling, however, the
remaining statements can still be executed.
There are mainly two types of exceptions: checked and unchecked. An error is
considered as the unchecked exception. However, according to Oracle, there are
three types of exceptions namely:
1. Checked Exception
2. Unchecked Exception
3. Error
1) Checked Exception
The classes that directly inherit the Throwable class except RuntimeException and
Error are known as checked exceptions. For example, IOException, SQLException,
etc. Checked exceptions are checked at compile-time.
2) Unchecked Exception
The classes that inherit the RuntimeException are known as unchecked exceptions.
For example, ArithmeticException, NullPointerException,
ArrayIndexOutOfBoundsException, etc. Unchecked exceptions are not checked at
compile-time, but they are checked at runtime.
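The difference can be illustrated with a short sketch (class and method names are hypothetical): ArithmeticException is unchecked, so the compiler does not require a handler, while IOException is checked, so the code must either catch it or declare it with throws.

```java
public class ExceptionDemo {
    // Unchecked: ArithmeticException extends RuntimeException, so the
    // compiler does not force us to declare or catch it.
    static int safeDivide(int a, int b) {
        try {
            return a / b;
        } catch (ArithmeticException e) {   // thrown at runtime when b == 0
            return 0;                        // keep the normal flow going
        }
    }

    // Checked: IOException must be caught or declared at compile time,
    // or this method will not compile.
    static String readOrDefault(String path) {
        try {
            return new String(java.nio.file.Files.readAllBytes(
                    java.nio.file.Paths.get(path)));
        } catch (java.io.IOException e) {
            return "<missing>";
        }
    }

    public static void main(String[] args) {
        System.out.println(safeDivide(10, 0));           // handled: normal flow continues
        System.out.println(readOrDefault("no-such-file"));
    }
}
```

This is the "maintain the normal flow" advantage described above: the division by zero and the missing file are both absorbed by handlers instead of terminating the program.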
3) Error
An error is irrecoverable. Some examples of errors are OutOfMemoryError,
VirtualMachineError, AssertionError, etc.
Event Handling:
An event can be defined as changing the state of an object or behavior by
performing actions. Actions can be a button click, cursor movement, keypress
through keyboard or page scrolling, etc.
Classification of Events
Foreground Events
Background Events
1. Foreground Events
Foreground events are the events that require user interaction to generate, i.e.,
foreground events are generated due to interaction by the user on components in
Graphic User Interface (GUI). Interactions are nothing but clicking on a button,
scrolling the scroll bar, cursor moments, etc.
2. Background Events
Events that don’t require interactions of users to generate are known as background
events. Examples of these events are operating system failures/interrupts, operation
completion, etc.
Event Handling
It is a mechanism to control the events and to decide what should happen after
an event occurs. To handle the events, Java follows the Delegation Event model.
Listeners: Listeners are used for handling the events generated from the
source. Each listener implements an interface that is responsible for
handling a particular kind of event.
To perform event handling, we need to register the listener with the source, using
a method of the form addTypeListener() (for example, addActionListener()).
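The delegation model can be sketched as follows (the class EventDemo is illustrative, and the event is delivered by hand so the example runs without a display; in a real GUI, a source such as a Button would invoke the listener after button.addActionListener(listener)):

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

public class EventDemo {
    static final StringBuilder log = new StringBuilder();

    // The listener: an object implementing the listener interface; its
    // actionPerformed method decides what happens when the event occurs.
    static final ActionListener listener =
        e -> log.append("handled:" + e.getActionCommand());

    static String fireClick() {
        // Construct a foreground-style event (a button click) and deliver
        // it to the registered listener, as a GUI source normally would.
        ActionEvent click =
            new ActionEvent(new Object(), ActionEvent.ACTION_PERFORMED, "click");
        listener.actionPerformed(click);
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(fireClick());
    }
}
```

The source only knows that some listener is registered; all handling logic lives in the listener, which is the delegation the model is named for.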