PPL Unit - IV

This document discusses subprogram level concurrency in programming languages. It defines tasks as units of a program that can execute concurrently with other units. Tasks can be lightweight and share an address space, or heavyweight and each have their own address space. Tasks must communicate and synchronize their execution to coordinate access to shared resources. Mechanisms like semaphores and monitors can provide synchronization between tasks for both cooperation and competition over shared data.


Principles of Programming Languages

UNIT – IV

Introduction to Subprogram Level Concurrency:

A task is a unit of a program, similar to a subprogram, that can be in concurrent execution with other units of the same program.

Each task in a program can support one thread of control. Tasks are sometimes
called processes.

In some languages, for example Java and C#, certain methods serve as tasks. Such
methods are executed in objects called threads.

Three characteristics of tasks distinguish them from subprograms:

First, a task may be implicitly started, whereas a subprogram must be explicitly called.

Second, when a program unit invokes a task, in some cases it need not wait for the task to complete its execution before continuing its own execution.

Third, when the execution of a task is completed, control may or may not return to the unit that started that execution.

Tasks fall into two general categories:

1. Heavyweight: A heavyweight task executes in its own address space.

2. Lightweight: Lightweight tasks all run in the same address space.

It is easier to implement lightweight tasks than heavyweight tasks. Lightweight tasks can also be more efficient, because less effort is required to manage their execution.

A task can communicate with other tasks through shared nonlocal variables, through
message passing, or through parameters.

If a task does not communicate with or affect the execution of any other task in the program in any way, it is said to be disjoint.
Because tasks often work together to create simulations or solve problems and
therefore are not disjoint, they must use some form of communication to either
synchronize their executions or share data or both.

Synchronization is a mechanism that controls the order in which tasks execute.

Two kinds of synchronization are required when tasks share data: cooperation and
competition.

Cooperation synchronization is required between task A and task B when task A must wait for task B to complete some specific activity before task A can begin or continue its execution.

Competition synchronization is required between two tasks when both require the
use of some resource that cannot be simultaneously used. 

A simple form of cooperation synchronization can be illustrated by a common problem called the producer-consumer problem.

This problem originated in the development of operating systems, in which one program unit produces some data value or resource and another uses it.

Produced data are usually placed in a storage buffer by the producing unit and
removed from that buffer by the consuming unit.

The sequence of stores to and removals from the buffer must be synchronized.

The consumer unit must not be allowed to take data from the buffer if the buffer is
empty.

The producer unit cannot be allowed to place new data in the buffer if the buffer is
full.

This is a problem of cooperation synchronization because the users of the shared data
structure must cooperate if the buffer is to be used correctly.
Competition synchronization prevents two tasks from accessing a shared data structure at exactly the same time.
To clarify the competition problem, consider the following scenario: suppose task A has the statement TOTAL += 1, where TOTAL is a shared integer variable.

Furthermore, suppose task B has the statement TOTAL *= 2. 

Task A and task B could try to change TOTAL at the same time.

At the machine language level, each task may accomplish its operation
on TOTAL with the following three-step process:

1. Fetch the value of TOTAL.
2. Perform the arithmetic operation.
3. Put the new value back in TOTAL.

A lost update can occur when the two tasks interleave as follows: task B fetches TOTAL before task A stores its result, so A's increment is lost and TOTAL ends at 6 rather than the expected 8.

Value of TOTAL:   3                           4                  6

Task A:   Fetch TOTAL    Add 1    Store TOTAL
Task B:              Fetch TOTAL              Multiply by 2   Store TOTAL

Time  ---------------------------------------------------------------->
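The fix for the lost update above is competition synchronization. The following is a minimal sketch (class and method names are illustrative, not from the text): both tasks' fetch/compute/store sequences are made atomic with a lock, so TOTAL always ends as 7 (B then A) or 8 (A then B), never a lost-update value such as 4 or 6.

```java
public class SharedTotal {
    static int total = 3;                     // the shared variable TOTAL
    static final Object lock = new Object();

    static void addOne()   { synchronized (lock) { total += 1; } } // task A
    static void doubleIt() { synchronized (lock) { total *= 2; } } // task B

    // Runs both tasks concurrently and returns the final value of TOTAL.
    static int run() throws InterruptedException {
        total = 3;
        Thread a = new Thread(SharedTotal::addOne);
        Thread b = new Thread(SharedTotal::doubleIt);
        a.start(); b.start();
        a.join(); b.join();
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());            // 7 or 8, depending on order
    }
}
```

Either order is acceptable; what synchronization rules out is the interleaving of the three machine-level steps shown in the timeline.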
Task Scheduling:
A scheduler manages the sharing of processors among the tasks. If there were never any interruptions and all tasks had the same priority, the scheduler could simply give each task a time slice, such as 0.1 second, and when a task's turn came, let it execute on a processor for that amount of time.
However, several events complicate this, for example, task delays for synchronization and for input or output operations. Because input and output operations are very slow relative to the processor's speed, a task is not allowed to keep a processor while it waits for the completion of such an operation.
Tasks can be in several different states:
New: A task is in the new state when it has been created but has not yet begun its
execution.
Ready: A ready task is ready to run but is not currently running. Either it has not yet been given processor time by the scheduler, or it had run previously but was blocked. Tasks that are ready to run are stored in a queue that is often called the task ready queue.
Running: A running task is one that is currently executing; that is, it has a processor
and its code is being executed.
Blocked: A task that is blocked has been running, but that execution was interrupted
by one of several different events, the most common of which is an input or output
operation.
In addition to input and output, some languages provide operations for the user program to specify that a task is to be blocked.
Dead: A dead task is no longer active in any sense. A task dies when its execution is
completed or it is explicitly killed by the program.

Semaphores:
A semaphore is a simple mechanism that can be used to provide synchronization of tasks. Semaphores can provide competition synchronization through mutually exclusive access to shared data structures, and they can also be used to provide cooperation synchronization.
To provide limited access to a data structure, guards can be placed around the code
that accesses the structure.
A guard is a linguistic device that allows the guarded code to be executed only when
a specified condition is true.
A guard can be used to allow only one task to access a shared data structure at a
time.

A semaphore is an implementation of a guard: a data structure that consists of an integer counter and a queue that stores task descriptors.
A task descriptor is a data structure that stores all of the relevant information about
the execution state of a task.
An integral part of a guard mechanism is a procedure for ensuring that all attempted executions of the guarded code eventually take place.

For cooperation synchronization, a shared buffer must have some way of recording both the number of empty positions and the number of filled positions (to prevent buffer underflow and overflow). Two semaphores can serve this purpose:

emptyspots: its counter maintains the number of empty locations in a shared buffer used by producers and consumers.

fullspots: its counter maintains the number of filled locations in the buffer.
The queues of these semaphores can store the descriptors of tasks that have been
forced to wait for access to the buffer.
The queue of emptyspots can store producer tasks that are waiting for available
positions in the buffer; the queue of fullspots can store consumer tasks waiting for
values to be placed in the buffer.
Example: a buffer is designed as an abstract data type in which all data enters the buffer through the subprogram DEPOSIT, and all data leaves the buffer through the subprogram FETCH.
The FETCH subprogram has the opposite sequence of DEPOSIT. It checks the fullspots semaphore to see whether the buffer contains at least one item. If it does, an item is removed, and when FETCH is finished it increments the counter of emptyspots. If the buffer is empty, the calling task is put in the fullspots queue to wait until an item appears.
The wait semaphore subprogram is used to test the counter of a given semaphore variable. If the value is greater than zero, the caller can carry out its operation; otherwise, the caller is placed in the semaphore's queue.

The release semaphore subprogram is used by a task to allow some other task to have one of whatever the counter of the specified semaphore variable counts. If the queue of the specified semaphore variable is empty, which means no task is waiting, release increments its counter (to indicate there is one more of whatever is being controlled that is now available); if a task is waiting, it is removed from the queue and allowed to proceed.
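The DEPOSIT/FETCH scheme above can be sketched in Java using the counting semaphore from `java.util.concurrent` (a real class; the `emptyspots`/`fullspots` names follow the text, the buffer size and values are illustrative). `emptyspots` and `fullspots` give cooperation synchronization; a third binary semaphore, `access`, adds the competition synchronization needed for the buffer structure itself.

```java
import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.Semaphore;

public class BoundedBuffer {
    static final int SIZE = 4;
    static final Queue<Integer> buffer = new LinkedList<>();
    static final Semaphore emptyspots = new Semaphore(SIZE); // empty positions
    static final Semaphore fullspots  = new Semaphore(0);    // filled positions
    static final Semaphore access     = new Semaphore(1);    // mutual exclusion

    static void deposit(int value) throws InterruptedException {
        emptyspots.acquire();       // wait(emptyspots): block if buffer is full
        access.acquire();
        buffer.add(value);
        access.release();
        fullspots.release();        // release(fullspots): one more filled spot
    }

    static int fetch() throws InterruptedException {
        fullspots.acquire();        // wait(fullspots): block if buffer is empty
        access.acquire();
        int value = buffer.remove();
        access.release();
        emptyspots.release();       // release(emptyspots): one more empty spot
        return value;
    }

    // Producer deposits 1..10; consumer fetches 10 items and sums them.
    static int run() throws InterruptedException {
        final int[] sum = {0};
        Thread producer = new Thread(() -> {
            try { for (int i = 1; i <= 10; i++) deposit(i); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread consumer = new Thread(() -> {
            try { for (int i = 0; i < 10; i++) sum[0] += fetch(); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start(); consumer.start();
        producer.join(); consumer.join();
        return sum[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());  // 1 + 2 + ... + 10 = 55
    }
}
```

Here `acquire()` plays the role of wait and `release()` the role of release described above.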
Monitors:
The idea of a monitor is that all synchronization operations on shared data are gathered into a single program unit. These structures are called monitors.
Competition Synchronization:
One of the most important features of monitors is that shared data is resident in the
monitor rather than in any of the client units.
The programmer does not need to synchronize mutually exclusive access to shared data through the use of semaphores or other mechanisms.
Because the access mechanisms are part of the monitor, implementation of a monitor
can be made to guarantee synchronized access by allowing only one access at a time.
Calls to monitor procedures are implicitly blocked and stored in a queue if the
monitor is busy at the time of the call.
Cooperation Synchronization:
Although mutually exclusive access to shared data is intrinsic with a monitor,
cooperation between processes is still the task of the programmer. In particular, the
programmer must guarantee that a shared buffer does not experience underflow or
overflow.
Different languages provide different ways of programming cooperation
synchronization, all of which are related to semaphores.
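In Java, a class with synchronized methods behaves like a monitor. The sketch below (class and method names are illustrative) keeps the shared data inside the monitor and lets the language guarantee one access at a time; callers queue implicitly when the monitor is busy.

```java
public class CounterMonitor {
    private int count = 0;                    // shared data lives in the monitor

    public synchronized void increment() {    // only one caller at a time
        count++;
    }

    public synchronized int get() {
        return count;
    }

    // Two client tasks each increment 1000 times; the monitor guarantees
    // no lost updates, so the result is always 2000.
    static int run() throws InterruptedException {
        CounterMonitor m = new CounterMonitor();
        Runnable worker = () -> { for (int i = 0; i < 1000; i++) m.increment(); };
        Thread t1 = new Thread(worker), t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return m.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```

Note that the clients contain no synchronization code at all: competition synchronization is intrinsic to the monitor.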
Simple Message Passing
One process/thread is the sender and another is the receiver

Symmetric Naming

  Process P (sender)          Process Q (receiver)
    ...                         ...
    send(Q, message);           receive(P, &message);
    ...                         ...

Asymmetric Naming

  Process P (sender)          Process Q (receiver)
    ...                         ...
    send(Q, message);           receive(&message);
    ...                         ...

Here the receiver will accept a message from any sender. The sender can pass its
own id inside the message if it wants.
Synchronous Communication

P and Q have to wait for each other (one blocks until the other is ready).

Asynchronous Communication

Underlying system buffers the messages so P and Q don't have to wait for each
other. This is less efficient due to the overhead of managing the buffer.
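Asynchronous, asymmetrically named message passing can be sketched in Java by modeling the underlying system buffer with a `BlockingQueue` (a real class from `java.util.concurrent`; the channel name and message text are illustrative). The receiver accepts a message from any sender, as in the asymmetric form above.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MessageDemo {
    static final BlockingQueue<String> channel = new ArrayBlockingQueue<>(8);

    static String exchange() throws InterruptedException {
        Thread p = new Thread(() -> {           // process P: send(Q, message)
            try { channel.put("hello from P"); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        p.start();
        String message = channel.take();        // process Q: receive(&message)
        p.join();
        return message;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(exchange());
    }
}
```

Because the channel is buffered, `put` does not block while there is room, which is the asynchronous behavior described above; a zero-capacity rendezvous would instead make sender and receiver wait for each other.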

Client-Server Communication

A system in which the receiver (server) does care about the sender and is normally
responsible for sending a reply message.

Knowledge of the sender can be implemented by:

 Making the sender id a parameter to the receive operation

 Passing the sender id as part of the message

Client-Server communication is basically synchronous, using the rendezvous as a primitive for building up more sophisticated communication mechanisms, including asynchronous ones.

Rendezvous

The client blocks while the server computes and sends back the reply.

If the computation may take a long time, and the client has useful work to do, the
rendezvous action should be kept short and some other way can be found to get the
result back to the client.

Note that because the client blocks while the response is sent back, we say message
exchange is atomic.
Callback

The client calls the server with the input data and its id and doesn't block. Sometime
later, the server will call the client and pass the result.

Receipt

The client calls the server, passing in the input data and receiving (immediately) a
receipt from the server; later, the client contacts the server again, using the receipt to
get the result. (Supposedly, dry cleaners and photo developers work this way.)

Mailbox

The client gives the server the address of the mailbox where it wants the result to go;
the client retrieves the result later.
A mailbox can also be used to send the input data to the server:

Relay

A relay implements a fully asynchronous, no-wait send, and is used only when no
reply is necessary. The relay blocks "on behalf of" the client.

Buffer

The classic approach: a shared bounded buffer (as in the producer-consumer problem) mediates between client and server, decoupling the two sides.

Java Threads:
A thread is a lightweight subprocess, the smallest unit of processing, with its own separate path of execution. Threads use shared memory but act independently, so an exception in one thread does not affect the working of other threads despite the shared memory.

A thread runs inside a process, with context-based switching between threads; an operating system can run multiple processes, and each process can again have multiple threads running simultaneously. Multithreading is popularly applied in games, animation, etc.

The Concept Of Multitasking

The operating system gives users the ability to multitask, performing multiple actions simultaneously on the machine. Multitasking can be enabled in two ways:
1. Process-Based Multitasking
2. Thread-Based Multitasking

1. Process-Based Multitasking (Multiprocessing)

In this type of multitasking, processes are heavyweight and each process is allocated a separate memory area. Because processes are heavyweight, the cost of communication between them is high, and switching between processes takes a long time, as it involves actions such as saving and loading registers, updating memory maps and lists, etc.
2. Thread-Based Multitasking
As discussed above, threads are lightweight and share the same address space, so the cost of communication between threads is low.

Why are threads used?
Threads are used because they are lightweight and can communicate with one another at low cost, contributing to effective multitasking within a shared memory environment.

Life Cycle Of Thread

During its lifetime, a thread passes through the following states:
1. New State
2. Active State
3. Waiting/Blocked State
4. Timed Waiting State
5. Terminated State

Each state in detail:

1. New State
By default, a thread is in the new state: its code has not yet been run and execution has not been initiated.
2. Active State
A thread in the new state moves to the active state when it invokes the start() method. The active state contains two sub-states:
 Runnable State: The thread is ready to run at any given time, and it is the job of the thread scheduler to provide processor time to runnable threads. A multithreaded program shares slices of time among its threads, so each thread runs for a short span and then waits in the runnable state for its next scheduled time slice.
 Running State: When the thread receives the CPU allocated by the thread scheduler, it moves from the runnable state to the running state. After its time slice expires, it moves back to the runnable state and waits for its next time slice.
3. Waiting/Blocked State
A thread that is temporarily inactive is in the waiting or blocked state. For example, if thread T1 needs to use a camera that thread T2 is already using to scan, T1 waits until T2 completes its work; during this time T1 is parked in the waiting state. In another scenario, if two threads T2 and T3 with the same functionality are given the same time slice by the thread scheduler, both are in the blocked state. When multiple threads are parked in the blocked/waiting state, the thread scheduler clears the queue by rejecting unwanted threads and allocating the CPU on a priority basis.
4. Timed Waiting State
A long wait can cause starvation. For example, if threads T1 and T2 are both waiting for the CPU and T1 is performing a critical operation that does not release the CPU until it finishes, T2 may be exposed to a wait of undetermined length. To avoid this starvation, a thread can enter the timed waiting state: the sleep() method is invoked with a time period, and after the time expires the thread resumes executing its task.
5. Terminated State
A thread is in the terminated state for one of the following reasons:
 Termination occurs when a thread finishes its task normally.
 Threads may also be terminated by unusual events such as segmentation faults or unhandled exceptions; this is called abnormal termination.
A terminated thread is dead and no longer available.
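Two of the states above can be observed directly through `Thread.getState()`, as the short sketch below shows. (Java's enum names differ slightly from the list: the active state maps to `RUNNABLE`, and the dead/terminated state to `TERMINATED`.)

```java
public class LifeCycleDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> { /* a short task */ });
        System.out.println(t.getState());  // NEW: created, not yet started
        t.start();                         // enters the active state
        t.join();                          // wait for the task to finish
        System.out.println(t.getState());  // TERMINATED: the thread is dead
    }
}
```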

What is Main Thread?

Every Java program has a main method, which acts as the entry point for the code executed by the JVM. Similarly, in multithreading, every program has one main thread, provided by default by the JVM; whenever a Java program is run, the JVM provides the main thread for its execution.
How to Create Threads Using the Java Programming Language?
We can create threads in Java in two ways, namely:
1. By extending the Thread class
2. By implementing the Runnable interface
1. By Extending the Thread Class
The Thread class provides constructors and methods for creating and performing operations on a thread; Thread itself implements the Runnable interface. We use the following constructors for creating a thread:
 Thread()
 Thread(Runnable r)
 Thread(String name)
 Thread(Runnable r, String name)

Sample code to create a thread by extending the Thread class (note that start() must be called to run the thread concurrently; calling run() directly would execute it on the current thread like an ordinary method call):

public class GFG extends Thread {
    // run() contains the code executed by the new thread
    public void run()
    {
        System.out.println("Thread Started Running...");
    }
    public static void main(String[] args)
    {
        GFG g1 = new GFG();
        // start() creates a new thread and invokes run() on it
        g1.start();
    }
}
Output
Thread Started Running...
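The second creation route listed above, implementing the Runnable interface, can be sketched as follows. The static flag is only there so the effect of run() can be observed after the thread finishes; the class name is illustrative.

```java
public class RunnableDemo implements Runnable {
    static volatile boolean ran = false;

    public void run() {
        ran = true;
        System.out.println("Thread Started Running...");
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(new RunnableDemo()); // Thread(Runnable r)
        t.start();                                 // runs run() on a new thread
        t.join();
    }
}
```

This route is often preferred, since the class remains free to extend something else.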
Concurrency in Function Language:
One of the promises of functional programming has only recently started to appear,
and it’s still very difficult to achieve in practice. Consider the following snippet of C
code:

a = someCalculation() + anotherCalculation();

In C, either of these routines may have side effects, so they must be executed sequentially. In a pure functional language, however, they cannot have side effects, which means they may be executed in either order.

In fact, in a language with lazy evaluation like Haskell, they may not be evaluated until the value of a is actually used.

Because they can be executed in either order, they can also be executed concurrently. If both are very small functions, there is not much benefit in this, but if they are time-consuming, then there is.

In a functional language, this transform is trivial for the compiler to do. The difficult bit is deciding when to do it. Running every function call in parallel will almost certainly be slower, even on a massively parallel machine, than running them all sequentially, simply because of the overhead involved in creating, or communicating with, all of the threads.
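The idea can be sketched in Java: because the two calculations below are pure (no side effects), they can safely run concurrently and the sum is the same in either order. The method names mirror the C snippet; the bodies are placeholder computations.

```java
import java.util.concurrent.CompletableFuture;

public class ParallelSum {
    static int someCalculation()    { return 20; }  // pure: no side effects
    static int anotherCalculation() { return 22; }  // pure: no side effects

    static int compute() {
        CompletableFuture<Integer> x =
            CompletableFuture.supplyAsync(ParallelSum::someCalculation);
        CompletableFuture<Integer> y =
            CompletableFuture.supplyAsync(ParallelSum::anotherCalculation);
        return x.join() + y.join();  // a = someCalculation() + anotherCalculation();
    }

    public static void main(String[] args) {
        System.out.println(compute());
    }
}
```

For such tiny functions the thread overhead outweighs any gain, which is exactly the trade-off described above; the transform pays off only when the two operands are expensive.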

Statement Level Concurrency:

Concurrency is naturally divided into instruction level, statement level (executing two or more statements simultaneously), program unit level (executing two or more subprogram units simultaneously), and program level (executing two or more programs simultaneously). Here we discuss only unit level and statement level concurrency.

Concurrent execution of program units can occur either physically on separate processors or logically in some time-sliced fashion on a single-processor computer system. Statement level concurrency is largely a matter of specifying how data should be distributed over multiple memories and which statements can be executed concurrently.

A task is a process (thread) running on a processor. A task can communicate with other tasks through shared variables or through message passing. Because tasks often work together to create simulations or solve problems, they must use some form of communication to synchronize their executions, share data, or both.

Synchronization is a mechanism that controls the order in which tasks execute. Two kinds of synchronization are required when tasks share data: cooperation and competition. Cooperation synchronization is required between task A and task B when task A must wait for task B to complete some specific activity before task A can continue its execution. Competition synchronization is required between two tasks when both require the use of some resource that cannot be simultaneously used.
Exception Handling and Event Handling:

Exception handling in Java is a powerful mechanism for handling runtime errors so that the normal flow of the application can be maintained.

This section covers Java exceptions, their types, and the difference between checked and unchecked exceptions.

Advantage of Exception Handling

The core advantage of exception handling is to maintain the normal flow of the
application. An exception normally disrupts the normal flow of the application; that
is why we need to handle exceptions. Let's consider a scenario:

1. statement 1;  
2. statement 2;  
3. statement 3;  
4. statement 4;  
5. statement 5;//exception occurs  
6. statement 6;  
7. statement 7;  
8. statement 8;  
9. statement 9;  
10.statement 10;  

Suppose there are 10 statements in a Java program and an exception occurs at statement 5; the rest of the code will not be executed, i.e., statements 6 to 10 will not be executed. However, when we perform exception handling, the rest of the statements will be executed. That is why we use exception handling in Java.
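The 10-statement scenario can be sketched as follows: without handling, the exception at "statement 5" would abort the rest of the method; with try-catch, the statements after it still run. The log strings are illustrative stand-ins for the statements.

```java
public class ExceptionFlow {
    static String runWithHandling() {
        StringBuilder log = new StringBuilder();
        log.append("statements 1-4;");
        try {
            int zero = 0;
            int x = 10 / zero;              // statement 5: exception occurs
            log.append("unreachable;");     // skipped: exception was thrown
        } catch (ArithmeticException e) {
            log.append("handled;");         // normal flow is maintained
        }
        log.append("statements 6-10");      // still executed
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(runWithHandling());
    }
}
```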

Types of Java Exceptions

There are mainly two types of exceptions: checked and unchecked. An error is considered a kind of unchecked exception. However, according to Oracle, there are three types of exceptions, namely:

1. Checked Exception
2. Unchecked Exception
3. Error

Difference between Checked and Unchecked Exceptions

1) Checked Exception

The classes that directly inherit the Throwable class except RuntimeException and
Error are known as checked exceptions. For example, IOException, SQLException,
etc. Checked exceptions are checked at compile-time.

2) Unchecked Exception

The classes that inherit the RuntimeException are known as unchecked exceptions.
For example, ArithmeticException, NullPointerException,
ArrayIndexOutOfBoundsException, etc. Unchecked exceptions are not checked at
compile-time, but they are checked at runtime.
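The contrast can be sketched as follows: IOException is checked (the compiler refuses to compile readFile without the try/catch or a throws clause), while ArithmeticException is unchecked (no handler is required to compile; it surfaces only at runtime). The file name below is illustrative and assumed not to exist.

```java
import java.io.FileReader;
import java.io.IOException;

public class CheckedVsUnchecked {
    static String readFile(String path) {
        try {
            new FileReader(path).close();   // checked: handler required to compile
            return "opened";
        } catch (IOException e) {
            return "checked: " + e.getClass().getSimpleName();
        }
    }

    static String divide(int a, int b) {
        try {
            return "result: " + (a / b);    // unchecked: handler is optional
        } catch (ArithmeticException e) {
            return "unchecked: " + e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(readFile("definitely-missing-file-1234.txt"));
        System.out.println(divide(1, 0));
    }
}
```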

3) Error

An error is irrecoverable. Some examples of errors are OutOfMemoryError, VirtualMachineError, AssertionError, etc.

Event Handling:
An event can be defined as a change in the state or behavior of an object caused by performing actions. Actions can be a button click, cursor movement, a keypress on the keyboard, page scrolling, etc.

The java.awt.event package provides the various event classes.

Classification of Events
 Foreground Events
 Background Events

1. Foreground Events
Foreground events are events that require user interaction to generate; they result from the user interacting with components in a Graphical User Interface (GUI). Interactions include clicking a button, scrolling the scroll bar, cursor movements, etc.

2. Background Events
Events that don’t require interactions of users to generate are known as background
events. Examples of these events are operating system failures/interrupts, operation
completion, etc.
Event Handling
Event handling is a mechanism to control events and to decide what should happen after an event occurs. To handle events, Java follows the Delegation Event model.

Delegation Event model

 It has Sources and Listeners.


 Source: Events are generated from the source. There are various sources
like buttons, checkboxes, list, menu-item, choice, scrollbar, text
components, windows, etc., to generate events.

 Listeners: Listeners are used for handling the events generated from the
source. Each of these listeners represents interfaces that are responsible
for handling events.
To perform Event Handling, we need to register the source with the listener.

Registering the Source With Listener

Different Classes provide different registration methods.


Syntax:

addTypeListener()

where Type represents the type of event.


Example 1: For a KeyEvent we use addKeyListener() to register.
Example 2: For an ActionEvent we use addActionListener() to register.
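A minimal non-GUI sketch of the Delegation Event model: a source keeps a list of registered listeners and notifies each one when an event occurs. All names here are illustrative (this is not the AWT API itself); AWT components follow the same pattern through registration methods such as addActionListener().

```java
import java.util.ArrayList;
import java.util.List;

public class EventModelDemo {
    interface ClickListener { void onClick(String event); }   // listener role

    static class Button {                                     // source role
        private final List<ClickListener> listeners = new ArrayList<>();
        void addClickListener(ClickListener l) { listeners.add(l); } // register
        void click() {                                        // event generated
            for (ClickListener l : listeners) l.onClick("button clicked");
        }
    }

    static final List<String> received = new ArrayList<>();  // for observation

    public static void main(String[] args) {
        Button button = new Button();
        button.addClickListener(received::add);               // register listener
        button.click();                                       // fire the event
        System.out.println(received);
    }
}
```

The source delegates the handling of the event to whichever listeners registered, which is exactly the division of labor between sources and listeners described above.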
