
UNIT-IV

Abstract Types
Prepared By:
Mrs. K. Pranathi
Asst. Professor
Syllabus
Abstract types: Data abstraction and encapsulation.
Introduction to data abstraction, design issues, language
examples, C++ parameterized ADTs, object-oriented
programming in Smalltalk, C++, Java, C#, and Ada 95.
Concurrency: Subprogram-level concurrency. Semaphores,
monitors, message passing. Java threads, C# threads.
Exception Handling: Exceptions, exception propagation.
Exception handlers in Ada, C++, and Java.
Logic Programming Language: Introduction and overview of logic
programming, basic elements of Prolog, applications of logic programming.
The Concept of Abstraction
• An abstraction is a view or representation of an
entity that includes only the most significant
attributes
• The concept of abstraction is fundamental in
programming (and computer science)
• Nearly all programming languages support
process abstraction with subprograms
• Nearly all programming languages designed
since 1980 support data abstraction
Introduction to Data Abstraction
• An abstract data type is a user-defined data type
that satisfies the following two conditions:
• The representation of, and operations on, objects
of the type are defined in a single syntactic unit.
• The representation of objects of the type is
hidden from the program units that use these
objects, so the only operations possible are those
provided in the type's definition.
Advantages of Data Abstraction
• Advantage of the first condition – Program
organization, modifiability (everything associated
with a data structure is together), and separate
compilation.
• Advantage of the second condition – Reliability: by
hiding the data representations, user code cannot
directly access objects of the type or depend on
the representation, allowing the representation to
be changed without affecting user code.
Language Requirements for ADTs
• A syntactic unit in which to encapsulate the
type definition
• A method of making type names and
subprogram headers visible to clients, while
hiding actual definitions
• Some primitive operations must be built into
the language processor
Design Issues
• What is the form of the container for the
interface to the abstract data type?
• Can abstract types be parameterized?
• What access controls are provided?
Language Examples: Ada
• The encapsulation construct is called a package:
– Specification package (the interface)
– Body package (implementation of the entities named in
the specification)
• Information Hiding – The specification package has
two parts, public and private.
• The name of the abstract type appears in the public
part of the specification package.
• This part may also include representations of
unhidden types
• The representation of the abstract type
appears in a part of the specification called
the private part.
• Private types have built-in operations for
assignment and comparison
• Limited private types have NO built-in
operations
Language Examples: Ada (contd..)
• Reasons for the public/private spec package:
• The compiler must be able to see the
representation after seeing only the spec
package (so the representation cannot be
placed only in the body package)
• Clients must see the type name, but not the
representation (the private part is not usable
by clients)
Language Examples: Ada (contd..)
• Having part of the implementation details (the
representation) in the spec package and part
(the method bodies) in the body package is not
good.
• One solution: make all ADTs pointers. Problems
with this:
– Difficulties with pointers
– Object comparisons
– Control of object allocation is lost
An Example in Ada
package Stack_Pack is
  type stack_type is limited private;
  max_size: constant := 100;
  function empty(stk: in stack_type) return Boolean;
  procedure push(stk: in out stack_type; elem: in Integer);
  procedure pop(stk: in out stack_type);
  function top(stk: in stack_type) return Integer;

private  -- hidden from clients

  type list_type is array (1..max_size) of Integer;
  type stack_type is record
    list: list_type;
    topsub: Integer range 0..max_size := 0;
  end record;
end Stack_Pack;
Language Examples: C++
• Based on C struct type and Simula 67 classes
• The class is the encapsulation device
• All instances of a class share a
single copy of the member functions
• Each instance of a class has its own copy of
the class data members
• Instances can be static, stack dynamic, or
heap dynamic
Language Examples: C++ (continued)
• Information Hiding
• Private clause for hidden entities
• Public clause for interface entities
• Protected clause for inheritance
Language Examples: C++ (continued)
• Constructors:
• Functions to initialize the data members of instances
(they do not create the objects)
• May also allocate storage if part of the object is heap-
dynamic
• Can include parameters to provide parameterization of
the objects
• Implicitly called when an instance is created
• Can be explicitly called
• Name is the same as the class name
Language Examples: C++ (continued)
• Destructors
• Functions to clean up after an instance is
destroyed; usually just to reclaim heap storage
• Implicitly called when the object’s lifetime
ends
• Can be explicitly called
• Name is the class name, preceded by a tilde
(~)
An Example in C++
class Stack {
private:
  int *stackPtr, maxLen, topPtr;
public:
  Stack() { // a constructor
    stackPtr = new int [100];
    maxLen = 99;
    topPtr = -1;
  }
  ~Stack() {delete [] stackPtr;} // a destructor
  void push(int num) {…}
  void pop() {…}
  int top() {…}
  int empty() {…}
};
A Stack class header file
// Stack.h - the header file for the Stack class
#include <iostream>
class Stack {
private: //** These members are visible only to other
         //** members and friends (see Section 11.6.4)
  int *stackPtr;
  int maxLen;
  int topPtr;
public:  //** These members are visible to clients
  Stack();     //** A constructor
  ~Stack();    //** A destructor
  void push(int);
  void pop();
  int top();
  int empty();
};
The code file for Stack
// Stack.cpp - the implementation file for the Stack class
#include <iostream>
#include "Stack.h"
using std::cerr;

Stack::Stack() { //** A constructor
  stackPtr = new int [100];
  maxLen = 99;
  topPtr = -1;
}

Stack::~Stack() {delete [] stackPtr;} //** A destructor

void Stack::push(int number) {
  if (topPtr == maxLen)
    cerr << "Error in push--stack is full\n";
  else stackPtr[++topPtr] = number;
}
Evaluation of ADTs in C++ and Ada
• C++ support for ADTs is similar to expressive
power of Ada
• Both provide effective mechanisms for
encapsulation and information hiding
• Ada packages are more general
encapsulations; classes are types
Language Examples: C++ (continued)
• Friend functions or classes provide access to
private members for unrelated units or
functions
– Necessary in C++
Language Examples: Java
• Similar to C++, except:
– All user-defined types are classes
– All objects are allocated from the heap and
accessed through reference variables
– Individual entities in classes have access
control modifiers (private or public), rather
than clauses
– Java has a second scoping mechanism, package
scope, which can be used in place of friends
• All entities in all classes in a package that do not
have access control modifiers are visible throughout
the package
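To make package scope concrete, here is a minimal Java sketch (the package
and class names are illustrative, not from the original slides):

package demo;

class Counter {
    int count;          // no access modifier: package scope, visible in demo
    private int secret; // private: hidden even from classes in the package
}

class Helper {
    void bump(Counter c) {
        c.count++;      // legal: Helper is in the same package
        // c.secret++;  // would not compile: secret is private to Counter
    }
}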
An Example in Java
class StackClass {
  private int [] stackRef;
  private int maxLen, topIndex;
  public StackClass() { // a constructor
    stackRef = new int [100];
    maxLen = 99;
    topIndex = -1;
  }
  public void push(int num) {…}
  public void pop() {…}
  public int top() {…}
  public boolean empty() {…}
}
Language Examples: C#
• Based on C++ and Java
• Adds two access modifiers, internal and
protected internal
• All class instances are heap dynamic
• Default constructors are available for all
classes
• Garbage collection is used for most heap
objects, so destructors are rarely used.
• structs are lightweight classes that do
not support inheritance
Language Examples: C# (continued)
• Common solution to need for access
to data members: accessor methods
(getter and setter)
• C# provides properties as a way of
implementing getters and setters
without requiring explicit method
calls
C# Property Example
public class Weather {
  public int DegreeDays { //** DegreeDays is a property
    get {return degreeDays;}
    set {
      if (value < 0 || value > 30)
        Console.WriteLine("Value is out of range: {0}", value);
      else degreeDays = value;
    }
  }
  private int degreeDays;
  ...
}
...
Weather w = new Weather();
int degreeDaysToday, oldDegreeDays;
...
w.DegreeDays = degreeDaysToday;
...
oldDegreeDays = w.DegreeDays;
Abstract Data Types in Ruby
• Encapsulation construct is the class
• Local variables have “normal” names
• Instance variable names begin with “at” signs (@)
• Class variable names begin with two “at” signs
(@@)
• Instance methods have the syntax of
Ruby functions (def … end)
• Constructors are named initialize (only one
per class)—implicitly called when new is called
• –If more constructors are needed, they must have
different names and they must explicitly call new
• Class members can be marked private or
public, with public being the default
• Classes are dynamic
Abstract Data Types in Ruby (continued)
class StackClass
  def initialize
    @stackRef = Array.new
    @maxLen = 100
    @topIndex = -1
  end
  def push(number)
    …
  end
  def pop
    …
  end
  def top
    …
  end
  def empty
    …
  end
end
Parameterized Abstract Data Types
• Parameterized ADTs allow designing an ADT
that can store elements of any type (among
other things) – only an issue for statically typed
languages
• Also known as generic classes
• C++, Ada, Java 5.0, and C# 2005 provide
support for parameterized ADTs
Parameterized ADTs in Ada
Ada Generic Packages
– Make the stack type more flexible by making the element type and
the size of the stack generic
generic
  Max_Size: Positive;
  type Elem_Type is private;
package Generic_Stack is
  type Stack_Type is limited private;
  function Top(Stk: in out Stack_Type) return Elem_Type;
  …
end Generic_Stack;

package Integer_Stack is new Generic_Stack(100, Integer);
package Float_Stack is new Generic_Stack(100, Float);
Parameterized ADTs in C++
Classes can be made somewhat generic by writing parameterized
constructor functions:

class Stack {
  …
  Stack(int size) {
    stk_ptr = new int [size];
    max_len = size - 1;
    top = -1;
  }
  …
};
…
Stack stk(100);
Parameterized ADTs in C++ (continued)
The stack element type can be parameterized by making the class a
template class
template <class Type>
class Stack {
private:
  Type *stackPtr;
  const int maxLen;
  int topPtr;
public:
  Stack() : maxLen(99) { // a const member must be set in the initializer list
    stackPtr = new Type[100];
    topPtr = -1;
  }
  …
};
Parameterized Classes in Java 5.0
• Generic parameters must be classes.
• Most common generic types are the
collection types, such as LinkedList and
ArrayList
• Eliminate the need to cast objects that are
removed
• Eliminate the problem of having multiple
types in a structure
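A minimal sketch of a user-defined generic class in Java 5.0+ (the class
name and its methods are illustrative, not from the original slides):

import java.util.ArrayList;

public class GenericStack<T> {
    private final ArrayList<T> items = new ArrayList<T>();

    public void push(T item) { items.add(item); }

    public T pop() { // the caller needs no cast on the removed element
        if (items.isEmpty())
            throw new IllegalStateException("stack is empty");
        return items.remove(items.size() - 1);
    }

    public boolean empty() { return items.isEmpty(); }
}

// Usage: the compiler rejects pushing anything but String:
//   GenericStack<String> s = new GenericStack<String>();
//   s.push("abc");
//   String top = s.pop();  // no cast required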
Parameterized Classes in C# 2005
• Similar to those of Java 5.0
• Elements of parameterized structures can be
accessed through indexing
Encapsulation Concepts
• Large programs have two special needs:
– Some means of organization, other than simply
division into subprograms
– Some means of partial compilation (compilation
units that are smaller than the whole program)
• Obvious solution: a grouping of
subprograms that are logically related into
a unit that can be separately compiled
(a compilation unit)
• Such collections are called encapsulations
Nested Subprograms
• Organizing programs by nesting subprogram definitions
inside the logically larger subprograms that use them
• Nested subprograms are supported in Ada, Fortran
95, Python, and Ruby
Encapsulation in C
• Files containing one or more subprograms can be
independently compiled
• The interface is placed in a header file
• Problem: the linker does not check types
between a header and associated implementation
• #include preprocessor specification – used to
include header files in applications
Encapsulation in C++
• Can define header and code files, similar to those of
C
• Or, classes can be used for encapsulation
– The class is used as the interface (prototypes)
– The member definitions are defined in a
separate file
• Friends provide a way to grant access to private
members of a class
Ada Packages
• Ada specification packages can include any
number of data and subprogram declarations
• Ada packages can be compiled separately
• A package’s specification and body parts can be
compiled separately
C# Assemblies
• A collection of files that appear to be a single
dynamic link library or executable
• Each file contains a module that can be
separately compiled
• A DLL is a collection of classes and methods that
are individually linked to an executing program
• C# has an access modifier called internal; an internal
member of a class is visible to all classes in the
assembly in which it appears.
Naming Encapsulations
• Large programs define many global names; need a
way to divide into logical groupings
• A naming encapsulation is used to create a new
scope for names
• C++ Namespaces
– Can place each library in its own namespace and qualify
names used outside with the namespace
– C# also includes namespaces
Naming Encapsulations (contd..)
• Java Packages
– Packages can contain more than one class
definition; classes in a package are partial friends
– Clients of a package can use fully qualified name
or use the import declaration
• Ada Packages
– Packages are defined in hierarchies which
correspond to file hierarchies
– Visibility from a program unit is gained with the
“with" clause
Naming Encapsulations (contd..)
• Ruby classes are name encapsulations, but Ruby also has
modules
• Typically encapsulate collections of constants and methods
• Modules cannot be instantiated or subclassed, and they
cannot define variables
• Methods defined in a module must include the
module’s name
• Access to the contents of a module is requested with the
require method
Summary
• The concept of ADTs and their use in program
design was a milestone in the development of
languages
• Two primary features of ADTs are the packaging of data with
their associated operations and information hiding
• Ada provides packages that simulate ADTs
• C++ data abstraction is provided by classes
• Java’s data abstraction is similar to C++
• Ada, C++, Java 5.0, and C# 2005 support
parameterized ADTs
• C++, C#, Java, Ada, and Ruby provide naming
encapsulations
Object Oriented Programming in
Smalltalk: Language Overview
• Smalltalk was the first programming language that fully
supported object-oriented programming.
• Smalltalk programming is based on objects, from
integer constants to large, complex computer software
systems.
• All computing in Smalltalk is done by sending a
message to an object to invoke one of its methods.
• A reply to a message is an object, which either returns
the requested information or simply notifies the sender
that the requested processing has been completed.
Language Overview(contd…)
• The basic difference between a message and a
subprogram call is: a message is sent to a data object,
specifically to one of the methods defined for the
object.
• The called method is then executed, often modifying
the data of the object to which the message was sent.
• A subprogram call is a message to the code of a
subprogram.
• In Smalltalk, object abstractions are classes, which are
very similar to the classes of SIMULA 67, C++, and Java.
Example Program in Smalltalk
• The following is a class definition,
instantiations of which can draw equilateral
polygons of any number of sides
class name                 Polygon
superclass                 Object
instance variable names    ourPen
                           numSides
                           sideLength

"Class methods"
  "Create an instance"
  new
    ^ super new getPen

  "Get a pen for drawing polygons"
  getPen
    ourPen <- Pen new defaultNib: 2

"Instance methods"
  "Draw a polygon"
  draw
    numSides timesRepeat: [ourPen go: sideLength; turn: 360 // numSides]

  "Set length of sides"
  length: len
    sideLength <- len

  "Set number of sides"
  sides: num
    numSides <- num
OOP in C++
• C++ has both functions and methods, hence it
supports both procedural and object oriented
programming.
• Operators in C++ can be overloaded, meaning the
user can define new versions of existing operators for
user-defined types.
• C++ methods can also be overloaded, meaning the
user can define more than one method with the
same name, provided either the numbers or types
of their parameters are different.
OOP in C++(contd..)
• Dynamic binding in C++ is provided by virtual
methods.
• These methods define type dependent operations,
using overloaded methods, within a collection of
classes that are related through inheritance.
• A pointer to an object of class A can also point to
objects of classes that have class A as an ancestor.
When this pointer points to an overloaded virtual
method, the method of the current type is chosen
dynamically.
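Dynamic binding can be illustrated with a minimal Java analogue (Java
instance methods are virtual by default; the class names here are
illustrative, not from the original slides):

class A {
    void describe() { System.out.println("A"); }
}

class B extends A {
    @Override
    void describe() { System.out.println("B"); }
}

public class DispatchDemo {
    public static void main(String[] args) {
        A ref = new B();  // a reference of class A bound to a B object
        ref.describe();   // prints "B": the method is chosen at run time
    }
}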
OOP in C++(contd..)
• Both methods and classes can be templated,
which means that they can be parameterized.
• For example, a method can be written as a
template method to allow it to have versions
for a variety of parameter types.
• C++ also supports multiple inheritance.
OOP in C++(contd..)
• C++ remains a widely used language because of the
availability of good and inexpensive compilers.
• Another reason is that it is almost completely
backward compatible with C, meaning that C programs
can be compiled as C++ programs with few changes.
• But C++ is a very large and complex language; it suffers
drawbacks similar to those of PL/I.
• It inherited most of the insecurities of C, which makes
it less safe than languages such as Ada and Java.
OOP in java (The Design Process)
• The design of Java started when there was a
need to develop software for consumer electronic
devices.
• As neither C nor C++ provided the necessary level of
reliability, Java was developed.
• Although C was small, it did not provide support
for object-oriented programming.
• C++ supported object-oriented programming, but it was
very large and complex, because it also provided support
for procedural programming.
OOP in Java (contd..)
• Java is based on c++ but it was specifically designed to
be smaller, simpler and more reliable.
• Like C++, Java has both classes and primitive types.
• Java arrays are instances of a pre-defined class,
whereas in C++ they are not.
• Java does not have pointers but its reference types
provide some of the capabilities of pointers.
• Java has a primitive Boolean type named boolean used
mainly for control expressions of its control statements
such as if and while.
OOP in Java (contd..)
• A significant difference between Java and many of its
predecessors that support object-oriented programming,
including C++, is that it is not possible to write stand-alone
subprograms in Java.
• All Java subprograms are methods and are defined in
classes. Furthermore, methods can be called only through a
class or object.
• Another important difference between C++ and Java is that
C++ supports multiple inheritance directly in its class
definitions.
• Among the C++ constructs that were not copied into Java are
structs and unions.
Concurrency
• Concurrency in software execution can occur at four different
levels:
-Instruction level(executing two or more machine instructions
simultaneously)
-Statement level(executing two or more high-level language
statements simultaneously)
-Unit level(executing two or more sub-program units
simultaneously)
-Program level(executing two or more programs simultaneously).
Because there are no language issues in instruction- and program-
level concurrency, they are not addressed here
The Evolution of Multiprocessor Architectures

• Late 1950s - One general-purpose processor and one or more
special-purpose processors for input and output operations
• Early 1960s - Multiple complete processors, used for program-level
concurrency
• Mid-1960s - Multiple partial processors, used for instruction-level
concurrency
• Single-Instruction Multiple-Data (SIMD) machines - the same
instruction goes to all processors, each with different data
• Multiple-Instruction Multiple-Data (MIMD) machines - independent
processors that can be synchronized (unit-level concurrency)
Categories of Concurrency
• A thread of control: in a program is the
sequence of program points reached as control
flows through the program.
• Physical concurrency - Multiple independent
processors ( multiple threads of control)
• Logical concurrency - The appearance of physical
concurrency is presented by time sharing one
processor (software can be designed as if there
were multiple threads of control)
Motivations for the Use of Concurrency
• Multiprocessor computers capable of physical
concurrency are now widely used
• Even if a machine has just one processor, a program
written to use concurrent execution can be faster
than the same program written for nonconcurrent
execution
• Involves a different way of designing software that
can be very useful—many real-world situations
involve concurrency
• Many program applications are now spread over
multiple machines, either locally or over a network
Introduction to Subprogram-Level
Concurrency
• A task is a unit of a program, similar to a subprogram, that can be
in concurrent execution with other units of the same program.
• Each task in a program can support one thread of control.
• Tasks are sometimes called processes.
• A task or process or thread is a program unit that can be in
concurrent execution with other program units
• Tasks differ from ordinary subprograms in that:
– A task may be implicitly started
– When a program unit starts the execution of a task, it is not necessarily
suspended
– When a task’s execution is completed, control may not return to the caller
• Tasks usually work together
Two General Categories of Tasks
• Heavyweight tasks execute in their own
address space
• Lightweight tasks all run in the same address
space – more efficient
• A task is disjoint if it does not communicate
with or affect the execution of any other task
in the program in any way
Task Synchronization
• A mechanism that controls the order in which
tasks execute
• Two kinds of synchronization
– Cooperation synchronization
– Competition synchronization
• Task communication is necessary for
synchronization, provided by:
- Shared nonlocal variables
- Parameters
- Message passing
Kinds of Synchronization
• Cooperation: Task A must wait for task B to
complete some specific activity before task A
can continue its execution, e.g., the producer-
consumer problem
• Competition: Two or more tasks must use
some resource that cannot be simultaneously
used, e.g., a shared counter
– Competition is usually provided by mutually
exclusive access.
Producer-Consumer Problem
• The Producer Consumer problem is a process
synchronization problem. In this problem,
there is a memory buffer of a fixed size. Two
processes access the shared buffer: Producer
and Consumer. A producer creates new items
and adds to the buffer, while a consumer picks
items from the shared buffer.
Scheduler
• Providing synchronization requires a
mechanism for delaying task execution
• Task execution control is maintained by a
program called the scheduler, which maps task
execution onto available processors
Task Execution States
• New - created but not yet started
• Ready - ready to run but not currently running
(no available processor)
• Running
• Blocked - has been running, but cannot now
continue (usually waiting for some event to
occur)
• Dead - no longer active in any sense
Liveness and Deadlock
• Liveness is a characteristic that a program unit
may or may not have
- In sequential code, it means the unit will
eventually complete its execution
• In a concurrent environment, a task can easily
lose its liveness
• If all tasks in a concurrent environment lose
their liveness, it is called deadlock
Design Issues for Concurrency
• Competition and cooperation synchronization
• Controlling task scheduling
• How can an application influence task
scheduling
• How and when tasks start and end execution
• How and when are tasks created
Methods of Providing Synchronization
• Semaphores
• Monitors
• Message Passing
Semaphores
• A semaphore is a simple mechanism that can
be used to provide synchronization of tasks.
• They are still used both in contemporary
languages and in library based concurrency
support systems.
• Edsger Dijkstra devised semaphores in 1965.
• Semaphores can also be used to provide
cooperation synchronization.
Guards

• A guard is a linguistic device that allows the
guarded code to be executed only when a
specified condition is true.
• To provide limited access to a data structure,
guards can be placed around the code that
accesses the structure.
• So a guard can be used to allow only one task to
access a particular shared data structure at a time.
• A semaphore is an implementation of a guard.
• A semaphore is a data structure that consists
of an integer and a queue that stores task
descriptors.
• A task descriptor is a data structure that
stores all of the relevant information about
the execution state of a task.
• An integral part of a guard mechanism is a procedure
for ensuring that all attempted executions of the
guarded code eventually take place.
• The typical approach is to store requests for access
that occur when access cannot be granted in the task
descriptor queue, from which they are later allowed to
leave and execute the guarded code.
• This is the reason a semaphore must have both a
counter and a task descriptor queue.
Semaphore (Summarized Points)
• A semaphore is a data structure consisting of a counter
and a queue for storing task descriptors
– A task descriptor is a data structure that stores all of the
relevant information about the execution state of the task
• Semaphores can be used to implement guards on the
code that accesses shared data structures
• Semaphores have only two operations, wait and release
(originally called P and V by Dijkstra)
• Semaphores can be used to provide both competition
and cooperation synchronization.
Cooperation Synchronization with
Semaphores
• Here we use an example of a shared buffer(a
chunk of memory) used by producers and
consumers to illustrate the different approaches
to providing cooperation and competition
synchronization.
• For cooperation synchronization, such a buffer must
have some way of recording both the number of
empty positions and the number of filled positions
in the buffer (to prevent buffer underflow and
overflow conditions).
Cooperation Synchronization with
Semaphores
• The counter component of a semaphore is
used for this. One semaphore variable, for
example emptyspots, can use its counter to
maintain the number of empty locations in a
shared buffer used by producers and consumers.
• fullspots can use its counter to maintain the
number of filled positions in the buffer.
Cooperation Synchronization with
Semaphores
• The queues of these semaphores can store the
descriptors of the tasks that have been
forced to wait for access to the buffer.
• The queue of emptyspots can store producer
tasks that are waiting for available positions in
the buffer, the queue of fullspots can store the
consumer tasks waiting for the values to be
placed in the buffer.
Cooperation Synchronization with
Semaphores
• The buffer is implemented as an ADT with the
operations DEPOSIT and FETCH as the only
ways to access the buffer
• Use two semaphores for cooperation:
emptyspots and fullspots
Cooperation Synchronization with
Semaphores
• DEPOSIT must first check emptyspots to
see if there is room in the buffer
• If there is room, the counter of emptyspots
is decremented and the value is inserted
• If there is no room, the caller is stored in the
queue of emptyspots
• When DEPOSIT is finished, it must increment
the counter of fullspots
Cooperation Synchronization with
Semaphores
• FETCH must first check fullspots to see if there is a
value
– If there is a full spot, the counter of fullspots is
decremented and the value is removed
– If there are no values in the buffer, the caller must be placed in
the queue of fullspots
– When FETCH is finished, it increments the counter of
emptyspots
• The operations of FETCH and DEPOSIT on the
semaphores are accomplished through two semaphore
operations named wait and release
Semaphores: Wait and Release Operations
wait(aSemaphore)
  if aSemaphore’s counter > 0 then
    decrement aSemaphore’s counter
  else
    put the caller in aSemaphore’s queue
    attempt to transfer control to a ready task
    -- if the task ready queue is empty,
    -- deadlock occurs
  end

release(aSemaphore)
  if aSemaphore’s queue is empty then
    increment aSemaphore’s counter
  else
    put the calling task in the task ready queue
    transfer control to a task from aSemaphore’s queue
  end
Competition Synchronization with
Semaphores
• A third semaphore, named access, is used
to control access (competition
synchronization)
– The counter of access will only have the values
0 and 1
– Such a semaphore is called a binary semaphore
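A minimal Java sketch of the DEPOSIT/FETCH scheme described above, using
java.util.concurrent.Semaphore (the class and field names follow the slides;
acquire() plays the role of wait and release() of release):

import java.util.concurrent.Semaphore;

class SharedBuffer {
    private final int[] buf = new int[100];
    private int in = 0, out = 0;
    private final Semaphore emptyspots = new Semaphore(100); // empty positions
    private final Semaphore fullspots  = new Semaphore(0);   // filled positions
    private final Semaphore access     = new Semaphore(1);   // binary semaphore

    void deposit(int value) throws InterruptedException {
        emptyspots.acquire();             // wait for an empty position
        access.acquire();                 // competition synchronization
        buf[in] = value; in = (in + 1) % buf.length;
        access.release();
        fullspots.release();              // one more filled position
    }

    int fetch() throws InterruptedException {
        fullspots.acquire();              // wait for a value
        access.acquire();
        int value = buf[out]; out = (out + 1) % buf.length;
        access.release();
        emptyspots.release();             // one more empty position
        return value;
    }
}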
Evaluation of Semaphores
• Misuse of semaphores can cause failures in
cooperation synchronization, e.g., the buffer
will overflow if the wait of fullspots is left
out
• Misuse of semaphores can cause failures in
competition synchronization, e.g., the
program will deadlock if the release of
access is left out
Monitors
• A monitor is an abstract data type for shared
data.
• The idea: encapsulate the shared data and its
operations to restrict access.
• All synchronization operations on shared data
are gathered into a single program unit; this
concept is called a monitor.
Competition Synchronization
• Shared data is resident in the monitor (rather than in the
client units).
• The programmer does not synchronize mutually exclusive
access to shared data through the use of semaphores or
other mechanisms.
• Because the access mechanisms are part of the monitor,
implementation of a monitor can be made to guarantee
synchronized access by allowing only one access at a time.
• Calls to monitor procedures are implicitly blocked and
stored in a queue if the monitor is busy at the time of the
call.
Cooperation Synchronization
• Although mutually exclusive access to shared data is an
integral part of a monitor, cooperation between
processes is still the task of the programmer.
• Here the programmer must guarantee that a shared
buffer does not experience underflow or overflow.
• Different programming languages provide different
ways of programming cooperation synchronization,
all of which are related to semaphores.
Cooperation Synchronization
• Consider a program containing four tasks and a monitor
that provides synchronized access to a concurrently
shared buffer.
[Figure omitted: four tasks connected to a monitor whose
interface consists of two operations, insert and remove, for
the insertion and removal of data.]
• The monitor appears exactly like an abstract
data type, i.e., a data structure with limited
access.
Message Passing
• Message passing is a general model for concurrency
– It can model both semaphores and monitors
• To support concurrent tasks with message passing, a
language needs:
– A mechanism to allow a task to indicate when it is willing to
accept messages
– A way to remember who is waiting to have its message
accepted and some “fair” way of choosing the next message
• When a sender task’s message is accepted by a receiver
task, the actual message transmission is called a
rendezvous
Message Passing
• Furthermore, messages usually cause associated
processing in the receiver, which might not be
sensible if other processing is incomplete.
• The alternative is to provide a linguistic mechanism
that allows a task to specify to other tasks when it is
ready to receive messages.
• A task can be designed such that it can suspend its
execution at some point, either because it is idle or
because it needs information from another unit
before it can continue.
Message Passing
• However if task A is waiting for a message at the
time task B sends that message, the message can
be transmitted.
• This actual transmission of the message is called a
rendezvous.
• A rendezvous can occur only if both the sender and
receiver want it to happen.
• During a rendezvous the information of the
message can be transmitted in either or both
directions
Message Passing
• Message passing can either be synchronous or
asynchronous. Here we address synchronous message
passing.
• The basic concept of synchronous message passing is
that tasks are often busy, and when busy they cannot be
interrupted by other units.
• Suppose task A and task B are both in execution, and A
wishes to send a message to B.
• Clearly, if B is busy it is not desirable to allow another
task to interrupt it. That would disrupt B’s current processing.
Java Threads
• The concurrent units in Java are methods named
run, whose code can be in concurrent execution
with other such methods (in other objects) and with
the main method.
• The process in which a run method executes is
called a thread.
• Java’s threads are lightweight tasks, which means
that they all run in the same address space. This is
different from Ada tasks, which are heavyweight
tasks (they run in their own address spaces).
Java Threads
• The concurrent units in Java are methods named run
– A run method code can be in concurrent execution with other
such methods
– The process in which the run methods execute is called a
thread
class MyThread extends Thread {
  public void run() {…}
}
…
Thread myTh = new MyThread();
myTh.start();
Controlling Thread Execution
• The Thread class has several methods to
control the execution of threads
– The yield method is a request from the running thread
to voluntarily surrender the processor
– The sleep method can be used by the caller of
the method to block the thread
– The join method is used to force a method to
delay its execution until the run method of
another thread has completed its execution
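A brief sketch of these methods in use, with MyThread as defined in the
earlier slide (the 100 ms delay is illustrative):

public class ThreadControlDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new MyThread();
        worker.start();
        Thread.sleep(100);  // blocks the calling (main) thread for ~100 ms
        worker.join();      // waits until worker's run method has completed
    }
}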
Thread Priorities
• A thread’s default priority is the same as that of the
thread that created it
– If main creates a thread, its default priority is
NORM_PRIORITY
• The Thread class defines two other priority constants,
MAX_PRIORITY and MIN_PRIORITY
• The priority of a thread can be changed with
the setPriority method
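A brief code fragment (MyThread as defined earlier):

Thread myTh = new MyThread();
myTh.setPriority(Thread.MAX_PRIORITY); // or MIN_PRIORITY, NORM_PRIORITY
myTh.start();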
Competition Synchronization with Java
Threads
• A method that includes the synchronized modifier disallows any
other method from running on the object while it is in execution

public synchronized void deposit( int i) {…}
public synchronized int fetch() {…}

• The above two methods are synchronized which prevents them from
interfering with each other
• If only part of a method must be run without interference, it can be
synchronized through a synchronized statement:
synchronized (expression)
  statement
Cooperation Synchronization with Java
Threads
• Cooperation synchronization in Java is achieved via
wait, notify, and notifyAll methods
– All methods are defined in Object, which is the root class in
Java, so all objects inherit them
• The wait method must be called in a loop
• The notify method is called to tell one waiting thread
that the event it was waiting for has happened
• The notifyAll method awakens all of the threads on
the object’s wait list
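A minimal sketch of cooperation synchronization with wait and notifyAll,
in the form of a bounded buffer (the class and field names are illustrative):

class Buffer {
    private final int[] buf = new int[100];
    private int count = 0, in = 0, out = 0;

    public synchronized void deposit(int value) throws InterruptedException {
        while (count == buf.length)   // wait must be called in a loop
            wait();
        buf[in] = value; in = (in + 1) % buf.length; count++;
        notifyAll();                  // wake threads waiting in fetch
    }

    public synchronized int fetch() throws InterruptedException {
        while (count == 0)
            wait();
        int value = buf[out]; out = (out + 1) % buf.length; count--;
        notifyAll();                  // wake threads waiting in deposit
        return value;
    }
}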
C# Threads
• Loosely based on Java but there are significant
differences
• Basic thread operations
– Any method can run in its own thread
– A thread is created by creating a Thread object
– Creating a thread does not start its concurrent execution; it
must be requested through the Start method
– A thread can be made to wait for another thread to finish with
Join
– A thread can be suspended with Sleep
– A thread can be terminated with Abort
Synchronizing Threads
• Three ways to synchronize C# threads
– The Interlocked class
• Used when the only operations that need to be
synchronized are incrementing or decrementing of an
integer
– The lock statement
• Used to mark a critical section of code in a thread
lock (expression) {… }
– The Monitor class
• Provides four methods that can be used to provide
more sophisticated synchronization
Basic Concepts in Exception Handling I
• In the course of a program’s execution, many
events may occur that were not expected by
the programmer.
• We distinguish between two such classes of
events:
– Those that are detected by hardware: e.g., disk
read errors, end-of-file
– Those that are software-detectable: e.g., subscript
range errors
Basic Concepts in Exception Handling II
• Definition: An exception is an unusual event that is
detectable by either hardware or software and that
may require special processing.
• Terminology: The special processing that may be
required when an exception is detected is called
exception handling. The processing is done by a code
unit or segment called an exception handler. An
exception is raised when its associated event occurs.
User-Defined Exception Handling
• When a language does not include specific exception
handling facilities, the user often handles software
detections by him/herself.
• This is typically done in one of three ways:
– Use of a status variable (or flag) which is assigned a value in
a subprogram according to the correctness of its
computation. [Used in standard C library functions]
– Use of a label parameter in the subprogram to make it
return to different locations in the caller according to the
value of the label. [Used in Fortran].
– Define the handler as a separate subprogram and pass its
name as a parameter to the called unit. But this means that
a handler subprogram must be sent with every call to every
subprogram.
Advantages to Built-in Exception
Handling
• Without built-in Exception Handling, the code required to
detect error conditions can considerably clutter a program.
• Built-in Exception Handling often allows exception
propagation. i.e., an exception raised in one program unit can
be handled in some other unit in its dynamic or static
ancestry. A single handler can thus be used in different
locations.
• Built-in Exception Handling forces the programmer to
consider all the events that could occur and their handling.
This is better than not thinking about them.
• Built-in Exception Handling can simplify the code of programs
that deal with unusual situations. (Such code would normally
be very convoluted without it).
Illustration of an Exception Handling
Mechanism
void example() {
  …
  average = sum / total;
  …
  return;
  /* Exception handlers */
  when zero_divide {
    average = 0;
    printf("Error-divisor (total) is zero\n");
  }
}

The exception of division by zero, which is implicitly raised, causes
control to transfer to the appropriate handler, which is then executed.
Design Issues for Exception Handling I:
Exception Binding
• Binding an exception occurrence to an exception
handler:
– At the unit level: how can the same exception raised at
different points in the unit be bound to different handlers
within the unit?
– At a higher level: if there is no exception handler local to
the unit, should the exception be propagated to other
units? If so, how far? [Note: if handlers must be local, then
many need to be written. If propagation is permitted, then
the handler may need to be too general to really be
useful.]
Design Issues for Exception Handling
II:Continuation
• After an exception handler executes, either control
can transfer to somewhere in the program outside of
the handler code, or program execution can
terminate.
– Termination is the simplest solution and is often
appropriate.
– Resumption is useful when the condition encountered is
unusual, but not erroneous. In this case, some convention
should be chosen as to where to return:
• At the statement that raised the exception?
• At the statement following the statement that raised the
exception?
• At some other unit?
Design Issues for Exception Handling III:
Others
• Is finalization—the ability to complete some computations at
the end of execution regardless of whether the program
terminated normally or because of an exception—supported?
• How are user-defined exceptions specified?
• Are there pre-defined exceptions?
• Should it be possible to disable predefined exceptions?
• If there are pre-defined exceptions, should there be default
exception handlers for programs that do not provide their
own?
• Can pre-defined exceptions be explicitly raised?
• Are hardware-detectable errors treated as exceptions that
may be handled?
Exception Handling in Java: Class Hierarchy
for Exceptions

[Figure omitted: the Java class hierarchy for exceptions.]
• Errors thrown by the JVM (class Error and its descendants) are never
thrown by user programs and should never be handled there.
• RuntimeException is usually thrown by the JVM when a user
program causes an error.
Exception Handling in Java: Exception
Handlers
• A try construct includes a compound statement
called the try clause and a list of exception handlers:
try {
  //** Code that is expected to raise an exception
}
catch (formal parameter) {
  //** A handler body
}
…
catch (formal parameter) {
  //** A handler body
}
Exception Handling in Java: Binding
Exceptions to Handlers
• An exception is thrown using the throw statement.
E.g.: throw new MyException (“a message to specify the
location of the error”)
• Binding: If an exception is thrown in the compound
statement of a try construct, it is bound to the first
handler (catch function) immediately following the try
clause whose parameter is the same class as the
thrown object, or an ancestor of it. If a matching
handler is found, the throw is bound to it and it is
executed.
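A minimal sketch of defining, throwing, and catching a user-defined
exception, in the style of the MyException mentioned above (the class
body and message are illustrative):

class MyException extends Exception {
    public MyException(String message) { super(message); }
}

public class ThrowDemo {
    static void check(int index) throws MyException {
        if (index < 0)
            throw new MyException("negative index: " + index);
    }

    public static void main(String[] args) {
        try {
            check(-1);
        } catch (MyException e) { // bound here: the parameter's class matches
            System.out.println(e.getMessage());
        }
    }
}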
Exception Handling in Java: The
finally clause
• Sometimes, a process must be executed regardless of
whether an exception is raised or not and handled or
not.
• This occurs, for example, in the case where a file must
be closed or an external resource released, regardless
of the outcome of the program.
• This is done by adding a finally clause at the end of
the list of handlers, just after a complete try
construct.
• The finally clause is executed in all cases whether or
not try throws an exception, and whether or not it is
caught by a catch clause.
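A minimal sketch of a try/catch/finally construct that releases a file in
all cases (the file name is illustrative):

import java.io.FileReader;
import java.io.IOException;

public class FinallyDemo {
    public static void main(String[] args) {
        FileReader in = null;
        try {
            in = new FileReader("data.txt");
            System.out.println(in.read());   // may throw IOException
        } catch (IOException e) {
            System.out.println("I/O error: " + e.getMessage());
        } finally {
            // executed whether or not an exception was raised or caught
            try { if (in != null) in.close(); } catch (IOException ignored) {}
        }
    }
}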
Exception Handling in Ada and C++
• In C++ an exception is "thrown and caught,"
whereas in Ada it is "raised and handled."
• In Ada, an exception handler, if present, is always
the final segment of the executable part of a
program unit or block.
• It follows the reserved word exception and
precedes the final end of the program unit or
block.
Logical Programming
• The major difference between logic
programming and other programming
languages (imperative and functional):
– Every data item in logic programming is
written in a specific representation (symbolic
logic)
• Prolog is the most widely used logic
programming language
Introduction
• Prolog specifies the way the computer
carries out a computation, and this is divided
into 3 parts:
– the logical declarative semantics of Prolog
– new facts that can be inferred from the given facts
– explicit control information supplied by the
programmer
Symbolic representation: Predicate Calculus
• Predicate calculus is the mathematical representation of
formal logic.
• First-order predicate logic (FOPL) is a particular form of
symbolic logic that is used for logic programming.
[Diagram omitted: a hierarchy from Proposition through Logic
Formalism and Symbolic Logic up to Predicate Calculus, which is
divided into FOPL and higher-order PL.]
Symbolic representation: Predicate Calculus
• Symbolic logic is used for the three basic needs of
formal logic:
– to express propositions
– to express the relationships between propositions
– to describe how new propositions can be inferred
from other propositions that are assumed to be
true
Symbolic representation: Predicate Calculus
• Formal logic was developed to provide a method
for describing propositions.
Symbolic representation: Predicate Calculus
• A proposition is a logical statement, also known as a fact.
• It consists of objects and the relationships of objects to
each other.
Proposition
• Objects:
– A constant represents an object, or
– A variable represents different objects at different times
• Simple propositions, called atomic propositions, consist of
compound terms – one element of a mathematical relation,
written in a form that has the appearance of
mathematical function notation.

Example (constants):
single parameter (1-tuple): man(jake)
double parameter (2-tuple): like(bob, steak)
Proposition
• Two modes for propositions:
– a proposition defined to be true (a fact), and
– a proposition whose truth is something that is to
be determined (a query)
• Compound propositions have two or more
atomic propositions, which are connected by
logical operators (in the same way as logic
expressions in imperative languages)
Logic operators
Name          Symbol   Example   Meaning
negation      ¬        ¬a        not a
conjunction   ∩        a ∩ b     a and b
disjunction   ∪        a ∪ b     a or b
equivalence   ≡        a ≡ b     a is equivalent to b
implication   ⊃        a ⊃ b     a implies b
              ⊂        a ⊂ b     b implies a
Compound propositions
Example:
a ∩ b ⊃ c
a ∩ ¬b ⊃ d
(a ∩ (¬b)) ⊃ d

Precedence (highest to lowest):
¬
∩, ∪
≡, ⊃, ⊂
Variables in Proposition
• Variables appear in propositions only when introduced
by special symbols called quantifiers
• Predicate calculus includes two quantifiers, where X is
a variable and P is a proposition:

Name         Example   Meaning
universal    ∀X.P      For all X, P is true
existential  ∃X.P      There exists a value of X such that P is true
Variables in Proposition
Example
∀X.(woman(X) ⊃ human(X))
– for any value of X, if X is a woman, then X is a human
(in natural language: a woman is a human)
Clausal Form
• A simple form of proposition; it is a standard form
for propositions, without loss of generality
• Why do we need to transform predicate calculus into
clausal form?
– There are too many different ways of stating propositions
that have the same meaning

Example:
∀X.(woman(X) ⊃ human(X))
∀X.(man(X) ⊃ human(X))
Clausal Form
Example:
likes(bob, trout) ⊂ likes(bob, fish) ∩ fish(trout)
(the left side of ⊂ is the consequent; the right side is the antecedent)

• Characteristics of clausal form:
– Existential quantifiers are not required
– Universal quantifiers are implicit in the use of variables
in the atomic propositions
– No operators other than conjunction and disjunction
are required
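For reference, the general clausal form can be stated as follows (this
formulation follows standard textbook presentations; it is not spelled out
in the slides above):

B1 ∪ B2 ∪ … ∪ Bn ⊂ A1 ∩ A2 ∩ … ∩ Am

meaning: if all of the antecedents A1 … Am are true, then at least one of
the consequents B1 … Bn is true.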
Proving Theorems
• A method of inference over a collection of
propositions:
– use a collection of propositions to determine
whether any interesting or useful facts can be
inferred from them

• Introduced by Alan Robinson (1965)


Proving Theorems
• Alan Robinson introduced resolution in
automatic theorem proving
– resolution is an inference rule that allows inferred
propositions to be computed from given
propositions
– resolution was devised to be applied to
propositions in clausal form
Proving Theorems
• Idea of resolution:

given P1 ⊂ P2 and Q1 ⊂ Q2,
where P1 is identical to Q2,
infer Q1 ⊂ P2
Proving Theorems
Example:

older(joanne, jake) ⊂ mother(joanne, jake)
wiser(joanne, jake) ⊂ older(joanne, jake)

∴ wiser(joanne, jake) ⊂ mother(joanne, jake)
Proving Theorems
Example:

father(bob, jake) ∪ mother(bob, jake) ⊂ parent(bob, jake)
gfather(bob, fred) ⊂ father(bob, jake) ∩ father(jake, fred)

∴ mother(bob, jake) ∪ gfather(bob, fred)
  ⊂ parent(bob, jake) ∩ father(jake, fred)
Proving Theorems
• The process of determining useful values for
variables during resolution is called unification.

• Unification
– Hypotheses: the original propositions
– Goal: presented as the negation of the theorem
– Propositions used in resolution must be presented as
Horn clauses (clauses with at most one proposition
in the consequent)
Applications of Symbolic Computation
• Relational databases
• Mathematical logic
• Abstract problem solving
• Understanding natural language
• Design automation
• Symbolic equation solving
• Biochemical structure analysis
• Many areas of artificial intelligence
