
18CSC207J – Advanced Programming Practice

C. Arun, Asst. Prof., Dept. of Software Engineering, School of Computing, SRMIST


Imperative
Programming Paradigm
Topics
■ Program state, instructions to change the program state

■ Combining Algorithms and Data Structures

■ Imperative vs. Declarative Programming

■ Other Languages: PHP, Ruby, Perl, Swift

■ Demo: Imperative Programming in Python


INTRODUCTION
■ In a computer program, variables store data in memory locations. The contents of these locations at any given point in the program's execution are called the program's state.


■ Imperative programming is characterized by programming with state and with commands which modify the state.

■ The first imperative programming languages were machine languages.
Machine Language
■ Each instruction performs a very specific task, such as a load, a jump, or an ALU operation on a unit of data in a CPU register or memory.
• For example:
– rs, rt, and rd indicate register operands
– shamt gives a shift amount
– the address or immediate fields contain an operand directly
Assembly Code
■ Machine code: 10110000 01100001
■ Equivalent assembly code: B0 61
– B0 means 'Move a copy of the following value into AL' (AL is a register)
– 61 is the hexadecimal representation of the value 01100001, i.e., 97 decimal
■ Intel assembly language:
■ MOV AL, 61h ; Load AL with 97 decimal (61 hex)
Other Languages
■ FORTRAN (FORmula TRANslation) was the first high-level language to gain wide acceptance. It was designed for scientific applications and featured an algebraic notation, types, subprograms, and formatted input/output.
■ COBOL (COmmon Business Oriented Language) was designed at the initiative of the U.S. Department of Defense in 1959 and implemented in 1960 to meet the need for business data processing applications.
■ ALGOL 60 (ALGOrithmic Language) was designed in 1960 by an international committee for use in scientific problem solving.
Evolutionary developments (features each language contributed to PL/I)
• ALGOL to PL/I: block structure, control statements, recursion
• FORTRAN to PL/I: subprograms, formatted I/O
• COBOL to PL/I: file manipulation, records
• LISP to PL/I: dynamic storage allocation, linked structures
OVERVIEW
■ In imperative programming, a name may be
assigned to a value and later reassigned to
another value.
■ The collection of names and the associated
values and the location of control in the
program constitute the state.
■ The state is a logical model of storage which is an association between memory locations and values.
■ A program in execution generates a sequence of
states.
■ The transition from one state to the next is
determined by assignment operations and sequencing
commands.
Highlights
■ Assignment
■ goto commands
■ Structured programming
■ Command
■ Statement
■ Procedure
■ Control flow
■ Imperative language
■ Assertions
■ Axiomatic semantics
■ State
■ Variables
■ Instructions
■ Control structures

Declarative vs. Imperative
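A minimal sketch of the contrast in Python (an assumed example; the slide's original code screenshots did not survive extraction, and the sum-of-squares task is illustrative, not from the slides):

nums = [1, 2, 3, 4, 5]

# Declarative: state what result is wanted and let built-ins
# handle iteration and accumulation.
total = sum(n * n for n in nums)

# Imperative: say how to compute it, mutating state (total)
# with explicit commands, one step at a time.
total = 0
for n in nums:
    total += n * n

print(total)  # 55 in both cases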
DEMO
An algorithm to add two numbers entered by the user
Step 1: Start
Step 2: Declare variables num1, num2 and sum.
Step 3: Read values num1 and num2.
Step 4: Add num1 and num2 and assign the result to sum: sum ← num1 + num2
Step 5: Display sum

Step 6: Stop
Addition of two numbers entered by the user
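A minimal imperative Python version of this demo (an assumed reconstruction; the slide's original code did not survive extraction):

num1 = float(input("Enter first number: "))    # Step 3: read values
num2 = float(input("Enter second number: "))
sum = num1 + num2                              # Step 4: sum ← num1 + num2
print("The sum is", sum)                       # Step 5: display sum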
An Algorithm to Get n Numbers, Print Them, and Find Their Sum
Step 1: Start
Step 2: Declare variable sum = 0.
Step 3: Get the value of the limit "n".
Step 4: If n numbers have been read, go to Step 7; else go to Step 5.
Step 5: Get a number from the user, store it, and add it to sum.
Step 6: Go to Step 4.
Step 7: If all numbers have been printed, go to Step 9; else go to Step 8.
Step 8: Print the next number and go to Step 7.
Step 9: Display sum.
Step 10: Stop.
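A minimal imperative Python version of this algorithm (an assumed example, not from the slides):

total = 0                                # Step 2: sum = 0
numbers = []
n = int(input("Enter the limit n: "))    # Step 3: get the limit
for i in range(n):                       # Steps 4-6: read and accumulate
    num = int(input("Enter a number: "))
    numbers.append(num)
    total += num
for num in numbers:                      # Steps 7-8: print the numbers
    print(num)
print("Sum =", total)                    # Step 9: display sum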
Parallel & Concurrent
Programming Paradigm
Introduction

• A system is said to be parallel if it can support two or more actions executing simultaneously.
• The evolution of parallel processing, even if slow, gave rise to a considerable variety of programming paradigms.
• Parallelism types:
• Explicit parallelism
• Implicit parallelism
• Architectures: message-passing architecture, shared-memory architecture

Explicit parallelism
• Explicit parallelism is characterized by the presence of explicit constructs in the programming language, aimed at describing (to a certain degree of detail) the way in which the parallel computation will take place.
• A wide range of solutions exists within this framework. One extreme is represented by the "ancient" use of basic, low-level mechanisms to deal with parallelism (fork/join primitives, semaphores, etc.) added to existing programming languages. Although this allows the highest degree of flexibility (any form of parallel control can be implemented in terms of the basic low-level primitives), it leaves the additional layer of complexity completely on the shoulders of the programmer, making the task extremely complicated.
Implicit Parallelism
• Implicit parallelism allows programmers to write their programs without any concern for the exploitation of parallelism. Exploitation of parallelism is instead performed automatically by the compiler and/or the runtime system. In this way the parallelism is transparent to the programmer, keeping the complexity of software development at the level of standard sequential programming.
• Extracting parallelism implicitly is not an easy task. For imperative programming languages, the complexity of the problem is almost prohibitive, and positive results are possible only for restricted classes of applications (e.g., applications which perform intensive operations on arrays).
• Declarative programming languages, and in particular functional and logic languages, are characterized by a very high level of abstraction, allowing the programmer to focus on what the problem is and leaving implicit many details of how the problem should be solved.
• Declarative languages have opened new doors to the automatic exploitation of parallelism. Their focus on a high-level description of the problem and their mathematical nature are positive properties for the implicit exploitation of parallelism.
Methods for parallelism
There are many methods of programming parallel computers. Two of the most common are message passing and data parallel.
1. Message Passing - the user makes calls to libraries to explicitly share information between processors.
2. Data Parallel - data partitioning determines parallelism
3. Shared Memory - multiple processes sharing common memory space
4. Remote Memory Operation - set of processes in which a process can access the memory of another process without its
participation
5. Threads - a single process having multiple (concurrent) execution paths
6. Combined Models - composed of two or more of the above.
Methods for parallelism
Message Passing:
• Each processor has direct access only to its local memory
• Processors are connected via a high-speed interconnect
• Data exchange is done via explicit processor-to-processor communication, i.e., processes communicate by sending and receiving messages (send/receive)
• Data transfer requires cooperative operations to be performed by each process (a send operation must have a matching receive), as in the sketch below
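A minimal message-passing sketch in Python (an assumed example using the standard multiprocessing module, not from the slides): two processes exchange data only via explicit send/receive over a pipe.

import multiprocessing

def sender(conn):
    conn.send([1, 2, 3])   # explicit send ...
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = multiprocessing.Pipe()
    p = multiprocessing.Process(target=sender, args=(child_conn,))
    p.start()
    print(parent_conn.recv())   # ... matched by an explicit receive
    p.join()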
Data Parallel:
• Each process works on a different part of the same data structure (see the sketch below)
• Processors have direct access to global memory and I/O through a bus or fast switching network
• Each processor also has its own memory (cache)
• Data structures are shared in a global address space
• Concurrent access to shared memory must be coordinated
• All message passing is done invisibly to the programmer
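A minimal data-parallel sketch (an assumed example, not from the slides): two worker processes update disjoint slices of one shared array in a global address space.

import multiprocessing

def scale(shared, start, end):
    for i in range(start, end):
        shared[i] *= 2          # each process works on its own part

if __name__ == '__main__':
    data = multiprocessing.Array('i', range(8))   # shared data structure
    mid = len(data) // 2
    ps = [multiprocessing.Process(target=scale, args=(data, 0, mid)),
          multiprocessing.Process(target=scale, args=(data, mid, len(data)))]
    for p in ps: p.start()
    for p in ps: p.join()
    print(list(data))           # every element doubled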
Steps in Parallelism
• Independently of the specific paradigm considered, in order to execute a program which exploits parallelism, the programming language must supply the means to:
• Identify parallelism, by recognizing the components of the program execution that will be (potentially) performed by
different processors;
• Start and stop parallel executions;
• Coordinate the parallel executions (e.g., specify and implement interactions between concurrent components).
Ways for Parallelism
Functional Decomposition (Functional Parallelism)
• Decomposing the problem into different tasks which can be distributed to multiple processors for simultaneous execution
• Good to use when there is no static structure or fixed determination of the number of calculations to be performed
Domain Decomposition (Data Parallelism)
• Partitioning the problem's data domain and distributing portions to multiple processors for simultaneous execution
• Good to use for problems where:
• data is static (factoring and solving large matrix or finite difference calculations)
• dynamic data structure tied to single entity where entity can be subsetted (large multi-body problems)
• domain is fixed but computation within various regions of the domain is dynamic (fluid vortices models)
Parallel Programming Paradigm
• Phase parallel
• Divide and conquer
• Pipeline
• Process farm
• Work pool
Note:
• The parallel program consists of a number of supersteps, and each superstep has two phases: a computation phase and an interaction phase
Phase Parallel Model
• The phase-parallel model offers a paradigm that is widely used in
parallel programming.
• The parallel program consists of a number of supersteps, and each
has two phases.
• In a computation phase, multiple processes each perform an
independent computation C.
• In the subsequent interaction phase, the processes perform
one or more synchronous interaction operations, such as a
barrier or a blocking communication.
• Then the next superstep is executed; a minimal sketch follows.
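A minimal phase-parallel sketch in Python (an assumed example, not from the slides): each superstep is an independent computation phase followed by a barrier, the synchronous interaction phase.

import threading

N = 3
barrier = threading.Barrier(N)          # interaction: all N must arrive

def process(pid):
    for step in range(2):               # two supersteps
        print(f"process {pid}: computing in superstep {step}")
        barrier.wait()                  # synchronous interaction phase

threads = [threading.Thread(target=process, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()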
Divide and Conquer & Pipeline model
• A parent process divides its workload into several smaller pieces
and assigns them to a number of child processes.
• The child processes then compute their workload in parallel and
the results are merged by the parent.
• The dividing and the merging procedures are done recursively.
• This paradigm is very natural for computations such as quicksort (a minimal sketch follows).
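A minimal divide-and-conquer sketch in Python (an assumed example, not from the slides): the parent splits the data, child processes sort the halves in parallel, and the parent merges the results.

import multiprocessing
import heapq

def sort_part(part):
    return sorted(part)               # a child's share of the workload

if __name__ == '__main__':
    data = [9, 3, 7, 1, 8, 2, 6, 4]
    halves = [data[:len(data)//2], data[len(data)//2:]]   # divide
    with multiprocessing.Pool(2) as pool:
        sorted_halves = pool.map(sort_part, halves)       # conquer in parallel
    print(list(heapq.merge(*sorted_halves)))              # parent merges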

Pipeline
• In the pipeline paradigm, a number of processes form a virtual
pipeline.
• A continuous data stream is fed into the pipeline, and the
processes execute at different pipeline stages simultaneously in an
overlapped fashion.
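A minimal pipeline sketch in Python (an assumed example, not from the slides): each stage is a thread that reads from its input queue and feeds the next stage.

import threading, queue

q = queue.Queue()                  # channel between the two stages

def stage1():                      # first stage: square each item
    for x in range(5):
        q.put(x * x)
    q.put(None)                    # sentinel: end of the data stream

def stage2():                      # second stage: add 1 and print
    while True:
        x = q.get()
        if x is None:
            break
        print(x + 1)

t1 = threading.Thread(target=stage1)
t2 = threading.Thread(target=stage2)
t1.start(); t2.start()
t1.join(); t2.join()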
Process Farm & Work Pool Model
• This paradigm is also known as the master-slave paradigm.
• A master process executes the essentially sequential part of the parallel program
and spawns a number of slave processes to execute the parallel workload.
• When a slave finishes its workload, it informs the master which assigns a new
workload to the slave.
• This is a very simple paradigm, where the coordination is done by the master.
• This paradigm is often used in a shared variable model.
• A pool of work items is realized in a global data structure.
• A number of processes are created. Initially, there may be just one piece of work in
the pool.
• Any free process fetches a piece of work from the pool and executes it, producing
zero, one, or more new work pieces put into the pool. The parallel program ends
when the work pool becomes empty.
• This paradigm facilitates load balancing, as the workload is dynamically allocated
to free processes.
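A minimal work-pool sketch in Python (an assumed example, not from the slides): free worker processes repeatedly fetch work from a shared pool until it is empty.

import multiprocessing

def work(task):
    return task * task                     # execute one piece of work

if __name__ == '__main__':
    tasks = range(10)                      # the pool of work
    with multiprocessing.Pool(4) as pool:  # four worker processes
        for result in pool.imap_unordered(work, tasks):
            print(result)                  # results arrive as workers free up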
Parallel Program using Python
• Multitasking, in general, is the capability of performing multiple tasks simultaneously. Multithreading refers to concurrently executing multiple threads by rapidly switching CPU control between threads (called context switching).
• The Python Global Interpreter Lock (GIL) allows only one thread to run at a time, even if the machine contains multiple processors.
• There are two types of multitasking in an OS:
1. Process-based
2. Thread-based
• Threads in Python can be created in three ways:
• Without creating a class
• By extending Thread class
• Without extending Thread class
Parallel Program using Python
• A thread is basically an independent flow of execution. A single process can consist of multiple threads. Each thread in a program performs a particular task. For example, when you are playing a game, say FIFA, on your PC, the game as a whole is a single process, but it consists of several threads responsible for playing the music, taking input from the user, running the opponent synchronously, etc.
• Threading allows a user to run different parts of the program concurrently and can make the design of the program simpler.
• Multithreading in Python can be achieved by importing the threading module.

Example:
import threading
from threading import *
Parallel Program using Python

Model 1: Without creating a class
step 1: Import modules
step 2: Define the role of the thread
step 3: Instantiate the thread by specifying the target
step 4: Start the thread

from threading import *
def fun():
    .......
child = Thread(target=fun)
child.start()

Model 2: By extending the Thread class
step 1: Import modules
step 2: Extend (inherit) a class from Thread
step 3: Override the run method
step 4: Instantiate an object of the extended class
step 5: Call the start method

import threading
from threading import *
class test(Thread):
    def run(self):
        print("User threads")
ob = test()
ob.start()

Model 3: Without extending the Thread class
step 1: Create a class
step 2: Define a method
step 3: Create an object of the class
step 4: Instantiate the Thread, passing the target method
step 5: Call the start method

from threading import *
class Test:
    def func(self):
        print("Child Thread")
ob = Test()
child = Thread(target=ob.func)
child.start()
Parallel program using Threads in Python

# Model 1: the simplest way to use a Thread is to instantiate it with a
# target function and call start() to let it begin working.
from threading import Thread, current_thread

print(current_thread().getName())

def mt():
    print("Child Thread")
    for i in range(11, 20):
        print(i * 2)

def disp():
    for i in range(10):
        print(i * 2)

child = Thread(target=mt)
child.start()
disp()
print("Executing thread name :", current_thread().getName())

# Model 2: by extending the Thread class and overriding run().
from threading import Thread, current_thread

class mythread(Thread):
    def run(self):
        for x in range(7):
            print("Hi from child")

a = mythread()
a.start()
a.join()
print("Bye from", current_thread().getName())
Parallel program using Process in Python

import multiprocessing

def worker(num):
    print('Worker:', num)
    for i in range(num):
        print(i)

if __name__ == '__main__':   # guard required under the spawn start method
    jobs = []
    for i in range(1, 5):
        p = multiprocessing.Process(target=worker, args=(i + 10,))
        jobs.append(p)
        p.start()
    for p in jobs:
        p.join()
Concurrent Programming Paradigm
• Computing systems model the world, and the world contains actors that execute independently of, but communicate with,
each other. In modelling the world, many (possibly) parallel executions have to be composed and coordinated, and that's
where the study of concurrency comes in.
• There are two common models for concurrent programming: shared memory and message passing.
• Shared memory. In the shared memory model of concurrency, concurrent modules interact by reading and writing
shared objects in memory.
• Message passing. In the message-passing model, concurrent modules interact by sending messages to each other through a communication channel. Modules send off messages, and incoming messages to each module are queued up for handling.
Issues in the Concurrent Programming Paradigm
Concurrent programming is programming with multiple tasks. The major issues of concurrent programming are:
• Sharing computational resources between the tasks;
• Interaction of the tasks.

Objects shared by multiple tasks have to be safe for concurrent access. Such objects are called protected. Tasks accessing such an
object interact with each other indirectly through the object.
An access to the protected object can be:
• Lock-free, when the task accessing the object is not blocked for a considerable time;
• Blocking, otherwise.

Blocking objects can be used for task synchronization. Examples of such objects include:
• Events;
• Mutexes and semaphores;
• Waitable timers;
• Queues
Issues in the Concurrent Programming Paradigm
Race Condition

import threading

x = 0        # A shared value
COUNT = 100  # with larger counts the lost updates become easier to observe

def incr():
    global x
    for i in range(COUNT):
        x += 1
    print(x)

def decr():
    global x
    for i in range(COUNT):
        x -= 1
    print(x)

t1 = threading.Thread(target=incr)
t2 = threading.Thread(target=decr)
t1.start()
t2.start()
t1.join()
t2.join()
print(x)
Synchronization in Python
Locks:
Locks are perhaps the simplest synchronization primitives in Python. A Lock has only two states, locked and unlocked (surprise). It is created in the unlocked state and has two principal methods, acquire() and release(). The acquire() method locks the Lock and blocks execution until the release() method in some other thread sets it to unlocked.

R-Locks:
The RLock class is a version of simple locking that only blocks if the lock is held by another thread. While a simple lock will block if the same thread attempts to acquire it twice, a re-entrant lock only blocks if another thread currently holds the lock.

Semaphore:
A semaphore has an internal counter rather than a lock flag, and it only blocks if more than a given number of threads have attempted to hold the semaphore. Depending on how the semaphore is initialized, this allows multiple threads to access the same code section simultaneously.
LOCK in Python
Synchronization using LOCK
Locks have two states: locked and unlocked. Two methods are used to manipulate them: acquire() and release(). These are the rules:
1. if the state is unlocked: a call to acquire() changes the state to locked.
2. if the state is locked: a call to acquire() blocks until another thread calls release().
3. if the state is unlocked: a call to release() raises a RuntimeError exception.
4. if the state is locked: a call to release() changes the state to unlocked.
Synchronization in Python using Lock

import threading

x = 0  # A shared value
COUNT = 100
lock = threading.Lock()

def incr():
    global x
    lock.acquire()
    print("thread locked for increment cur x=", x)
    for i in range(COUNT):
        x += 1
    print(x)
    lock.release()
    print("thread release from increment cur x=", x)

def decr():
    global x
    lock.acquire()
    print("thread locked for decrement cur x=", x)
    for i in range(COUNT):
        x -= 1
    print(x)
    lock.release()
    print("thread release from decrement cur x=", x)

t1 = threading.Thread(target=incr)
t2 = threading.Thread(target=decr)
t1.start()
t2.start()
t1.join()
t2.join()
Synchronization in Python using RLock

import threading

class Foo(object):
    lock = threading.RLock()
    def __init__(self):
        self.x = 0
    def add(self, n):
        with Foo.lock:
            self.x += n
    def incr(self):
        with Foo.lock:   # the same thread re-acquires the lock in add()
            self.add(1)
    def decr(self):
        with Foo.lock:
            self.add(-1)

def adder(f, count):
    while count > 0:
        f.incr()
        count -= 1

def subber(f, count):
    while count > 0:
        f.decr()
        count -= 1

# Create some threads and make sure it works
COUNT = 10
f = Foo()
t1 = threading.Thread(target=adder, args=(f, COUNT))
t2 = threading.Thread(target=subber, args=(f, COUNT))
t1.start()
t2.start()
t1.join()
t2.join()
print(f.x)
Synchronization in Python using Semaphore

import threading
import time

done = threading.Semaphore(0)
item = None

def producer():
    global item
    print("I'm the producer and I produce data.")
    print("Producer is going to sleep.")
    time.sleep(10)
    item = "Hello"
    print("Producer is alive. Signaling the consumer.")
    done.release()

def consumer():
    print("I'm a consumer and I wait for data.")
    print("Consumer is waiting.")
    done.acquire()
    print("Consumer got", item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start()
t2.start()
Synchronization in Python using Event

import threading
import time

item = None
# A semaphore to indicate that an item is available
available = threading.Semaphore(0)
# An event to indicate that processing is complete
completed = threading.Event()

# A worker thread
def worker():
    while True:
        available.acquire()
        print("worker: processing", item)
        time.sleep(5)
        print("worker: done")
        completed.set()

# A producer thread
def producer():
    global item
    for x in range(5):
        completed.clear()     # Clear the event
        item = x              # Set the item
        print("producer: produced an item")
        available.release()   # Signal on the semaphore
        completed.wait()      # Wait for processing to complete
        print("producer: item was processed")

t1 = threading.Thread(target=producer)
t1.start()
t2 = threading.Thread(target=worker)
t2.daemon = True
t2.start()
Producer and Consumer problem using threads

import threading, time, queue

items = queue.Queue()

# A producer thread
def producer():
    print("I'm the producer")
    for i in range(30):
        items.put(i)
        time.sleep(1)

# A consumer thread
def consumer():
    print("I'm a consumer", threading.current_thread().name)
    while True:
        x = items.get()
        print(threading.current_thread().name, "got", x)
        time.sleep(5)

# Launch a bunch of consumers
cons = [threading.Thread(target=consumer)
        for i in range(10)]
for c in cons:
    c.daemon = True
    c.start()
# Run the producer
producer()
Producer and Consumer problem using threads (with a condition variable)

import threading
import time

# A list of items that are being produced. Note: it is actually
# more efficient to use a collections.deque() object for this.
items = []
# A condition variable for items
items_cv = threading.Condition()

# A producer thread
def producer():
    print("I'm the producer")
    for i in range(30):
        with items_cv:          # Always must acquire the lock first
            items.append(i)     # Add an item to the list
            items_cv.notify()   # Send a notification signal
        time.sleep(1)

# A consumer thread
def consumer():
    print("I'm a consumer", threading.current_thread().name)
    while True:
        with items_cv:          # Must always acquire the lock
            while not items:    # Check if there are any items
                items_cv.wait() # If not, we have to sleep
            x = items.pop(0)    # Pop an item off
        print(threading.current_thread().name, "got", x)
        time.sleep(5)

cons = [threading.Thread(target=consumer)
        for i in range(10)]
for c in cons:
    c.daemon = True
    c.start()
producer()
