CCA - Module 3 - Concurrent Computing

17CS742 – Cloud Computing Applications
Module 3
2020-2021 [ODD]
17CS742 – Cloud Computing Applications

► Module 1: Introduction to cloud computing, virtualization
► Module 2: Cloud computing architecture, Aneka
► Module 3: Concurrent computing, high-throughput computing
► Module 4: Data-intensive computing
► Module 5: Cloud platforms in industry, cloud applications
Course Outcomes

► Explain cloud computing and virtualization, and classify the services of cloud computing
► Illustrate the architecture of, and programming in, the cloud
► Describe the platforms for the development of cloud applications and list the applications of the cloud
Text Book

► Rajkumar Buyya, Christian Vecchiola, and Thamarai Selvi, Mastering Cloud Computing, McGraw Hill Education

► Reference Book:
Dan C. Marinescu, Cloud Computing: Theory and Practice, Morgan Kaufmann, Elsevier, 2013.
Module 3

► Concurrent Computing (Chapter 6)
► High-Throughput Computing (Chapter 7)

► Multiprocessing: the execution of multiple programs on a single machine.
► Multithreading: the possibility of multiple instruction streams within the same program.
Introducing Parallelism for single-machine computation

► Parallelism dates back to when Burroughs Corporation designed the D825, the first MIMD (multiple instruction, multiple data) multiprocessor.

Asymmetric multiprocessing (e.g., GPUs)
• Uses different, specialized processing units to perform different functions.

Symmetric multiprocessing (e.g., multicore)
• Uses similar or identical processing units to share the computation load.

Non-uniform memory access (NUMA)
• Defines a specific architecture for accessing a shared memory between processors.

Clustered multiprocessing
• Multiple computers joined together to form a single virtual computer.
Multicore Architecture

► Frequency scaling
► Frequency Ramping
► Instruction Level Parallelism (ILP)
Multiprocessing vs. Multitasking

► A process is the runtime image of an application:
► a program that is running.
► A thread identifies a single flow of execution within a process.
► Both can run on top of computer hardware with a single processor and a single core.
► In that case, the OS gives the illusion of parallel execution.

https://www.geeksforgeeks.org/difference-between-multitasking-multithreading-and-multiprocessing/
Programming applications with threads

Threads are a useful technique to increase the throughput of the system and a viable option for throughput computing.

► Implicit threading
► happens when the underlying APIs use internal threads to perform specific tasks
► e.g., graphical user interface (GUI) rendering
► e.g., garbage collection in the case of virtual machine-based languages

► Explicit threading
► the use of threads within a program by application developers
► common uses: I/O from devices and network connections
► long computations
► execution of background operations with no time bounds
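The explicit-threading case described above can be illustrated with a short, self-contained Java sketch (a hypothetical workload, not taken from the course material): the developer creates the thread directly and hands it a long computation.

```java
// Sketch of explicit threading: the application developer creates the
// thread directly, as done for long computations or background I/O.
public class ExplicitThreadDemo {

    // A long-running computation we want off the calling thread
    // (hypothetical workload, for illustration only).
    static long sumUpTo(long n) {
        long total = 0;
        for (long i = 1; i <= n; i++) total += i;
        return total;
    }

    static long runInBackground(long n) throws InterruptedException {
        final long[] result = new long[1];
        // Explicit thread creation: the developer owns the thread's lifecycle.
        Thread worker = new Thread(() -> result[0] = sumUpTo(n));
        worker.start();   // begin the background computation
        worker.join();    // wait for it to finish before reading the result
        return result[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runInBackground(1_000_000)); // 500000500000
    }
}
```

By contrast, implicit threading (GUI rendering, garbage collection) happens inside the runtime without any such code being written by the developer.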
What is a thread?
► A thread identifies a single control flow: a logical sequence of instructions within a process.
► What are threads to the OS?
► The minimal building blocks for expressing running code.
► All operations run as threads and have variable lifetimes.
► Threads of the same process share the same execution context.
► What concept is used in multitasking?
► The execution of multiple threads by stopping the execution of one thread and starting the execution of another.
► Context switching: save the information of the current thread/process from the registers onto a stack, and load the information of the thread/process to execute onto the registers.
► A running program has a main thread, which is implicitly created by the compiler or runtime environment.
► All threads are created within the memory space allocated for the entire process.
► The execution of the process terminates when all of its threads have completed.
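A minimal Java sketch of two of the points above: the main thread is created implicitly by the runtime, and a spawned thread lives within the memory space allocated for the process (the class and values here are illustrative only, not from the text).

```java
// Sketch: the main thread is created implicitly by the runtime, and a
// spawned thread shares the memory space allocated to the process.
public class MainThreadDemo {
    static int shared = 0; // lives in the process's memory, visible to all threads

    // Spawn a worker that writes to shared process memory, then read it back.
    static int runAndGet() throws InterruptedException {
        Thread worker = new Thread(() -> shared = 42);
        worker.start();
        worker.join(); // join also guarantees the write is visible afterwards
        return shared;
    }

    public static void main(String[] args) throws InterruptedException {
        // No code created this thread explicitly; the runtime did.
        System.out.println(Thread.currentThread().getName());
        System.out.println(runAndGet()); // 42
    }
}
```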
Thread API

► The minimum set of features needed to support multithreading:
► POSIX Threads
► Portable Operating System Interface for UNIX: a set of standards, including an API for portable development across UNIX OS flavors.
► Creation and deletion of threads
► Thread synchronization: mutexes, joins, conditions
► Threading support in Java and .NET
Techniques for parallel computation with threads

► Requires an understanding of the problem and its logical structure.
► Decomposition: a technique that aids in understanding whether a problem can be divided into components (tasks) that can be executed concurrently.
► Domain decomposition: identifies functionally repetitive but independent computations on data.
► Embarrassingly parallel
► Inherently sequential
► Master-slave model
► Functional decomposition: the process of identifying functionally distinct but independent computations.
Domain decomposition
Application of Domain Decomposition

► Matrix multiplication using multiple threads.
► The product is obtained as a result of a linear transformation of the original matrices.
► The number of columns in the first matrix must match the number of rows in the second matrix.
► The computation is embarrassingly parallel.
► The inner computation can be done in parallel using threads.
► https://www.javaprogramto.com/2020/01/java-matrix-multiplication-threads.html
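A minimal Java sketch of this domain decomposition (not the code from the linked page): each thread independently computes one row of the product C = A × B, so no locking is needed.

```java
// Domain decomposition sketch: each thread computes one row of C = A x B.
// The per-row work is independent (embarrassingly parallel), so the
// threads never write to the same memory and need no locks.
public class ThreadedMatMul {

    public static int[][] multiply(int[][] a, int[][] b) throws InterruptedException {
        int n = a.length, m = b[0].length, k = b.length;
        int[][] c = new int[n][m];
        Thread[] workers = new Thread[n];
        for (int row = 0; row < n; row++) {
            final int r = row; // each unit of work: one output row
            workers[row] = new Thread(() -> {
                for (int j = 0; j < m; j++) {
                    int sum = 0;
                    for (int x = 0; x < k; x++) sum += a[r][x] * b[x][j];
                    c[r][j] = sum;
                }
            });
            workers[row].start();
        }
        for (Thread t : workers) t.join(); // wait for all rows to finish
        return c;
    }

    public static void main(String[] args) throws InterruptedException {
        int[][] c = multiply(new int[][]{{1, 2}, {3, 4}},
                             new int[][]{{5, 6}, {7, 8}});
        System.out.println(c[0][0] + " " + c[0][1]); // 19 22
        System.out.println(c[1][0] + " " + c[1][1]); // 43 50
    }
}
```

One thread per row is only a sketch; a real implementation would partition rows among a fixed pool of threads.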
Application of Functional Decomposition

► The sine, cosine, and tangent functions are computed in three separate threads, and the results are put together.
► A function pointer is passed to each thread so that it can update the final result at the end of the computation.
► A lock ensures that the critical section is accessed by only one thread at a time and guarantees that the final result is updated correctly.
► http://manigandan2693.blogspot.com/2014/10/p-sinx-cosy-tanz-java-program.html
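A Java sketch of this functional decomposition (the combining operation, a sum, and all names are assumptions for illustration; the linked post may differ): three threads compute sine, cosine, and tangent, and a lock protects the shared accumulator.

```java
// Functional decomposition sketch: three functionally distinct
// computations run in separate threads; a lock guards the shared result.
// Combining by addition is an assumption for illustration.
public class FunctionalDecomp {

    private double result = 0.0;
    private final Object lock = new Object();

    private void add(double value) {
        synchronized (lock) { // critical section: one thread at a time
            result += value;
        }
    }

    public double compute(double x, double y, double z) throws InterruptedException {
        Thread t1 = new Thread(() -> add(Math.sin(x)));
        Thread t2 = new Thread(() -> add(Math.cos(y)));
        Thread t3 = new Thread(() -> add(Math.tan(z)));
        t1.start(); t2.start(); t3.start();
        t1.join(); t2.join(); t3.join(); // wait for all partial results
        return result;
    }

    public static void main(String[] args) throws InterruptedException {
        // sin(0) + cos(0) + tan(0) = 0 + 1 + 0
        System.out.println(new FunctionalDecomp().compute(0.0, 0.0, 0.0)); // 1.0
    }
}
```

Unlike the matrix example, the threads here write to the same variable, which is exactly why the lock is required.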
Functional Decomposition
Computation vs. Communication

► In designing parallel and, in general, distributed applications, it is very important to carefully evaluate the communication patterns.
► It is important to minimize the amount of data that needs to be exchanged while implementing parallel and distributed applications.
► The absence of communication among different threads is the condition that leads to the highest throughput.
Multithreading using Aneka

► There is a demand for greater computing power than that offered by a single multicore machine.
► It becomes necessary to use distributed infrastructure from the cloud.
► Decomposition techniques can be applied to partition an application into several units of work.
► Aneka threads: turn existing multithreaded compute-intensive applications into distributed versions that can run faster by utilizing multiple machines simultaneously, with minimum conversion effort.
Introducing the thread programming model

► The Thread Programming Model introduces the abstraction of the Aneka thread.
► The application is designed as a collection of threads.
► Threads are created and controlled by the application developer.
► Aneka is in charge of scheduling their execution once they have been started.
► Aneka exposes APIs that mimic the ones exposed by .NET.
► Developers do not have to completely rewrite their applications:
► they replace System.Threading.Thread and introduce the AnekaApplication class.
3 major elements

Application
• Constitutes a local view of the distributed application.
• Single units of work are created by the programmer.
• Aneka.Entity.AnekaApplication<T,M>: T = AnekaThread, M = ThreadManager.

Threads
• Represent the main abstractions of the model.
• Constitute the building blocks of a distributed application.
• Aneka provides Aneka.Threading.AnekaThread, which represents a distributed thread.

Thread Manager
• Keeps track of the execution of threads.
• Implemented in Aneka.Threading.ThreadManager.
Aneka Threads vs. Common Threads

Aspects of comparison:
• Interface compatibility
• Thread life cycle
• Thread synchronization
• Thread priorities
• Type serialization
Interface compatibility

► Aneka.Threading.AnekaThread exposes almost the same interface as System.Threading.Thread.

BASIC THREAD CONTROL
► Start and Abort have a direct mapping.
► No support for suspend/resume: it is a deprecated practice even for local threads.
► Thread suspension in a distributed environment leads to ineffective use of the infrastructure, where resources are shared among multiple tenants.
► For the same reason, Sleep() is not supported.
► No support for Interrupt(), which forcibly resumes a thread from a waiting or sleeping state.
► The Join() operation has been provided.
Properties

► Name, unique identifier, and state are implemented.
► Name is freely assigned.
► The identifier is generated by Aneka and represents a globally unique identifier (GUID).
► The properties IsBackground, Priority, and IsThreadPoolThread have been provided for interface compatibility.
► Related to state: IsAlive and IsRunning are implemented, along with the ThreadState property.
Aneka Threads vs. Common Threads

► Local threads belong to the hosting process.
► To create a local thread, it is only necessary to provide a pointer to a method.
► Aneka threads live in the context of a distributed application.
► Multiple distributed applications can be managed within a single process.
► Thread creation therefore also requires the specification of a reference to the application to which the thread belongs.
Thread API Comparison
Thread Life Cycle

► Aneka threads live and execute in a distributed environment, so their life cycle differs from the life cycle of local threads.
► It is not possible to map every state value of local threads to Aneka threads.
► For local threads, most of the state transitions are triggered by the developer.
► In Aneka, they are triggered by the Aneka middleware.
► Aneka threads have more states:
► they support file staging and are scheduled by the middleware, which can queue them for a period of time;
► they support the reservation of nodes, with a state indicating execution failure due to missing reservation credentials.
Thread Life Cycle Comparison

► [Figure comparing the Aneka.Threading.AnekaThread life cycle with the System.Threading.Thread life cycle; shading indicates which states have no corresponding mapping and which states are common to both.]
Typical Thread Life cycle in Aneka

Unstarted → [Started] → [Queued] → Running → Completed / Aborted / Failed

► A thread begins in the Unstarted state; invoking the Start() method moves it to the Started state.
► It then moves to the StagingIn state (if there are files to upload for execution) or to the Queued state.
► If there is an error, it goes to the Failed state.
► The Rejected state is reached for an invalid reservation token.
► From the Queued state, if there is a free node, the thread moves to the Running state.
► If the thread raises an exception, it moves to the Failed state.
► On successful completion, it moves to the Completed state.
► If output files are yet to be retrieved, it first moves to the StagingOut state.
► If the developer directly calls the Abort() method, the thread goes into the Aborted state.
Thread Synchronization

► The .NET base class library provides monitors, semaphores, reader-writer locks, and basic synchronization constructs at the language level.
► Aneka provides minimal support for thread synchronization, limited to the join operation.
► .NET provides more stringent control of access to shared data.
► This is less necessary in distributed environments, where there is no shared memory among threads.
► Providing a locking facility in a distributed environment can lead to distributed deadlocks that are hard to detect.
Thread priorities

► System.Threading.Thread has a Priority property using the ThreadPriority enumeration: Highest, AboveNormal, Normal, BelowNormal, or Lowest.
► Aneka does not support thread priorities.
► For interface compatibility purposes, Aneka.Threading.AnekaThread exposes a Priority property, but its value is always set to Normal.
Type Serialization

► Aneka threads execute in a distributed environment in which object code, in the form of libraries, and live instance information are moved over the network.
► Local threads all execute within the same address space and share memory; they do not need objects to be copied or transferred into a different address space.
► For Aneka threads, the state of the enclosing instance needs to be transferred to, and reconstructed on, the remote machine; at the class level this requirement is called type serialization.
Type Serialization

► Serializable means it is possible to convert an instance of the type into a binary array.
► In Aneka, apply the Serializable attribute to the class definition.
► In some cases, it might be necessary to implement the ISerializable interface to provide additional constructors for the type.

Exercise: Make a table listing the major differences between local threads and Aneka threads.
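The same idea can be sketched in Java, which uses the Serializable marker interface where .NET uses the [Serializable] attribute (the ScalarProduct class here is a simplified stand-in, not Aneka code): an instance is converted to a byte array and reconstructed, as would happen when state moves to a remote node.

```java
import java.io.*;

// Java analog of the slide's point: a type must be serializable so an
// instance can be converted to a byte array and rebuilt elsewhere.
// (.NET marks this with [Serializable]; Java uses a marker interface.)
public class TypeSerializationDemo {

    // Hypothetical unit of work whose state must travel over the network.
    static class ScalarProduct implements Serializable {
        private static final long serialVersionUID = 1L;
        double result;
        ScalarProduct(double result) { this.result = result; }
    }

    // Round-trip: instance -> byte array -> instance, mimicking what
    // happens when state is reconstructed on a remote machine.
    static double roundTrip(ScalarProduct sp) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(sp);                              // serialize to bytes
        }
        ByteArrayInputStream bis = new ByteArrayInputStream(bos.toByteArray());
        try (ObjectInputStream in = new ObjectInputStream(bis)) {
            return ((ScalarProduct) in.readObject()).result;  // deserialize
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip(new ScalarProduct(42.0))); // 42.0
    }
}
```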
Programming applications with Aneka threads

► Thread Programming Model: the programmer creates units of work as Aneka threads.
► AnekaApplication<W,M>
► The Aneka API makes strong use of generics
► to categorize the support given to different programming models through template specialization.
► To develop distributed applications:
► AnekaApplication<AnekaThread, ThreadManager>
► is the class type for all distributed applications based on this model.
► It is defined in the Aneka.Threading namespace, in the Aneka.Threading.dll library.
► The Configuration class, defined in Aneka.Entity (Aneka.dll library),
► contains a set of properties that allow the application class to configure the middleware:
► the address of the Aneka index service;
► user credentials, required to authenticate the application with the middleware;
► additional tuning parameters.
Domain Decomposition: Matrix Multiplication

► The class has been tagged with the Serializable attribute.
► It is extended with the methods required to implement custom serialization.
► Include the System.Runtime.Serialization namespace.
► Implement the ISerializable interface, which has one method:
► GetObjectData(SerializationInfo, StreamingContext), called when the runtime needs to serialize the instance.
► The constructor ScalarProduct(SerializationInfo, StreamingContext) is invoked when the instance is deserialized.

► The SerializationInfo class provides a repository where all the properties defining the serialized format of a class can be stored and referenced by name.
