Cloud Computing: Assignment III

This document analyzes the performance of matrix multiplication under the Aneka thread and task programming models with varying matrix dimensions and work units. For smaller 50x50 matrices the thread programming model is faster than the task model, but as the matrix size increases to 100x100 and 200x200 the task model becomes faster, because it utilizes the system's cores more efficiently and has lower result-collection overhead than the thread model.

Uploaded by

Suresh Prabhu

CLOUD COMPUTING
ASSIGNMENT III

Submitted by:
SURESH. P 160913022
Problem Statement:
Compare the performance of the Aneka thread programming model and task programming model
for matrix multiplication by varying the dimensions of the matrices being multiplied and the
basic work unit.
Solution:

Work Unit / Matrix Size    50*50     100*100    200*200
Thread                     302 ms    667 ms     4.256 s
Task                       402 ms    521 ms     686 ms

From the above readings, we can see that for a 50*50 matrix the thread programming model is
faster than the task programming model. As the dimension doubles to 100*100, the latency of
the thread model grows sharply while the task model's latency grows only modestly, so the
task model is already faster. When the dimension reaches 200*200, the task model's latency
(686 ms) is far below the thread model's (4.256 s).
From these observations we conclude that when the matrix dimension is small, the thread
programming model executes the matrix multiplication faster than the task programming
model, but as the dimension of the matrix increases, the task programming model overtakes
the thread programming model.
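The structure of the experiment can be sketched outside Aneka in a few lines: the
multiplication is split into row-wise work units and handed to an executor. This is plain
Python, not the Aneka API; the function names are our own, and `ThreadPoolExecutor` (or,
analogously, `ProcessPoolExecutor`) stands in only loosely for the thread and task models.

```python
# Illustrative sketch (not Aneka): partition C = A * B into row-wise work
# units and run them on an executor. Swapping ThreadPoolExecutor for
# ProcessPoolExecutor changes how the work units are scheduled, loosely
# mirroring the thread-model vs. task-model comparison in the report.
from concurrent.futures import ThreadPoolExecutor

def multiply_row(args):
    """One work unit: compute a single row of the product C = A * B."""
    row, B = args
    cols = len(B[0])
    return [sum(row[k] * B[k][j] for k in range(len(B))) for j in range(cols)]

def matmul(A, B, executor_cls=ThreadPoolExecutor):
    """Multiply A by B, one submitted work unit per row of A."""
    with executor_cls() as pool:
        return list(pool.map(multiply_row, [(row, B) for row in A]))

if __name__ == "__main__":
    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(matmul(A, B))
```

Timing this sketch for growing matrix sizes reproduces the shape of the experiment, though
the absolute numbers will differ from the Aneka measurements above.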

The two reasons for this crossover in results between the models are:

1) Collecting the partial results through a shared dictionary is more complex and costlier
in the thread programming model than in the task programming model, where each work unit
simply returns its result. Hence the task programming model copes better as the number of
work units increases.

2) The thread programming model does not make efficient use of all the cores of the local
processor: threads are formed within a single core, and are scheduled only according to
whether the processor is free or busy. This limitation of the thread model is referred to
here as "core affinity". The task programming model addresses it by monitoring the
available resources: if a core sits idle while a thread executes on another core, the task
model can assign work to it, and thus utilizes the available resources in a better fashion.
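Point 1 above can be made concrete with a sketch of dictionary-based result collection.
This is a hypothetical structure, not Aneka's actual API: each thread must write its
partial row into a shared dictionary under a lock, and the caller must reassemble the rows
in order afterwards, whereas a task simply returns its result to the scheduler.

```python
# Hypothetical sketch of dictionary-based result collection in a thread
# model (not Aneka's API): worker threads write partial rows into a shared
# dict under a lock, and the product is reassembled in row order at the end.
import threading

def threaded_matmul(A, B):
    results = {}                      # shared: row index -> computed row
    lock = threading.Lock()

    def worker(i, row):
        out = [sum(row[k] * B[k][j] for k in range(len(B)))
               for j in range(len(B[0]))]
        with lock:                    # per-work-unit coordination cost
            results[i] = out

    threads = [threading.Thread(target=worker, args=(i, row))
               for i, row in enumerate(A)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return [results[i] for i in range(len(A))]   # reassemble in order

if __name__ == "__main__":
    print(threaded_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
```

The lock and the final reassembly step are exactly the extra collection complexity the
text attributes to the thread model; in a task model each work unit's return value is
delivered back without a shared structure.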
