Fully Dynamic Scheduler For Numerical Computing On Multicore Processors
Since CPU operations complete in nanoseconds and requests arrive at the processor at a very high rate, the priorities keep changing over time. In the study conducted by the authors, however, the computed priorities were observed to remain stable for a couple of hours, which can be considered the behavior of the system in the long run, or steady state. Because the priorities differ between schedules, they are computed separately for each schedule.
The priorities for a specific schedule are computed as follows:
The steady-state probabilities, i.e., the priorities in the long run, are π0, π1, π2, …, πn−1. Assuming that a steady state is achievable, the steady-state probability vector π = (π0, π1, π2, …, πn−1), with πl > 0 for l = 0, 1, …, n−1, can be found as the solution of the system of equations πP = π, in conjunction with the normalization condition π0 + π1 + … + πn−1 = 1.
The priorities obtained, when used for that slot, proved very effective.
Software was written for a multiprocessing credit-card transaction environment, and the comparative results showed an efficiency improvement of more than 15%.
ALGORITHM
1. Creating a Task
Similarly to Cilk and SMPSs, functions implementing parallel tasks have to be side-effect free, which means they cannot use global variables, etc. In order to change a regular function into a task definition, one needs to:
• declare the function with empty argument list,
• declare the arguments as local variables, and
• get their values by using the unpack macro.
2. Invoking a Task
The second step is changing the function call into a task
invocation, which puts the task in the task pool and returns
immediately, leaving the task execution for later (when
dependencies are met and the scheduler decides to run the
task). In order to change a function call into a task invocation,
one needs to:
• replace the function call with a call to the Insert_Task() function,
• pass the task name (pointer) as the first parameter, and
• follow each original parameter with its size and direction.
3. Scheduler Implementation
Currently the scheduler targets small-scale, multi-socket
shared memory systems based on multicore processors. The
main design principle behind the scheduler is implementation
of the dataflow model, where scheduling is based on data
dependencies between tasks in the task graph. The second
principle is constrained use of resources with strict bounds on
space and time complexity.
CODE:
/****
* Example 1: A Traditional DynC Cooperatively Multithreaded Program
****/
main(void) {
    /* CoData structures for two named tasks */
    CoData t1, t2;

    while (1) {
        /* Keyboard handler */
        costate {
            while (1) {
                waitfor(kbhit());
                switch (getchar()) {
                case '-':
                    /* Stop one of the currently running tasks */
                    if (isCoRunning(&t1)) CoPause(&t1);
                    else if (isCoRunning(&t2)) CoPause(&t2);
                    else printf("No tasks running.\n");
                    break;
                case '+':
                    /* Start one of the currently stopped tasks */
                    if (!isCoRunning(&t1)) CoResume(&t1);
                    else if (!isCoRunning(&t2)) CoResume(&t2);
                    else printf("Both tasks running.\n");
                    break;
                }
            }
        }
        /* Costatements implementing tasks t1 and t2 would follow here. */
    }
}
Since Dynamic C does not support named yields directly, programs such as the example above would have to simulate a named yield by performing a CoPause() on each intervening costatement. For example:
while (1) {
    costate foo always_on {
        printf("Foo");
        if (problem_detected()) {
            CoPause(&bar);
            CoPause(&baz);
            yield;
        } else {
            printf("No problem detected.\n");
        }
    }
}