Group 2 Assignment 1

The document discusses the evolution of processor technology leading to the rise of multicore architectures, highlighting physical limitations of single-core performance, thermal management challenges, and increasing demand for parallelism. It also outlines drawbacks of single-core processors, reasons for the shift towards parallel computing, and how multi-core systems address these challenges. Additionally, it provides examples of computational tasks managed across multiple cores using multithreading and process-based parallelism.


(a) How Did the Evolution of Processor Technology Lead to the Rise of Multi-Core Architectures?

1. Physical Limitations of Single-Core Performance
o As transistor sizes approached the physical limits of semiconductor
technology, raising clock speeds further became constrained by power
dissipation. Single-core performance gains diminished, driving the
shift toward multiple cores.
2. Thermal Management Challenges
o Increased clock speeds resulted in excessive heat production, making
it difficult to manage thermal output. Multi-core designs solved this
by distributing tasks across cores running at lower clock frequencies,
improving thermal efficiency.
3. Increasing Demand for Parallelism
o Software applications and workloads became increasingly parallel in
nature. Multi-core processors were developed to exploit this natural
parallelism, allowing multiple threads to run simultaneously and
significantly boosting performance.
4. Evolution of Software Development
o Software practices evolved alongside hardware. The emergence of
parallel programming frameworks (e.g., OpenMP, MPI) encouraged
developers to optimize applications for multi-core architectures.
5. Cost-Effectiveness
o As transistor counts increased, integrating multiple cores on a single
die became more cost-effective than designing high-performance
single-core processors, delivering better functionality without
significantly increasing costs.

6. Advancements in Hardware Technology
o Multi-core designs leveraged improvements in semiconductor
manufacturing, such as shrinking transistor sizes and advanced
interconnects between cores. These advancements enabled efficient
communication between cores and optimized data sharing, further
supporting parallel processing.
(b) Drawbacks of Single-Core Processors

1. Limited Performance Scaling
o Single-core processors faced physical constraints like heat generation
and power consumption, which capped performance improvements
through higher clock speeds.
2. Inefficient Utilization of Parallelism
o Modern applications require parallel processing to handle multiple
tasks simultaneously, and single-core processors fail to leverage this
parallelism effectively, leading to slower processing times.
3. Thermal Constraints
o High clock speeds generate excessive heat, leading to thermal
throttling that restricts peak performance.
4. Increased Power Consumption
o Operating at higher frequencies results in greater power consumption,
making single-core processors less energy efficient compared to
multi-core designs.
5. Inability to Handle Modern Workloads
o Contemporary workloads like video editing, gaming, and data analysis
rely heavily on parallel processing, which single-core processors
struggle to manage effectively.
6. Limited Support for Multitasking
o Single-core designs can execute only one major task at a time, which
creates bottlenecks in systems running multiple applications or threads
concurrently.

(c) Reasons Behind the Shift Towards Parallel Computing

1. Increasing Computational Demand
o Modern applications such as artificial intelligence (AI), data analysis,
and graphics rendering require substantial processing power.
Single-threaded processing could no longer meet these demands
efficiently, prompting a shift to parallel computing for higher performance.
2. Diminishing Returns of Clock Speed Increases
o Semiconductor technology advancements faced physical limitations
(e.g., heat generation and power consumption), making it impractical
to rely solely on increasing clock speeds. Parallel computing allowed
performance gains through additional cores rather than relying on a
single high-frequency core.
3. Exploiting Parallelism in Applications
o Many modern workloads—like processing large datasets or running
AI algorithms—are inherently parallel. Parallel computing enables
efficient scaling by dividing tasks across multiple processors or cores.
4. Improved Software Ecosystems
o Parallel programming models and frameworks, such as OpenMP,
MPI, and threading libraries, have evolved to simplify the process of
optimizing applications for parallel execution. This made transitioning
to parallel computing easier for developers.
5. Cost Efficiency
o As data volumes and computational requirements grew, energy
efficiency and cost-effectiveness became key concerns. Parallel
computing provided better performance per unit of energy consumed,
reducing operational costs.
6. Real-Time Processing Requirements
o Real-time applications, such as gaming, video streaming, and live data
analytics, demand instantaneous processing. Parallel computing
enables simultaneous execution of multiple tasks, ensuring
responsiveness and smooth operation.

(d) How Do Multi-Core Systems Address the Reasons Behind the Shift
Towards Parallel Computing?

1. Increasing Computational Demand
o Parallel Capability:
Multi-core processors provide multiple cores that can execute several
threads simultaneously. This significantly enhances processing power,
enabling effective handling of demanding applications such as
scientific simulations, machine learning, and large-scale data
analytics.
2. Diminishing Returns of Clock Speed Increases
o Parallel Scaling:
Multi-core architectures offer performance improvements by
distributing work across multiple cores, bypassing the thermal and
power limitations associated with high clock speeds in single-core
processors.
3. Exploiting Parallelism in Applications
o Natural Fit for Parallel Tasks:
Many applications can be designed to utilize multiple cores, dividing
workloads among them. For instance, data processing can split tasks
into independent chunks, improving efficiency and speed.
4. Advancements in Hardware Technology
o Built for Parallel Processing:
Multi-core systems are designed with shared caches, interconnects
between cores, and memory controllers that support efficient data
sharing. These features optimize execution for modern parallel
workloads.
5. Real-Time Processing Requirements
o Concurrent Data Handling:
Multi-core systems can handle real-time processing demands by
executing multiple data streams simultaneously. For example, video
games benefit from cores managing inputs, calculating results, and
rendering outputs concurrently, improving performance and
responsiveness.
6. Energy Efficiency and Cost Reduction
o Improved Power Management:
Multi-core systems consume less power compared to high-speed
single-core processors by dividing tasks among lower-frequency
cores. This efficiency reduces energy consumption and operational
costs, making them ideal for large-scale and continuous workloads.

(e) Examples Showing How Computational Tasks Are Divided and Managed Across Multiple Cores

Example 1: Basic Multithreading Using POSIX Threads (Pthreads)

In this example, multiple threads perform a simple computation concurrently: each thread sums one portion of a shared array.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <pthread.h>

#define NUM_THREADS 4
#define ARRAY_SIZE 1000000

int array[ARRAY_SIZE];
long long results[NUM_THREADS];

// Function executed by each thread
void *compute_sum(void *arg) {
    int thread_id = (int)(intptr_t)arg; // Recover the id passed by value

    // Determine the portion of the array this thread will handle;
    // the last thread also takes any remainder
    int chunk_size = ARRAY_SIZE / NUM_THREADS;
    int start_index = thread_id * chunk_size;
    int end_index = (thread_id == NUM_THREADS - 1) ? ARRAY_SIZE
                                                   : start_index + chunk_size;

    long long sum = 0;
    for (int i = start_index; i < end_index; i++) {
        sum += array[i];
    }
    results[thread_id] = sum; // Store the partial sum
    return NULL;
}

int main(void) {
    // Initialize array with random values
    for (int i = 0; i < ARRAY_SIZE; i++) {
        array[i] = rand() % 100;
    }

    pthread_t threads[NUM_THREADS];

    // Create threads; the loop index is passed by value (cast into the
    // pointer), not by address, so each thread sees its own id
    for (int i = 0; i < NUM_THREADS; i++) {
        pthread_create(&threads[i], NULL, compute_sum, (void *)(intptr_t)i);
    }

    // Wait for all threads to finish
    for (int i = 0; i < NUM_THREADS; i++) {
        pthread_join(threads[i], NULL);
    }

    // Combine results from all threads
    long long total_sum = 0;
    for (int i = 0; i < NUM_THREADS; i++) {
        total_sum += results[i];
    }

    printf("Total sum: %lld\n", total_sum);
    return 0;
}

Explanation of the code:
 Initializing the Array: an array of random values is created.
 Creating Threads: each thread is given the index of the portion of the array it is responsible for.
 Computing Sums: each thread computes the sum of its designated portion of the array.
 Joining Threads: the main thread waits for all worker threads to finish.
 Combining Results: the partial sums from each thread are added to give the final total.

Compile with gcc -pthread so the program links against the POSIX threads library.

Example 2: Using fork() for Process-Based Parallelism

This example uses fork() to divide the same summation across multiple child processes.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define NUM_PROCESSES 4
#define ARRAY_SIZE 1000000

int array[ARRAY_SIZE];

// Function to compute the sum of one slice of the array
long long compute_sum(int start_index, int end_index) {
    long long sum = 0;
    for (int i = start_index; i < end_index; i++) {
        sum += array[i];
    }
    return sum;
}

int main(void) {
    // Initialize array with random values
    for (int i = 0; i < ARRAY_SIZE; i++) {
        array[i] = rand() % 100;
    }

    pid_t pids[NUM_PROCESSES];
    int pipes[NUM_PROCESSES][2]; // One pipe per child to send its sum back
    long long results[NUM_PROCESSES];

    // Create child processes
    for (int i = 0; i < NUM_PROCESSES; i++) {
        if (pipe(pipes[i]) == -1) {
            perror("pipe");
            exit(EXIT_FAILURE);
        }
        pids[i] = fork();
        if (pids[i] == 0) { // Child process
            close(pipes[i][0]); // Child only writes
            int chunk_size = ARRAY_SIZE / NUM_PROCESSES;
            int start_index = i * chunk_size;
            int end_index = (i == NUM_PROCESSES - 1) ? ARRAY_SIZE
                                                     : start_index + chunk_size;
            long long sum = compute_sum(start_index, end_index);
            // An exit status holds only 8 bits, far too small for this
            // sum, so the result is sent through a pipe instead
            if (write(pipes[i][1], &sum, sizeof(sum)) != sizeof(sum)) {
                _exit(EXIT_FAILURE);
            }
            close(pipes[i][1]);
            _exit(EXIT_SUCCESS);
        }
        close(pipes[i][1]); // Parent only reads
    }

    // Parent process collects each partial sum and reaps the children
    for (int i = 0; i < NUM_PROCESSES; i++) {
        read(pipes[i][0], &results[i], sizeof(results[i]));
        close(pipes[i][0]);
        waitpid(pids[i], NULL, 0); // Wait for child process to finish
    }

    // Combine results
    long long total_sum = 0;
    for (int i = 0; i < NUM_PROCESSES; i++) {
        total_sum += results[i];
    }

    printf("Total sum: %lld\n", total_sum);
    return 0;
}

Explanation of the code:
 Creating Processes: multiple child processes are created using fork().
 Child Processes: each child computes the sum of a specific chunk of the array.
 Collecting Results: the parent process waits for each child to finish and collects its partial sum.
 Displaying the Result: the combined total from all child processes is displayed.
