HPC QB With ANS
Q1. What is parallel computing, and how does it differ from sequential
computing?
Parallel computing is a computing approach in which multiple processors
work on parts of a computation simultaneously.
In contrast, sequential computing executes tasks one after another, with
only one operation occurring at any time.
The main difference lies in how tasks are handled: parallel computing
improves speed and efficiency by dividing work across processors, while
sequential computing processes tasks in a single thread, which is often
slower for large datasets or complex workloads.
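The contrast above can be sketched with Python's concurrent.futures. This is a minimal illustration, assuming an I/O-style task (a short delay) so that threads actually overlap; for CPU-bound work in CPython, processes rather than threads would be needed because of the global interpreter lock.

```python
import concurrent.futures
import time

def work(x):
    """One independent unit of work (here simulated by a short delay)."""
    time.sleep(0.1)
    return x * x

inputs = [1, 2, 3, 4]

# Sequential: tasks run one after another, total time ~ 4 * 0.1 s.
start = time.perf_counter()
seq_results = [work(x) for x in inputs]
seq_time = time.perf_counter() - start

# Parallel: all four tasks run concurrently, total time ~ 0.1 s.
start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    par_results = list(pool.map(work, inputs))
par_time = time.perf_counter() - start

print(seq_results, par_results)
```

Both versions compute the same results; only the wall-clock time differs, which is exactly the distinction the answer above draws.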
Q5. What are the key considerations for designing parallel algorithms that
exhibit scalability? Why is scalability important in parallel computing?
Q6. How does data dependence between tasks impact the design of parallel
algorithms? What strategies can be employed to handle data dependence
efficiently?
Data dependence occurs when tasks rely on the output of other tasks,
creating a sequential dependency.
This can limit parallelism as tasks must wait for dependencies to resolve.
Strategies to handle data dependence include task decomposition to
minimize dependencies, employing dependency graphs to organize task
execution, and using techniques like pipelining or parallel loops where
dependencies are controlled.
Efficient handling ensures better parallel performance and avoids
bottlenecks.
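The dependency-graph strategy mentioned above can be sketched with Python's standard graphlib module. The task names A-D and their edges are a hypothetical example: tasks in the same batch have no unresolved dependencies and could run in parallel.

```python
from graphlib import TopologicalSorter

# Hypothetical task graph: C needs the outputs of A and B; D needs C.
deps = {
    "C": {"A", "B"},
    "D": {"C"},
}

ts = TopologicalSorter(deps)
ts.prepare()

schedule = []
while ts.is_active():
    ready = list(ts.get_ready())    # tasks whose dependencies are resolved
    schedule.append(sorted(ready))  # each batch could execute in parallel
    ts.done(*ready)

print(schedule)  # [['A', 'B'], ['C'], ['D']]
```

A and B are independent and form the first parallel batch; C and D remain sequential because of their data dependence, which is the bottleneck the answer describes.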
Q9. What are the four categories in Flynn's taxonomy, and how are they
defined?
Flynn's taxonomy classifies computer architectures based on their
instruction and data streams:
- SISD (Single Instruction, Single Data): one instruction stream operates
on one data stream, as in a classic uniprocessor.
- SIMD (Single Instruction, Multiple Data): one instruction is applied to
many data elements at once, as in vector units and GPUs.
- MISD (Multiple Instruction, Single Data): multiple instruction streams
operate on the same data stream; rare in practice, mainly seen in
fault-tolerant systems.
- MIMD (Multiple Instruction, Multiple Data): independent processors
execute different instructions on different data, as in multicore CPUs
and clusters.
Q11. Describe the key features of Multiple Instruction Single Data (MISD)
architecture. Can you provide a practical application scenario where MISD
architecture might be beneficial?
Q14. What is a multicore processor, and how does it differ from a single-
core processor? Explain the advantages and challenges associated with
multicore architectures.
A multicore processor contains multiple processing units (cores) on a
single chip, enabling parallel task processing.
In contrast, a single-core processor can execute only one thread at a
time, achieving multitasking through time-slicing.
Multicore systems improve performance, energy efficiency, and
multitasking capabilities but face challenges like heat dissipation and
software compatibility.
Efficient software design is critical to leverage multicore advantages.
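One part of the software design mentioned above is dividing work evenly across the available cores. The sketch below uses a hypothetical helper, split_evenly, to partition a workload into one contiguous chunk per worker; os.cpu_count reports how many cores are available.

```python
import os

def split_evenly(items, n_chunks):
    """Divide a list into n_chunks contiguous chunks of near-equal size."""
    k, r = divmod(len(items), n_chunks)
    chunks, start = [], 0
    for i in range(n_chunks):
        end = start + k + (1 if i < r else 0)  # first r chunks get one extra
        chunks.append(items[start:end])
        start = end
    return chunks

n_cores = os.cpu_count() or 1  # how many cores this machine exposes
data = list(range(10))
print(split_evenly(data, 4))  # [[0, 1, 2], [3, 4, 5], [6, 7], [8, 9]]
```

Balanced chunks matter because the slowest core determines overall finish time; uneven splits leave some cores idle.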
Q31. Define data decomposition and discuss its role in parallel computing.
How does data decomposition differ from task decomposition, and
under what circumstances is data decomposition more suitable?
Data decomposition divides data into subsets that can be processed in
parallel, differing from task decomposition as it focuses on partitioning
data rather than dividing functional tasks.
Data decomposition is more suitable when the operations are uniform
across data, such as matrix multiplications or large dataset processing,
where each processor can independently work on a portion of the data
without relying heavily on inter-task communication.
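Data decomposition as described above can be sketched as follows: the same operation (a sum) is applied to disjoint slices of the data, and the independent partial results are then combined. This is a minimal thread-based illustration; real HPC codes would typically use processes, MPI, or native code for CPU-bound work.

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(1, 101))  # the numbers 1..100

# Data decomposition: partition the data, not the function, into 4 slices.
chunks = [data[i:i + 25] for i in range(0, len(data), 25)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partial_sums = list(pool.map(sum, chunks))  # uniform operation per chunk

total = sum(partial_sums)  # combine the independent partial results
print(partial_sums, total)  # total == 5050
```

Each worker touches only its own slice, so no inter-task communication is needed until the final combine step, which is why uniform operations suit data decomposition.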
Q44. Discuss the impact of data dependence between tasks on the design of
parallel algorithms. How can dependencies be managed to avoid race
conditions and ensure the correctness of parallel computations?
Data dependence occurs when tasks require data produced by others,
potentially creating bottlenecks in parallel execution.
Dependencies are managed through techniques like locking, task
reordering, or dependency graphs to avoid race conditions and ensure
accurate results.
Proper dependency handling is crucial for maintaining correctness
without sacrificing too much efficiency.
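The locking technique mentioned above can be sketched with Python's threading module. Four threads increment a shared counter; the lock serialises each read-modify-write so no update is lost, which is exactly the race condition being avoided.

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    """Increment the shared counter n times, guarding each update."""
    global counter
    for _ in range(n):
        with lock:  # serialise the read-modify-write of the shared value
            counter += 1

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000; without the lock, updates could be lost
```

The lock trades some efficiency for correctness: each increment is now exclusive, which is the balance between correctness and performance the answer refers to.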