
NITTE MEENAKSHI INSTITUTE OF TECHNOLOGY

(An Autonomous Institution Under Visvesvaraya Technological University, Belgaum) YELAHANKA, BANGALORE

Department of Computer Science Engineering


Parallel Programming Principles UNIT - 1
1. Discuss the differences between pipelined and superscalar execution with a neat diagram.
2. Justify why dynamic instruction issue achieves better parallelism than in-order instruction issue.
3. Discuss, with an example, the limitations of superscalar execution.
4. Discuss the limitations of memory system performance and how to improve it.
5. Illustrate the importance of alternate approaches for hiding memory latency and their trade-offs.
6. Draw the typical architectures of SIMD and MIMD computers.
7. Explain the operation of processing units in parallel computers.
8. Discuss the parameters used to evaluate the performance of a memory system.
9. What are the major differences between message-passing and shared-address-space computers? Also outline the advantages and disadvantages of the two.
10. Explain, with an example, the execution of a conditional statement on a SIMD computer with 4 processors.
11. Explain the architecture of an ideal parallel computer.
12. Build an omega network connecting 8 inputs to 8 outputs. How is the perfect shuffle used in it? (A C sketch of the perfect-shuffle wiring follows this list.)
13. What are the different criteria used to characterize the cost and performance of static interconnection networks?
14. Discuss the different evaluation metrics for dynamic interconnection networks.
15. What are non-blocking networks?
16. Construct a fat tree network of 16 processing nodes. How does it overcome the disadvantages of other networks?
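As an aid to question 12, a minimal C sketch of the perfect-shuffle wiring used between consecutive switch stages of an 8x8 omega network. This is an illustration only, not part of the original paper; the function name perfect_shuffle and the constants N and LOG2N are chosen here for convenience. Output line i of one stage connects to input line j of the next stage, where j is the one-bit left rotation of i written over log2(N) bits.

#include <stdio.h>

#define N 8          /* number of inputs/outputs (power of 2) */
#define LOG2N 3      /* log2(N): number of switch stages      */

/* Left-rotate the LOG2N-bit representation of i by one position. */
static unsigned perfect_shuffle(unsigned i)
{
    return ((i << 1) | (i >> (LOG2N - 1))) & (N - 1);
}

int main(void)
{
    printf("Perfect-shuffle links between consecutive omega stages (N = %d):\n", N);
    for (unsigned i = 0; i < N; i++)
        printf("  line %u -> line %u\n", i, perfect_shuffle(i));
    return 0;
}

For N = 8 this prints the pairs 0->0, 1->2, 2->4, 3->6, 4->1, 5->3, 6->5, 7->7, which is the shuffle pattern normally drawn between the log2(8) = 3 stages of an 8x8 omega network.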

17. Classify the communication models of parallel platforms.
18. Write a note on bus-based interconnects.
19. Illustrate, with an example, multithreading for latency hiding.
20. Construct a multistage network connecting 8 PEs (processing elements) to 8 memory banks.
21. List and explain the protocols used to resolve concurrent writes in a PRAM parallel computer. (A C sketch of these protocols follows this list.)
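As an aid to question 21, a minimal C sketch, under the assumption that the intended model is a CRCW PRAM, of how a single memory cell could resolve simultaneous writes under the standard common, arbitrary, priority, and sum protocols. The protocol_t type and resolve function are hypothetical names used only for this illustration.

#include <stdio.h>
#include <stdlib.h>

#define NPROC 4

typedef enum { COMMON, ARBITRARY, PRIORITY, SUM } protocol_t;

/* Combine the values written by NPROC processors into one cell value.
 * A lower processor index means a higher priority for the PRIORITY rule. */
static int resolve(const int writes[NPROC], protocol_t p)
{
    switch (p) {
    case COMMON:       /* all processors must write the same value */
        for (int i = 1; i < NPROC; i++)
            if (writes[i] != writes[0]) {
                fprintf(stderr, "common-write violation\n");
                exit(EXIT_FAILURE);
            }
        return writes[0];
    case ARBITRARY:    /* any one of the competing writes succeeds */
        return writes[rand() % NPROC];
    case PRIORITY:     /* the highest-priority processor wins */
        return writes[0];
    case SUM:          /* the cell receives the sum of the written values */
    default: {
        int s = 0;
        for (int i = 0; i < NPROC; i++)
            s += writes[i];
        return s;
    }
    }
}

int main(void)
{
    int writes[NPROC] = {5, 3, 5, 7};   /* differing values, so COMMON is skipped */
    printf("arbitrary: %d\n", resolve(writes, ARBITRARY));
    printf("priority : %d\n", resolve(writes, PRIORITY));
    printf("sum      : %d\n", resolve(writes, SUM));
    return 0;
}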
