Department of CSE, CP7103 Multicore Architecture, UNIT 1: Fundamentals of Quantitative Design and Analysis (100% Theory) Question Bank

This document contains a question bank for the course "CP7103 Multicore Architecture". It includes 100 two-mark questions and 10 sixteen-mark questions on the quantitative design and analysis of multicore architectures, covering classes of computers, performance measurement, parallelism, multicore limitations, case studies of multicore architectures, and compiler techniques for exploiting instruction-level parallelism.

Academic Year 2014-2015 Regulation: 2008

DEPARTMENT OF CSE
CP7103 Multicore Architecture
UNIT – 1, Fundamentals of Quantitative Design and Analysis
100% THEORY
QUESTION BANK

Classes of Computers – Trends in Technology, Power, Energy and Cost – Dependability


Measuring, Reporting and Summarizing Performance – Quantitative Principles of
Computer Design – Classes of Parallelism - ILP, DLP, TLP and RLP - Multithreading -
SMT and CMP Architectures – Limitations of Single Core Processors - The Multicore
era – Case Studies of Multicore Architectures.

PART-A: 2 MARK QUESTIONS


1. What is meant by a mainframe computer?
2. Give four examples of microcomputers.
3. Define speedup. Nov 2010
4. Differentiate between TLP and RLP. Nov 2010
5. Differentiate between desktop, embedded, and server computers.
6. Define CPU time, throughput, and execution time.
7. What are Dhrystone benchmarks?
8. What are Whetstone benchmarks?
9. What are the different levels of programs used for evaluating the performance of a machine?
10. What is SPEC?
11. What is a kernel?
12. What is the principle of locality? May 2012
13. Mention the differences between desktop, embedded, and server benchmarks.
14. Define Total execution time.
15. Define Weighted execution time.
16. Define Normalized execution time.
17.  State Amdahl’s law.
18. Give the CPU performance equation and define the terms CPI and instruction count.
19. What is the principle of locality?
20. List the classes of computers. May 2011, Nov 2011
21. What are the various classes of instruction set architecture?
22. What are little-endian and big-endian byte orderings?
23. What are effective addressing and PC-relative addressing?
24. What is the principle of locality. Nov 2012
25. Define Amdahl's law. May 2011
26. What are the various addressing modes?
27. What are modulo and bit reverse addressing modes?
28. Comment on the types and sizes of operands.
29. Explain the operands for media and signal processing.

30. Give the various categories of instruction operators, with an example for each.
31. Comment on the operations for media and signal processing.
32. What are the different types of control-flow instructions?
33. Give the major methods of evaluating branch conditions, with their advantages and disadvantages.
34. Explain instruction encoding and its types.
35. What are the various compiler optimizations available?
36. What is the difference between DLP and TLP? Nov 2012
37. Compare the MIPS and TM32 processors.
38. What is a vector processor?
39. What is Flynn's taxonomy?
40. Explain the various methods by which data-level parallelism is obtained.
41. Compare RISC and CISC machines.
42. Differentiate von Neumann and Harvard architectures.
43. What is pipelining?
44. What are the basics of the RISC instruction set architecture?
45. What are the different stages of pipelined architecture?
46. Briefly describe basic performance issues in pipelining?
47. What are hazards? Mention its types?
48. How data hazards can be minimized?
49. What are structural hazards? How can they be minimized?
50. What are control hazards?
51. How is pipelining implemented?
52. What makes pipelining hard to implement?
53.  What is latency? Nov 2012
54. What is a reservation table?
55. What are forbidden and permissible latencies? Give an example.
56. What are contact cycles?
57.  What is collision vector?
58. Explain pipeline throughput and efficiency.
59.  How do you compute pipeline CPI?
60.  What is a basic block? May 2012
61.  What is ILP?
62.  What are forwarding and bypassing techniques?
63. What is loop-level parallelism?
64. What are the various dependences? How can they be overcome?
65. How can hazards be avoided?
66. What are the different name dependences? Nov 2012
67.  What is a control dependence?
68.  What is a data dependence?
69. What is dynamic scheduling? Compare dynamic scheduling with static pipeline scheduling.
70. Differentiate in-order and out-of-order execution of instructions.
71.  What is imprecise exception?
72. Explain Tomasulo's algorithm briefly.
73. Explain WAR hazards.
74. Explain WAW hazards.
75. Explain RAW hazards.
76. What is a reservation station? Mention its fields. Nov 2010
77. Give the merits of Tomasulo's algorithm.
78. How can control dependences be removed?

79. Compare 1-bit and 2-bit prediction schemes.


80. Give the merits and demerits of the 2-bit prediction scheme.
81.  What are correlating branch predictors?
82.  What is register renaming?
83.  What is commit stage?
84. How can more ILP be exploited with multiple issue?
85. Compare superscalar and VLIW processors.
86.  What are statically scheduled superscalar processors?
87. How is multiple instruction issue handled by dynamic scheduling?
88. What are the limitations of ILP?
89. Explain the P6 microarchitecture.
90. Compare Pentium III and Pentium IV processors.
91. What is thread-level parallelism (TLP)?
92. Explain how to exploit TLP using an ILP datapath.
93. Give the practical limitations on exploiting more ILP.
94. What is the role of compiler in exploiting ILP?
95. Give the typical latencies of FP operations and loads and stores.
96. What is loop unrolling? Nov 2011
97. Give the summary of loop unrolling and scheduling.
98. What is register pressure?
99. How can loop unrolling and pipeline scheduling be used with static multiple issue?
100. What is static branch prediction?
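Several of the two-mark questions above (e.g., 3, 17, 18, and 59) rest on the unit's quantitative formulas. A minimal sketch of Amdahl's law and the CPU performance equation is given below; the function names and numeric values are illustrative assumptions for demonstration, not part of the question bank.

```python
# Illustrative sketch of Amdahl's law and the CPU performance equation.
# All names and example values are assumptions chosen for demonstration.

def amdahl_speedup(fraction_enhanced: float, speedup_enhanced: float) -> float:
    """Overall speedup when only a fraction of execution time is enhanced."""
    return 1.0 / ((1.0 - fraction_enhanced) + fraction_enhanced / speedup_enhanced)

def cpu_time(instruction_count: float, cpi: float, clock_rate_hz: float) -> float:
    """CPU time = instruction count x cycles per instruction / clock rate."""
    return instruction_count * cpi / clock_rate_hz

# Enhancing 80% of a program by a factor of 4 gives an overall speedup of
# about 2.5, not 4 -- the unenhanced 20% limits the gain.
overall = amdahl_speedup(0.8, 4)       # ~2.5

# 10^9 instructions at a CPI of 2 on a 1 GHz clock take 2 seconds.
seconds = cpu_time(1e9, 2.0, 1e9)      # 2.0
```

Note how the speedup saturates: even with an infinite enhancement factor, the speedup here could never exceed 1/(1 - 0.8) = 5.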

PART-B: 16 MARK QUESTIONS


1. Explain the different classes of parallelism and parallel architectures.
2. What are the important functional requirements that an architect faces?
3. Explain the trends in implementation technologies and how they change design.
4. Explain the different factors that have an impact on cost.
5. With reference to linear pipelines, explain pipelining in detail.
6. With reference to non-linear pipelines, explain pipelining with latency analysis; make use of relevant state diagrams wherever required.
7. Explain in detail the limitations of ILP, with special mention of realizable processors.
8. Identify and justify the following fallacy/pitfall: processors with a lower CPI will always be faster.
9. Identify and justify the following fallacy/pitfall: processors with a faster clock rate will always be better.
10. With suitable illustrative examples, explain how compiler techniques can be exploited for achieving ILP.
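For Part B question 10 and two-mark questions 96-99, the compiler transformation of loop unrolling can be sketched as below. This is a hedged illustration in Python of the source-level idea only (real unrolling is applied by the compiler at the instruction level to expose ILP); the function names are assumptions for this example.

```python
# Sketch of loop unrolling: both functions compute the same sum, but the
# unrolled version performs four additions per loop iteration, reducing the
# per-element loop-control overhead (increment, compare, branch).

def sum_rolled(data):
    """Baseline loop: one element per iteration."""
    total = 0
    for i in range(len(data)):
        total += data[i]
    return total

def sum_unrolled_by_4(data):
    """Loop unrolled by a factor of 4, with a cleanup loop for leftovers."""
    total = 0
    n = len(data)
    i = 0
    while i + 4 <= n:          # main unrolled body: 4 elements per trip
        total += data[i]
        total += data[i + 1]
        total += data[i + 2]
        total += data[i + 3]
        i += 4
    while i < n:               # cleanup loop for the remaining 0-3 elements
        total += data[i]
        i += 1
    return total
```

The cleanup loop is the standard answer to "how is a trip count not divisible by the unroll factor handled?"; the growth in body size also illustrates register pressure (question 98), since an instruction-level version needs more registers to hold the independent partial values.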
