ACHARIYA
COLLEGE OF ENGINEERING TECHNOLOGY
(Approved by AICTE New Delhi & Affiliated to Pondicherry University)
An ISO 9001 : 2008 Certified Institution
DEPARTMENT OF ARTIFICIAL INTELLIGENCE & DATA SCIENCE
CONTINUOUS ASSESSMENT TEST-II
ACADEMIC YEAR 2023-24
Subject Name: COA                Time: 2 Hrs
Subject Code: ADPC407            Max. Marks: 50
Date of Exam: ..2024             Year / Sem: II / IV
Course Outcomes:
CO3: To analyze the execution time taken in a pipelined processor.
CO4: To understand the need for a memory hierarchy and the efficiency achieved through the use of cache.
Knowledge Level: K1-Knowledge, K2-Understand, K3-Apply, K4-Analyze, K5-Synthesis, K6-Evaluate
Q.No    Marks    CO's    B.L
PART A (10 × 2 = 20 Marks)
Answer all the Questions
1. What is a preprocessor? 2 CO3 K1
2. Define instruction pipelining. 2 CO3 K1
3. Define hazards. 2 CO3 K1
4. List the types of hazards. 2 CO3 K2
5. List the limitations of ILP. 2 CO3 K1
6. What is locality of reference? 2 CO4 K1
7. Define caching. 2 CO4 K1
8. Define write-back caches. 2 CO4 K1
9. What are the policies for cache replacement? 2 CO4 K1
10. Define read/write requests in a cache. 2 CO4 K1
PART B (3 × 10 = 30 Marks)
Answer ANY THREE Questions
11. Explain in detail Amdahl's Law. 10 CO3 K1, K2
12. Describe pipelining and hazards. 10 CO3 K2
13. Explain the mechanisms of cache memory. 10 CO4 K1
14. Describe cache replacement policies. 10 CO4 K2
ANSWER KEY
PART A
1. Preprocessing is the first step in translating the user's code to machine code: it removes comments, expands included files and macros, and performs any other source-level maintenance before handing the file to the compiler.
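For illustration, a minimal C example of what the preprocessor handles before compilation (the macro name SQUARE and the values are purely illustrative):

    #include <stdio.h>              /* the preprocessor copies the contents of stdio.h here */

    #define SQUARE(x) ((x) * (x))   /* a function-like macro, expanded textually */

    int main(void)
    {
        /* This comment is stripped by the preprocessor before compilation. */
        printf("%d\n", SQUARE(5));  /* expands to ((5) * (5)) */
        return 0;
    }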
2. An instruction pipeline fetches sequential instructions from memory while earlier instructions are still being executed in other stages of the pipeline. Pipeline processing can be applied to both the instruction stream and the data stream.
3. Hazards are problems in the instruction pipeline of a CPU microarchitecture that arise when the next instruction cannot execute in the following clock cycle; they can potentially lead to incorrect computation results.
4. Data hazards
Structural hazards
Control hazards
5. Instructions with data dependencies cannot be executed in parallel, leading to potential stalls in the pipeline. Speculative execution relies on accurate branch prediction, so a branch misprediction forces the speculatively executed work to be discarded.
6. Locality of reference refers to the phenomenon in which a computer program tends to access the same set of memory locations over a particular period of time. In other words, locality of reference refers to the tendency of a program to access instructions and data whose addresses are near one another.
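A small C sketch of the idea: the loop below reuses the same accumulator on every iteration (temporal locality) and walks the array through consecutive addresses (spatial locality). The array size N is an arbitrary assumption.

    #include <stdio.h>

    #define N 1024                  /* arbitrary array size for the sketch */

    int main(void)
    {
        static int a[N];
        long sum = 0;               /* 'sum' is reused every iteration: temporal locality */

        for (int i = 0; i < N; i++)
            sum += a[i];            /* consecutive addresses a[0], a[1], ...: spatial locality */

        printf("%ld\n", sum);
        return 0;
    }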
7. Cache memory is a special very high-speed memory. The cache is a smaller, faster memory that stores copies of the data from frequently used main memory locations. A CPU contains several independent caches, which store instructions and data.
8. In a write-back cache, data is updated only in the cache and written to main memory at a later time. Memory is updated only when the cache line is about to be replaced (the line to evict is chosen by a replacement policy such as Bélády's optimal algorithm, Least Recently Used (LRU), FIFO, or LIFO, depending on the application).
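A minimal sketch, in C, of the write-back idea for a single cache line: writes only set a dirty bit, and main memory is updated when the line is evicted. The structure and function names (cache_line, cache_write, cache_evict) are hypothetical.

    #include <stdbool.h>
    #include <stdio.h>

    struct cache_line {
        int  tag;                   /* which memory block the line currently holds */
        int  data;
        bool valid;
        bool dirty;                 /* set on write; memory is updated only on eviction */
    };

    static int memory[16];          /* toy main memory */

    /* A write goes to the cache only; the dirty bit records the pending update. */
    void cache_write(struct cache_line *line, int tag, int value)
    {
        line->tag = tag;
        line->data = value;
        line->valid = true;
        line->dirty = true;         /* memory[tag] is now stale */
    }

    /* On eviction, a dirty line is written back to main memory. */
    void cache_evict(struct cache_line *line)
    {
        if (line->valid && line->dirty)
            memory[line->tag] = line->data;   /* the deferred update */
        line->valid = false;
        line->dirty = false;
    }

    int main(void)
    {
        struct cache_line line = {0};
        cache_write(&line, 3, 42);            /* memory[3] is NOT updated yet */
        printf("before eviction: memory[3] = %d\n", memory[3]);
        cache_evict(&line);                   /* write-back happens here */
        printf("after eviction:  memory[3] = %d\n", memory[3]);
        return 0;
    }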
9. In computing, cache replacement policies (also known as cache replacement algorithms) are optimizing rules or algorithms that a computer program or hardware-maintained structure can use to manage a cache of information.
10. Read Cache – the cache services requests for read I/O only. If the data isn't in the cache, it is read from
persistent storage (also known as the backing store). Write Cache – all new data is written to the cache
before a subsequent offload to persistent media.
PART B
11. It is a formula that gives the theoretical speedup in latency of the execution of a task at a fixed
workload that can be expected of a system whose resources are improved. In other words, it is a
formula used to find the maximum improvement possible by just improving a particular part of a
system.
The formula for Amdahl's law is:
S = 1 / ((1 - P) + P / N)
where:
S is the overall speedup of the system,
P is the proportion of the execution time that benefits from the improvement, and
N is the factor by which that portion is improved (for example, the number of processors).
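As a worked example, if P = 0.8 of a task can be improved and N = 4, the formula gives S = 1 / (0.2 + 0.8/4) = 2.5. A short C sketch of the same computation (the sample values of P and N are assumptions):

    #include <stdio.h>

    /* Amdahl's law: speedup S = 1 / ((1 - P) + P / N) */
    double amdahl_speedup(double p, double n)
    {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main(void)
    {
        double p = 0.8;   /* assumed fraction of the task that is improved */
        double n = 4.0;   /* assumed improvement factor (e.g., number of processors) */
        printf("Speedup = %.2f\n", amdahl_speedup(p, n));   /* prints 2.50 */
        return 0;
    }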
12. Pipeline hazards are conditions in a pipelined machine that impede the execution of the next instruction in a particular clock cycle (an example follows the list below).
Types of Pipeline Hazards in Computer Architecture
The three types of hazards in computer architecture are:
1. Structural
2. Data
3. Control
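As an illustration of a data (read-after-write) hazard, the C fragment below compiles to two dependent operations: the second statement cannot read y before the first has produced it, so a simple pipeline would have to stall or forward the value. The variable names are arbitrary.

    #include <stdio.h>

    int main(void)
    {
        int x = 10;
        int y = x + 5;   /* produces y */
        int z = y * 2;   /* reads y immediately: a RAW dependence on the previous line;
                            in a pipeline this read must wait for (or forward) the result */
        printf("%d\n", z);
        return 0;
    }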
13. Cache memory is a special very high-speed memory. The cache is a smaller, faster memory that stores copies of the data from frequently used main memory locations. A CPU contains several independent caches, which store instructions and data. The most important use of cache memory is to reduce the average time to access data from the main memory (a sketch of a direct-mapped lookup follows the lists below).
Characteristics of Cache Memory
Cache memory is an extremely fast memory type that acts as a buffer between RAM and the CPU.
Cache memory holds frequently requested data and instructions so that they are immediately available to the CPU when needed.
Cache memory is costlier than main memory or disk memory but more economical than CPU registers.
Cache memory is used to speed up and synchronize with a high-speed CPU.
Levels of Memory
Level 1 or Registers: registers hold the data and instructions on which the CPU is working at that instant. Commonly used registers include the accumulator, program counter, and address registers.
Level 2 or Cache memory: the fastest memory after the registers; data is temporarily stored here for faster access.
Level 3 or Main memory: the memory on which the computer currently works. It is smaller than secondary storage, and data does not stay in it once power is off.
Level 4 or Secondary memory: external storage that is not as fast as main memory, but data stays in it permanently.
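A minimal sketch, assuming a direct-mapped cache with a handful of one-word lines, of the lookup mechanism: the address is split into an index (which line to check) and a tag (which block is stored there); a valid matching tag is a hit, otherwise the word is fetched from main memory. All sizes and names here are illustrative.

    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_LINES 4                 /* illustrative: 4 one-word cache lines */
    #define MEM_WORDS 64                /* illustrative toy main memory */

    struct line { int tag; int data; bool valid; };

    static struct line cache[NUM_LINES];
    static int memory[MEM_WORDS];

    /* Look up an address: hit if the indexed line holds a valid matching tag,
       otherwise fetch the word from main memory and fill the line (a miss). */
    int cache_read(int addr, bool *hit)
    {
        int index = addr % NUM_LINES;   /* which line the address maps to */
        int tag   = addr / NUM_LINES;   /* identifies the block within that line */

        if (cache[index].valid && cache[index].tag == tag) {
            *hit = true;
            return cache[index].data;
        }
        *hit = false;                   /* miss: bring the word in from memory */
        cache[index] = (struct line){ .tag = tag, .data = memory[addr], .valid = true };
        return cache[index].data;
    }

    int main(void)
    {
        for (int i = 0; i < MEM_WORDS; i++) memory[i] = i * 10;

        bool hit;
        cache_read(5, &hit);  printf("addr 5: %s\n", hit ? "hit" : "miss");  /* miss */
        cache_read(5, &hit);  printf("addr 5: %s\n", hit ? "hit" : "miss");  /* hit  */
        return 0;
    }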
14. Cache replacement policies, frequently called cache replacement algorithms, are optimizing rules or algorithms that a computer program or hardware structure can use to manage the cached information. Simply put, a replacement policy defines which existing cache entry should be evicted to make room for a new one when the cache is full; common examples are LRU, FIFO, and LIFO.
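As an illustration of one such policy, the C sketch below implements least-recently-used (LRU) replacement for a tiny fully associative cache of block tags: on a miss with a full cache, the entry that has gone unused the longest is evicted. The cache size and the access trace are assumptions.

    #include <stdio.h>

    #define WAYS 3                      /* illustrative: 3-entry fully associative cache */

    static int tags[WAYS];
    static int last_used[WAYS];         /* timestamp of the most recent access per entry */
    static int count = 0;               /* how many entries are filled */
    static int clock_tick = 0;

    /* Access a block tag; returns 1 on hit, 0 on miss (with LRU eviction if full). */
    int access_block(int tag)
    {
        clock_tick++;
        for (int i = 0; i < count; i++) {
            if (tags[i] == tag) { last_used[i] = clock_tick; return 1; }   /* hit */
        }
        if (count < WAYS) {                        /* free slot: no eviction needed */
            tags[count] = tag; last_used[count] = clock_tick; count++;
            return 0;
        }
        int victim = 0;                            /* evict the least recently used entry */
        for (int i = 1; i < WAYS; i++)
            if (last_used[i] < last_used[victim]) victim = i;
        tags[victim] = tag; last_used[victim] = clock_tick;
        return 0;
    }

    int main(void)
    {
        int trace[] = {1, 2, 3, 1, 4, 2};          /* assumed access trace */
        for (int i = 0; i < 6; i++)
            printf("block %d: %s\n", trace[i], access_block(trace[i]) ? "hit" : "miss");
        return 0;
    }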