Document 90

The document discusses various computer architecture concepts including binary multiplication, static RAM operation, instruction pipelines, addressing modes, Flynn's classification, cache memory properties, and cache coherence. It provides detailed explanations and examples for each topic, including the workings of different types of caches and the necessity of refreshing DRAM cells. Additionally, it includes calculations related to pipelined processors and cache configurations.

1. Give the flowchart for the multiplication of two binary numbers and explain?

--->

1. Initially, the multiplicand is stored in register B and the multiplier in register Q.
2. The signs of registers B (Bs) and Q (Qs) are compared using an XOR operation (if both signs are alike the output is 0, otherwise it is 1) and the output is stored in As (the sign of register A).

Note: Initially, register A and the E flip-flop are cleared to 0. The sequence counter (SC) is initialized to n, where n is the number of bits in the multiplier.

3. The least significant bit of the multiplier (Qn) is checked. If it is 1, the multiplicand (register B) is added to register A; the result is placed in register A with the carry bit in flip-flop E. The contents of E, A and Q are then shifted right by one position, i.e., E is shifted into the most significant bit (MSB) of A, and the least significant bit of A is shifted into the most significant bit of Q.
4. If Qn = 0, only the right shift of E, A and Q is performed, in the same fashion.
5. The sequence counter is decremented by 1.
6. The sequence counter (SC) is checked: if it is 0, the process ends and the final product is in registers A and Q; otherwise the process is repeated from step 3 (a code sketch of this procedure is given below).
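A minimal Python sketch (not part of the original answer) of this shift-and-add procedure, assuming unsigned n-bit operands; the Bs XOR Qs sign handling from step 2 is omitted for brevity.

```python
def multiply(multiplicand, multiplier, n):
    """Shift-and-add multiplication with registers A, B, Q and flip-flop E."""
    B = multiplicand
    Q = multiplier
    A, E = 0, 0              # register A and flip-flop E are cleared initially
    for _ in range(n):       # sequence counter SC counts n iterations
        if Q & 1:            # Qn = 1: add multiplicand to A, carry goes to E
            total = A + B
            E = (total >> n) & 1
            A = total & ((1 << n) - 1)
        # shift E A Q right by one position
        Q = ((A & 1) << (n - 1)) | (Q >> 1)
        A = (E << (n - 1)) | (A >> 1)
        E = 0                # a 0 is shifted into E
    return (A << n) | Q      # the 2n-bit product ends up in A and Q

print(multiply(9, 5, 4))     # 45
```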

2. Explain the working principle of a static RAM cell with a proper diagram?

---> Working Principle:

An SRAM cell stores one bit in a pair of cross-coupled inverters (a latch), which is connected to the bit lines BL and BL' through two access transistors controlled by the word line (WL).

1. Write Operation:
a. The word line (WL) is activated, enabling the access transistors.
b. The bit lines (BL and BL') carry the data and overwrite the latch's state.
c. The latch retains the written value when WL is deactivated.
2. Read Operation:
a. WL is activated, allowing the latch to drive the bit lines.
b. The stored value is read as the voltage difference between BL and BL'. (A toy behavioral sketch of this protocol follows.)
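As an illustration only (a behavioral toy model, not a circuit-level simulation), the Python sketch below treats the cell as a latch that responds to the bit lines only while the word line is asserted:

```python
class SRAMCell:
    """Toy model: cross-coupled latch plus word-line-gated access."""
    def __init__(self):
        self.q = 0                        # state held by the latch

    def write(self, word_line, bl):
        if word_line:                     # WL active: bit line overwrites the latch
            self.q = bl

    def read(self, word_line):
        if word_line:                     # WL active: latch drives BL and BL'
            return self.q, 1 - self.q
        return None                       # WL inactive: cell is isolated

cell = SRAMCell()
cell.write(word_line=1, bl=1)
print(cell.read(word_line=1))             # (1, 0): BL high, BL' low
```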

3. Explain the difference between an Instruction Pipeline and an Arithmetic Pipeline?

--->

4. Explain the difference between three-address, two-address, one-address and zero-address instructions using suitable examples?

--->
5. Given a non-pipelined processor with a 15 ns clock period, how many stages are required in the pipelined version of the processor to achieve a clock period of 4 ns? Assume each interface latch has a delay of 0.5 ns.

--->
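A worked sketch of the calculation (assuming each pipeline stage must fit its share of logic plus one 0.5 ns interface latch within the 4 ns clock period):

```python
import math

non_pipelined_period = 15.0   # ns of logic in the unpipelined processor
target_period = 4.0           # ns desired pipelined clock period
latch_delay = 0.5             # ns per interface latch

logic_per_stage = target_period - latch_delay              # 3.5 ns of logic per stage
stages = math.ceil(non_pipelined_period / logic_per_stage)
print(stages)                                               # 5
```

So each stage may hold at most 4 - 0.5 = 3.5 ns of logic, and ceil(15 / 3.5) = 5 stages are required.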
6. Briefly describe the following addressing modes with an example: a) implied b) direct c) immediate d) register indirect e) base register

-->
7. Explain Flynn’s classification with a diagrammatic representation?

--> 1. Single-instruction, single-data (SISD) systems –
An SISD computing system is a uniprocessor machine capable of executing a single instruction operating on a single data stream. In SISD, machine instructions are processed sequentially, and computers adopting this model are popularly called sequential computers. Most conventional computers have the SISD architecture. All the instructions and data to be processed have to be stored in primary memory.

2. Single-instruction, multiple-data (SIMD) systems –
An SIMD system is a multiprocessor machine capable of executing the same instruction on all of its CPUs but operating on different data streams. Machines based on the SIMD model are well suited to scientific computing, since it involves many vector and matrix operations. The data elements of a vector can be divided into multiple sets (N sets for N processing elements), so that each processing element (PE) can process one data set.

3. Multiple-instruction, single-data (MISD) systems –
An MISD computing system is a multiprocessor machine capable of executing different instructions on different PEs, with all of them operating on the same data set.
Example: Z = sin(x) + cos(x) + tan(x)
The system performs different operations on the same data set. Machines built using the MISD model are not useful in most applications; a few machines have been built, but none of them are available commercially.

4. Multiple-instruction, multiple-data (MIMD) systems –
An MIMD system is a multiprocessor machine capable of executing multiple instructions on multiple data sets. Each PE in the MIMD model has separate instruction and data streams; therefore, machines built using this model are capable of handling any kind of application. Unlike SIMD and MISD machines, the PEs in MIMD machines work asynchronously. A short code sketch contrasting these models is given below.
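The sketch below (illustrative only; NumPy arrays stand in for the data streams) contrasts SISD-style element-at-a-time execution with SIMD-style whole-array execution, and evaluates the MISD example Z = sin(x) + cos(x) + tan(x) from the text:

```python
import numpy as np

x = np.array([0.1, 0.2, 0.3, 0.4])       # one data stream

# SISD-style: one instruction applied to one data element at a time.
y_sisd = [float(np.sin(xi)) for xi in x]

# SIMD-style: the same instruction applied to every element of the stream.
y_simd = np.sin(x)

# MISD-style example from the text: different operations on the same data set.
z = np.sin(x) + np.cos(x) + np.tan(x)

print(y_sisd, y_simd, z)
```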

8. Why does a DRAM cell need refreshing?

----> DRAM cells use capacitors to store data, but these capacitors lose charge over time due to leakage. If the cells were not refreshed, the data would be lost. Refreshing involves reading and rewriting the data to restore the charge in the capacitors, ensuring data integrity. This process is repeated every few milliseconds to maintain the stored information.
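As a toy illustration (the 5% leak rate, 0.5 read threshold, and 8 ms refresh interval are arbitrary assumptions), the sketch below shows how periodically rewriting the bit keeps the capacitor charge above the level the sense amplifier needs:

```python
charge = 1.0                  # normalized capacitor charge for a stored '1'
leak_per_ms = 0.05            # assumed fractional leakage per millisecond
refresh_interval_ms = 8       # assumed refresh period
threshold = 0.5               # below this, the bit can no longer be read as '1'

for ms in range(1, 65):
    charge -= leak_per_ms * charge          # charge leaks away over time
    if ms % refresh_interval_ms == 0:
        charge = 1.0                        # refresh: read and rewrite the bit
    assert charge > threshold               # data integrity is maintained

print("bit retained for 64 ms with periodic refresh")
```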

9. Explain immediate, direct, implied, register indirect and relative addressing modes with examples?

 --> Immediate Addressing Mode:
 The operand is specified directly in the instruction.
 The value is provided immediately; it is not stored in memory or a register.
 Example:
o Instruction: MOV R1, #5
o Meaning: Move the immediate value 5 into register R1.

 Direct Addressing Mode:
 The address of the operand is directly specified in the instruction.
 The instruction contains the memory address where the data is located.
 Example:
o Instruction: MOV R1, 1000
o Meaning: Move the value at memory location 1000 into register R1.

 Implied Addressing Mode:
 The operand is implicitly defined by the instruction itself.
 No explicit address or operand is provided; the operation is assumed.
 Example:
o Instruction: CLC
o Meaning: Clear the carry flag (the operand is implicit and not specified).

 Register Indirect Addressing Mode:
 The operand’s address is stored in a register, and the instruction uses that register to point to the memory location.
 The register contains the address of the operand.
 Example:
o Instruction: MOV R1, [R2]
o Meaning: Move the value from the memory location whose address is in register R2 into register R1.

 Relative Addressing Mode:
 The operand’s address is specified relative to the current instruction’s address (often as a program counter offset).
 This mode is commonly used for branch or jump instructions.
 Example:
o Instruction: BEQ 10
o Meaning: Branch to the instruction located 10 bytes away from the current instruction if the condition is true (e.g., if a previous comparison resulted in equality).

A small simulation of these modes is sketched below.
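The following Python sketch (a hypothetical register/memory model, not real assembly) simulates how each of these modes locates its operand:

```python
registers = {"R1": 0, "R2": 1000, "PC": 2000}
memory = {1000: 42, 2010: "branch target"}

# Immediate (MOV R1, #5): the operand is the constant in the instruction.
registers["R1"] = 5

# Direct (MOV R1, 1000): the instruction carries the operand's memory address.
registers["R1"] = memory[1000]

# Implied (CLC): the operand (the carry flag) is implicit in the opcode.
carry_flag = 0

# Register indirect (MOV R1, [R2]): R2 holds the address of the operand.
registers["R1"] = memory[registers["R2"]]

# Relative (BEQ 10): effective address = program counter + offset.
effective_address = registers["PC"] + 10
print(registers["R1"], carry_flag, memory[effective_address])
```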

10. Explain the "NINE" property of cache memory and cache coherence?

---> In a multi-level cache hierarchy, NINE stands for Non-Inclusive Non-Exclusive. It describes the inclusion policy between two cache levels (for example, L1 and L2):

1. Inclusive: every block present in the upper-level cache (L1) is guaranteed to also be present in the lower-level cache (L2). This simplifies coherence checks but duplicates data and wastes lower-level capacity.
2. Exclusive: a block is present in at most one of the two levels, so the levels never duplicate each other's contents and the effective capacity is maximized.
3. NINE (Non-Inclusive Non-Exclusive): neither guarantee is enforced. A block brought in on a miss is typically placed in both levels, but evicting a block from L2 does not force its eviction from L1, so a block in L1 may or may not also be present in L2. This avoids the back-invalidation traffic of strict inclusion and the extra bookkeeping of strict exclusion.

Cache Coherence:

Cache coherence refers to the consistency mechanism used in multi-core processors to ensure that all processors (or cores) in a system observe a consistent view of memory. In a multi-processor or multi-core system, each processor has its own local cache. If multiple processors cache the same memory location and one processor updates the data in its cache, inconsistencies can arise where the other processors’ caches still hold the old value. The toy example below illustrates this problem.
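A toy illustration of the problem (hypothetical address and values, with no coherence protocol modeled):

```python
main_memory = {0x100: 7}
cache_core0 = {0x100: main_memory[0x100]}   # both cores cache address 0x100
cache_core1 = {0x100: main_memory[0x100]}

cache_core0[0x100] = 99                      # core 0 updates its cached copy

# Without a coherence mechanism, core 1 still reads the stale value 7;
# a coherence protocol would invalidate or update core 1's copy.
print(cache_core1[0x100])                    # 7
```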

11. A computer has 512 KB of cache memory and 2 MB of main memory. If the block size is 64 bytes, find the address subfields for:

a. Direct Mapped Cache  b. Associative Cache  c. 8-way Set Associative Cache

 Cache size: 512 KB = 524,288 bytes
 Main memory size: 2 MB = 2,097,152 bytes, so the physical address is 21 bits
 Block size: 64 bytes

a. Direct Mapped Cache:

 Number of cache blocks: 524,288 / 64 = 8,192
 Index bits: log2(8,192) = 13 bits
 Block offset bits: log2(64) = 6 bits
 Tag bits: 21 - 13 - 6 = 2 bits

Subfields:

 Tag: 2 bits
 Index: 13 bits
 Block offset: 6 bits

b. Associative Cache:

 Number of cache blocks: 8,192 (same as above)
 Block offset bits: 6 bits
 Tag bits: 21 - 6 = 15 bits

Subfields:

 Tag: 15 bits
 Block offset: 6 bits
c. 8-way Set Associative Cache:

 Number of sets: 8,192 / 8 = 1,024
 Index bits: log2(1,024) = 10 bits
 Block offset bits: 6 bits
 Tag bits: 21 - 10 - 6 = 5 bits

Subfields:

 Tag: 5 bits
 Index: 10 bits
 Block offset: 6 bits

The short script below reproduces these calculations.
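The following Python script (illustrative only) reproduces the subfield calculations above for all three mapping schemes:

```python
import math

cache_size = 512 * 1024          # bytes
main_memory_size = 2 * 1024**2   # bytes -> 21-bit physical address
block_size = 64                  # bytes

address_bits = int(math.log2(main_memory_size))   # 21
offset_bits = int(math.log2(block_size))          # 6
blocks = cache_size // block_size                  # 8,192

for name, ways in [("Direct mapped", 1),
                   ("Fully associative", blocks),
                   ("8-way set associative", 8)]:
    sets = blocks // ways
    index_bits = int(math.log2(sets)) if sets > 1 else 0
    tag_bits = address_bits - index_bits - offset_bits
    print(f"{name}: tag={tag_bits}, index={index_bits}, offset={offset_bits}")
```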

12. A five-stage pipeline has stage delays of 150, 120, 160, 130, and 140 ns, respectively. Registers are used between the stages, each with a delay of 5 ns. Assuming a constant clock rate, what will be the total time to process 1,000 data items on the pipeline?

--->
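A worked sketch of the timing (assuming the clock period equals the slowest stage delay plus one 5 ns inter-stage register, the first result appears after 5 cycles, and one new result completes every cycle thereafter):

```python
stage_delays = [150, 120, 160, 130, 140]   # ns
register_delay = 5                          # ns per inter-stage register
items = 1000

clock_period = max(stage_delays) + register_delay         # 165 ns
total_time = (len(stage_delays) + items - 1) * clock_period
print(clock_period, total_time)             # 165 ns, 165660 ns
```

This gives a clock period of 165 ns and a total time of (5 + 1000 - 1) x 165 ns = 165,660 ns, or about 165.7 microseconds.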
