
Detailed Notes on Computer Organization and Design

Unit I: Register Transfer Language and Computer Organization & Design

1. Register Transfer Language (RTL)

RTL is a symbolic way of describing how data is transferred between registers within a computer system. Registers are small storage units in the CPU used to store and transfer data quickly. RTL not only shows the movement of data but also includes the control signals necessary for that movement, abstracting the underlying hardware into a simple, readable language. Control signals and timing determine when each transfer takes place.
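
As a rough sketch (not part of the original notes), the conditional transfer written in RTL as P: R2 <- R1 can be modelled in Python, where P stands for a control signal generated by the control unit:

# Minimal sketch of the RTL statement "P: R2 <- R1" (illustrative only).
# P is a control signal; the transfer happens only when P is asserted.
registers = {"R1": 0x3A, "R2": 0x00}

def transfer(dest, src, control_signal):
    """Copy src into dest only while the control signal is active."""
    if control_signal:
        registers[dest] = registers[src]

transfer("R2", "R1", control_signal=True)
print(registers)  # {'R1': 58, 'R2': 58} -- R2 now holds a copy of R1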

2. Bus and Memory Transfer

In computer systems, the bus is a communication system that transfers data between components such as the CPU, memory, and I/O devices. Multiplexing and tri-state buffers are used in bus architecture to avoid conflicts and allow efficient data transfer between multiple components.
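
A minimal sketch of the multiplexer idea, assuming a four-register system with a 2-bit select input (the register names and values are illustrative):

# Illustrative bus built from a multiplexer: the select lines choose
# which register is allowed to place its contents on the shared bus.
registers = {"R0": 0x11, "R1": 0x22, "R2": 0x33, "R3": 0x44}

def bus(select):
    """Return the value driven onto the bus by the selected register."""
    source = f"R{select}"          # 2-bit select -> one of four registers
    return registers[source]

# Load R3 from R1 via the bus in one "clock cycle".
registers["R3"] = bus(select=1)
print(hex(registers["R3"]))        # 0x22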

3. Micro-Operations: Arithmetic, Logical, and Shift

Micro-operations are operations performed on the data stored in registers. Arithmetic operations such as addition and subtraction, logical operations such as AND and OR, and shift operations are executed by the Arithmetic Logic Shift Unit (ALSU) in the CPU.
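
The following sketch imitates a few ALSU micro-operations on 8-bit register values; the bit width and the particular operations shown are just illustrative choices:

# Sketch of arithmetic, logic, and shift micro-operations on 8-bit values.
MASK = 0xFF  # keep results within an 8-bit register

def add(a, b):   return (a + b) & MASK      # arithmetic micro-operation
def sub(a, b):   return (a - b) & MASK      # two's-complement subtraction
def land(a, b):  return a & b               # logical AND
def lor(a, b):   return a | b               # logical OR
def shl(a):      return (a << 1) & MASK     # logical shift left
def shr(a):      return a >> 1              # logical shift right

r1, r2 = 0b1100_1010, 0b0101_0011
print(bin(add(r1, r2)), bin(land(r1, r2)), bin(shl(r1)))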


4. Instruction Codes

Instruction codes are binary codes that represent specific operations (e.g., ADD, SUB) in machine language. These instructions are fetched, decoded, and executed during the instruction cycle.
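
As an illustration only, a 16-bit instruction word with a 4-bit opcode and a 12-bit address field could be encoded and decoded like this (the field widths and opcode table are assumptions, not a real instruction set):

# Hypothetical 16-bit instruction format: 4-bit opcode + 12-bit address.
# The field widths and opcode values are assumptions for illustration.
OPCODES = {0b0001: "ADD", 0b0010: "SUB", 0b0011: "LDA", 0b0100: "STA"}

def encode(opcode, address):
    return (opcode << 12) | (address & 0xFFF)

def decode(instruction):
    opcode  = (instruction >> 12) & 0xF
    address = instruction & 0xFFF
    return OPCODES[opcode], address

word = encode(0b0001, 0x0A5)         # "ADD 0A5" as a binary instruction code
print(f"{word:016b}", decode(word))  # 0001000010100101 ('ADD', 165)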

5. General Purpose Registers

These registers store temporary data for fast access during program execution. They are part of the common bus system in a CPU and facilitate fast data transfer.

6. Instruction Cycle

The instruction cycle involves fetching an instruction, decoding it, executing it, and storing the result. The CPU repeats these steps to execute instructions efficiently.
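
A toy fetch-decode-execute loop, sketched under the assumption of a tiny memory that holds both instructions and data:

# Toy fetch-decode-execute loop (illustrative only; each "instruction"
# is a simple (opcode, operand) tuple rather than a binary word).
memory = [("LOAD", 5), ("ADD", 6), ("STORE", 7), ("HALT", 0), None, 10, 32, 0]
acc, pc, running = 0, 0, True

while running:
    opcode, operand = memory[pc]   # fetch (and trivially decode)
    pc += 1
    if   opcode == "LOAD":  acc = memory[operand]    # execute
    elif opcode == "ADD":   acc += memory[operand]
    elif opcode == "STORE": memory[operand] = acc    # store the result
    elif opcode == "HALT":  running = False

print(memory[7])  # 42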

7. Interrupt Cycle

Interrupts allow the CPU to handle external events (e.g., I/O requests) and resume normal operations afterward. Interrupts are critical for responsive system behavior.
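
A hedged sketch of the interrupt cycle: at the end of each instruction the CPU tests a pending-interrupt flag, saves the return address, and jumps to a service routine (all addresses below are made up):

# Sketch: the CPU checks a flag at the end of every instruction cycle; if an
# interrupt is pending it saves the return address and jumps to a handler.
interrupt_pending = False
pc, saved_pc, ISR_ADDRESS = 0x100, None, 0x020

def end_of_instruction_cycle():
    global pc, saved_pc, interrupt_pending
    if interrupt_pending:
        saved_pc = pc              # remember where to resume
        pc = ISR_ADDRESS           # transfer control to the service routine
        interrupt_pending = False

interrupt_pending = True           # e.g. an I/O device raised a request
end_of_instruction_cycle()
print(hex(pc), hex(saved_pc))      # 0x20 0x100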

8. Levels of Programming Languages

Machine language, assembly language, and high-level languages like C and Python provide increasing levels of abstraction from the hardware.

Unit II: Central Processing Unit and Memory Hierarchy


1. General Register Organization

A set of registers in the CPU stores temporary data during execution. These registers are connected to the ALU and control unit, allowing efficient processing.

2. Stack Organization

Stacks are memory structures that follow a last-in, first-out (LIFO) order. They are used for function calls, interrupt handling, and temporary data storage.
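
A minimal LIFO stack sketch using a small memory array and a stack pointer; the downward-growing layout is one common convention, assumed here for illustration:

# LIFO stack sketch using a fixed memory area and a stack pointer (SP).
# Here the stack grows downward from the top of a small memory array.
memory = [0] * 16
sp = len(memory)            # empty stack: SP just past the top

def push(value):
    global sp
    sp -= 1                 # decrement first, then store
    memory[sp] = value

def pop():
    global sp
    value = memory[sp]
    sp += 1
    return value

push(7); push(9)
print(pop(), pop())         # 9 7 -- last in, first out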

3. Instruction Format and Addressing Modes

The instruction format includes fields for the operation code (opcode) and the operands. Addressing modes define how operands are located in memory (immediate, direct, indirect, etc.).
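
The sketch below shows how the same operand field can be interpreted under immediate, direct, and indirect addressing; the memory contents are illustrative:

# Sketch of how one operand field is interpreted under different
# addressing modes (memory contents are made up for illustration).
memory = {0x30: 0x99, 0x40: 0x30}

def operand_value(mode, field):
    if mode == "immediate":          # the field itself is the operand
        return field
    if mode == "direct":             # the field is the operand's address
        return memory[field]
    if mode == "indirect":           # the field points to the address
        return memory[memory[field]]
    raise ValueError(mode)

print(hex(operand_value("immediate", 0x40)))  # 0x40
print(hex(operand_value("direct",    0x40)))  # 0x30
print(hex(operand_value("indirect",  0x40)))  # 0x99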

4. CPU vs GPU

The CPU is optimized for sequential processing, while the GPU excels at parallel computing tasks. GPUs are used for tasks like graphics rendering and machine learning.

5. Cache Memory and Performance

Cache is a high-speed memory that stores frequently accessed data, improving CPU performance by reducing access time. Cache mapping techniques include direct, associative, and set-associative mapping.
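
A direct-mapped cache can be sketched by splitting an address into tag, index, and offset fields; the block size and line count below are assumptions chosen for illustration:

# Direct-mapped cache sketch: the address is split into tag, index, and
# block offset. The sizes (16-byte blocks, 64 lines) are assumptions.
BLOCK_SIZE, NUM_LINES = 16, 64
OFFSET_BITS = BLOCK_SIZE.bit_length() - 1    # 4
INDEX_BITS  = NUM_LINES.bit_length() - 1     # 6

def split_address(addr):
    offset = addr & (BLOCK_SIZE - 1)
    index  = (addr >> OFFSET_BITS) & (NUM_LINES - 1)
    tag    = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

tags = [None] * NUM_LINES          # one stored tag per cache line

def access(addr):
    tag, index, _ = split_address(addr)
    hit = tags[index] == tag
    tags[index] = tag              # record the tag (loads the block on a miss)
    return hit

print(access(0x1234), access(0x1234))  # False (miss), then True (hit)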


6. Virtual Memory

Virtual memory extends physical memory using disk space. It divides memory into pages and swaps pages between main memory and disk as needed.
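
A rough sketch of address translation with 4 KB pages (the page size and page-table contents are assumptions); a missing entry stands in for a page fault that would trigger a swap from disk:

# Page-table sketch: a virtual address is split into a page number and an
# offset; the page table maps virtual pages to physical frames.
PAGE_SIZE = 4096
page_table = {0: 7, 1: 3, 2: None}   # virtual page -> physical frame

def translate(virtual_addr):
    page   = virtual_addr // PAGE_SIZE
    offset = virtual_addr % PAGE_SIZE
    frame  = page_table.get(page)
    if frame is None:
        raise RuntimeError("page fault: bring the page in from disk")
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))   # page 1 -> frame 3 => 0x3ABC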

7. Memory Hierarchy Framework

Memory is organized in a hierarchy, from fast, small caches to large, slow secondary storage (disks). This arrangement keeps frequently accessed data in the faster levels, so the average access time stays close to that of the fastest memory.
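
The benefit can be made concrete with a rough average-memory-access-time calculation; the latencies and hit rate below are illustrative numbers, not measurements:

# Rough average-memory-access-time (AMAT) calculation showing why the
# hierarchy works: most accesses are served by the fast level.
cache_time, memory_time = 1, 100        # access times in CPU cycles (assumed)
hit_rate = 0.95                         # assumed cache hit rate

amat = hit_rate * cache_time + (1 - hit_rate) * (cache_time + memory_time)
print(amat)   # 6.0 cycles on average, versus 100 for main memory alone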

8. Case Study: PIV and AMD Opteron Memory Hierarchies

The Pentium 4 (PIV) and AMD Opteron processors use multiple cache levels (L1, L2, and L3) to improve performance. These designs optimize data access patterns in multi-threaded applications.
