
COMPUTER ORGANISATION AND MICROPROCESSORS

Unit-wise QUESTIONS and ANSWERS


UNIT-II

1. Give a short note on information representation and arithmetic operations.


"Information representation and arithmetic operation" typically refers to how data is
encoded, stored, and manipulated within computer systems.
1. Information Representation
a. Data Types
• Numeric: Integers, floating-point numbers, and complex numbers.
• Text: Represented as sequences of characters (ASCII, Unicode).
• Boolean: Represented as True or False (1 or 0 in binary).
• Images, Audio, and Video: Represented as binary data using formats like JPEG, MP3, or MP4.
b. Number Systems
• Binary (Base-2): Used internally in computers (0 and 1).
• Decimal (Base-10): Used in human-readable formats.
• Hexadecimal (Base-16): Used for compact binary representation.
• Octal (Base-8): Sometimes used in computer science.
c. Encoding Schemes
• ASCII: 7-bit encoding for text characters.
• Unicode: Supports a wide range of characters globally.
• Binary-Coded Decimal (BCD): Encodes each decimal digit in binary.
• Two's Complement: Used for representing signed integers.
d. Floating-Point Representation
• IEEE 754 Standard: Represents real numbers with a sign, exponent, and mantissa. It allows very large or very small numbers to be approximated.
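To make the IEEE 754 fields concrete, here is a minimal Python sketch (using only the standard struct module; the input value -6.25 is just an illustrative choice) that unpacks a 32-bit float into its sign, exponent, and mantissa fields:

```python
import struct

def decompose_float32(x):
    """Unpack a Python float into IEEE 754 single-precision fields."""
    # Pack as a 32-bit big-endian float, then reinterpret the 4 bytes as an integer.
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = (bits >> 31) & 0x1          # 1 bit
    exponent = (bits >> 23) & 0xFF     # 8 bits, biased by 127
    mantissa = bits & 0x7FFFFF         # 23 fraction bits of the significand
    return sign, exponent, mantissa

sign, exp, man = decompose_float32(-6.25)
print(sign, exp - 127, bin(man))       # 1 2 0b10010000000000000000000
```

For -6.25 = -1.5625 × 2^2 the sign bit is 1, the stored exponent is 129 (bias 127), and the mantissa field holds the fraction bits of 1.5625.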
2. Arithmetic Operations
a. Basic Operations
• Addition: Combining two numbers.
• Subtraction: Finding the difference between two numbers.
• Multiplication: Scaling one number by another.
• Division: Splitting one number by another.
b. Operations on Different Number Systems
• Binary arithmetic is crucial for computation:
o Addition: Similar to decimal, with carry propagation.
o Subtraction: Implemented using two's complement.
o Multiplication and Division: Performed using bit-shift operations and algorithms like Booth's algorithm.
c. Logical Operations
• AND, OR, NOT, XOR: Binary logic operations used in computation and circuit design.
d. Floating-Point Arithmetic
• Performed based on the IEEE 754 standard.
• Precision issues (rounding errors) are common due to the finite representation of real numbers.
e. Overflow and Underflow
• Overflow: When the result of an operation exceeds the maximum representable value.
• Underflow: When a number is too small to be represented in the given format.
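As a concrete illustration of two's-complement arithmetic and overflow, the following Python sketch (the 8-bit width is an arbitrary choice for the example) adds two signed values and flags signed overflow:

```python
def add_8bit_twos_complement(a, b):
    """Add two integers as 8-bit two's complement values (inputs assumed in -128..127)."""
    result = (a + b) & 0xFF                 # keep only the low 8 bits
    # Reinterpret the 8-bit pattern as a signed value in -128..127.
    signed = result - 256 if result & 0x80 else result
    # Overflow occurs when both operands share a sign but the result's sign differs.
    overflow = (a < 0) == (b < 0) and (signed < 0) != (a < 0)
    return signed, overflow

print(add_8bit_twos_complement(100, 50))   # (-106, True): 150 does not fit in 8 bits
print(add_8bit_twos_complement(100, -50))  # (50, False)
```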
2. List and explain the basic types of information representation in a computer.
Information representation in a computer refers to how various types of data are
encoded and stored in a computer system.
1. Numeric Representation
Computers use numeric representations for all data types, as everything in a computer is ultimately stored in binary form (0s and 1s).
a. Integer Representation
• Unsigned Integers: Represent only non-negative numbers, e.g., 0 to 2^n - 1 for n-bit numbers.
• Signed Integers: Use methods like two's complement to represent positive and negative integers.
o Example: A 4-bit two's complement number can represent -8 to 7.
b. Floating-Point Representation
• Used to represent real numbers, including fractional values.
• Based on the IEEE 754 standard, which includes:
o Sign bit: 0 for positive, 1 for negative.
o Exponent: Encodes the range or scale.
o Mantissa (or significand): Encodes the precision.
c. Fixed-Point Representation
• Similar to floating point but with a fixed number of bits for the integer and fractional parts.
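A small Python sketch (illustrative only) of the 4-bit two's-complement range mentioned above:

```python
def twos_complement_value(pattern, width=4):
    """Interpret an unsigned bit pattern as a two's-complement signed value."""
    return pattern - (1 << width) if pattern & (1 << (width - 1)) else pattern

# All 16 patterns of a 4-bit word cover the signed range -8..7.
print([twos_complement_value(p) for p in range(16)])
# [0, 1, 2, 3, 4, 5, 6, 7, -8, -7, -6, -5, -4, -3, -2, -1]
```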
2. Text Representation
• Text characters are converted into numeric codes for computer storage.
• Encoding schemes include:
o ASCII (American Standard Code for Information Interchange): Uses 7 bits to represent 128 characters.
o Extended ASCII: Uses 8 bits to support 256 characters.
o Unicode: A modern standard supporting over 1 million characters across multiple languages, typically represented using UTF-8, UTF-16, or UTF-32 encodings.
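A short Python sketch (illustrative) showing the same characters under ASCII and Unicode encodings; note that ASCII characters keep the same single-byte encoding in UTF-8:

```python
print(ord('A'))                       # 65, i.e. 1000001 in 7-bit ASCII
print('A'.encode('utf-8'))            # b'A' -- one byte, identical to ASCII
print('€'.encode('utf-8'))            # b'\xe2\x82\xac' -- three bytes for a non-ASCII character
print(len('€'.encode('utf-16-le')))   # 2 -- the same character needs two bytes in UTF-16
```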
3. Image Representation
Images are represented as a matrix of pixels, with each pixel's color defined by numbers.
• Black-and-White Images: Represented in binary (0 for black, 1 for white).
• Grayscale Images: Each pixel has a value between 0 (black) and 255 (white) for an 8-bit representation.
• Color Images: Use models like RGB (Red, Green, Blue), with each channel typically stored as an 8-bit value.
4. Audio Representation
Audio signals are represented as sequences of numbers derived from analog signals using sampling and quantization.
• Sampling: Converts continuous signals into discrete samples.
o Example: CD-quality audio uses a 44.1 kHz sampling rate and 16 bits per sample.
• Compression: Formats like MP3 and AAC reduce file size while preserving audio quality.
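The storage cost of sampling can be estimated with simple arithmetic; the figures below assume CD-quality stereo audio as in the example above:

```python
sample_rate = 44_100        # samples per second (CD quality)
bits_per_sample = 16
channels = 2                # stereo

bits_per_second = sample_rate * bits_per_sample * channels
print(bits_per_second)               # 1411200 bits/s, about 1.4 Mbit/s
print(bits_per_second * 60 // 8)     # ~10.6 MB per minute before compression
```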
5. Video Representation
Video is represented as a sequence of images (frames) displayed over time.
• Each frame is stored as an image using color models like RGB or YUV.
• Compression: Formats like MP4 or H.264 reduce storage requirements using techniques like inter-frame prediction.
6. Boolean Representation
• Represents logical values.
• True and False are stored as 1 and 0 in binary.
• Used in decision-making and control structures in programming and circuit design.
7. Program Instructions
• Machine code is the binary representation of instructions executed by the CPU.
• Each instruction includes:
o Opcode: Specifies the operation (e.g., add, subtract).
o Operands: Specify the data or memory locations to operate on.
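A minimal Python sketch of the opcode/operand split; the 16-bit format with a 4-bit opcode and a 12-bit operand used here is a hypothetical example, not a specific real instruction set:

```python
# Hypothetical format: [4-bit opcode][12-bit operand address]
OPCODES = {0b0001: 'ADD', 0b0010: 'SUB', 0b0011: 'LOAD'}

def decode(word):
    """Split a 16-bit instruction word into its opcode and operand fields."""
    opcode = (word >> 12) & 0xF
    operand = word & 0xFFF
    return OPCODES.get(opcode, 'UNKNOWN'), operand

print(decode(0x10A5))   # ('ADD', 165) -> add using the operand at address 165
```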
3. Define floating-point representation and fixed-point representation of numbers.
Fixed-Point Numbers:
The binary point is not a part of the representation but is implied. The number of integer and fraction bits must be agreed upon by those generating and those reading the number.
e.g. Fixed-point representation using 4 integer bits and 3 fraction bits:
0110110 will be interpreted as 0110.110.
Floating-Point Representation:
The floating-point representation of a number has two parts:
The first part represents a signed, fixed-point number called the mantissa.
The second part designates the position of the decimal (or binary) point and is called the exponent. The fixed-point mantissa may be a fraction or an integer.
A floating-point number is always interpreted to represent a number of the following form:
m × r^e
Only the mantissa m and the exponent e are physically represented in the register (including their signs). The radix r and the radix-point position of the mantissa are always assumed.
e.g. the decimal number +6132.789 is represented in floating point with a fraction and an exponent as follows:
Fraction: +0.6132789    Exponent: +04
The value of the exponent indicates that the actual position of the decimal point is four positions to the right of the indicated decimal point in the fraction. This representation is equivalent to the scientific notation +0.6132789 × 10^+4.
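Both examples can be checked with a couple of lines of Python (illustrative only):

```python
# Fixed point: 4 integer bits and 3 fraction bits, binary point implied after bit 3.
print(0b0110110 / 2**3)               # 6.75, i.e. 0110.110

# Floating point: value = mantissa * radix**exponent
mantissa, exponent, radix = 0.6132789, 4, 10
print(mantissa * radix**exponent)     # 6132.789 (a tiny rounding artifact may appear,
                                      # itself a consequence of finite binary precision)
```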
4. Distinguish between fixed-point and floating-point representations.

Feature-by-feature comparison of fixed-point and floating-point representation:

Definition
- Fixed-point: Represents numbers with a fixed number of bits for the integer and fractional parts.
- Floating-point: Represents numbers in a scientific-notation format with a sign, exponent, and mantissa.
Range of Values
- Fixed-point: Limited to a smaller range due to the fixed allocation of bits for the integer and fraction parts.
- Floating-point: Capable of representing very large or very small numbers because of the variable exponent.
Precision
- Fixed-point: Fixed precision determined by the number of fractional bits.
- Floating-point: Variable precision based on the size of the mantissa.
Complexity
- Fixed-point: Simpler arithmetic operations; requires no special hardware.
- Floating-point: More complex arithmetic operations; often requires specialized hardware (e.g., an FPU).
Representation
- Fixed-point: Example: the binary number 1101.0101 (fixed position of the binary point).
- Floating-point: Example: -1.101 × 2^10 (scientific notation in binary).
Hardware Requirements
- Fixed-point: Can be implemented on simpler hardware; lower power consumption.
- Floating-point: Requires dedicated hardware for efficient computation (e.g., floating-point units).
Storage Efficiency
- Fixed-point: Efficient for applications requiring consistent precision.
- Floating-point: More storage space needed due to the exponent and mantissa components.
Use Cases
- Fixed-point: Used in embedded systems, digital signal processing (DSP), and systems with real-time constraints.
- Floating-point: Used in scientific computations, graphics processing, and applications requiring a wide dynamic range.
Arithmetic Speed
- Fixed-point: Faster for basic arithmetic due to simpler computation.
- Floating-point: Slower compared to fixed point due to handling of the exponent and normalization.
Error Handling
- Fixed-point: Precision errors may occur if a value exceeds the range of the fixed format.
- Floating-point: Can introduce rounding errors due to the finite mantissa size.
Ease of Programming
- Fixed-point: Easier to program when precision and range are well-defined.
- Floating-point: Requires more attention to precision and rounding issues.

5. Illustrate the instruction formats with examples.


Zero-Address Instructions
o These instructions do not explicitly specify operands.
o Operands are implicitly located at the top of the stack, and the results are pushed back onto the stack.
o Commonly used in stack-based architectures.
• Format:
o [Opcode]
• Example:
o Operation: C = A + B
o Steps:
PUSH A (Push A onto the stack).
PUSH B (Push B onto the stack).
ADD (Pop two values, add them, and push the result back).
POP C (Pop the result from the stack into C).
One-Address Instructions
o One operand is explicitly specified in the instruction.
o The second operand is implicit (usually the accumulator (ACC) register).
o Simplifies instruction length but limits flexibility.
• Format:
o [Opcode] [Operand]
• Example:
o Operation: C = A + B
o Steps:
LOAD A (Load A into ACC).
ADD B (ACC = ACC + B).
STORE C (Store the result from ACC into C).
Two-Address Instructions
o Two operands are explicitly specified.
o One of the operands acts as both a source and the destination.
o Reduces instruction length compared to three-address instructions but may require extra steps for intermediate results.
• Format:
o [Opcode] [Operand1] [Operand2]
• Example:
o Operation: C = A + B
o Steps:
MOVE A, C (Copy A into C to preserve A).
ADD B, C (Add B to C, storing the result in C).
Three-Address Instructions
o Three operands are explicitly specified: two source operands and one destination operand.
o Provides maximum flexibility but requires a longer instruction word.
• Format:
o [Opcode] [Operand1] [Operand2] [Destination]
• Example:
o Operation: C = A + B
o Instruction: ADD A, B, C (Add A and B, and store the result in C).
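Purely as an illustration, the zero-address sequence above can be simulated with a tiny stack machine in Python (the memory contents are arbitrary example values):

```python
# Hypothetical zero-address (stack) machine evaluating C = A + B.
memory = {'A': 7, 'B': 5, 'C': 0}
stack = []

def PUSH(name): stack.append(memory[name])         # push a value from memory
def ADD():      stack.append(stack.pop() + stack.pop())  # pop two, push the sum
def POP(name):  memory[name] = stack.pop()          # pop the result into memory

PUSH('A'); PUSH('B'); ADD(); POP('C')
print(memory['C'])   # 12
```

The one-, two-, and three-address variants perform the same addition; they differ only in how many operands each instruction names explicitly.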
6. List and explain various addressing modes.
Addressing Modes in Computer Architecture
Addressing modes define how the operand(s) of an instruction are specified and
accessed during execution. They determine the method of locating data, either in
memory or in registers. Different addressing modes provide flexibility and efficiency
in executing instructions.
1. Immediate Addressing Mode
o The operand is specified directly in the instruction itself.
o No need to fetch data from memory.
• Advantage: Fast execution, as no additional memory access is required.
• Disadvantage: The operand is fixed and cannot be modified.
• Example:
o Instruction: ADD R1, #5
o Explanation: Add the value 5 directly to the contents of register R1.
2. Direct (Absolute) Addressing Mode
o The instruction specifies the memory address of the operand.
o The operand is fetched from that memory location.
• Advantage: Simple to use.
• Disadvantage: Requires a memory access, which can be slower.
• Example:
o Instruction: LOAD R1, 1000
o Explanation: Load the value from memory address 1000 into register R1.
3. Indirect Addressing Mode
o The instruction specifies a memory address, which contains the address of the actual operand.
o Requires two memory accesses: one to fetch the address and another to fetch the operand.
• Advantage: Supports dynamic data structures like linked lists.
• Disadvantage: Slower due to multiple memory accesses.
• Example:
o Instruction: LOAD R1, (1000)
o Explanation: Fetch the address stored at memory location 1000, then load the value from that address into R1.
4. Register Addressing Mode
o The operand is located in a register specified by the instruction.
• Advantage: Fast execution, as no memory access is required.
• Disadvantage: Limited by the number of registers in the CPU.
• Example:
o Instruction: ADD R1, R2
o Explanation: Add the value in register R2 to the value in register R1.
5. Register Indirect Addressing Mode
o The register contains the memory address of the operand.
o The operand is fetched from the memory location pointed to by the register.
• Advantage: Efficient for accessing arrays and pointers.
• Disadvantage: Requires an additional memory access.
• Example:
o Instruction: LOAD R1, (R2)
o Explanation: Fetch the operand from the memory location pointed to by R2 and load it into R1.
6. Indexed Addressing Mode
o Combines a base address (specified by a register) with an offset (specified in the instruction).
o Often used to access array elements.
• Advantage: Efficient for sequential data access.
• Disadvantage: Limited by the size of the offset.
• Example:
o Instruction: LOAD R1, 1000(R2)
o Explanation: Add 1000 (offset) to the value in R2 (base), fetch the operand from that memory address, and load it into R1.
7. Relative Addressing Mode
o The effective address is determined by adding an offset (in the instruction) to the current program counter (PC).
o Commonly used in branching and looping instructions.
• Advantage: Facilitates position-independent code.
• Example:
o Instruction: JUMP +10
o Explanation: Jump to the address PC + 10.
8. Base Addressing Mode
o The effective address is obtained by adding the content of a base register to an offset.
o Similar to indexed addressing but emphasizes dynamic relocation.
• Advantage: Useful in memory segmentation and dynamic memory allocation.
• Example:
o Instruction: LOAD R1, 2000(BR)
o Explanation: Add 2000 to the content of the base register (BR) to get the effective address, then fetch the operand.
9. Stack Addressing Mode
o Operands are implicitly located at the top of the stack.
o No explicit addressing is needed in the instruction.
• Advantage: Simplifies the instruction format.
• Disadvantage: Limited to stack-based operations.
• Example:
o Instruction: PUSH R1
o Explanation: Push the value of R1 onto the stack.
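The effective-address calculations for several of these modes can be sketched in a few lines of Python; the register and memory contents below are hypothetical example values:

```python
# Hypothetical machine state used only for illustration.
memory = {1000: 2500, 2500: 42, 3000: 99}
registers = {'R2': 2500, 'BR': 1000, 'PC': 500}

immediate     = 5                               # operand is in the instruction itself
direct        = memory[1000]                    # 2500: value stored at address 1000
indirect      = memory[memory[1000]]            # 42: address 1000 holds the operand's address
register_mode = registers['R2']                 # 2500: operand is in a register
reg_indirect  = memory[registers['R2']]         # 42: register holds the operand's address
indexed       = memory[500 + registers['R2']]   # 99: offset 500 + base in R2 -> address 3000
base_mode     = memory[2000 + registers['BR']]  # 99: offset 2000 + base register -> address 3000
relative_addr = registers['PC'] + 10            # 510: branch target = PC + offset

print(immediate, direct, indirect, register_mode, reg_indirect, indexed, base_mode, relative_addr)
```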
7. Explain the fixed-point addition and subtraction operations with a flowchart.
8. State the need for memory hierarchy in a computer.
Need for Memory Hierarchy in a Computer
Memory hierarchy is a structured arrangement of various types of memory in a
computer system, designed to achieve an optimal balance between speed, cost, and
capacity. The need for memory hierarchy arises due to the limitations and trade-offs
among different types of memory.
Speed vs. Cost Trade-off:
1. Faster memory (e.g., registers, cache) is expensive and limited in size.
2. Slower memory (e.g., hard drives, SSDs) is cheaper but takes more time to
access.
3. The hierarchy allows frequently accessed data to reside in faster memory while
larger, less-used data is stored in slower, cheaper memory.
Processor Performance:
1. CPUs operate at high speeds and need rapid access to data.
2. Directly using slow memory (e.g., main memory or disk) would lead to
performance bottlenecks.
3. A hierarchical system ensures that critical data is quickly accessible, minimizing
delays.
Cost Efficiency:
1. Building a large memory entirely with high-speed technology like SRAM would
be prohibitively expensive.
2. The hierarchy uses a mix of memory types to optimize cost while maintaining
performance.
Capacity Requirements:
1. Registers and cache have limited capacity due to cost and size constraints.
2. Larger, slower memory types (e.g., RAM, disk) provide sufficient capacity for the
bulk of the data.
Efficient Data Management:
1. Frequently used data is stored in faster memory (locality of reference).
2. Rarely used data resides in slower memory, which is accessed less frequently.
3. This improves the overall efficiency of data retrieval and storage.
Benefits of Memory Hierarchy
• Improved Performance: Reduces latency by keeping frequently used data in faster memory.
• Cost Optimization: Balances the cost of memory technologies with performance needs.
• Efficient Resource Utilization: Ensures appropriate data placement based on access patterns.
• System Scalability: Allows growth in data and computational complexity without exponential cost increases.
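The benefit can be quantified with the usual average memory access time (AMAT) formula, AMAT = hit time + miss rate × miss penalty; the numbers below are illustrative assumptions only:

```python
# Average Memory Access Time (AMAT) = hit_time + miss_rate * miss_penalty
cache_hit_time = 2       # ns, assumed cache access time
miss_rate = 0.05         # assumed: 5% of accesses miss the cache
main_memory_time = 60    # ns, assumed main-memory access time

amat = cache_hit_time + miss_rate * main_memory_time
print(amat)              # 5.0 ns on average, versus 60 ns with no cache at all
```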
9. Explain the memory hierarchy in a computer with a neat sketch.
Memory Hierarchy in a Computer
The memory hierarchy is a structured arrangement of different types of memory
in a computer system, organized based on speed, size, and cost. It helps bridge
the gap between the fast CPU and slower memory/storage devices, ensuring
efficient data access.
Registers:
1. Located inside the CPU.
2. Fastest and most expensive.
3. Stores small amounts of immediate data (e.g., operands for instructions).
4. Access Time: 1 cycle.
Cache Memory:
1. Located closer to the CPU, often divided into levels (L1, L2, L3).
2. Faster than RAM but smaller and more expensive.
3. Holds frequently accessed data and instructions to reduce memory latency.
4. Access Time: Few nanoseconds.
Main Memory (RAM):
1. Larger and slower than cache.
2. Stores data and programs currently in use by the CPU.
3. Acts as an intermediary between cache and secondary storage.
4. Access Time: Tens of nanoseconds.
Secondary Storage:
1. Includes HDDs and SSDs.
2. Non-volatile storage used for long-term data storage.
3. Slower but provides high capacity.
4. Access Time: Milliseconds (HDD) or microseconds (SSD).
Tertiary Storage:
1. Includes optical discs, magnetic tapes, and cloud storage.
2. Used for backups and archival purposes.
3. Slowest but cheapest.
4. Access Time: Seconds or more.
Features of Memory Hierarchy
• Speed: Decreases as we move down the hierarchy.
• Cost: Cost per bit decreases as we move down the hierarchy.
• Capacity: Increases as we move down the hierarchy.
• Access Frequency: High at the top (registers/cache) and decreases downward.

10. Difference between RAM and CAM


Comparison of RAM vs. CAM

Access Method
- RAM (Random Access Memory): Address-based access (data is located by a specific address).
- CAM (Content Addressable Memory): Content-based access (data is located by matching its content).
Speed
- RAM: Moderate to fast.
- CAM: Very fast, especially for searches.
Structure
- RAM: Organized in rows and columns.
- CAM: Stores data in rows and searches by content.
Read/Write
- RAM: Supports both read and write operations.
- CAM: Primarily used for fast searching.
Types
- RAM: DRAM, SRAM.
- CAM: TCAM, ACAM.
Volatility
- RAM: Volatile (loses data on power off).
- CAM: Can be volatile or non-volatile.
Use Case
- RAM: General-purpose memory (e.g., active data).
- CAM: High-speed search, pattern matching, and networking.
Cost
- RAM: Generally less expensive.
- CAM: More expensive due to its fast searching capability.
Example
- RAM: Storing data for active processes and operating systems.
- CAM: Fast lookups in routers, databases, and caches.
11. Explain the concept of associative memory.

An associative (content-addressable) memory consists of a memory array and logic for m words with n bits per word.
The argument register A and key register K each have n bits, one for each bit of a word.
The match register M has m bits, one for each memory word.
Each word in memory is compared in parallel with the content of the argument register.
The words that match the bits of the argument register set a corresponding bit in the match register.
After the matching process, the bits in the match register that have been set indicate that their corresponding words have been matched.
Reading is accomplished by sequential access to memory for those words whose corresponding bits in the match register have been set.
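A minimal Python sketch of this matching process (the word width, memory contents, and key pattern are illustrative assumptions): the key register K masks which bit positions participate in the comparison, and the match register M holds one bit per word.

```python
# Each word, the argument register A, and the key register K are n-bit patterns.
words = [0b10110010, 0b10010010, 0b01110010, 0b10110011]   # memory array (m words)

A = 0b10110010   # argument register: the pattern being searched for
K = 0b11110000   # key register: only the upper 4 bit positions take part in the match

# Match register M: one bit per word, set where the masked bits agree with A.
M = [int((word & K) == (A & K)) for word in words]
print(M)                                          # [1, 0, 0, 1]

# Reading: sequentially access the words whose match bits are set.
matched = [w for w, m in zip(words, M) if m]
print([bin(w) for w in matched])                  # ['0b10110010', '0b10110011']
```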
