Unit 2 (Data Representation and Basic Computer Arithmetic) & Unit 3 (Basic Computer Organization and Design)

The document covers data representation and basic computer arithmetic, detailing various number systems (decimal, binary, octal, hexadecimal) and their applications in computing. It also explains fixed and floating-point representations, character encoding schemes like ASCII and Unicode, and the importance of addition, subtraction, and magnitude comparison in data processing. Additionally, the document discusses basic computer organization, including bus systems, instruction sets, timing, control mechanisms, the instruction cycle, memory references, input/output operations, and the role of interrupts in managing system events.

Unit 2

Data Representation and Basic Computer Arithmetic

Number Systems: A number system is a symbolic method for writing numbers. It defines a set of symbols, like digits or letters, and a set of rules to combine them to represent any number.

Or
Number systems are methods or systems used to represent numbers. They are fundamental to
mathematics and computing, as they provide ways to count, measure, and perform arithmetic
operations.

Types of number systems

1. Decimal
2. Binary
3. Octal
4. Hexadecimal

Decimal System (Base-10): The decimal system, also known as base-10, is the most familiar number system used by humans. It employs ten symbols: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. Each digit's position in a decimal number represents a power of 10. For instance, in the number 365, the 5 is in the units place (10^0), the 6 is in the tens place (10^1), and the 3 is in the hundreds place (10^2). Decimal numbers are widely used in everyday life, from counting money to measuring time.

Binary System (Base-2): The binary system, or base-2, is essential in computer science and digital electronics. It utilizes only two symbols: 0 and 1. Each digit's position represents a power of 2. For example, in the binary number 1011, the rightmost digit is in the ones place (2^0), the next is in the twos place (2^1), the next in the fours place (2^2), and so on. Binary numbers are fundamental in representing digital data because they correspond directly to the on/off states of electronic switches.
Octal System (Base-8): The octal system, or base-8, uses eight symbols: 0, 1, 2, 3, 4, 5, 6, and 7. Each digit in an octal number represents a power of 8. For instance, in the octal number 527, the rightmost digit is in the ones place (8^0), the next is in the eights place (8^1), and the leftmost digit is in the sixty-fours place (8^2). Octal was historically used in computing but has largely been superseded by hexadecimal, which offers a more compact representation.

Hexadecimal System (Base-16): Hexadecimal, or base-16, employs sixteen symbols: 0-9 and A-F (where A=10, B=11, C=12, D=13, E=14, and F=15). Each digit's position represents a power of 16. For example, in the hexadecimal number 1A3F, the rightmost digit is in the ones place (16^0), the next is in the sixteens place (16^1), and the leftmost digit is in the 4096s place (16^3). Hexadecimal is extensively used in computing, particularly in memory addressing, binary-coded decimal representation, and colour coding.
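
To make the relationships between these bases concrete, here is a short Python sketch (an added illustration, not part of the original notes) that converts the worked examples above using Python's built-in base parsing and formatting:

    # Parse the worked examples from each base into ordinary integers.
    assert int("1011", 2) == 11        # binary 1011
    assert int("527", 8) == 343        # octal 527
    assert int("1A3F", 16) == 6719     # hexadecimal 1A3F

    # format() renders an integer back out in binary, octal, or hex.
    print(format(365, "b"))            # 101101101
    print(format(343, "o"))            # 527
    print(format(6719, "X"))           # 1A3F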

Complements: Complements are mathematical techniques used to simplify arithmetic operations, particularly subtraction, in digital computers. Two common types of complements are the one's complement and the two's complement.

I. One's Complement: In the one's complement system, to find the complement of a binary number, you simply invert (flip) all the bits. That is, each 0 becomes a 1, and each 1 becomes a 0. For example, the one's complement of the binary number 1011 is 0100.

II. Two's Complement: The two's complement system addresses the sign problem of one's complement arithmetic (which has two representations of zero). To find the two's complement of a binary number, you invert all the bits and then add 1 to the result. For example, the two's complement of the binary number 1011 is calculated as follows:

Invert all the bits: 0100.

Add 1: 0100 + 1 = 0101.

The two's complement of 1011 is 0101.
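
Both rules are easy to check in code. The Python sketch below (an added illustration; the 4-bit word width is an assumption for this example) reproduces the computation above:

    def ones_complement(value, bits=4):
        # Invert every bit of a fixed-width number: XOR with all ones.
        mask = (1 << bits) - 1          # 0b1111 for a 4-bit word
        return value ^ mask

    def twos_complement(value, bits=4):
        # Invert every bit, then add 1, staying within the word size.
        mask = (1 << bits) - 1
        return ((value ^ mask) + 1) & mask

    print(format(ones_complement(0b1011), "04b"))   # 0100
    print(format(twos_complement(0b1011), "04b"))   # 0101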


Fixed and Floating-Point Representation

• Fixed-Point Representation: This representation has a fixed number of bits for the integer part and for the fractional part. For example, if the given fixed-point format is IIII.FFFF (four decimal digits on each side of the point), then the smallest non-zero value you can store is 0000.0001 and the largest is 9999.9999. There are three parts of a fixed-point number representation: the sign field, the integer field, and the fractional field.

• Floating-Point Representation: This representation does not reserve a specific number of bits for the integer part or the fractional part. Instead, it reserves a certain number of bits for the number itself (called the mantissa or significand) and a certain number of bits to say where within that number the decimal place sits (called the exponent). The floating-point representation of a number has two parts: the first part is a signed fixed-point number called the mantissa; the second part designates the position of the decimal (or binary) point and is called the exponent. The fixed-point mantissa may be a fraction or an integer. A floating-point value is always interpreted to represent a number of the form M × r^e, where M is the mantissa, r is the radix (base), and e is the exponent.
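
The contrast is easy to see in Python (an added sketch, not from the original notes): fixed point can be simulated by scaling integers, while math.frexp exposes the mantissa/exponent split of a float:

    import math

    # Fixed point: store 12.34 in an IIII.FFFF-style format by scaling
    # to an integer with four fractional digits (scale factor 10**4).
    SCALE = 10**4
    fixed = round(12.34 * SCALE)    # held internally as the integer 123400
    print(fixed / SCALE)            # 12.34

    # Floating point: frexp splits x into mantissa m and exponent e
    # such that x == m * 2**e, with 0.5 <= |m| < 1.
    m, e = math.frexp(12.34)
    print(m, e)                     # 0.77125 4
    print(m * 2**e)                 # 12.34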

Character Representation: Character representation refers to the method used to represent characters, symbols, and text in digital systems, such as computers. In digital systems, characters are represented using binary codes, where each character is assigned a unique binary pattern known as a character code or character encoding. There are several character encoding schemes, with ASCII (American Standard Code for Information Interchange) and Unicode being the most common.

The two main types of character representation are:

I. ASCII (American Standard Code for Information Interchange): ASCII is one of the earliest character encoding standards and assigns a unique 7-bit binary code (extended ASCII uses 8 bits) to each character. Originally developed for English and commonly used in Western countries, ASCII represents characters such as letters (uppercase and lowercase), digits, punctuation marks, and control characters.

II. Unicode: Unicode is a more comprehensive character encoding standard that aims to represent characters from all writing systems and languages worldwide. It uses a variable-length encoding scheme, with characters encoded using 8, 16, or 32 bits, allowing for the representation of a vast range of characters, symbols, and emojis.

Unicode includes characters from various scripts, including Latin, Greek, Cyrillic, Arabic, Chinese, Japanese, and many others. UTF-8, UTF-16, and UTF-32 are popular encoding formats under the Unicode standard.
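
A quick Python illustration (added here, not part of the original notes) shows both schemes at work: ord() gives a character's code point, and the UTF encodings spend different numbers of bytes on it:

    # An ASCII letter versus a non-Latin symbol (the euro sign, U+20AC).
    for ch in ("A", "€"):
        print(ch, ord(ch))              # code points: 65 and 8364
        print(ch.encode("utf-8"))       # 1 byte for 'A', 3 for the euro sign
        print(ch.encode("utf-16-le"))   # 2 bytes each here
        print(ch.encode("utf-32-le"))   # always 4 bytes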
Addition, Subtraction, Magnitude Comparison

Addition and subtraction are performed by manipulating the binary representations of characters, while magnitude comparison determines the relative order of characters based on their encoded values. These operations are essential for tasks like sorting, searching, and processing textual data efficiently within computer programs and systems. Understanding character representation is vital for accurately implementing these operations and ensuring the proper handling of textual information in digital environments.
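
For instance (an added sketch, not from the notes), sorting characters is just a magnitude comparison over their code points:

    # Characters compare by their encoded values, so sorting text is a
    # magnitude comparison over the underlying binary codes.
    chars = ["b", "A", "a", "B"]
    print(sorted(chars))            # ['A', 'B', 'a', 'b'] -- uppercase codes are smaller
    print(ord("A") < ord("a"))      # True: 65 < 97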
Unit 3
Basic Computer Organization and Design

Bus System: A bus system refers to a set of electrical pathways that connect various components inside a computer. It acts as a communication highway for data, addresses, and control signals to flow between these components.

OR

A bus system refers to a communication pathway or set of pathways that allows various components of a computer system to exchange data and control signals. It serves as the central channel through which information flows between the different parts of the computer, facilitating coordination and interaction between the processor, memory, input/output devices, and other peripheral components.

a. Data Bus: This bus carries data between the processor, memory, and other devices.
It enables the transfer of binary information, such as instructions and data values, in
both directions.

b. Address Bus: The address bus is used to specify memory locations or I/O ports for
read and write operations. It carries the address signals generated by the processor to
select the target location for data transfer.

c. Control Bus: The control bus carries various control signals that govern the
operation of the computer system. These signals include commands for memory
read/write operations, interrupt requests, clock signals, and bus arbitration signals.
Instruction Set: An instruction set, also known as an instruction set architecture (ISA), is
a collection of instructions that a computer's central processing unit (CPU) can execute.
These instructions define the operations that the CPU can perform, such as arithmetic
operations, logical operations, data movement, and control flow instructions.
I. Arithmetic Instructions: These instructions perform arithmetic operations
such as addition, subtraction, multiplication, and division. They operate on
data stored in the CPU's registers or in memory.

II. Logical Instructions: Logical instructions perform bitwise operations such as AND, OR, XOR, and NOT. These operations manipulate the individual bits of binary data.

III. Data Movement Instructions: These instructions move data between memory and CPU registers or between different CPU registers. They include load (read from memory) and store (write to memory) instructions, as well as instructions to move data between registers.
IV. Control Transfer Instructions: Control transfer instructions change the flow
of program execution. They include instructions for branching (jumping to a
different part of the program), subroutine calls and returns, and conditional
branches based on the result of a comparison.

V. Input/Output Instructions: Some instruction sets include specific instructions for input and output operations, allowing the CPU to communicate with peripheral devices such as keyboards, displays, and storage devices.

Timing: Timing in a computer system refers to the coordination of events and operations based on a clock signal. A clock signal is a periodic electronic signal generated by a clock circuit that oscillates at a specific frequency, measured in hertz (Hz). The clock signal serves as a timing reference for various operations within the system, including the execution of instructions by the CPU, the transfer of data between components, and the synchronization of internal and external events. Timing considerations are crucial for ensuring that operations occur at the correct times and that the system operates within specified performance limits.

Control: Control in a computer system involves the management and coordination of operations to ensure proper execution and sequencing of instructions. Control mechanisms include hardware components such as control units, which interpret and execute instructions, and software components such as control algorithms and programs. The control unit is responsible for fetching instructions from memory, decoding them, and executing them in the correct sequence. Control signals generated by the control unit determine the flow of data and control the operation of various components within the system, including the CPU, memory, and input/output devices. Control signals govern operations such as memory read and write cycles, input/output operations, and interrupt handling.
Instruction Cycle: The instruction cycle, also known as the machine cycle, is the
fundamental process through which a central processing unit (CPU) executes instructions in a
computer system. It consists of a sequence of steps that the CPU performs for each instruction
fetched from memory.

The instruction cycle typically includes the following stages:

I. Fetch: In this stage, the CPU retrieves the next instruction from memory. The
program counter (PC) holds the address of the next instruction to be fetched. The CPU
reads the instruction from memory at the address specified by the program counter
and stores it in a special register called the instruction register (IR).

II. Decode: After fetching the instruction, the CPU decodes it to determine the operation
to be performed and the operands involved. The decoding stage interprets the binary
pattern of the instruction and generates control signals to coordinate subsequent
operations.
III. Execute: In the execution stage, the CPU performs the operation specified by the
decoded instruction. This may involve arithmetic or logical operations, data
movement between registers or memory, or control flow changes such as branching or
jumping to a different part of the program.

IV. Write-back: Finally, if the instruction modifies the contents of registers or memory,
the CPU may update the appropriate data storage locations in the write-back stage.
This stage ensures that any changes made by the instruction are reflected in the
system's state.

Once the instruction cycle is completed for one instruction, the process repeats for the next
instruction in the program. The CPU continuously fetches, decodes, executes, and writes back
instructions until the program completes or encounters a branch or interrupt.
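
The loop below is a deliberately tiny Python sketch of this cycle for a hypothetical accumulator machine (the opcodes LOAD, ADD, STORE, and HALT and the memory layout are invented for illustration; they are not from the notes):

    # A toy accumulator CPU: each instruction is an (opcode, operand) pair.
    # The program loads a value, adds a second one, and stores the result.
    memory = {0: ("LOAD", 100), 1: ("ADD", 101), 2: ("STORE", 102),
              3: ("HALT", None), 100: 7, 101: 35, 102: 0}

    pc, acc, running = 0, 0, True
    while running:
        opcode, operand = memory[pc]    # fetch the instruction at the PC
        pc += 1                         # advance the program counter
        if opcode == "LOAD":            # decode + execute each case
            acc = memory[operand]       # read memory reference
        elif opcode == "ADD":
            acc += memory[operand]
        elif opcode == "STORE":
            memory[operand] = acc       # write-back to memory
        elif opcode == "HALT":
            running = False

    print(memory[102])                  # 42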

Memory Reference: A memory reference in computer architecture refers to the process of accessing data stored in the computer's memory. It involves reading data from or writing data to a specific memory location. Memory references are fundamental operations performed by the CPU during the execution of instructions.

There are two main types of memory references:


I. Read Memory Reference: A read memory reference involves retrieving data
from a specified memory address. The CPU sends the memory address to the
memory controller, which accesses the corresponding location in the memory
module. The data stored at that memory address is then transferred back to the
CPU for further processing. Read memory references occur when instructions
require data from memory to be loaded into CPU registers for computation or
other operations.

II. Write Memory Reference: A write memory reference involves storing data into
a specific memory address. The CPU sends both the memory address and the data
to be written to the memory controller. The memory controller updates the
contents of the specified memory location with the new data. Write memory
references occur when instructions produce results that need to be stored back
into memory, such as the outcome of arithmetic or logical operations.

Memory references are essential for executing instructions and manipulating data within a
computer system. They enable programs to store and retrieve information from memory,
facilitating the execution of tasks ranging from simple arithmetic operations to complex
computations and data processing.
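
A minimal sketch of the two reference types (added for illustration; the read/write helper names are invented), modelling memory as a flat array indexed by address:

    # Model memory as a flat array of words indexed by address.
    memory = [0] * 256

    def read(address):
        # Read memory reference: fetch the word at the given address.
        return memory[address]

    def write(address, data):
        # Write memory reference: store a word at the given address.
        memory[address] = data

    write(0x10, 99)       # store a computed result back into memory
    print(read(0x10))     # load it back for further processing: 99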
Input: Input operations involve transferring data from external devices such as keyboards,
mice, or sensors to the computer system. Users interact with software applications through
input devices, providing commands and data to the computer. Input operations are crucial
for enabling user interaction and providing input to software programs for processing.
Output: Output operations involve transferring data from the computer system to external
devices such as monitors, printers, or speakers. Output devices display information
generated by the computer or produce tangible outputs based on computational results.
Output operations are essential for presenting information to users and providing feedback
on the results of computations.

Interrupts: Interrupts are signals generated by hardware or software to temporarily pause the CPU's current execution and handle specific events or conditions. They serve to manage asynchronous events that occur independently of the CPU's current tasks, such as the completion of I/O operations or timer expirations. Interrupts are essential for efficiently managing system resources and handling time-sensitive events without wasting processing resources.
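
As a loose software analogy (an added sketch, not from the notes; POSIX-only), operating-system signals behave much like interrupts: a timer fires asynchronously, a registered handler runs, and then normal execution resumes:

    import signal
    import time

    def handler(signum, frame):
        # Runs asynchronously when the "timer interrupt" (SIGALRM) fires.
        print("interrupt: timer expired")

    signal.signal(signal.SIGALRM, handler)   # register the handler (like an ISR)
    signal.alarm(1)                          # request a timer interrupt in 1 second

    # The "CPU" keeps doing its current work; the handler preempts it.
    for _ in range(3):
        time.sleep(0.7)
        print("main work step")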
