
C Programming - Week 2
Number representation in computers
1. Binary Number System
2. Octal Number System
3. Decimal Number System
4. Hexadecimal Number System
Conversion Between Number Systems
Negative Number Representation in Computers
1. Sign and Magnitude
2. Two's Complement
Floating Point Representation in Computers
Character Representation and Encoding in Computers
1. ASCII (American Standard Code for Information Interchange)
2. Extended ASCII
3. Unicode
4. UTF-8 (Unicode Transformation Format - 8-bit)
Example: Encoding and Decoding
Instruction Encoding in a Computer
Program Compilation in a Computer
Stages of Compilation
Role of the Operating System in a Computer
Functions of the Operating System
Booting
Bootstrap
BIOS (Basic Input/Output System)

Number representation in computers


1. Binary Number System
Definition:

The binary number system uses two digits, 0 and 1. Each digit is known as a bit.

It is the foundation of all binary code, which is used in computers and digital systems.

Example:

The binary number 1011 represents the decimal number 11.

Explanation:

Binary numbers are base-2 numbers.


To convert a binary number to decimal, sum the products of each bit and its corresponding
power of 2.

Conversion Example:

1011 = 1*2^3 + 0*2^2 + 1*2^1 + 1*2^0 = 8 + 0 + 2 + 1 = 11

2. Octal Number System


Definition:

The octal number system uses eight digits, from 0 to 7.

It is a base-8 number system.

Example:

The octal number 17 represents the decimal number 15.

Explanation:

To convert an octal number to decimal, sum the products of each digit and its corresponding
power of 8.

Conversion Example:

17 = 1*8^1 + 7*8^0 = 8 + 7 = 15

3. Decimal Number System


Definition:

The decimal number system uses ten digits, from 0 to 9.

It is a base-10 number system and the most commonly used in everyday life.

Example:

The decimal number 345 represents the same number in the decimal system.

Explanation:

Each digit in a decimal number is multiplied by the corresponding power of 10.

Conversion Example:

345 = 3*10^2 + 4*10^1 + 5*10^0 = 300 + 40 + 5 = 345

4. Hexadecimal Number System


Definition:

The hexadecimal number system uses sixteen digits, from 0 to 9 and A to F (where A = 10, B =
11, C = 12, D = 13, E = 14, and F = 15).

It is a base-16 number system.

Example:

The hexadecimal number 1A3 represents the decimal number 419.


Explanation:

To convert a hexadecimal number to decimal, sum the products of each digit and its
corresponding power of 16.

Conversion Example:

1A3 = 1*16^2 + 10*16^1 + 3*16^0 = 256 + 160 + 3 = 419

Conversion Between Number Systems


1. Decimal to Binary:

Example: Convert 13 to binary.

Divide by 2 and record the remainders:

13 ÷ 2 = 6 remainder 1
6 ÷ 2 = 3 remainder 0
3 ÷ 2 = 1 remainder 1
1 ÷ 2 = 0 remainder 1

Read the remainders from bottom to top: 1101.

13 = 1101 in binary (a C sketch of this repeated-division method follows this list).

2. Decimal to Octal:

Example: Convert 65 to octal.

Divide by 8 and record the remainders:

65 ÷ 8 = 8 remainder 1
8 ÷ 8 = 1 remainder 0
1 ÷ 8 = 0 remainder 1

Read the remainders from bottom to top: 101.

65 = 101 in octal.

3. Octal to Binary:

Example: Convert 157 to binary.

Convert each octal digit to its 3-bit binary equivalent:

1 = 001
5 = 101
7 = 111

Result: 001101111 (the leading zeros can be dropped: 1101111).

4. Decimal to Hexadecimal:

Example: Convert 255 to hexadecimal.

Divide the number by 16 and record the remainders:

255 ÷ 16 = 15 remainder 15
15 ÷ 16 = 0 remainder 15
Write remainders in reverse: FF

255 = FF in hexadecimal.

5. Hexadecimal to Decimal:

Example: Convert 1A3 to decimal.

Each digit is multiplied by its corresponding power of 16:

1 * 16^2 + A * 16^1 + 3 * 16^0

1 * 256 + 10 * 16 + 3 * 1

256 + 160 + 3

419

1A3 = 419 in decimal.

6. Binary to Hexadecimal:

Example: Convert 11010110 to hexadecimal.

Group the binary digits in sets of four from right to left:

1101 0110

Convert each group to decimal, then to hexadecimal:

1101 = 13 = D
0110 = 6 = 6

Result: D6

11010110 = D6 in hexadecimal.

7. Hexadecimal to Binary

Direct Mapping: 1 hexadecimal digit maps to 4 binary digits.

Example: Convert 3F to binary.

Convert each digit:

3 = 0011

F = 1111

Result: 00111111
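
The repeated-division method above maps directly onto a short C program. The sketch below is only an illustration (the helper name print_base is made up for this example); it collects remainders and prints them in reverse, and also shows that printf can print octal and hexadecimal directly with %o and %x.

#include <stdio.h>

/* Print n in the given base (2 to 16) using repeated division:
   record each remainder, then read the remainders bottom to top. */
void print_base(unsigned int n, unsigned int base)
{
    const char digits[] = "0123456789ABCDEF";
    char buf[33];
    int i = 0;

    if (n == 0) {
        putchar('0');
        return;
    }
    while (n > 0) {
        buf[i++] = digits[n % base];   /* record the remainder */
        n /= base;                     /* divide and repeat    */
    }
    while (i > 0)                      /* read remainders in reverse */
        putchar(buf[--i]);
}

int main(void)
{
    print_base(13, 2);            /* prints 1101 */
    putchar('\n');
    print_base(65, 8);            /* prints 101  */
    putchar('\n');
    printf("%o %X\n", 65, 255);   /* printf prints octal 101 and hex FF */
    return 0;
}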

Negative Number Representation in Computers


Negative numbers can be represented in computers using several methods. The most common
ones are:
1. Sign and Magnitude
Description: The most significant bit (MSB) represents the sign (0 for positive, 1 for negative),
and the remaining bits represent the magnitude (absolute value) of the number.

Example:

For an 8-bit system:

+18: 00010010

-18: 10010010

Explanation:

The MSB indicates the sign.

This method is simple but has issues with two representations for zero (+0 and -0).

2. Two's Complement
Description: Negative numbers are represented by inverting all the bits of the positive
number and then adding 1 to the least significant bit (LSB).

Example:

For an 8-bit system:

+18: 00010010

-18: 11101110 (invert 00010010 to get 11101101, then add 1)

Explanation:

This is the most commonly used method because it simplifies arithmetic operations.

There's only one representation for zero.

Easy to implement in hardware and supports straightforward addition and subtraction.
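
Because C's signed integer types use two's complement on essentially all current hardware, the representation can be checked directly from a program. The sketch below is a minimal illustration (it assumes an 8-bit char and a two's-complement machine, as in the 8-bit examples above):

#include <stdio.h>

/* Print the 8 bits of a byte, most significant bit first. */
void print_bits(unsigned char byte)
{
    for (int i = 7; i >= 0; i--)
        putchar(((byte >> i) & 1) ? '1' : '0');
    putchar('\n');
}

int main(void)
{
    signed char pos = 18;
    signed char neg = -18;

    print_bits((unsigned char)pos);   /* 00010010 */
    print_bits((unsigned char)neg);   /* 11101110 on two's-complement machines */

    /* Building -18 by hand: invert the bits of 18, then add 1. */
    unsigned char manual = (unsigned char)~18 + 1;
    print_bits(manual);               /* also 11101110 */
    return 0;
}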

Floating Point Representation in Computers


Floating-point representation is a way to store real numbers (numbers with fractional parts) in a
format that can accommodate a wide range of values. This is essential for scientific calculations,
graphics, and many other applications.

1. Why Use Floating Point?

Wide Range of Values: Floating-point can represent very small and very large numbers.

Fractional Values: It allows for the representation of fractional parts, unlike integers.

Standardization: The IEEE 754 standard ensures consistency across different computing
systems.

2. IEEE 754 Standard

The IEEE 754 standard is the most widely used format for floating-point arithmetic. It defines two
main formats:

Single Precision (32-bit)

Double Precision (64-bit)

3. Components of Floating-Point Representation

A floating-point number is typically represented in the following form:

value = (-1)^s * 1.mantissa * 2^(exponent - bias)

Where:

s: Sign bit (0 for positive, 1 for negative)

mantissa (fraction): Represents the significant digits of the number.

exponent: Determines the scale or magnitude of the number.

bias: A constant used to adjust the range of the exponent (127 for single precision, 1023 for
double precision).

4. Single Precision (32-bit)

In single precision, a floating-point number is divided into three parts:

Sign (1 bit): Indicates the sign of the number.

Exponent (8 bits): Encodes the exponent with a bias of 127.

Mantissa (23 bits): Encodes the significant digits of the number.

Example: Representing the number -5.75 in IEEE 754 single precision.

1. Convert to Binary:

5.75 = 101.11 in binary (4 + 1 + 0.5 + 0.25).

2. Normalize:

101.11 = 1.0111 * 2^2, so the exponent is 2 and the fraction is 0111.

3. Determine Components:

Sign bit (s): 1 (since the number is negative)

Exponent: 2 + 127 (bias) = 129 = 10000001 in binary

Mantissa: 01110000000000000000000 (23 bits)

4. Combine:

1 | 10000001 | 01110000000000000000000

Explanation:

Sign bit is 1.

Exponent is 129, which in binary is 10000001.

Mantissa is the fraction bits after the binary point of the normalized form (0111), padded with zeros to 23 bits.
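
A quick way to see this layout from a C program is to copy the float's bytes into a 32-bit unsigned integer and pull the fields out with shifts and masks. This is only a sketch and assumes a platform where float and unsigned int are both 32 bits (the common case):

#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = -5.75f;
    unsigned int bits;

    /* Reinterpret the raw bytes of the float as an integer of the same size. */
    memcpy(&bits, &f, sizeof bits);

    unsigned int sign     = bits >> 31;             /* 1 bit   */
    unsigned int exponent = (bits >> 23) & 0xFF;    /* 8 bits  */
    unsigned int mantissa = bits & 0x7FFFFF;        /* 23 bits */

    printf("sign     = %u\n", sign);         /* 1                            */
    printf("exponent = %u\n", exponent);     /* 129, i.e. 10000001           */
    printf("mantissa = 0x%06X\n", mantissa); /* 0x380000, i.e. 01110000...0  */
    return 0;
}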

5. Double Precision (64-bit)

In double precision, a floating-point number is divided into three parts:


Sign (1 bit): Indicates the sign of the number.

Exponent (11 bits): Encodes the exponent with a bias of 1023.

Mantissa (52 bits): Encodes the significant digits of the number.

Example: Representing the number -5.75 in IEEE 754 double precision.

1. Convert to Binary:

As before, 5.75 = 101.11 in binary.

2. Normalize:

101.11 = 1.0111 * 2^2, so the exponent is 2 and the fraction is 0111.

3. Determine Components:

Sign bit (s): 1 (since the number is negative)

Exponent: 2 + 1023 (bias) = 1025 = 10000000001 in binary

Mantissa: 0111000000000000000000000000000000000000000000000000 (52 bits)

4. Combine:

1 | 10000000001 | 0111000000000000000000000000000000000000000000000000

Explanation:

Sign bit is 1.

Exponent is 1025, which in binary is 10000000001.

Mantissa is the fraction bits after the binary point of the normalized form (0111), padded with zeros to 52 bits.

6. Special Values

IEEE 754 defines several special values:

Zero: Represented by all bits in the exponent and mantissa being zero.

Positive Zero: 0 | 00000000 | 00000000000000000000000

Negative Zero: 1 | 00000000 | 00000000000000000000000

Infinity: Represented by all bits in the exponent being ones and all bits in the mantissa being
zero.

Positive Infinity: 0 | 11111111 | 00000000000000000000000

Negative Infinity: 1 | 11111111 | 00000000000000000000000

NaN (Not a Number): Represented by all bits in the exponent being ones and at least one
bit in the mantissa being non-zero.

Example: 0 | 11111111 | 10000000000000000000000

7. Precision and Rounding

Precision: Single precision can represent approximately 7 decimal digits, while double
precision can represent about 15 decimal digits.

Rounding: When performing arithmetic operations, results might need to be rounded to fit
the available precision. IEEE 754 specifies several rounding modes (e.g., round to nearest,
round toward zero).
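
The difference between roughly 7 and 15 significant digits is easy to observe from C. The short sketch below stores the same constant in a float and a double and prints both with extra digits so the rounding becomes visible (the printed values are what IEEE 754 gives on typical machines):

#include <stdio.h>

int main(void)
{
    float  f = 0.1f;   /* single precision: about 7 significant decimal digits  */
    double d = 0.1;    /* double precision: about 15 significant decimal digits */

    printf("float : %.20f\n", f);   /* 0.10000000149011611938 */
    printf("double: %.20f\n", d);   /* 0.10000000000000000555 */
    return 0;
}
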
Example Calculation

Let's convert a simple decimal number to its floating-point representation and back to understand
the process.

Example: Convert the decimal number 10.375 to IEEE 754 single precision.

1. Convert to Binary:

10 in binary is 1010.

0.375 in binary is 0.011 (since 0.375 = 0.5*0 + 0.25*1 + 0.125*1).

Combining these gives: 1010.011.

2. Normalize:

1010.011 = 1.010011 * 2^3, so the exponent is 3 and the fraction is 010011.

3. Determine Components:

Sign bit (s): 0 (since the number is positive)

Exponent: 3 + 127 (bias) = 130 = 10000010 in binary.

Mantissa: 01001100000000000000000 (23 bits)

4. Combine:

0 | 10000010 | 01001100000000000000000

Converting Back to Decimal

Given the IEEE 754 single precision representation 0 | 10000010 | 01001100000000000000000 :

1. Sign Bit: 0 (positive)

2. Exponent: 10000010 in binary = 130. Subtract the bias (127): 130 - 127 = 3.

3. Mantissa: 1.010011 (normalized form)

Combine these to get: 1.010011 * 2^3 = 1010.011 in binary.

Convert back to decimal: 1010.011 = 8 + 2 + 0.25 + 0.125 = 10.375.
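
The decoding steps can also be checked from C. The sketch below is only an illustration (it assumes a normalized single-precision value, so special cases such as zero, infinity, and NaN are ignored); it starts from the 32-bit pattern built above and reconstructs the value using ldexp from <math.h>:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* 0 | 10000010 | 01001100000000000000000  is the pattern 0x41260000 */
    unsigned int bits = 0x41260000u;

    unsigned int sign     = bits >> 31;
    int          exponent = (int)((bits >> 23) & 0xFF) - 127;  /* subtract the bias */
    unsigned int fraction = bits & 0x7FFFFF;

    /* Normalized value: (-1)^s * (1 + fraction / 2^23) * 2^exponent */
    double value = 1.0 + fraction / 8388608.0;   /* 8388608 = 2^23         */
    value = ldexp(value, exponent);              /* multiply by 2^exponent */
    if (sign)
        value = -value;

    printf("%f\n", value);   /* prints 10.375000 */
    return 0;
}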

Character Representation and Encoding in Computers

Basics of Character Representation

Characters (letters, digits, symbols) in computers are represented using numeric codes. Each
character is assigned a unique number, which is stored in memory as binary data. This system
allows computers to process text.
Common Character Encoding Standards

There are several encoding standards used to represent characters. The most common ones
include:

ASCII (American Standard Code for Information Interchange)

Extended ASCII

Unicode

UTF-8

1. ASCII (American Standard Code for Information Interchange)
Description: ASCII is one of the oldest character encoding standards. It uses 7 bits to
represent each character, allowing for 128 unique characters.

Range: 0 to 127.

Characters: Includes control characters (e.g., newline, tab) and printable characters (e.g.,
letters, digits, punctuation).

Example:

The character 'A' in ASCII:

Decimal: 65

Binary: 01000001

Explanation:

ASCII uses 7 bits, so 'A' (65 in decimal) is represented as 01000001 in binary.
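
In C, a character constant is simply its numeric code, so the mapping can be printed directly. A small illustration:

#include <stdio.h>

int main(void)
{
    char c = 'A';

    /* The same byte printed as a character and as its ASCII code. */
    printf("%c = %d\n", c, c);            /* A = 65 */

    for (char ch = 'A'; ch <= 'E'; ch++)
        printf("%c -> %d\n", ch, ch);     /* A -> 65, B -> 66, ..., E -> 69 */
    return 0;
}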

2. Extended ASCII
Description: Extended ASCII uses 8 bits to represent each character, allowing for 256 unique
characters. It includes the standard ASCII set plus additional characters (e.g., accented letters,
symbols).

Range: 0 to 255.

Example:

The character 'é' in Extended ASCII:

Decimal: 233

Binary: 11101001

Explanation:

Extended ASCII uses 8 bits, so 'é' (233 in decimal) is represented as 11101001 in binary.

3. Unicode
Description: Unicode is a comprehensive standard designed to support characters from all
writing systems around the world. It assigns a unique code point to every character,
independent of how it is encoded.

Range: Over 1.1 million code points, covering most of the world's writing systems.

Example:

The character 'A' in Unicode:

Code Point: U+0041

The character '汉' (Chinese character for "Han") in Unicode:

Code Point: U+6C49

Explanation:

Unicode uses a code point to represent each character. The encoding specifies how these
code points are stored in memory.

4. UTF-8 (Unicode Transformation Format - 8-bit)


Description: UTF-8 is a variable-length encoding standard that encodes Unicode characters
using one to four bytes. It is backward compatible with ASCII and efficient for representing
English text while also supporting all Unicode characters.

Range: Each character can be 1 to 4 bytes.

Example:

The character 'A' in UTF-8:

1 byte: 01000001

The character '汉' in UTF-8:

3 bytes: 11100110 10110001 10001001

Explanation:

UTF-8 is designed to be efficient for ASCII characters (using only one byte) and flexible
enough to handle all Unicode characters with variable-length encoding.
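
The byte sequences above can be inspected by printing a string literal byte by byte. The sketch below assumes the source file and compiler use UTF-8 for string literals (the usual default for gcc and clang); with other settings the bytes may differ:

#include <stdio.h>

int main(void)
{
    const char *a   = "A";    /* 1 byte in UTF-8  */
    const char *han = "汉";   /* 3 bytes in UTF-8 */

    /* Print each byte of the encoded strings in hexadecimal. */
    for (const unsigned char *p = (const unsigned char *)a; *p != '\0'; p++)
        printf("%02X ", *p);      /* 41 */
    printf("\n");

    for (const unsigned char *p = (const unsigned char *)han; *p != '\0'; p++)
        printf("%02X ", *p);      /* E6 B1 89 */
    printf("\n");
    return 0;
}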

Example: Encoding and Decoding


Let's go through an example of encoding and decoding a character using different standards.

Example: Encoding and decoding the character 'A'

1. ASCII:

Encoding: 'A' -> Decimal 65 -> Binary 01000001

Decoding: Binary 01000001 -> Decimal 65 -> 'A'

2. Extended ASCII:

Encoding: 'A' -> Decimal 65 -> Binary 01000001


Decoding: Binary 01000001 -> Decimal 65 -> 'A'

3. Unicode:

Encoding: 'A' -> Code Point U+0041

Decoding: Code Point U+0041 -> 'A'

4. UTF-8:

Encoding: 'A' -> Code Point U+0041 -> UTF-8 01000001 (1 byte)

Decoding: UTF-8 01000001 -> Code Point U+0041 -> 'A'

Importance of Character Encoding

Text Processing: Proper character encoding is essential for correctly displaying and
processing text in different languages.

Data Exchange: Ensures consistent interpretation of text data across different systems and
platforms.

Compatibility: UTF-8's backward compatibility with ASCII allows for seamless integration
with legacy systems.

Instruction Encoding in a Computer


Basics of Instruction Encoding

In a computer, instructions are encoded as binary numbers that the CPU can decode and execute.
These instructions are part of a machine language, which is a low-level language specific to a
computer's architecture. Each instruction tells the CPU what operation to perform and on which
data.

Components of an Instruction

An instruction typically consists of several components:

Opcode (Operation Code): Specifies the operation to be performed (e.g., add, subtract, load,
store).

Operands: Specify the data or the addresses of the data to be operated on. Operands can be:

Registers: Small, fast storage locations within the CPU.

Immediate values: Constants embedded in the instruction itself.

Memory addresses: Locations in RAM.
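
As an illustration only (this is a made-up format, not any real instruction set), the sketch below packs an opcode and three register numbers into a 32-bit word with shifts and masks, the way a simple fixed-width encoding might work:

#include <stdio.h>

/* Hypothetical 32-bit format: | opcode:8 | rd:8 | rs1:8 | rs2:8 | */
unsigned int encode(unsigned int opcode, unsigned int rd,
                    unsigned int rs1, unsigned int rs2)
{
    return (opcode << 24) | (rd << 16) | (rs1 << 8) | rs2;
}

int main(void)
{
    /* Pretend opcode 0x01 means "add rd, rs1, rs2". */
    unsigned int instr = encode(0x01, 3, 1, 2);    /* add r3, r1, r2 */

    printf("0x%08X\n", instr);                     /* 0x01030102 */

    /* A decoder (or the CPU) extracts the fields the same way in reverse. */
    printf("opcode=%u rd=%u rs1=%u rs2=%u\n",
           instr >> 24, (instr >> 16) & 0xFF,
           (instr >> 8) & 0xFF, instr & 0xFF);
    return 0;
}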

Instruction Formats

Different CPU architectures have different instruction formats. Common formats include:

RISC (Reduced Instruction Set Computing): Simple instructions that execute in a single
clock cycle.

CISC (Complex Instruction Set Computing): More complex instructions that can execute
multiple low-level operations.

Program Compilation in a Computer


Compilation is the process of converting high-level programming code into machine code that a
computer's CPU can execute. This process involves several stages, each transforming the source
code closer to executable machine code.

The compilation process typically involves the following stages:

1. Preprocessing

2. Compilation

3. Assembly

4. Linking

Stages of Compilation
1. Preprocessing

The preprocessing stage involves preparing the source code for compilation. This step includes:

Macro Expansion: Expanding macros defined with #define .

File Inclusion: Including header files specified with #include .

Conditional Compilation: Evaluating conditional directives such as #if, #ifdef, and #endif.
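
For example, a small source file using all three features might look like the sketch below (the macro names PI and DEBUG are made up for illustration); the preprocessor resolves every line starting with # before compilation proper begins:

#include <stdio.h>      /* file inclusion: the contents of stdio.h are pasted in   */

#define PI 3.14159      /* macro expansion: every PI below becomes 3.14159         */
#define DEBUG 1

int main(void)
{
#if DEBUG               /* conditional compilation: kept only if DEBUG is non-zero */
    printf("debug build\n");
#endif
    printf("area = %f\n", PI * 2.0 * 2.0);
    return 0;
}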

2. Compilation

The compilation stage translates the preprocessed source code into assembly language. This
involves:

Syntax Analysis: Checking the syntax of the code.

Semantic Analysis: Ensuring the code makes sense semantically.

Intermediate Code Generation: Generating an intermediate representation of the code.

Optimization: Improving the efficiency of the intermediate code.

Code Generation: Translating the optimized intermediate code to assembly code.

3. Assembly

The assembly stage converts the assembly code into machine code (binary format). This is done
by the assembler, which translates each assembly instruction into its corresponding machine
instruction.

4. Linking

The linking stage combines multiple object files and libraries into a single executable file. This
involves:
Symbol Resolution: Resolving references to functions and variables across different object
files.

Address Binding: Assigning memory addresses to code and data.

Library Linking: Including external libraries required by the program.
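
With the gcc toolchain these stages can be run one at a time, which makes the intermediate files visible. A minimal sketch, assuming a source file named hello.c:

gcc -E hello.c -o hello.i    (preprocessing: expands #include, #define, #if)
gcc -S hello.i -o hello.s    (compilation: produces assembly code)
gcc -c hello.s -o hello.o    (assembly: produces an object file of machine code)
gcc hello.o -o hello         (linking: produces the executable file)

Running gcc hello.c -o hello performs all four stages in one step.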

Role of the Operating System in a Computer


Overview of the Operating System

The operating system (OS) is a fundamental piece of software that manages computer hardware
and software resources and provides services for computer programs. It acts as an intermediary
between users and the computer hardware.

Functions of the Operating System


1. Process Management

The OS manages processes in the system, including:

Process Scheduling: Allocating CPU time to various processes.

Process Creation and Termination: Creating and terminating processes as needed.

Process Synchronization and Communication: Ensuring processes can communicate and operate without conflict.

Example: When you open a web browser, the OS creates a new process for it, allocates CPU time,
and manages its execution until you close the browser.

2. Memory Management

The OS handles the allocation and deallocation of memory spaces as needed by programs. It
ensures efficient use of memory and manages:

Main Memory: The RAM available for running programs.

Virtual Memory: Extending the available memory using disk space.

Example: When you run multiple applications, the OS allocates memory to each application and
uses virtual memory to handle situations where RAM is insufficient.

3. File System Management

The OS manages files and directories on storage devices. It provides:

File Creation and Deletion: Enabling users to create, delete, and modify files.

Directory Management: Organizing files into directories (folders).

File Access Control: Setting permissions to control access to files.

Example: When you save a document, the OS writes the data to the storage device and updates
the file directory.

4. Device Management

The OS manages hardware devices through device drivers, which are software components that
enable the OS to communicate with hardware peripherals. It handles:

Device Drivers: Software that communicates with hardware devices.

I/O Operations: Managing input/output operations and data transfers.

Example: When you print a document, the OS sends data to the printer through the printer driver.

5. User Interface

The OS provides user interfaces (UIs) for interaction, such as:

Command-Line Interface (CLI): Text-based interface for entering commands.

Graphical User Interface (GUI): Visual interface with windows, icons, and menus.

Example: Windows OS provides a GUI with a desktop, start menu, and taskbar for easy
interaction.

6. Security and Access Control

The OS ensures system security and controls access to resources by:

User Authentication: Verifying user identities (e.g., login credentials).

Access Control: Setting permissions for files and resources to ensure only authorized users
can access them.

Example: The OS requires a password to log in and allows setting file permissions to restrict
access.

Booting
Definition

Booting is the process of starting up a computer and loading the operating system into memory
so that the computer becomes ready for use.

Steps

1. Power-On: The computer is powered on, and the power supply activates.

2. POST (Power-On Self-Test): The BIOS checks the hardware components to ensure they are
functioning correctly.

3. Boot Loader: The BIOS locates and loads the boot loader from a bootable device.

4. OS Loading: The boot loader loads the operating system kernel into memory.

5. Initialization: The operating system initializes system settings and loads necessary drivers
and services.

Example

When you press the power button on your computer, the system undergoes the booting process,
eventually displaying the login screen for the operating system.

Bootstrap
Definition

Bootstrapping is the process by which a computer loads and initializes the operating system from
a powered-off state, starting from the BIOS executing initial instructions to loading the OS into
memory.

Steps

1. CPU Initialization: The CPU begins executing instructions from the BIOS.

2. POST: The BIOS performs hardware checks.

3. Boot Sequence: The BIOS follows the boot order to find a bootable device.

4. Boot Loader: The BIOS loads the boot loader from the bootable device.

5. OS Loading: The boot loader loads the operating system kernel.

6. System Handover: Control is handed over to the operating system.

Example

The bootstrapping process involves the BIOS loading the boot loader from the hard drive, which
then loads the operating system like Windows or Linux.

BIOS (Basic Input/Output System)


Definition

BIOS is firmware stored on a chip on the motherboard, responsible for initializing hardware
during the booting process and providing runtime services for operating systems and applications.

Functions

1. POST (Power-On Self-Test): Checks the integrity and functionality of hardware components.

2. Boot Loader Location: Identifies and loads the boot loader from a bootable device.

3. BIOS Setup Utility: Allows users to configure hardware settings and system parameters.

4. Hardware Initialization: Initializes hardware components before the operating system takes
over.

5. System Firmware Interface: Provides an interface between the operating system and the
hardware.

Example

When you start your computer, the BIOS performs a POST, loads the boot loader, and then hands
control over to the operating system.
