Week-2 Data Representation
C Programming - Week 2
Number representation in computers
1. Binary Number System
2. Octal Number System
3. Decimal Number System
4. Hexadecimal Number System
Conversion Between Number Systems
Negative Number Representation in Computers
1. Sign and Magnitude
2. Two's Complement
Floating Point Representation in Computers
Character Representation and Encoding in Computers
1. ASCII (American Standard Code for Information Interchange)
2. Extended ASCII
3. Unicode
4. UTF-8 (Unicode Transformation Format - 8-bit)
Example: Encoding and Decoding
Instruction Encoding in a Computer
Program Compilation in a Computer
Stages of Compilation
Role of the Operating System in a Computer
Functions of the Operating System
Booting
Bootstrap
BIOS (Basic Input/Output System)
1. Binary Number System
The binary number system uses two digits, 0 and 1. Each digit is known as a bit.
It is the foundation of all digital systems: every value a computer stores is ultimately encoded as a sequence of bits.
To convert a binary number to decimal, sum the products of each digit and its corresponding power of 2.
Conversion Example: 1011 in binary = 1 * 8 + 0 * 4 + 1 * 2 + 1 * 1 = 11 in decimal.
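To see this in a C program, here is a minimal sketch (the helper name print_binary is just an illustrative choice, not a standard library function) that prints a value bit by bit and checks the weighted sum:

#include <stdio.h>

/* Print the lowest 'bits' bits of value, most significant bit first. */
static void print_binary(unsigned int value, int bits)
{
    for (int i = bits - 1; i >= 0; i--)
        putchar(((value >> i) & 1u) ? '1' : '0');
    putchar('\n');
}

int main(void)
{
    unsigned int n = 11;                      /* 1011 in binary */
    print_binary(n, 4);                       /* prints 1011 */
    printf("%u\n", 1*8 + 0*4 + 1*2 + 1*1);    /* prints 11 */
    return 0;
}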
2. Octal Number System
The octal number system uses eight digits, from 0 to 7. It is a base-8 system.
To convert an octal number to decimal, sum the products of each digit and its corresponding
power of 8.
Conversion Example: 345 in octal = 3 * 64 + 4 * 8 + 5 * 1 = 229 in decimal.
3. Decimal Number System
It is a base-10 number system and the most commonly used in everyday life.
Example: The decimal number 345.
Explanation: Each digit is weighted by a power of 10, so 345 = 3 * 100 + 4 * 10 + 5 * 1.
4. Hexadecimal Number System
The hexadecimal number system uses sixteen digits, from 0 to 9 and A to F (where A = 10, B =
11, C = 12, D = 13, E = 14, and F = 15). It is a base-16 system.
To convert a hexadecimal number to decimal, sum the products of each digit and its
corresponding power of 16.
Conversion Example: 2A in hexadecimal = 2 * 16 + 10 * 1 = 42 in decimal.
Conversion Between Number Systems
2. Decimal to Octal:
100 ÷ 8 = 12 remainder 4
12 ÷ 8 = 1 remainder 4
1 ÷ 8 = 0 remainder 1
Write remainders in reverse: 144
100 = 144 in octal.
3. Octal to Binary:
Replace each octal digit with its 3-bit binary equivalent: 6 = 110, 4 = 100
Result: 64 in octal = 110100 in binary.
4. Decimal to Hexadecimal:
255 ÷ 16 = 15 remainder 15 (F)
15 ÷ 16 = 0 remainder 15 (F)
Write remainders in reverse: FF
255 = FF in hexadecimal.
5. Hexadecimal to Decimal:
1A3 in hexadecimal:
1 * 256 + 10 * 16 + 3 * 1
= 256 + 160 + 3
= 419
6. Binary to Hexadecimal:
Group the bits in fours from the right and replace each group with its hexadecimal digit: 1011 = B, 1110 = E
Result: 10111110 in binary = BE in hexadecimal.
7. Hexadecimal to Binary:
Replace each hexadecimal digit with its 4-bit binary equivalent:
3 = 0011
F = 1111
Result: 3F in hexadecimal = 00111111 in binary.
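In C, the standard library already handles most of these conversions. The sketch below (using the standard printf and strtol functions; the specific numbers are simply the examples from above) prints one value in several bases and parses strings written in different bases:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int n = 255;

    /* Print the same value in decimal, octal and hexadecimal. */
    printf("decimal: %d  octal: %o  hex: %X\n", n, n, n);     /* 255  377  FF */

    /* strtol parses a string as a number in the given base. */
    long from_hex = strtol("1A3", NULL, 16);                  /* 419 */
    long from_oct = strtol("377", NULL, 8);                   /* 255 */
    long from_bin = strtol("00111111", NULL, 2);              /* 63, i.e. 3F in hex */
    printf("%ld %ld %ld\n", from_hex, from_oct, from_bin);
    return 0;
}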
Negative Number Representation in Computers
1. Sign and Magnitude
Description: The most significant bit (MSB) is used as a sign bit (0 for positive, 1 for negative),
and the remaining bits hold the magnitude of the number.
Example:
+18: 00010010
-18: 10010010
Explanation:
The two patterns differ only in the sign bit; the magnitude bits (0010010) are identical.
This method is simple but has the drawback of allowing two representations of zero (+0 and -0).
2. Two's Complement
Description: Negative numbers are represented by inverting all the bits of the positive
number and then adding 1 to the least significant bit (LSB).
Example:
+18: 00010010
-18: 11101110
Explanation:
Invert all bits of 00010010 to get 11101101, then add 1 to obtain 11101110.
This is the most commonly used method because it simplifies arithmetic operations.
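The following short C program (a sketch assuming an 8-bit byte and a two's-complement machine, which is what essentially all current hardware uses) prints the bit patterns of +18 and -18 and shows that inverting the bits and adding 1 produces the negative pattern:

#include <stdio.h>

/* Print the 8 bits of a byte, most significant bit first. */
static void print_bits(unsigned char v)
{
    for (int i = 7; i >= 0; i--)
        putchar(((v >> i) & 1u) ? '1' : '0');
    putchar('\n');
}

int main(void)
{
    signed char pos = 18;
    signed char neg = -18;

    print_bits((unsigned char)pos);           /* 00010010 */
    print_bits((unsigned char)neg);           /* 11101110 */
    print_bits((unsigned char)(~pos + 1));    /* 11101110: invert the bits, add 1 */
    return 0;
}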
Floating Point Representation in Computers
Floating-point representation stores real numbers using a sign, a significand (mantissa), and an exponent, much like scientific notation. Its advantages include:
Wide Range of Values: Floating-point can represent very small and very large numbers.
Fractional Values: It allows for the representation of fractional parts, unlike integers.
Standardization: The IEEE 754 standard ensures consistency across different computing
systems.
The IEEE 754 standard is the most widely used format for floating-point arithmetic. It defines two
main formats:
Single Precision (32-bit): 1 sign bit, 8 exponent bits, 23 mantissa bits.
Double Precision (64-bit): 1 sign bit, 11 exponent bits, 52 mantissa bits.
A normalized value is interpreted as:
value = (-1)^sign * 1.mantissa * 2^(exponent - bias)
Where:
bias: A constant used to adjust the range of the exponent (127 for single precision, 1023 for
double precision).
Example: Represent -5.75 in IEEE 754 single precision.
1. Convert to Binary:
5.75 in decimal is 101.11 in binary.
2. Normalize:
101.11 = 1.0111 * 2^2.
3. Determine Components:
Sign = 1 (negative), exponent = 2 + 127 = 129 = 10000001, mantissa = 0111 padded to 23 bits.
4. Combine:
1 | 10000001 | 01110000000000000000000
Explanation:
Sign bit is 1.
Exponent is the power of 2 (here 2) plus the bias (127), written in 8 bits.
Mantissa is the binary digits after the binary point, padded to 23 bits.
Example: Represent -5.75 in IEEE 754 double precision.
1. Convert to Binary:
5.75 in decimal is 101.11 in binary.
2. Normalize:
101.11 = 1.0111 * 2^2.
3. Determine Components:
Sign = 1 (negative), exponent = 2 + 1023 = 1025 = 10000000001, mantissa = 0111 padded to 52 bits.
4. Combine:
1 | 10000000001 | 0111000000000000000000000000000000000000000000000000
Explanation:
Sign bit is 1.
Exponent is the power of 2 (here 2) plus the bias (1023), written in 11 bits.
Mantissa is the binary digits after the binary point, padded to 52 bits.
6. Special Values
Zero: Represented by all bits in the exponent and mantissa being zero.
Infinity: Represented by all bits in the exponent being ones and all bits in the mantissa being
zero.
NaN (Not a Number): Represented by all bits in the exponent being ones and at least one
bit in the mantissa being non-zero.
Precision: Single precision can represent approximately 7 decimal digits, while double
precision can represent about 15 decimal digits.
Rounding: When performing arithmetic operations, results might need to be rounded to fit
the available precision. IEEE 754 specifies several rounding modes (e.g., round to nearest,
round toward zero).
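A quick way to see these limits in C (a sketch; the exact digits printed can vary slightly between platforms) is to print 0.1, which has no exact binary representation, with more digits than the type can actually hold:

#include <stdio.h>

int main(void)
{
    float  f = 0.1f;   /* roughly 7 significant decimal digits */
    double d = 0.1;    /* roughly 15-16 significant decimal digits */

    /* Neither variable holds exactly 0.1; each holds the nearest
       representable binary fraction. */
    printf("%.20f\n", f);   /* e.g. 0.10000000149011611938 */
    printf("%.20f\n", d);   /* e.g. 0.10000000000000000555 */
    return 0;
}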
Example Calculation
Let's convert a simple decimal number to its floating-point representation and back to understand
the process.
Example: Convert the decimal number 10.375 to IEEE 754 single precision.
1. Convert to Binary:
10 in decimal is 1010 in binary, and 0.375 is 0.011 in binary, so 10.375 = 1010.011.
2. Normalize:
1010.011 = 1.010011 * 2^3.
3. Determine Components:
Sign = 0 (positive), exponent = 3 + 127 = 130 = 10000010, mantissa = 010011 padded to 23 bits.
4. Combine:
0 | 10000010 | 01001100000000000000000
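You can verify this layout in C by reinterpreting the bits of the float as a 32-bit integer (a sketch using the common union idiom; it assumes the float type is IEEE 754 single precision, which virtually all current compilers use):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    union { float f; uint32_t u; } x;
    x.f = 10.375f;

    uint32_t sign     = x.u >> 31;             /* 1 bit  */
    uint32_t exponent = (x.u >> 23) & 0xFFu;   /* 8 bits */
    uint32_t mantissa = x.u & 0x7FFFFFu;       /* 23 bits */

    /* Expected: sign=0, exponent=130 (10000010), mantissa=0x260000
       (010011 followed by zeros). */
    printf("sign=%u exponent=%u mantissa=0x%06X\n",
           (unsigned)sign, (unsigned)exponent, (unsigned)mantissa);
    return 0;
}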
Character Representation and Encoding in Computers
Characters (letters, digits, symbols) in computers are represented using numeric codes. Each
character is assigned a unique number, which is stored in memory as binary data. This system
allows computers to process text.
Common Character Encoding Standards
There are several encoding standards used to represent characters. The most common ones
include:
ASCII
Extended ASCII
Unicode
UTF-8
1. ASCII (American Standard Code for Information Interchange)
Description: ASCII uses 7 bits to represent each character, allowing for 128 unique characters.
Range: 0 to 127.
Characters: Includes control characters (e.g., newline, tab) and printable characters (e.g.,
letters, digits, punctuation).
Example:
Character: 'A'
Decimal: 65
Binary: 01000001
Explanation:
The character 'A' is assigned code 65, which is stored as 01000001 in binary.
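In C, a character and its code are the same value, as this small sketch shows (it assumes an ASCII-based system, which is the norm today):

#include <stdio.h>

int main(void)
{
    char c = 'A';
    printf("%c = %d\n", c, c);   /* A = 65: print it as a character and as its code */
    printf("%c\n", 65);          /* the code 65 printed as a character gives A */
    return 0;
}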
2. Extended ASCII
Description: Extended ASCII uses 8 bits to represent each character, allowing for 256 unique
characters. It includes the standard ASCII set plus additional characters (e.g., accented letters,
symbols).
Range: 0 to 255.
Example:
Character: 'é'
Decimal: 233
Binary: 11101001
Explanation:
Extended ASCII uses 8 bits, so 'é' (233 in decimal) is represented as 11101001 in binary.
3. Unicode
Description: Unicode is a comprehensive standard designed to support characters from all
writing systems around the world. It assigns a unique code point to every character,
independent of how it is encoded.
Range: Over 1.1 million code points, covering most of the world's writing systems.
Example:
The character 'A' is assigned code point U+0041, and 'é' is assigned code point U+00E9.
Explanation:
Unicode uses a code point to represent each character. The encoding specifies how these
code points are stored in memory.
4. UTF-8 (Unicode Transformation Format - 8-bit)
Description: UTF-8 is a variable-length encoding for Unicode that uses 1 to 4 bytes per character
and is backward compatible with ASCII.
Example:
Character: 'A'
1 byte: 01000001
Explanation:
UTF-8 is designed to be efficient for ASCII characters (using only one byte) and flexible
enough to handle all Unicode characters with variable-length encoding.
Example: Encoding and Decoding
1. ASCII: 'A' -> 65 -> 01000001
2. Extended ASCII: 'A' -> 65 -> 01000001
3. Unicode: 'A' -> Code Point U+0041
4. UTF-8:
Encoding: 'A' -> Code Point U+0041 -> UTF-8 01000001 (1 byte)
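These byte sequences can be inspected directly in C. In this sketch the UTF-8 bytes for 'é' are written out explicitly (\xC3 \xA9) so the example does not depend on the source file's own encoding:

#include <stdio.h>

int main(void)
{
    /* "Aé" as UTF-8: 'A' takes 1 byte, 'é' (U+00E9) takes 2 bytes. */
    const char *s = "A\xC3\xA9";

    for (const unsigned char *p = (const unsigned char *)s; *p != '\0'; p++)
        printf("%02X ", *p);     /* prints: 41 C3 A9 */
    putchar('\n');
    return 0;
}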
Text Processing: Proper character encoding is essential for correctly displaying and
processing text in different languages.
Data Exchange: Ensures consistent interpretation of text data across different systems and
platforms.
Compatibility: UTF-8's backward compatibility with ASCII allows for seamless integration
with legacy systems.
Instruction Encoding in a Computer
In a computer, instructions are encoded as binary numbers that the CPU can decode and execute.
These instructions are part of a machine language, which is a low-level language specific to a
computer's architecture. Each instruction tells the CPU what operation to perform and on which
data.
Components of an Instruction
Opcode (Operation Code): Specifies the operation to be performed (e.g., add, subtract, load,
store).
Operands: Specify the data or the addresses of the data to be operated on. Operands can be
registers, memory addresses, or immediate (constant) values.
Instruction Formats
Different CPU architectures have different instruction formats. Common formats include:
RISC (Reduced Instruction Set Computing): Simple instructions that execute in a single
clock cycle.
CISC (Complex Instruction Set Computing): More complex instructions that can execute
multiple low-level operations.
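As an illustration only, here is a sketch in C of how an instruction might be packed into a 32-bit word. The field layout and the opcode value are made up for this example and do not correspond to any particular real CPU:

#include <stdio.h>
#include <stdint.h>

/* Hypothetical 32-bit format: opcode in bits 31-26, destination register in
   bits 25-21, source register in bits 20-16, 16-bit immediate in bits 15-0. */
#define OPCODE_ADDI 0x08u   /* made-up opcode for "add immediate" */

static uint32_t encode_addi(unsigned rd, unsigned rs, uint16_t imm)
{
    return (OPCODE_ADDI << 26) | ((rd & 0x1Fu) << 21) | ((rs & 0x1Fu) << 16) | imm;
}

int main(void)
{
    uint32_t instr = encode_addi(3, 1, 5);    /* r3 = r1 + 5 */
    printf("0x%08X\n", (unsigned)instr);      /* the CPU would decode these fields back out */
    return 0;
}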
Program Compilation in a Computer
Compilation translates C source code into an executable program. It proceeds in four stages:
1. Preprocessing
2. Compilation
3. Assembly
4. Linking
Stages of Compilation
1. Preprocessing
The preprocessing stage prepares the source code for compilation. This step includes expanding
macros, processing #include directives, and handling conditional compilation (#if, #ifdef).
2. Compilation
The compilation stage translates the preprocessed source code into assembly language. This
involves parsing the code, checking syntax and semantics, and generating assembly instructions.
3. Assembly
The assembly stage converts the assembly code into machine code (binary format). This is done
by the assembler, which translates each assembly instruction into its corresponding machine
instruction.
4. Linking
The linking stage combines multiple object files and libraries into a single executable file. This
involves:
Symbol Resolution: Resolving references to functions and variables across different object
files.
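With a typical compiler such as GCC, each stage can be run separately. A sketch (the file name hello.c is just a placeholder; exact flags can differ between toolchains):

gcc -E hello.c -o hello.i    # preprocessing: expand #include directives and macros
gcc -S hello.i -o hello.s    # compilation: produce assembly
gcc -c hello.s -o hello.o    # assembly: produce an object file
gcc hello.o -o hello         # linking: produce the executable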
Role of the Operating System in a Computer
The operating system (OS) is a fundamental piece of software that manages computer hardware
and software resources and provides services for computer programs. It acts as an intermediary
between users and the computer hardware.
Functions of the Operating System
1. Process Management
The OS creates, schedules, and terminates processes, and decides how CPU time is shared among
them.
Example: When you open a web browser, the OS creates a new process for it, allocates CPU time,
and manages its execution until you close the browser.
2. Memory Management
The OS handles the allocation and deallocation of memory spaces as needed by programs. It
ensures efficient use of memory and manages:
Example: When you run multiple applications, the OS allocates memory to each application and
uses virtual memory to handle situations where RAM is insufficient.
3. File Management
The OS organizes data into files and directories on storage devices. It handles:
File Creation and Deletion: Enabling users to create, delete, and modify files.
Example: When you save a document, the OS writes the data to the storage device and updates
the file directory.
4. Device Management
The OS manages hardware devices through device drivers, which are software components that
enable the OS to communicate with hardware peripherals. It handles:
Example: When you print a document, the OS sends data to the printer through the printer driver.
5. User Interface
The OS provides interfaces through which users interact with the computer:
Command-Line Interface (CLI): Text-based interface where users type commands.
Graphical User Interface (GUI): Visual interface with windows, icons, and menus.
Example: Windows OS provides a GUI with a desktop, start menu, and taskbar for easy
interaction.
6. Security
The OS protects the system and user data against unauthorized use. This includes:
Access Control: Setting permissions for files and resources to ensure only authorized users
can access them.
Example: The OS requires a password to log in and allows setting file permissions to restrict
access.
Booting
Definition
Booting is the process of starting up a computer and loading the operating system into memory
so that the computer becomes ready for use.
Steps
1. Power-On: The computer is powered on, and the power supply activates.
2. POST (Power-On Self-Test): The BIOS checks the hardware components to ensure they are
functioning correctly.
3. Boot Loader: The BIOS locates and loads the boot loader from a bootable device.
4. OS Loading: The boot loader loads the operating system kernel into memory.
5. Initialization: The operating system initializes system settings and loads necessary drivers
and services.
Example
When you press the power button on your computer, the system undergoes the booting process,
eventually displaying the login screen for the operating system.
Bootstrap
Definition
Bootstrapping is the process by which a computer loads and initializes the operating system from
a powered-off state, starting from the BIOS executing initial instructions to loading the OS into
memory.
Steps
1. CPU Initialization: The CPU begins executing instructions from the BIOS.
2. POST: The BIOS runs the Power-On Self-Test to verify the hardware.
3. Boot Sequence: The BIOS follows the boot order to find a bootable device.
4. Boot Loader: The BIOS loads the boot loader from the bootable device.
Example
The bootstrapping process involves the BIOS loading the boot loader from the hard drive, which
then loads the operating system like Windows or Linux.
BIOS (Basic Input/Output System)
Definition
BIOS is firmware stored on a chip on the motherboard, responsible for initializing hardware
during the booting process and providing runtime services for operating systems and applications.
Functions
1. POST (Power-On Self-Test): Checks the integrity and functionality of hardware components.
2. Boot Loader Location: Identifies and loads the boot loader from a bootable device.
3. BIOS Setup Utility: Allows users to configure hardware settings and system parameters.
4. Hardware Initialization: Initializes hardware components before the operating system takes
over.
5. System Firmware Interface: Provides an interface between the operating system and the
hardware.
Example
When you start your computer, the BIOS performs a POST, loads the boot loader, and then hands
control over to the operating system.