Number System and Data Representation Notes
A number system is a way to represent and express numbers using a consistent set of
symbols and rules. It defines how numbers are represented, manipulated, and understood.
Number systems are foundational in mathematics and computer science, as they are used to
perform calculations, represent data, and express mathematical concepts. Here are some key
components and types of number systems:
1. Base: The base (or radix) of a number system determines how many unique digits or
symbols are used to represent numbers. For example:
- In the decimal system (base-10), the digits are 0-9.
- In the binary system (base-2), the digits are 0 and 1.
1. Decimal (Base-10):
- The most common number system used in everyday life.
- Uses ten digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9.
- Example: The number 256 in decimal represents 2×10^2 + 5×10^1 + 6×10^0
2. Binary (Base-2):
- Used primarily in computer systems and digital electronics.
- Uses two digits: 0 and 1.
- Example: The binary number 1011 represents 1×2^3 + 0×2^2 + 1×2^1 + 1×2^0 = 11 in decimal.
3. Octal (Base-8):
- Uses eight digits: 0, 1, 2, 3, 4, 5, 6, 7.
- Example: The octal number 25 represents 2×8^1 + 5×8^0 = 21 in decimal.
4. Hexadecimal (Base-16):
- Commonly used in computing and programming.
- Uses sixteen symbols: 0-9 and A (10), B (11), C (12), D (13), E (14), F (15).
- Example: The hexadecimal number 1A represents 1×16^1 + 10×16^0 = 26 in decimal.
In summary, a number system is a structured way to represent numbers using a defined set of
symbols and rules. Understanding number systems is crucial for mathematics, computer
science, and various applied fields.
Positional Value System
The positional value system (or place value system) is a method of representing numbers in
which the position of each digit in a number determines its value. This system is fundamental
in many number systems, including the decimal (base-10), binary (base-2), octal (base-8),
and hexadecimal (base-16) systems.
1. Base or Radix:
- The base of a number system indicates how many unique digits or symbols are
used to represent numbers.
- For example, in the decimal system (base-10), the digits are 0 to 9, while in
the binary system (base-2), the digits are 0 and 1.
2. Place Value:
- Each digit in a number has a place value that depends on its position
within the number. The value of each position is a power of the base.
- For instance, in the decimal number 456, the place values are:
- 4×10^2 (hundreds place)
- 5×10^1 (tens place)
- 6×10^0 (ones place)
- The total value is calculated by summing these values: 400+50+6=456
3. Counting System:
- The positional value system allows for compact representation of large
numbers, making it easier to perform arithmetic operations.
- For example, "one thousand" is written compactly as 1000, which in positional terms is simply 1×10^3 (see the sketch below).
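To make place value concrete in code, here is a minimal Python sketch (our own illustration; the function name expand_positional is not from these notes) that expands a digit string in any base into a sum of digit × base^position:

def expand_positional(digits, base):
    """Return the value of a digit sequence interpreted in the given base.

    digits is a string such as "456" (base 10) or "1011" (base 2);
    each digit's contribution is digit * base**position.
    """
    value = 0
    for position, digit_char in enumerate(reversed(digits)):
        digit = int(digit_char, base)       # convert the single digit
        value += digit * base ** position   # weight it by its place value
    return value

print(expand_positional("456", 10))   # 4*10^2 + 5*10^1 + 6*10^0 = 456
print(expand_positional("1011", 2))   # 1*2^3 + 0*2^2 + 1*2^1 + 1*2^0 = 11
print(expand_positional("1A", 16))    # 1*16^1 + 10*16^0 = 26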
1. Number Systems
- Decimal Number System
- Binary Number System
- Octal Number System
- Hexadecimal Number System
2. Conversions between Number Systems
- Converting Decimal to Binary, Octal, and Hexadecimal
- Converting Binary, Octal, and Hexadecimal to Decimal
- Conversions between Binary, Octal, and Hexadecimal
- Binary Arithmetic
3. Binary Data Representation
- Fixed-Point and Floating-Point Representation
- Binary Coding Schemes
NUMBER SYSTEMS
- The decimal system is the most commonly used number system in everyday life,
consisting of ten digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9.
- Each position in a decimal number represents a power of 10.
- For example, the number 253 can be expressed as: 2×10^2 + 5×10^1 + 3×10^0 = 253
- The hexadecimal system uses sixteen symbols: 0-9 and A-F (where A=10, B=11,
C=12, D=13, E=14, F=15).
- Decimal to Binary: For the integer part, divide repeatedly by 2 and read the remainders in reverse order; for the fractional part, multiply the fraction by 2, take the integer part as the next binary digit, and repeat with the remaining fraction.
- Decimal to Octal: Divide the integer part repeatedly by 8 (reading the remainders in reverse); for the fractional part, multiply the fraction by 8, take the integer part as the next octal digit, and repeat.
- Decimal to Hexadecimal: Divide the integer part repeatedly by 16 (reading the remainders in reverse); for the fractional part, multiply the fraction by 16, take the integer part as the next hexadecimal digit, and repeat (see the code sketch after this list).
- Binary to Octal: Group binary digits in sets of three (from right to left) and convert
each group.
- Binary to Hexadecimal: Group binary digits in sets of four and convert each group.
- Octal to Binary: Convert each octal digit directly into three binary digits.
- Hexadecimal to Binary: Convert each hexadecimal digit directly into four binary
digits.
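As a hedged illustration of these conversions (assuming Python; the helper fraction_to_base is our own), the built-in bin/oct/hex and int(text, base) functions handle whole numbers, while a short loop applies the multiply-by-the-base rule described above to a decimal fraction:

# Whole-number conversions using Python built-ins.
n = 253
print(bin(n), oct(n), hex(n))                        # 0b11111101 0o375 0xfd
print(int("1011", 2), int("25", 8), int("1A", 16))   # 11 21 26

# Grouping bits in fours gives the hexadecimal digits directly:
print(hex(int("11111101", 2)))    # 0xfd  (1111 1101 -> F D)

def fraction_to_base(fraction, base, digits=8):
    """Convert a decimal fraction (0 <= fraction < 1) to the given base
    by repeatedly multiplying by the base and taking the integer part."""
    out = []
    for _ in range(digits):
        fraction *= base
        integer_part = int(fraction)
        out.append("0123456789ABCDEF"[integer_part])
        fraction -= integer_part
        if fraction == 0:
            break
    return "".join(out)

print(fraction_to_base(0.625, 2))    # 101  (0.625 = 0.101 in binary)
print(fraction_to_base(0.625, 16))   # A    (0.625 = 0.A in hexadecimal)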
Revision Questions:
1. Binary to Decimal
2. Decimal to Binary
3. Binary to Octal
4. Octal to Binary
5. Binary to Hexadecimal
6. Hexadecimal to Binary
7. Decimal to Hexadecimal
8. Hexadecimal to Decimal
9. Octal to Decimal
Binary Arithmetic
Binary arithmetic relies on a set of basic rules for addition, subtraction, multiplication, and
division, which are quite similar to those in decimal but adapted for the binary system (only 0
and 1 are used).
1. Binary Addition
Binary addition follows simple rules, with "carrying" occurring when the sum is 2 or more:
0+0=0
0+1=1
1+0=1
1 + 1 = 10 (write 0, carry 1; note that in the decimal system 1 + 1 = 2, and 2 in binary is 10)
111 (Carry)
1011
+ 1101
----------
11000
Step-by-step (rightmost column first):
1. 1 + 1 = 0, carry 1
2. 1 + 0 + 1 (carry) = 0, carry 1
3. 0 + 1 + 1 (carry) = 0, carry 1
4. 1 + 1 + 1 (carry) = 1, carry 1
5. The final carry becomes the leftmost bit, giving 11000 (11 + 13 = 24 in decimal).
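The same addition can be checked in Python; the bit-by-bit helper below is a sketch of the manual carry procedure, not part of the original notes:

a, b = "1011", "1101"

# Quick check using Python's built-in conversions.
print(bin(int(a, 2) + int(b, 2)))        # 0b11000  (11 + 13 = 24)

# Bit-by-bit addition with an explicit carry, mirroring the manual steps above.
def add_binary(x, y):
    width = max(len(x), len(y))
    x, y = x.zfill(width), y.zfill(width)
    carry, result = 0, []
    for bit_x, bit_y in zip(reversed(x), reversed(y)):
        column_sum = int(bit_x) + int(bit_y) + carry
        result.append(str(column_sum % 2))   # digit written in this column
        carry = column_sum // 2              # carry into the next column
    if carry:
        result.append("1")
    return "".join(reversed(result))

print(add_binary(a, b))                  # 11000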
2. Binary Subtraction
Binary subtraction is similar to decimal subtraction but uses "borrowing": when a column's result would be negative, a 10 (2 in decimal) is borrowed from the next column to the left.
0 - 0 = 0
1 - 0 = 1
1 - 1 = 0
0 - 1 = 1 (borrow 1 from the next column)
Worked example: subtract 0111 from 1010.
  1010
- 0111
--------
  0011

Step-by-step (rightmost column first):
1. Rightmost column: 0 - 1. Since 0 is less than 1, borrow 1 from the next column to the left; the digit becomes 10 (2 in decimal), and 10 - 1 = 1. Result for this column: 1, with a borrow owed to the next column.
2. Second column: 1 - 1 - 1 (borrow) would be negative, so borrow again; (10 + 1) - 1 - 1 = 1. Result: 1, with another borrow.
3. Third column: 0 - 1 - 1 (borrow) would be negative, so borrow once more; (10 + 0) - 1 - 1 = 0. Result: 0, with a borrow.
4. Leftmost column: 1 - 0 - 1 (borrow) = 0. Result: 0.

So, 1010 - 0111 = 0011 in binary (which is equal to 10 - 7 = 3 in decimal).
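The borrowing procedure can likewise be sketched in Python (an illustrative helper, assuming inputs with x >= y):

def subtract_binary(x, y):
    """Subtract y from x (x >= y), column by column with a borrow flag."""
    width = max(len(x), len(y))
    x, y = x.zfill(width), y.zfill(width)
    borrow, result = 0, []
    for bit_x, bit_y in zip(reversed(x), reversed(y)):
        diff = int(bit_x) - int(bit_y) - borrow
        if diff < 0:
            diff += 2        # borrow 10 (2 in decimal) from the next column
            borrow = 1
        else:
            borrow = 0
        result.append(str(diff))
    return "".join(reversed(result))

print(subtract_binary("1010", "0111"))   # 0011  (10 - 7 = 3)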
3. Binary Multiplication
0×0=0
0×1=0
1×0=0
1×1=1
Binary multiplication is similar to decimal multiplication but easier since each digit is either 0
or 1:
- Multiply each bit of the first number by each bit of the second number.
- Shift left for each position of the digit in the multiplier (similar to multiplying by
powers of ten in decimal).
101
x 11
---------
101 (101 * 1)
+ 1010 (101 * 1, shifted left by 1)
---------
1111
Step-by-step:
1. Multiply the first bit of the bottom number by the top number: 101×1=101
2. Multiply the second bit of the bottom number by the top number and shift left by one
position: 101×1=101
3. Add the two results together:
101
+ 1010
--------
1111
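The shift-and-add idea translates into a short Python sketch (the helper name multiply_binary is our own):

def multiply_binary(x, y):
    """Multiply two binary strings using shift-and-add partial products."""
    product = 0
    for position, bit in enumerate(reversed(y)):
        if bit == "1":
            # Each 1-bit of the multiplier contributes the multiplicand
            # shifted left by that bit's position.
            product += int(x, 2) << position
    return bin(product)[2:]

print(multiply_binary("101", "11"))   # 1111  (5 * 3 = 15)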
4. Binary Division
Binary division uses the long-division method. Example: divide 101010 (42 in decimal) by 110 (6 in decimal).
1. Align the divisor under the leftmost bits of the dividend that could contain it. The first three bits of the dividend are 101; since 110 is larger than 101, it does not fit, so place a 0 in the quotient.
2. Bring down the next bit to form 1010. 110 fits, so place a 1 in the quotient and subtract:
1010
- 110
-------
100
3. Bring down the next bit to form 1001. 110 fits again, so place a 1 in the quotient and subtract:
1001
- 110
-------
011
4. Bring down the last bit to form 0110. 110 fits exactly, so place a final 1 in the quotient; the remainder is 0.
Final Answer
101010 ÷ 110 = 111 with remainder 0 (101010 in decimal is 42, 110 in decimal is 6, and 42 ÷ 6 = 7).
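A minimal Python sketch of the same long-division procedure (an assumed helper returning the quotient and remainder as binary strings):

def divide_binary(dividend, divisor):
    """Long division on binary strings; returns (quotient, remainder)."""
    quotient, remainder = "", 0
    d = int(divisor, 2)
    for bit in dividend:
        remainder = (remainder << 1) | int(bit)   # bring down the next bit
        if remainder >= d:
            quotient += "1"
            remainder -= d
        else:
            quotient += "0"
    return quotient.lstrip("0") or "0", bin(remainder)[2:]

print(divide_binary("101010", "110"))   # ('111', '0')  i.e. 42 / 6 = 7, remainder 0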
Summary of Operations
Binary Addition: Align and add, carrying over when sums exceed 1.
Binary Subtraction: Borrow when necessary.
Binary Multiplication: Multiply like decimal, shifting for each bit.
Binary Division: Use long division methods, subtracting the divisor repeatedly.
These methods are essential for performing arithmetic operations in digital electronics and
programming, as all data in computers is ultimately represented in binary form.
Revision Questions:
Binary Addition
1. Question 1: What is the result of adding the binary numbers 1101 and 1011?
2. Question 2: Add the binary numbers 0110 and 1110 and explain any carries involved.
3. Question 3: Calculate the sum of the binary numbers 10101 and 00111.
Binary Subtraction
Binary Multiplication
1. Question 1: What is the product of the binary numbers 101 and 11?
2. Question 2: Multiply the binary numbers 110 and 101. Show your working steps.
3. Question 3: Calculate the product of 111 and 10 in binary.
Binary Division
1. Question 1: Divide the binary number 1100 by 10 and state the quotient and
remainder.
In binary systems, numbers are represented using bits (binary digits), and depending on how these bits are interpreted, they can represent either non-negative numbers only (unsigned) or both positive and negative numbers (signed).
Unsigned Numbers
- Definition: Unsigned numbers use all the bits to represent non-negative values,
meaning they only represent positive integers and zero.
- Range: For an n-bit unsigned number, the range is from 0 to 2^n - 1.
- Example: An unsigned 4-bit number can represent values from 0 to 15:
- 0000 in binary = 0 in decimal
- 1111 in binary = 15 in decimal
Use Cases:
- Unsigned numbers are used when you know that negative values are not needed, such as in addressing memory locations or representing counts of objects.
Signed Numbers
- Definition: Signed numbers allow representation of both positive and negative values.
This is typically done using a method called two's complement, which simplifies
binary arithmetic operations like addition and subtraction.
- Range: For an n-bit signed number, the range is from -2^(n-1) to 2^(n-1) - 1.
- Example: A signed 4-bit number can represent values from -8 to 7:
- 1000 in binary = -8 in decimal (two's complement representation of negative values)
- 0111 in binary = 7 in decimal
- Two's complement is the most widely used method for representing signed integers
in binary. It simplifies the hardware required for arithmetic operations by making
negative numbers easier to work with.
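As a quick illustration (a Python sketch, not from the notes), the unsigned and signed ranges for a given bit width can be computed directly, and the same bit pattern can be read both ways:

def ranges(n_bits):
    unsigned = (0, 2 ** n_bits - 1)
    signed = (-(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1)
    return unsigned, signed

print(ranges(4))    # ((0, 15), (-8, 7))
print(ranges(8))    # ((0, 255), (-128, 127))

def as_signed(bits):
    """Interpret a bit string as a two's complement signed integer."""
    value = int(bits, 2)
    if bits[0] == "1":                 # MSB set -> negative number
        value -= 2 ** len(bits)
    return value

print(int("1000", 2), as_signed("1000"))   # 8 as unsigned, -8 as signed
print(int("0111", 2), as_signed("0111"))   # 7 in both interpretations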
In binary, a complement is used to change the sign of a number or to prepare it for certain
arithmetic operations. There are two types of complements commonly used: one's
complement and two's complement.
One's Complement
- Definition: The one's complement of a binary number is formed by inverting all the
bits (changing 0s to 1s and 1s to 0s).
- How to calculate: Simply flip all the bits of the number.
Example:
- For a 4-bit number, 1010 (10 in decimal) has a one's complement of 0101.
Limitations:
- One's complement has two representations for zero (0000 for +0 and 1111 for -0),
which can lead to complications in arithmetic.
Two's Complement
- Definition: The two's complement of a binary number is formed by inverting all the
bits (one’s complement) and then adding 1 to the least significant bit.
For an n-bit signed number in two's complement, the range is from -2^(n-1) to 2^(n-1) - 1.
Negative Range: The most significant bit (MSB) in two's complement represents the sign (0 for positive, 1 for negative). With n bits, the smallest possible value, obtained when the MSB is 1 and all other bits are 0, is -2^(n-1).
Positive Range: The largest value is obtained when the MSB is 0 and all other bits are set to 1, which gives 2^(n-1) - 1.
- How to calculate:
1. Take the one's complement of the number.
2. Add 1 to the result.
Example: for the 4-bit number 0101 (+5), the one's complement is 1010; adding 1 gives 1011, which is the two's complement representation of -5.
Advantages:
- Two's complement simplifies arithmetic operations and is widely used in computer
systems.
- There is only one representation for zero, which eliminates the confusion seen in
one's complement.
- Subtraction can be done by simply adding the two's complement of a number (i.e., no
separate subtraction operation is needed).
Use Case:
- Two's complement is the standard method for representing signed integers in modern
computers due to its efficiency in arithmetic operations.
Addition: Numbers in two's complement are added exactly like unsigned binary numbers; any carry out of the most significant bit is discarded.
Subtraction:
Subtraction in two's complement can be performed by adding the two's complement of the
number to be subtracted.
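For example (a sketch assuming an 8-bit word), 5 - 3 can be computed by adding the two's complement of 3 and discarding the carry out of the 8-bit width:

BITS = 8
MASK = (1 << BITS) - 1            # 0xFF for an 8-bit word

def twos_complement(value):
    """Two's complement of an 8-bit value: invert the bits, then add 1."""
    return (~value + 1) & MASK

a, b = 5, 3
result = (a + twos_complement(b)) & MASK   # the & MASK discards the carry out
print(result, format(result, "08b"))       # 2 00000010  (5 - 3 = 2)

# A negative result simply comes out as a two's complement bit pattern:
result = (b + twos_complement(a)) & MASK
print(format(result, "08b"))               # 11111110, i.e. -2 in 8-bit two's complement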
Summary
Both one’s and two’s complement methods are essential for computer systems to handle
arithmetic operations efficiently, especially when dealing with negative values.
Sign Magnitude
The sign-magnitude method is a way to represent signed binary numbers. In this system, the
most significant bit (MSB) represents the sign of the number: a 0 indicates a positive value,
and a 1 indicates a negative value. The remaining bits represent the magnitude (absolute
value) of the number in binary.
Example
0110 represents +6
1110 represents -6
This approach is simple but less commonly used for computations, as other systems like
two's complement offer advantages in handling arithmetic operations directly.
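A small Python sketch (our own, using 4-bit values) of encoding and decoding sign-magnitude numbers:

def to_sign_magnitude(value, bits=4):
    """Encode an integer as a sign-magnitude bit string (MSB = sign)."""
    sign = "1" if value < 0 else "0"
    return sign + format(abs(value), f"0{bits - 1}b")

def from_sign_magnitude(pattern):
    magnitude = int(pattern[1:], 2)
    return -magnitude if pattern[0] == "1" else magnitude

print(to_sign_magnitude(6), to_sign_magnitude(-6))   # 0110 1110
print(from_sign_magnitude("1110"))                   # -6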
Revision Questions:
Sign Magnitude
1's Complement
2's Complement
Overflow in the binary number system occurs when a calculation produces a result that is
too large to be represented with the fixed number of bits allocated for a particular binary
value. This is a common issue in computing, particularly when adding or subtracting binary
numbers in fixed-length registers or memory locations. Overflow typically arises in both
unsigned and signed binary number representations (e.g., two's complement).
Types of Overflow:
- In unsigned binary numbers, all bits represent magnitude, and overflow occurs when the true result needs more bits than the register provides. Example (4-bit system): adding 1111 (15) and 0001 (1) gives 10000 (16), which needs 5 bits; only the lower 4 bits would be stored (0000), which gives an incorrect result of 0.
- In signed binary numbers using two's complement, the leftmost bit represents the sign
of the number (0 for positive, 1 for negative). The remaining bits represent the
magnitude of the number.
- Overflow in two's complement occurs when the result of an operation is too large (or too small) to be represented within the given bit width, resulting in an incorrect sign.
- Example (4-bit system): 0111 represents +7 and 0001 represents +1. Adding them gives 1000, which in two's complement represents -8, not +8.
- Overflow in signed numbers can be detected by checking the carry into and carry
out of the most significant bit (MSB). If these carries are different, overflow has
occurred.
Detecting Overflow:
In unsigned binary addition, overflow occurs if there is a carry out from the MSB.
In signed addition, overflow is detected by comparing the sign of the operands and
the result. Specifically, overflow occurs if:
The sum of two positive numbers results in a negative number.
The sum of two negative numbers results in a positive number.
Practical Example:
Suppose we have an 8-bit unsigned number system (0 to 255) and try to add 240 and 20. The true sum is 260, which exceeds 255, so overflow occurs: only the lower 8 bits are kept, and the stored result wraps around to 4 (260 - 256).
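The wrap-around can be reproduced in Python by masking the true sum to the register width (a sketch; fixed-width hardware registers do this implicitly):

MASK = 0xFF                      # keep only the lower 8 bits

a, b = 240, 20
true_sum = a + b                 # 260, which does not fit in 8 bits
stored = true_sum & MASK         # what an 8-bit register would actually hold
overflowed = true_sum > MASK     # unsigned overflow: carry out of the MSB

print(true_sum, stored, overflowed)   # 260 4 True

# Signed (two's complement) overflow example: +7 + +1 in 4 bits.
x, y = 0b0111, 0b0001
s = (x + y) & 0xF                     # 0b1000
print(format(s, "04b"))               # 1000, which reads as -8, not +8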
In the binary number system, underflow occurs when a calculation produces a result smaller
than the smallest value that the system can represent with the given precision. This is
particularly relevant in floating-point arithmetic where a result’s absolute magnitude (its size
regardless of sign) is too close to zero for the available number of bits.
2. Impact of Underflow
- When underflow occurs, the system may round the result to zero (or, in some systems, to
a denormalized small value if denormalization is supported).
- This can lead to loss of precision in calculations, potentially causing inaccuracies in
scientific computing, machine learning, and other fields where precise values are critical.
- While underflow is mainly associated with floating-point arithmetic, it can also occur in
integer and fixed-point representations if a number rounds down to zero or loses
accuracy due to a limited range of representation, though this is less common than in
floating-point systems.
4. Handling Underflow
Example:
Imagine you are working in a computer system that represents floating-point numbers using a limited range of exponents, such as IEEE 754 single precision, where the smallest positive normalized value is 2^-126 (about 1.2 × 10^-38). In this system, any computed result whose magnitude falls below that threshold cannot be represented as a normalized number and is either rounded to zero or stored as a denormalized value.
This underflow issue is common in financial calculations, scientific modelling, and simulations
where precise, very small numbers are necessary. Techniques like denormalization (which
allows representing values just above zero with leading zeros) or error handling can help
manage the effects of underflow.
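Underflow is easy to demonstrate in Python, which uses IEEE 754 double precision for its floats; the thresholds printed below therefore apply to doubles rather than single precision:

import sys

tiny = sys.float_info.min
print(tiny)                      # smallest positive *normalized* double, about 2.2e-308

print(tiny / 2)                  # still representable as a denormalized (subnormal) value
print(tiny * 1e-20)              # underflows all the way to 0.0
print(1e-200 * 1e-200)           # 0.0: the true result 1e-400 is below the representable range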
In computing, data is stored and measured in specific units. Understanding these units
helps in comprehending how computers process, store, and manage information. This
section explores the fundamental and higher units of data storage and how they are
managed.
1. Bits, Nibble, and Bytes: Basic Units of Storage
Bit
- Definition: A bit is the most basic unit of data storage in computing. It stands
for binary digit and can hold one of two values: 0 or 1. The word "bit" is
derived from the combination of two words: "binary" and "digit." It
represents the smallest unit of data in computing and digital communications.
American mathematician and statistician John Tukey coined the term "bit" in 1947 while working at Bell Labs, and Claude Shannon popularized it the following year in his foundational paper on information theory. The term quickly became standard in computer science and information theory as the basic unit of information in binary systems.
- 0 represents an "off" state.
- 1 represents an "on" state.
- Binary System: Computers use the binary system (base-2) to store and
process data, where everything is represented as combinations of 0s and 1s.
- History: The encoding of data by discrete bits was used in the punched cards
invented by Basile Bouchon and Jean-Baptiste Falcon (1732), developed by
Joseph Marie Jacquard (1804), and later adopted by Semyon Korsakov,
Charles Babbage, Herman Hollerith, and early computer manufacturers like
IBM. A variant of that idea was the perforated paper tape. In all those systems,
the medium (card or tape) conceptually carried an array of hole positions; each
position could be either punched through or not, thus carrying one bit of
information. The encoding of text by bits was also used in Morse code (1844)
and early digital communications machines such as teletypes and stock ticker
machines (1870).
In data representation, various terms like bit, byte, nibble, and others refer to specific units
of information and the number of bits they hold. Here's a detailed explanation of each term:
2. Nibble: a group of 4 bits (half a byte).
3. Byte: a group of 8 bits; the standard unit in which data sizes are measured.
4. Character: typically 8 bits (1 byte) in ASCII-based encodings.
5. Halfword: 16 bits (2 bytes), half of a word on many architectures.
6. Word
Definition: A unit of data that the CPU can process in a single operation.
Size: Typically 16 bits (2 bytes) or 32 bits (4 bytes), depending on the computer
architecture.
Usage: The term "word" can vary in size depending on the system architecture. In
older systems, it is usually 16 bits; in modern systems, it’s often 32 bits.
Summary Table:
Term Number of Bits Number of Bytes
Bit 1 bit -
Nibble 4 bits 0.5 byte
Byte 8 bits 1 byte
Character 8 bits 1 byte (in ASCII)
Halfword 16 bits 2 bytes
Word 16-32 bits 2-4 bytes
DWord 32-64 bits 4-8 bytes
QWord 64-128 bits 8-16 bytes
In modern systems, particularly 64-bit architectures, "words" and "double words" may refer
to larger values. These terms are flexible and often depend on the specific system architecture
being used.
Data storage is typically measured in larger units than bits and bytes, especially when dealing
with files, programs, and storage devices. As data size increases, so do the units used to
measure it.
Kilobytes (KB)
- Definition: A kilobyte (KB) consists of 1024 bytes (not 1000, due to the binary system used in computing, where 1 KB = 2^10 bytes).
- Common Use: Kilobytes are used to measure small files, such as simple text
documents or small icons.
- Example: A plain text document with approximately 1000 characters would
be roughly 1 KB.
Megabytes (MB)
Gigabytes (GB)
Terabytes (TB)
Comparison of Units
Unit Size in Bytes Use
Kilobyte 1024 bytes Text files, small documents
Megabyte 1024 KB = 1,048,576 bytes Images, songs, medium-sized files
Gigabyte 1024 MB = 1,073,741,824 bytes Videos, applications, large files
Terabyte 1024 GB = 1,099,511,627,776 bytes Large storage devices, cloud storage
Binary-Coded Decimal (BCD)
- Represents each digit of a decimal number with its binary equivalent. For example, the decimal number 45 would be represented as 0100 0101 in BCD.
ASCII
- A 7-bit character encoding standard that represents characters and control codes using numbers. For example, the letter "A" is represented as 65 in decimal or 01000001 in binary.
Unicode
- A universal character encoding standard that allows for the representation of text in
most of the world's writing systems. It uses variable-length encoding, often in UTF-8
format, which can accommodate a vast array of characters from different languages.
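A brief Python sketch (our own illustration) of the three coding schemes just described:

# BCD: each decimal digit becomes its own 4-bit group.
number = 45
bcd = " ".join(format(int(d), "04b") for d in str(number))
print(bcd)                                 # 0100 0101

# ASCII: characters map to small integers (7 bits).
print(ord("A"), format(ord("A"), "08b"))   # 65 01000001

# Unicode: code points beyond ASCII, encoded here as UTF-8 bytes.
print(ord("€"), "€".encode("utf-8"))       # 8364 b'\xe2\x82\xac'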
These foundational concepts of number systems, data conversions, binary arithmetic, and
data representation techniques are essential for understanding how data is stored, processed,
and manipulated in computer systems. Mastery of these concepts provides a solid base for
more advanced studies in computer science and data analysis.
In computing, data types define how data is stored, represented, and processed by
computers. Each data type has specific storage requirements and ways of representation.
Understanding data types is crucial for handling different kinds of data, such as text,
numbers, images, and multimedia content.
Characters
The American Standard Code for Information Interchange (ASCII) has a rich history
that began in the early 1960s. It was developed as a standardized way to represent text and
control characters in computers and communication equipment. Here's an overview of its
history and evolution:
ASCII was created in 1963 by a committee led by Robert W. Bemer, a computer scientist
known as the "father of ASCII." Prior to this, various computers and communication systems
used their own proprietary character sets, which made interoperability difficult. The aim of
ASCII was to provide a standard that all computers could use for encoding characters.
2. ASCII Structure
1967 Update: A few minor changes were made to ASCII in 1967, adding the
lowercase letters and making the assignment of certain punctuation marks more
consistent.
1970s-1980s: ASCII became the standard for computers and communication systems,
particularly with the rise of personal computers. Early systems like the PDP-11 and
Apple II relied heavily on ASCII.
8-bit Expansion (Extended ASCII): While ASCII was a 7-bit code, many systems
started using an 8-bit byte in the 1980s. This allowed for Extended ASCII, where the
additional 128 codes (128-255) could represent graphical characters, accented letters,
and more symbols. However, this extension was not standardized and varied by region
and language.
ASCII remains foundational to character encoding in computing but has largely been
supplanted by more modern encodings like Unicode, which can represent a far larger range
of characters (including international scripts and symbols) to accommodate the globalized
use of computers. Unicode incorporates ASCII as its first 128 characters for backward
compatibility, ensuring that any text encoded in ASCII will be readable in a Unicode system.
Pros of ASCII
Cons of ASCII
- Limited Range: ASCII's character set is limited to 128 symbols, which is insufficient
for representing non-English languages or special symbols.
- Incompatibility with Multilingual Data: ASCII couldn't accommodate languages
with accented characters, non-Latin alphabets, or the need for broader character
representation, leading to the rise of alternatives like Unicode.
Conclusion: ASCII laid the foundation for text encoding in the digital world, but its
limitations prompted the development of more comprehensive systems like Unicode, which
can handle the demands of modern computing. Nonetheless, ASCII remains a key part of
computing history, and its simplicity still finds use in many applications today.
For more information on ASCII and its influence on modern computing, you can explore resources such as the ASCII article on Wikipedia.
Unicode
By the late 1980s, the limitations of the ASCII encoding system and various extended
character sets became more apparent. ASCII, which was a 7-bit encoding, could represent
only 128 characters. Extended versions (like ISO 8859 and others) could encode more, but
they were still limited in terms of supporting multiple languages and special characters.
Different languages, such as Chinese, Japanese, and Arabic, required their own encoding
standards, creating compatibility issues when exchanging text across systems and languages.
The rise of global computing and the internet created a pressing need for a unified system that
could handle multiple scripts without ambiguity. This is where Unicode comes in.
The Unicode project began in 1987 when Joe Becker from Xerox, along with other engineers
from Apple and Xerox, started to address the need for a more comprehensive encoding
system. The goal was to create a universal character set that could support multiple
languages, symbols, and characters without requiring multiple incompatible encodings.
Unicode's early versions started small, but the encoding quickly expanded to accommodate
more scripts and languages. Over time, Unicode went from being just a useful tool to
becoming the global standard for character representation in software and the web.
- UTF-8 Encoding (1993): UTF-8, a variable-width encoding system for Unicode, was
developed in 1993 and became widely adopted due to its backward compatibility with
ASCII. It allowed systems to handle a wide variety of characters without drastically
increasing storage requirements for common English texts.
- Unicode and the Web: The growing importance of Unicode became particularly
evident in the mid-1990s, as the World Wide Web expanded. With websites serving
users from across the globe, it was necessary to represent all languages without
corruption or encoding errors.
- Unicode Standardization: Unicode quickly became the preferred encoding for most
major software platforms, including Microsoft Windows, macOS, and Unix-based
systems like Linux. It was also adopted as the underlying character encoding for
HTML, XML, and most programming languages.
- Emojis and Unicode: In the 2010s, Unicode gained even more visibility with its role
in standardizing emojis. These small pictographic characters became widely used
across mobile platforms, and Unicode made sure they were consistently represented
across devices and operating systems. Emojis are now part of Unicode updates, with
new ones being added each year.
4. Structure of Unicode
Unicode was initially designed as a 16-bit encoding, which provided space for 65,536 unique
characters. However, as more scripts were added, it became clear that even 16 bits were not
enough, especially with the addition of thousands of rare characters, ancient scripts, and
modern symbols like emojis. Today, Unicode uses 21 bits, which allows for over a million
possible characters (though fewer than 150,000 have been assigned).
UTF-8: A variable-length encoding that uses 1 to 4 bytes per character. It is the most
commonly used form, especially on the web.
UTF-16: A variable-length encoding using 2 or 4 bytes per character.
UTF-32: A fixed-length encoding that uses 4 bytes per character, but is less
commonly used due to its inefficient use of space.
a. UTF-8
Encoding: It's backwards-compatible with ASCII (the first 128 characters are identical) and can encode all possible Unicode characters by using additional bytes as needed.
Usage: UTF-8 is the most widely used encoding on the web today due to its efficiency
and compatibility.
Pros:
Efficient for representing ASCII characters (1 byte).
Widely adopted across platforms and the web.
Cons:
Variable-length encoding means that more memory is used for non-ASCII
characters.
b. UTF-16
c. UTF-32
Each UTF encoding is designed for specific use cases depending on memory efficiency, text
complexity, and performance requirements. UTF-8 is generally preferred for web and
international applications, while UTF-16 or UTF-32 may be used in specialized systems like
certain operating systems or databases.
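The practical difference between the encodings can be seen by encoding the same characters in Python and counting bytes (a sketch; explicit little-endian variants are used so no byte-order mark is added):

for text in ["A", "é", "€", "😀"]:
    utf8  = text.encode("utf-8")
    utf16 = text.encode("utf-16-le")   # explicit endianness, so no byte-order mark
    utf32 = text.encode("utf-32-le")
    print(f"{text!r}: UTF-8 {len(utf8)} byte(s), "
          f"UTF-16 {len(utf16)} byte(s), UTF-32 {len(utf32)} byte(s)")

# 'A'  -> 1, 2, 4 bytes   (ASCII characters stay 1 byte in UTF-8)
# 'é'  -> 2, 2, 4 bytes
# '€'  -> 3, 2, 4 bytes
# '😀' -> 4, 4, 4 bytes   (outside the Basic Multilingual Plane, so UTF-16 needs a surrogate pair)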
5. Modern-Day Adoption
Today, Unicode is the de facto standard for text encoding on most platforms, including:
- Operating Systems: Windows, macOS, Linux, and other systems fully support
Unicode.
- Programming Languages: Most modern programming languages (Python, Java, C#,
JavaScript) have built-in support for Unicode.
- Web: HTML5, CSS, XML, and JSON all use Unicode as their default character
encoding.
Pros of Unicode
Cons of Unicode
- Storage Overhead: Some forms of Unicode (such as UTF-16 or UTF-32) use more
space than older encodings, which can be inefficient for storage or processing,
especially in memory-limited systems.
- Complexity in Processing: Handling variable-length encodings like UTF-8 can make
text processing more complex compared to fixed-width systems.
- Backward Compatibility: Transitioning older systems that used regional encodings
to Unicode can be challenging and requires reworking the codebases and data.
Integers
- Definition: Integers are whole numbers (both positive and negative) without
decimal points (e.g., -10, 0, 25).
- Storage: Integers are typically stored using two's complement for both
positive and negative numbers. The number of bits used determines the range
of values.
- A 32-bit integer can store values from -2,147,483,648 to 2,147,483,647.
- A 64-bit integer can store larger numbers between -
9,223,372,036,854,775,808 and 9,223,372,036,854,775,807.
- Two's Complement:
- Negative numbers are stored by inverting the binary representation of their
absolute value and adding 1.
- Example:
+5 in binary (8 bits): 00000101
-5 in two’s complement (8 bits): 11111011
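The same 8-bit patterns can be checked in Python with a bit mask, and the fixed 32-bit range can be observed through the struct module (a sketch; struct.error is raised when a value does not fit the declared width):

import struct

# 8-bit two's complement patterns for +5 and -5.
print(format(5 & 0xFF, "08b"))    # 00000101
print(format(-5 & 0xFF, "08b"))   # 11111011

# 32-bit signed range: these pack fine...
struct.pack("<i", 2_147_483_647)
struct.pack("<i", -2_147_483_648)
# ...but one more overflows the 32-bit width and raises struct.error.
try:
    struct.pack("<i", 2_147_483_648)
except struct.error as exc:
    print("out of range for a 32-bit signed integer:", exc)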
Booleans
Boolean Operations
Application of Booleans