CSC201 Lesson 2

Computer Science notes

Lesson Two

In this lesson, we are going to study the following:

• Data representation in memory; stack and heap allocation; queues; trees.
• Implementation strategies for stacks, queues and trees.

DATA REPRESENTATION IN MEMORY

Computers cannot accept and process data and instructions directly in human language. All data types, be they numbers, letters, special symbols, sound or pictures, must first be converted into machine-readable (binary) form. It is therefore important to understand how a computer, together with its peripheral devices, handles data in its electronic circuits, on magnetic media and in optical devices. The way data is encoded in a computer is known as data representation.

Data representation in digital circuits


Electronic components, such as microprocessors, consist of millions of electronic circuits. The availability of a high voltage (on) in these circuits is interpreted as '1', while a low voltage (off) is interpreted as '0'. This concept is analogous to switching an electric circuit on and off: when the switch is closed, the high voltage in the circuit causes the bulb to light (the '1' state), but when the switch is open, the bulb goes off (the '0' state). This is the origin of data representation in digital computers using the binary number system.

Data representation on magnetic media


The presence of a magnetic field in one direction on magnetic media is interpreted as '1', while a field in the opposite direction is interpreted as '0'. Magnetic technology is mostly used on storage devices that are coated with special magnetic materials such as iron oxide. Data is written on the media by arranging the magnetic dipoles of some iron oxide particles to face in one direction and others in the opposite direction.

Data representation on optical media


In optical devices, the presence of light is interpreted as '1' while its absence is interpreted as '0'. For example, if the shiny surface of a CD-ROM is placed under a powerful microscope, the surface is observed to have very tiny holes called pits; the areas that do not have pits are called land. A laser beam reflected from the land is interpreted as '1', while a beam entering a pit is not reflected and is interpreted as '0'. The reflected pattern of light from the rotating disk falls on a receiving photoelectric detector that transforms the pattern into digital form.

Why the use of Binary System in Computer?

The complexity and diversity of natural language make it very difficult to build a system that understands it directly. It is much easier to construct electronic circuits based on two-state logic, that is, binary or on/off logic. Moreover, digital devices based on binary systems are more portable and reliable, and use less energy, than analog devices.

Bits, bytes, nibble and word


These terms are used widely in reference to computer memory and data size.

• Bit: the basic unit of data or information in digital computers; a binary digit that can take only one of two values, 0 or 1.
• Byte: a group of 8 bits used to represent a character. A byte is considered the basic unit for measuring memory size in a computer.
• Nibble: half a byte, i.e. a group of 4 bits.
• Word: two or more bytes. The term word length is used as the measure of the number of bits in each word. For example, a word can have a length of 16 bits, 32 bits, 64 bits, etc.

Representation of different data types.


Apart from numbers, letters and special symbols, computers also process complex types of data such as sound and pictures. However, these complex data types take up a lot of memory space and processor time when coded in binary form. This limitation necessitates the use of higher-base number systems (octal, hexadecimal, etc.) in computing to reduce long streams of binary digits to a manageable form, which helps to improve processing speed and optimize memory usage.

Number systems and their representation

A number system is a set of symbols used to represent values derived from a common base
or radix. In computing, there are two major number systems:

i. Decimal number system


ii. Binary number system

Other number systems, such as the octal and hexadecimal number systems, are derived from the binary system.

Decimal number system

The term decimal is derived from the Latin prefix deci, which means ten. The decimal number system has ten digits ranging from 0 to 9, so it is also called the base ten or denary number system. Decimal numbers are so common that they are usually written without a subscript, but where several number systems are in use they should be written with subscript 10, e.g. X10 (10 indicates the base or radix).

Binary Number System


The binary number system has radix or base 2 (e.g. X2) and uses only two digits, 1 and 0, to represent numbers. Unlike decimal numbers, where the place value increases in factors of ten, in the binary system the place values increase in factors of 2. Consider a binary number such as 10012. The rightmost digit has a place value of 1x2^0 while the leftmost has a place value of 1x2^3. Position values are assigned from the rightmost digit, starting from 0 up to n-1 for an n-digit number. Using the example 10012, the place values can be written as
1x2^3; 0x2^2; 0x2^1; 1x2^0

Octal number system


The octal number system consists of eight digits ranging from 0 to 7. The place value of octal numbers goes up in factors of eight from right to left. Octal numbers are written with base 8, e.g. 5248. The place values follow the same pattern as in the binary system: 5x8^2; 2x8^1; 4x8^0

Hexadecimal number system


This is a base 16 number system that consists of sixteen digits: 0-9 and the letters A-F, where A is equivalent to 10, B to 11, and so on up to F, which is equivalent to 15 in the base ten system. The place value of hexadecimal numbers goes up in factors of sixteen. A hexadecimal number can be denoted using 16 as a subscript or a capital letter H to the right of the number. For example, 85B can be written as 85B16 or 85BH.

CONVERSION FROM ONE NUMBER SYSTEM TO ANOTHER


We are going to consider the following conversion

• Converting between decimal and binary numbers.


• Converting octal numbers to decimal and binary form.
• Converting hexadecimal numbers to decimal and binary form.

Converting between decimal and binary numbers


Steps:
Continuously divide the integer part by 2 and note the remainder at each step.
Collect the remainders from bottom to top and present the number in base 2.
For example, convert 8510 to binary:

85 ÷ 2 = 42 R 1
42 ÷ 2 = 21 R 0
21 ÷ 2 = 10 R 1
10 ÷ 2 = 5 R 0
5 ÷ 2 = 2 R 1
2 ÷ 2 = 1 R 0
1 ÷ 2 = 0 R 1
8510 = 10101012
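The repeated-division procedure above can be sketched in Java. The class and method names here are illustrative, not part of the lesson:

```java
public class DecimalToBinary {
    // Continuously divide by 2, collecting remainders; reading the
    // remainders bottom-to-top is the same as reversing the digits.
    public static String toBinary(int n) {
        if (n == 0) return "0";
        StringBuilder bits = new StringBuilder();
        while (n > 0) {
            bits.append(n % 2); // the remainder is the next bit
            n /= 2;
        }
        return bits.reverse().toString();
    }

    public static void main(String[] args) {
        System.out.println(toBinary(85)); // prints 1010101
    }
}
```

The same loop works for any base: replacing 2 with 8 or 16 (and mapping digits above 9 to letters) gives octal or hexadecimal conversion.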

To convert from binary to denary/decimal number


Follow the steps below:

• First, write the place values starting from the right hand side.
• Write each digit under its place value.
• Multiply each digit by its corresponding place value.
• Add up the products. The answer will be the decimal number in base ten.

Convert 10101012 to base ten:

1x2^6 + 0x2^5 + 1x2^4 + 0x2^3 + 1x2^2 + 0x2^1 + 1x2^0
= 64 + 0 + 16 + 0 + 4 + 0 + 1
= 8510
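The place-value steps above can be expressed as a short Java sketch (names are illustrative):

```java
public class BinaryToDecimal {
    // Process digits left to right: each step doubles the running
    // value (shifting place values up) and adds the next digit.
    public static int toDecimal(String bits) {
        int value = 0;
        for (char c : bits.toCharArray()) {
            value = value * 2 + (c - '0');
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(toDecimal("1010101")); // prints 85
    }
}
```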

Conversion of fractions from decimal to binary and vice versa


The binary equivalent of the fractional part is obtained by:
• Continuously multiplying the fraction by 2 until nothing is left in the fractional part or a recurring pattern appears.
• From the products, reading the respective integer digits from the top downwards.
• Combining the two parts to get the binary equivalent.

Convert 0.12510 to binary


Solution
.125 x 2 = 0.250
.250 x 2 = 0.500
.500 x 2 = 1.000
Therefore 0.12510 = 0.0012
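The repeated-multiplication procedure can be sketched in Java. A maxBits parameter (an addition for illustration) caps the loop, since some fractions, like 0.056 below, never terminate:

```java
public class FractionToBinary {
    // Multiply the fraction by 2 repeatedly; the integer part of each
    // product (0 or 1) is the next bit after the radix point.
    public static String toBinary(double frac, int maxBits) {
        StringBuilder sb = new StringBuilder("0.");
        for (int i = 0; i < maxBits && frac > 0; i++) {
            frac *= 2;
            int bit = (int) frac; // integer digit of the product
            sb.append(bit);
            frac -= bit;          // keep only the fractional part
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toBinary(0.125, 8)); // prints 0.001
        System.out.println(toBinary(0.056, 8)); // prints 0.00001110
    }
}
```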
Convert 27.056 to binary form
Solution
Convert the integer and fractional parts separately, then combine them to get the binary equivalent.

Convert 27 to binary by continuous division by 2:
27 ÷ 2 = 13 R 1
13 ÷ 2 = 6 R 1
6 ÷ 2 = 3 R 0
3 ÷ 2 = 1 R 1
1 ÷ 2 = 0 R 1
2710 = 110112

Convert 0.056 to binary form:
.056 x 2 = 0.112
.112 x 2 = 0.224
.224 x 2 = 0.448
.448 x 2 = 0.896
.896 x 2 = 1.792
.792 x 2 = 1.584
.584 x 2 = 1.168
.168 x 2 = 0.336
0.05610 ≈ 0.000011102 (truncated to 8 bits)

Therefore 27.05610 ≈ 11011.000011102

Converting 11011.000011102 to decimal

Assign a position weight to each binary digit:

1x2^4 + 1x2^3 + 0x2^2 + 1x2^1 + 1x2^0 . 0x2^-1 + 0x2^-2 + 0x2^-3 + 0x2^-4 + 1x2^-5 + 1x2^-6 + 1x2^-7 + 0x2^-8
= 16 + 8 + 0 + 2 + 1 . 0 + 0 + 0 + 0 + 0.03125 + 0.015625 + 0.0078125
= 27.0546875 ≈ 27.056

11011.000011102 ≈ 27.05610 (the fractional part was truncated to 8 bits, so the conversion back is only approximate)

Converting Octal number to its binary equivalent.

Convert 5218 to its binary equivalent


Solution

Working from left to right, each octal digit is represented in binary using three digits (remember, 2^3 gives 8) and the groups are then combined to give the final binary equivalent. Therefore:

5 = 1012
2 = 0102
1 = 0012

Combining the three from left to right:

5 2 1
101 010 001

5218 = 1010100012

Converting binary numbers to hexadecimal numbers

▪ To convert binary numbers to their hexadecimal equivalents, simply group the digits of the binary number into groups of four from right to left, padding the leftmost group with zeros if necessary, e.g. 11000011001. The next step is to write the hexadecimal equivalent of each group, e.g.

0110, 0001, 1001

0110 - 6
0001 - 1
1001 - 9

The equivalent of 110000110012 is 619H or 61916
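The grouping steps above can be sketched in Java (the class name is illustrative). The left-pad, group-by-four logic mirrors the hand method:

```java
public class BinaryToHex {
    public static String toHex(String bits) {
        // Pad on the left so the length is a multiple of 4.
        int pad = (4 - bits.length() % 4) % 4;
        bits = "0".repeat(pad) + bits;
        StringBuilder hex = new StringBuilder();
        for (int i = 0; i < bits.length(); i += 4) {
            // Each group of 4 bits is one hexadecimal digit.
            int nibble = Integer.parseInt(bits.substring(i, i + 4), 2);
            hex.append(Integer.toHexString(nibble).toUpperCase());
        }
        return hex.toString();
    }

    public static void main(String[] args) {
        System.out.println(toHex("11000011001")); // prints 619
    }
}
```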

Converting hexadecimal numbers to decimal and binary numbers.

Converting hexadecimal numbers to decimal number

Steps for converting from hexadecimal number to base 10 equivalents:

• First, write the place values starting from the right hand side.
• If a digit is a letter such as ‘A’ write its decimal equivalent
• Multiply each hexadecimal digit with its corresponding place value and then add the
products

For example, convert 619H to base 10:

6x16^2 + 1x16^1 + 9x16^0
= 6x256 + 1x16 + 9x1
= 1536 + 16 + 9
= 156110
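The three steps above, including the letter-to-value mapping for A-F, can be sketched in Java (names are illustrative):

```java
public class HexToDecimal {
    public static int toDecimal(String hex) {
        int value = 0;
        for (char c : hex.toUpperCase().toCharArray()) {
            // Character.digit maps '0'..'9' to 0..9 and 'A'..'F' to 10..15.
            int digit = Character.digit(c, 16);
            value = value * 16 + digit; // shift place values up, add digit
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(toDecimal("619")); // prints 1561
        System.out.println(toDecimal("85B")); // prints 2139
    }
}
```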

Table 5: Summarized Conversion of Decimal, Octal and Hexadecimal to Binary equivalent


decimal Octal Hexadecimal Binary
0 0 0 0000
1 1 1 0001
2 2 2 0010
3 3 3 0011
4 4 4 0100
5 5 5 0101
6 6 6 0110
7 7 7 0111
8 10 8 1000
9 11 9 1001
10 12 A 1010
11 13 B 1011
12 14 C 1100
13 15 D 1101
14 16 E 1110
15 17 F 1111
Note: the binary equivalent of an octal digit uses 3 bits, while that of a hexadecimal digit uses 4 bits.

Representation of symbols using coding Scheme


In computing, a fixed number of bits is used to represent a piece of data, which could be a number, a character, or something else. An n-bit storage location can represent up to 2^n distinct entities. For example, a 3-bit memory location can hold one of eight binary patterns: 000, 001, 010, 011, 100, 101, 110 or 111.
The number of bits per character depends on the coding scheme used.
The most common coding schemes are:
i. Binary Coded Decimal (BCD),
ii. Extended Binary Coded Decimal Interchange Code (EBCDIC) and
iii. American Standard Code for Information Interchange (ASCII).

Binary Coded Decimal


This is a 4-bit code used to represent numeric data only. For example, a number like 9 can be
represented using Binary Coded Decimal as 10012 .
Binary Coded Decimal is mostly used in simple electronic devices like calculators and
microwaves for easy processing and displaying of individual numbers on their Liquid Crystal
Display (LCD) screens.

Standard Binary Coded Decimal is an enhanced format of Binary Coded Decimal. It is a 6-bit representation scheme which can represent up to 64 (2^6) characters, including non-numeric characters. The letter A can be represented as 1100012 using standard Binary Coded Decimal.

Extended Binary Coded Decimal Interchange Code (EBCDIC)

Extended Binary Coded Decimal Interchange Code (EBCDIC) is an 8-bit character-coding scheme used primarily on IBM computers. A total of 256 (2^8) characters can be coded using this scheme. For example, the representation of the letter A in EBCDIC is 110000012.

American standard code for information interchange (ASCII)


ASCII is a 7-bit code, which means that only 2^7 (128) characters can be represented. However, it has been extended to 8 bits to better utilize 8-bit computer memory organization (the 8th bit was originally used for parity checking in early computers). This 8-bit coding scheme is referred to as 8-bit ASCII and can represent up to 256 characters. The representation of the letter A in this scheme is 010000012.

Table: 8-bit ASCII Representation of Symbols.


Symbol Dec. Binary Symbol Dec. Binary Symbol Dec. Binary
! 33 00100001 0 48 00110000 W 87 01010111
“ 34 00100010 1 49 00110001 X 88 01011000
# 35 00100011 2 50 00110010 Y 89 01011001
$ 36 00100100 3 51 00110011 Z 90 01011010
% 37 00100101 4 52 00110100 a 97 01100001
& 38 00100110 5 53 00110101 b 98 01100010
‘ 39 00100111 6 54 00110110 c 99 01100011
( 40 00101000 7 55 00110111 d 100 01100100
) 41 00101001 8 56 00111000 e 101 01100101
* 42 00101010 9 57 00111001 f 102 01100110
+ 43 00101011 A 65 01000001 g 103 01100111
, 44 00101100 B 66 01000010 h 104 01101000
- 45 00101101 C 67 01000011 i 105 01101001
. 46 00101110 D 68 01000100 j 106 01101010
/ 47 00101111 E 69 01000101 k 107 01101011
: 58 00111010 F 70 01000110 l 108 01101100
; 59 00111011 G 71 01000111 m 109 01101101
< 60 00111100 H 72 01001000 n 110 01101110
= 61 00111101 I 73 01001001 o 111 01101111
> 62 00111110 J 74 01001010 p 112 01110000
? 63 00111111 K 75 01001011 q 113 01110001
@ 64 01000000 L 76 01001100 r 114 01110010
[ 91 01011011 M 77 01001101 s 115 01110011
\ 92 01011100 N 78 01001110 t 116 01110100
] 93 01011101 O 79 01001111 u 117 01110101
^ 94 01011110 P 80 01010000 v 118 01110110
_ 95 01011111 Q 81 01010001 w 119 01110111
` 96 01100000 R 82 01010010 x 120 01111000
{ 123 01111011 S 83 01010011 y 121 01111001
| 124 01111100 T 84 01010100 z 122 01111010
} 125 01111101 U 85 01010101
~ 126 01111110 V 86 01010110
Integer Representation in memory

Integers, as discussed earlier, are whole numbers, or fixed-point numbers with the radix (base) point fixed after the least significant bit. They contrast with real numbers, or floating-point numbers, where the position of the radix point varies. The representation and processing of integers in computers differ from those of floating-point numbers.

A fixed number of bits are used to represent an integer. The commonly-used bit-lengths for
integers are 8-bit, 16-bit, 32-bit or 64-bit. Besides bit-lengths, there are two representation
schemes for integers:
i. Unsigned Integers: represent zero and positive integers.
ii. Signed Integers: represent zero, positive and negative integers.

Representation of Signed integers


Signed integers can be represented with any of these schemes:
➢ Sign-magnitude representation (prefixing an extra sign bit to a binary number)
➢ 1's complement representation
➢ 2's complement representation

Signed Magnitude representation


In decimal numbers, a signed number has the prefix "+" for a positive number (e.g. +56) and "-" for a negative number (e.g. -56). In binary, a negative number may be represented by prefixing a '1' to the number, while a positive number may be represented by prefixing a '0'. For example, the 7-bit binary equivalent of 127 is 11111112. To show that it is positive, we add an extra bit (0) to the left of the number, i.e. (0)11111112; to indicate that it is negative, we add an extra bit (1), i.e. (1)11111112.
The problem with this method is that zero can be represented in two ways, i.e. (0)00000002 and (1)00000002, yet there is no such thing as positive or negative zero.

Ones (1's) Complement Representation


The term complement suggests two parts coming together for a completion or to form a whole. The idea of a complement is used to address the problem of signed numbers, i.e. positive and negative.
In decimal numbers (0 to 9), we speak of the nine's complement. For example, the nine's complement of 9 is 0, that of 5 is 4, and that of 3 is 6.
In binary numbers, the ones complement is the bitwise NOT (~) applied to the number. Bitwise NOT is a unary operator (it operates on only one operand) that performs logical negation on each bit. For example, the bitwise NOT of 11002 is 00112.
Note: 0s are negated to 1s while 1s are negated to 0s; it is simply the inverse of each bit.

Two's (2's) Complement Representation


Two's complement, equivalent to ten's complement in decimal numbers, is the most popular way of representing negative numbers in computer systems. The advantages of this method are:

➢ There is only one way of representing zero, unlike in the other two methods.
➢ Addition and subtraction can be performed directly on numbers that carry a sign bit, with no need for circuitry to examine the sign of an operand.

The two's complement of a number is obtained by first finding the ones complement and then adding 1. For example, to get the two's complement of 2510:

Steps:

i. Convert 2510 to its binary equivalent.
ii. Find its ones complement.
iii. Add 1 to the ones complement to get the two's complement.

2510 = 00110012

Bitwise NOT (0011001) = 1100110 (1's complement)

Two's complement = 11001102 + 12

= 11001112
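The two steps above (bitwise NOT, then add 1) can be demonstrated in Java. The width parameter and names are illustrative; the example uses a 7-bit width to match the worked example:

```java
public class TwosComplement {
    // Compute the two's complement of n within the given bit width.
    public static int twosComplement(int n, int bits) {
        int mask = (1 << bits) - 1;      // e.g. 7 bits -> 0b1111111
        int onesComp = (~n) & mask;      // step 1: bitwise NOT, truncated
        return (onesComp + 1) & mask;    // step 2: add 1
    }

    public static void main(String[] args) {
        int result = twosComplement(25, 7);
        System.out.println(Integer.toBinaryString(result)); // prints 1100111
    }
}
```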

Floating Point Number Representation

A floating-point number (or real number) can represent a very large value (e.g. 1x10^50) or a very small value (e.g. 1x10^-50). It is typically expressed in scientific notation, with a fraction (F) and an exponent (E) of a certain radix (r), in the form F x r^E.

The representation of a floating-point number is not unique. For example, the number 18.66 can be represented as 1.866x10^1, 0.1866x10^2, 0.01866x10^3, etc. The fractional part can be normalized: in the normalized form, there is only a single non-zero digit before the radix point. For example, the decimal number 523.4567 can be normalized as 5.234567x10^2, and the binary number 1010.1011 can be normalized as 1.0101011x2^3.
It is important to note that floating-point numbers suffer from loss of precision when represented with a fixed number of bits (e.g. 32-bit or 64-bit). This is because there are infinitely many real numbers (even within a small range, say 0.0 to 1.0), whereas an n-bit binary pattern can represent only a finite 2^n distinct numbers.
We cannot discuss floating-point representation in detail; it is beyond the scope of our study.

STACK AND HEAP ALLOCATION, QUEUES, TREES

STACK

A stack is a linear data structure in which elements are inserted and deleted from one end, called the Top of Stack (TOS). A stack follows the last in, first out (LIFO) discipline for inserting and removing elements. It is an abstract data type with two basic operations: insert (push) and delete (pop).
The push operation inserts an element at the top of the list, hiding the elements already in the list, and can also initialize the stack if it is empty. The pop operation deletes the data item at the top of the list.
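A minimal array-based sketch of these two operations in Java (the class name and fixed capacity are illustrative):

```java
import java.util.EmptyStackException;

public class ArrayStack {
    private final int[] items = new int[100]; // illustrative fixed capacity
    private int top = -1;                     // index of the Top of Stack (TOS)

    public void push(int x) {
        items[++top] = x;                     // insert at the top
    }

    public int pop() {
        if (top < 0) throw new EmptyStackException();
        return items[top--];                  // remove from the same end
    }

    public static void main(String[] args) {
        ArrayStack s = new ArrayStack();
        s.push(1); s.push(2); s.push(3);
        System.out.println(s.pop());          // prints 3 -- last in, first out
    }
}
```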

Figure 2.1: Stack (entry and exit are from the same point)

QUEUE

Figure 2.2: The Queue data structure


A queue is a linear data structure in which insertion takes place at one end of the list, known as the rear, and deletion of elements occurs at the other end, called the front. It is a sequential collection of data elements based on the first in, first out (FIFO) discipline, which means the first element inserted into a queue will be the first removed from it. Its method of operation is thus the inverse of the stack's last in, first out.

Below are queue operations:


1. Enqueue(): inserts an element at the rear of the list.
2. Dequeue(): deletes an item from the front of the list.
3. Peek(): returns the first element of the list without removing it.
4. IsFull(): indicates whether the list is full.
5. IsEmpty(): indicates whether the list is empty.
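These operations can be demonstrated with Java's ArrayDeque, a standard-library queue implementation (it grows dynamically, so there is no direct IsFull counterpart):

```java
import java.util.ArrayDeque;

public class QueueDemo {
    public static void main(String[] args) {
        ArrayDeque<Integer> q = new ArrayDeque<>();
        q.addLast(1);                      // Enqueue at the rear
        q.addLast(2);
        q.addLast(3);
        System.out.println(q.peekFirst()); // Peek: prints 1, not removed
        System.out.println(q.pollFirst()); // Dequeue from the front: prints 1
        System.out.println(q.isEmpty());   // IsEmpty: prints false
    }
}
```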

HEAP

A heap data structure is a special type of complete binary tree that satisfies the heap property, which arranges the elements in a specific order. Heaps are of two types: max heap and min heap.

In a max heap, the root node's value is always higher than or equal to the values of all its child nodes in the heap tree. In a min heap, the value of the root node is always smaller than or equal to those of the other elements of the heap: each child node's value is equal to or larger than its parent node's value. The min heap is thus the inverse of the max heap.

Figure 2.3: The Heap data structure

Operations in Heap include:


i. create(S): Create a new heap
ii. insert(S, x): Insert an element into a heap
iii. meld(S, S' ): Combine the contents of two heaps into one, destroying both
iv. delete(S, x): Delete an element from a heap
v. findmin(S): Get the element with minimum key in the heap
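Several of these operations can be demonstrated with Java's PriorityQueue, which is backed by a binary min-heap:

```java
import java.util.PriorityQueue;

public class MinHeapDemo {
    public static void main(String[] args) {
        PriorityQueue<Integer> heap = new PriorityQueue<>(); // create(S)
        heap.add(42);                    // insert(S, x)
        heap.add(7);
        heap.add(19);
        System.out.println(heap.peek()); // findmin(S): prints 7, the root
        heap.remove(19);                 // delete(S, x)
        System.out.println(heap.poll()); // remove the minimum: prints 7
    }
}
```

Note that meld(S, S') has no single-call counterpart here; PriorityQueue's addAll(S') copies rather than destructively combines.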

HASH TABLE

In computer science, a hash table or hash map is a non-linear data structure that uses a hash function to map identifying values, known as keys (e.g. a person's name), to their associated values (e.g. their telephone number). Thus, a hash table implements an associative array. The hash function transforms the key into the index (the hash) of an array element (the slot or bucket) where the corresponding value is to be sought. A key is a non-null value that is mapped or linked to an element. Hashing makes insertion and search operations on data elements simple and fast, regardless of the data's size.

Hash function

The hash table algorithm is built around an array of items; this array is often simply called the hash table. Hash table algorithms calculate an index based on the data item's key and the length of the array, and the index is used to find or insert the data into the array. This calculation is implemented by the hash function f:

index = f(key, arrayLength)

The hash function calculates an index into the array from the data key and arrayLength (the size of the array).
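One common form of f, sketched in Java, hashes the key and then reduces the result modulo the table size so it falls within the array bounds (the class name is illustrative):

```java
public class HashIndex {
    // index = f(key, arrayLength): hash the key, then take the
    // remainder modulo the table size; abs() keeps the index non-negative.
    public static int index(String key, int arrayLength) {
        return Math.abs(key.hashCode() % arrayLength);
    }

    public static void main(String[] args) {
        int slots = 1000;
        System.out.println(index("Mary", slots));
        System.out.println(index("Jerry", slots));
    }
}
```

The same key always hashes to the same slot, which is what makes later lookups possible; two different keys may collide on one slot, which real hash tables must also handle (e.g. by chaining).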

Figure 2.4: The Hash table (keys such as Jerry, Mary and Robert are hashed to indexes into an array of slots or buckets holding the records, here telephone numbers)

DICTIONARY

A dictionary is a data structure that holds data elements as a group of objects and is similar to a hash table, except that it is an ordered or sorted collection of data elements in key-value pairs. Each key is associated with a single value, and when we look up a key, the dictionary returns the key's associated value. For example: students = {'James': 25, 'Jasmine': 17, 'Rosy': 19, 'Margret': 24, 'Price': 28}.

Just as, given a word, one can find its definition in a dictionary, a telephone book is a sorted list of people's names, addresses and telephone numbers: knowing someone's name allows one to quickly find their telephone number and address.

Operations on Dictionary

This may include:

i. extract all keys or values
ii. check if a certain key exists
iii. get the value for a given key
iv. add, delete or modify a (key, value) pair
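These operations can be demonstrated with Java's TreeMap, which keeps its (key, value) pairs sorted by key, matching the "ordered dictionary" described above:

```java
import java.util.TreeMap;

public class DictionaryDemo {
    public static void main(String[] args) {
        TreeMap<String, Integer> students = new TreeMap<>();
        students.put("James", 25);   // add a (key, value) pair
        students.put("Jasmine", 17);
        students.put("Rosy", 19);
        System.out.println(students.get("James"));        // value for a key: 25
        System.out.println(students.containsKey("Rosy")); // key exists? true
        students.remove("Jasmine");                       // delete a pair
        System.out.println(students.keySet());            // sorted keys
    }
}
```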

Graphs

A graph is a non-linear data structure consisting of finite sets of vertices (nodes) and edges, used to create an illustrated representation of a set of objects, where each edge connects a pair of nodes. A graph can be directed or undirected. In a directed graph, each edge connects two nodes in only one direction; in an undirected graph, each edge connects its nodes in both directions, so it is also known as a bidirectional graph.

Figure 2.5: The Graph data structure

TREES

This is a non-linear data structure representing hierarchical data. It is a finite set of elements in which one node is called the root node and the remaining elements form subtrees. Every node of the tree maintains a parent-child relationship: a node can have multiple child nodes but only a single parent node. There are several types of trees, such as binary trees, binary search trees, expression trees, AVL trees and B-trees.

Figure 2.5: Tree data structure

Binary Trees
A binary tree is a structure in which each node has at most two nodes as its children, and each node has exactly one parent node. The top node in the tree is the only one without a parent node and is called the root of the tree. On the other hand, a node without children is called a leaf node.

Important properties of nodes in binary trees


A better understanding of these properties will help you make the most of this discussion on binary trees. The depth of a node is the number of nodes on the path that connects the root to that node, which is why the depth of the root node is 0. The height of a node is the number of nodes on the path that connects that node to its deepest descendant leaf, which is why the height of leaf nodes is 0. Depth is measured by starting from the root node and going down to the node in question; height is measured by starting at the node in question and journeying down to its farthest leaf. In both cases we start counting at 0. Some people measure height and depth from 1 rather than 0, which is not wrong; it is just a different convention. The maximum depth of any node is defined as the depth of the binary tree, and similarly the maximum height of any node is defined as the height of the binary tree. The height and depth of a binary tree are therefore always the same.

There are two basic representation of binary trees namely linked and sequential
representation. These are further discussed as follows.
Linked representation:
Binary trees in linked representation are stored in the memory as linked lists. These lists have
nodes that are not stored at adjacent or neighbouring memory locations and are linked to each
other through the parent-child relationship associated with trees. In this representation, each
node has three different parts namely:
• Pointer that points towards the right node
• Pointer that points towards the left node
• Data element

This is the more common representation. Every binary tree has a root pointer that points to the root node. When the root pointer points to null or 0, you are dealing with an empty binary tree. The right and left pointers of each node store the addresses of that node's right and left children.

Sequential representation:
Although it is simpler than linked representation, its inefficiency makes it a less preferred
binary tree representation of the two. The inefficiency lies in the amount of space it requires
for the storage of different tree elements. The sequential representation uses an array for the
storage of tree elements. The number of nodes a binary tree has defines the size of the array
being used. The root node of the binary tree lies at the array’s first index. The index at which
a particular node is stored will define the indices at which the right and left children of the
node will be stored. An empty tree has null or 0 as its first index.

Types of binary trees


There are four basic types of binary trees which are briefly discussed as follows:
i. Full binary trees:
Full binary trees are those binary trees whose nodes either have two children or none. In other
words, a binary tree becomes a full binary tree when apart from leaves, all its other nodes
have two children.

ii. Complete binary trees:


Complete binary trees are those that have all their different levels completely filled. The only
exception to this could be their last level, whose keys are predominantly on the left. A binary
heap is often taken as an example of a complete binary tree.

iii. Perfect binary trees:


Perfect binary trees are binary trees whose leaves are present at the same level and whose
internal nodes carry two children. A common example of a perfect binary tree is an ancestral
family tree.

iv. Pathological degenerate binary trees:


Degenerate trees are those binary trees whose internal nodes have one child. Their
performance levels are similar to linked lists.

Advantages of Binary Trees


• Binary search trees are ordered by design, which means that you do not have to make any special effort to order them: inserting an element places it into its correct location or position in the tree, so the arrangement of elements is maintained automatically.
• Operations such as search, insert and delete take about log(n) steps, where n is the total number of elements that the tree holds.
• Trees support applications such as peer-to-peer programming, network routing with higher bandwidth, cryptography and 3D games.
• A tree is an ideal way to store data in a hierarchical pattern, reflecting structural relationships that exist in the given dataset.
• Trees are a flexible way of holding and moving data, storing data in as many nodes as needed.
• For accessing data items, trees are faster than linked lists but slower than arrays.

Disadvantages of Binary Trees


• The shape of the tree depends on the order of insertions, and it can become degenerate.
• When inserting or searching for an element, the key of each visited node has to be compared with the key of the element to be inserted or found. Keys may be long, and the run time may increase considerably.

Figure 2.6: An Example of a Binary Tree Structure

An Example of Java Code for the Implementation of Binary Tree


// class to create nodes
class Node {
    int key;
    Node left, right;

    public Node(int item) {
        key = item;
        left = right = null;
    }
}

class BinaryTree {
    Node root;

    // in-order traversal: left subtree, then the node, then right subtree
    public void traverseTree(Node node) {
        if (node != null) {
            traverseTree(node.left);
            System.out.print(" " + node.key);
            traverseTree(node.right);
        }
    }

    public static void main(String[] args) {
        // create an object of BinaryTree
        BinaryTree tree = new BinaryTree();
        // create nodes of the tree
        tree.root = new Node(1);
        tree.root.left = new Node(2);
        tree.root.right = new Node(3);
        tree.root.left.left = new Node(4);
        System.out.print("\nBinary Tree: ");
        tree.traverseTree(tree.root);
    }
}

Output: Binary Tree: 4 2 1 3

Figure 2.7: Graphical representation of Output Result

Traversal Algorithms
When we work with graphs, there may be times when we wish to do something to each node in the graph exactly once. For instance, there may be a piece of information that needs to be distributed to all of the computers on a network: we want this information to get to each computer, and we do not want to give it to any computer twice. The same would be true if we were looking for information instead of distributing it. There are two main types of traversal algorithm for achieving this, namely depth-first and breadth-first. A depth-first traversal goes as far as possible down a path before considering another; a breadth-first traversal spreads out evenly in many directions.
Both methods are discussed in detail below. For these two traversal methods, we choose one node in the graph as the starting point. In our discussion, the phrase "visit the node" represents the action that needs to be done at each node; for instance, if we are searching, visiting the node means checking it for the information we want.
Depth-First Traversal Algorithm
In a depth-first traversal, we visit the starting node and then follow links through the
graph until we reach a dead end. In an undirected graph, a node is a dead end if all of the
nodes adjacent to it have already been visited. In a directed graph, a node with no outgoing
edges is also a dead end. When we reach a dead end, we back up along our path until we find
an unvisited adjacent node and then continue in that new direction. The process is complete
when we back up to the starting node and all the nodes adjacent to it have been visited.
Consider the graph in Figure 2.8.

Figure 2.8: Graphical Representation of traversal Algorithm

If we begin the depth-first traversal at node 1, we then visit, in order, the nodes 2, 3, 4, 7, 5
and 6 before we reach a dead end. We would then back up to node 7 to find that node 8 has
not been visited, but that immediately leads to a dead end. We next back up to node 4 and
find that node 9 has not been visited, but again we have an immediate dead end. We then
continue to back up until we reach the starting node, and because all nodes adjacent to it have
been visited, we are done. The recursive algorithm for depth-first traversal is as follows:

DepthFirstTraversal(G, v)
    G is the graph
    v is the current node
    Visit(v)
    Mark(v)
    For every edge vw in G Do
        If w is not marked Then
            DepthFirstTraversal(G, w)
        End If
    End For

This recursive algorithm relies on the system stack of the computer to keep track of where it
has been in the graph so that it can properly back up when it reaches dead ends. We could
create a similar non-recursive algorithm by using a stack data structure and pushing graph
vertices ourselves.

Advantages of Depth-first Traversal Algorithm


• Consumes less memory than a breadth-first traversal.
• Reaches distant nodes (far from the source vertex) in less time.

Disadvantages of Depth-first Traversal Algorithm


• May not find an optimal solution to the problem.
• May get trapped searching a useless path.

Breadth-First Traversal Algorithm


In a breadth-first traversal, we visit the starting node and then, on the first pass, visit all
of the nodes directly connected to it. In the second pass, we visit nodes that are two edges
"away" from the starting node. With each new pass, we visit nodes that are one more edge away
from the starting node.
Because there might be cycles in the graph, it is possible for a node to be on two paths of
different lengths from the starting node. Because we will visit that node for the first time
along the shortest path from the starting node, we will not need to consider it again. We will,
therefore, either need to keep a list of the nodes we have visited or we will need to use a
variable in the node to mark it as visited to prevent multiple visits. Consider again the graph
in Figure 2.8. If we begin our traversal at node 1, we will visit nodes 2 and 8 on the first pass.
On the second pass, we will visit nodes 3 and 7, (Even though 2 and 8 are also at the end of
paths of length 2, we will not return to them because they were visited in the first pass). On
the third pass, we visit nodes 4 and 5, and on the last pass we visit nodes 6 and 9.
Whereas the depth-first traversal depended on a stack, our breadth-first traversal is
based on a queue.

The algorithm for breadth-first traversal is as follows:


BreadthFirstTraversal(G, v)
    G is the graph
    v is the current node
    Visit(v)
    Mark(v)
    Enqueue(v)
    While the queue is not empty Do
        Dequeue(x)
        For every edge xw in G Do
            If w is not marked Then
                Visit(w)
                Mark(w)
                Enqueue(w)
            End If
        End For
    End While

This algorithm will add the root of the breadth-first traversal tree to the queue but then
immediately remove it. As it looks at the nodes that are adjacent to the root, they will be
added to the end of the queue. Once all of the nodes adjacent to the root have been visited, we
will return to the queue and get the first of those nodes. Notice that, because nodes are
added to the end of the queue, no node that is two edges away from the root will be
considered until all of the nodes one edge away have been taken off the queue and
processed.
Advantages of Breadth-first Traversal Algorithm
• Can be used to find the shortest path between vertices (in an unweighted graph).
• Always finds an optimal solution.
• There are no useless paths in BFS, since it searches level by level.
• Finds the closest goal in less time.

Disadvantages of Breadth-first Traversal Algorithm


• All of the connected vertices must be stored in memory, so it consumes more memory.

RUN-TIME STORAGE MANAGEMENT; POINTERS AND REFERENCES, LINKED STRUCTURES

Run-time Storage (Memory) Management

During every execution (run time) the CPU fetches a program's instructions and data from
memory; therefore, both the program and its data must reside in main memory. Modern
multiprogramming systems are capable of storing multiple programs together with their data
in main memory.
The main task of the memory management component of an operating system is to ensure
safe execution of programs by providing:
– Memory sharing
– Memory protection

Issues associated with Memory sharing


• Transparency: several processes may coexist in main memory, unaware of each
other, and run regardless of the number and location of other processes.
• Safety (or protection): processes must not interfere with or corrupt each other (nor the OS!).
• Efficiency: CPU and memory utilization must be preserved, memory must be fairly
allocated, and the cost of memory management must be kept low.
• Relocation: a program must be able to run in different memory locations.
Storage allocation
Information stored in main memory can be classified into:
• Program (code) and data (variables, constants)
• Read-only (code, constants) and read-write (variables)
• Address (e.g., pointers) or data (other variables); binding (when memory is allocated for
the object) may be static or dynamic.
The compiler, linker, loader and runtime libraries cooperate to manage this information.

Creating an executable code


For a program to be executed by the CPU, it must pass through several steps:
• Compiling—the compiler translates the source program and generates the object code.
• Linking—a linker combines the object code into a single self-sufficient executable code.
• Loading—a loader copies the executable code into memory. This may include run-time
linking with libraries.
• Execution—dynamic memory allocation.

Figure 2.9: From source program to executable code (compiler, object program, shared
libraries, execution)


Address binding (Relocation)
Address binding or relocation is the process of associating program instructions and data
(addresses) with physical memory addresses.
Types of Address binding
i. Static—locations are determined before execution, at compile time or load time.
• Compile time: the compiler or assembler translates symbolic addresses (e.g.,
variables) to absolute addresses.
• Load time: the compiler translates symbolic addresses to relative (relocatable)
addresses, and the loader translates these to absolute addresses.
ii. Dynamic—locations are determined during execution.
• Run time: the program retains its relative addresses; the absolute addresses are
generated by hardware.

Function of a Linker
As mentioned earlier, a linker combines the object code into a single self-sufficient
executable code. A compile-time linker collects (if possible) and puts together all the
required pieces to form the executable code.

Issues:
• Relocation: where to put the pieces.
• Cross-reference: where to find the pieces.
• Reorganization: creating a new memory layout for the combined pieces.

Figure 2.10: Function of a Linker

Functions of a loader
A loader places the executable code in main memory starting at a predetermined
location (base or start address).

Figure2.11: Loading of codes in main memory

Methods of Loading
This can be done in different ways depending on hardware architecture:
• Absolute loading: always loads programs into a designated memory location.
• Relocatable loading: allows loading programs in different memory locations.
• Dynamic (runtime) loading: loads functions when first called (if ever).

Linked-List
A linked list is a collection of data links known as nodes. Each node contains a data value
and the address of the next node. Unlike an array, the elements of a linked list need not be
stored in neighbouring memory locations; the list is simply a sequence of data nodes
connected through links. Each node consists of two items: a data part, where values are
stored, and a pointer, which indicates where the next node can be found. The linked list's
starting point is the head of the list, and the final node is the tail, whose next pointer
is NULL.

Head → 45 → 215 → 65 → NULL (Tail)

Figure 2.12: Linked list

With a linked list, we can store a list of values that can easily be grown, because values
may be stored in different parts of memory.

Figure 2.13: Storing of values in different locations in the memory.

We can link our list together by allocating, for each element, enough memory for
both the value we want to store, and the address of the next element:

By the way, NUL refers to \0, a character that ends a string, and NULL refers to an
address of all zeros, or a null pointer (as pointing nowhere).

Unlike arrays, random access to elements is not supported in a linked list, because the
location of each element is identified only by the links. We can no longer access an element
of the list by calculating its position in constant time; instead, we have to follow each
element's pointer, one at a time. And we need to allocate twice as much memory for each
element as we did before.
In code, we might create our own struct called node (like a node from a graph in
mathematics), and we need to store both an int and a pointer to the
next node called next:

typedef struct node
{
    int number;
    struct node *next;
}
node;
This struct can start with typedef struct node so that we can refer to a node inside
our struct.

We can build a linked list in code starting with our struct. First, we’ll want to
remember an empty list, so we can use the null pointer: node *list = NULL;.

To add an element, first thing is to allocate some memory for a node, and set its
values:
node *n = malloc(sizeof(node));
// We want to make sure malloc succeeded in getting memory for us:
if (n != NULL)
{
    // This is equivalent to (*n).number, where we first go to the node
    // pointed to by n, and then set the number property. In C, we can
    // also use this arrow notation:
    n->number = 2;
    // Then we need to store a pointer to the next node in our list, but
    // the new node won't point to anything (for now):
    n->next = NULL;
}

Now our list can point to this node: list = n;

To add to the list, we’ll create a new node the same way, perhaps with the value 4.
But now we need to update the pointer in our first node to point to it.
Since our list pointer points only to the first node (and we can’t be sure that the
list only has one node), we need to “follow the breadcrumbs” and follow each
node’s next pointer:
// Create temporary pointer to what list is pointing to
node *tmp = list;
// As long as the node has a next pointer ...
while (tmp->next != NULL)
{
    // ... set the temporary to the next node
    tmp = tmp->next;
}
// Now, tmp points to the last node in our list, and we can update its
// next pointer to point to our new node:
tmp->next = n;

If we want to insert a node to the front of our linked list, we would need to
carefully update our node to point to the one following it, before updating list.
Otherwise, we’ll lose the rest of our list:
// Here, we're inserting a node into the front of the list, so we want
// its next pointer to point to the original list, before pointing the
// list to n:
n->next = list;
list = n;

And to insert a node in the middle of our list, we can go through the list,
following each element one at a time, comparing its values, and changing
the next pointers carefully as well.
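That middle insertion can be sketched as a short function. This is a minimal sketch under our own assumptions: the list is kept in ascending order, and the helper name `insert_sorted` is ours, not from the text; the node struct matches the one used elsewhere in this section.

```c
#include <stdlib.h>

typedef struct node
{
    int number;
    struct node *next;
}
node;

/* Insert a new value into an ascending sorted list, walking the list
   and splicing the node in before the first larger value. Returns the
   (possibly new) head of the list. */
node *insert_sorted(node *list, int value)
{
    node *n = malloc(sizeof(node));
    if (n == NULL)
        return list;   /* out of memory: leave the list unchanged */
    n->number = value;

    /* New smallest value (or empty list): insert at the front */
    if (list == NULL || value < list->number)
    {
        n->next = list;
        return n;
    }

    /* Otherwise walk to the last node whose value is <= the new one */
    node *tmp = list;
    while (tmp->next != NULL && tmp->next->number <= value)
        tmp = tmp->next;

    /* Splice carefully: point the new node at the rest of the list
       first, then point the previous node at the new node */
    n->next = tmp->next;
    tmp->next = n;
    return list;
}
```

Note the ordering of the two splice assignments: setting `n->next` before `tmp->next` ensures we never lose the rest of the list, exactly the caution the text describes.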

With some volunteers on the stage, we simulate a list, with each volunteer acting
as the list variable or a node. As we insert nodes into the list, we need a
temporary pointer to follow the list, and make sure we don’t lose any parts of our
list. Our linked list only points to the first node in our list, so we can only look at
one node at a time, but we can dynamically allocate more memory as we need to
grow our list.

Now, even if our linked list is sorted, the running time of searching it will be O(n),
since we have to follow each node to check their values, and we don’t know where
the middle of our list will be.
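For illustration, that O(n) search might be sketched as follows; the function name `contains` is our own assumption, and the node struct matches the one used in this section.

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct node
{
    int number;
    struct node *next;
}
node;

/* Linear search: follow each node's next pointer until the value is
   found or the end of the list (NULL) is reached. Runs in O(n). */
bool contains(node *list, int value)
{
    for (node *tmp = list; tmp != NULL; tmp = tmp->next)
    {
        if (tmp->number == value)
            return true;
    }
    return false;
}
```

Even if the list is sorted, we cannot jump to its middle as binary search on an array would, so sortedness does not improve this bound.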

We can combine all of our snippets of code into a complete program:


#include <stdio.h>
#include <stdlib.h>

// Represents a node
typedef struct node
{
    int number;
    struct node *next;
}
node;

int main(void)
{
    // List of size 0, initially not pointing to anything
    node *list = NULL;

    // Add a number to the list
    node *n = malloc(sizeof(node));
    if (n == NULL)
    {
        return 1;
    }
    n->number = 1;
    n->next = NULL;
    // We create our first node, store the value 1 in it, and leave the
    // next pointer pointing to nothing. Then, our list variable can
    // point to it.
    list = n;

    // Add a number to the list
    n = malloc(sizeof(node));
    if (n == NULL)
    {
        return 1;
    }
    n->number = 2;
    n->next = NULL;
    // Now, we go to the first node that list points to, and set the
    // next pointer on it to point to our new node, adding it to the
    // end of the list:
    list->next = n;

    // Add a number to the list
    n = malloc(sizeof(node));
    if (n == NULL)
    {
        return 1;
    }
    n->number = 3;
    n->next = NULL;
    // We can follow multiple nodes with this syntax, using the next
    // pointer over and over, to add our third new node to the end of
    // the list:
    list->next->next = n;
    // Normally, though, we would want a loop and a temporary variable
    // to add a new node to our list.

    // Print the list
    // Here we can iterate over all the nodes in our list with a
    // temporary variable. First, we have a temporary pointer, tmp,
    // that points to the list. Then, our condition for continuing is
    // that tmp is not NULL, and finally, we update tmp to the next
    // pointer of itself.
    for (node *tmp = list; tmp != NULL; tmp = tmp->next)
    {
        // Within the node, we'll just print the number stored:
        printf("%i\n", tmp->number);
    }

    // Free the list
    // Since we're freeing each node as we go along, we'll use a while
    // loop and follow each node's next pointer before freeing it, but
    // we'll see this in more detail in Problem Set 5.
    while (list != NULL)
    {
        node *tmp = list->next;
        free(list);
        list = tmp;
    }
}
