
ASSIGNMENT 1 FRONT SHEET

Qualification BTEC Level 5 HND Diploma in Computing

Unit number and title Unit 19: Data Structures and Algorithms

Submission date 9/10/2023 Date Received 1st submission

Re-submission Date 9/10/2023 Date Received 2nd submission

Student Name LY NGUYEN TUAN KIET Student ID BC00045

Class IT05101 Assessor name TRAN VAN NHUOM

Student declaration

I certify that the assignment submission is entirely my own work and I fully understand the consequences of plagiarism. I understand that
making a false declaration is a form of malpractice.

Student’s signature KIET

Grading grid

P1 P2 P3 M1 M2 M3 D1 D2

 Summative Feedback:  Resubmission
Feedback:

Grade: Assessor Signature: Date:

Internal Verifier’s Comments:

IV Signature:

Table of Contents

Table of Images ................................................................................................................................................ 4

INTRODUCTION .............................................................................................................................................. 7

I. Create a design specification for data structures, explaining the valid operations that can be carried out
on the structures (P1). ....................................................................................................................................8

II. Determine the operations of a memory stack and how it is used to implement function calls in a
computer (P2). ..............................................................................................................................................24

III. Using an imperative definition, specify the abstract data type for a software stack (P3). .....................29

IV. Illustrate, with an example, a concrete data structure for a First in First out (FIFO) queue (M1). ........ 33

V. Compare the performance of two sorting algorithms (M2). ...................................................................40

VI. Examine the advantages of encapsulation and information hiding when using an ADT. Advantages of
encapsulation(M3). ...................................................................................................................................... 54

VII. Analyse the operation, using illustrations, of two network shortest path algorithms, providing an
example of each (D1). .................................................................................................................................. 62

Table of Images

Image 1 : Data Structure ........................................................................................................................ 9

Image 2 : Stack ..................................................................................................................................... 10

Image 3 : Example code of stack .......................................................................................................... 10

Image 4 : Output of the code ...............................................................................................................11

Image 5 : Queue ................................................................................................................................... 12

Image 6 : Example code of Queue ........................................................................................................13

Image 7 : Output of the code ...............................................................................................................13

Image 8 : Linked list ............................................................................................................................14

Image 9 : Binary Search Tree ................................................................................................................16

Image 10 : Tree Data Structure ............................................................................................................ 17

Image 11 : example .............................................................................................................................. 18

Image 12 : example .............................................................................................................................. 19

Image 13 : example .............................................................................................................................. 20

Image 14 : Insert 65 ..............................................................................................................................21

Image 15 : Insert 10 ..............................................................................................................................21

Image 16 : Insert 40 ..............................................................................................................................22

Image 17 : Insert 5 ................................................................................................................................22

Image 18 : Insert 82 ..............................................................................................................................22

Image 19 : Insert 75 ..............................................................................................................................23

Image 20 : Insert 90 ..............................................................................................................................23

Image 21 : Stack Memory .....................................................................................................................25

Image 22 : Push Operation ...................................................................................................................26

Image 23 : Pop Operation .................................................................................................................... 27

Image 24 : Memory Unit ...................................................................................................................... 28

Image 25 : Formal specifications ..........................................................................................................29

Image 26 : FIFO Queue ......................................................................................................................... 33

Image 27 : queue Enqueue ................................................................................................................... 36

Image 28 : queue dequeue ................................................................................................................... 37

Image 29 : Example .............................................................................................................................. 39

Image 30 : sort list ................................................................................................................................ 41

Image 31 : sort list ................................................................................................................................ 41

Image 32 : sort list ............................................................................................................................... 42

Image 33 : sort list ................................................................................................................................ 42

Image 34 : sort list ................................................................................................................................ 43

Image 35 : Final sorted list .................................................................................................................. 43

Image 36 : sort list ................................................................................................................................ 44

Image 37 : sort list ................................................................................................................................ 45

Image 38 : sort list ................................................................................................................................ 45

Image 39 : sort list ................................................................................................................................ 46

Image 40 : sort list ................................................................................................................................ 46

Image 41 : Final sorted list .................................................................................................................. 47

Image 42 : sort list ............................................................................................................................... 49

Image 43 : sort list ................................................................................................................................ 49

Image 44 : sort list ................................................................................................................................ 50

Image 45 : sort list ................................................................................................................................ 50

Image 46 : sort list ................................................................................................................................ 51

Image 47 : sort list ................................................................................................................................ 51

Image 48 : sort list ................................................................................................................................ 52

Image 49 : Final sorted list .................................................................................................................. 52

Image 50 : Activity Stack ...................................................................................................................... 63

Image 51 : diagram illustrates ............................................................................................................ 64

INTRODUCTION

Welcome to Assignment 1, where we will explore data structures in depth. In this assignment, we
will create a design specification for several data structures and algorithms, including a stack ADT, two
sorting algorithms, and two network shortest-path algorithms. We will also discuss how to specify an
abstract data type using the example of a software stack, as well as the advantages of encapsulation and
information hiding when using an ADT. Finally, we will delve into imperative ADTs with regard to object
orientation. By the end of this assignment, you will have a solid understanding of these important
concepts in computer science and be able to apply them in practical situations.

I. Create a design specification for data structures, explaining the valid operations that can be
carried out on the structures (P1).

1. Data Structure.

A data structure serves as a container for storing data in a particular arrangement or layout. This
arrangement enables the data structure to exhibit efficiency in certain operations while potentially being
less efficient in others.

By carefully organizing and structuring the data, a data structure can optimize specific operations
such as searching, insertion, deletion, or retrieval. Different data structures are designed with different
trade-offs, depending on the types of operations they prioritize.

For example, an array provides efficient random access to elements, allowing for quick retrieval
by index. On the other hand, inserting or deleting elements within an array can be less efficient,
requiring shifting or resizing of the array.

In contrast, a linked list excels at efficient insertion and deletion operations, as it only requires
adjusting the links between nodes. However, accessing elements by index in a linked list can be less
efficient, requiring traversing the list from the beginning.

Other data structures like stacks and queues prioritize specific operations. Stacks, following the
Last-In-First-Out (LIFO) principle, efficiently handle adding and removing elements from one end. Queues,
following the First-In-First-Out (FIFO) principle, excel at managing elements in a specific order.

The choice of data structure depends on the nature of the data and the expected operations to
be performed. Understanding the strengths and weaknesses of different data structures is crucial for
designing efficient algorithms and optimizing performance in various applications. There are different
kinds of the data structure which are as follows:

Image 1: Data Structure

2. Stack.

A stack is a type of linear data structure that operates based on a specific order for performing
operations: Last In First Out (LIFO), also described as First In Last Out (FILO). The two names describe the
same behaviour, viewed from opposite ends.

The last element added to the stack is the first one to be removed. It's similar to a stack of plates,
where the plate placed on top is the first one to be taken off. Equivalently, the first element added to the
stack will be the last one removed, like the book at the bottom of a stack of books, which can only be
taken out last.

Stacks are widely used in computer applications such as memory management systems and
compilers. They are considered fundamental data structures in computer science and play a crucial role
in various algorithms and problem-solving techniques. Stacks mainly perform three basic operations:

 Push: This operation adds an item to the top of the stack. If the stack is already full, then it results in
an Overflow condition, which means that no more elements can be added.

 Pop: This operation removes an item from the top of the stack. The items are popped in the reverse
order in which they were pushed, meaning that the last item pushed will be the first one to be
removed. If the stack is already empty, then it results in an Underflow condition, which means that
there are no more elements to remove.

 Peek or Top: This operation returns the top element of the stack without removing it.

In addition to these three operations, stacks also have an operation called isEmpty, which returns
true if the stack is empty and false if it still contains elements.

These operations make stacks a useful tool for solving various problems in computer science,
such as implementing recursive algorithms and parsing expressions.
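The four operations above can be sketched as a minimal array-backed stack in Java. This is an illustrative sketch, not the assignment's original code; the class and method names are assumptions:

```java
// Minimal fixed-capacity integer stack illustrating push, pop, peek, and isEmpty.
public class IntStack {
    private final int[] items;
    private int top = -1; // index of the top element; -1 means the stack is empty

    public IntStack(int capacity) {
        items = new int[capacity];
    }

    public void push(int value) {
        if (top == items.length - 1) {
            throw new IllegalStateException("Overflow: stack is full");
        }
        items[++top] = value; // new value becomes the top
    }

    public int pop() {
        if (isEmpty()) {
            throw new IllegalStateException("Underflow: stack is empty");
        }
        return items[top--]; // remove and return the top element
    }

    public int peek() {
        if (isEmpty()) {
            throw new IllegalStateException("Underflow: stack is empty");
        }
        return items[top]; // return the top element without removing it
    }

    public boolean isEmpty() {
        return top == -1;
    }

    public static void main(String[] args) {
        IntStack s = new IntStack(3);
        s.push(1);
        s.push(2);
        s.push(3);
        System.out.println(s.pop());  // 3: last in, first out
        System.out.println(s.peek()); // 2: still on the stack
    }
}
```

Note how every operation touches only the `top` index, which is why all three run in constant time.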

Image 2: Stack

Image 3: Example code of stack

When you input a number, for example input = 6, the intermediate values are computed as nested
calls, and the call stack (top of stack first) looks like this just before the result is returned:

return 77;

multiply (Integer: 11 * 7 = 77)

minus (Integer: 26 - 15 = 11)

plus (Integer: 6 + 20 = 26)

Input (Integer: 6)

result (Integer: multiply)

data (Integer: 6)

Scan (Scanner: System.in)

Image 4: Output of the code
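A small Java program consistent with the trace above can be reconstructed as follows. The method names plus, minus, and multiply are taken from the frames shown; everything else (the class name and exact I/O handling) is an assumption:

```java
import java.util.Scanner;

// Reconstruction of the nested calls shown in the trace:
// plus adds 20, minus subtracts 15, multiply multiplies by 7.
public class StackTrace {
    static int plus(int n)     { return n + 20; }
    static int minus(int n)    { return n - 15; }
    static int multiply(int n) { return n * 7; }

    public static void main(String[] args) {
        Scanner scan = new Scanner(System.in);
        int data = scan.nextInt();              // e.g. 6
        int result = multiply(minus(plus(data)));
        System.out.println(result);             // for input 6: ((6 + 20) - 15) * 7 = 77
    }
}
```

Each nested call pushes a frame onto the call stack, and the frames are popped in reverse order as the results flow back up, which is exactly what the trace illustrates.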

3. Queue.

A queue is a linear data structure that operates based on a specific order for inserting and
removing elements. It's similar to a line of people waiting for a service, where the first person who joins
the line is the first one to be served.

In a queue, elements are inserted at one end, called the REAR (also called the tail), and removed
from the other end, called the FRONT (also called the head). This makes a queue a FIFO (First In First Out)
data structure, which means that the element inserted first will be removed first.

For example, if you go to a ticket counter to buy movie tickets and are first in the queue, then you
will be the first one to get the tickets. The same principle applies to a Queue data structure. Data
inserted first will leave the queue first.

The process of adding an element into a queue is called Enqueue, and the process of removing an
element from a queue is called Dequeue. If the queue is already full, then adding an element will result
in an Overflow condition, while removing an element from an empty queue will result in an Underflow
condition.

Queues are widely used in computer applications such as job scheduling and network packet
routing. They are considered fundamental data structures in computer science and play a crucial role in
various algorithms and problem-solving techniques.
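The Enqueue and Dequeue behaviour described above can be demonstrated with Java's standard Queue interface. The serveFirst helper is an illustrative name, not part of the original code:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// FIFO behaviour: elements leave the queue in the order they arrived.
public class QueueDemo {
    // Enqueue every name at the REAR, then dequeue the one at the FRONT.
    static String serveFirst(String... names) {
        Queue<String> queue = new ArrayDeque<>();
        for (String name : names) {
            queue.add(name);   // enqueue at the REAR (tail)
        }
        return queue.remove(); // dequeue from the FRONT (head)
    }

    public static void main(String[] args) {
        System.out.println(serveFirst("first", "second", "third")); // first
    }
}
```

As in the ticket-counter example, the person who joined the queue first is served first.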

Image 5: Queue

Image 6: Example code of Queue

Image 7: Output of the code

4. Linked list.

A linked list is a data structure that consists of a finite sequence of data elements called nodes.
After arrays, it is one of the most commonly used data structures.

In a linked list, each node contains both the data element and a reference (or pointer) to the next
node in the sequence. This creates a chain-like structure where each node points to the next node,
forming the linked list.

Unlike arrays, linked lists do not require contiguous memory allocation. Each node can be located
anywhere in memory, and the links between nodes allow for efficient traversal and manipulation of the
list.

Linked lists offer several advantages over arrays, such as dynamic size, efficient insertion and
deletion operations, and flexibility in managing memory. However, they have some drawbacks as well,
including slower access time for random elements and the need for additional memory to store the links.

Linked lists are widely used in various applications, including implementing other data structures
like stacks, queues, and hash tables. They are also fundamental in many algorithms and provide a
foundation for understanding more complex data structures and algorithms in computer science.

Image 8: Linked list

Types of linked list:

Linked lists are a common data structure used in programming. Here's a brief explanation of each
type:

 Simple Linked List: In this type, each element in the list contains a link to the next element.
Navigation is only possible in a forward direction.

 Doubly Linked List: In a doubly linked list, each element has links to both the next and previous
elements. This allows for navigation in both forward and backward directions.

 Circular Linked List: In a circular linked list, the last element in the list has a link to the first element
as its next, and the first element has a link to the last element as its previous. This creates a circular
structure, enabling continuous traversal of the list.

These different types of linked lists provide flexibility in implementing various algorithms and
data structures.

Basic operations:

Linked lists are a popular data structure in programming, and they support a variety of operations.
Here are some of the basic operations that are commonly supported by linked lists:

 Insertion: This operation adds a new element to the beginning of the list. The new element becomes
the head of the list, and its next pointer is set to point to the previous head.

 Deletion: This operation removes the first element from the list. The head pointer is updated to
point to the next element in the list, and the deleted element is removed from memory.

 Display: This operation displays all the elements in the list, starting from the head and following the
next pointers until the end of the list is reached.

 Search: This operation searches for an element in the list that matches a given key. The search starts
at the head of the list and continues until either the element is found or the end of the list is reached.

 Deletion (by key): This operation removes an element from the list that matches a given key. The
search for the element starts at the head of the list, and once it's found, its previous element's next
pointer is updated to point to its next element, and its next element's previous pointer is updated to
point to its previous element. The deleted element is then removed from memory.

These operations provide developers with powerful tools for manipulating linked lists in their
programs.
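The insertion, deletion, and search operations listed above can be sketched as a minimal singly linked list in Java. The class and method names are assumptions made for illustration:

```java
// Minimal singly linked list illustrating insertion at the head,
// deletion of the head, and search by key.
public class LinkedListDemo {
    static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    private Node head;

    public void insert(int value) {   // the new element becomes the head
        Node node = new Node(value);
        node.next = head;             // point at the previous head
        head = node;
    }

    public Integer deleteHead() {     // remove the first element
        if (head == null) return null;
        int value = head.data;
        head = head.next;             // head pointer moves to the next node
        return value;
    }

    public boolean search(int key) {  // follow the next pointers from the head
        for (Node n = head; n != null; n = n.next) {
            if (n.data == key) return true;
        }
        return false;
    }
}
```

Insertion and deletion at the head only re-point the head reference, so both are O(1); search must traverse the chain, so it is O(n).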

5. How does a stack help to store the required data?

A stack is a fundamental data structure used to store and manage data. It follows the Last-In-
First-Out (LIFO) principle, where the most recently inserted item is the first one to be removed. Stacks
typically have a limited size, making them suitable for storing a small amount of data.

To implement a stack, a common approach is to use a linked list data structure. Each node in the
linked list represents an item in the stack, with the top of the stack being the head of the linked list.
When new data is inserted, it is added to the top of the stack by creating a new node and updating the
head pointer to point to this new node.

The stack allows for efficient insertion and removal of elements since it only operates on the top
of the stack. When data is inserted, it becomes the new top element, and subsequent insertions will be
placed on top of it. Similarly, when data is removed, the top element is removed first.

While stacks have limited storage capacity, they are valuable in various scenarios. For example,
they are commonly used in programming languages for function call management and expression
evaluation. Additionally, they are utilized in algorithms such as depth-first search and backtracking.

Binary Search Tree:

Image 9: Binary Search Tree

A Binary Search Tree (BST) is a fundamental data structure used in computer science. It consists
of nodes, with each node having at most two children - a left child and a right child. The top node in the
tree is called the root.

The BST satisfies the binary search requirement, which means that the key in each node must be
greater than or equal to any key stored in the left subtree, and less than or equal to any key stored in the
right subtree. This enables fast search, addition, and removal of items. Additionally, BSTs keep their keys
in order, allowing for other searches and operations to apply the principle of Binary Search.

When looking for a key in a BST (or where to insert a new key), the search starts at the root node
and traverses the tree from root to leaf, comparing it to the key stored in each node along the way.
Depending on the comparison result, the search continues in either the left or right subtree. This process
allows operations to jump to about half of the tree with each comparison, making each search, insertion,
or deletion take time relative to the logarithm of the number of items stored in the tree.

On average, this is much faster than finding an item with a key in an unsorted array, which
requires linear time. However, it is slower than performing the same operation on a hash table, which
has constant time complexity. Nonetheless, BSTs are still widely used in computer science due to their
efficiency and simplicity.

Image 10: Tree Data Structure

In a tree data structure, the way data is stored is crucial to understand the underlying structure.
Each node in the tree holds a value associated with it, and it is important to note that no nodes are
allowed to be empty. If a node does not have a value, it means that the node itself does not exist in the
tree.

This concept ensures that every node in the tree has meaning and contributes to the overall
structure. Each node represents a specific element or entity within the dataset being represented by the
tree.

For example, let's consider a family tree. Each person in the family would be represented by a
node in the tree, and their values would indicate their names or other relevant information. If a node has
no value, it signifies that there is no corresponding person in the family tree.

Understanding how data is stored in a tree is essential for effectively working with and
manipulating the underlying structure. The values associated with each node provide context and
meaning to the data, allowing for efficient traversal, search, insertion, and deletion operations within the
tree. Here's an example I mean by that.

Image 11: example

One of the primary use cases for a Binary Search Tree (BST) is to search for values efficiently. The
structure of a BST allows for fast searching of specific values. Let's delve into the process of finding
values in a BST.

When searching for a value in a BST, you start at the root node and compare the target value
with the current node's value. If the target value is equal to the current node's value, the search is
successful. If the target value is less than the current node's value, you move to the left child of the
current node. If the target value is greater than the current node's value, you move to the right child of
the current node. Let's look at an example:

Image 12: example
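The search walk just described can be written as a short iterative routine in Java. This is an illustrative sketch, not the original code:

```java
// Iterative BST search following the comparison rule described above.
public class BstSearch {
    static class Node {
        int key;
        Node left, right;
        Node(int key) { this.key = key; }
    }

    static boolean contains(Node root, int key) {
        Node current = root;
        while (current != null) {
            if (key == current.key) return true;                      // found it
            current = (key < current.key) ? current.left : current.right;
        }
        return false; // fell off the tree: the key is absent
    }
}
```

Each comparison descends one level, so the search visits at most one node per level of the tree.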

Inserting a value into a Binary Search Tree (BST) is a straightforward process and can be seen as a
quest. Let's explore the steps involved in inserting a value into a BST.

Start at the root: Begin the insertion process by starting at the root node of the BST.

Compare the value: Compare the value you want to insert with the value of the current node.

Move left or right: If the value you want to insert is less than the current node's value, move to
the left child; if it is greater, move to the right child.

Find an empty spot: Continue moving down the tree until you find an empty spot (a null
reference) where you can insert the new value.

Avoid duplicates: Since BSTs cannot have duplicate values, at each step along the way, ensure
that the value you are trying to insert does not match the value of the current node you are viewing. If a
duplicate value is encountered, you can either ignore it or handle it based on your specific requirements.

Insert the value: Once you find an empty spot, create a new node with the desired value and
insert it into the tree at that position.

Image 13: example

 Demonstration of the insertion operation

Inserting a new node into a Binary Search Tree (BST) is a fundamental operation that maintains
the BST's property of having all values in the left subtree smaller than the current node and all values in
the right subtree larger than the current node. Let's explore the steps involved in performing the
insertion operation in a BST.

Create a new node: Start by creating a new node with the given value and set its left and right
child pointers to NULL.

Check if the tree is empty: Check if the BST is empty. If it is, set the root node to the new node.

Traverse the tree: If the BST is not empty, traverse the tree by starting at the root node and
comparing the value of the new node with the current node.

Move to the appropriate child: If the value of the new node is less than or equal to the current
node, move to the left child of the current node. If it is greater than the current node, move to the right
child.

Repeat until reaching a leaf node: Continue traversing down the tree until you reach a leaf node
(a node with no children).

Insert the new node: Once you reach a leaf node, insert the new node as its child. If the value of
the new node is less than or equal to the leaf node, insert it as its left child. If it is greater than the leaf
node, insert it as its right child.

The time complexity of inserting a new node into a BST is O(log n) on average, where n is the
number of nodes in the tree, because each comparison during traversal discards roughly half of the
remaining tree. In the worst case, however (a skewed tree built from already-sorted input), every
insertion must walk the full height of the tree, and the operation degrades to O(n).

In conclusion, inserting a new node into a BST involves creating a new node, traversing down the
tree based on comparisons, and inserting it as a child of a leaf node. This operation maintains the BST's
ordering property and takes O(log n) time on average.

Construct a Binary Search Tree by entering the following sequence of numbers: 65, 10, 40, 5, 82,
75 and 90. The elements are inserted into the Binary Search Tree as follows:

 Insert 65:

Image 14: Insert 65

 Insert 10

Image 15 : Insert 10

 Insert 40

Image 16 : Insert 40

 Insert 5

Image 17: Insert 5

 Insert 82

Image 18: Insert 82

 Insert 75

Image 19: Insert 75

 Insert 90

Image 20: Insert 90
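Building the same tree in code and printing an in-order traversal confirms the BST property: the keys come out in sorted order. This is an illustrative sketch; the class and method names are assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// Builds the example tree from 65, 10, 40, 5, 82, 75, 90 and walks it in order.
public class BstBuild {
    static class Node {
        int key;
        Node left, right;
        Node(int key) { this.key = key; }
    }

    static Node insert(Node root, int key) {
        if (root == null) return new Node(key);
        if (key < root.key) root.left = insert(root.left, key);
        else if (key > root.key) root.right = insert(root.right, key);
        return root;
    }

    // In-order traversal: left subtree, node, right subtree.
    static void inorder(Node node, List<Integer> out) {
        if (node == null) return;
        inorder(node.left, out);
        out.add(node.key);
        inorder(node.right, out);
    }

    static List<Integer> sortedKeys(int[] keys) {
        Node root = null;
        for (int key : keys) root = insert(root, key);
        List<Integer> out = new ArrayList<>();
        inorder(root, out);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(sortedKeys(new int[] {65, 10, 40, 5, 82, 75, 90}));
        // [5, 10, 40, 65, 75, 82, 90]
    }
}
```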

II. Determine the operations of a memory stack and how it is used to implement function calls in a
computer (P2).

1. The operations of a memory stack

Stack memory is a memory management technique that enables temporary data storage in
system memory, functioning as a first-in, last-out buffer. The Stack Pointer is a crucial component of
stack memory operations, as it indicates the current location of the stack memory address and is
automatically modified with each stack action.

When data is stored in the stack memory, it is referred to as "pushing" and is done using the
PUSH instruction. Conversely, when data is retrieved from the stack memory, it is referred to as
"popping" and is done using the POP instruction. This allows for efficient and organized management of
data in memory, as the most recently added data is always the first to be retrieved.

Overall, stack memory plays a vital role in optimizing memory usage and improving system
performance.

In addition to its role as a temporary data storage buffer, stack memory is also commonly used
for function calls and subroutine execution. When a function is called, the current state of the program is
saved onto the stack memory, including the values of any variables or registers that are currently in use.
This allows the function to execute with its own set of values and parameters without affecting the rest
of the program.

Once the function has completed its execution, the saved state is retrieved from the stack
memory and the program continues from where it left off. The saved state for each call is known as a
"stack frame", and this mechanism is crucial for managing program flow and data organization.

However, it is important to note that stack memory has a limited size and can easily be exhausted
if too much data is pushed onto it. This leads to stack overflow errors and can cause the program to crash
or behave unpredictably. Therefore, it is essential to manage stack memory carefully and ensure that it is
used efficiently and effectively.

Image 21: Stack Memory

In stack data structure, there are several methods that can be used to manipulate the elements in
the stack. The push() method is used to add an element to the top of the stack. This is useful for when
you need to add new data to the stack and make it the most recently added element.

On the other hand, the pop() method is used to remove the top element from the stack. This is
useful when you need to retrieve and remove the most recently added element from the stack.

The peek() method is another useful method that returns the top element of the stack without
removing it. This can be helpful when you need to access the most recently added element without
modifying the stack.

The size() method is used to determine the number of elements in the stack. This can be useful
when you need to know how many elements are currently stored in the stack.

Finally, the isEmpty() method returns a boolean value indicating whether or not the stack is
empty. This can be helpful when you need to check if there are any elements in the stack before
performing certain operations.

Overall, these methods are essential for manipulating and managing data in a stack data
structure. By utilizing these methods effectively, you can ensure that your stack is organized and efficient,
which can lead to improved program performance and reduced errors.
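As a concrete illustration, the five methods described above can be sketched in Python. This is a minimal list-backed sketch, not any particular library's API; the method names simply follow the text:

```python
class Stack:
    """A simple LIFO stack backed by a Python list."""

    def __init__(self):
        self._items = []

    def push(self, item):
        # Add an element to the top of the stack.
        self._items.append(item)

    def pop(self):
        # Remove and return the top element; error if empty.
        if self.isEmpty():
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def peek(self):
        # Return the top element without removing it.
        if self.isEmpty():
            raise IndexError("peek from empty stack")
        return self._items[-1]

    def size(self):
        # Number of elements currently stored.
        return len(self._items)

    def isEmpty(self):
        # True when the stack holds no elements.
        return len(self._items) == 0


s = Stack()
s.push(1)
s.push(2)
print(s.peek())   # 2 (top element, not removed)
print(s.pop())    # 2 (removed and returned)
print(s.size())   # 1
```

A Python list already behaves this way (append/pop at the end), which is why it is a natural backing store here.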

Push Operation: During the execution of a program, when a function is called, the current state
of the program is saved onto the stack memory. This includes the contents of the registers that are being
used by the caller program. This operation is known as the "push" operation.

By saving the register contents onto the stack, the function can have its own set of registers to
work with without interfering with the caller program. This ensures that the function can execute
independently and safely modify its own register values without affecting the rest of the program.

The push operation plays a crucial role in maintaining the integrity and consistency of the
program's execution. It allows for proper context switching between different functions and ensures that
each function can operate with its own set of register values.

In addition to saving register contents, other relevant data such as function parameters and
return addresses may also be pushed onto the stack during the push operation. This ensures that all
necessary information is preserved and can be restored when the function completes its execution.

Overall, the push operation in stack memory management is essential for maintaining program
flow and ensuring that functions can operate independently without causing conflicts or unintended side
effects.

Image 22: Push Operation

After a function has completed its execution, the saved state of the program on the stack
memory needs to be restored. This is done using the "pop" operation, which retrieves the data from the
stack memory and restores it to the appropriate registers.

During the pop operation, the most recently pushed data is retrieved from the top of the stack
and restored to the appropriate registers. This includes not only register values, but also function
parameters and return addresses.

By restoring the saved state of the program from the stack memory, the program can continue its
execution from where it left off before the function call. This ensures that all necessary data is preserved
and that the program can operate smoothly and efficiently.

In addition to restoring data, the pop operation also serves to remove unnecessary data from the
stack memory. This ensures that the stack memory is used efficiently and that it doesn't become
overwhelmed with unnecessary data.

Overall, the pop operation is an essential part of stack memory management. It ensures that
program flow is maintained and that data is properly managed and restored. By using the pop operation
effectively, developers can create efficient and reliable software applications that meet the needs of
their users.

Image 23: Pop Operation
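The push-and-pop discipline of function calls can be mimicked in miniature. The sketch below is a deliberately simplified model (real CPUs store frames as raw memory words, not dictionaries, and the register names here are invented for illustration):

```python
call_stack = []

def call_function(name, saved_registers):
    # "Push": save the caller's state before the callee runs.
    call_stack.append({"function": name, "registers": dict(saved_registers)})

def return_from_function():
    # "Pop": restore the most recently saved state (LIFO order).
    frame = call_stack.pop()
    return frame["registers"]

registers = {"ax": 10, "bx": 20}
call_function("helper", registers)   # save the caller's registers
registers = {"ax": 0, "bx": 0}       # the callee freely overwrites them
registers = return_from_function()   # the caller's values come back
print(registers)                     # {'ax': 10, 'bx': 20}
```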

2. Memory stack in a computer.

To implement a stack on a CPU, a region of computer memory is set aside for stack operations,
and a processor register is used as the stack pointer. This approach allows stack operations to execute
efficiently in the random-access memory connected to the CPU.

The memory of a computer is typically divided into three sections: program, data, and stack. The
Program Counter (PC) stores the location of the next instruction in the program. The Address
Register (AR) provides access to various data, while the Stack Pointer (SP) always holds the address
of the element at the top of the stack.

The PC, AR, and SP registers are connected to the common bus, allowing them to communicate
with each other and with other components of the CPU. During the fetch stage, the PC is used to read the
instruction to be executed. During the execution phase, an operand is read from the address held in the
address register. To push or pop an element from the stack, the stack pointer is used.

By using registers and memory effectively, developers can optimize stack execution and improve
program performance. This allows for efficient management of program flow and data organization,
leading to more reliable and effective software applications.

Image 24: Memory Unit
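The role of the stack pointer can be sketched against a plain memory array. This is an illustrative model only; real instruction sets differ in whether the stack grows upward or downward and whether SP is adjusted before or after the access:

```python
MEMORY_SIZE = 8
memory = [0] * MEMORY_SIZE
sp = MEMORY_SIZE          # stack grows downward from the high addresses

def push(value):
    global sp
    if sp == 0:
        raise OverflowError("stack overflow")
    sp -= 1               # decrement SP, then store at the new top
    memory[sp] = value

def pop():
    global sp
    if sp == MEMORY_SIZE:
        raise IndexError("stack underflow")
    value = memory[sp]    # read the top, then increment SP
    sp += 1
    return value

push(5)
push(9)
print(pop())  # 9
print(pop())  # 5
```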

III. Using an imperative definition, specify the abstract data type for a software stack (P3).

1. Formal specifications.

Formal specifications are essential for expressing instructions, functions, or systems precisely,
particularly in programming and computer science. By employing reliable and sound inference
techniques, behavioral analysis can improve a design by validating its crucial features. These
specifications apply scientific methods and practices, using a precisely defined grammar and meaning to
thoroughly analyze a system and potentially uncover valuable insights. They serve as a foundation for the
development of software and systems, helping to ensure their effectiveness and reliability.

Image 25: Formal specifications

 Types of formal specifications

Attribute Orientation is a programming approach that involves declaring only the desired traits in
a declarative manner. Algebraic data types are treated as an algebra, and their axioms establish their
mathematical characteristics. The use of first-order predicate logic with preconditions and
postconditions is common in this approach. This allows for the creation of a system model and a
description of the expected behavior.

Model-oriented programming is another approach that explains how the system works in detail,
using sets, sequences, tuples, and maps. It involves creating an abstract representation of specified
mathematical objects such as sets, strings, functions, and algorithms. This approach also employs state
machines to model the behavior of the system.

Both Attribute Orientation and Model-oriented programming have their unique benefits and can
be used in different scenarios. Attribute Orientation is ideal for expressing desired traits in a concise and
declarative way, while Model-oriented programming is suitable for creating detailed system models and
explaining how they work.

2. VDM specification language.

The Vienna Development Method (VDM) is a formal methodology that has been used for decades
to develop computer-based systems and software. It is a model-oriented approach that consists of a set
of languages and tools with a robust mathematical foundation. These tools and languages enable
developers to describe and evaluate system models early in the design process, before making costly
implementation commitments.

VDM emphasizes the importance of creating accurate and detailed system models that are
mathematically sound. This approach helps developers to identify potential problems and errors early on
in the design process, which can save time and resources. The VDM methodology also supports the
development of high-quality software that is reliable, efficient, and maintainable.

In summary, VDM is a powerful methodology that provides developers with the tools and
languages needed to create accurate and detailed system models. By using VDM, developers can identify
potential problems early on in the design process and create high-quality software that is reliable,
efficient, and easy to maintain.

3. Pre-condition, Post-condition.

In software development, certain conditions are essential for performing a system function
correctly. Preconditions must be met before the function can run, and a separate set of conditions
applies after the function is executed. To ensure that a particular function is implemented correctly, it is
necessary to specify the following conditions:

Preconditions: These are the conditions that must be met before the function can be executed.
They include any required inputs or data that must be available before the function can run.

Postconditions: These are the conditions that apply after the function has been executed. They
may include outputs, changes to system state, or any other effects that the function has on the system.

To simplify the evaluation of these conditions, a default action is often used. This provides a
generic condition that can be used to assess multiple situations, rather than evaluating each condition
individually. By using default actions, developers can save time and resources while still ensuring that all
necessary conditions are met before and after a function is executed.
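A small sketch of how pre- and postconditions can be checked at run time with assertions. The `withdraw` function and its specific conditions are invented purely for illustration:

```python
def withdraw(balance, amount):
    # Preconditions: the inputs must be valid before the function runs.
    assert amount > 0, "precondition failed: amount must be positive"
    assert amount <= balance, "precondition failed: insufficient funds"

    new_balance = balance - amount

    # Postconditions: the stated effects must hold after execution.
    assert new_balance == balance - amount, "postcondition failed: wrong result"
    assert new_balance >= 0, "postcondition failed: negative balance"
    return new_balance

print(withdraw(100, 30))  # 70
```

If a caller violates a precondition (for example, withdrawing more than the balance), the assertion fails before any state changes, which is the point of specifying conditions up front.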

4. The ERROR condition.

In many cases, there is an implied action that can be used to evaluate multiple conditions
simultaneously. This approach offers a common condition that can be used to check for a variety of
distinct circumstances, rather than evaluating each condition independently.

The use of implied actions can be beneficial in software development, as it can simplify the
evaluation process and save time and resources. For example, if a particular function requires a specific
input, the implied action may be to check if the input is available before executing the function. This
approach can be used to evaluate multiple functions that require the same input, rather than evaluating
each function independently.

However, it is important to note that implied actions may not always be appropriate for every
situation. In some cases, it may be necessary to evaluate each condition independently to ensure that all
necessary requirements are met. Developers must carefully consider the specific circumstances of each
situation to determine whether an implied action or independent evaluation is the most appropriate
approach.

5. Formal Specifications in Stack.

In computer science, the Stack Abstract Data Type (ADT) is commonly used to store and
manipulate data. The Stack ADT consists of three primary commands: "push", "pop", and "top". The
"push" command adds an element to the top of the stack, the "pop" command removes the top element
from the stack, and the "top" command retrieves the element at the top of the stack.

To formally define the Stack ADT, it is necessary to identify and consider any uncommon
conditions that may arise. These conditions include Error-condition, Pre-condition, and Post-condition.
Error-conditions are any situations in which an error may occur during the execution of a command. Pre-
conditions are any requirements that must be met before a command can be executed, while Post-
conditions are any effects that a command has on the stack after it has been executed.

By considering these uncommon conditions, developers can create a more robust and reliable
definition of the Stack ADT. This can help to ensure that the stack operates correctly in all situations and
can prevent errors or unexpected behavior.
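One informal way to render such a specification is to write the pre-, post-, and error-conditions as comments backed by runtime checks. The sketch below is Python, not valid VDM-SL, and the capacity limit is an assumption added for illustration:

```python
class SpecifiedStack:
    def __init__(self, capacity):
        self.items = []
        self.capacity = capacity

    def push(self, x):
        # Pre-condition:   len(items) < capacity
        # Error-condition: pushing onto a full stack raises an error
        # Post-condition:  top() == x and the size increases by one
        if len(self.items) >= self.capacity:
            raise OverflowError("ERROR: stack is full")
        self.items.append(x)

    def pop(self):
        # Pre-condition:   len(items) > 0
        # Error-condition: popping an empty stack raises an error
        # Post-condition:  the size decreases by one
        if not self.items:
            raise IndexError("ERROR: stack is empty")
        return self.items.pop()

    def top(self):
        # Pre-condition:   len(items) > 0
        # Error-condition: reading the top of an empty stack raises an error
        # Post-condition:  the stack is unchanged
        if not self.items:
            raise IndexError("ERROR: stack is empty")
        return self.items[-1]

s = SpecifiedStack(capacity=2)
s.push("a")
s.push("b")
print(s.top())  # b
```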

IV. Illustrate, with an example, a concrete data structure for a First in First out (FIFO) queue (M1).

1. First in First out (FIFO) Queue:

A Queue is a type of data structure that operates on the principle of First In First Out (FIFO). It
follows a specific order in which operations are performed. A common example of a queue is a line of
customers waiting for a service, where the customer who arrived first is served first.

The main distinction between queues and stacks lies in the removal process. In a stack, we
remove the item that was most recently added (Last In First Out - LIFO), while in a queue, we remove the
item that was least recently added (First In First Out - FIFO). This fundamental difference in removal
behavior makes queues and stacks suitable for different scenarios and problem-solving approaches.

Queues are widely used in computer science and programming, especially in operating systems
and network protocols. For example, when a computer receives multiple requests for data, it processes
them in the order they were received, using a queue data structure to manage the requests.

In addition to FIFO, queues can also implement other ordering schemes, such as priority queues,
where items are removed based on their priority level rather than their arrival time. This makes queues a
versatile tool for solving a wide range of problems in computer science and beyond.

Overall, understanding the concept of queues and their applications is essential for any
programmer or computer scientist. By mastering this fundamental data structure, one can develop more
efficient and elegant algorithms for solving complex problems.

Image 26: FIFO Queue

FIFO Queue basic operations:

Linear operations refer to a set of actions that involve creating, using, and deleting a queue data
structure. Queues are commonly used in computer science and programming to manage a collection of
items in a specific order.

The two fundamental operations of a queue are enqueue() and dequeue(). Enqueue() adds new
items to the back of the queue, while dequeue() removes and returns the item at the front of the queue.
These operations follow the FIFO principle, where the first item added to the queue is the first one to be
removed.

Apart from these basic operations, queues can also support other functions, such as checking the
size of the queue, accessing the front or back item without removing it, and clearing the entire queue.

Overall, understanding the basic operations of a queue is crucial for working with linear data
structures in computer science and programming. By mastering these concepts, one can develop more
efficient and effective algorithms for solving complex problems.

Queues are widely used in various applications, including operating systems, network protocols,
and data processing. For example, in an operating system, a queue is used to manage processes waiting
for CPU time. Similarly, in network protocols, a queue is used to manage the transmission of packets.

Apart from the basic enqueue and dequeue operations, queues can also be implemented with
additional features, such as priority queues, circular queues, and double-ended queues. Priority queues
allow items to be removed based on their priority level, while circular queues enable efficient use of
memory by reusing space that becomes available after dequeueing. Double-ended queues (also known
as dequeues) enable items to be added or removed from both ends of the queue.

In addition to their use in computer science, queues can also be applied to real-world scenarios.
For instance, queues can be used to manage waiting lines in stores, banks, or hospitals. By organizing
customers in a queue, businesses can ensure that everyone is served in the order they arrived and
minimize wait times.

Overall, understanding the various types of queues and their applications is essential for anyone
working with data structures and algorithms. By mastering these concepts, one can develop more
efficient and effective solutions for a wide range of problems.

Enqueue Operation:

A queue is a linear data structure that follows the First In First Out (FIFO) principle. It maintains
two data pointers, front and rear, which indicate the first and last positions of the queue, respectively.

To insert data into a queue (enqueue), the following steps should be taken:

Step 1: Check if the queue is full. If the queue is full, produce an overflow error and exit. This
ensures that the queue does not exceed its maximum capacity.

Step 2: If the queue is not full, increment the rear pointer to point to the next empty space and
add the data element to the queue location where rear is pointing. This ensures that the new item is
added to the end of the queue.

Step 3: Return success. This indicates that the enqueue operation was successful and the new
item has been added to the queue.

In addition to enqueue, queues also support other operations, such as dequeue (removing the
first item from the queue), peek (viewing the first item without removing it), and size (checking the
number of items in the queue).

Overall, understanding the basic operations of a queue and how to implement them is essential
for working with linear data structures in computer science and programming. By mastering these
concepts, one can develop more efficient and effective algorithms for solving complex problems.

Image 27: queue Enqueue

2. Dequeue Operation

Accessing data from a queue is a process that involves two steps:

 Retrieve the data from the position where the front pointer is pointing. This gives us the data of the
first element in the queue without yet removing it.

 After accessing the data, we proceed to remove the accessed element, so that it is no longer part of
the queue.

During the process of accessing data, we can also perform other operations such as checking the
first data in the queue without removing it (peek), checking the size of the queue (size), or checking if the
queue is empty (isEmpty).

Knowing how to access data in a queue and how to perform the corresponding operations is
crucial when working with queue data structures in computer science and programming. By mastering
these concepts, we can develop more efficient and effective algorithms to solve complex problems.

To perform a dequeue operation (delete the first element) on the queue, perform the following
steps:

 Step 1: Check if the queue is empty or not.

 Step 2: If the queue is empty, print an error message and exit the program. If not, we access the data
at the position the front pointer is pointing to, then advance the front pointer by one position to
point to the next element in the queue.

 Step 3: Return success.

During the process of deleting the first element, we can also perform other operations such as
checking the first item in the queue without deleting it (peek), checking the size of the queue (size), or
checking whether the queue is empty (isEmpty).

Understanding how to remove the first element and the operations involved is important when
working with queues in computer science and programming. By mastering these concepts, we can
develop effective algorithms and solve complex problems easily and quickly.

Image 28: queue dequeue

To perform queuing operations most effectively, we need to add the following additional functions:

 peek(): Get the first element in the queue without deleting it. This function allows us to preview the
first data in the queue without affecting the original queue.

 isfull(): Checks if the queue is full. If the queue is full, we cannot add new elements to the queue.

 isempty(): Checks whether the queue is empty or not. If the queue is empty, we cannot access or
delete any elements from the queue.

In addition, we can also add some other functions, such as push() (another common name for the
enqueue operation, adding an element at the rear) or size() to get the number of elements in the queue.

Understanding these functions and how to use them is important when working with queues and
other data structures in computer science and programming. By using these functions effectively, we can
develop algorithms and applications to solve complex real-world problems.
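Bringing these operations together, one possible sketch of a queue class using the function names from the text (a simple list-backed model for clarity, not an efficient production implementation):

```python
class Queue:
    def __init__(self, capacity):
        self._items = []
        self._capacity = capacity

    def enqueue(self, item):
        if self.isfull():
            raise OverflowError("queue is full")
        self._items.append(item)      # add at the rear

    def dequeue(self):
        if self.isempty():
            raise IndexError("queue is empty")
        return self._items.pop(0)     # remove from the front (FIFO)

    def peek(self):
        if self.isempty():
            raise IndexError("queue is empty")
        return self._items[0]         # front element, not removed

    def isfull(self):
        return len(self._items) == self._capacity

    def isempty(self):
        return len(self._items) == 0

    def size(self):
        return len(self._items)

q = Queue(capacity=3)
q.enqueue("a")
q.enqueue("b")
print(q.peek())     # a
print(q.dequeue())  # a
print(q.size())     # 1
```

Note that `pop(0)` shifts every remaining element, which costs O(n) per dequeue; a circular buffer or a linked structure avoids that, at the price of a slightly longer implementation.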

Example:

FIFO stands for "first in, first out". This is a method of processing data structures in which the first
element is processed first and the newest element is processed last.

In a FIFO queue, elements are added to the queue in order from top to bottom, and when we
remove an element from the queue, the first element in the queue is removed. This ensures that
elements will be processed in the exact order in which they were added to the queue.

The FIFO queue is an important data structure and is widely used in many different fields, from
simple computer systems to complex ones such as traffic control systems and stock exchange
systems. Understanding how FIFO queues and other data structures work will help us develop effective
algorithms and applications to solve real-world problems.

Image 29: Example

In this example, we need to consider the following:

 There is a ticket counter where people come, get their tickets and leave.

 Everyone gets in a line (queue) to get to the Ticket Counter in an organized manner.

 The first person to enter the line (queue) will receive a ticket first and leave the queue first.

 The next person in line (queue) will receive a ticket after the person in front.

 This way, the last person in line will get the last ticket.

Therefore, the first person to enter the line (queue) will receive a ticket first and the last person
to enter the line (queue) will receive a ticket last.

This example illustrates the practical use of queues. Queues help organize tasks in order and
ensure that tasks are performed in an organized and efficient manner. It is widely used in many different
fields, from managing goods in supermarkets to handling customer requests in service centers.
Understanding how queuing works is important in applying it to real-life situations.
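The ticket-counter scenario maps directly onto a queue. A short sketch using Python's collections.deque, with invented names for illustration:

```python
from collections import deque

line = deque()
for person in ["Alice", "Bob", "Carol"]:   # people join the back of the line
    line.append(person)

while line:
    served = line.popleft()                # the first in line is served first
    print(f"{served} receives a ticket")
```

Because `popleft()` always removes from the front, the people receive tickets in exactly the order they joined the line.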

V. Compare the performance of two sorting algorithms (M2).

1. Bubble Sort.

Bubble Sort is a simple algorithm used to sort an array of n elements. It works by comparing each
element with its adjacent element and swapping them if they are not in the correct order. If we want to
sort an array in ascending order, then we start by comparing the first element with the second element.
If the first element is larger than the second element, we swap them and move on to compare the
second and third elements, and so on. We repeat this process n times for an array with n elements. This
is called a bubble sort because with each pass, the largest element "bubbles" to the end of the array.

It's worth noting that while Bubble Sort is easy to understand and implement, it's not very
efficient for large arrays. Its time complexity is O(n^2), which means that the number of comparisons
and swaps grows quadratically with the size of the array. For larger arrays, more efficient algorithms like
Quick Sort or Merge Sort are preferred.

The Bubble Sort algorithm can be broken down into the following steps:

 Step 1: Starting with the first element (index = 0), compare it with the next element in the array.

 Step 2: If the current element is larger than the next element, swap them.

 Step 3: If the current element is smaller than or equal to the next element, move to the next
element and repeat Step 1.

 Step 4: Continue this process until the entire array is sorted.

By repeating these steps multiple times, the largest elements gradually "bubble" towards the end
of the array. This process is repeated for each element in the array, ensuring that all elements are
correctly sorted.

It's important to note that Bubble Sort has a time complexity of O(n^2), which means its
performance decreases significantly for larger arrays. Therefore, it is generally not recommended for
sorting large datasets. Other more efficient sorting algorithms, such as Quick Sort or Merge Sort, are
preferred in such cases.
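The steps above translate into a short ascending-order implementation, including the common early-exit optimization that stops as soon as a pass makes no swaps:

```python
def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        # After pass i, the largest i+1 elements are already at the end.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]   # swap adjacent pair
                swapped = True
        if not swapped:      # a swap-free pass means the array is sorted
            break
    return arr

print(bubble_sort([6, 2, 7, 3, 5, 4]))  # [2, 3, 4, 5, 6, 7]
```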

Example of bubble sort:

Consider an array with values:

1st iterations:

Image 30: sort list

The numbers shown above are not sorted in any particular order. To organize them, we can sort
them in ascending or descending order. Sorting in ascending order means arranging the numbers from
smallest to largest, while sorting in descending order means arranging them from largest to smallest.

In this case, we will sort the numbers in ascending order. This means that the smallest numbers
will end up at the front of the list and the largest numbers at the back. To do this, we will use a sorting
algorithm such as Bubble Sort, which compares adjacent elements and swaps them if they are not in the
correct order. By repeating this process multiple times, the largest numbers gradually move towards the
end of the list, resulting in a list sorted in ascending order.

Image 31: sort list

To begin sorting the numbers in ascending order, we start by comparing the first two
elements: 6 and 2. Since 6 is greater than 2, we need to swap their values. After swapping, the array
begins 2, 6, giving the arrangement 2, 6, 7, 3, 5, 4.

This is just the first step in the sorting process. We will continue comparing and swapping
adjacent elements until the entire list is sorted in ascending order.

Image 32: sort list

After swapping the first two elements, we move on to the next pair: 6 and 7. Since 6 is smaller
than 7, no swap is needed. The following pair is 7 and 3; since 7 is greater than 3, we swap their values,
and the arrangement becomes 2, 6, 3, 7, 5, 4.

We will continue this process of comparing and swapping adjacent elements until the entire list is
sorted. Each pass moves the largest remaining number towards the end of the list. Once the sorting
process is complete, the numbers will be arranged in ascending order, with the smallest numbers at the
front and the largest numbers at the back.

Image 33: sort list

After swapping 7 and 3, we continue with the next pair: 7 and 5. Since 7 is greater
than 5, we need to swap their values. After swapping, the new arrangement becomes 2, 6, 3, 5, 7, 4.

It's important to note that after each swap, we need to check whether any more adjacent
elements need to be swapped. In this case, we can see that the 7 can still be swapped with the
remaining element, which is 4, because 7 is greater than 4.

We will continue this process of comparing and swapping adjacent elements until no more swaps
are needed. At this point, the list will be fully sorted in ascending order, with the smallest numbers at the
front and the largest numbers at the back.

Image 34: sort list

After swapping the previous elements, we reach the last pair of the first pass: 7 and 4. Since 7 is
greater than 4, we need to swap their values. After swapping, the new arrangement becomes 2, 6, 3, 5, 4, 7.

This completes the first pass. Notice that the largest element, 7, has now "bubbled" to the end of
the array, which is exactly what each pass of Bubble Sort guarantees for the largest remaining element.

This demonstrates the iterative nature of the sorting process, where each comparison and swap
brings the largest elements closer to their correct positions.

Image 35: Final sorted list

After the first iteration of the sorting process, the numbers in the array are 2, 6, 3, 5, 4, and 7.
However, we can see that the numbers are still not sorted in ascending order. This means that we need
to continue with the second iteration of the sorting process.

During the second iteration, we will compare adjacent pairs of elements and swap them if they
are not in the correct order. This process will continue until the entire array is sorted in ascending order.
The iterative nature of the sorting process ensures that each comparison and swap brings the smallest
elements closer to their correct positions.

It's worth noting that Bubble Sort has a time complexity of O(n^2), which means that the number
of comparisons and swaps grows quadratically with the size of the array. For larger arrays, more
efficient algorithms like Quick Sort or Merge Sort are preferred.

2nd iterations:

Image 36: sort list

To continue with the sorting process, the second pass again starts at the beginning of the array.
The first pair, 2 and 6, is already in order, so no swap is needed. The next pair is 6 and 3; since 6 is
greater than 3, we swap their values, and the arrangement becomes 2, 3, 6, 5, 4, and 7.

At this point the numbers are still not sorted in ascending order, so the second pass continues
with the remaining pairs, gradually carrying the 6 towards the end of the array.

Image 37: sort list

Continuing with the second pass, we move on to the next pair of adjacent elements, which are
6 and 5. Since 6 is greater than 5, we need to swap their values. After swapping, the new arrangement
becomes 2, 3, 5, 6, 4, and 7.

At this point the numbers are still not sorted in ascending order, which means that the pass
continues with the remaining pairs.

Image 38: sort list

Continuing, we move on to the next pair of adjacent elements, which are 6 and 4. Since 6 is
greater than 4, we need to swap their values. After swapping, the new arrangement becomes
2, 3, 5, 4, 6, and 7.

We then compare the last pair, which are 6 and 7. Since 6 is smaller than 7, we don't need to
swap them.

At this point, we have completed the second pass. However, we can see that the numbers are still
not fully sorted in ascending order, which means that we need to continue with a further pass.

Image 39: sort list

After the second iteration of the sorting process, we can see that the numbers are almost in
ascending order, except for the number 4. However, to ensure that we have achieved the perfect order,
we need to continue with the third iteration of the sorting process.

During the third iteration, we will compare adjacent pairs of elements and swap them if they are
not in the correct order. This process will continue until the entire array is sorted in ascending order. The
iterative nature of the sorting process ensures that each comparison and swap brings the smallest
elements closer to their correct positions.

It's important to note that Bubble Sort has a time complexity of O(n^2), which means that the
number of comparisons and swaps grows quadratically with the size of the array. For larger arrays, more
efficient algorithms like Quick Sort or Merge Sort are preferred.

3rd iterations:

Image 40: sort list

Continuing with the third pass, the pairs 2 and 3, and then 3 and 5, are already in order, so no
swaps are needed. The next pair is 5 and 4; since 5 is greater than 4, we need to swap their values. After
swapping, the new arrangement becomes 2, 3, 4, 5, 6, and 7.

We then compare the remaining pairs, 5 and 6 and then 6 and 7. Both are already in order, so no
swaps are needed.

At this point, we have completed the third iteration of the sorting process. We can see that all
the numbers are now sorted in ascending order, which means that we have achieved the perfect order.
The final arrangement of the numbers is 2, 3, 4, 5, 6, and 7.

Bubble Sort is a simple but inefficient sorting algorithm that works by repeatedly swapping
adjacent elements until the entire array is sorted. It's important to note that for larger arrays, more
efficient algorithms like Quick Sort or Merge Sort are preferred.

Image 41: Final sorted list

After completing the third iteration of the sorting process, we can see that all the numbers are
now in their correct place, and the array is sorted in ascending order. The final arrangement of the
numbers is 2, 3, 4, 5, 6, and 7.

However, it's important to note that Bubble Sort, in its basic form, cannot tell that the array is
sorted until it completes a full pass without making any swaps. Therefore, even though the elements are
already in ascending order, the algorithm still performs a fourth iteration.

During the fourth iteration, the algorithm again compares each pair of adjacent elements, but no
swaps are needed. This swap-free pass confirms that the entire array is sorted in ascending order, and
the algorithm can terminate.

It's worth noting that Bubble Sort has a time complexity of O(n^2), which means that the number
of comparisons and swaps grows quadratically with the size of the array. For larger arrays, more
efficient algorithms like Quick Sort or Merge Sort are preferred.

2. Selection Sort

Selection Sort is a simple yet effective sorting algorithm that works by repeatedly selecting the
smallest element and placing it in its correct position. The algorithm starts by finding the smallest
element in the array and swapping it with the element in the first position. Then, it proceeds to find the
second smallest element and swaps it with the element in the second position. This process continues
until the entire array is sorted.

The Selection Sort Algorithm is commonly used to arrange elements in a specific order, such as
ascending or descending. It derives its name from the fact that it repeatedly selects the next smallest
element and moves it to its rightful place.

The selection process begins by selecting the first element in the list and comparing it with all the
remaining elements. If any element is found to be smaller (in case of ascending order), both elements
are swapped, ensuring that the first position is occupied by the smallest element in the desired sort
order. The same procedure is then repeated with the second element, comparing it to the remaining
elements and swapping if necessary. This process continues until all elements are sorted.

Selection Sort has a time complexity of O(n^2), making it suitable for small arrays or partially
sorted arrays. However, for larger arrays, more efficient algorithms like Quick Sort or Merge Sort are
preferred.

The selection sort algorithm is performed using the following steps:

Selection Sort is a simple algorithm that sorts an array by repeatedly selecting the smallest
element and placing it in its correct position. The algorithm works as follows:

 Step 1: Select the first element in the list as the starting point.

 Step 2: Compare the selected element with all other elements in the list.

 Step 3: In each comparison, if any element is found to be smaller than the selected element (in case
of ascending order), swap both elements.

 Step 4: Repeat the same procedure with the element in the next position in the list until the entire
list is sorted.

During each iteration, the algorithm selects the next smallest element and places it in its correct
position. This process continues until all elements are sorted.

Selection Sort has a time complexity of O(n^2), making it suitable for small arrays or partially
sorted arrays. However, for larger arrays, more efficient algorithms like Quick Sort or Merge Sort are
preferred.

It's worth noting that Selection Sort is an in-place sorting algorithm, meaning that it doesn't
require any additional memory to store temporary data. This makes it a memory-efficient algorithm for
sorting arrays.
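These steps can be sketched in Python. Note that this is the common variant that first finds the index of the minimum and then performs a single swap per pass, rather than swapping on every comparison as the steps above describe; both orderings produce the same sorted result:

```python
def selection_sort(arr):
    """Sort arr in place by repeatedly moving the smallest remaining element forward."""
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):        # scan the unsorted part for the minimum
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]  # one swap per pass
    return arr

print(selection_sort([6, 2, 5, 3, 7, 4]))  # → [2, 3, 4, 5, 6, 7]
```

Because it performs at most one swap per pass, this variant makes at most n-1 swaps in total, which is the key practical difference from Bubble Sort.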

Example of selection sort:

Consider the following unsorted list of elements:

Image 42: sort list

1st iteration:

Image 43: sort list

2nd iteration:

To sort a list using the Selection Sort algorithm, we begin by selecting the first element as the
starting point and comparing it with every other element, swapping whenever a smaller element is
found. For the second iteration, we select the element in the second position and compare it with the
remaining elements. Whenever we encounter an element smaller than the element at the second
position, we swap those two elements.

This process is repeated for each subsequent position in the list until the entire list is sorted. The
algorithm repeatedly selects the next smallest element and places it in its correct position.

Image 44: sort list

3rd iteration:

To continue sorting a list using the Selection Sort algorithm, we move on to the third position
element. We select the element at the third position and compare it with all the remaining elements in
the list. Whenever we find a smaller element than the element at the third position, we swap those two
elements.

This process is repeated for each subsequent position in the list until the entire list is sorted. The
algorithm selects the next smallest element and places it in its correct position.

Image 45: sort list

4th iteration:

To continue sorting a list using the Selection Sort algorithm, we move on to the fourth position
element. We select the element at the fourth position and compare it with all the remaining elements in
the list. Whenever we find a smaller element than the element at the fourth position, we swap those two
elements.

This process is repeated for each subsequent position in the list until the entire list is sorted. The
algorithm selects the next smallest element and places it in its correct position.

Image 46: sort list

5th iteration:

To continue sorting a list using the Selection Sort algorithm, we move on to the fifth position
element. We select the element at the fifth position and compare it with all the remaining elements in
the list. Whenever we find a smaller element than the element at the fifth position, we swap those two
elements.

This process is repeated for each subsequent position in the list until the entire list is sorted. The
algorithm selects the next smallest element and places it in its correct position.

Image 47: sort list

6th iteration:

To continue sorting the list using the Selection Sort algorithm, we move on to the sixth position
element. We select the element in position six and compare it with all the remaining elements in the list.
Whenever we find an element that is smaller than the element in the sixth position, we swap those two
elements.

This process is repeated for each subsequent position in the list until the entire list is sorted. The
algorithm selects the next smallest element and places it in the correct position.

Image 48: sort list

7th iteration:

To continue sorting the list using the Selection Sort algorithm, we move on to the seventh position
element. We select the element in position seven and compare it with all the remaining elements in the
list. Whenever we find an element that is smaller than the element in the seventh position, we swap
those two elements.

This process is repeated for each subsequent position in the list until the entire list is sorted. The
algorithm selects the next smallest element and places it in the correct position.

Image 49: Final sorted list

Comparison between bubble sort and selection sort:

Bubble sort                                    Selection sort

Compares and swaps adjacent elements           Selects the smallest remaining element (for
                                               ascending order) and swaps it into its final
                                               position

Time complexity O(n^2)                         Time complexity O(n^2)

Less efficient                                 A higher level of efficiency than bubble sort

It is stable                                   It is not stable

Exchanging method                              Selection method

It is slow                                     Fast as compared to bubble sort

Bubble Sort and Selection Sort are two commonly used algorithms for sorting a list of elements.
However, they differ in the way they compare and swap elements.

Bubble Sort compares each element with its adjacent element and swaps them if they are not in
the correct order. This process is repeated until the entire list is sorted. In contrast, Selection Sort selects
the smallest or largest element in the list and places it in its correct position by swapping it with the
element at the beginning of the unsorted list.

Although both algorithms have a worst-case time complexity of O(n^2) and perform a similar
number of comparisons, Bubble Sort can perform up to O(n^2) swaps, whereas Selection Sort performs
at most n-1 swaps. This makes Selection Sort more efficient than Bubble Sort in practice.

Bubble Sort is a stable algorithm, meaning that the relative order of equal elements is preserved
after sorting. However, Selection Sort is an unstable algorithm, meaning that the relative order of equal
elements may change after sorting.

In terms of efficiency, Bubble Sort is considered to be one of the simplest and least efficient
algorithms, largely because it performs far more swaps than Selection Sort. On the other hand,
Selection Sort is faster and more efficient than Bubble Sort.

In summary, both Bubble Sort and Selection Sort have their advantages and disadvantages. While
Bubble Sort is a simple algorithm, it is less efficient than Selection Sort. Selection Sort, on the other hand,
is faster and more efficient but is not stable.
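The difference in swap counts can be seen by instrumenting both algorithms. This is a hypothetical measurement harness (the function names are my own), with a fully reversed input chosen as a worst case:

```python
def bubble_sort_counts(arr):
    """Return (comparisons, swaps) made by Bubble Sort on a copy of arr."""
    arr = list(arr)
    comparisons = swaps = 0
    n = len(arr)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            comparisons += 1
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swaps += 1
    return comparisons, swaps

def selection_sort_counts(arr):
    """Return (comparisons, swaps) made by Selection Sort on a copy of arr."""
    arr = list(arr)
    comparisons = swaps = 0
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            comparisons += 1
            if arr[j] < arr[min_idx]:
                min_idx = j
        if min_idx != i:
            arr[i], arr[min_idx] = arr[min_idx], arr[i]
            swaps += 1
    return comparisons, swaps

data = [7, 6, 5, 4, 3, 2]               # worst case: fully reversed
print(bubble_sort_counts(data))          # → (15, 15): every comparison triggers a swap
print(selection_sort_counts(data))       # → (15, 3): same comparisons, far fewer swaps
```

Both algorithms make the same 15 comparisons on six elements, but Bubble Sort swaps on every one of them while Selection Sort needs only three swaps.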

VI. Examine the advantages of encapsulation and information hiding when using an ADT (M3).

 Advantages of encapsulation and information hiding when using an ADT:

Encapsulation:

 Data protection:

Data protection is an important aspect of encapsulation in object-oriented programming. By
making the data members of an Abstract Data Type (ADT) private, we restrict direct access to them from
outside the class. Instead, we provide public methods or interfaces to interact with and manipulate the
data.

This approach ensures that the data in the ADT is protected from unauthorized access or
modification by external code. Only the methods defined within the class have the ability to access and
modify the private data members. This helps to maintain the integrity and security of the data.

For example, let's consider a class representing a bank account. The account balance is a private
data member, and it should not be directly accessible or modified by code outside the class. Instead, the
class provides public methods like deposit() and withdraw() to interact with and manipulate the account
balance.

By encapsulating the data in this way, we ensure that only authorized operations can be
performed on the account balance. Unauthorized code cannot directly access or modify the balance,
reducing the risk of data corruption or unauthorized transactions.
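A minimal Python sketch of the bank-account example (the class and method names are illustrative): the balance is kept private, and every change goes through `deposit()` and `withdraw()`, which validate the operation:

```python
class BankAccount:
    """Encapsulated account: the balance is private and changes only via public methods."""

    def __init__(self, opening_balance=0):
        self.__balance = opening_balance       # double underscore: name-mangled, not public

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit amount must be positive")
        self.__balance += amount

    def withdraw(self, amount):
        if amount <= 0 or amount > self.__balance:
            raise ValueError("invalid withdrawal amount")
        self.__balance -= amount

    def balance(self):
        return self.__balance                   # read-only access to the protected state

account = BankAccount(100)
account.deposit(50)
account.withdraw(30)
print(account.balance())  # → 120
```

Because of Python's name mangling, external code cannot simply read or assign `account.__balance`, so every update is forced through the validating methods.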

In addition to protecting data, encapsulation also provides other benefits such as code
maintainability and flexibility. By encapsulating data within a class, we can easily modify the internal
implementation without affecting the code that uses the class. This promotes code reusability and makes
it easier to maintain and update the system over time.

Overall, encapsulation plays a crucial role in data protection by restricting direct access to data
members and providing controlled access through public methods or interfaces. It helps to ensure the
privacy, integrity, and security of data within an ADT.

 Modularity and maintainability:

Encapsulation not only provides data protection but also contributes to the modularity and
maintainability of Abstract Data Types (ADTs). By hiding the internal implementation details of an ADT,
encapsulation allows for changes to be made to the implementation without impacting the code that
uses the ADT.

One of the key benefits of encapsulation is modularity. Modularity refers to the division of a
system into smaller, self-contained modules. By encapsulating the internal implementation of an ADT,
we create a clear separation between the implementation and the interface exposed to the users. This
separation allows for easier understanding and management of the codebase.

With encapsulation, each module can be developed, tested, and maintained independently.
Changes made to the internal implementation of an ADT, such as optimizing performance or fixing bugs,
can be done without affecting the code that uses the ADT's public interface. This modular approach
enhances code maintainability as it reduces the risk of unintended side effects and makes it easier to
track and fix issues.

Furthermore, encapsulation promotes code reusability. Once an ADT is encapsulated with a well-
defined interface, it can be easily reused in different parts of a program or in other projects. The
encapsulated ADT can be seen as a black box, where users only need to understand and interact with the
public interface without worrying about its internal workings. This encourages code reuse, reduces
redundancy, and improves overall development efficiency.

In summary, encapsulation enhances modularity and maintainability by separating the internal
implementation of an ADT from its public interface. This allows for independent development and
maintenance of modules, facilitates code reuse, and enables changes to be made to the implementation
without impacting users of the ADT.

 Code reusability:

Code reusability is an important concept in software engineering as it allows developers to save
time and resources by using existing code to build new applications. One way to achieve code reusability
is through the use of Abstract Data Types (ADTs) and encapsulation.

Encapsulation is the practice of hiding the internal details of an object from the outside world. In
the context of ADTs, this means that the implementation details of the data type are hidden from the
user, and only the interface is exposed. This allows the ADT to be used without the user needing to know
how it is implemented, making it more reusable.

By encapsulating the implementation details of an ADT, developers can change the underlying
code without affecting the code that uses it. This means that changes can be made to improve
performance, fix bugs, or add new features without breaking existing code. Additionally, encapsulation
helps to prevent unintended modifications to the internal state of an object, which can lead to
unexpected behavior.

In summary, encapsulation is a powerful tool for achieving code reusability in software
development. By hiding the implementation details of an ADT from the outside world, developers can
create more flexible and reusable code that can be used in a variety of contexts without modification.

Information hiding:

 Abstraction:

Abstraction is a fundamental concept in software development that allows us to focus on the
essential aspects of a system while hiding unnecessary details. In the context of Abstract Data Types
(ADTs), information hiding plays a crucial role in achieving abstraction.

By hiding the implementation details of an ADT, we can create a clear separation between how
the ADT works internally and how it is used externally. This separation allows us to abstract away the
complexities of the implementation and focus solely on the interface of the ADT.

The interface of an ADT defines the operations that can be performed on it and the behavior that
can be expected. Users of the ADT only need to know how to use these operations without having to
worry about how they are implemented behind the scenes. This simplifies the usage of the ADT and
makes it easier to understand, especially for other developers who may be working with the code.

Abstraction through information hiding also provides benefits in terms of code maintenance and
evolution. Since the internal implementation details are hidden, changes and optimizations can be made
to the implementation without affecting the code that uses the ADT. This allows for greater flexibility
and adaptability as the system evolves over time.

In summary, information hiding and abstraction in ADTs make them easier to use and understand
by abstracting away unnecessary implementation details. This simplifies code usage, improves code
maintainability, and allows for future enhancements without impacting existing code.

 Reduced complexity:

Reducing complexity is a key goal in software development as it helps to improve code quality,
readability, and maintainability. Information hiding is an important technique that can be used to
achieve this goal, particularly in the context of Abstract Data Types (ADTs).

By hiding the implementation details of an ADT, we can simplify the code that uses it. This is
because users of the ADT only need to know how to use its interface without having to worry about how
it works internally. This reduces the cognitive load required to work with the code and makes it easier to
read, write, and maintain.

Furthermore, information hiding can help to reduce the likelihood of errors and bugs in the code.
Since the implementation details are hidden, users of the ADT are less likely to make unintended
modifications to its internal state. This can help to prevent bugs caused by incorrect usage of the ADT.

Reducing complexity through information hiding also has benefits for code maintenance. Since
the implementation details are hidden, changes can be made to the implementation without affecting
the code that uses the ADT. This makes it easier to modify and update the code over time without
introducing new bugs or breaking existing functionality.

In summary, information hiding can help to reduce complexity in software development by
simplifying code usage, improving code quality, and making it easier to maintain over time. By hiding the
implementation details of an ADT, we can create more flexible and reusable code that is easier to work
with and less prone to errors.

 Improved security:

Security is a critical concern in software development, particularly when it comes to protecting
sensitive data from unauthorized access. Information hiding is a powerful technique that can be used to
improve the security of code by hiding sensitive data from prying eyes.

In the context of Abstract Data Types (ADTs), information hiding can be used to hide the internal
state of an object from the outside world. This means that sensitive data can be stored inside an ADT
without being directly accessible to users of the ADT. This can help to prevent unauthorized access to the
data and improve the security of the system as a whole.

For example, consider a system that stores user passwords. By using an ADT to store the
passwords, the implementation details of how the passwords are stored and encrypted can be hidden
from the outside world. This makes it more difficult for attackers to gain access to the passwords, even if
they manage to gain access to the code that uses the ADT.

Information hiding can also help to prevent unintended modifications to sensitive data. By hiding
the implementation details of an ADT, users of the ADT are less likely to make accidental modifications
to the internal state of the object. This can help to prevent data corruption and improve the overall
security of the system.

In summary, information hiding can be used to improve security in software development by
hiding sensitive data from unauthorized access and preventing unintended modifications to that data. By
using ADTs to store sensitive data, we can create more secure and robust systems that are less
vulnerable to attacks and data breaches.

Example:

Let's consider a stack Abstract Data Type (ADT) as an example. A stack follows the Last In First
Out (LIFO) principle, where the last element inserted is the first one to be removed.

To encapsulate the stack ADT, we can make the data members of the ADT private and provide
public methods for accessing and manipulating the stack. For instance, a stack ADT could have the
following public methods:

 push(element): Inserts an element into the stack.

 pop(): Removes the top element from the stack and returns it.

 is_empty(): Returns true if the stack is empty, false otherwise.

The internal implementation of the stack ADT, such as using an array or a linked list, can be
hidden from the outside world.
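A minimal Python sketch of this stack ADT, assuming a list as the hidden backing store (a linked list would work equally well; the user of the class never sees the choice):

```python
class Stack:
    """LIFO stack ADT: only push, pop, and is_empty form the public interface."""

    def __init__(self):
        self.__items = []            # hidden implementation detail

    def push(self, element):
        """Inserts an element into the stack."""
        self.__items.append(element)

    def pop(self):
        """Removes the top element from the stack and returns it."""
        if self.is_empty():
            raise IndexError("pop from an empty stack")
        return self.__items.pop()

    def is_empty(self):
        """Returns True if the stack is empty, False otherwise."""
        return len(self.__items) == 0

s = Stack()
s.push("A")
s.push("B")
print(s.pop())       # → B  (last in, first out)
print(s.is_empty())  # → False
```

Swapping the list for a linked list later would change only the method bodies, not any code that uses the stack, which is exactly the maintainability benefit described above.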

By encapsulating and hiding the implementation details of the stack ADT, several benefits are
achieved:

 Data protection: The data in the stack is protected from unauthorized access or modification by
external code.

 Modularity and maintainability: The stack ADT becomes more modular, allowing for easier
maintenance and updates without affecting the code that uses it.

 Code reusability: The encapsulated stack ADT can be reused in different contexts without
modifications, enhancing code reusability.

 Abstraction: Users of the stack ADT only need to know how to use its public methods, abstracting
away unnecessary implementation details.

 Reduced complexity: The code that uses the stack ADT becomes less complex since the
implementation details are hidden, leading to improved readability and ease of understanding.

 Improved security: By hiding sensitive data within the encapsulated stack ADT, unauthorized access
to the data is prevented, enhancing overall system security.

In conclusion, encapsulation and information hiding are crucial concepts in object-oriented
programming. When applied to ADTs like the stack example, they offer various advantages such as data
protection, modularity, code reusability, abstraction, reduced complexity, and improved security. By
utilizing encapsulation and information hiding, developers can create more maintainable, secure, and
reusable code.

In addition to the benefits mentioned earlier, encapsulation and information hiding also
contribute to better code organization and collaboration among developers.

Encapsulation allows for the clear separation of concerns within a codebase. By encapsulating the
implementation details of an ADT, developers can focus on specific functionalities and responsibilities
without being overwhelmed by the entire system. This promotes better code organization and modular
design, making it easier to understand and maintain the codebase as it grows.

Furthermore, encapsulation facilitates collaboration among developers. When different team
members are working on different parts of a project, encapsulation ensures that they can work
independently without interfering with each other's code. Each developer can focus on their assigned
tasks related to the ADT without needing to understand or modify the implementation details of other
components. This promotes parallel development and reduces dependencies, leading to more efficient
teamwork.

Additionally, encapsulation and information hiding can also improve code documentation and
readability. By exposing only the necessary interface of an ADT, it becomes easier for developers to
understand how to use the ADT correctly. The public methods serve as a clear contract or API, making it
easier to document and communicate the intended usage of the ADT to other developers.

Overall, encapsulation and information hiding not only enhance code reusability, security, and
complexity reduction but also contribute to better code organization, collaboration, and documentation.
By leveraging these principles effectively, developers can create more maintainable, scalable, and robust
software systems.

VII. Analyse the operation, using illustrations, of two network shortest path algorithms, providing an
example of each (D1).

1) Specify the abstract data type for a software stack.

A software stack refers to a collection of programs that work together to produce a desired
outcome, such as an operating system and its associated applications. For example, the software stack
on a smartphone includes the operating system, along with various applications like the phone app, web
browser, and other basic utilities. A software stack can also refer to any group of applications that work
in a sequence towards a common goal, or any set of routines or utilities that work together.

To enable these programs to work together effectively, abstract data types like Stacks (LIFO) and
Queues (FIFO) are required. These data structures help manage and organize the flow of data within the
software stack.

To better understand how a software stack works, let's explore the stack operations on an
Android device. Every application running on an Android device has an active stack maintained by the
runtime system. When an application is launched, its first operation is placed on the stack. As
subsequent operations are initiated, they are placed on top of the stack, with the previous activity
pushed down. The operation at the top of the stack is considered active and running.

When an operation is completed, it is removed from the top of the stack by the runtime system,
and the operation just below it becomes the current activity. The topmost operation can be terminated
if its designated task has been completed or if the user selects the "Back" button to return to the
previous activity. In this case, the current activity is removed from the stack by the runtime system and
destroyed.

In summary, understanding the operations of a software stack is essential for developing
effective software applications. By leveraging data structures like Stacks and Queues, we can better
manage and organize data flow within a software stack. The Android operating system provides an
example of how stacks work in practice, with each application having its own active stack managed by
the runtime system.

Image 50: Activity Stack

In the context of an Android device, the diagram illustrates how activities are managed within the
activity stack. When a new activity is started, it is pushed onto the top of the stack. The currently active
activity remains at the top of the stack until it is either pushed down by a new activity being started or
popped off the stack when it exits or when the user navigates to the previous activity.

The activity stack follows a Last-In-First-Out (LIFO) order, meaning that the most recently added
activity is the first one to be removed. This behavior is similar to how a stack data structure operates,
where the last item pushed onto the stack is the first one to be popped off.

In situations where system resources become constrained, the Android runtime may need to
reclaim resources by killing activities. When this occurs, activities at the bottom of the stack are typically
the first ones to be killed. This ensures that the most recently used activities, which are closer to the top
of the stack, have a higher chance of being retained.

Understanding the behavior of the activity stack is crucial for Android developers as it affects how
activities are managed and how the user navigates through different screens or functionalities within an
application. By following the LIFO principle, the activity stack provides a structured approach to
managing the lifecycle of activities and optimizing resource usage on Android devices.

Image 51: diagram illustrates

2) Two network shortest path algorithms

 Bellman-Ford Algorithm:

The Bellman-Ford algorithm is a solution for the single-source shortest path problem in graphs
that can have negative edge weights and directed edges. If the graph is undirected, it can be made
directed by replacing each undirected edge with two directed edges, one in each direction.

One of the key advantages of the Bellman-Ford algorithm is its ability to detect negative weight
cycles that are reachable from the source. A negative weight cycle is a cycle in the graph where the sum
of the weights along the cycle is negative. In such cases, there is no shortest path as a path can endlessly
loop on the negative weight cycle, continuously reducing the path cost. The Bellman-Ford algorithm can
identify the presence of a negative weight cycle and report that no shortest path exists.

However, if there is no negative weight cycle in the graph, the Bellman-Ford algorithm returns
the weight of the shortest path from the source to each vertex along the respective paths. It computes
and updates the shortest path distances iteratively, considering all possible edges in each iteration until
convergence is achieved.

By considering both positive and negative edge weights, the Bellman-Ford algorithm provides a
versatile solution for finding shortest paths in a wide range of graph scenarios. It can handle graphs with
negative weights, which is a limitation of some other shortest path algorithms like Dijkstra's algorithm.

In summary, the Bellman-Ford algorithm is a powerful tool for finding shortest paths in graphs
with negative edge weights and directed edges. It can detect negative weight cycles and accurately
compute the shortest path distances when such cycles are not present.

How does the Bellman-Ford algorithm work?

The Bellman-Ford algorithm is a dynamic programming approach that calculates the shortest
path from the starting vertex to all other vertices in a graph. It initially overestimates the length of the
path to all vertices and then iteratively refines these estimates by finding new paths that are shorter
than the previously overestimated paths.

Similar to other dynamic programming problems, the Bellman-Ford algorithm calculates the
shortest path in a bottom-up manner. It first calculates the shortest distance with at most one edge in
the path. Then it proceeds to calculate the shortest path with up to two edges, and so on. After each
iteration of the outer loop, the shortest path with at most i edges is calculated. Since there can be a
maximum of |V|-1 edges in any simple path, the outer loop runs |V|-1 times.

The key idea behind the Bellman-Ford algorithm is that, assuming there is no negative weight
cycle, if we have calculated the shortest paths with up to i edges, then iterating over all edges ensures
that the shortest path with i+1 edges is also calculated.

One advantage of the Bellman-Ford algorithm is that it can handle graphs with negative weight
edges, unlike some other shortest path algorithms like Dijkstra's algorithm. However, it has a higher time
complexity of O(|V||E|) compared to Dijkstra's algorithm with a time complexity of O(|E|+|V|log|V|).

To summarize, the Bellman-Ford algorithm is a powerful tool for finding the shortest path from a
single source vertex to all other vertices in a graph, even when negative weight edges are present. It uses
a dynamic programming approach to iteratively refine path length estimates until convergence is
achieved.

Algorithm:

The Bellman-Ford algorithm is a popular algorithm for finding the shortest path from a single
source vertex to all other vertices in a graph. The algorithm works by initially setting the distance value
of the source vertex to 0 and the distance value of all other vertices to infinity. It then iteratively updates
the distance value of each vertex by considering all adjacent vertices.

To update the distance values, the algorithm iterates through all adjacent vertices of a given
vertex u. For every adjacent vertex v, it checks if the sum of the distance value of u (from the source) and
the weight of edge u-v is less than the distance value of v. If this is the case, it updates the distance value
of v to be the sum of the distance value of u and the weight of edge u-v.

This process is repeated for all vertices in the graph for a total of |V|-1 iterations, where |V| is
the total number of vertices in the graph. This ensures that all possible paths from the source vertex to
all other vertices are considered.

One advantage of the Bellman-Ford algorithm is that it can handle graphs with negative weight
edges, unlike some other shortest path algorithms like Dijkstra's algorithm. However, it has a higher time
complexity of O(|V||E|) compared to Dijkstra's algorithm with a time complexity of O(|E|+|V|log|V|).

To better understand how the Bellman-Ford algorithm works, let's consider an example. Suppose
we have a graph with 5 vertices and 7 edges, and we want to find the shortest path from vertex 1 (the
source) to all other vertices. We start by setting the distance value of vertex 1 to 0 and the distance value
of all other vertices to infinity.

In the first iteration, we update the distance values of vertices 2, 3, and 5 based on their adjacent
edges. In the second iteration, we update the distance values of vertices 3 and 4 based on their adjacent
edges. In the third iteration, we update the distance value of vertex 4 based on its adjacent edge. Finally,
in the fourth iteration, no updates are made, as all distance values have converged.

After these iterations, we have calculated the shortest path from vertex 1 to all other vertices in
the graph. By storing the predecessor of each vertex during these iterations, we can also reconstruct the
actual shortest path from vertex 1 to any other vertex in the graph.

In summary, the Bellman-Ford algorithm is a powerful tool for finding the shortest path from a
single source vertex to all other vertices in a graph, even when negative weight edges are present. It uses
an iterative approach to update distance values based on adjacent edges until convergence is achieved.

Let all edges be processed in the following order: (B, E), (D, B), (B, D), (A, B), (A, C), (D, C), (B, C),
(E, D). We get the distances after all edges are processed for the first time. The first row shows the
original distances. The second row shows the distances after (B, E), (D, B), (B, D) and (A, B) are processed.
The third row shows the distances after (A, C) is processed. The fourth row shows the distances after
(D, C), (B, C) and (E, D) are processed.

The second iteration ensures that all shortest paths of at most two edges have been found. The
algorithm processes all edges two more times, but the distances are already minimized after the second
iteration, so the third and fourth iterations make no further updates.
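
The passes above can be reproduced with a short Python sketch. Since the original figure is not shown here, the edge weights below are assumed from a common textbook version of this five-vertex example with source A; with these weights, no distance changes after the second pass, matching the description:

```python
import math

# Assumed weights: the original figure is not reproduced here, so these
# follow a common textbook version of this five-vertex example (source A).
weights = {('A', 'B'): -1, ('A', 'C'): 4, ('B', 'C'): 3, ('B', 'D'): 2,
           ('B', 'E'): 2, ('D', 'B'): 1, ('D', 'C'): 5, ('E', 'D'): -3}

# The edge processing order described in the text.
order = [('B', 'E'), ('D', 'B'), ('B', 'D'), ('A', 'B'),
         ('A', 'C'), ('D', 'C'), ('B', 'C'), ('E', 'D')]

dist = {v: math.inf for v in 'ABCDE'}
dist['A'] = 0
history = []                            # whether each pass changed anything

for _ in range(len(dist) - 1):          # |V| - 1 = 4 passes
    changed = False
    for u, v in order:
        if dist[u] + weights[(u, v)] < dist[v]:
            dist[v] = dist[u] + weights[(u, v)]
            changed = True
    history.append(changed)
```

With these assumed weights the final distances are A=0, B=-1, C=2, D=-2, E=1, and `history` records that only the first two passes make updates.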

 Dijkstra’s Algorithm

Dijkstra's algorithm is a popular algorithm for finding the shortest path from a single source vertex
to all other vertices in a graph. It uses a greedy, best-first search approach that repeatedly finalizes the
closest unvisited vertex. However, it does have a constraint: the graph cannot have negative weight edges.

One significant improvement of Dijkstra's algorithm over the Bellman-Ford algorithm is its running
time. Dijkstra's algorithm has a time complexity of O(|E|+|V|log|V|), which is generally faster than the
Bellman-Ford algorithm's time complexity of O(|V||E|).

In addition to solving the single-source shortest path problem, Dijkstra's algorithm can also be used
to solve the shortest path problem for all pairs of vertices by running it once from every vertex. However,
this requires that all edge weights in the graph be non-negative.

Dijkstra's algorithm allows you to calculate the shortest path between a chosen source node and
every other node in the graph. By maintaining a priority queue of vertices and iteratively selecting the
vertex with the smallest distance value, the algorithm gradually explores and updates the distances to all
reachable vertices. The algorithm terminates when all vertices have been visited or when the destination
vertex is reached.

It is important to note that Dijkstra's algorithm does not work correctly if there are negative weight
edges in the graph. In such cases, other algorithms like the Bellman-Ford algorithm or specialized
algorithms for graphs with negative weights should be used.

In summary, Dijkstra's algorithm is a powerful tool for finding the shortest path from a single source
vertex to all other vertices in a graph. It offers improved running time compared to the Bellman-Ford
algorithm but requires that the graph does not contain negative weight edges. It can also be extended to
solve the shortest path problem for all pairs of vertices under the condition of positive edge weights.

How Dijkstra's Algorithm works

Dijkstra's algorithm relies on the optimal substructure of shortest paths: if vertex B lies on the
shortest path A -> D, then the portion of that path from B to D is itself the shortest path between B and D.

Dijkstra's algorithm is a popular algorithm for finding the shortest path from a single source vertex
to all other vertices in a graph. It works by initially overestimating the distance of each vertex from the
start and then iteratively visiting each vertex and its neighbors to find the shortest path to those
neighboring nodes.

The algorithm uses a greedy approach in the sense that it chooses the next best solution at each
step, hoping that the end result is the best solution for the entire problem. Specifically, Dijkstra's
algorithm maintains a priority queue of vertices, where the priority is determined by the estimated
distance from the source vertex. At each iteration, the algorithm selects the vertex with the smallest
estimated distance and explores its neighbors to update their estimated distances if a shorter path is
found.
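
The priority-queue scheme just described can be sketched with Python's heapq module. This is an illustrative lazy-deletion variant, not a definitive implementation; the adjacency-list format is an assumption for the example:

```python
import heapq
import math

def dijkstra(adj, source):
    """adj maps each vertex to a list of (neighbor, weight) pairs;
    all weights are assumed non-negative."""
    dist = {v: math.inf for v in adj}
    dist[source] = 0
    pq = [(0, source)]                  # priority queue keyed by distance
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:                 # stale entry: u was already finalized
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:         # relax edge u -> v
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist
```

Rather than decreasing a key in place, this variant pushes a new queue entry whenever a distance improves and discards stale entries when they are popped, which is the usual idiom with Python's heapq.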

One key property of Dijkstra's algorithm is that it only works correctly on graphs without negative
weight edges. This is because the algorithm assumes that once a vertex has been finalized with the
smallest known distance, that distance can never be improved, an assumption that fails in the presence
of negative weight edges.

Despite this limitation, Dijkstra's algorithm is widely used due to its efficiency and practicality. It has
a time complexity of O(|E|+|V|log|V|), which is generally faster than other algorithms like the Bellman-
Ford algorithm. Additionally, it can be extended to solve the shortest path problem for all pairs of
vertices by running it once from every vertex.

In summary, Dijkstra's algorithm is a powerful tool for finding the shortest path from a single source
vertex to all other vertices in a graph. It uses a greedy approach to iteratively update estimated distances
and maintain a priority queue of vertices. However, it only works correctly on graphs without negative
weight edges. For instance, let us calculate the shortest path between node C and the other nodes in the
chart below:

During the execution of the algorithm, we will mark every node with its minimum distance to
node C (our selected node). For node C, this distance is 0. For the rest of the nodes, since we do not yet
know the minimum distance, it starts at infinity (∞):

We will also keep track of a current node. Initially, we set it to C (our selected node); in the
picture, the current node is marked with a red dot. Now, we check the neighbors of our current node
(A, B and D) in no particular order. Let's start with B. We add the minimum distance of the current node
(in this case 0) to the weight of the edge connecting the current node to B (in this case 7), giving
0 + 7 = 7. We compare that value to the minimum distance of B (infinity); the lower value is kept as the
minimum distance of B (in this case, 7 is smaller than infinity):
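
The comparison just made is the relaxation step at the heart of the algorithm. As a tiny sketch (only the C-to-B weight of 7 is taken from the text; the rest of the graph is omitted):

```python
import math

dist = {'C': 0, 'B': math.inf}      # C is the source; B is not yet reached

# Relax the edge C -> B with weight 7 (the weight stated in the text):
candidate = dist['C'] + 7           # 0 + 7 = 7
dist['B'] = min(dist['B'], candidate)
```

The same min comparison is then applied to A and D with their own edge weights.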

3) Comparison of Bellman Ford’s Algorithm and Dijkstra’s Algorithm

Bellman Ford’s Algorithm works when there is a negative weight edge, and it also detects negative
weight cycles. Dijkstra’s Algorithm does not work when there is a negative weight edge.

In Bellman Ford’s Algorithm, each vertex only needs information about the vertices it is directly
connected to. In Dijkstra’s Algorithm, computing the result requires information about the whole
network, not only the vertices each vertex is connected to.
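
The negative-cycle detection mentioned for Bellman Ford's algorithm comes from one extra relaxation pass: if any distance can still improve after |V|-1 passes, a negative cycle must be reachable. A minimal sketch, assuming vertex 0 as the source and (u, v, weight) edge tuples:

```python
import math

def has_negative_cycle(num_vertices, edges):
    """Bellman-Ford's extra pass: if any edge can still be relaxed after
    |V| - 1 passes, a negative weight cycle is reachable from vertex 0."""
    dist = [math.inf] * num_vertices
    dist[0] = 0
    for _ in range(num_vertices - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return any(dist[u] + w < dist[v] for u, v, w in edges)
```

Note that this sketch only detects negative cycles reachable from vertex 0; detecting cycles anywhere in the graph would require starting from every component.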

Conclusion

In conclusion, we have explored various important concepts related to data structures in this
assignment. We have seen how to create a design specification for a stack ADT, two sorting algorithms,
and two network shortest path algorithms. We have also discussed how to specify an abstract data type
using the example of a software stack and the advantages of encapsulation and information hiding when
using an ADT. Finally, we have delved into the topic of imperative ADTs with regard to object orientation.

By understanding these concepts, you will be better equipped to design and implement data
structures in your own programming projects. Whether you are working on a small personal project or a
large-scale enterprise application, the ability to create efficient and effective data structures is crucial. I
hope that this assignment has been informative and that you feel confident in your ability to work with
data structures going forward.

References

1. Codeforwin (2019). Data Structure: Singly Linked List. [online] Available at:
https://codeforwin.org/2015/09/singly-linked-list-data-structure-in.html.

2. Tutorialspoint.com (2019). Data Structure and Algorithms Selection Sort. [online] Available at:
https://www.tutorialspoint.com/data_structures_algorithms/selection_sort_algorithm.htm.

3. Sciencing.com (2019). The Advantages and Disadvantages of Sorting Algorithms. [online] Available at:
https://sciencing.com/the-advantages-disadvantages-of-sorting-algorithms-12749529.html.

4. Studytonight.com (2019). Bubble Sort Algorithm. [online] Available at:
https://www.studytonight.com/data-structures/bubble-sort.

5. Getrevising.co.uk (2019). Sort Bubble Algorithm. [online] Available at:
https://getrevising.co.uk/grids/sort-bubble-algorithm.

6. Techwalla (2019). Advantages and Disadvantages of Bubble Sort. [online] Available at:
https://www.techwalla.com/articles/advantages-disadvantages-of-bubble-sort.

7. En.wikipedia.org (2019). Linear search. [online] Available at:
https://en.wikipedia.org/wiki/Linear_search.

8. 2braces.com (2019). Linear Search Algorithm - Data Structures. [online] Available at:
https://www.2braces.com/data-structures/linear-search.

9. En.wikipedia.org (2019). Binary search algorithm. [online] Available at:
https://en.wikipedia.org/wiki/Binary_search_algorithm.
