Linked List, Stack, Queue


Advantages Of Linked List

1. Dynamic Data Structure: In a linked list, memory is allocated dynamically, so elements can
easily be added or removed at runtime. Hence, there is no need to specify an initial size.
2. Implementation: The linked list is a very useful data structure for implementing other data
structures such as stacks and queues.
3. No Memory Wastage: As discussed in the first point, memory is allocated dynamically, so
memory that is no longer in use can be freed and no memory is wasted.
4. Insertion and Deletion: In a linked list, insertion and deletion operations are efficient compared to
other data structures such as arrays, because no shifting of elements is required; only the
pointers need to be updated.
5. Versatility: Linked lists can be used to implement a wide range of data structures, including
stacks, queues, associative arrays, and graphs, as well as linked structures such as trees and hash
tables (with chaining).
6. Persistence: Linked lists can be used to implement persistent data structures, that is, data
structures that can be modified and accessed across multiple program executions. This is because
linked lists can easily be serialized and stored in non-volatile memory.

Disadvantages Of Linked List


1. Memory Usage: A linked list uses more memory because each node must also store the
address of the next node.
2. Accessing an element: We cannot access an element of a linked list directly. To reach the
ith element of the linked list, we must traverse the list from the head up to the ith node.
3. Reverse Traversal: Reverse traversal is not possible in a singly linked list because we do
not store the address of the previous node. It is possible in a doubly linked list, but that
again consumes more memory, since each node must also store the address of its
previous node.
4. More complex implementation: Linked lists can be more complex to implement than other data
structures, such as arrays or stacks. They require knowledge of dynamic memory allocation and
pointer manipulation, which can be difficult for novice programmers.
5. Lack of cache locality: Linked lists may not take advantage of the caching mechanisms in modern
processors, since accessing elements in a linked list requires jumping to different locations in
memory. This can result in slower performance compared to data structures with better
cache locality, such as arrays.

Applications of Linked Lists


1. It can be used in a photo viewer to allow for continuous viewing of images in a slide show.
2. It is used to build queues and stacks, two concepts that are important to computer science.
3. A circular list is used in multiplayer video games to loop between players.
4. Navigation systems, which require both front and back navigation, make excellent use of the doubly
linked list.
5. It is used to represent a deck or piles of cards in card games.
Singly-linked list: A singly linked list can be defined as a collection of ordered
elements. The number of elements may vary according to the needs of the program. A node
in a singly linked list consists of two parts: the data part and the link part. The data part
of the node stores the actual information represented by the node, while the link
part stores the address of its immediate successor. A one-way chain or singly
linked list can be traversed in only one direction: each node contains only a next
pointer, so the list cannot be traversed in the reverse direction.
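A minimal sketch of a singly linked list node in C, with a simple traversal, is shown below. The names Node, createNode, and printList are illustrative choices for this sketch, not taken from the text above.

#include <stdio.h>
#include <stdlib.h>

/* A node of a singly linked list: data plus a pointer to the next node. */
struct Node {
    int data;
    struct Node *next;
};

/* Create a new node holding the given value. */
struct Node *createNode(int value) {
    struct Node *node = malloc(sizeof(struct Node));
    node->data = value;
    node->next = NULL;
    return node;
}

/* Traverse the list in one direction only, printing each element. */
void printList(struct Node *head) {
    for (struct Node *cur = head; cur != NULL; cur = cur->next)
        printf("%d -> ", cur->data);
    printf("NULL\n");
}

int main(void) {
    struct Node *head = createNode(1);   /* builds 1 -> 2 -> 3 -> NULL */
    head->next = createNode(2);
    head->next->next = createNode(3);
    printList(head);
    return 0;
}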

Doubly-linked list: A doubly linked list is a more complex type of linked list in which a node
contains a pointer to the previous as well as the next node in the sequence. Therefore, in a
doubly linked list, a node consists of three parts: the node data, a pointer to the next node in
the sequence (next pointer), and a pointer to the previous node (previous pointer).
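A doubly linked list node in C might be declared as in the following sketch; the struct and field names are illustrative.

/* A node of a doubly linked list: data plus pointers in both directions. */
struct DNode {
    int data;
    struct DNode *prev;   /* pointer to the previous node */
    struct DNode *next;   /* pointer to the next node */
};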

Circular-linked list: A circular linked list is a variation of a linked list in which the last
node points back to the first node, so the list forms a loop (in the doubly linked variant, the
first node also points back to the last). Both singly linked lists and doubly linked lists can
be made circular.
Circular doubly-linked list
A circular doubly linked list is a mixture of a doubly linked list and a circular linked list.
Like the doubly linked list, it has an extra pointer called the previous pointer, and similar to
the circular linked list, its last node points at the head node. This type of linked list is a
bi-directional list. So, you can traverse it in both directions.

Header-linked List
A header-linked list is a type of linked list that has a header node at the beginning of the
list. In a header-linked list, HEAD points to the header node instead of the first node of the
list. The header node does not represent an item in the linked list. The data part of this
node is generally used to hold any global information about the entire linked list. The next
part of the header node points to the first node in the list.

A header-linked list can be divided into two types:


Grounded-header linked list: a grounded header linked list stores NULL in the next field of
the last node.
Circular-header linked list: a circular header linked list stores the address of the header
node in the next field of the last node of the list.
Two-way circular header list: A two-way circular header list is a data structure that
represents a collection of elements, where each element contains a value and two pointers
that reference the previous and next elements in the list.
In a two-way circular header list, the first and last elements are connected, creating a loop
or circle, and a header node is used to keep track of the beginning and end of the list. This
means that the list can be traversed in both directions, from the first element to the last, or
from the last element to the first.
The header node does not contain any value, and its previous and next pointers point to the
first and last elements in the list, respectively. This makes it easy to add or remove
elements from the beginning or end of the list, without having to change the header node.
Stack:
A stack is a linear data structure in computer science that operates on the Last-In-First-Out
(LIFO) principle. A stack stores elements in a sequential manner where elements are
inserted (pushed) onto the top of the stack and removed (popped) from the top. The new
element to be added is at the top of the stack. The operations performed on a stack include
push, pop, and peek. Stacks are widely used in computer algorithms and programs for data
manipulation, function calls, and expression evaluations.
PUSH operation of Stack
The PUSH operation of the stack is used to add an item at the top of the stack. We can perform
push operations only at the top of the stack. However, before inserting an item into the stack,
we must check that the stack has some empty space.
PUSH Operation Algorithm of Stack:
if TOP = MAX - 1
    return "Overflow"
end if
TOP = TOP + 1
stack[TOP] = item
end
POP Operation of Stack
The POP operation is performed on the stack to remove an item from the stack. We can perform
the pop operation only at the top of the stack. In an array implementation of the pop()
operation, the data element is not actually erased; instead, TOP is decremented to a
lower position in the stack to point to the next value.
POP Operation Algorithm of Stack:
if TOP = -1
    return "Underflow"
end if
item = stack[TOP]
TOP = TOP - 1
return item
end
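The two algorithms above translate into a small array-based stack in C, sketched below; the capacity MAX and the function names are assumptions made for this sketch.

#include <stdio.h>

#define MAX 100

int stack[MAX];
int top = -1;                     /* -1 means the stack is empty */

/* PUSH: add an item at the top, checking for overflow first. */
int push(int item) {
    if (top == MAX - 1)
        return -1;                /* overflow */
    stack[++top] = item;
    return 0;
}

/* POP: remove the top item into *item, checking for underflow first. */
int pop(int *item) {
    if (top == -1)
        return -1;                /* underflow */
    *item = stack[top--];         /* the slot is not erased; only top moves down */
    return 0;
}

int main(void) {
    int x;
    push(10);
    push(20);
    pop(&x);
    printf("%d\n", x);            /* prints 20: last in, first out */
    return 0;
}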
Real-life application of Stack:
● The evaluation of expressions with operands and operators can be done using a stack.
● Stacks can be used for backtracking, or to verify whether an expression's parentheses match.
● They can also be used to convert an expression from one form to another.
● They can be applied in memory management, for example the function call stack.
● The undo and redo features of text editors are implemented using stacks.
● A web browser stores its browsing history as a stack of items.
Queue: The queue is a linear data structure that follows the principle of
First-In-First-Out (FIFO) which means the element which is added first is the first one to
be removed. It is like a real-world queue where the first person is the first to be served. In a
queue data structure, the elements are added at one end, known as the rear, and removed
from the other end, known as the front. The rear end is also known as the tail, and the front
end is known as the head. Queues are commonly used in computer algorithms, such as
breadth-first search, and scheduling. Enqueue and dequeue are two common operations
used in computer programming, especially in data structures such as queues.
Enqueue: Enqueue refers to the operation of adding an element to the end of a queue. In
other words, the new element is inserted at the rear of the queue. This operation is also
known as "push" in some contexts. When you enqueue an element, you increase the size of
the queue by one.
Dequeue: Dequeue, on the other hand, refers to the operation of removing an element from
the front of a queue. In other words, the oldest element in the queue is removed. This
operation is also known as "pop" in some contexts. When you dequeue an element, you
decrease the size of the queue by one.
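A minimal circular-array queue in C illustrating enqueue and dequeue follows; the capacity CAP and the variable names are illustrative assumptions of this sketch.

#include <stdio.h>

#define CAP 100

int queue[CAP];
int front = 0, rear = 0, count = 0;   /* count tracks how many items are stored */

/* Enqueue: insert the item at the rear; fails if the queue is full. */
int enqueue(int item) {
    if (count == CAP)
        return -1;                    /* overflow */
    queue[rear] = item;
    rear = (rear + 1) % CAP;
    count++;
    return 0;
}

/* Dequeue: remove the item at the front; fails if the queue is empty. */
int dequeue(int *item) {
    if (count == 0)
        return -1;                    /* underflow */
    *item = queue[front];
    front = (front + 1) % CAP;
    count--;
    return 0;
}

int main(void) {
    int x;
    enqueue(1);
    enqueue(2);
    dequeue(&x);
    printf("%d\n", x);                /* prints 1: first in, first out */
    return 0;
}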
Real-life application of Queue:
● Routers and mail queues are examples of queue applications in computer networks.
● Queues are also used for managing website traffic.
● In many applications, including MP3 media players and CD players, queues
serve as buffers.
● Operating systems use queues to manage interrupts.
● Queues are used as waiting lists for a single shared resource, such as a printer, disk, or CPU.
● When we send messages to friends on WhatsApp and they do not have an internet
connection, the messages are queued on WhatsApp's server.
Priority Queue: A priority queue is a data structure that stores elements along with their
priority, and allows for efficient retrieval of the element with the highest priority. It is
similar to a regular queue, but instead of being a first-in, first-out (FIFO) data structure, a
priority queue is a data structure that retrieves elements in order of priority.
Real-life application of Priority Queue:
● Data Compression in WINZIP / GZIP: The Huffman encoding algorithm uses a
priority queue to maintain the codes for data contents. They store these codes in a min
heap, considering the size of codes as a parameter to decide the priority.
● Used in implementing Prim’s algorithm: Prim’s algorithm generates a minimum
spanning tree from an undirected, connected, and weighted graph. It uses a min priority
queue to maintain the order of elements for generating a minimum spanning tree.
● Used to perform the heap sort: When you provide an unsorted array to this algorithm, it
converts it into a sorted array. This algorithm uses a min priority queue to generate an
order of elements.
Tower of Hanoi: The Tower of Hanoi is a mathematical puzzle that consists of three
towers (pegs) and multiple disks. Initially, all the disks are placed on one tower,
stacked one over the other in ascending order of size (largest at the bottom, smallest on top).
The objective is to move all disks from the initial tower to another tower without violating
the rules: only one disk may be moved at a time, and a larger disk may never be placed on
top of a smaller one.
Algorithm
START
Procedure TOH(disk, source, dest, aux)
    IF disk == 1 THEN
        move disk from source to dest
    ELSE
        TOH(disk - 1, source, aux, dest)   // Step 1
        moveDisk(source to dest)           // Step 2
        TOH(disk - 1, aux, dest, source)   // Step 3
    END IF
END Procedure
STOP
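The pseudocode above maps directly onto a short recursive C function; the function and parameter names below mirror the pseudocode and are otherwise illustrative.

#include <stdio.h>

/* Move n disks from peg 'source' to peg 'dest', using peg 'aux' as a spare. */
void toh(int n, char source, char dest, char aux) {
    if (n == 1) {
        printf("Move disk 1 from %c to %c\n", source, dest);
        return;
    }
    toh(n - 1, source, aux, dest);                            /* Step 1 */
    printf("Move disk %d from %c to %c\n", n, source, dest);  /* Step 2 */
    toh(n - 1, aux, dest, source);                            /* Step 3 */
}

int main(void) {
    toh(3, 'A', 'C', 'B');   /* solve for 3 disks: move from A to C using B */
    return 0;
}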

Garbage Collection: Garbage collection (GC) is a dynamic approach to automatic
memory management and heap allocation that processes and identifies dead memory
blocks and reallocates storage for reuse. The primary purpose of garbage collection is to
reduce memory leaks.
In older programming languages, such as C and C++, the developer must manually delete
objects and free up memory. Relying on manual processes made it easy to introduce bugs
into the code, some of which can have serious consequences.
For example, a developer might forget to free up memory after the program no longer
needs it, leading to a memory leak that quickly consumes all the available RAM. Or the
developer might free up an object's memory space without modifying a corresponding
pointer, resulting in a dangling pointer that causes the application to be buggy or even to
crash.
Programming languages that include garbage collection try to eliminate these types of bugs
by using carefully designed GC algorithms to control memory deallocation. The garbage
collector automatically detects when an object is no longer needed and removes it, freeing
up the memory space allocated to that object without affecting objects that are still being
used.
Breadth First Search (BFS): The Breadth First Search algorithm traverses a graph in a
breadth-ward motion and uses a queue to remember the next vertex to visit when a dead
end is reached in any iteration. BFS is a node-based algorithm that can be used to find the
shortest path between two nodes in an unweighted graph. BFS visits all of the nodes
connected to the current node before moving deeper. BFS uses the FIFO (First In First Out)
principle of the queue. However, BFS is generally slower than DFS on deep graphs and
requires more memory, since it must store every vertex of the current frontier.
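A compact BFS sketch in C follows, assuming the graph is stored as an adjacency matrix (described later in these notes) and using a simple array as the queue; the vertex count V and the example graph are assumptions for illustration.

#include <stdio.h>

#define V 5   /* number of vertices, chosen for this example */

/* Breadth First Search from 'start' over an adjacency matrix. */
void bfs(int adj[V][V], int start) {
    int visited[V] = {0};
    int queue[V], front = 0, rear = 0;

    visited[start] = 1;
    queue[rear++] = start;                 /* enqueue the start vertex */

    while (front < rear) {                 /* FIFO order */
        int u = queue[front++];            /* dequeue */
        printf("%d ", u);
        for (int v = 0; v < V; v++) {
            if (adj[u][v] && !visited[v]) {
                visited[v] = 1;
                queue[rear++] = v;         /* enqueue each unvisited neighbour */
            }
        }
    }
    printf("\n");
}

int main(void) {
    int adj[V][V] = {
        {0, 1, 1, 0, 0},
        {1, 0, 0, 1, 0},
        {1, 0, 0, 1, 0},
        {0, 1, 1, 0, 1},
        {0, 0, 0, 1, 0}
    };
    bfs(adj, 0);   /* prints vertices in breadth-first order: 0 1 2 3 4 */
    return 0;
}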

Depth First Search (DFS): The Depth First Search algorithm traverses a graph in a
depth-ward motion and uses a stack to remember the next vertex to visit when a dead end
is reached in any iteration. DFS uses the LIFO (Last In First Out) principle of the stack to
keep track of unexplored vertices. DFS is also called edge-based traversal because it
explores the nodes along an edge or path. DFS is typically faster and requires less memory
than BFS, and it is well suited to problems such as exploring decision trees.
Adjacency List: An adjacency list is an array of linked lists, with one list for each vertex.
The head of each linked list identifies a vertex, and the remaining nodes in that list
represent the vertices to which it is connected. This representation can also be used for a
weighted graph: the linked list nodes can be changed slightly to store the weight of each
edge as well.

Adjacency Matrix: An adjacency matrix is a 2D array of size V x V where V is the number
of vertices in a graph. Let the 2D array be adj[][]; a slot adj[i][j] = 1 indicates that there is
an edge from vertex i to vertex j. The adjacency matrix for undirected graphs is always
symmetric. An adjacency matrix is also used to represent weighted graphs. If adj[i][j] = w,
then there is an edge from vertex i to vertex j with weight w.
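As a tiny illustration in C (the graph and its weights below are invented for this example), a weighted adjacency matrix stores the edge weight instead of 1, with 0 used here to mean "no edge".

/* Weighted adjacency matrix for 3 vertices; 0 means "no edge". */
int wadj[3][3] = {
    {0, 4, 0},    /* edge 0 - 1 with weight 4 */
    {4, 0, 7},    /* edges 1 - 0 (weight 4) and 1 - 2 (weight 7) */
    {0, 7, 0}
};
/* wadj[i][j] = w means there is an edge from vertex i to vertex j with weight w. */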
Bubble Sort: Bubble sort is a basic algorithm for arranging a list of numbers or other
comparable elements in the correct order. The method works by examining each pair of
adjacent elements in the list, from left to right, and switching their positions if they are out
of order. The algorithm repeats this process until it can run through the entire list and find
no two elements that need to be swapped.
Analysis of Bubble Sort: The first pass of the algorithm results in N-1 comparisons and
in the worst case may result in N-1 exchanges also. The second pass results in N-2
comparisons and in the worst case may result in N-2 exchanges. Continuing the analysis,
we observe that as the iterations or passes increase, the comparisons and exchanges
decrease. Finally, the total number of comparisons will be equal to
(N - 1) + (N - 2) + (N - 3) + ... + 2 + 1
= N * (N - 1) / 2
= O(N^2)
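A straightforward bubble sort in C, matching the description above, is sketched below; the early-exit "swapped" flag is a common optimisation added here as an assumption.

#include <stdio.h>

/* Bubble sort: repeatedly swap adjacent out-of-order elements. */
void bubbleSort(int a[], int n) {
    for (int pass = 0; pass < n - 1; pass++) {
        int swapped = 0;
        for (int i = 0; i < n - 1 - pass; i++) {
            if (a[i] > a[i + 1]) {           /* out of order: swap */
                int tmp = a[i];
                a[i] = a[i + 1];
                a[i + 1] = tmp;
                swapped = 1;
            }
        }
        if (!swapped)                        /* no swaps this pass: already sorted */
            break;
    }
}

int main(void) {
    int a[] = {5, 1, 4, 2, 8};
    bubbleSort(a, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", a[i]);                 /* prints 1 2 4 5 8 */
    printf("\n");
    return 0;
}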
Selection Sort: Selection sort is a simple comparison-based sorting algorithm. It repeatedly
finds the minimum element in the unsorted part of the array and places it at the beginning of
that part, so after the i-th pass the first i elements are in sorted order. Like bubble sort, it
performs O(N^2) comparisons in the worst case, but it makes at most N - 1 swaps.
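A minimal selection sort function sketch in C (function and variable names are illustrative):

/* Selection sort: on each pass, place the minimum of the unsorted part first. */
void selectionSort(int a[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int min = i;
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[min])
                min = j;                     /* index of smallest remaining element */
        if (min != i) {                      /* swap it into position i */
            int tmp = a[i];
            a[i] = a[min];
            a[min] = tmp;
        }
    }
}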
Divide and Conquer Algorithm: A divide-and-conquer algorithm is a strategy for solving
a large problem by
● breaking the problem into smaller sub-problems
● solving the sub-problems, and
● combining them to get the desired output.
To use the divide and conquer algorithm, recursion is used.
The following are some standard algorithms that follow the divide-and-conquer approach.
● Quicksort is a sorting algorithm.
● Merge Sort is also a sorting algorithm.
● Closest Pair of Points: the problem is to find the closest pair of points in a set of points
in the x-y plane.
● Strassen's Algorithm is an efficient algorithm to multiply two matrices.
● The Cooley-Tukey Fast Fourier Transform (FFT) algorithm is the most common algorithm
for FFT.
● The Karatsuba algorithm for fast multiplication multiplies two n-digit numbers in at most
n^(log2 3) ≈ n^1.585 single-digit multiplications in general (and exactly n^(log2 3)
multiplications when n is a power of 2).
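As one illustration of the divide-and-conquer pattern, here is a quicksort sketch in C using the Lomuto partition scheme; choosing the last element as the pivot is an assumption of this sketch.

/* Partition a[lo..hi] around the last element and return the pivot's final index. */
static int partition(int a[], int lo, int hi) {
    int pivot = a[hi], i = lo;
    for (int j = lo; j < hi; j++) {
        if (a[j] < pivot) {                  /* move smaller elements to the left */
            int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
            i++;
        }
    }
    int tmp = a[i]; a[i] = a[hi]; a[hi] = tmp;
    return i;
}

/* Divide: split around the pivot. Conquer: sort each half recursively. */
void quickSort(int a[], int lo, int hi) {
    if (lo < hi) {
        int p = partition(a, lo, hi);
        quickSort(a, lo, p - 1);
        quickSort(a, p + 1, hi);
    }
}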

Huffman Coding-
Huffman Coding is a famous Greedy Algorithm.
● It is used for the lossless compression of data.
● It uses variable length encoding.
● It assigns variable length codes to all the characters.
● The code length of a character depends on how frequently it occurs in the given text.
● The character which occurs most frequently gets the smallest code.
● The character which occurs least frequently gets the largest code.
● It is also known as Huffman Encoding.
Hashing: Hashing refers to the process of generating a fixed-size output from an input of
variable size using the mathematical formulas known as hash functions. This technique
determines an index or location for the storage of an item in a data structure.
Components of Hashing:
There are majorly three components of hashing:
● Key: A key can be anything, such as a string or an integer, that is fed as input to the hash
function, the technique that determines an index or location for storage of an item in a
data structure.
● Hash Function: The hash function receives the input key and returns the index of an
element in an array called a hash table. The index is known as the hash index.
● Hash Table: A hash table is a data structure that maps keys to values using a special
function called a hash function. Hash stores the data in an associative manner in an
array where each data value has its own unique index.
Collision: The hashing process generates a small number for a big key, so there is a
possibility that two keys could produce the same value. The situation where a newly
inserted key maps to an already occupied slot in the hash table is called a collision, and it
must be handled using some collision handling technique, such as chaining or open addressing.
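A tiny sketch in C of a hash function and a collision, assuming a table of 10 slots and the common "key mod table size" hash; both the table size and the keys are assumptions for illustration.

#include <stdio.h>

#define TABLE_SIZE 10

/* Simple hash function: map an integer key to an index in the hash table. */
int hash(int key) {
    return key % TABLE_SIZE;
}

int main(void) {
    printf("%d\n", hash(27));   /* 7 */
    printf("%d\n", hash(42));   /* 2 */
    printf("%d\n", hash(37));   /* 7 again: collides with key 27 */
    return 0;
}

Here the keys 27 and 37 map to the same index 7, which is exactly the collision situation described above.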
Applications of Hash Data Structure:
● Hash is used in databases for indexing.
● Hash is used in disk-based data structures.
● In some programming languages like Python, JavaScript hash is used to implement
objects.
Real-Time Applications of Hash Data Structure:
● Hash is used for cache mapping for fast access to the data.
● Hash can be used for password verification.
● Hash is used in cryptography as a message digest.
● Rabin-Karp algorithm for pattern matching in a string.
● Calculating the number of different substrings of a string.
Advantages of Hash Data Structure:
● Hash provides better synchronization than other data structures.
● Hash tables are often more efficient than search trees or other data structures for lookups.
● Hash provides constant time for searching, insertion, and deletion operations on
average.
Disadvantages of Hash Data Structure:
● Hash is inefficient when there are many collisions.
● Hash collisions are practically unavoidable when hashing a large set of possible keys.
● Hash does not allow null values.
Infix Notation: Infix expressions are the most common type of expression. This notation is
typically employed when writing arithmetic expressions by hand. In an infix expression, we
place the operator between the two operands it operates on.
For example, the operator “+” appears between the operands A and B in the expression “A
+ B”.

Furthermore, infix expressions can also include parentheses to indicate the order of
operations. In this way, we should observe the operator precedence rules and use
parentheses to clarify the order of operations in expressions in infix notation. Operator
precedence rules specify the operator evaluation order in an expression. So, in an
expression, operators with higher precedence are evaluated before operators with lower
precedence. Some operator precedence rules follow:
● Parentheses: expressions inside parentheses are evaluated first
● Exponentiation: exponents are evaluated next
● Multiplication and division: multiplication and division are evaluated before addition
and subtraction
● Addition and subtraction: finally, addition and subtraction are evaluated last
However, if an expression has multiple operators with the same precedence, those operators
are evaluated from left to right.
Prefix Notation: Prefix expressions, also known as Polish notation, place the operator
before the operands. For example, in the expression “+ A B”, the “+” operator appears
before the operands A and B.
Prefix expressions are evaluated from right to left; we apply each operator to its operands
as it is encountered.
Postfix Notation: Postfix expressions, also known as reverse Polish notation, place the
operator after the operands. For instance, in the expression “A B +”, the “+” operator
appears after the operands A and B.
Hence, we can evaluate postfix expressions from left to right, with each operator being
applied to its operands as it is encountered.
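A left-to-right postfix evaluator in C for single-digit operands is sketched below, using the array-stack idea from the Stack section; the expression format (digits only, no spaces) is an assumption of this sketch.

#include <stdio.h>
#include <ctype.h>

/* Evaluate a postfix expression of single-digit operands, e.g. "23*4+" = 10. */
int evalPostfix(const char *expr) {
    int stack[100], top = -1;
    for (int i = 0; expr[i] != '\0'; i++) {
        char c = expr[i];
        if (isdigit((unsigned char)c)) {
            stack[++top] = c - '0';          /* push operand */
        } else {
            int b = stack[top--];            /* right operand */
            int a = stack[top--];            /* left operand */
            switch (c) {
                case '+': stack[++top] = a + b; break;
                case '-': stack[++top] = a - b; break;
                case '*': stack[++top] = a * b; break;
                case '/': stack[++top] = a / b; break;
            }
        }
    }
    return stack[top];                       /* final result */
}

int main(void) {
    printf("%d\n", evalPostfix("23*4+"));    /* (2*3)+4 = 10 */
    return 0;
}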
Comparison of the Expression Notations:
The infix notation is the simplest notation for humans to read and write, but it requires
more complex parsing algorithms for computers due to parentheses and operator
precedence rules. The prefix and postfix notations are computationally efficient and do not
require parentheses or operator precedence tracking. Furthermore, the prefix notation can
easily handle unary operators, while infix and postfix notations require special
handling. The infix notation uses parentheses for function arguments, while the prefix and
postfix notations can use other delimiters. The infix notation is the most common notation for
writing mathematical expressions, while the prefix and postfix notations are appropriate for
particular applications. Examples of these applications are stack-based algorithms and
programming languages.
