ITCC 104 – Data Structures & Algorithms
Lesson 1: Trees

Tree
 essentially collections of nodes
 typically include constraints that prevent more than one reference to each node, and stipulate that no references point to the root node
 incredibly useful data structures in programming, although their applications can be somewhat limited

Tree Terminologies
Here are some of the most common and important terms:
a) Node: Any object or value stored in the tree represents a node. In the preceding figure, the root and all of its children and descendants are independent nodes.
b) Root: The base node of the tree. Ironically, this node is typically depicted at the top of a graphic representation of the tree. Note that a root node, even if it has zero descendants, represents an entire tree by itself.
c) Parent: Any node which contains 1...n child nodes. The parent is only the parent in respect to one of its children. Also note that any parent node can have 0...n children depending on the rules associated with the tree's structure.
d) Child: Any node other than the root node is a child to one (and only one) other node. The root node of any tree that is not a sub-tree of another structure is the only node that is not itself a child.
e) Siblings: Siblings, also referred to as children, represent the collection of all of the child nodes to one particular parent. For example, referring to the preceding figure, the collection of two nodes below the root represents siblings.
f) Leaf: Any node that has no child nodes.
g) Edge: The route, or reference, between a parent and child node.
h) Descendant: Any of the nodes that can be reached from that node following edges away from the root node.
i) Ancestor: Any of the nodes that can be reached from that node following edges toward the root node.
j) Path: Described as a list of edges between a node and one of its descendants.
k) Height: Represents the number of edges between the root node and the leaf that is furthest from the root node.
l) Depth: The number of edges between that node and the root node represents the depth of a node. The root node, therefore, has a depth equal to zero.

The two diagrams following show the two types of tree:

Common Operations
Many of the common operations associated with trees can be defined in terms of a single node, or from the perspective of the same. Here is a list of the most common operations associated with trees:

 Data: Associated with a single node, and returns the object or value contained in that node.
 Children: Returns the collection of siblings associated with this parent node.
 Parent: Some tree structures provide a mechanism to "climb" the tree, or traverse the structure from any particular node back toward the root.
 Enumerate: Will return a list or some other collection containing every descendant of a particular node, including the root node itself.
 Insert: Allows a new node to be added as a child of an existing node in the tree. Can be somewhat complicated when the tree structure has a limit to the number of children that can be associated with a particular parent.
o When the maximum number of children permitted is already in place, one of those children must be relocated as a child of the new node being inserted.
 Graft: A similar operation to insert, except that the node being inserted has descendants of its own, meaning it is a multi-layer tree. Can be somewhat complicated when the tree structure has a limit to the number of children that can be associated with a particular parent.
o When the maximum number of children permitted is already in place, one of those children must be logically relocated as a child of a leaf of the new tree being inserted.
 Delete: Will remove a specified node from the tree. If the node being deleted has descendants, those nodes must be relocated to the deleted node's parent in some fashion, otherwise the operation is classified as a prune.
 Prune: Will remove a node and all of its descendants from a tree.

Binary Trees
 If every node in a tree can have at most two children, the tree is called a binary tree.

Binary tree
 a finite set of nodes which is either empty or consists of a root with at most two branches to disjoint binary trees called the left subtree and the right subtree
a) 2^k - 1: the maximum number of nodes in a binary tree of height k
b) Skewed binary tree: a tree whose nodes have either no left subtrees or no right subtrees
c) Full binary tree of depth k: a binary tree of depth k with 2^k - 1 nodes

Tree Representations

Using Arrays - static/sequential representation

Using Linked list
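The two figures for these representations are not reproduced here, but the idea can be sketched briefly; the names below and the exact array layout are illustrative assumptions, not the lesson's code. In the sequential (array) representation, the node at index i keeps its left child at index 2i + 1 and its right child at index 2i + 2; the linked representation stores explicit child references instead:

    // Illustrative sketch of the two binary tree representations.
    public class TreeRepresentations {
        // Sequential/static: the node at index i (0-based) has children at
        // indices 2*i + 1 and 2*i + 2; null marks an absent node.
        static String[] arrayTree = {"A", "B", "C", null, "D"};

        // Linked: each node holds references to its left and right subtrees.
        static class BinaryNode {
            String data;
            BinaryNode left, right;
            BinaryNode(String data) { this.data = data; }
        }
    }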
Traversing Binary Trees
 Traversing a tree means visiting each node in a specified order. This process is not as commonly used as finding, inserting, and deleting nodes.
 One reason for this is that traversal is not particularly fast. But traversing a tree has some surprisingly useful applications and is theoretically interesting.
 There are three simple ways to traverse a tree. They're called preorder, inorder, and postorder.
Inorder Traversal
 An inorder traversal of a binary search tree will cause all the nodes to be visited in ascending order, based on their key values.
 If you want to create a sorted list of the data in a binary tree, this is one way to do it:
o Call itself to traverse the node's left subtree.
o Visit the node.
o Call itself to traverse the node's right subtree.

Traversing a 3-Node Tree
- Let's look at a simple example to get an idea of how this recursive traversal routine works. Imagine traversing a tree with only three nodes: a root (A), with a left child (B), and a right child (C), as shown in the figure below:
Follow the Steps of an Inorder Traversal:
1. Start by calling inOrder() with the root A as an argument. This incarnation of inOrder() we'll call inOrder(A).
2. inOrder(A) first calls inOrder() with its left child, B, as an argument. This second incarnation of inOrder() we'll call inOrder(B).
3. inOrder(B) now calls itself with its left child as an argument. However, it has no left child, so this argument is NULL. This creates an invocation of inOrder() we could call inOrder(NULL).
4. There are now three instances of inOrder() in existence: inOrder(A), inOrder(B), and inOrder(NULL). However, inOrder(NULL) returns immediately when it finds its argument is NULL.
5. Now inOrder(B) goes on to visit B; we'll assume this means to display it.
6. Then inOrder(B) calls inOrder() again, with its right child as an argument. Again this argument is NULL, so the second inOrder(NULL) returns immediately.
7. Now inOrder(B) has carried out tasks 1, 2, and 3, so it returns (and thereby ceases to exist).
8. Now we're back to inOrder(A), just returning from traversing A's left child.
9. We visit A, and then call inOrder() again with C as an argument, creating inOrder(C). Like inOrder(B), inOrder(C) has no children, so task 1 returns with no action, task 2 visits C, and task 3 returns with no action.
10. inOrder(C) now returns to inOrder(A).
11. However, inOrder(A) is now done, so it returns and the entire traversal is complete.

Preorder and Postorder Traversals
 A binary tree (not a binary search tree) can be used to represent an algebraic expression that involves the binary arithmetic operators +, -, /, and *.
 The root node holds an operator, and the other nodes represent either a variable name (like A, B, or C) or another operator. Each subtree is an algebraic expression.

A*(B+C)
 This is called infix notation; it's the notation normally used in algebra.
 Traversing the tree inorder will generate the correct inorder sequence A*B+C, but you'll need to insert the parentheses yourself.

*A+BC
 This is called prefix notation. It's another equally valid way to represent an algebraic expression.
 One of the nice things about it is that parentheses are never required; the expression is unambiguous without them.

ABC+*
 This is called postfix notation. Starting on the right, each operator is applied to the two things on its left.

 Preorder: Data - Left Tree - Right Tree
 Postorder: Left Tree - Right Tree - Data
 Inorder: Left Tree - Data - Right Tree
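The lesson does not reproduce the source of inOrder(), so the following is only a sketch of how all three orders fall out of the same recursive routine; the Node class and the printing "visit" are assumptions:

    // Illustrative sketch of the three traversal orders.
    public class Traversals {
        static class Node {
            char data;
            Node left, right;
            Node(char data) { this.data = data; }
        }

        static void inOrder(Node n) {          // Left Tree - Data - Right Tree
            if (n == null) return;             // the inOrder(NULL) case above
            inOrder(n.left);                   // task 1
            System.out.print(n.data);          // task 2: visit = display
            inOrder(n.right);                  // task 3
        }

        static void preOrder(Node n) {         // Data - Left Tree - Right Tree
            if (n == null) return;
            System.out.print(n.data);
            preOrder(n.left);
            preOrder(n.right);
        }

        static void postOrder(Node n) {        // Left Tree - Right Tree - Data
            if (n == null) return;
            postOrder(n.left);
            postOrder(n.right);
            System.out.print(n.data);
        }

        public static void main(String[] args) {
            Node a = new Node('A');            // the 3-node tree from the example
            a.left = new Node('B');
            a.right = new Node('C');
            inOrder(a);                        // prints BAC
        }
    }

Run against the expression tree above, preOrder would yield the prefix form *A+BC, inOrder the unparenthesized infix form A*B+C, and postOrder the postfix form ABC+*.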
Binary Search Tree
 A node's left child must have a key less than its parent, and a node's right child must have a key greater than its parent.
 All identifiers in the left subtree are less than the identifier in the root.
 All identifiers in the right subtree are greater than the identifier in the root.
 The left and right subtrees are also binary search trees.
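A minimal sketch of the binary search tree rules above, assuming integer keys; the class and method names are illustrative, not taken from the lesson (the AVL rebalancing covered next is omitted):

    // Illustrative sketch: BST insert and search. Smaller keys go left and
    // larger keys go right, so an inorder traversal yields ascending order.
    public class BST {
        static class Node {
            int key;
            Node left, right;
            Node(int key) { this.key = key; }
        }

        static Node insert(Node root, int key) {
            if (root == null) return new Node(key);
            if (key < root.key) root.left = insert(root.left, key);
            else if (key > root.key) root.right = insert(root.right, key);
            return root;                       // duplicate keys are ignored
        }

        static boolean contains(Node root, int key) {
            if (root == null) return false;
            if (key == root.key) return true;
            return key < root.key ? contains(root.left, key)
                                  : contains(root.right, key);
        }
    }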
AVL Trees
 Introduced by Adelson-Velskii and Landis in 1962
 Height-balanced binary search trees
 Binary search trees in which the heights of the subtrees of each node differ by at most one level
 Insertion or deletion of nodes causes the search tree to be rebalanced

Insertion
1. Insert the node at the proper point.
2. Starting from the point of insertion, take the first node whose subtree heights differ by more than one level as A.
3. If the new node is inserted in the left subtree of A, take the left child of A as B; if inserted in the right subtree of A, take the right child of A as B.
4. If the new node is inserted in the left subtree of B, take the left child of B as C; if inserted in the right subtree of B, take the right child of B as C.
5. Take the inorder traversal of A, B, and C.
6. Take the middle node as the parent of the two other nodes.
7. Adjust other nodes if necessary.

Deletion
1. Delete the node.
2. If the deleted node is a non-terminal node, the immediate predecessor (or immediate successor) replaces the deleted node.
3. Starting from the deepest level, take the first node whose subtree heights differ by more than one level as A.
4. Take the right child of A as B if it has more descendants than the left child; otherwise, take the left child of A as B.
5. Take the right child of B as C if it has more descendants than the left child; otherwise, take the left child of B as C. If equal, prioritize LL over LR, or RR over RL.
6. Take the inorder traversal of A, B, and C.
7. Take the middle node as the parent of the two other nodes.
8. Adjust other nodes.
Lesson 2: Graphs
Part 1: Visual Graph Concepts

Graph
 a pair (V, E), where V is a set of nodes, called vertices, and E is a collection of pairs of vertices, called edges. Vertices and edges are positions and store elements.

Directed edge
 ordered pair of vertices (u, v)
 first vertex u is the origin
 second vertex v is the destination
 Example: one-way road traffic

Undirected edge
 unordered pair of vertices (u, v)
 Example: railway lines
Directed graph
 all the edges are directed
 Example: route network

Undirected graph
 all the edges are undirected
 Example: flight network

 A graph with no cycles is called a tree. A tree is an acyclic connected graph.
 A graph is connected if there is a path from every vertex to every other vertex. If a graph is not connected, then it consists of a set of connected components.
 A directed acyclic graph [DAG] is a directed graph with no cycles.
 A forest is a disjoint set of trees.
 A spanning tree of a connected graph is a subgraph that contains all of that graph's vertices and is a single tree.
 A spanning forest of a graph is the union of spanning trees of its connected components.
 A bipartite graph is a graph whose vertices can be divided into two sets such that all edges connect a vertex in one set with a vertex in the other set.
 A self-loop is an edge that connects a vertex to itself.
 Two edges are parallel if they connect the same pair of vertices.
 In weighted graphs, integers (weights) are assigned to each edge to represent distances or costs.
 The degree of a vertex is the number of edges incident on it.
 A subgraph is a subset of a graph's edges (with associated vertices) that form a graph.
 A path in a graph is a sequence of adjacent vertices. A simple path is a path with no repeated vertices. In the graph below, the dotted lines represent a path from G to E.
 A cycle is a path where the first and last vertices are the same. A simple cycle is a cycle with no repeated vertices or edges (except the first and last vertices).
 Graphs with all edges present are called complete graphs.
 Graphs with relatively few edges (generally if |E| < |V| log |V|) are called sparse graphs.
 Graphs with relatively few of the possible edges missing are called dense graphs.
 Directed weighted graphs are sometimes called networks.
 We will denote the number of vertices in a given graph by |V|, and the number of edges by |E|. Note that |E| can range anywhere from 0 to |V|(|V| - 1)/2 (in an undirected graph). This is because each node can connect to every other node.

Part 2: Graph Operations

AddNode
 sometimes called the AddVertex or AddPoint operation, and is dependent on the language used to define the graph.
 simply inserts new nodes into the graph without defining any edges or references to neighboring nodes.
 represents an O(1) operation.
 exclusively implemented in the graph collection object.

RemoveNode
 sometimes called the RemoveVertex or RemovePoint operation, and it is dependent on the language used to define the graph.
 deletes the node from the graph and removes any edges or references to and from neighboring nodes.
 has an O(n + k) operational cost, where n is the number of nodes in our graph and k is the number of edges.
 exclusively implemented in the graph collection object.

AddEdge
 sometimes called the AddArc or AddLine operation, and it is dependent on the language used to define the node.
 simply adds a new edge from node x to node y.
 implemented in both the collection object and the node object.
 At the node level, only the target node y must be passed as a parameter; while at the graph level, both x and y must be provided.

RemoveEdge
 sometimes called the RemoveArc or RemoveLine operation, and it is dependent on the language used to define the node.
 simply removes an existing edge from node x to node y if it exists.
 At the node level, only the target node y must be passed as a parameter, while at the graph level, both x and y must be provided.

GetNodeValue
 sometimes called the GetVertexValue or GetPointValue operation.
 returns the value associated with the node, whether it is a primitive or some custom object type, and the operation has an O(1) operational cost.
 defined as a part of the graph object; the node to be interrogated must be passed into the operation as a parameter.

SetNodeValue
 sometimes called the SetVertexValue or SetPointValue operation.
 sets the value of the node and has an O(1) operational cost.

Adjacent
 checks whether an edge exists from node x to node y, and typically returns a Boolean value representing the result.

Neighbors
 functions similarly to the children operation in a tree data structure.
 returns a list containing all of the nodes y where there is an edge from node x to node y.

Count
 returns the number of nodes contained in the collection.

GetEdgeValue
 sometimes called the GetArcValue or GetLineValue operation.
 returns the value associated with the edge, whether it is a primitive or some custom object type, and the operation has an O(1) operational cost.

SetEdgeValue
 sometimes called the SetArcValue or SetLineValue operation.
 sets the value of the edge and has an O(1) operational cost.
 sets the value of the edge and has an O (1)
 sometimes called the RemoveArc or
operational cost
RemoveLine operation, and it is dependent
on the language used to define the node.
 simply removes an existing edge from node
x to node y if it exists.
Part 3: Applications of Graphs Adjacency List
 Representing relationships between
components in electronic circuits
 Transportation networks: Highway network,
Flight network
 Computer networks: Local area network,
Internet, Web
 Databases: For representing ER (Entity
Relationship) diagrams in databases, for  In this representation all the vertices
representing dependency of tables in connected to a vertex v are listed on an
databases adjacency list for that vertex v. This can be
easily implemented with linked lists.
 The total number of linked lists is equal to
Part 3: Application of Graphs the number of vertices in the graph

 Representing relationships between Disadvantages of Adjacency Lists


components in electronic circuits
 Transportation networks: Highway network,  Using adjacency list representation we
Flight network cannot perform some operations efficiently.
 Computer networks: Local area network, As an example, consider the case of deleting
Internet, Web a node.
 Databases: For representing ER (Entity  In adjacency list representation, it is not
Relationship) diagrams in databases, for enough if we simply delete a node from the
representing dependency of tables in list representation, if we delete a node from
databases the adjacency list then that is enough.

Adjacency Set
 It is very much similar to adjacency list but
instead of using Linked lists, Disjoint Sets
Part 4: Graph Representation [UnionFind] are used.
Adjacency Matrix
 In the matrix, each edge is represented by two bits for undirected graphs. That means an edge from u to v is represented by a 1 value in both Adj[u, v] and Adj[v, u]. To save time, we can process only half of this symmetric matrix. Also, we can assume that there is an "edge" from each vertex to itself, so Adj[u, u] is set to 1 for all vertices.
 If the graph is a directed graph, then we need to mark only one entry in the adjacency matrix. As an example, consider the directed graph below.
 The adjacency matrix for this graph can be given as:

Adjacency List
 In this representation, all the vertices connected to a vertex v are listed on an adjacency list for that vertex v. This can be easily implemented with linked lists.
 The total number of linked lists is equal to the number of vertices in the graph.

Disadvantages of Adjacency Lists
 Using the adjacency list representation, we cannot perform some operations efficiently. As an example, consider the case of deleting a node.
 In the adjacency list representation, it is not enough to simply delete the node's own list; the edges pointing to the deleted node must also be removed by scanning the adjacency lists of all the other vertices.

Adjacency Set
 It is very much similar to the adjacency list, but instead of using linked lists, Disjoint Sets [Union-Find] are used.

Part 5: Graph Traversals

Depth First Search [DFS]
 The DFS algorithm works in a manner similar to preorder traversal of the trees. Like preorder traversal, internally this algorithm also uses a stack.
 The intersections of the maze are the vertices and the paths between the intersections are the edges of the graph. The process of returning from the "dead end" is called backtracking. We are trying to go away from the starting vertex into the graph as deep as possible, until we have to backtrack to the preceding grey vertex. In the DFS algorithm, we encounter the following types of edges.
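The figure classifying those edge types is not reproduced here. As a sketch of the traversal itself, here is an iterative DFS over an adjacency structure like the ones above, using the explicit stack the text mentions; the method shape and names are assumptions:

    import java.util.*;

    // Illustrative sketch: iterative DFS from a start vertex.
    public class DepthFirstSearch {
        static List<String> dfs(Map<String, Set<String>> adj, String start) {
            List<String> visited = new ArrayList<>();
            Set<String> seen = new HashSet<>();
            Deque<String> stack = new ArrayDeque<>();
            stack.push(start);
            while (!stack.isEmpty()) {
                String v = stack.pop();
                if (!seen.add(v)) continue;     // already explored
                visited.add(v);                 // go as deep as possible...
                for (String w : adj.getOrDefault(v, Collections.emptySet()))
                    stack.push(w);              // ...backtracking via the stack
            }
            return visited;
        }
    }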
Applications of DFS
 Topological sorting
 Finding connected components
 Finding articulation points (cut vertices) of the graph
 Finding strongly connected components
 Solving puzzles such as mazes
Breadth First Search [BFS]
 The BFS algorithm works similarly to level-order traversal of the trees. Like level-order traversal, BFS also uses queues. In fact, level-order traversal was inspired by BFS. Initially, BFS starts at a given vertex, which is at level 0.
 In the first stage, it visits all vertices at level 1 (that means, vertices whose distance is 1 from the start vertex of the graph). In the second stage, it visits all vertices at the second level. These new vertices are the ones which are adjacent to level 1 vertices. BFS continues this process until all the levels of the graph are completed. Generally, a queue data structure is used for storing the vertices of a level.
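A matching BFS sketch using a queue, as described above; again the adjacency-map shape and names are assumptions:

    import java.util.*;

    // Illustrative sketch: BFS visits vertices level by level because the
    // queue releases all level-k vertices before any level-(k+1) vertex.
    public class BreadthFirstSearch {
        static List<String> bfs(Map<String, Set<String>> adj, String start) {
            List<String> order = new ArrayList<>();
            Set<String> seen = new HashSet<>();
            Queue<String> queue = new LinkedList<>();
            seen.add(start);
            queue.add(start);
            while (!queue.isEmpty()) {
                String v = queue.remove();
                order.add(v);
                for (String w : adj.getOrDefault(v, Collections.emptySet()))
                    if (seen.add(w)) queue.add(w);
            }
            return order;
        }
    }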
Applications of BFS
 Finding all connected components in a graph
 Finding all nodes within one connected component
 Finding the shortest path between two nodes
 Testing a graph for bipartiteness

Part 6: Variations of Shortest Path Algorithms

Applications of Shortest Path Algorithms
 Finding the fastest way to go from one place to another
 Finding the cheapest way to fly/send data from one city to another

Shortest Path in Unweighted Graph
 Let s be the input vertex from which we want to find the shortest path to all other vertices. An unweighted graph is a special case of the weighted shortest-path problem, with all edges having a weight of 1. The algorithm is similar to BFS and we need to use the following data structures:
o A distance table with three columns (each row corresponds to a vertex):
 Distance from the source vertex.
 Path - contains the name of the vertex through which we get the shortest distance.
o A queue is used to implement breadth-first search. It contains vertices whose distance from the source node has been computed and whose adjacent vertices are to be examined.
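A sketch of the procedure just described, filling a distance table and a path table during a breadth-first sweep; the map-based tables and names are assumptions:

    import java.util.*;

    // Illustrative sketch: single-source shortest paths in an unweighted
    // graph. distance holds the hop count from s; path holds, for each
    // vertex, the vertex through which we get the shortest distance.
    public class UnweightedShortestPath {
        static void shortestPaths(Map<String, Set<String>> adj, String s,
                                  Map<String, Integer> distance,
                                  Map<String, String> path) {
            distance.put(s, 0);
            Queue<String> queue = new LinkedList<>();
            queue.add(s);
            while (!queue.isEmpty()) {
                String v = queue.remove();
                for (String w : adj.getOrDefault(v, Collections.emptySet())) {
                    if (!distance.containsKey(w)) {   // not yet computed
                        distance.put(w, distance.get(v) + 1);
                        path.put(w, v);               // reached w through v
                        queue.add(w);
                    }
                }
            }
        }
    }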
Lesson 3: Searching And Sorting Algorithms

Part 1: Linear Search

Linear search
 A linear search, also called a sequential search, is simply a loop through a collection with some kind of comparison function to locate a matching element or value.
 Most linear searches return a value representing the index of the matching object in the collection, or some impossible index value such as -1 when an object is not found. Alternative versions of this search could return the object itself, or null if the object is not found.
 This is the simplest form of search pattern and it carries an O(n) complexity cost. This complexity is consistent whether the collection is in random order or if it has already been sorted.
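A minimal linear search sketch matching the description above (illustrative only):

    // Illustrative sketch: O(n) sequential search; returns the index of
    // key, or the impossible index -1 when the key is absent.
    public class LinearSearch {
        static int linearSearch(int[] values, int key) {
            for (int i = 0; i < values.length; i++)
                if (values[i] == key) return i;
            return -1;
        }
    }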
Part 2: Binary search
 A binary search is typically implemented as a recursive function and works on the principle of repeatedly dividing the collection in half and searching smaller and smaller chunks of the collection until a match is found or until the search has exhausted the remaining options and turns up empty.
o For example, given the following set of ordered values: S = {8, 19, 23, 50, 75, 103, 121, 143, 201}
o Using a linear search to find the value 143 would have a complexity cost of O(8), since 143 is found at index 7 (position 8) in our collection.
 However, a binary search can take advantage of the sorted nature of the collection to improve upon this complexity cost. We know that the collection consists of nine elements, so the binary search would begin by examining the median element at position five and comparing that to the key value of 143. Since i[5] = 75, and this is less than 143, the set is split and the range of possible matches is reduced to only include the upper half, leaving us with: S = {103, 121, 143, 201}
 With four elements, the median element is the element at position two. Position i[2] = 121, and this is less than 143, so the set is split and the range of possible matches is reduced to only include the upper quarter, leaving us with: S = {143, 201}
 With two elements, the median element is the element at position one. Since i[1] = 143, we have found a match and the value can be returned.
o This search only cost O(3), improving on the linear search time by almost 67%. Although individual results may vary, the binary search pattern is consistently more efficient than the linear search when the collection is sorted.
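The lesson describes a recursive implementation; the loop form below is an equivalent sketch of my own. On the set S above it finds 143 in three probes (75, then 121, then 143), matching the O(3) cost described:

    // Illustrative sketch: binary search on a sorted array.
    public class BinarySearch {
        static int binarySearch(int[] sorted, int key) {
            int lo = 0, hi = sorted.length - 1;
            while (lo <= hi) {
                int mid = (lo + hi) / 2;              // median of current range
                if (sorted[mid] == key) return mid;
                if (sorted[mid] < key) lo = mid + 1;  // keep the upper half
                else hi = mid - 1;                    // keep the lower half
            }
            return -1;
        }
    }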
Part 3: Jump search
 A jump search bears some similarity to both the linear search and binary search algorithms in that it searches blocks of elements from left to right beginning with the first block in the collection, and also because at each jump the algorithm compares the search key value to the value of the element at the current step.
 If the algorithm determines that the key could exist in the current subset of elements, the next step (no pun intended) is to examine each element in the current subset to determine if it is less than the key.
o For example, let's search for value 143 given the following set of ordered values: S = {8, 19, 23, 50, 75, 103, 121, 143, 201}
 Since our collection contains nine elements, m = √n gives us a value of 3. Since i[2] = 23, and this is less than 143, the algorithm jumps to the next block.
 Next, i[5] = 103, which is also less than 143, so this subset is excluded.
 Finally, i[8] = 201. Since 201 is greater than 143, the key could possibly exist in the third subset: S3 = {121, 143, 201}. Next, the algorithm checks each element in this subset to determine if it is less than 143. And i[6] = 121, so the algorithm continues its examination.
 Also, i[7] = 143, which is not less than 143, so the execution proceeds to the final step. Since i[7] = 143, we have found a match to our key and the value of i can be returned.
 This search cost O(5), which is only slightly better than the O(8) the linear search produced and slightly worse than the O(3) cost we found with a binary search.
 However, with much larger data sets the jump search is consistently more efficient than a linear search when the collection is sorted.
 Again, sorting the collection does represent some cost in time and performance up front, but the payoff over the life span of your application's run cycle more than justifies the effort.
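A jump search sketch with block size m = √n; the index handling is an assumption of this sketch:

    // Illustrative sketch: jump ahead in sqrt(n)-sized blocks, then scan
    // the one block that could contain the key.
    public class JumpSearch {
        static int jumpSearch(int[] sorted, int key) {
            int n = sorted.length;
            int step = (int) Math.sqrt(n);
            int prev = 0;
            // Skip blocks whose last element is still below the key.
            while (prev < n && sorted[Math.min(prev + step, n) - 1] < key)
                prev += step;
            // Examine each element of the candidate block.
            for (int i = prev; i < Math.min(prev + step, n); i++)
                if (sorted[i] == key) return i;
            return -1;
        }
    }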
Part 4: Selection sort
 A selection sort can be described as an in-place comparison sort. This algorithm divides a collection or list of objects into two parts. The first is a subset of objects that have already been sorted, ranging from 0 to i, where i is the next object to be sorted. The second is a subset of objects that have not been sorted, ranging from i to n, where n is the length of the collection.
 The selection sort algorithm works by taking the smallest or largest value in a collection and placing it at the beginning of the unsorted subarray by swapping it with the object at the current index. For example, consider ordering a collection in ascending order.
o Given the following set of values: S = {50, 25, 73, 21, 3}
 Our algorithm will find the smallest value in S[0...4], which in this case is 3, and place it at the beginning of S[0...4]: S = {3, 25, 73, 21, 50}
 The process is repeated for S[1...4], which returns the value 21: S = {3, 21, 73, 25, 50}. The next evaluation at S[2...4] returns a value of 25: S = {3, 21, 25, 73, 50}
 Finally, the function repeats again for S[3...4], which returns the smallest value of 50: S = {3, 21, 25, 50, 73}
 There is no need to examine the last object in the collection because it, by necessity, is already the largest remaining value. This is a small consolation, however, because the selection sort algorithm still has an O(n^2) complexity cost.
 Moreover, this worst-case complexity score doesn't tell the whole tale in this particular case. The selection sort is always an O(n^2) complexity, even under the best of circumstances.
 Therefore, the selection sort is quite possibly the slowest and most inefficient sorting algorithm you may encounter.
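A selection sort sketch matching the walkthrough above (illustrative only):

    // Illustrative sketch: selection sort; always O(n^2) comparisons,
    // regardless of the input order.
    public class SelectionSort {
        static void selectionSort(int[] s) {
            for (int i = 0; i < s.length - 1; i++) {
                int min = i;                     // find smallest in s[i..n-1]
                for (int j = i + 1; j < s.length; j++)
                    if (s[j] < s[min]) min = j;
                int tmp = s[i]; s[i] = s[min]; s[min] = tmp;  // swap into place
            }
        }
    }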
Part 5: Insertion sort
 An insertion sort is a very simple algorithm that looks at an object in a collection and compares its key to the keys prior to itself. You can visualize this process as how many of us order a hand of playing cards, individually removing and inserting cards from left to right in ascending order.
 An insertion sort algorithm will examine an object at index i and determine if its key is lower in value or priority than the object at index i - 1. If so, the object at i is removed and inserted at i - 1. At this point, the function will repeat and continue to loop in this manner until the object key at i - 1 is not lower than the object key at i.
o Given the following set of values: S = {50, 25, 73, 21, 3}, our algorithm will begin examining the list at i = 1.
 We do this because at i = 0, i - 1 is a non-existent value and would require special handling.
 Since 25 is less than 50, it is removed and reinserted at i = 0. Since we're at index 0, there is nothing left to examine to the left of 25, so this iteration is complete: S = {25, 50, 73, 21, 3}. Next we examine i = 2.
 Because 73 is not less than 50, this value doesn't need to move. Since we have already sorted everything to the left of i = 2, this iteration is immediately completed.
 At i = 3, the value 21 is less than 73, and so it is removed and reinserted at i = 2. Checking again, 21 is less than 50, so the value 21 is removed and reinserted at index 1. Finally, 21 is less than 25, so the value 21 is removed and reinserted at i = 0.
o Since we're now at index 0, there is nothing left to examine to the left of 21, so this iteration is complete: S = {21, 25, 50, 73, 3}
 Finally, we come to i = 4, the end of the list. Since 3 is less than 73, the value 3 is removed and reinserted at i = 3. Next, 3 is less than 50, so the value 3 is removed and reinserted at i = 2. At i = 2, 3 is less than 25, so the value 3 is removed and reinserted at i = 1.
o At i = 1, 3 is less than 21, so the value 3 is removed and reinserted at i = 0. Since we're now at index 0, there is nothing left to examine to the left of 3, so this iteration, and our sorting function, are complete: S = {3, 21, 25, 50, 73}
 As you can see, this algorithm is simple but also potentially expensive for larger lists of objects or values. The insertion sort has a worst-case and even an average-case complexity of O(n^2).
 However, unlike selection sort, insertion sort has improved efficiency when sorting lists that were previously sorted. As a result, it enjoys a best-case complexity of O(n), making this algorithm a slightly better choice than selection sort.
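An insertion sort sketch; the shifting loop below is equivalent to the repeated remove-and-reinsert steps in the walkthrough (illustrative only):

    // Illustrative sketch: insertion sort; O(n^2) worst case, O(n) when
    // the input is already sorted.
    public class InsertionSort {
        static void insertionSort(int[] s) {
            for (int i = 1; i < s.length; i++) {
                int key = s[i];                  // the removed object
                int j = i - 1;
                while (j >= 0 && s[j] > key) {   // shift larger keys right
                    s[j + 1] = s[j];
                    j--;
                }
                s[j + 1] = key;                  // reinsert
            }
        }
    }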
Part 6: Bubble sort
 Bubble sort is another simple algorithm that steps through the list of values or objects to be sorted and compares adjacent items or their keys to determine if they are in the wrong order. The name comes from the way that unordered items seem to bubble to the top of the list.
 However, some developers sometimes refer to this as a sinking sort, as objects could just as easily appear to be dropping down through the list. Overall, the bubble sort is just another inefficient comparison sort. However, it does have one distinct advantage over other comparison sorts, in that it can inherently determine whether or not the list has been sorted. Bubble sort accomplishes this by not performing comparisons on objects that were sorted in previous iterations and by stopping once the collection proves ordered.
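A bubble sort sketch with the early-exit behavior described above (illustrative only):

    // Illustrative sketch: bubble sort. The swapped flag stops the sort
    // once a full pass finds the collection already ordered, and each pass
    // shrinks because the tail is sorted by earlier iterations.
    public class BubbleSort {
        static void bubbleSort(int[] s) {
            for (int end = s.length - 1; end > 0; end--) {
                boolean swapped = false;
                for (int i = 0; i < end; i++) {
                    if (s[i] > s[i + 1]) {
                        int tmp = s[i]; s[i] = s[i + 1]; s[i + 1] = tmp;
                        swapped = true;
                    }
                }
                if (!swapped) return;            // collection proved ordered
            }
        }
    }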
Part 7: Quick sort
 The quick sort is one of a set of algorithms known as divide-and-conquer. Divide and conquer algorithms work by recursively breaking down a set of objects into two or more subsets until each subset becomes simple enough to solve directly. In the case of quick sort, the algorithm picks an element called a pivot, and then sorts by moving all smaller items prior to it and greater items after it.
 Moving elements before and after the pivot is the primary component of a quick sort algorithm and is referred to as a partition. The partition is recursively repeated on smaller and smaller subsets until each subset contains 0 or 1 elements, at which point the set is ordered.
 Choosing the correct pivot point is critical in maintaining quick sort's improved performance. For example, choosing the smallest or largest element in the list will result in O(n^2) complexity. Although there is no bulletproof approach for choosing the best pivot, there are fundamentally four approaches your design can take:
o Always pick the first object in the collection.
o Always pick the median object in the collection.
o Always pick the last object in the collection.
o Choose an object at random from the collection.

Part 8: Merge sort
 Merge sort is another popular version of the divide and conquer algorithm. It is a very efficient, general-purpose sort algorithm. The algorithm gets its name from the fact that it divides the collection in half, recursively sorts each half, and then merges the two sorted halves back together. Each half of the collection is repeatedly halved until there is only one object in the half, at which point it is sorted by definition. As each sorted half is merged, the algorithm compares the objects to determine where to place each subset.
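Sketches of both divide-and-conquer sorts described above. The quick sort uses the "always pick the last object" pivot strategy from the list; both implementations are illustrative assumptions, not the lesson's own code:

    import java.util.Arrays;

    // Illustrative sketches of quick sort and merge sort.
    public class DivideAndConquerSorts {
        // Quick sort: partition around the pivot, then recurse on each side.
        static void quickSort(int[] s, int lo, int hi) {
            if (lo >= hi) return;                  // 0 or 1 element: ordered
            int pivot = s[hi], p = lo;             // last object as the pivot
            for (int i = lo; i < hi; i++)
                if (s[i] < pivot) { int t = s[i]; s[i] = s[p]; s[p] = t; p++; }
            int t = s[hi]; s[hi] = s[p]; s[p] = t; // pivot into final place
            quickSort(s, lo, p - 1);
            quickSort(s, p + 1, hi);
        }

        // Merge sort: halve, sort each half, then merge the sorted halves.
        static int[] mergeSort(int[] s) {
            if (s.length <= 1) return s;           // one object: sorted by definition
            int mid = s.length / 2;
            int[] a = mergeSort(Arrays.copyOfRange(s, 0, mid));
            int[] b = mergeSort(Arrays.copyOfRange(s, mid, s.length));
            int[] out = new int[s.length];
            for (int i = 0, j = 0, k = 0; k < out.length; k++)
                out[k] = (j >= b.length || (i < a.length && a[i] <= b[j]))
                         ? a[i++] : b[j++];
            return out;
        }
    }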
Part 9: Bucket sort
 Bucket sort, also known as bin sort, is a type of distribution sorting algorithm.
 Distribution sorts are algorithms that scatter the original values into any sort of intermediate structures that are then ordered, gathered, and merged into the final output structure.
 It is important to note that, although bucket sort is considered a distribution sort, most implementations typically leverage a comparison sort to order the contents of the buckets.
 This algorithm sorts values by distributing them throughout an array of arrays that are called buckets. Elements are distributed based on their value and the range of values assigned to each bucket.
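A bucket sort sketch assuming non-negative integers; as noted above, each bucket is ordered with a comparison sort before the buckets are gathered (illustrative only):

    import java.util.*;

    // Illustrative sketch: scatter values into buckets by range, order each
    // bucket, then gather the buckets back into the array in order.
    public class BucketSort {
        static void bucketSort(int[] s, int bucketCount) {
            int max = Arrays.stream(s).max().orElse(0);
            List<List<Integer>> buckets = new ArrayList<>();
            for (int b = 0; b < bucketCount; b++) buckets.add(new ArrayList<>());
            for (int v : s)                                      // scatter
                buckets.get((int) ((long) v * (bucketCount - 1)
                        / Math.max(max, 1))).add(v);
            int k = 0;
            for (List<Integer> bucket : buckets) {               // order + gather
                Collections.sort(bucket);
                for (int v : bucket) s[k++] = v;
            }
        }
    }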