
Assignment

Directorate of Online Education

SESSION APR 2023


PROGRAM BACHELOR OF COMPUTER APPLICATION (BCA)
SEMESTER II
COURSE CODE & NAME DCA1202 – DATA STRUCTURE AND ALGORITHM
NAME: THAHSHIN SHAFRIYA
ROLL NO: 2214509750
SET-1
1.a)
A linked list is a linear data structure that stores a collection of data elements dynamically. The elements are represented by nodes, connected to one another by links (pointers). Each node consists of two fields: one holds the data, and the other holds the address of the next node. The last node contains null in its second field because it points to no node. A linked list can grow and shrink as required, so it does not waste memory space, which makes it the data structure of choice for handling dynamic data.
The basic operations on a singly linked list are insertion, deletion, display, searching, and deleting an element at a given key, as given below −

• Insertion − Adds an element at the beginning of the list.


• Deletion − Deletes an element at the beginning of the list.
• Display − Displays the complete list.
• Search − Searches an element using the given key.
• Delete − Deletes an element using the given key.

Insertion at Beginning
In this operation, we are adding an element at the beginning of the list.

Algorithm
1. START
2. Create a node to store the data
3. Check if the list is empty
4. If the list is empty, add the data to the node and assign the head pointer to it.
5. If the list is not empty, add the data to a new node, link it to the current head, and assign the head to the newly added node.
6. END

Deletion at Beginning
In this operation, we delete an element from the beginning of the linked list. For this, we point the head to the second node.

Algorithm
1. START
2. Assign the head pointer to the next node in the list
3. END
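The two algorithms above can be sketched in Python as follows; the class and method names here are illustrative choices, not part of the prescribed answer.

```python
class Node:
    """A singly linked list node: a data field plus a pointer to the next node."""
    def __init__(self, data):
        self.data = data
        self.next = None  # None plays the role of the null pointer

class LinkedList:
    def __init__(self):
        self.head = None  # an empty list has no head node

    def insert_at_beginning(self, data):
        node = Node(data)
        node.next = self.head  # link the new node to the current head
        self.head = node       # the new node becomes the head

    def delete_at_beginning(self):
        if self.head is not None:
            self.head = self.head.next  # point the head to the second node

    def display(self):
        items, cur = [], self.head
        while cur is not None:
            items.append(cur.data)
            cur = cur.next
        return items
```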

b) A queue is an abstract data type defined by the following structure and operations. A queue is an ordered collection of items in which elements are added at one end, called the rear, and removed from the other end, called the front.
Basic Operations on Queue:
Some of the basic operations for Queue in Data Structure are:
enqueue() – Insertion of elements to the queue.
dequeue() – Removal of elements from the queue.
Operation 1: enqueue()
Inserts an element at the end of the queue, i.e. at the rear. The following steps are taken to enqueue (insert) data into a queue:
1. Check if the queue is full.
2. If the queue is full, report an overflow error and exit.
3. If the queue is not full, increment the rear pointer to point to the next empty space.
4. Add the data element to the location where the rear is pointing.
Operation 2: dequeue()
This operation removes and returns the element at the front of the queue. The following steps are taken to perform the dequeue operation:
1. Check if the queue is empty.
2. If the queue is empty, report an underflow error and exit.
3. If the queue is not empty, access the data where the front is pointing.
4. Increment the front pointer to point to the next available data element.
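The enqueue and dequeue steps described above can be sketched as a fixed-capacity queue in Python; the circular-index bookkeeping below is one possible way to implement the front and rear pointers, not the only one.

```python
class Queue:
    """A fixed-capacity queue: enqueue at the rear, dequeue at the front."""
    def __init__(self, capacity):
        self.items = [None] * capacity
        self.capacity = capacity
        self.front = 0      # index of the next element to dequeue
        self.rear = -1      # index of the last element enqueued
        self.count = 0

    def enqueue(self, item):
        if self.count == self.capacity:
            raise OverflowError("queue is full")        # overflow condition
        self.rear = (self.rear + 1) % self.capacity     # advance the rear pointer
        self.items[self.rear] = item
        self.count += 1

    def dequeue(self):
        if self.count == 0:
            raise IndexError("queue is empty")          # underflow condition
        item = self.items[self.front]
        self.front = (self.front + 1) % self.capacity   # advance the front pointer
        self.count -= 1
        return item
```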
2. a)
A binary tree is a tree data structure in which each node has no more than two children.
Because each element in a binary tree can only have two children, they are commonly
referred to as the left and right child.

A binary tree is represented by a pointer to its topmost node, referred to as the "root". If the tree is empty, the root value is NULL. A binary tree node comprises the following components:
Data
Pointer to left child
Pointer to right child
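A node with these three components might be declared as follows; this is an illustrative Python sketch, with names chosen for the example.

```python
class TreeNode:
    """A binary tree node: data plus pointers to the left and right children."""
    def __init__(self, data):
        self.data = data
        self.left = None   # pointer to the left child
        self.right = None  # pointer to the right child

# Building a small tree: node 1 is the root, with children 2 and 3.
root = TreeNode(1)
root.left = TreeNode(2)
root.right = TreeNode(3)
```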
Following are the types of binary tree based on the number of children:

• Full Binary Tree
• Degenerate Binary Tree
• Skewed Binary Tree
i) Full Binary Tree
A full binary tree is one in which every node has either 0 or 2 children; that is, every parent/internal node has two children or none. It is also referred to as a proper binary tree.
ii) Degenerate Binary Tree
A degenerate (or pathological) binary tree is one in which every internal node has exactly one child, either left or right. Such trees perform like linked lists.
iii) Skewed Binary Tree
A skewed binary tree is a pathological/degenerate tree in which the left or right nodes
dominate the tree. As a result, skewed binary trees are classified into two types: left-skewed
binary trees and right-skewed binary trees.
b) Dijkstra's algorithm is a method for determining the shortest paths between nodes in a weighted graph, which might represent, for example, a road network. The computer scientist Edsger W. Dijkstra devised it in 1956 and published it three years later.
There are numerous variations of the algorithm. The original Dijkstra algorithm finds the shortest path between two given nodes, but a more common variant fixes a single node as the "source" and finds the shortest paths from it to all other nodes in the graph, generating a shortest-path tree. The algorithm can also be used to find the shortest path from a single node to a single destination node by stopping the process once the shortest path to the target is found.
Network routing protocols, most notably IS-IS (Intermediate System to Intermediate System)
and OSPF (Open Shortest Path First), make extensive use of shortest path algorithms. It is also used as a subroutine in other algorithms, such as Johnson's.
Dijkstra's Algorithm Applications
• To find the shortest path
• In social networking applications
• In a telephone network
• To find locations on a map
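As a sketch of the single-source variant described above, here is a minimal Python implementation using a priority queue; the graph representation (a dict mapping each node to a list of neighbour/weight pairs) is an assumption made for the example.

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source to every node in a weighted graph.
    graph: dict mapping node -> list of (neighbour, edge_weight) pairs;
    every node is assumed to appear as a key."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]                 # priority queue of (distance, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                     # stale entry: a shorter path was found
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w          # relax the edge (u, v)
                heapq.heappush(heap, (dist[v], v))
    return dist
```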
3. The breadth-first search (BFS) algorithm is used to find nodes that satisfy a given criterion in a tree or graph data structure. Starting at the root of the tree or graph, it explores all nodes at the current depth level before moving on to nodes at the next depth level. Breadth-first search can be used to tackle several problems in graph theory: for example, it finds the shortest path between two vertices a and b measured by the number of edges, it serves as a building block of the Ford-Fulkerson method for determining the maximum flow in a flow network, and when a binary tree is serialized/deserialized in BFS order rather than in sorted order, the tree can be swiftly reconstructed.
Breadth-First Search (BFS) is one of the most commonly used traversal methods. In BFS you begin at a source node and work outward through the graph, first examining the nodes directly connected to the source, then moving on to their next-level neighbours.
The BFS requires you to explore the graph in a breadthwise direction:
• First, move horizontally and visit all of the nodes in the current layer.
• Then proceed to the next layer.
Breadth-first search stores each node in a queue and marks it as "visited", continuing until all the neighbouring vertices directly connected to it have been marked. Because the queue follows the First In First Out (FIFO) principle, the node's neighbours are visited in the order in which they were inserted into the queue, beginning with the node that was inserted first.
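The queue-based traversal described above can be sketched in Python as follows; the adjacency-list representation and function name are assumptions for the example.

```python
from collections import deque

def bfs(graph, source):
    """Visit nodes level by level starting from source; returns the visit order.
    graph: dict mapping each node to a list of its neighbours."""
    visited = {source}            # mark the source as visited
    queue = deque([source])       # a FIFO queue drives the traversal
    order = []
    while queue:
        node = queue.popleft()    # first in, first out
        order.append(node)
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)   # mark before enqueueing
                queue.append(neighbour)
    return order
```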
Having defined breadth-first search, we can contrast it with its counterpart, depth-first search.
Depth First Traversal (or DFS) of a graph is analogous to depth-first traversal of a tree. The sole catch is that, unlike trees, graphs can contain cycles (so a node could be visited twice). A boolean visited array is used to avoid processing a node more than once. A graph can have many different DFS traversals.
Depth-first search, for example, is a technique for traversing or exploring tree or graph data
structures. The algorithm starts from the root node (or, in the case of a graph, some arbitrary
node) and explores as far as possible along each branch before backtracking.
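A matching sketch of DFS, with a visited set standing in for the boolean visited array, might look like this; names and representation are again assumptions for the example.

```python
def dfs(graph, node, visited=None, order=None):
    """Explore as far as possible along each branch before backtracking.
    The visited set prevents processing a node twice when the graph has cycles."""
    if visited is None:
        visited, order = set(), []
    visited.add(node)
    order.append(node)
    for neighbour in graph[node]:
        if neighbour not in visited:
            dfs(graph, neighbour, visited, order)  # go deeper before backtracking
    return order
```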
4. Sequential Search
In this strategy, we visit each element of the list sequentially and determine whether or not it
is the requested element. Specifically, the key element is compared to the first member of the
list; if a match is discovered, the search is successful, and the position of the key is returned.
Otherwise, the next member of the list is compared with the key, and the procedure is
repeated until the key is discovered or the list is fully searched.
Algorithm:
LinearSearch(A, n, key)
{
for (i = 0; i < n; i++)
{
if (A[i] == key)
return i;
}
return -1; // -1 indicates unsuccessful search
}
Binary Search
Binary search is a highly efficient search strategy that finds a given item in an already sorted list with the fewest possible comparisons. The technique's logic is as follows:
1. Locate the list's middle element first.
2. Contrast the mid-element with the sought-after item.
There are three possibilities:
a. If the middle element is the desired item, the search is successful.
b. If the desired item is less than the middle element, limit the search to the first half of the list.
c. If the desired item is greater than the middle element, proceed to the second half of the list.
Algorithm:
BinarySearch(A, l, r, key)
{
if (l > r)
return 0; // 0 indicates unsuccessful search
m = (l + r) / 2;
if (key == A[m])
return m + 1; // position counted from 1
else if (key < A[m])
return BinarySearch(A, l, m - 1, key);
else
return BinarySearch(A, m + 1, r, key);
}
b) An algorithm is a step-by-step procedure for solving a problem in a way that always yields a correct solution. When there are numerous algorithms for a given problem (and there frequently are!), the best algorithm is usually the one that solves it the fastest.
We use algorithms all the time as computer programmers, whether it's an established method
for a common problem, such as sorting an array, or a wholly new algorithm specific to our
software. Understanding algorithms allows us to make better decisions about which existing
algorithms to utilize and learn how to create new, correct, and efficient algorithms.
Sequencing, selection, and iteration are the three essential building components of an
algorithm.
Sequencing: An algorithm is a step-by-step procedure, and the sequence of those steps is
critical to ensuring the algorithm's accuracy.
Selection: Algorithms can use selection to choose a different set of steps to execute based on a Boolean expression.
Iteration: Algorithms frequently utilize repetition to perform steps a certain number of times
or until a predefined condition is met.
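A toy function can show all three building blocks at once; this example is illustrative and not taken from the assignment text.

```python
def count_even(numbers):
    """Counts the even numbers in a list, illustrating the three building
    blocks: statements run in sequence, the if statement is selection,
    and the for loop is iteration."""
    count = 0                 # sequencing: this step is executed first
    for n in numbers:         # iteration: repeat the body for each element
        if n % 2 == 0:        # selection: branch on a Boolean expression
            count += 1
    return count
```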
5. This section will go over the two key metrics for measuring algorithm efficiency: time complexity and space complexity.

However, because they cannot be directly compared, we must consider a combination of the two.

Space complexity

Space complexity is the amount of memory an algorithm requires on a computer or other device, expressed as a function of the size of its input.
Memory types include registers, cache, RAM, virtual memory, and secondary memory.
Four important elements must be taken into account when considering space complexity:
• The memory needed to store the algorithm's code
• The memory needed to hold the input data
• The memory needed to output the data (some algorithms, such as sorting algorithms, do not need extra memory for output and simply rearrange the input data)
• The memory needed for the algorithm's computation space, which can include local variables and any required stack space
Mathematically, space is defined as the sum of the two components given below:
• A variable part, consisting of structured variables that depend on the problem the algorithm is trying to solve.
• A fixed part, independent of the problem, made up of instruction space, constant space, fixed-size structural variables, and simple variables.
So, the following formula is used to determine the space complexity S(a) of any algorithm:
S(a) = c (the fixed part) + v(i) (the variable part, which depends on an instance characteristic i)
Time complexity
The amount of time required to run an algorithm is determined by the same factors that influence space complexity, but time complexity expresses them as a numerical function. This measure is useful when comparing different algorithms, especially when processing large volumes of data; however, if the amount of data is small, more exact calculations are needed when analysing the effectiveness of algorithms. The time complexity of an algorithm is reduced when it uses parallel processing. Time complexity is written T(n) and is measured by the total number of steps, provided each step takes the same amount of time.
Algorithmic time complexity examples
Two critical aspects influence the complexity of an algorithm: the movement of data and the comparison of keys, i.e. how frequently the data is moved and how often keys are compared.
We use three scenarios to assess the complexity of an algorithm:
• Worst-case time complexity
• Average-case time complexity
• Best-case time complexity
Determining algorithm efficiency
Theoretical analysis and benchmarking are the first steps in precisely assessing an algorithm's efficiency (measuring its performance).
b) Divide: This entails breaking down the problem into smaller sub-problems.
Conquer: Solve the sub-problems recursively until they are solved.
Combine: Combine the solutions of the sub-problems to obtain the solution of the overall problem.
Algorithms that follow the Divide and Conquer approach
The following standard algorithms follow the Divide and Conquer approach. Quicksort is a sorting algorithm: the method selects a pivot element and rearranges the array elements so that all items less than the chosen pivot move to its left side and all elements greater than the pivot move to its right side; finally, the method recursively sorts the subarrays to the left and right of the pivot element. Merge Sort is a sorting algorithm as well: the algorithm divides the array into two halves, sorts them recursively, and then merges the two sorted halves.
Divide and Conquer algorithm illustration
Merge Sort is a typical example of Divide and Conquer: we divide the array into two halves, recursively sort the two halves, and then combine the sorted halves.
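The divide, conquer, and combine phases of Merge Sort can be sketched in Python as follows; this is a minimal illustrative implementation.

```python
def merge_sort(arr):
    """Divide the array in half, conquer each half recursively,
    and combine the two sorted halves."""
    if len(arr) <= 1:
        return arr                      # base case: already sorted
    mid = len(arr) // 2                 # divide
    left = merge_sort(arr[:mid])        # conquer the left half
    right = merge_sort(arr[mid:])       # conquer the right half
    merged = []                         # combine the sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]  # append whichever half remains
```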
6. Knapsack Problem Using Greedy Method: The essential idea behind the whole family of knapsack problems is the selection of various items, each with a profit and a weight value, to be packed into one or more knapsacks of limited capacity. There are two variations of the knapsack problem: the 0/1 knapsack and the fractional knapsack.
The Fractional Knapsack Problem
The Greedy Method is an effective way to handle the fractional knapsack problem, in which the items are sorted by their value/weight ratio. In a fractional knapsack we can break items to maximise the total value of the knapsack; this ability to break an item is what defines the fractional knapsack problem.
In this method, the knapsack is filled so that its maximum capacity is utilised and the maximum profit is earned. The knapsack problem using the Greedy Method is stated as follows:
Given a list of n objects, say {I1, I2, ..., In}, and a knapsack (or bag).
The capacity of the knapsack is M.
Each object Ij has a weight wj and a profit pj.
If a fraction xj (where xj ∈ [0, 1]) of an object Ij is placed into the knapsack, then a profit of pjxj is earned.
The problem (or Objective) is to fill the knapsack (up to its maximum capacity M),
maximizing the total profit earned.
Mathematically: maximize Σ pjxj subject to Σ wjxj ≤ M, with 0 ≤ xj ≤ 1 for j = 1, ..., n.
Knapsack Problem Using Greedy Method Pseudocode
Pseudo-code for solving the fractional knapsack problem using the greedy method is:
greedy fractional-knapsack (P[1...n], W[1...n], X[1...n], M)
/* P[1...n] and W[1...n] contain the profit and weight of the n objects,
ordered by decreasing P[j]/W[j]; X[1...n] is the solution set and M is the
capacity of the knapsack */
{
For j ← 1 to n do
X[j] ← 0
profit ← 0 // total profit of items placed in the knapsack
weight ← 0 // total weight of items placed in the knapsack
j ← 1
While (weight < M and j <= n) // M is the knapsack capacity
{
if (weight + W[j] <= M)
{
X[j] ← 1
weight ← weight + W[j]
}
else
{
X[j] ← (M - weight) / W[j]
weight ← M
}
profit ← profit + P[j] * X[j]
j ← j + 1
} // end of while
} // end of algorithm
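An equivalent runnable sketch in Python, assuming (as the pseudocode's comment does) that items can be ordered by profit/weight ratio, might be:

```python
def fractional_knapsack(profits, weights, capacity):
    """Greedy fractional knapsack: take items in decreasing profit/weight
    ratio, splitting the last item if it does not fit whole."""
    items = sorted(zip(profits, weights),
                   key=lambda pw: pw[0] / pw[1], reverse=True)
    total_profit = 0.0
    remaining = capacity
    for p, w in items:
        if remaining <= 0:
            break                           # knapsack is full
        take = min(w, remaining)            # whole item, or the fraction that fits
        total_profit += p * (take / w)      # profit is proportional to the fraction
        remaining -= take
    return total_profit
```

For example, with profits {60, 100, 120}, weights {10, 20, 30}, and M = 50, the greedy method takes the first two items whole and 20/30 of the third, for a total profit of 240.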
