DSAassignment
Data Structures and Algorithms

P1 Create a design specification for data structures explaining the valid operations that can be carried out on the structures.


Abstract data type:

− An Abstract Data Type (ADT) is a type (or class) for objects whose behavior is defined by a set of values and a set of operations.
− The definition of an ADT only mentions what operations are to be performed, not how these operations will be implemented. It does not specify how data will be organized in memory or what algorithms will be used for implementing the operations. It is called “abstract” because it gives an implementation-independent view. The process of providing only the essentials and hiding the details is known as abstraction.


− The user of a data type does not need to know how that data type is implemented. For example, we have been using primitive types like int, float and char with only the knowledge of the operations they support, without any idea of how they are implemented. So a user only needs to know what a data type can do, not how it is implemented. Think of an ADT as a black box that hides the inner structure and design of the data type. We will now define three ADTs: List ADT, Stack ADT and Queue ADT.
1.1. List ADT

The List Abstract Data Type is a collection of elements that have a linear relationship with each other. A linear relationship means that, except for the last one, each element on the list has a unique successor. Lists also have a property intuitively called size, which is simply the number of elements on the list.

A List is mutable. In Java, List is also an interface, which means that other classes provide the actual implementation of the data type. These classes include ArrayList, which is implemented internally using an array, and LinkedList, which is implemented internally using a doubly linked list.

- The operations on the List ADT can be classified as below, with examples:

• Creators: the java.util.ArrayList and java.util.LinkedList constructors, Collections.singletonList(T t)
• Producers: Collections.unmodifiableList(List list)
• Observers: the size() and get(int index) methods of java.util.ArrayList
• Mutators: the add(Object e), remove(int index) and addAll(Collection c) methods of java.util.ArrayList

- The Java library’s List interface specifies 25 different operations/methods; some of them are as follows:
• get(int index) – Returns the element at a particular index of the list.
• add(E e) – Appends the specified element to the end of this list.
• remove(Object o) – Removes the first occurrence of the specified element from the list.
• remove(int index) – Removes the element at the specified index from the list.

• size() – Returns the number of elements in the list.
• isEmpty() – Returns true if the list is empty, else returns false.
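
A minimal usage sketch of these List operations in Java (the class name ListDemo, variable names and values are illustrative):

import java.util.ArrayList;
import java.util.List;

public class ListDemo {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>();    // creator: constructor of ArrayList
        names.add("Ada");                          // mutator: append to the end
        names.add("Grace");
        names.add(1, "Alan");                      // mutator: insert at index 1
        System.out.println(names.get(0));          // observer: prints "Ada"
        System.out.println(names.size());          // observer: prints 3
        names.remove("Alan");                      // mutator: remove first occurrence
        System.out.println(names.isEmpty());       // observer: prints false
    }
}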

1.2. Stack ADT


The Stack ADT is a collection of homogeneous data items (elements) in which all insertions and deletions occur at one end, called the top of the stack. A stack is a LIFO (“Last In, First Out”) structure. A common analogy for a stack is a stack of plates.

Stacks are managed mainly using the two operations below:

• PUSH – places an element on top of the stack.
• POP – removes an element from the top of the stack.

In Java, the java.util.Stack class extends the Vector class, which is a growable array of objects that can be accessed using an integer index.

The operations on the Stack ADT can be described as below:

• Creators: the constructor of java.util.Stack
• Producers: the Vector(Collection c) constructor inherited from Vector
• Observers: the peek() and isEmpty() methods of java.util.Stack
• Mutators: the push(E item) and pop() methods of java.util.Stack

The Java library provides the following operations on java.util.Stack:

• push(E e) – Inserts an element at the top of stack.


• pop() – Removes an element from the top of the stack if it is not empty.
• peek() – Returns the top element of stack without removing it.
• size() – Returns the size of the stack.
• isEmpty() – Returns true if the stack is empty, else it returns false.
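
A minimal usage sketch of java.util.Stack (the class name StackDemo and the values are illustrative):

import java.util.Stack;

public class StackDemo {
    public static void main(String[] args) {
        Stack<Integer> stack = new Stack<>();      // creator: constructor of java.util.Stack
        stack.push(10);                            // mutator: push onto the top
        stack.push(20);
        stack.push(30);
        System.out.println(stack.peek());          // observer: prints 30 without removing it
        System.out.println(stack.pop());           // mutator: removes and prints 30
        System.out.println(stack.size());          // observer: prints 2
        System.out.println(stack.isEmpty());       // observer: prints false
    }
}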

1.3. Queue ADT

The Queue ADT is a collection in which elements of the same type are arranged sequentially. Operations can be performed at both ends, with insertion done at the rear end and deletion done at the front end for a single-ended queue. A queue is a FIFO (“First In, First Out”) structure. Java data structures such as java.util.LinkedList and java.util.concurrent.ArrayBlockingQueue implement the Queue ADT using a linked list and an array internally, respectively.

The operations on the Queue ADT can be described as below:

• Creators: the constructor of java.util.LinkedList
• Producers: the LinkedList(Collection c) constructor of java.util.LinkedList
• Observers: the peek() method of java.util.LinkedList
• Mutators: the add(E item) method of java.util.LinkedList

The Java library provides the following operations on java.util.Queue:

• add(E e) – Enqueues an element at the end of the queue.
• remove() – Dequeues an element from the head of the queue.
• peek() – Returns the element at the head of the queue without removing it.
• offer(E e) – Inserts the specified element into this queue if it is possible to do so without violating capacity restrictions.
• size() – Returns the size of the queue.
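
A minimal usage sketch of a queue backed by java.util.LinkedList (the class name QueueDemo and the values are illustrative):

import java.util.LinkedList;
import java.util.Queue;

public class QueueDemo {
    public static void main(String[] args) {
        Queue<String> queue = new LinkedList<>();  // creator: constructor of java.util.LinkedList
        queue.add("first");                        // enqueue at the rear
        queue.offer("second");                     // enqueue; returns false rather than throwing if capacity-restricted
        System.out.println(queue.peek());          // prints "first" (head, not removed)
        System.out.println(queue.remove());        // dequeues and prints "first"
        System.out.println(queue.size());          // prints 1
    }
}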

P2 Determine the operations of a memory stack and how it is used to implement function calls in a computer.

2.1. What is a memory stack:

- A stack can be implemented in the random access memory (RAM) attached to a CPU. This is done by assigning a portion of memory to stack operations and using a processor register as a stack pointer. The starting memory location of the stack is specified by the processor register used as the stack pointer.

2.2. Operations:
- A stack contains elements of the same type arranged in sequential order. All operations take place at a single end, called the top of the stack, and the following operations can be performed:
• push() – Insert an element at one end of the stack called top.
• pop() – Remove and return the element at the top of the stack, if it is not empty.
• peek() – Return the element at the top of the stack without removing it, if the stack is not
empty.
• size() – Return the number of elements in the stack.
• isEmpty() – Return true if the stack is empty, otherwise return false.
• isFull() – Return true if the stack is full, otherwise return false.

2.3. Exceptions
- The pop and peek operations cannot be performed if the stack is empty. Attempting to execute pop or peek on an empty stack should throw a StackEmptyException.
- The push operation sometimes cannot be performed if there is not enough memory. Attempting to execute push when there is not enough memory should throw an OutOfMemoryError.

2.4. Applications:
• Any sort of nesting (such as parentheses)
• Evaluating arithmetic expressions (and other sorts of expressions)
• Implementing function or method calls
• Keeping track of previous choices (as in backtracking)
• Keeping track of choices yet to be made (as in creating a maze)
• The undo sequence in a text editor
• Auxiliary data structure for algorithms
• Component of other data structures

2.5. Method calls and their implementation using the stack

- Each time a method is called, an activation record (AR) is allocated for it.
- This record contains the following information:
• Parameters and local variables used in the called method
• Dynamic link: a pointer to the caller’s activation record
• Return address to resume control by the caller (address of instruction immediately
following the call)
• Return value for a method not declared as void.
- Since the size of an AR may vary from one call to another, the returned value is placed right above the AR of the caller.
- Each new AR is placed on top of the run-time stack.
- When a method terminates, its AR is removed from the top of the run-time stack. Thus, the first AR placed on the stack is the last one removed, as illustrated below.
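
A small hypothetical Java example (class and method names are made up for illustration), with comments describing what is on the run-time stack at each point:

class CallStackDemo {
    // The AR for square holds the parameter x, the local variable result,
    // the return address back into main, and the return value.
    static int square(int x) {
        int result = x * x;        // local variable lives in square's AR
        return result;             // square's AR is popped; 25 is handed back to the caller
    }

    public static void main(String[] args) {
        int y = square(5);         // main's AR stays on the stack below square's AR until square returns
        System.out.println(y);     // prints 25
    }
}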

2.6. Implementing a stack using an array in Java:

package Main;

class Stack {
    static final int MAX = 1000;   // maximum capacity of the stack
    int top;                       // index of the current top element
    int a[] = new int[MAX];        // array that holds the stack elements

    Stack()
    {
        top = -1;                  // -1 means the stack is empty
    }

    boolean isEmpty()
    {
        return (top < 0);
    }

    boolean push(int x)
    {
        if (top >= (MAX - 1)) {
            System.out.println("Stack Overflow");
            return false;
        } else {
            a[++top] = x;          // advance top, then store the element
            System.out.println(x + " pushed into stack");
            return true;
        }
    }

    int pop()
    {
        if (top < 0) {
            System.out.println("Stack Underflow");
            return 0;
        } else {
            int x = a[top--];      // read the top element, then shrink the stack
            return x;
        }
    }

    int peek()
    {
        if (top < 0) {
            System.out.println("Stack Underflow");
            return 0;
        } else {
            int x = a[top];        // read the top element without removing it
            return x;
        }
    }
}

class Main {
    public static void main(String args[])
    {
        Stack s = new Stack();
        s.push(10);
        s.push(20);
        s.push(30);
        System.out.println(s.pop() + " Popped from stack");
    }
}

Output:
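
Based on the code above, the console output is:

10 pushed into stack
20 pushed into stack
30 pushed into stack
30 Popped from stack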

P3 Using an imperative definition, specify the abstract data type for a software stack.

3.1. Definition of software stack:


- An application consists of a set of functions working together in a defined architecture to deliver specified services to the user. The simplest application architecture consists of three layers:
• Presentation Layer: The presentation layer is what the client sees when they access the application through a website or web-based application portal.
• Logic Layer: The logic layer contains application logic and business rules that help fulfill application requests. This layer makes calculations and decisions about how to process requests while controlling the transmission of data between the data layer and the presentation layer.
• Data Layer: The data layer is a server-side system that passes information to the logic layer when it is needed to complete a calculation, or passes it to the presentation layer, where it becomes visible to users.
- Each of these layers has unique requirements in terms of the programming languages and software tools required to establish and maintain its function. A web-based presentation layer may be written in languages like HTML5, JavaScript and CSS. The logic layer could be programmed in Java, C#, Python or C++. Database systems like MySQL and MongoDB could be used to manage the back-end data layer.
- The term “software stack” refers to the set of components that work together to support the execution of an application. Some software components power back-end processes, some are used to perform calculations, and some are used in the presentation layer to provide the user interface. In any case, the components of a software stack work in tandem to efficiently deliver application services to the end user.

3.2. Parts of a software stack:


- Applications have four tiers, three of which are on the server side. The client is where it all starts and ends:
• The client tier—this is the only component in the browser
• The web tier—the web server, or HTTP server
• The business tier—the application server, including the development platform, frameworks,
and server-side programming languages
• The database tier—the database server you choose, which can often depend on the business tier
- The tiers each include an operating system, server, database and server-side scripting language.

You’re not limited to the components in a stack—they’re interchangeable based on your needs and
customizable.

3.3. Five Software Stack Examples:


- A software stack that has proved itself useful or preferable for delivering a specific type of application may occasionally be adopted by other developers. A software stack that has become popular may take on an identity of its own as a growing number of software companies adopt the same set of software components to deliver an application. Software companies may bundle specific components together and market them as a single software stack for a specific purpose. Below are five of the most popular software stacks that developers may use as an application platform:

3.3.1. LAMP - A software stack designed to support web services, the LAMP software stack is
useful for building dynamic web sites and cloud applications. The stack includes the
Linux operating system, Apache web server, MySQL relational database management
system and the PHP programming language.
3.3.2. MEAN - Dynamic web sites and web applications are built using the MEAN software stack, which includes four free and open-source components: a database tool called MongoDB, the Express.js web application server framework, a front-end web framework called Angular.js, and the Node.js runtime environment.
3.3.3. WIMP - The WIMP software stack includes the Windows operating system, IIS web
servers, MySQL or MS Access as a data management system and the PHP, Perl or Python
programming languages.
3.3.4. NMP - NMP is actually a set of several software stacks that incorporate Nginx web
servers, MySQL and the PHP programming language. This set of technologies works with
all major operating systems and has been packaged separately with Linux, Windows,
and macOS.
3.3.5. MAMP - The MAMP framework can be used to develop web sites that function on
computers that use Windows or macOS. The software stack consists of either macOS or
Windows operating system, Apache web server, MySQL for relational database
management and PHP, Perl or Python for web development.
- Each software stack provides a unique set of advantages and disadvantages for developers. It is
up to application architects to understand and anticipate the specific needs of an application
before choosing the best set of software solutions that support the delivery of application
services to the end-user.
P4 Implement a complex ADT and algorithm in an executable programming language to solve
a well-defined problem.

Here's an implementation of a complex ADT and algorithm in Java that solves a well-defined
problem:

Problem: find the length of the longest increasing subsequence in a list of integers. First, we will define our ADT, which will be a dynamic programming table. Each cell in the table will represent the length of the longest increasing subsequence that ends at the corresponding index in the input list.

The longestIncreasingSubsequence method takes in an array of integers and returns the length
of the longest increasing subsequence in the array. It initializes a dynamic programming table
dp with each cell set to 1, since each element is a valid subsequence of length 1. Then, it
iterates through the array, and for each element, it iterates through all previous elements to
check if it can be appended to a previous subsequence to form a longer increasing
subsequence. If it can, it updates the value in the dynamic programming table to reflect the
new length of the subsequence. Finally, it iterates through the dynamic programming table to
find the maximum length and returns that as the length of the longest increasing subsequence.
This algorithm has a time complexity of O(n^2) since it iterates through the array twice, and a
space complexity of O(n) since it creates a dynamic programming table of size n.
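
A minimal sketch of the method described above (the method name longestIncreasingSubsequence and the array names nums and dp follow the text; the class name is illustrative):

public class LongestIncreasingSubsequence {

    // Returns the length of the longest strictly increasing subsequence of nums.
    public static int longestIncreasingSubsequence(int[] nums) {
        int n = nums.length;
        int[] dp = new int[n];
        java.util.Arrays.fill(dp, 1);               // every element on its own is a subsequence of length 1
        for (int i = 1; i < n; i++) {
            for (int j = 0; j < i; j++) {
                if (nums[j] < nums[i]) {            // nums[i] can extend the subsequence ending at j
                    dp[i] = Math.max(dp[i], dp[j] + 1);
                }
            }
        }
        int max = 0;
        for (int len : dp) {                        // the answer is the largest entry in the table
            max = Math.max(max, len);
        }
        return max;
    }

    public static void main(String[] args) {
        int[] input = {10, 9, 2, 5, 3, 7, 101, 18};
        System.out.println(longestIncreasingSubsequence(input));   // prints 4
    }
}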

Let's demonstrate how the implementation of the longest increasing subsequence algorithm in Java
solves the problem of finding the longest increasing subsequence in a list of integers.

Suppose we have the input list [10, 9, 2, 5, 3, 7, 101, 18]. We can call the
longestIncreasingSubsequence method with this input to get the length of the longest increasing
subsequence:

First, the dynamic programming table dp is initialized as follows:
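
For the input [10, 9, 2, 5, 3, 7, 101, 18], initialization sets every entry to 1:

index:   0    1    2    3    4    5    6    7
nums:   10    9    2    5    3    7  101   18
dp:      1    1    1    1    1    1    1    1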

Then, the algorithm iterates through the array and updates the dynamic programming table as
follows:
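
Applying the update rule described below, from left to right, the table ends up as:

index:   0    1    2    3    4    5    6    7
nums:   10    9    2    5    3    7  101   18
dp:      1    1    1    2    2    3    4    4
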
The algorithm uses dynamic programming to solve the problem. The idea is to maintain a dynamic
programming table dp of the same size as the input array nums, where dp[i] represents the length
of the longest increasing subsequence ending at index i of the array nums. We initialize dp to all
ones since each element in the array is itself an increasing subsequence of length 1.

Then, we iterate through the array from left to right, and for each index i, we check all previous indices j from 0 to i-1. If nums[j] < nums[i], it means that we can extend the increasing subsequence ending at index j with the current element nums[i] to form a longer increasing subsequence ending at index i. So we update dp[i] to be the maximum of its current value and dp[j] + 1, which represents the length of the longest increasing subsequence we can form by appending nums[i] to the longest increasing subsequence ending at index j.

Finally, we iterate through the dp array and return the maximum value, which represents the
length of the longest increasing subsequence in the array.

In our example, the algorithm correctly identifies the longest increasing subsequence in the array to
be [2, 3, 7, 101], which has length 4.

The implemented algorithm for finding the longest increasing subsequence has a time complexity
of O(n^2), where n is the length of the input array. This is because we iterate through the array
once and for each element, we iterate through all previous elements to check if we can extend the
longest increasing subsequence ending at that index. The inner loop takes at most O(n) time and is
executed for each element in the array, leading to a total time complexity of O(n^2).

In terms of space complexity, the algorithm uses an array of size n to store the dynamic
programming table, leading to a space complexity of O(n).

Overall, the time complexity of O(n^2) is reasonable for most practical use cases and the space
complexity of O(n) is quite efficient. However, if the input array is very large, the time complexity
may become a bottleneck and alternative algorithms with better time complexity, such as the
O(nlogn) algorithm using binary search, may be more suitable. Additionally, if the input array
contains duplicate elements, the algorithm may not correctly identify all longest increasing
subsequences.

Here's an updated implementation of the algorithm in Java that includes error handling and test-result reporting:
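
A minimal sketch of such an updated version, assuming the same method name (the class name LisWithValidation and the exact error message are illustrative):

public class LisWithValidation {

    public static int longestIncreasingSubsequence(int[] nums) {
        if (nums == null || nums.length == 0) {
            // descriptive error instead of a NullPointerException or a wrong answer
            throw new IllegalArgumentException("Input array must not be null or empty");
        }
        int[] dp = new int[nums.length];
        java.util.Arrays.fill(dp, 1);
        for (int i = 1; i < nums.length; i++) {
            for (int j = 0; j < i; j++) {
                if (nums[j] < nums[i]) {
                    dp[i] = Math.max(dp[i], dp[j] + 1);
                }
            }
        }
        int max = 1;
        for (int len : dp) {
            max = Math.max(max, len);
        }
        return max;
    }

    public static void main(String[] args) {
        int[][] testCases = { {10, 9, 2, 5, 3, 7, 101, 18}, null };
        for (int[] test : testCases) {
            try {
                System.out.println("Result: " + longestIncreasingSubsequence(test));
            } catch (IllegalArgumentException e) {
                System.out.println("Error: " + e.getMessage());    // report the failure instead of crashing
            }
        }
    }
}
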
In this updated implementation, I added error handling for the case where the input array is null or
empty. If this occurs, an IllegalArgumentException is thrown with a descriptive error message.

To test the error handling, I created a new test case with a null input array. In this case, the
program correctly catches the exception and prints the error message instead of attempting to
compute the longest increasing subsequence.
Here are the test results:
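
Running the sketch above on the test cases {10, 9, 2, 5, 3, 7, 101, 18} and null would print output along these lines:

Result: 4
Error: Input array must not be null or empty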

P6 Discuss how asymptotic analysis can be used to assess the effectiveness of an algorithm.

Data structures and algorithms are fundamental concepts in computer science that are used to
solve a wide range of problems. Choosing the right data structure and algorithm is critical to the
performance and efficiency of a solution.

Data structures are used to organize and store data in a way that allows for efficient access and manipulation. Different data structures have different strengths and weaknesses in terms of time and space complexity. For example, arrays have O(1) time complexity for accessing an element by index, but O(n) time complexity for inserting or deleting an element in the middle, because later elements must be shifted. Linked lists, on the other hand, have O(1) time complexity for inserting or deleting an element once the position is known, but O(n) time complexity for accessing an element by index.

Algorithms are used to perform a specific task or computation on data. Different algorithms have
different time and space complexities, and choosing the right algorithm is important to ensure that
the solution is efficient and scalable. Asymptotic analysis is a method used to analyze the time and
space complexity of an algorithm as the input size grows towards infinity. This analysis allows us to
compare the efficiency of different algorithms and choose the most appropriate one for a given
problem.
Asymptotic analysis describes the complexity of an algorithm as a function of the input size. The most commonly used notations are Big-O, Big-Ω, and Big-Θ. Big-O notation provides an upper bound on the growth rate of the function, and it is commonly used to describe the worst-case time complexity of an algorithm. For example, an algorithm with time complexity O(n^2) means that the number of operations required to solve the problem grows at most quadratically with the size of the input. Big-Ω notation provides a lower bound on the growth rate of the function, and it is often quoted when describing the best-case behavior of an algorithm. Big-Θ notation provides both an upper and a lower bound on the growth rate (a tight bound), and it is used when the upper and lower bounds match.

As I mentioned earlier, the effectiveness of data structures and algorithms can be assessed through
asymptotic analysis, which provides an understanding of the growth rate of the algorithm's time
and space complexity as the input size grows. Let's take an example to see how asymptotic analysis
can be used to assess the effectiveness of an algorithm.

Consider the problem of searching for an element in an array of n elements. One approach is to use
a linear search algorithm, which simply scans through the array element by element until it finds
the target element. The worst-case time complexity of linear search is O(n), which means that the
number of operations required to search for an element grows linearly with the size of the input
array. This approach is simple and easy to understand, but it is not efficient for large arrays.

Another approach is to use a binary search algorithm, which is based on the divide and conquer
strategy. The basic idea is to divide the array in half at each step and compare the target element
with the middle element. If the target element is greater than the middle element, then we search
in the right half of the array; otherwise, we search in the left half of the array. The worst-case time
complexity of binary search is O(log n), which means that the number of operations required to
search for an element grows logarithmically with the size of the input array. This approach is much
more efficient than linear search for large arrays.
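
To make this concrete, a simple iterative binary search in Java might look like the sketch below (the class and method names are illustrative; it assumes an int array sorted in ascending order):

public class BinarySearchDemo {

    // Returns the index of target in the sorted array, or -1 if it is not present.
    static int binarySearch(int[] sorted, int target) {
        int low = 0, high = sorted.length - 1;
        while (low <= high) {                        // the search range halves on every iteration: O(log n)
            int mid = low + (high - low) / 2;
            if (sorted[mid] == target) {
                return mid;
            } else if (sorted[mid] < target) {
                low = mid + 1;                       // continue in the right half
            } else {
                high = mid - 1;                      // continue in the left half
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] data = {2, 3, 5, 7, 11, 13, 17};
        System.out.println(binarySearch(data, 11)); // prints 4
        System.out.println(binarySearch(data, 4));  // prints -1
    }
}
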
Asymptotic analysis provides a way to compare the efficiency of these two algorithms. The worst-
case time complexity of linear search is O(n), which means that as the size of the input array grows,
the time required to search for an element grows linearly. On the other hand, the worst-case time
complexity of binary search is O(log n), which means that as the size of the input array grows, the
time required to search for an element grows logarithmically. This means that binary search is
much more efficient than linear search for large arrays.

In summary, asymptotic analysis provides a way to assess the effectiveness of algorithms by analyzing their time and space complexity as the input size grows. It helps us to understand the growth rate of the algorithm and compare the efficiency of different algorithms for a given problem.

4.2 A trade-off in the context of specifying an ADT refers to the decision to sacrifice one desirable
feature for another. When designing an ADT, it is often necessary to make trade-offs between
various factors such as time complexity, space complexity, ease of use, and performance.

For example, let's consider the trade-offs involved in designing a hash table. Hash tables are used
for efficient key-value pair lookups. One important factor to consider when designing a hash table is
the load factor, which is the ratio of the number of elements in the table to the size of the table.

If the load factor is too high, collisions become more frequent, which slows down lookups. On the
other hand, if the load factor is too low, the table takes up more memory than necessary.
Therefore, a trade-off has to be made between memory usage and lookup time.

One way to reduce collisions and improve lookup time is to increase the size of the table. However,
this also increases memory usage. Another approach is to use open addressing or chaining to
resolve collisions, but this can make the implementation more complex.
For example, if we choose to use open addressing with linear probing, we can reduce the memory
usage since we only need to store the key-value pairs in the table. However, this can result in poor
performance when the table is heavily loaded due to a high rate of collisions. If we instead choose
to use chaining, we may have better performance when the table is heavily loaded, but we need to
allocate additional memory to store the linked lists.

In this example, the trade-off is between memory usage and lookup time. Depending on the
requirements of the application, we need to choose an appropriate load factor and collision
resolution strategy that balances these trade-offs.
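
In Java, this trade-off is exposed directly by java.util.HashMap, whose constructor accepts an initial capacity and a load factor. A small sketch (the class name, capacities and load factors below are illustrative):

import java.util.HashMap;
import java.util.Map;

public class LoadFactorDemo {
    public static void main(String[] args) {
        // Lower load factor: the table resizes earlier, so fewer collisions but more memory.
        Map<String, Integer> sparse = new HashMap<>(64, 0.5f);
        // Higher load factor: the table stays more compact, but collisions become more frequent as it fills.
        Map<String, Integer> dense = new HashMap<>(16, 0.9f);
        sparse.put("answer", 42);
        dense.put("answer", 42);
        System.out.println(sparse.get("answer") + " " + dense.get("answer"));   // prints 42 42
    }
}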

Therefore, when specifying an ADT, it is important to consider the trade-offs between different
design options and choose an implementation that meets the specific requirements of the
application.

4.3 Implementation-independent data structures are data structures that can be used in a
programming language or environment without being tied to a specific implementation or
programming language. Here are three benefits of using implementation-independent data
structures:

Portability: Implementation-independent data structures can be used across different programming languages and platforms. For example, the JSON (JavaScript Object Notation) format is an implementation-independent data structure that can be used in multiple programming languages such as Python, Java, and C#. This makes it easier to share data between different systems, platforms, and programming languages.

Separation of Concerns: Using implementation-independent data structures can help separate the
concerns of data representation and data manipulation. By using an independent data structure,
the focus can be on the logic and algorithms to manipulate the data rather than on the details of
the underlying implementation.

Flexibility: Implementation-independent data structures can be modified or adapted to suit the specific needs of an application. For example, a binary search tree is an implementation-independent data structure that can be modified to suit the requirements of different applications. The same binary search tree can be used to store different types of data such as integers, strings, or objects, and can be implemented in different ways such as using arrays, linked lists, or nodes.

In summary, implementation-independent data structures offer portability, separation of concerns, and flexibility, which can make them a useful tool for data representation and manipulation in a variety of programming languages and platforms.

P7

Here are two ways in which the efficiency of an algorithm can be measured in Java:

Time complexity: Time complexity measures the amount of time an algorithm takes to run as a
function of the input size. In Java, we can measure the time complexity of an algorithm using the
System.nanoTime() method to get the current system time in nanoseconds. We can then measure
the time it takes for an algorithm to run by measuring the difference between the start time and
end time. For example, consider the following code snippet that sorts an array of integers using the
bubble sort algorithm:
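
A sketch of such a timing measurement (the class name, the array size of 10,000 and the random test data are illustrative):

import java.util.Random;

public class BubbleSortTiming {

    static void bubbleSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            for (int j = 0; j < a.length - 1 - i; j++) {
                if (a[j] > a[j + 1]) {               // swap adjacent elements that are out of order
                    int tmp = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = tmp;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] numbers = new Random().ints(10_000, 0, 1_000_000).toArray();   // random test data
        long startTime = System.nanoTime();          // record the start time
        bubbleSort(numbers);
        long endTime = System.nanoTime();            // record the end time
        long duration = endTime - startTime;         // elapsed time in nanoseconds
        System.out.println("Bubble sort took " + duration / 1_000_000 + " ms");
    }
}
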
In this example, we use the System.nanoTime() method to measure the time taken to sort an array
using the bubble sort algorithm. We store the start time in a variable called startTime, and the end
time in a variable called endTime. We then calculate the duration by subtracting the start time from
the end time, and print out the result.

Space complexity: Space complexity measures the amount of memory an algorithm uses as a
function of the input size. In Java, we can measure the space complexity of an algorithm by
analyzing the amount of memory used by the algorithm's data structures. For example, consider
the following code snippet that calculates the factorial of a number using recursion:
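
A sketch of such a recursive factorial (the class name FactorialDemo is illustrative):

public class FactorialDemo {

    // Each recursive call adds a new stack frame, so memory use grows linearly with n: O(n) space.
    static long factorial(int n) {
        if (n <= 1) {
            return 1;                 // base case: no further frames are pushed
        }
        return n * factorial(n - 1);  // one pending stack frame per value of n
    }

    public static void main(String[] args) {
        System.out.println(factorial(10));   // prints 3628800
    }
}
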
In this example, the factorial() method uses recursion to calculate the factorial of a number. Each recursive call creates a new stack frame, which uses additional memory. Therefore, the space complexity of this algorithm is proportional to the input size n. We can also use tools like a Java profiler to analyze the memory usage of the algorithm and identify potential memory leaks or inefficiencies.

