Stack Basics

The document provides an overview of the stack data structure, highlighting the LIFO principle, common operations, and applications such as expression evaluation and backtracking algorithms. It also compares Java's ArrayList and LinkedList, detailing their performance characteristics, memory usage, and scenarios for optimal use. Additionally, it discusses time complexities and the dynamic resizing of ArrayList, emphasizing the trade-offs between memory efficiency and operational speed.

Stack Data Structure:

● A linear data structure that follows the Last-In-First-Out (LIFO) principle.
● Elements are added and removed at the top of the stack. It can be visualized as a
stack of plates where only the top plate can be removed.
● Commonly used operations:
○ push(element): Adds an element to the top of the stack.
○ pop(): Removes and returns the top element from the stack.
○ peek(): Returns the top element without removing it.
○ isEmpty(): Checks if the stack is empty.
○ size(): Returns the number of elements in the stack.
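The five operations above map directly onto a small array-backed implementation. Below is a minimal sketch; the class name IntStack and the fixed capacity are illustrative choices for this document (in practice, java.util.ArrayDeque is the usual ready-made stack in Java):

import java.util.EmptyStackException;

// A minimal array-backed stack of ints illustrating the five core operations.
// The fixed capacity is an illustrative simplification.
class IntStack {
    private final int[] data;
    private int top = -1; // index of the current top element; -1 means empty

    IntStack(int capacity) {
        data = new int[capacity];
    }

    void push(int element) {
        if (top == data.length - 1) throw new IllegalStateException("stack is full");
        data[++top] = element; // place the new element above the old top
    }

    int pop() {
        if (isEmpty()) throw new EmptyStackException();
        return data[top--]; // return the top and move the top marker down
    }

    int peek() {
        if (isEmpty()) throw new EmptyStackException();
        return data[top]; // look at the top without removing it
    }

    boolean isEmpty() {
        return top == -1;
    }

    int size() {
        return top + 1;
    }
}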

Applications of Stacks:

● Expression evaluation (e.g., infix to postfix conversion)
● Undo functionality in text editors
● Backtracking algorithms (like Depth-First Search)
● Parenthesis matching in expressions
● Function call stack in recursion

Real-World Use Cases of Stacks:

1. Expression Evaluation (Postfix/Infix):
Stacks are used in parsing and evaluating mathematical expressions in infix, postfix
(RPN), and prefix notation. A sketch of postfix evaluation follows this list.
2. Function Call Management:
The call stack used by programming languages to manage function calls follows the
stack LIFO principle.
3. Undo/Redo Functionality:
Applications like text editors use stacks to implement undo/redo operations where the
last operation is undone first (LIFO).
4. Depth-First Search (DFS):
In graph algorithms like DFS, stacks are used to explore nodes in the LIFO manner.
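As referenced in use case 1, here is a sketch of postfix (RPN) evaluation with a stack. The space-separated integer tokens and the four basic operators are assumptions about the input format, and the class and method names are our own:

import java.util.ArrayDeque;
import java.util.Deque;

// Evaluates a postfix (RPN) expression such as "3 4 + 2 *" using a stack.
class PostfixEval {
    static int evaluate(String expr) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (String token : expr.trim().split("\\s+")) {
            switch (token) {
                case "+": case "-": case "*": case "/": {
                    int right = stack.pop(); // operands come off in reverse order
                    int left = stack.pop();
                    stack.push(apply(token, left, right));
                    break;
                }
                default:
                    stack.push(Integer.parseInt(token)); // operand: just push it
            }
        }
        return stack.pop(); // the single remaining value is the result
    }

    static int apply(String op, int a, int b) {
        switch (op) {
            case "+": return a + b;
            case "-": return a - b;
            case "*": return a * b;
            default:  return a / b; // "/" (integer division, for simplicity)
        }
    }

    public static void main(String[] args) {
        System.out.println(evaluate("3 4 + 2 *")); // (3 + 4) * 2 = 14
    }
}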

Time Complexity Analysis:

Operation   Array Implementation   Linked List Implementation
push        O(1)                   O(1)
pop         O(1)                   O(1)
peek        O(1)                   O(1)
isEmpty     O(1)                   O(1)
size        O(1)                   O(1) with a stored count; O(n) worst case if computed by traversal

Common Stack Problems or Questions:

● Balanced Parentheses: Check if a string of parentheses is balanced (see the sketch
after this list).
● Next Greater Element: For every element in an array, find the nearest greater element
to its right.
● Stock Span Problem: For each day, count the consecutive preceding days (including the
current day) on which the stock price was less than or equal to that day's price.
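A sketch of the balanced-parentheses check, supporting (), [], and {} (class and method names are illustrative):

import java.util.ArrayDeque;
import java.util.Deque;

// Checks whether every opening bracket has a matching, properly nested closer.
class BalancedParens {
    static boolean isBalanced(String s) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : s.toCharArray()) {
            switch (c) {
                case '(': case '[': case '{':
                    stack.push(c); // remember the opener until its closer arrives
                    break;
                case ')':
                    if (stack.isEmpty() || stack.pop() != '(') return false;
                    break;
                case ']':
                    if (stack.isEmpty() || stack.pop() != '[') return false;
                    break;
                case '}':
                    if (stack.isEmpty() || stack.pop() != '{') return false;
                    break;
            }
        }
        return stack.isEmpty(); // every opener must have been matched
    }

    public static void main(String[] args) {
        System.out.println(isBalanced("({[]})")); // true
        System.out.println(isBalanced("([)]"));   // false
    }
}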

ArrayList:

● Internally uses: A dynamic array to store elements.


● Performance Characteristics:
○ Access Time (get): O(1) – Fast random access due to indexing.
○ Insertion (add at the end): O(1) (amortized) – Fast if no resizing is required.
○ Insertion (add at the beginning or middle): O(n) – Requires shifting elements.
○ Deletion (from the end): O(1).
○ Deletion (from the beginning or middle): O(n) – Requires shifting elements.
● Memory usage: Requires more memory than the elements alone, because the backing
array reserves spare capacity to make room for future growth.

LinkedList:

● Internally uses: A doubly linked list, where each element (node) points to both its
previous and next element.
● Performance Characteristics:
○ Access Time (get): O(n) – Slower, as the list must be traversed from one end.
○ Insertion (at the beginning or end): O(1) – No shifting required, just adjusting
pointers (both the head and tail are tracked).
○ Insertion (in the middle): O(1) once you already hold a reference to the position
(e.g., via an iterator), but reaching an index by traversal costs O(n).
○ Deletion (from the beginning or end): O(1) – Fast removal by updating pointers;
mid-list deletion likewise needs an O(n) traversal to find the node first.
● Memory usage: Requires more memory per element, because each node stores pointers
to its next and previous neighbors.

When to Use Which?

Use ArrayList when:

● You need fast random access (e.g., frequently using get(i)).
● You mostly add or remove elements at the end of the list.
● Your list size is relatively stable or doesn’t change drastically.
● Example scenario: Storing and accessing elements in a catalog or list of items
where the order doesn’t change frequently.

Use LinkedList when:

● You need fast insertions and deletions at the ends of the list, or in the middle when
you already hold an iterator at the position.
● You are working with large data sets that require frequent insertion or removal of
elements.
● You don’t need random access or are working with a queue- or stack-like structure.
● Example scenario: Implementing a queue, deque, or any structure where you frequently
insert or remove elements at the ends (see the sketch below).
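As an illustration of that last scenario, LinkedList implements Java's Deque interface, so it can serve as a queue or a stack directly, with no wrapper class (the values below are arbitrary):

import java.util.Deque;
import java.util.LinkedList;

// LinkedList as a Deque: O(1) operations at both ends.
class DequeDemo {
    public static void main(String[] args) {
        Deque<String> deque = new LinkedList<>();
        deque.addLast("a");    // enqueue at the tail (queue behavior)
        deque.addLast("b");
        deque.addFirst("top"); // push at the head (stack behavior)
        System.out.println(deque.pollFirst()); // "top"
        System.out.println(deque.pollFirst()); // "a"
        System.out.println(deque.pollLast());  // "b"
    }
}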

When would you choose LinkedList over ArrayList?

● If you require frequent insertions or deletions at the ends of the list, or at positions
you reach via an iterator (index-based edits in the middle still pay an O(n) traversal).
● When random access is not critical, and the focus is on efficient modifications.

How does memory usage differ between LinkedList and ArrayList?

● ArrayList generally uses less memory, as it stores only the element references
themselves (plus any unused spare capacity in the backing array).
● LinkedList requires more memory because each node stores references (pointers) to
both the next and previous elements, in addition to the element itself.

When deciding between using a LinkedList or an ArrayList in Java, it’s important to consider
the differences in their internal structure and performance characteristics.

Here's a breakdown of when to use LinkedList vs ArrayList, based on your needs:

1. ArrayList:

● Internals and performance characteristics: as summarized above (a dynamic array
with O(1) random access, amortized O(1) append, and O(n) insertion or deletion
away from the end).

Use ArrayList when:

● You need fast random access to elements (using get).
● You do more lookups than insertions or deletions.
● You primarily add elements at the end of the list.
● Memory usage is not a primary concern.

2. LinkedList:

● Internals and performance characteristics: as summarized above (a doubly linked
list with O(n) access by index and O(1) insertion and deletion at either end).

Use LinkedList when:

● You need frequent insertions or deletions at the beginning of the list, or in the
middle via an iterator.
● You don’t need fast random access to elements.
● The size of the list changes frequently.

3. When to Use Which?

When to Use ArrayList for Stack:

1. Small to Medium-Sized Stacks:
When you know that the stack will not grow too large (e.g., handling 100-1,000 elements)
and you need fast access, use ArrayList.
2. Balanced Push and Pop Operations:
When the stack operations are fairly balanced and there's not a heavy emphasis on
continuous insertion or removal.
3. Memory Constraints:
When memory overhead is a concern, and you don’t want the extra cost of node pointers
in a LinkedList.
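Here is a sketch of a stack backed by an ArrayList. Pushing and popping at the end of the list keeps both operations O(1) (amortized for push); the class and method names are our own:

import java.util.ArrayList;
import java.util.List;

// A stack backed by an ArrayList: the END of the list is the top of the stack.
class ArrayListStack<T> {
    private final List<T> items = new ArrayList<>();

    void push(T element) {
        items.add(element); // append at the end: amortized O(1)
    }

    T pop() {
        if (items.isEmpty()) throw new IllegalStateException("stack is empty");
        return items.remove(items.size() - 1); // remove from the end: O(1), no shifting
    }

    T peek() {
        if (items.isEmpty()) throw new IllegalStateException("stack is empty");
        return items.get(items.size() - 1);
    }

    boolean isEmpty() { return items.isEmpty(); }

    int size() { return items.size(); }
}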

When to Use LinkedList for Stack:

1. Large or Dynamic Stacks:
If the stack size can grow large or shrink significantly, and you don’t want to deal with
resizing arrays.
2. Frequent Push/Pop Operations:
For scenarios where you have frequent insertions and deletions at the head of the stack
(top of the stack), LinkedList is more efficient.
3. Avoiding Resizing:
If you want to avoid the overhead of resizing that occurs with ArrayList, a
LinkedList is a better option since it grows dynamically without reallocation.
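And the LinkedList counterpart: pushing and popping at the head gives true O(1) operations with no resizing. Again the class and method names are illustrative:

import java.util.LinkedList;

// A stack backed by a LinkedList: the HEAD of the list is the top of the stack.
class LinkedListStack<T> {
    private final LinkedList<T> items = new LinkedList<>();

    void push(T element) {
        items.addFirst(element); // O(1): just relink the head pointers
    }

    T pop() {
        if (items.isEmpty()) throw new IllegalStateException("stack is empty");
        return items.removeFirst(); // O(1): no elements are shifted or copied
    }

    T peek() {
        if (items.isEmpty()) throw new IllegalStateException("stack is empty");
        return items.getFirst();
    }

    boolean isEmpty() { return items.isEmpty(); }

    int size() { return items.size(); }
}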


What are the time complexities of get, add, and remove in ArrayList and LinkedList?

● ArrayList:
○ get(i): O(1)
○ add(e) at end: O(1) (amortized)
○ remove(i): O(n) (due to shifting)
● LinkedList:
○ get(i): O(n) (since it requires traversal)
○ add(e) at head/tail: O(1)
○ remove(i): O(n) (but O(1) if you're removing the first or last element)


In short:

● ArrayList is best when you need fast access (retrieval) and occasional modifications.
● LinkedList is best when you frequently insert or delete elements at various positions in
the list.

Criteria          Fixed-Size Array Stack      ArrayList Stack                LinkedList Stack

Size Management   Fixed size, must be         Dynamically resizable,         Dynamic, no size limit as
                  defined at the start        grows when full                elements are linked

Memory Usage      Efficient (no extra         Efficient for small stacks,    Higher, due to node
                  memory for pointers)        but overhead for resizing      structure (data + pointers)

Push (Insertion)  O(1) (if not full)          Amortized O(1)                 O(1) (constant-time
                                              (resizes when full)            insertion at head)

Pop (Removal)     O(1) (removes from top)     O(1)                           O(1) (constant-time
                                                                             removal at head)

Peek (Top         O(1)                        O(1)                           O(1)
Element)

Space Complexity  O(n) (size of the array)    O(n) (array plus extra         O(n) (list plus memory
                                              space when resizing)           for pointers)

Resizing          Not possible, fixed size    Yes, a resize costs O(n)       None needed, grows
Overhead                                                                     dynamically

Memory            Very efficient, no          Uses more memory               Less efficient, due to
Efficiency        wasted memory               during resizing                pointer overhead

Random Access     O(1) (direct access         O(1) (direct access            O(n) (traverse the list)
                  via index)                  via index)

Use Cases         Stack size known in         Small-to-medium stacks         Stacks with unpredictable
                  advance; tight memory       where resizes are              size changes; frequent
                  constraints; avoiding       infrequent and push/pop        push/pop; large datasets
                  resizing overhead           is less frequent               where resizing would be
                                                                             a bottleneck

When to Use       Best for small,             Suitable for dynamic           Best for stacks with
                  fixed-size stacks           stacks with fewer              frequent push/pop and
                                              operations                     unpredictable size

Advantages        Simple and fast;            Resizes dynamically as         No resizing needed;
                  no resizing overhead        needed; supports               constant-time insertion
                                              dynamic size                   and removal

Disadvantages     Fixed size, can lead        Overhead of resizing           Higher memory usage
                  to stack overflow           when full                      due to pointers

Stack Overflow    Yes (when the array         No (resizes when               No (grows dynamically)
Possibility       is full)                    needed)

Best For          Stacks with a known         Small-to-moderate stacks       Stacks whose size
                  maximum size                where resizing is rare         changes frequently

Amortized O(1) refers to an average time complexity over a series of operations, where most
operations are O(1) (constant time), but occasionally, one operation might take longer (e.g.,
O(n)).

For example, in the case of ArrayList, adding an element is generally O(1) because you add
the element to the end of the list. However, when the underlying array is full, the array must be
resized to accommodate new elements. This resizing involves creating a new, larger array and
copying all the elements over, which takes O(n) time.

However, this resizing doesn't happen frequently. Most of the time, adding elements is still O(1).
So when we consider the average time complexity over many additions, the total cost of all
operations is spread out, and it averages out to O(1). This is what we call amortized O(1).

How it works in an ArrayList:

● Normal insertion (when there's space): O(1)
● Resizing operation (when full): O(n)
But resizing happens rarely, so the average over many insertions is amortized O(1).
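To make the averaging concrete, here is a toy simulation that counts the element copies caused by growth. This is not ArrayList's real code: the doubling growth factor is an assumption (Java's ArrayList actually grows by roughly 1.5x), but the arithmetic is the same in spirit:

// Toy simulation: count element copies caused by doubling a backing array.
class AmortizedDemo {
    public static void main(String[] args) {
        int n = 1 << 20; // 1,048,576 appends (a power of two, for a clean total)
        int capacity = 1;
        int size = 0;
        long copies = 0;
        for (int i = 0; i < n; i++) {
            if (size == capacity) { // "full": simulate the O(n) resize
                copies += size;     // every existing element is copied once
                capacity *= 2;
            }
            size++;                 // the O(1) append itself
        }
        // copies = 1 + 2 + 4 + ... + n/2 = n - 1, so n appends take fewer than
        // 2n total steps: about 2 steps per append on average, i.e. amortized O(1).
        System.out.printf("appends=%d, copies=%d, copies per append=%.3f%n",
                n, copies, (double) copies / n);
    }
}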

Analogy:

Imagine filling up a jar with marbles. Every time you add a marble, it fits, and it takes no extra
time (O(1)). But once the jar is full, you need to get a bigger jar, which takes time to transfer all
the marbles (O(n)). This jar-resizing step doesn't happen often, so most of your marble-adding
actions are still quick, making the overall average O(1).

Although an ArrayList is dynamically sized, meaning it grows automatically when
needed, the underlying implementation in Java uses a resizable array. This array has a
fixed capacity at any given time, which is why the concept of being "full" still applies
temporarily.

Here’s why it can become "full":

● Initially, the ArrayList is backed by a regular array with a certain capacity (e.g., 10
elements by default).
● When you keep adding elements, eventually the capacity of the backing array is
reached.
● At this point, the ArrayList must resize the backing array to make more space.

How does the resizing work?

● When the ArrayList reaches its current capacity, it creates a new, larger array
(Java's ArrayList grows by roughly 1.5 times; many other dynamic arrays double).
● All the elements from the old array are copied into this new, larger array.
● After this resize, new elements can be added to the ArrayList again until it fills up, at
which point the process repeats.
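If the eventual size is known up front, these intermediate resizes can be avoided entirely by pre-sizing the list. The capacity constructor used below is a standard java.util.ArrayList API; the element count is arbitrary:

import java.util.ArrayList;
import java.util.List;

// Pre-sizing: the backing array starts at capacity n, so no resize/copy occurs.
class PreSizing {
    public static void main(String[] args) {
        int n = 10_000;
        List<Integer> list = new ArrayList<>(n); // capacity hint avoids growth
        for (int i = 0; i < n; i++) {
            list.add(i); // always O(1): the array never fills mid-loop
        }
        System.out.println(list.size()); // 10000
    }
}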

Why "Amortized O(1)" still applies:

● Even though resizing takes O(n) time because you have to copy all elements, it doesn’t
happen on every insertion.
● Most insertions are done in O(1) (constant time).
● The cost of resizing is spread out over all the insertions, making the average time per
insertion O(1) in the long run.

Dynamic Nature of ArrayList:

● It adjusts its size dynamically by creating larger arrays, but this resizing mechanism is
what causes the array to occasionally be "full" before it can grow.
● This is why ArrayList offers amortized O(1) insertion — most insertions are fast, but
resizing can be costly.

In the case of an ArrayList, the underlying implementation is the specific data structure
and algorithms used to store and manipulate the elements: in Java, a dynamic array,
along with the methods for adding, removing, and accessing elements.

Key points about underlying implementation:

● Hidden from the user: The user typically interacts with the ArrayList through a
high-level interface, without needing to know the details of the underlying
implementation.
● Essential for functionality: The underlying implementation is crucial for the correct
behavior and performance of the ArrayList.
● Can vary between implementations: Different programming languages or libraries may
have different underlying implementations for ArrayLists, leading to variations in
performance or features.
