cs61bl Su14 F Sol
(b) Input: An array of n Comparable objects that is sorted except for k randomly located elements
that are out of place (that is, the list without these k elements would be completely sorted)
Sort: Insertion sort
Runtime: O(nk)
Explanation: For each of the n − k sorted elements, insertion sort needs only 1 comparison to check that it is in the correct location (larger than the last element in the sorted section). The remaining k out-of-place elements could be located anywhere in the sorted section. In the worst case, they would be inserted at the beginning of the sorted section, which means there are O(n) comparisons in the worst case for these k elements. This leads to an overall runtime of O(nk + n), which simplifies to O(nk).
Comments: It was a common error to say the runtime was O(n) or O(n²). We gave partial credit for both of these answers. O(n) was incorrect because it underestimated the number of comparisons required for the out-of-place elements. O(n²) was incorrect because it ignored the fact that only 1 comparison is needed for already sorted elements.
Also, it is incorrect to equate k with log₂ n. It was only given that k < log₂ n.
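For reference, a standard insertion sort sketch (not part of the original solution) showing where the comparisons come from: the while condition fails immediately for an element that is already in place, costing exactly one comparison.

// Insertion sort sketch. For a nearly sorted array, the while condition
// fails immediately for each of the n - k in-place elements (1 comparison
// each); each of the k out-of-place elements may shift up to O(n) slots.
static <T extends Comparable<T>> void insertionSort(T[] a) {
    for (int i = 1; i < a.length; i++) {
        T current = a[i];
        int j = i - 1;
        while (j >= 0 && a[j].compareTo(current) > 0) {  // 1 comparison per iteration
            a[j + 1] = a[j];                             // shift larger element right
            j--;
        }
        a[j + 1] = current;
    }
}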
(c) Input: An array of n Comparable objects that is sorted except for k randomly located pairs of
adjacent elements that have been swapped (each element is part of at most one pair).
Sort Option 1: Bubble sort with optimization
Runtime 1: O(n) or O(n + k)
Explanation 1: It was necessary to mention the optimization for bubble sort, which involves stopping after a complete iteration in which no swaps occur. In other words, optimized bubble sort stops once the list is sorted, rather than always running through n iterations for a runtime of O(n²). It takes one iteration of bubble sort to swap the k randomly reversed pairs of adjacent elements, then another iteration before optimized bubble sort stops due to no swaps. If you count each swap as taking O(1) time, the total runtime is O(2n + k), which simplifies to O(n).
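For reference, a sketch of bubble sort with the early-exit optimization described above (illustrative, not the exam's code):

// Optimized bubble sort sketch: stop as soon as a full pass makes no swaps.
// On input (c), the first pass fixes the k swapped pairs and the second pass
// confirms the array is sorted, for O(2n + k) = O(n) total work.
static <T extends Comparable<T>> void bubbleSort(T[] a) {
    boolean swapped = true;
    while (swapped) {
        swapped = false;
        for (int i = 0; i < a.length - 1; i++) {
            if (a[i].compareTo(a[i + 1]) > 0) {
                T temp = a[i];             // swap the adjacent out-of-order pair
                a[i] = a[i + 1];
                a[i + 1] = temp;
                swapped = true;
            }
        }
    }
}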
Sort Option 2: Insertion sort
Runtime 2: O(n) or O(n + k)
Explanation 2: Insertion sort requires 1 comparison for each of the n − k sorted elements, then requires 2 comparisons for the second element in each of the k pairs. This leads to a runtime of O(n + k), which simplifies to O(n).
(d) Input: An array of n elements where all of the elements are random ints between 0 and k
Sort Option 1: Counting sort
Runtime 1: O(n) or O(n + k)
Explanation 1: Counting sort involves initializing a counts array with one slot per possible value (O(k) slots), then going through the n elements while incrementing counts in the array. Recovering the sorted list requires going through the k buckets and outputting n numbers. This is a total runtime of O(2n + 2k), which simplifies to O(n + k) or O(n).
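A minimal counting sort sketch under the stated assumptions (ints in the range 0 to k):

// Counting sort sketch for ints in [0, k]: O(k) to create the counts array,
// O(n) to count, and O(n + k) to write the output, for O(n + k) total.
static int[] countingSort(int[] a, int k) {
    int[] counts = new int[k + 1];              // one bucket per value 0..k
    for (int value : a) {
        counts[value]++;
    }
    int[] sorted = new int[a.length];
    int i = 0;
    for (int value = 0; value <= k; value++) {  // walk the k buckets...
        for (int c = 0; c < counts[value]; c++) {
            sorted[i++] = value;                // ...outputting n numbers
        }
    }
    return sorted;
}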
Comments: Counting sort was the easier sort to explain. Radix sort, explained below, required a more intricate explanation to prove that it had as efficient a runtime as counting sort.
Sort Option 2: Radix sort
Runtime 2: O(n)
Explanation 2: Radix sort is O((n + b)d), where b is the number of buckets used and d is the number of digits representing the largest element. (In this case, the largest element was k.) Because the input array is composed of Java ints, we can say that b is equal to 2 and d is equal to 32, because Java ints are 32 bits long and each bit can be a 0 or a 1. Thus, because b and d are constants, the runtime for radix sort on Java ints is O(n).
Comments: We also accepted the answer that b was 10 and d was a constant when simplifying the runtime to O(n) based on Java ints.
While it is true that radix sort runs in O(n log k), because the number k can be represented in log k bits and b would then be a constant, this runtime did not prove that radix sort was as efficient a sort as counting sort. Thus, we only gave partial credit for this runtime.
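For reference, a sketch of LSD radix sort with b = 2 buckets on 32-bit ints (assuming all inputs are nonnegative, as in this problem, since the sign bit would otherwise need special handling):

// LSD radix sort sketch with b = 2 buckets and d = 32 digits (bits).
// Each of the 32 passes does O(n + b) work, so the total is O(n) for
// constant b and d. Assumes nonnegative ints (true here, since values
// lie between 0 and k).
static void radixSort(int[] a) {
    int[] buffer = new int[a.length];
    for (int bit = 0; bit < 32; bit++) {
        int zeros = 0;
        for (int value : a) {                   // count the 0-bucket first
            if (((value >> bit) & 1) == 0) {
                zeros++;
            }
        }
        int zeroIndex = 0, oneIndex = zeros;    // stable distribution by bit
        for (int value : a) {
            if (((value >> bit) & 1) == 0) {
                buffer[zeroIndex++] = value;
            } else {
                buffer[oneIndex++] = value;
            }
        }
        System.arraycopy(buffer, 0, a, 0, a.length);
    }
}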
Sort Option 3: Bucket sort
Runtime 3: O(n) or O(n + k)
Explanation 3: The correctness of bucket sort as an answer depended heavily on the choice
of buckets (and how many buckets were used, in particular). Solutions that chose k buckets
were quite similar to counting sort and thus had the same runtime as counting sort.
Comments: Solutions that did not explain the choice or number of buckets were docked points
because bucket sort actually refers to any sorting algorithm that involves placing elements in
buckets. Thus, counting sort, radix sort, and even quicksort and merge sort are all categorized
as bucket sorts. Only explanations of bucket sort that were specific about the implementation
received full points.
(b) Give a topological sort ordering of the vertices in the following directed acyclic graph (in other
words, give a linearized ordering of the vertices in the graph).
[Directed acyclic graph figure (vertices include D, E, F, G) and solution not recoverable from extraction.]
(a) Draw the splay tree that results after calling find("A") on the splay tree below.
Solution:
[Splay tree diagrams showing the intermediate rotations are not recoverable from extraction.]
(b) Add a "B" node to the AVL tree below by drawing it below. Then draw the tree that results after
the AVL tree balances itself. Show all intermediary steps, if any.
[AVL tree figure not recoverable from extraction.]
Solution:
[Intermediate and final AVL tree diagrams not recoverable from extraction.]
(c) Draw the 2-3 tree that results after inserting the following elements in the given order:
1 2 3 4 5 6
Solution:
[Step-by-step 2-3 tree diagrams not recoverable from extraction.]
[Next part, insertion order: 4 2 1 3 6 5]
Solution: [Step-by-step 2-3 tree diagrams not recoverable from extraction.]
[Huffman coding question; the question text and tree figure are not recoverable from extraction. The recoverable data:]
Symbol frequencies: a 5, b 2, c 8, d 7, e 3, f 7, g 1
Resulting codes:
a 001
b 00001
c 01
d 11
e 0001
f 10
g 00000
5 Pair (5 points)
Write a program, Pair.java, that generates the box-and-pointer diagram shown below when
run. Your program should include a class with a main method. In the diagram below, each object’s
static type is labeled next to the corresponding variable name. Each object’s dynamic type is not
shown.
Solution: [Box-and-pointer diagram and reference code not recoverable from extraction.]
Comments: Many students didn’t use generics, and instead made instance variables have static type Object (we only took off one point for this). Another common mistake was to use p2 before it was initialized (for example, Pair p2 = new Pair(p1, p2);).
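Since the reference solution and diagram are not recoverable, the following is a hypothetical sketch of the kind of program expected; the actual diagram determines the exact fields and wiring, so the types and names here are illustrative only.

// Hypothetical sketch only: the actual box-and-pointer diagram is not shown,
// so the wiring below is illustrative. Key points from the comments: use
// generics rather than Object fields, and do not reference p2 before it is
// initialized; a setter lets a pair point to itself after construction.
public class Pair<A, B> {

    private A first;
    private B second;

    public Pair(A first, B second) {
        this.first = first;
        this.second = second;
    }

    public void setSecond(B second) {
        this.second = second;
    }

    public static void main(String[] args) {
        Pair<Integer, Integer> p1 = new Pair<Integer, Integer>(1, 2);
        Pair<Pair<Integer, Integer>, Pair<?, ?>> p2 =
                new Pair<Pair<Integer, Integer>, Pair<?, ?>>(p1, null);
        p2.setSecond(p2);   // p2's second field now points back to p2 itself
    }
}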
For each of the following methods, explain whether the O class would still compile if the method is added to it. If the O class wouldn’t compile, explain why. The O class may have additional methods/constructors. The V class does not.
[The candidate methods and solutions are not recoverable from extraction.]
(a) Recall that Dijkstra’s algorithm finds the shortest paths from some starting vertex s to all other vertices in the graph. In terms of big-Oh, |E|, and |V|, how many times does each of the following priority queue operations get called in one complete run of Dijkstra’s algorithm?
i) enqueue: O(|V|)
(b) In lab, we analyzed the running time of Dijkstra’s algorithm using a priority queue implemented
with a binary min-heap. Now let’s use a priority queue implemented with Java’s HashMap that
maps vertices to their priority values. Assuming that the hash map operations put, get, and
remove take constant time, what are the new big-Oh running times (in terms of |E| and |V|) of
a single call to each of the following operations:
i) enqueue: O(1)
v) update: O(1)
Comments: Some students didn’t realize that these were the running times of a single
call to each operation (as opposed to part (a), which asked for how many times each
operation was called). One common mistake was to give the running time of dequeue
as O(1). This was incorrect because in order to dequeue, the entire HashMap needs to
be traversed to find the vertex with the lowest priority value. Another common mistake
was to give the running time of isEmpty as O(|V|). This was incorrect because Java’s
implementation of HashMap (like most implementations) has a variable to keep track of
the size of the HashMap.
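A sketch of what such a HashMap-backed priority queue might look like (the class and method signatures are assumptions, not code from the exam):

// Sketch of a HashMap-backed priority queue (hypothetical class, not the
// exam's code). enqueue/update/isEmpty are O(1); dequeue must scan every
// entry to find the minimum priority, so it is O(|V|).
import java.util.HashMap;
import java.util.Map;

public class HashMapPQ<V> {

    private final HashMap<V, Double> priorities = new HashMap<V, Double>();

    public void enqueue(V vertex, double priority) {    // O(1)
        priorities.put(vertex, priority);
    }

    public void update(V vertex, double priority) {     // O(1)
        priorities.put(vertex, priority);
    }

    public boolean isEmpty() {                          // O(1): size is a field
        return priorities.isEmpty();
    }

    public V dequeue() {                                // O(|V|): full scan
        if (priorities.isEmpty()) return null;
        V best = null;
        double bestPriority = Double.POSITIVE_INFINITY;
        for (Map.Entry<V, Double> entry : priorities.entrySet()) {
            if (best == null || entry.getValue() < bestPriority) {
                best = entry.getKey();
                bestPriority = entry.getValue();
            }
        }
        priorities.remove(best);
        return best;
    }
}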
(c) With our changes above, what is the new big-Oh running time of Dijkstra’s algorithm (in terms of |V| and |E|)? Show your work.
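Solution sketch (the worked solution is not recoverable here; this assumes the standard call counts from part (a): enqueue and dequeue are each called O(|V|) times and update is called O(|E|) times): with the HashMap implementation, dequeue costs O(|V|) per call while the other operations cost O(1), so the total is O(|V|) · O(|V|) + O(|E|) · O(1) = O(|V|² + |E|), which simplifies to O(|V|²) because |E| ≤ |V|².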
 1 import java.util.ArrayList;
 2
 3 public class MyPriorityQueue {
 4
 5     private ArrayList<Integer> binMinHeap;
 6
 7     public MyPriorityQueue() {
 8         binMinHeap = new ArrayList<Integer>();
 9         binMinHeap.add(null);
10     }
11
12     // Removes and returns item in priority queue with smallest priority
13     public Integer dequeue() {
14         Integer toReturn = binMinHeap.get(1);
15         binMinHeap.set(1, binMinHeap.remove(binMinHeap.size() - 1));
16         bubbleDown(1);
17         return toReturn;
18     }
19
20     // Adds item to priority queue
21     public void enqueue(Integer item) {
22         binMinHeap.add(item);
23         bubbleUp(binMinHeap.size() - 1);
24     }
25
26     // Swaps the elements at index1 and index2 of the binary min heap
27     private void swap(int index1, int index2) {
28         int temp = binMinHeap.get(index1);
29         binMinHeap.set(index1, binMinHeap.get(index2));
30         binMinHeap.set(index2, temp);
31     }
32
33     // Bubbles up the element in the binary min heap array list at given index
34     private void bubbleUp(int index) {
35         while (index / 2 > 0 && binMinHeap.get(index) < binMinHeap.get(index / 2)) {
36             swap(index, index / 2);
37             index = index / 2;
38         }
39     }
40
41     // Bubbles down the element in the binary min heap array list at given index
42     private void bubbleDown(int index) {
43         int n = binMinHeap.size();
44         while (index * 2 < n && binMinHeap.get(index) > binMinHeap.get(index * 2)) {
45             swap(index, index * 2);
46             index = index * 2;
47         }
48     }
49 }
(a) Draw an example of a binary min-heap with exactly 5 nodes (either tree or array list form is
fine) such that calling Joe’s dequeue method on your binary min-heap produces an invalid
binary min-heap.
Solution:
    1
   / \
  3   2
 / \
4   5

(Array list form: [null, 1, 3, 2, 4, 5]. Joe's dequeue moves the 5 to the root and bubbles it down along the left children only, producing [null, 3, 4, 2, 5]; the root 3 is larger than its right child 2, so the heap is invalid.)
(b) Which lines of code are buggy? Lines 44-46. Explain the bug.
When we bubble down, we have to check whether our current node is bigger than either of its children; if it is, we swap it with the smaller of the two. This code only checks whether the left child is smaller.
(c) Rewrite the lines of code you specified in part (b) so that his min-heap will work as intended.
Solution: Example 1 [code not recoverable from extraction]
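The reference rewrites did not survive extraction; one correct version of bubbleDown, written against the same class (a sketch, not the official solution):

// A correct bubbleDown for the same class: find the smaller child before
// comparing, and stop once no swap is needed.
private void bubbleDown(int index) {
    int n = binMinHeap.size();
    while (index * 2 < n) {                   // a left child exists
        int child = index * 2;
        if (child + 1 < n && binMinHeap.get(child + 1) < binMinHeap.get(child)) {
            child = child + 1;                // right child is the smaller one
        }
        if (binMinHeap.get(index) <= binMinHeap.get(child)) {
            return;                           // heap property holds; stop
        }
        swap(index, child);
        index = child;
    }
}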
Solution: Example 2 [code not recoverable from extraction]
2. Not finding the smaller of the children before comparing with the current element: many solutions fell into the same mistake as the prompt’s bubbleDown method.
3. Terminating incorrectly: the solution should stop bubbling down once no swap occurs with the current element. Some solutions also caused infinite loops.
4. Not dealing with all cases: the solution should handle the cases where there is only one child or both children, and where a swap is or is not needed.
On the next page, write an evenOdd method in the MyLinkedList class above that destructively
sets the linked list to contain every other linked list node of the original linked list, starting with the
first node. Your method must also return a linked list that contains every other linked list node of
the original linked list, starting with the second node.
Your method should work destructively and should not create any new ListNode objects. If a MyLinkedList contains zero elements or only one element, a call to evenOdd should return null. The last ListNode of each MyLinkedList has its next instance variable set to null.
Example: If a MyLinkedList initially contains the elements [5, 2, 3, 1, 4], then a call to evenOdd should return a MyLinkedList with the elements [2, 1], and after the call, the original MyLinkedList should contain the elements [5, 3, 4].
Solution: [code not recoverable from extraction]
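Since the reference code is not recoverable, here is a sketch of one destructive solution, assuming MyLinkedList has a ListNode head field, ListNode has a next field, and a MyLinkedList(ListNode) constructor exists (all assumed names):

// Sketch only: assumes MyLinkedList has a "ListNode head" field, ListNode
// has a "next" field, and a MyLinkedList(ListNode) constructor exists.
public MyLinkedList evenOdd() {
    if (head == null || head.next == null) {
        return null;                       // zero or one element
    }
    ListNode odd = head;                   // 1st, 3rd, 5th, ... nodes stay
    ListNode evenHead = head.next;         // 2nd, 4th, 6th, ... nodes returned
    ListNode even = evenHead;
    while (even != null && even.next != null) {
        odd.next = even.next;              // splice the next kept node in place
        odd = odd.next;
        even.next = odd.next;              // link the next returned node
        even = even.next;
    }
    odd.next = null;                       // terminate the original list
    return new MyLinkedList(evenHead);     // no new ListNode objects created
}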
Comments: Many students tried to use methods that were not provided in the MyListNode class (like add and remove). Non-destructive solutions, solutions resulting in NullPointerExceptions, and solutions that created cyclic lists were also fairly common.
• V put(K key, V value): Associates the specified value with the specified key in this
map. If the map previously contained a mapping for the key, the old value is replaced.
• V get(K key): Returns the value associated with the input key, or null if there is no
mapping for the key.
• V remove(Object key): Removes the mapping for the specified key from this map if
present. Returns the previous value associated with the input key, or null if there was no
mapping for key.
Example: [not recoverable from extraction]
(a) Explain in words how you would implement MemoryMap (including which data structures you would use) so that the operations listed on the previous page are time efficient. Space efficiency is not a concern. Do not write any code. Solutions that are as efficient as possible will receive full credit. Less efficient solutions may receive partial credit.
Solution: For part (a), multiple solutions were accepted as long as the student demonstrated
usage of data structures that could potentially solve the MemoryMap question (without con-
sideration for efficiency). Some examples of accepted solutions were:
• Using some kind of binary search tree (keys are not necessarily comparable).
• Not specifying the usage of a HashMap, or assuming that the Map interface was actually
a HashMap.
(b) For each of the methods listed below, explain how you would implement the method for
MemoryMap and state your planned implementation’s average case running time. State any
assumptions you make about the average case running times of any data structures you would
use in your implementation.
Solution: Optimal solution: extend HashMap<K, V>. Add a linked list of ListNode<K> objects together with a HashMap<K, ListNode<K>>. Start put, get, and remove with a call to super (see the sketch after this list). This received 10 points.
• put: Look K up in the list node hash map; if it exists, remove it from the linked list and
the hash map. Add K to the front of the linked list. O(1)
• get: Look K up in list node hash map; if it exists, remove it from the linked list and the
hash map. Add K to the front of the linked list. O(1)
• remove: Look K up in list node hash map; if it exists, remove it from the linked list and
the hash map. O(1)
• recent: Retrieve the first m elements of the linked list. O(m).
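A sketch of this approach, using composition instead of extending HashMap (all class and field names here are illustrative assumptions, not the reference solution):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

// Sketch of the approach above: a HashMap for the entries, plus a doubly
// linked recency list of keys with a second HashMap giving O(1) access to
// each key's list node.
public class MemoryMap<K, V> {

    private static class ListNode<K> {
        K key;
        ListNode<K> prev, next;
        ListNode(K key) { this.key = key; }
    }

    private final HashMap<K, V> entries = new HashMap<K, V>();
    private final HashMap<K, ListNode<K>> nodes = new HashMap<K, ListNode<K>>();
    private ListNode<K> head;                  // most recently used key

    // Unlinks key's node from the recency list, if present. O(1).
    private void unlink(K key) {
        ListNode<K> node = nodes.remove(key);
        if (node == null) return;
        if (node.prev != null) node.prev.next = node.next;
        if (node.next != null) node.next.prev = node.prev;
        if (node == head) head = node.next;
    }

    // Moves key to the front of the recency list. O(1).
    private void touch(K key) {
        unlink(key);
        ListNode<K> node = new ListNode<K>(key);
        node.next = head;
        if (head != null) head.prev = node;
        head = node;
        nodes.put(key, node);
    }

    public V put(K key, V value) {             // O(1)
        touch(key);
        return entries.put(key, value);
    }

    public V get(K key) {                      // O(1)
        if (entries.containsKey(key)) touch(key);
        return entries.get(key);
    }

    public V remove(K key) {                   // O(1)
        unlink(key);
        return entries.remove(key);
    }

    // Returns the m most recently accessed keys, newest first. O(m).
    public List<K> recent(int m) {
        List<K> result = new ArrayList<K>();
        for (ListNode<K> n = head; n != null && result.size() < m; n = n.next) {
            result.add(n.key);
        }
        return result;
    }
}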
Comments: Some examples of suboptimal solutions were (where N is the number of entries
in the hash map):
• HashMap and an ArrayList, LinkedList, or Stack as your recently accessed list. Either you have to move items around in the list to get rid of duplicates (inefficient put/get/remove running in O(N) time but recent in O(m)), or you have to purge duplicates during the recent call (at best O(N) time using a hash set to check for duplicates, or O(N²) naively). Correctly implemented and analyzed, this received 7 points.
• HashMap, and a second HashMap<K, Integer> for tracking priorities with a counter. Every time a get or put is done, we increment the counter and update (or put) into the second hash map in O(1). When we remove, we remove from both hash maps in O(1). When recent is called, we sort the entrySet on value and return the first m in O(N log N) time. This received 7 points.
• The same as above, but using a priority queue instead of a hash map. get and put took O(log N) and remove took O(N). recent took O(N log k) or O(N log N), depending on whether you limited the size of the priority queue or not. This usually received around 4-7 points depending on implementation.
• Using a splay tree. There’s almost no way to get this to work; while splay trees do give
the most recent item in constant time at the top of the tree, there are no guarantees as to
where the m most recent items will be in the tree. Additionally, this requires that the keys
be comparable, which is not guaranteed.
boolean add(E e): Appends the specified element to the end of the list.
boolean contains(Object o): Returns true if this list contains the specified element.
E get(int index): Returns the element at the specified position in this list.
E remove(int index): Removes the element at the specified position in this list.
V put(K key, V value): Associates the specified value with the specified key in this map (returns the previous value associated with key, or null if there was no mapping for key).
V remove(Object key): Removes the mapping for a key from this map if it is present.