1. I wanted, first, an argument that you can have any number of rotations (k) in a single red-black tree insertion. Next, I wanted some upper bound on how big such a tree must be, in terms of k. There is partial credit for examples with a specific k (for example k=3 or k=4).

Simplest solution: suppose we have a 2-3 tree of depth k (i.e., k steps on any root-to-null path) in which every node is a 3-node. This tree has (3^k - 1)/2 nodes, each with 2 keys, so N = 3^k - 1. In the left-leaning red-black representation, there is an alternating red-black path from the root to the minimum key. If we insert a new, smaller minimum key, we trigger a chain reaction of k rotations (each rotation is followed by a color flip).

We can do a little better if we observe that we do not need 3-nodes everywhere, only on the path from the root to the minimum. That tree is smaller, with only N = 2*(2^k - 1) keys.

Finally, if k is even, we can instead arrange that the red-black tree has an *alternating* red-black path with k/2 red edges, where each red edge goes left but each black edge goes right. Then inserting at the bottom of this path (to the right of the last red edge) triggers 2*(k/2) = k rotations, since this is the kind of 4-node repair that takes 2 rotations instead of one. Now the total number of keys is N = 2*(2^(k/2) - 1). I believe this last N is the smallest possible, but the first argument alone would already earn full credit.

2. A persistent data structure is "partially persistent" if you can modify only the most recent version (you can search, but not modify, the older versions). It is "fully persistent" if you can also modify the older versions. Sarnak and Tarjan implement partial persistence (they make this clear near the start of Section 2), with only O(1) additional space per modification.

REMARK: by contrast, in hw2 we implemented full persistence, but at the cost of O(log N) space per modification.
Our method is what Sarnak and Tarjan call "path copying", since we make a new copy of each node on the path from the root to the insertion point.

3. Suppose CO is a collection and IT is an iterator over CO. If IT is "fail-fast", then it may refuse to work (throwing an exception) after CO is modified by any means other than IT.remove(). To implement this, there is an int field CO.modCount, incremented each time CO is modified. IT has an int field IT.expectedModCount, which is initially a copy of CO.modCount and is updated only in IT.remove(). The iterator methods can then check that these two fields are equal (a cheap int comparison) and throw an exception if not. If you look at the iterator methods in TreeMap.java, you will see several variations of this code:

    if (modCount != expectedModCount)
        throw new ConcurrentModificationException();

So if CO is modified when IT does not expect it, the next IT operation can throw this exception.

PS: you might wonder why we would *want* fail-fast iterators. One issue is that there may not be a clear choice for how an iterator should behave after certain modifications (e.g., what if your next data item was deleted, or something was inserted just before it?). Another issue is that a behavior that is natural for one implementation (like a sorted array) might be difficult for another (like a search tree), and vice versa. So they just avoided this whole bag of monkeys.
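The first construction in answer 1 can be checked mechanically. Below is a sketch (the class and method names are mine, and the insertion code assumes the standard Sedgewick-style left-leaning red-black insert, with a counter added to the rotations): it builds the red-black representation of the depth-k 2-3 tree whose nodes are all 3-nodes, labels the keys in order, inserts a new minimum, and counts the rotations, which should come out to exactly k.

```java
// Sketch: LLRB insertion instrumented to count rotations, applied to the
// all-3-node tree from answer 1.  Each 3-node is a black node with a red
// left child, so the tree has 3^k - 1 keys at depth k.
public class RbRotationDemo {
    static final boolean RED = true, BLACK = false;

    static class Node {
        int key;
        boolean color;
        Node left, right;
        Node(boolean color) { this.color = color; }
    }

    static int rotations;   // bumped by every rotation
    static int nextKey;     // used when labeling the built tree

    static boolean isRed(Node x) { return x != null && x.color == RED; }

    static Node rotateLeft(Node h) {
        Node x = h.right; h.right = x.left; x.left = h;
        x.color = h.color; h.color = RED;
        rotations++;
        return x;
    }

    static Node rotateRight(Node h) {
        Node x = h.left; h.left = x.right; x.right = h;
        x.color = h.color; h.color = RED;
        rotations++;
        return x;
    }

    static void flipColors(Node h) {
        h.color = RED; h.left.color = BLACK; h.right.color = BLACK;
    }

    static Node insert(Node h, int key) {
        if (h == null) { Node n = new Node(RED); n.key = key; return n; }
        if (key < h.key) h.left = insert(h.left, key);
        else             h.right = insert(h.right, key);
        if (isRed(h.right) && !isRed(h.left))    h = rotateLeft(h);
        if (isRed(h.left) && isRed(h.left.left)) h = rotateRight(h);
        if (isRed(h.left) && isRed(h.right))     flipColors(h);
        return h;
    }

    // Red-black shape of the depth-k 2-3 tree in which every node is a
    // 3-node: a black node (b) with a red left child (a), three subtrees.
    static Node all3Nodes(int k) {
        if (k == 0) return null;
        Node a = new Node(RED), b = new Node(BLACK);
        a.left = all3Nodes(k - 1);
        a.right = all3Nodes(k - 1);
        b.left = a;
        b.right = all3Nodes(k - 1);
        return b;
    }

    static void label(Node h) {   // assign keys 1, 2, 3, ... in order
        if (h == null) return;
        label(h.left); h.key = nextKey++; label(h.right);
    }

    // Rotations triggered by inserting a new minimum (key 0) at depth k.
    static int rotationsForNewMin(int k) {
        Node root = all3Nodes(k);
        nextKey = 1; label(root);
        rotations = 0;
        root = insert(root, 0);
        root.color = BLACK;
        return rotations;
    }
}
```

Running rotationsForNewMin(k) for k = 1, 2, 3, ... reproduces the chain reaction: one rotation (plus a color flip) per level of 3-nodes on the leftmost path.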
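For answer 2, path copying is easy to sketch on an unbalanced BST (this is a generic illustration, not the actual hw2 code): insert returns a brand-new root, copying only the nodes on the search path, so every old root still sees its old version. The space cost per insert is the path length, which is O(log N) when the tree is balanced.

```java
// Sketch of path copying for a persistent BST.  Nodes are immutable;
// insert() copies each node on the root-to-insertion-point path and
// shares everything off that path with the previous version.
final class PNode {
    final int key;
    final PNode left, right;

    PNode(int key, PNode left, PNode right) {
        this.key = key; this.left = left; this.right = right;
    }

    static PNode insert(PNode h, int key) {
        if (h == null) return new PNode(key, null, null);
        if (key < h.key)                                  // copy h, recurse left
            return new PNode(h.key, insert(h.left, key), h.right);
        else if (key > h.key)                             // copy h, recurse right
            return new PNode(h.key, h.left, insert(h.right, key));
        else
            return h;                                     // key present; reuse
    }

    static boolean contains(PNode h, int key) {
        while (h != null) {
            if (key < h.key) h = h.left;
            else if (key > h.key) h = h.right;
            else return true;
        }
        return false;
    }
}
```

Each returned root is a full persistence "version": you can insert into any old root, not just the newest one, because no node is ever mutated.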
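The modCount idiom in answer 3 can be shown on a toy collection (the class IntBag and its fields are made up for illustration; only the modCount/expectedModCount pattern itself matches what java.util does):

```java
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.NoSuchElementException;

// Minimal sketch of a fail-fast iterator: the collection counts its
// structural modifications, and each iterator snapshots that count.
class IntBag implements Iterable<Integer> {
    private int[] a = new int[8];
    private int size;
    private int modCount;                       // bumped on every modification

    public void add(int x) {
        if (size == a.length) a = java.util.Arrays.copyOf(a, 2 * size);
        a[size++] = x;
        modCount++;
    }

    public Iterator<Integer> iterator() {
        return new Iterator<Integer>() {
            private int i;
            private int expectedModCount = modCount;   // snapshot at creation

            public boolean hasNext() { return i < size; }

            public Integer next() {
                if (modCount != expectedModCount)      // the cheap int check
                    throw new ConcurrentModificationException();
                if (i >= size) throw new NoSuchElementException();
                return a[i++];
            }
        };
    }
}
```

Modifying the bag behind the iterator's back makes the counts disagree, so the next call to next() throws ConcurrentModificationException rather than guessing what "the next item" should mean.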