1. [question on "Cuckoo Hashing with a Stash" paper]

They propose a variant of Cuckoo hashing, where you may store O(1) exceptional keys in a "stash" outside the main table. They show, by experiment and analysis, that even small stashes dramatically decrease the probability of the expensive "full rehash" situation. (The probability that you need stash size s is roughly N^(-s).)

PROs: as they say, a small stash dramatically reduces the frequency of "full rehash" operations. And for most keys in the table, a search is still very fast (just 2 probes, like Cuckoo).

CONs: with fixed stash size s, we increase the worst-case number of probes to search (for an absent key) from 2 to 2+s, which largely defeats the original "perfect hashing" appeal of Cuckoo. (Or else, we need some exotic "CAM" memory for every hash table.) Both the insert and delete procedures are more complicated. Also, we have not eliminated the possibility of a full rehash, so we still need a good "universal family" of hash functions for this scheme to work reliably (not just one good-enough hash function, as in traditional hashing).

REMARK: for a bit more intuition, see Mitzenmacher's blog post http://mybiasedcoin.blogspot.com/2008/05/more-robust-cuckoo-hashing-esa-paper.html

2. There are several possible solutions. Mainly you just need to describe the data structures, not the API or "code". In all of these, we use hash tables (say, separate chaining, with automatic array doubling and halving to keep the load factor within some constant range, and a random-seeming hash function). Such tables need O(1) expected time per operation. Note that you should not assume the graph is sparse, i.e. that E = O(V).

Solution #1: First we have a table (or array with doubling) A, mapping each vertex v to its "adjacency table" A[v]. A[v] is the set of neighbors of v (just keys, no values).
Since A[v] has a constant load factor, we can list the neighbors of v in expected O(1) time per neighbor, just by walking through the slots of the underlying array. Both insertion and deletion are obvious.

Solution #2: We have a table A, mapping each vertex v to some edge (v,w) (or else to null, if v has no neighbors). We have a hash table E, so that for each edge (u,v), E[(u,v)] returns (x,y), where (u,x) and (u,y) are the next and previous edges out of u (in some cyclic ordering of u's edges). Note that we use pairs of vertices as keys in E. E represents a doubly-linked circular adjacency list for each vertex, but using hash lookups rather than explicit node links. To enumerate the neighbors of v, we use A[v] to find the first edge, and E[] to find all subsequent edges out of v. To add or remove an edge, we modify E[] as if it were a doubly-linked circular list (A[v] is our entry into that list), with only O(1) modifications per add or delete.

REMARK: if this is so great, why don't we use it? Well, the simple adjacency-list representation is usually all we need for most of our algorithms, but there could be situations where O(1)-time updates (edge add and remove) would also be useful.
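Solution #1 can be sketched in a few lines of Python, with dict and set standing in for the hash tables described above (the class and method names here are our own, not part of the problem):

```python
# Sketch of Solution #1: a table A mapping each vertex v to the hash set
# A[v] of its neighbors. Python's dict/set play the role of the chained
# hash tables with doubling, so each operation is expected O(1).

class Graph1:
    def __init__(self):
        self.A = {}                       # A[v] = set of neighbors of v

    def add_edge(self, u, v):             # expected O(1)
        self.A.setdefault(u, set()).add(v)
        self.A.setdefault(v, set()).add(u)

    def remove_edge(self, u, v):          # expected O(1)
        self.A[u].discard(v)
        self.A[v].discard(u)

    def neighbors(self, v):               # expected O(1) per neighbor listed
        return iter(self.A.get(v, ()))
```

For example, after `g.add_edge(1, 2)` and `g.add_edge(1, 3)`, `sorted(g.neighbors(1))` gives `[2, 3]`.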
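Solution #2's hash-linked circular adjacency list might look like the following sketch (again the names are ours; each undirected edge is stored under both keys (u,v) and (v,u), and all the splicing is plain doubly-linked-list surgery done through the hash table E):

```python
# Sketch of Solution #2: A[v] holds one neighbor of v (the "entry" edge);
# E[(u, v)] = (x, y), where (u, x) and (u, y) are the next and previous
# edges out of u in a cyclic ordering of u's edges.

class Graph2:
    def __init__(self):
        self.A = {}      # A[v] = some neighbor w, i.e. the entry edge (v, w)
        self.E = {}      # E[(u, v)] = (next neighbor of u, previous neighbor)

    def _insert_half(self, u, v):
        # Splice edge (u, v) into u's circular list, O(1).
        if u not in self.A:
            self.A[u] = v
            self.E[(u, v)] = (v, v)                # one-element cycle
        else:
            first = self.A[u]
            last = self.E[(u, first)][1]           # predecessor of entry edge
            self.E[(u, v)] = (first, last)         # v goes between last and first
            self.E[(u, last)] = (v, self.E[(u, last)][1])
            self.E[(u, first)] = (self.E[(u, first)][0], v)

    def _remove_half(self, u, v):
        # Unsplice edge (u, v) from u's circular list, O(1).
        nxt, prv = self.E.pop((u, v))
        if nxt == v:                               # v was u's only neighbor
            del self.A[u]
        else:
            self.E[(u, prv)] = (nxt, self.E[(u, prv)][1])
            self.E[(u, nxt)] = (self.E[(u, nxt)][0], prv)
            if self.A[u] == v:                     # keep the entry edge valid
                self.A[u] = nxt

    def add_edge(self, u, v):                      # expected O(1)
        self._insert_half(u, v)
        self._insert_half(v, u)

    def remove_edge(self, u, v):                   # expected O(1)
        self._remove_half(u, v)
        self._remove_half(v, u)

    def neighbors(self, v):                        # expected O(1) per neighbor
        if v in self.A:
            first = w = self.A[v]
            while True:
                yield w
                w = self.E[(v, w)][0]              # follow the "next" link
                if w == first:
                    return
```

Note that enumeration walks the cycle via E's "next" pointers starting from A[v], exactly as described above, and add/remove only touch a constant number of E entries.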