- push(item)—Put an item on the top of the stack
- pop()—Remove the item on top of the stack for use
Sometimes other useful operations are added to a given implementation, such as...
- peek()—See what item is on the top of the stack without removing it
- size()—Determine the number of items on the stack
- isEmpty()—Check whether there are any items on the stack
A stack might be implemented using an array, a user space, or a database table as the storage area. The point of an ADT is that it allows us to think about a programming problem without worrying about how the structure is implemented.
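To make the operations concrete, here is a minimal sketch of the stack ADT in Python (the article's context is RPG, so treat this purely as an illustration); the backing list stands in for whatever storage area an implementation chooses.

```python
class Stack:
    """Minimal stack ADT backed by a Python list; the list is the 'storage area'."""

    def __init__(self):
        self._items = []

    def push(self, item):
        """Put an item on the top of the stack."""
        self._items.append(item)

    def pop(self):
        """Remove and return the item on top of the stack."""
        return self._items.pop()

    def peek(self):
        """Return the top item without removing it."""
        return self._items[-1]

    def size(self):
        """Return the number of items on the stack."""
        return len(self._items)

    def is_empty(self):
        """Check whether there are any items on the stack."""
        return len(self._items) == 0
```

Callers only ever see push, pop, peek, size, and is_empty; swapping the list for a user space or a database table would not change that interface, which is exactly the point of the ADT.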
The Dictionary ADT
A dictionary is an ADT that allows the programmer to store elements in the form of key-value pairs, much like an array. Unlike an array, the keys can be any type of data, not just positive integers. You could have keys that are floating point numbers or strings, or you could even have both in the same dictionary.
The definition of which operations are required for a dictionary is a little fuzzy. Each implementation includes a different set of operations on the elements in the dictionary. Dictionaries support at least the following operations:
- insert(key, value)—Add an element into the dictionary
- find(key)—Retrieve the value associated with the given key
- remove(key)—Remove the value associated with the given key from the dictionary
These are some of the other possible operations:
- keys()—Retrieve a list of all the keys currently in the dictionary
- isEmpty()—Check whether there are any items in the dictionary
- clear()—Remove all items from the dictionary
Sometimes a dictionary will allow multiple values to be associated with the same key, in which case these operations that operate on multiple values may be allowed:
- findAll(key)—Return a list of all values associated with the given key
- removeAll(key)—Remove all items associated with the given key from the dictionary
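To summarize the contract, here is a sketch of the dictionary ADT as a Python interface; the names mirror the operation lists above, and nothing about storage is decided yet.

```python
from abc import ABC, abstractmethod

class Dictionary(ABC):
    """Dictionary ADT: the required operations, with no commitment to storage."""

    @abstractmethod
    def insert(self, key, value):
        """Add an element (key-value pair) to the dictionary."""

    @abstractmethod
    def find(self, key):
        """Return the value associated with key, or None if absent."""

    @abstractmethod
    def remove(self, key):
        """Remove the value associated with key from the dictionary."""

    # Optional operations such as keys(), is_empty(), clear(), findAll(), and
    # removeAll() could be added by implementations that support them.
```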
To add to the confusion, dictionaries are also referred to as associative arrays, maps, or hash tables. Hash table is one of the more common names for dictionaries, because dictionaries are often implemented using tables (aka arrays) and a hash function. Hash tables enable very fast insertion and searching in most cases. In order to use an array to store the elements, we need some way to translate from the given key to an integer in the range [1,N], where N is the capacity of the array. We will find this index value using two functions: a hash function and a compression map.
Hash Codes
The hash function takes the key as input and produces a hash code. The hash function needs certain properties in order for the hash table to perform well.
The function should have results that are uniformly distributed. A hash function that does not have this property is said to exhibit clustering. Clustering can have a negative effect on the performance of the hash table, as it increases the likelihood of collisions and frustrates some collision-handling methods. Another important feature is that small changes in the key should not result in similar hash codes.
For the example, I decided to use the polynomial hash code. Some simpler hash codes don't work very well for strings because they don't take the order of the characters into account, so "bat" and "tab" would have the same hash code, which is undesirable for most uses. The polynomial hash code works by taking each byte of the n-character string x as a number (x_0, x_1, x_2, ..., x_(n-1)) and then calculating this polynomial:
h(x) = x_0*a^(n-1) + x_1*a^(n-2) + ... + x_(n-2)*a + x_(n-1)
By Horner's Rule, the equation above is equal to the equation below:
h(x) = x_(n-1) + a*(x_(n-2) + a*(x_(n-3) + ... + a*(x_1 + a*x_0)...))
The second form is much more useful for calculating iteratively. The choice of the constant a is very important for this hash function to work well. Experiments have shown that 37, 39, and 41 work well for English words encoded in ASCII. I haven't done any experiments to see if this holds for EBCDIC or other languages. If this hash function is to be used in a serious way, some experiments to determine appropriate values of the constant a would probably be worthwhile. The best values are those resulting in a small number of collisions for your particular key space.
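Here is a short sketch of the polynomial hash code in Python, evaluated iteratively with Horner's Rule; a = 37 is one of the constants mentioned above, and the character values come from ord() rather than from any particular encoding.

```python
def polynomial_hash(key: str, a: int = 37) -> int:
    """Polynomial hash code of a string, evaluated iteratively via Horner's Rule.

    h(x) = x_0*a^(n-1) + x_1*a^(n-2) + ... + x_(n-2)*a + x_(n-1)
    """
    h = 0
    for ch in key:
        h = h * a + ord(ch)   # one step of Horner's Rule per character
    return h

# Order matters: "bat" and "tab" now hash to different values.
print(polynomial_hash("bat"), polynomial_hash("tab"))
```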
The hash code is usually a large value that is not suitable for direct use as an array index, which is why we also need the compression map.
Compression Maps
The goal of the compression map is to transform the hash code h(x) into a suitable array index c(x). It should distribute the hash codes evenly over the range [1,N], where N is the capacity of the array we are using to store the elements. The two most popular compression maps are the division method,
c(x) = |h(x)| mod N
and the Multiply, Add, Divide (MAD) method,
c(x) = |a * h(x) + b| mod N, where a mod N ≠ 0
In both cases, N should be a prime number to avoid clustering. The MAD method is a bit better at distributing the values, but if the hash table can be resized to non-prime sizes, a must be recalculated for the new N.
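Both compression maps are simple to express. This Python sketch returns a 0-based index (for RPG's 1-based arrays you would add 1), and the MAD constants a and b here are arbitrary placeholders that satisfy the constraint, not tuned values.

```python
def division_compress(hash_code: int, N: int) -> int:
    """Division method: |h(x)| mod N. N should be a prime number."""
    return abs(hash_code) % N

def mad_compress(hash_code: int, N: int, a: int = 5, b: int = 7) -> int:
    """Multiply, Add, Divide (MAD) method: |a*h(x) + b| mod N, with a mod N != 0."""
    if a % N == 0:
        raise ValueError("a must not be a multiple of N")
    return abs(a * hash_code + b) % N
```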
Collision Handling
It is impossible to come up with a hashing function that has no collisions across all possible keys, because there are many more possible keys than hash codes. We try to avoid collisions as much as possible, because collision handling is where the performance of hash tables can start to suffer. The performance of a collision-handling scheme is usually measured relative to the load factor of the hash table: the ratio of the number of elements stored in the hash table to the number of slots in the array. So what happens if we try to add a key that maps to the same array index as something that's already in the array?
Separate Chaining
Using separate chaining, a collision is handled by storing a list of key-value pairs at each location in the array. When a collision occurs, the new key-value pair is added to the end of this list. The advantages of this method are its simplicity and its relatively good performance at high load factors. The disadvantages are that it tends not to perform as well as other methods on average and that it requires more storage overhead.
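Here is a minimal separate-chaining sketch in Python. For brevity it uses Python's built-in hash() plus the division method in place of the polynomial hash code above, and it omits resizing.

```python
class ChainedHashTable:
    """Dictionary via separate chaining: each slot holds a list of (key, value) pairs."""

    def __init__(self, capacity=101):           # a prime capacity, per the article
        self._slots = [[] for _ in range(capacity)]

    def _index(self, key):
        return hash(key) % len(self._slots)     # hash code + division compression

    def insert(self, key, value):
        # On a collision, the new pair is simply appended to the slot's list.
        self._slots[self._index(key)].append((key, value))

    def find(self, key):
        for k, v in self._slots[self._index(key)]:
            if k == key:                         # compare keys to resolve collisions
                return v
        return None

    def remove(self, key):
        chain = self._slots[self._index(key)]
        for i, (k, _) in enumerate(chain):
            if k == key:
                del chain[i]
                return
```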
Open Addressing
When a collision occurs in a hash table using open addressing, a probing algorithm determines the next spot in the array to check. Linear probing is the simplest probing method: the algorithm checks for an open slot by incrementing the array index by a fixed amount, usually 1. Linear probing tends to perform very well at low load factors, but performance degrades very quickly at load factors above 0.8.
Note that collision handling must be taken into account when implementing the find(key) operation. The search key must be compared to the key found at the computed position, and if they are not equal, the next position where a colliding entry could have been stored must be checked. This is why a lot of collisions degrade the performance of the hash table: instead of just computing the hash code and pulling out the value, the search algorithm must do many key comparisons to find the matching element.
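A sketch of open addressing with linear probing in Python, showing how both insert and find keep probing (and comparing keys) until they reach a match or an empty slot. Deletion and resizing are omitted, and the table is assumed never to fill completely.

```python
class LinearProbingHashTable:
    """Open addressing with linear probing; no resizing or deletion, for brevity."""

    _EMPTY = object()                            # sentinel marking an unused slot

    def __init__(self, capacity=101):            # keep load factor well below 1
        self._keys = [self._EMPTY] * capacity
        self._values = [None] * capacity

    def _index(self, key):
        return hash(key) % len(self._keys)

    def insert(self, key, value):
        i = self._index(key)
        while self._keys[i] is not self._EMPTY and self._keys[i] != key:
            i = (i + 1) % len(self._keys)        # probe the next slot
        self._keys[i], self._values[i] = key, value

    def find(self, key):
        i = self._index(key)
        while self._keys[i] is not self._EMPTY:
            if self._keys[i] == key:             # key comparison on every probe
                return self._values[i]
            i = (i + 1) % len(self._keys)
        return None                              # hit an empty slot: key not present
```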
Data Structure Arrays
Hash tables may seem like an overly complex solution, particularly for RPG programmers, who have such easy access to database tables. A simpler method of implementing a dictionary could use a data structure array to store the key-value pairs in sorted order. A number of experts have published articles on how to set up such data structure arrays; Paul Tuohy has a particularly clear explanation. Once you have a sorted array, searching for a particular key is pretty fast because binary search can be used.
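The RPG details are in the articles referenced above; as a rough illustration of the idea, this Python sketch keeps the pairs sorted by key and uses the standard bisect module to do the binary search.

```python
import bisect

# Key-value pairs kept sorted by key, standing in for a sorted data structure array.
pairs = [("apple", 1), ("mango", 3), ("pear", 2)]
pairs.sort(key=lambda kv: kv[0])
keys = [k for k, _ in pairs]            # parallel list of keys for bisect

def find(key):
    """Binary search for key; returns the value or None."""
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return pairs[i][1]
    return None

print(find("mango"))   # 3
```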
The Advantages of Hash Tables
That said, hash tables have certain advantages for large data sets and for applications in which the performance of dictionary operations is critical. If the load factor is kept at an appropriate level and a good hash function is used, collisions can be kept to a minimum. For all intents and purposes, the hash table approach allows the time taken for the three main dictionary operations to stay constant no matter how many items are added to the dictionary. Other methods, such as sorted arrays or search tree ADTs, cannot boast this, as the time taken to insert, remove, or find elements increases as the number of elements grows.
There is quite a bit of interesting reading out there for those interested in looking in more detail at hash functions. Wikipedia has a good overview and links to a number of interesting sites. I have implemented a dictionary in RPG using hash tables to see how this works in practice, which is downloadable here.