Considerations
Selecting a data structure for a particular situation is often a many-faceted decision.
Complexity
One consideration is often the difficulty and complexity behind the data structure. Often, we're inclined to use what's not necessarily best, but what's readily available and easy to understand. It's easier and (initially) less time-consuming to reuse something rather than create or search for the implementation of a more appropriate structure. For certain tasks (e.g., infrequently used utilities), this methodology is sufficient, but if you're building a "real-world" application, it's likely not a shortcut that's worth taking.
Space
Another area to consider when selecting a data structure is its use of auxiliary storage. In today's computing world, where a gigabyte of main memory and 500 gigabytes of disk are considered normal, this has become much less of a consideration. Within embedded devices, however, memory can still be prime real estate. For example, consider a piece of data that consumes x bytes of storage. If a data structure that stores n instances of the data consumes 10xn bytes, it's certainly not making very efficient use of storage, and that overhead alone might prohibit the use of the data structure.
Running Time
Typically, the most scrutinized aspect of data structure selection—and the one on which this article primarily focuses—is running time. That is, how long will a structure's operations take relative to the size of its data set? Everyone has experienced the effect of clicking a mouse button and (impatiently) waiting for an application to respond. Perhaps selecting a different internal data structure could have reduced your wait time several-fold. Data structure selection is critical to an application's responsiveness, and it's crucial to understand how to evaluate possible solutions.
How Is Running Time Measured?
Because running time is so critical, it's important to understand how it's measured.
Real Time
The running time of an algorithm or data structure operation typically depends on many factors, so how can we effectively measure this time? One simple solution is to measure the operation's real time. Most programming languages provide a mechanism to access system calls to retrieve the system time. Thus, we could write some profiling pseudo-code such as the following:
t1 := time();
do_operation(n);
t2 := time();
From the above pseudo-code, we could (closely) represent the time spent in the operation as t2 – t1 units of time. Ultimately, we're interested in how the running time is affected as n (i.e., the size of the input to the operation) increases. We could illustrate this by running various trials of the above and creating a two-dimensional graph, plotting running time on the y axis and n on the x axis.
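In Java, for instance, a minimal version of this kind of wall-clock profiling might look like the following sketch (doOperation is simply a stand-in for whatever operation is being measured):

public class Profiler {
    // Stand-in for the operation being measured; assumed for this sketch.
    static void doOperation(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i;
        }
    }

    public static void main(String[] args) {
        // Measure the real time of the operation for increasing values of n;
        // the printed pairs could then be plotted (running time on y, n on x).
        for (int n = 1_000; n <= 1_000_000; n *= 10) {
            long t1 = System.nanoTime();
            doOperation(n);
            long t2 = System.nanoTime();
            System.out.println("n = " + n + ", elapsed = " + (t2 - t1) + " ns");
        }
    }
}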
While this approach offers a good insight into the "expense" of the operation (and thus, its internal data structures), it presents a few inherent problems:
- It's time-consuming. Creating a running time graph takes a lot of manual intervention.
- It's dependent on the underlying hardware and software. Running the same sets of tests on different machines will surely yield completely different results.
So, how can we work around this? The answer is through algorithmic complexity measurements using "Big Oh" notation.
Algorithmic Complexity
For reasons previously mentioned, there exists a need for a more universally applicable definition of an operation's running time. The solution is "Big Oh" notation, whose formal definition can be described as follows:
Let f(n) and g(n) be functions mapping nonnegative integers to real numbers. We can say that f(n) is O(g(n)) if there is a constant c > 0 and an integer constant n0 ≥ 1 such that f(n) ≤ c ∙ g(n) for every integer n ≥ n0. We can also say "f(n) is order g(n)."
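As a quick worked example of the definition (the particular function is chosen arbitrarily for illustration), consider f(n) = 3n + 5. We can say f(n) is O(n): choosing c = 4 and n0 = 5, we have 3n + 5 ≤ 4n for every integer n ≥ 5, so the definition is satisfied with g(n) = n.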
In simple terms, all that's being stated is that after a certain input (i.e., n0), the value of f(n) never exceeds the value of c ∙ g(n). Why is this important? Because it's a universal language that engineers can use to put a mathematical upper bound on the computational complexity of an operation or algorithm. Here are some of the commonly encountered algorithmic complexities (listed in ascending order with respect to total running time) when working with data structures:
- O(1)—Also referred to as "constant time," O(1) is the "fastest" algorithmic complexity because the running time does not depend on the size of the input at all.
- O(log2(n))—Also referred to as "logarithmic time," this running time is also extremely fast because it grows only logarithmically with respect to its input. Recall that if 2^x = y, then log2(y) = x. If you were to plot this function, you would see that the output grows very slowly relative to its input. For example, the binary search algorithm is considered an O(log2(n)) operation since each unsuccessful comparison eliminates half of the remaining possibilities (see the sketch following this list). Note that this complexity is often abbreviated as O(log n) (i.e., the base-2 portion of the logarithm is implied when speaking within a computer science context).
- O(n)—Also referred to as "linear time," this running time is portrayed when the duration of the operation is directly proportional to the size of the data on which it's operating. For example, searching a linked list is O(n) because in the worst case, the item for which you're searching is the last item in the list, thus requiring that all n items of the list be examined.
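To ground the O(log n) case, here's a minimal sketch of the binary search mentioned above, written in Java against a sorted int array (the array contents and search targets are arbitrary examples):

public class BinarySearchDemo {
    // Returns the index of target in the sorted array, or -1 if not found.
    // Each unsuccessful comparison discards half of the remaining range,
    // so the number of iterations grows as O(log n).
    static int binarySearch(int[] sorted, int target) {
        int low = 0;
        int high = sorted.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;
            if (sorted[mid] == target) {
                return mid;
            } else if (sorted[mid] < target) {
                low = mid + 1;   // target can only be in the upper half
            } else {
                high = mid - 1;  // target can only be in the lower half
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] data = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};
        System.out.println(binarySearch(data, 23));  // prints 5
        System.out.println(binarySearch(data, 7));   // prints -1
    }
}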
Today's Primitive Data Structures
The following sections aim to shed some light on the algorithmic complexities of common operations performed on everyday data structures. While it's important to realize that the complexities may vary slightly based on the structures' implementations, the following assessments provide a strong foundation for algorithm selection.
Unordered Linked List
A linked list in its simplest form is a sequence of nodes arranged linearly; "unordered" here means there is no ordering by value among its items. Each node consists of two logical parts: the element and a pointer to the "next" node (the last node's "next" pointer references nil). List structures in general can be implemented via arrays or via linked pointers; a linked list uses the latter.
Algorithmic Complexities of Basic Linked List Operations

Operation     | Algorithmic Complexity
findItem(x)   | O(n)
insertItem(x) | O(1)
deleteItem(x) | O(n)
sort()        | O(n log n)
If we were to employ an ordered linked list, we could improve the sort function's time to O(1) since the items would already be stored in a sorted fashion. However, the insertItem(x) operation would become an O(n) operation since, upon each insertion, the appropriate placement of the item within the list would need to be identified.
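To make the head-insertion versus search trade-off concrete, here's a minimal sketch of an unordered singly linked list in Java (the class and method names mirror the table above but are otherwise illustrative, not taken from any particular library):

public class UnorderedLinkedList<E> {
    // Each node holds an element and a pointer to the "next" node.
    private static class Node<E> {
        E element;
        Node<E> next;
        Node(E element, Node<E> next) {
            this.element = element;
            this.next = next;
        }
    }

    private Node<E> head;  // the last node's next pointer is null (nil)

    // insertItem(x): O(1) -- simply link a new node in front of the current head.
    public void insertItem(E x) {
        head = new Node<>(x, head);
    }

    // findItem(x): O(n) -- in the worst case every node must be examined.
    public boolean findItem(E x) {
        for (Node<E> current = head; current != null; current = current.next) {
            if (current.element.equals(x)) {
                return true;
            }
        }
        return false;
    }
}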
Hash Table
In simple terms, a hash table is a structure that maps keys to values. Ideally, the mapping of keys to storage locations is 1:1. That is, any two distinct keys should hash to two distinct locations within the table. This is accomplished through the use of a very important function called a hashing function.
Consider a function h(x) such that each distinct value of x produces a distinct output y; this would be known as a perfect hashing function. In software terms, x (i.e., the input to the hash function) would typically be an object or structure of some kind. It is the properties of x (e.g., its location in virtual memory) that are used to compute the result of the function. In the Java programming language, this value is the return value of the object's hashCode() method. That is, if any two objects' hashCode() methods return the same value, they will map to the same "spot" in the hash table; this is a collision, which the table must then resolve (for example, by chaining the colliding entries together at that spot). Hash tables are often backed by an array, and the output of the hash function is used as an index into that array, which yields the constant-time operations shown in the table below. It's also worth mentioning that hash tables are not effective structures to use when frequent sorting is required, as their elements will (likely) be placed non-contiguously and in no particular order. Thus, both iterating over the structure and sorting can be expensive operations.
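As a rough illustration of how an object's properties feed a hash value in Java, here's a small sketch of a class that overrides hashCode() (the Customer class is a made-up example; Objects.hash() simply combines the fields into a single hash value):

import java.util.Objects;

public class Customer {
    private final String name;
    private final int accountNumber;

    public Customer(String name, int accountNumber) {
        this.name = name;
        this.accountNumber = accountNumber;
    }

    // The hash value is computed from the object's properties, so two
    // logically equal Customers hash to the same bucket in a hash table.
    @Override
    public int hashCode() {
        return Objects.hash(name, accountNumber);
    }

    // equals() must be consistent with hashCode() for hash-based
    // structures to behave correctly.
    @Override
    public boolean equals(Object other) {
        if (this == other) return true;
        if (!(other instanceof Customer)) return false;
        Customer c = (Customer) other;
        return accountNumber == c.accountNumber && Objects.equals(name, c.name);
    }
}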
What would happen if this hashing function started mapping different keys to the same location? This is called a hashing collision, which brings us to another very important property of a hash table: its load factor. A hash table's load factor describes how full the table is allowed to become before it resizes itself; permissible values lie within the range (0, 1]. As the number of free slots within the hash table approaches 0, the chance of a hash collision occurring approaches 1, and an appropriately set load factor prevents this from happening. For example, imagine a hash table with a load factor of 0.70. When an insertion causes the hash table to surpass the 70 percent capacity mark, the hash table automatically increases its size to reduce the chance of collision. Since resizing is an expensive operation, it's ideal to minimize the number of times it occurs by appropriately setting the table's initial capacity and load factor. So, the next time you work with a hash table, be mindful of these properties.
Algorithmic Complexities of Basic Hash Table Operations

Operation     | Algorithmic Complexity
findItem(x)   | O(1)
insertItem(x) | O(1)
deleteItem(x) | O(1)
sort()        | O(n log n)
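To see how the initial capacity and load factor discussed above are specified in practice, here's a small sketch using Java's java.util.HashMap (the expected entry count and load factor are arbitrary example values):

import java.util.HashMap;
import java.util.Map;

public class HashTableTuning {
    public static void main(String[] args) {
        // If we expect roughly 10,000 entries, sizing the table up front
        // avoids repeated (and expensive) resize operations as it fills.
        int expectedEntries = 10_000;
        float loadFactor = 0.70f;

        // Choose an initial capacity large enough that expectedEntries
        // stays below the resize threshold (capacity * loadFactor).
        int initialCapacity = (int) Math.ceil(expectedEntries / loadFactor);

        Map<String, Integer> table = new HashMap<>(initialCapacity, loadFactor);

        table.put("alpha", 1);                       // insertItem: O(1) on average
        boolean found = table.containsKey("alpha");  // findItem:   O(1) on average
        table.remove("alpha");                       // deleteItem: O(1) on average

        System.out.println("found = " + found);
    }
}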
Balanced Binary Search Tree (e.g., AVL Tree or Red-Black Tree)
A tree is a fundamental data structure defined by a single node, referred to as the "root" node; there exists exactly one path from the root node to any other node within the tree. In a general tree, a node can have any number of child nodes. In a binary tree, however, each node has at most two children: a left child and a right child. A subset of trees, known as search trees, must define some well-known organizational strategy so that the structure can be searched effectively. For instance, given any node n in a binary search tree, all descendants in the left sub-tree of n are less than or equal to the value contained at node n, and all descendants in the right sub-tree of n are greater than the value contained at node n. Using this strategy, we can employ a search mechanism that knows precisely which sub-tree of a node to search whenever the item doesn't match the current node. This method is commonly known as "binary search."
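Ignoring balancing for the moment, a minimal sketch of that search strategy in Java (with made-up class names) might look like this:

public class BinarySearchTree {
    private static class Node {
        int value;
        Node left;
        Node right;
    }

    private Node root;

    // Returns true if value is in the tree. Each comparison tells us exactly
    // which sub-tree could contain the value, so (in a balanced tree) half of
    // the remaining nodes are eliminated at every step: O(log n).
    public boolean findItem(int value) {
        return find(root, value);
    }

    private boolean find(Node node, int value) {
        if (node == null) {
            return false;                   // fell off the tree: not present
        } else if (value == node.value) {
            return true;                    // found it
        } else if (value < node.value) {
            return find(node.left, value);  // smaller values live in the left sub-tree
        } else {
            return find(node.right, value); // larger values live in the right sub-tree
        }
    }
}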
Why is it important that for any node, the heights of its left and right sub-trees differ by no more than 1 (also referred to as keeping the tree balanced)? Because if the tree becomes unbalanced, the complexity of the search tree becomes O(n) instead of O(log n). That is, it becomes linear instead of logarithmic. Recall that the binary search works so effectively because, with each unsuccessful comparison, it eliminates half of the remaining items. If the tree isn't properly balanced, far fewer than half of the remaining items will be discarded, ultimately resulting in more iterations of the search. The primary difference between a standard binary search tree, an AVL tree, and a red-black tree is that the latter two keep themselves balanced upon each insertion of a new item.
One advantage balanced binary search trees have over hash tables is that they retain a relative ordering among their contents. While their other operations aren't quite as fast as a hash table's (refer to the table below), logarithmic time is certainly not bad by any means. However, if you're not interested in the ordering of the items or in iterating over them frequently, a hash table is certainly the way to go.
Algorithmic Complexities of Basic Balanced Search Tree Operations

Operation     | Algorithmic Complexity
findItem(x)   | O(log n)
insertItem(x) | O(log n)
deleteItem(x) | O(log n)
sort()        | O(1)
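In Java, java.util.TreeMap is one widely used red-black tree implementation, so it offers a quick way to see these characteristics in action (the keys and values below are arbitrary):

import java.util.Map;
import java.util.TreeMap;

public class BalancedTreeDemo {
    public static void main(String[] args) {
        // TreeMap is backed by a red-black tree: put/get/remove are O(log n),
        // and the keys are always maintained in sorted order.
        Map<String, Integer> tree = new TreeMap<>();
        tree.put("pear", 3);
        tree.put("apple", 1);
        tree.put("orange", 2);

        // Iteration visits the keys in sorted order with no extra sort step,
        // unlike a HashMap, whose iteration order is effectively arbitrary.
        for (Map.Entry<String, Integer> entry : tree.entrySet()) {
            System.out.println(entry.getKey() + " -> " + entry.getValue());
        }
    }
}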
Building Your Application's Data Structures
Can the above information help you when you're creating your application's data structures? Absolutely! When you're creating your data structures, will you likely use the above structures alone in their primitive form? Probably not. However, chances are, you'll use a hybrid of these structures and expose those operations to which your users need access. For example, if you need to maintain the order in which certain items are added to a structure and support extremely fast searches, you might employ a combination of a linked list (i.e., to maintain insertion order) and a hash table (i.e., to support O(1) searches). However, if you can't afford the space overhead of maintaining two structures, then perhaps you could resort to a balanced search tree, offering only slightly slower insertion and search times, yet without the additional space overhead.
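Incidentally, Java's java.util.LinkedHashSet is a ready-made version of exactly the linked list plus hash table hybrid described above: it backs a hash table with a linked list running through its entries, giving O(1) membership checks while preserving insertion order. A brief sketch:

import java.util.LinkedHashSet;
import java.util.Set;

public class HybridStructureDemo {
    public static void main(String[] args) {
        // LinkedHashSet = hash table (fast contains) + linked list (insertion order).
        Set<String> recentItems = new LinkedHashSet<>();
        recentItems.add("first");
        recentItems.add("second");
        recentItems.add("third");

        // O(1) search, courtesy of the hash table half of the hybrid.
        System.out.println(recentItems.contains("second"));  // true

        // Iteration reflects insertion order, courtesy of the linked list half.
        for (String item : recentItems) {
            System.out.println(item);
        }
    }
}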
When you're selecting your back-end data structure(s), it's crucially important that you ask yourself "How will my structures be predominantly used?" That is, which operations are most likely going to be invoked? Insert operations? Delete operations? Search operations? Can only certain items be deleted (e.g., the smallest item)? For each data structure candidate, what are the algorithmic complexities of the operations I need to invoke? Can I afford the space overhead of maintaining multiple (internal) structures? The answers to all of these questions play important roles in selecting the right data structure(s) to efficiently realize your solution. Remember, selecting the appropriate structures can be the difference between your operations taking seconds or hours to complete!
The point: Even though it can sometimes be so easy to "forget" all the surrounding theory and just use what's convenient, it's important to be mindful that certain data structures lend themselves to certain situations better than others. Remember, you can pound a nail with a lot of objects, but a hammer just works best!