I’d like to give a brief overview of computational efficiency, since it’s a topic that has come up in a few conversations recently. The super short version is this: it’s often helpful to understand the resource (time, space, or power) needs of a given algorithm. Why? Because we want the fastest algorithm, or the one that uses the least amount of storage. In extreme computing environments (think Mars Rover, Apollo capsules, etc.), we may have very limited resources available. For example, your digital wrist watch almost certainly has more memory than the Apollo Guidance Computer, which had only about 4KB of RAM. How can you possibly land a person on the moon with only 4KB?!!
So, how do we measure the resource use of a given algorithm? Essentially, we decide how much of a resource is used for each “step” in the algorithm, then try to count the number of steps the algorithm will perform. Suppose we had a very simple algorithm:
print(1)
With a few simplifying assumptions, it’s probably safe to say that this would require just 1 step to perform, namely the step of displaying the value 1. Meanwhile, a similar, yet slightly more complex, algorithm such as
print(x)
would likely require 2 steps: 1 step to look up the value of x and 1 step to display the value. Here, I’m assuming that the variable x has already been assigned a value, which would have taken a little work, as well.
Taking it one step further (ha!), consider the slightly more complex example of
print(x + 1)
How many steps would that take? Well, looking up the value of x is still 1 step, plus 1 step for the addition operation, plus 1 step to display the result of the addition. So, 3 steps in total. This assumes that you have a sensible computer at your disposal where addition only needs 1 step to complete.
Again, let’s make it a little more complicated and consider
print(x * 1)
Now, multiplication, unlike addition, is usually a two step (or more!) operation. Why is that? Well, for simple addition problems, computers have a straightforward, direct way of solving the problem. But multiplication is a more complex process, as any 4th grader will tell you. One way to think of multiplication, in the general case, is as repeated addition. So, x * 4 is the same as x + x + x + x. But if addition is a 1 step process, then that approach would require 3 addition steps just for x * 4 (and, in general, roughly n steps for x * n)! Fortunately, computers have better algorithms for multiplication than our grade school teachers taught us. Still, multiplication takes more steps than addition, so for the sake of argument, let’s say it’s 2 steps per multiplication. You can probably imagine what would happen with exponentiation!
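To make that bookkeeping concrete, here’s a small Python sketch of the repeated-addition approach (a toy model for counting steps, not how real hardware multiplies), tallying one step per addition:

def multiply_by_repeated_addition(x, n):
    # Multiply x by a positive integer n using only addition,
    # counting one "step" per addition performed.
    result = x
    steps = 0
    for _ in range(n - 1):
        result = result + x
        steps += 1
    return result, steps

print(multiply_by_repeated_addition(5, 4))  # (20, 3): three additions for x * 4

Notice that the step count grows with the multiplier, which is exactly why repeated addition is a poor general-purpose multiplication algorithm.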
In short, every operation that the computer must perform takes time, which in the abstract we can think of as steps. Looking up a value is an operation, storing a value is an operation, displaying a value, adding, multiplying, and even just waiting: they’re all operations. We can count the number of operations to be performed (not always as trivial as I’ve made it here) and use the total number of steps as a measure of efficiency.
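As a rough sketch of that kind of counting, here’s a Python version that sums a list while tallying operations. Exactly what counts as a “step” is a modeling choice, so the tally here is illustrative rather than definitive:

def sum_with_step_count(values):
    total = 0
    steps = 0
    for v in values:
        steps += 1  # count looking up each element
        total += v
        steps += 1  # count each addition
    return total, steps

print(sum_with_step_count([1, 2, 3]))  # (6, 6)

The important part isn’t the exact total; it’s that the count grows with the size of the input, which is the theme of the rest of this post.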
Some algorithms take constant time to execute. Looking up the first entry in a phone book, for example, will always take the same number of steps, regardless of how big the phone book is.
Other algorithms take a number of steps that varies with the size of the input (i.e., how big the phone book is). For example, counting the number of people named Jane in the phone book is relatively easy in a very small town, but very hard in a large city! In computer science terms, we say that the input to an algorithm is of size n (i.e., how many names are in the phone book, in this case). When n is big, the algorithm takes more steps.
For very large data sets, the difference between an algorithm that executes in 1 step and an algorithm that requires n steps (where n is the size of the input) is enormous. Finding the first name in the phone book in New York City is as easy as finding the first name in the phone book in Smalltown USA.
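Modeling the phone book as a plain Python list of names (a simplification, and the names here are made up), the two questions look like this:

phone_book = ["Aaron", "Jane", "Jane", "Zoe"]  # hypothetical data

# "What is the first entry?" One step, no matter how long the list is.
first = phone_book[0]

# "How many people are named Jane?" One look at every entry.
janes = 0
for name in phone_book:
    if name == "Jane":
        janes += 1

print(first, janes)  # Aaron 2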
In computer science we would say that asking “what is the first” is O(1) (read: “big-Oh of 1” or “on the order of 1”) and asking “how many are there” is O(n) (read: “big-Oh of n” or “on the order of n”).
O(1) algorithms execute in a constant number of steps (the time to execute it doesn’t vary when the input size varies). For example, “what is the first (or last) element?” is an O(1) question… no matter how large the input is, finding the first (or last) element is easy.
O(n) algorithms execute in n steps (the time to execute DOES vary when the input size varies). For example, “what is the maximum (or minimum) value in a given list?” is an O(n) question… generally, you have to inspect every element in the list to be able to answer it. If you happen to know a priori that the list is sorted (a special case), then finding the minimum or maximum reduces to finding the first or last element in the list, respectively, making it an easy question, but you have to get the list sorted first.
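As a sketch of the difference, here’s what the full scan looks like in Python, alongside the sorted special case:

def maximum(values):
    # O(n): inspect every element once.
    best = values[0]
    for v in values[1:]:
        if v > best:
            best = v
    return best

data = [7, 2, 9, 4]  # made-up example data
print(maximum(data))  # 9, found by scanning the whole list

sorted_data = sorted(data)  # sorting costs extra steps up front...
print(sorted_data[-1])      # ...but then the maximum is just the last element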
There are many other orders of complexity: O(log n), O(n log n), O(n²), O(n³), O(n!), and so on. (Something like O(n/2), by the way, isn’t really a different order: constant factors get dropped, so O(n/2) is just O(n).) Something that’s O(n!) will take a very large number of steps, even for small values of n (the factorial function grows very quickly!).
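To get a feel for how quickly these orders diverge, here’s a quick sketch tabulating a few of them for small values of n:

import math

print("n", "log2(n)", "n^2", "n!")
for n in [1, 5, 10, 15]:
    print(n, round(math.log2(n), 1), n ** 2, math.factorial(n))

By n = 15, n! is already over a trillion steps.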
In short, understanding something about the number of steps a given algorithm will take to solve an instance of a problem can help us choose among different algorithms that all produce the same result but differ in how efficiently they get there.