Complexity of Searching and Sorting

We introduced Big-O notation in Chapter 6 as a way of comparing the work done by different algorithms. Let's apply it to the algorithms that we've developed in this chapter and see how they compare with each other. In each algorithm, we start with a list containing some number of values. We'll refer to the number of values as N.

In the worst case, our original Search function scans all N values to locate an item. Thus, it requires N steps to execute. On average, Search takes roughly N/2 steps to find an item; however, recall that in Big-O notation, we ignore constant factors (as well as lower-order terms). Thus, function Search is an order-N algorithm, that is, an O(N) algorithm.
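
A minimal C++ sketch of such a linear scan, using illustrative names rather than the book's exact declarations, makes the N-step worst case concrete:

```cpp
#include <cstddef>
#include <vector>

// Linear search: examine every value until the item is found or the list ends.
// In the worst case all N values are examined, so the work grows in direct
// proportion to N.
bool Search(const std::vector<int>& list, int item)
{
    for (std::size_t index = 0; index < list.size(); index++)
    {
        if (list[index] == item)
            return true;   // found the item; stop scanning
    }
    return false;          // examined all N values without finding it
}
```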

Search2 is also an O(N) algorithm because, even though we saved a comparison on each loop iteration, the same number of iterations is performed. However, making the loop more efficient without changing the number of iterations decreases the constant factor (the number of steps per iteration) by which N is multiplied in the algorithm's work formula. Thus, function Search2 is said to be a constant factor faster than Search.
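
One standard way to save a comparison on each iteration, offered purely as an illustration since the chapter's actual Search2 may use a different device, is a sentinel: append the sought item so the loop needs only the element test, not a separate end-of-list test.

```cpp
#include <cstddef>
#include <vector>

// Sentinel search: appending the item guarantees the scan stops, which removes
// the separate "end of list?" test from each iteration. (An illustrative way to
// save one comparison per iteration; the book's Search2 may differ.)
bool Search2(std::vector<int> list, int item)   // list passed by value, so the caller's data is untouched
{
    list.push_back(item);                 // sentinel guarantees the loop stops
    std::size_t index = 0;
    while (list[index] != item)           // only one comparison per iteration
        index++;
    return index < list.size() - 1;       // true only if found before the sentinel
}
```

Passing the list by value keeps the caller's data unmodified; an in-place version would append the sentinel and remove it again before returning.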

What about SearchOrd? The number of iterations is decreased for the case in which the item is missing from the list. However, all we have done is take a case that would require N steps and reduce its time, on average, to N/2 steps. Thus, SearchOrd is also O(N).
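
A sketch of a search over a list in ascending order, again with illustrative names, shows why only the missing-item case improves: a found item still takes as many steps as before.

```cpp
#include <cstddef>
#include <vector>

// Search in a list sorted into ascending order: stop as soon as we pass the
// position where the item would have to be, so a missing item no longer
// forces a scan of all N values.
bool SearchOrd(const std::vector<int>& list, int item)
{
    for (std::size_t index = 0; index < list.size(); index++)
    {
        if (list[index] == item)
            return true;     // found the item
        if (list[index] > item)
            return false;    // passed where it would be; it cannot be present
    }
    return false;            // item is greater than every value in the list
}
```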

Now consider BinSearch. In the worst case, it eliminates half of the remaining array components on each iteration. Thus, the worst-case number of iterations is equal to the number of times N must be divided by 2 to eliminate all but one value. This number is computed by taking the logarithm, base 2, of N (written log2N). Here are some examples of log2N for different values of N:

    N             log2N
    2                 1
    8                 3
    32                5
    1,024            10
    32,768           15
    1,048,576        20
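
As the table suggests, doubling N adds only one more iteration. A minimal C++ sketch of a binary search over a list in ascending order (the names here are illustrative, not the book's exact code) shows where that halving comes from:

```cpp
#include <vector>

// Binary search on a list sorted into ascending order: each iteration
// eliminates half of the values that remain, so at most about log2(N)
// iterations are needed.
bool BinSearch(const std::vector<int>& list, int item)
{
    int first = 0;
    int last  = static_cast<int>(list.size()) - 1;
    while (first <= last)
    {
        int middle = first + (last - first) / 2;   // midpoint of what remains
        if (list[middle] == item)
            return true;                // found the item
        else if (list[middle] < item)
            first = middle + 1;         // discard the lower half
        else
            last = middle - 1;          // discard the upper half
    }
    return false;                       // nothing left to examine; not present
}
```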