Suppose you have analyzed two algorithms and expressed their run times in terms of the size of the input: Algorithm A takes 100n + 1 steps to solve a problem with size n; Algorithm B takes n² + n + 1 steps.
The following table shows the run time of these algorithms for different problem sizes:
| Input size | Run time of Algorithm A | Run time of Algorithm B |
|---|---|---|
| 10 | 1 001 | 111 |
| 100 | 10 001 | 10 101 |
| 1 000 | 100 001 | 1 001 001 |
| 10 000 | 1 000 001 | 100 010 001 |
At n = 10, Algorithm A looks pretty bad; it takes almost 10 times longer than Algorithm B. But for n = 100 they are about the same, and for larger values A is much better.
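If you want to check these numbers, a quick sketch in Python (mine, not part of the text) evaluates both step-count formulas directly:

```python
def steps_a(n):
    """Step count for Algorithm A: 100n + 1."""
    return 100 * n + 1

def steps_b(n):
    """Step count for Algorithm B: n**2 + n + 1."""
    return n**2 + n + 1

# Reproduce the rows of the table above.
for n in [10, 100, 1000, 10000]:
    print(n, steps_a(n), steps_b(n))
```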
The fundamental reason is that for large values of n, any function that contains an n² term will grow faster than a function whose leading term is n. The leading term is the term with the highest exponent.
For Algorithm A, the leading term has a large coefficient, 100, which is why B does better than A for small n. But regardless of the coefficients, there will always be some value of n where an² > bn.
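To make that concrete, here is a small sketch (the function name `crossover` is mine, not the text's) that finds the first n where an² exceeds bn, using a = 1 and b = 100 from the example:

```python
def crossover(a, b):
    """Return the smallest positive n where a*n**2 > b*n."""
    n = 1
    while a * n**2 <= b * n:
        n += 1
    return n

print(crossover(1, 100))  # 101: past this point, the n**2 term dominates
```

This matches the table: at n = 100 the two algorithms are about even, and beyond that point Algorithm A pulls ahead.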
The same argument applies to the non-leading terms. Even if the run time of Algorithm A were n + 1000000, it would still be better than Algorithm B for sufficiently large n.
In general, we expect an algorithm with a smaller leading term to be a better algorithm for large problems, but for smaller problems, there may be a crossover point where another algorithm is better. The location of the crossover point depends on the details of the algorithms, the inputs, and the hardware, so it is usually ignored for purposes of algorithmic analysis. But that doesn’t mean you can forget about it.
If two algorithms have the same leading order term, it is hard to say which is better; again, the answer depends on the details. So for algorithmic analysis, functions with the same leading term are considered equivalent, even if they have different coefficients.
An order of growth is a set of functions whose asymptotic growth behavior is considered equivalent. For example, 2n, 100n and n + 1 belong to the same order of growth, which is written O(n) in Big-Oh notation and often called linear because every function in the set grows linearly with n.
All functions with the leading term n² belong to O(n²); they are quadratic, which is a fancy word for functions with the leading term n².
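One way to see the equivalence (a sketch under the definitions above, not code from the text) is to watch the ratio of each function to n; for functions in O(n), the ratio stays bounded by a constant as n grows:

```python
for n in [10, 1000, 100000, 10000000]:
    # Each ratio settles toward a constant (2, 100, and 1), which is
    # what it means for all three functions to be in O(n).
    print(n, 2*n / n, 100*n / n, (n + 1) / n)
```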
The following table shows some of the orders of growth that appear most commonly in algorithmic analysis, in increasing order of badness.
| Order of growth | Name |
|---|---|
| O(1) | constant |
| O(log_b n) | logarithmic (for any b) |
| O(n) | linear |
| O(n log_b n) | “en log en” |
| O(n²) | quadratic |
| O(n³) | cubic |
| O(cⁿ) | exponential (for any c) |
For the logarithmic terms, the base of the logarithm doesn’t matter; changing bases is the equivalent of multiplying by a constant, which doesn’t change the order of growth. Similarly, all exponential functions belong to the same order of growth regardless of the base of the exponent. Exponential functions grow very quickly, so exponential algorithms are only useful for small problems.
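The base-change claim is easy to check numerically; this sketch (mine, not the text's) shows that log₂ n / log₁₀ n is the same constant for every n:

```python
import math

for n in [10, 1000, 100000]:
    # The ratio is always log(10)/log(2), about 3.32, regardless of n,
    # so switching bases only multiplies by a constant factor.
    print(n, math.log2(n) / math.log10(n))
```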
Exercise B.1. Read the Wikipedia page on Big-Oh notation at http://en.wikipedia.org/wiki/Big_O_notation and answer the following questions:
- What is the order of growth of n³ + n²? What about 1000000n³ + n²? What about n³ + 1000000n²?
- What is the order of growth of (n² + n) · (n + 1)? Before you start multiplying, remember that you only need the leading term.
- If f is in O(g), for some unspecified function g, what can we say about af + b?
- If f1 and f2 are in O(g), what can we say about f1 + f2?
- If f1 is in O(g) and f2 is in O(h), what can we say about f1 + f2?
- If f1 is in O(g) and f2 is in O(h), what can we say about f1 · f2?
Programmers who care about performance often find this kind of analysis hard to swallow. They have a point: sometimes the coefficients and the non-leading terms make a real difference. Sometimes the details of the hardware, the programming language, and the characteristics of the input make a big difference. And for small problems asymptotic behavior is irrelevant.
But if you keep those caveats in mind, algorithmic analysis is a useful tool. At least for large problems, the “better” algorithm is usually better, and sometimes it is much better. The difference between two algorithms with the same order of growth is usually a constant factor, but the difference between a good algorithm and a bad algorithm is unbounded!