Big O notation is a mathematical notation used to describe the asymptotic behavior of functions. More precisely, it describes an asymptotic upper bound on the magnitude of a function in terms of another, usually simpler, function.
In mathematics, it is usually used to characterize the residual term of a truncated infinite series, especially an asymptotic series. In computer science, it is useful in the analysis of the complexity of algorithms.
O(1)
An algorithm that does the same amount of work no matter what the input is. For instance, an algorithm that always returns the same value regardless of its input is O(1).
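For instance, a minimal Python sketch (the function name here is illustrative, not from the text):

    # O(1): the amount of work does not depend on the size of the input.
    def always_42(data):
        # Returns the same value regardless of the input: one step, always.
        return 42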
O(log n)
Algorithms based on binary trees are often O(log n). This is because a perfectly balanced binary search tree has log n levels, and searching for any element in it requires visiting only one node per level.
The binary search algorithm is another example of an O(log n) algorithm. In a binary search, one searches an ordered array: starting in the middle of the remaining space to be searched, one decides whether to search the top or the bottom half. The array can be divided in half only about log n times before a single element remains, which is the element being searched for, assuming it is in the array.
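As a concrete sketch (a generic textbook version in Python; the names are illustrative):

    def binary_search(sorted_items, target):
        # Each pass halves the remaining range, so at most ~log2(n) passes.
        low, high = 0, len(sorted_items) - 1
        while low <= high:
            mid = (low + high) // 2
            if sorted_items[mid] == target:
                return mid            # found: return its index
            elif sorted_items[mid] < target:
                low = mid + 1         # continue in the top half
            else:
                high = mid - 1        # continue in the bottom half
        return -1                     # not in the array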
O(n)
Algorithms of efficiency O(n) require only one pass over the entire input. For instance, a linear search algorithm, which searches an array by checking each element in turn, is O(n). Accessing an element in a linked list is also typically O(n), because linked lists do not support random access.
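A minimal linear search sketch in Python (names are illustrative):

    def linear_search(items, target):
        # One pass over the input: n comparisons in the worst case, so O(n).
        for index, value in enumerate(items):
            if value == target:
                return index
        return -1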
O(n log n)
Good sorting algorithms are often roughly O(n log n). An example of an algorithm with this efficiency is merge sort, which breaks an array into two halves, sorts those halves by recursively calling itself on them, and then merges the results back into a single array. Because it splits the array in half each time, there are about log n levels of splitting, and at each "level" (when the array is in two halves, then in quarters, and so forth) it must merge together all n elements, an operation of order n.
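A compact Python sketch of the merge sort just described (a generic textbook formulation, not code from the original):

    def merge_sort(items):
        # Splitting in half gives ~log n levels of recursion ...
        if len(items) <= 1:
            return items
        mid = len(items) // 2
        left = merge_sort(items[:mid])
        right = merge_sort(items[mid:])
        # ... and merging every element back together costs O(n) per level,
        # for O(n log n) overall.
        merged = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged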
O(n^2)
A fairly reasonable efficiency, still in the polynomial-time range. The typical examples of this order come from sorting algorithms, such as the selection sort example on the previous page.
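Since the selection sort example is on a previous page not reproduced here, a minimal Python sketch of it follows; the two nested passes over the input are what make it O(n^2):

    def selection_sort(items):
        n = len(items)
        for i in range(n):
            # Find the smallest remaining element ...
            smallest = i
            for j in range(i + 1, n):
                if items[j] < items[smallest]:
                    smallest = j
            # ... and swap it into position i. Two nested passes over the
            # input give roughly n^2 / 2 comparisons: O(n^2).
            items[i], items[smallest] = items[smallest], items[i]
        return items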
O(2^n)
The most important non-polynomial efficiency is this exponential time increase. Many important problems can only be solved by algorithms with this (or worse) efficiency. One example is factoring large numbers expressed in binary; no polynomial-time method is known, and a naive approach involves dividing every number less than the number being factored into that number until one divides evenly. For every additional binary digit, this requires twice as many tests.
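A naive trial-division sketch of the kind described (illustrative only): a number with k binary digits can be as large as 2^k, so each additional digit roughly doubles the number of trial divisions.

    def naive_factor(n):
        # Try every candidate divisor in turn: up to ~n trial divisions.
        # A number with k binary digits can be as large as 2^k, so each
        # extra bit roughly doubles the number of tests: O(2^k).
        for candidate in range(2, n):
            if n % candidate == 0:
                return candidate      # found a factor
        return None                   # no factor found: n is prime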
Wouldn't a better estimation of time be determined by a generational measure of real-time performance? Plot the results on a graph and then determine which function the graph takes.
Yes, but the point is to guesstimate what type of performance any particular algorithm has. God, it's been ages since I studied big O and all that; I'm sure someone else can give a better explanation.

K.I.L.E.R said:
Ta.
bloodbob said:
Big O is only in regard to complexity: generally either the number of computations required or the storage required.
Now, since you've made it a recursive function, it's ugly: f(n) = n^2 + f(n-1).
If we expand the recursion all the way down, the actual function is
f(n) = n^2 + (n-1)^2 + (n-2)^2 + ... + 1^2 = n(n+1)(2n+1)/6
Now, as n tends to infinity, n(n+1)(2n+1)/6 behaves like K x n^3 (the leading term is 2n^3/6, so K = 1/3).
Unless you're comparing two algorithms of the same order, K gets scaled away to nothing as n tends to infinity: in a ratio like (K x n^3)/(Y x n^2), the constants K and Y don't matter as n goes to infinity, do they?
So the big O notation would be O(n^3).
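To make the recurrence concrete, a hypothetical Python function with exactly this cost shape might look like the following: each call does n^2 steps of work and then recurses on n-1, so the total work is the sum of squares written out above, which grows as n^3.

    def f(n):
        # f(n) = n^2 + f(n-1): do n^2 units of work, then recurse on n-1.
        if n == 0:
            return 0
        work = 0
        for i in range(n):          # n^2 steps of "work" at this level
            for j in range(n):
                work += 1
        return work + f(n - 1)      # total = n^2 + (n-1)^2 + ... = O(n^3)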
Because there would be major implementation-specific factors to take into account. And really, how is trying to work out the function experimentally any better than doing it analytically?
n(n+1)(2n+1)/6 = (n^2+n)(2n+1)/6 = (2n^3 + n^2 + 2n^2 + n)/6

K.I.L.E.R said:
Thanks.
I would really like to know how you managed to get to K x n^3. I lost you in there.
BRiT said:
It gives you an order of magnitude based on your domain. Thus if your domain is doubled, you have a general idea of how the performance of an algorithm will scale. So if you have a choice between an O(n), an O(log n), or an O(n^2) algorithm, it should be clear which one to use for performance.
I think you could have written that better, but I know what you mean.

DemoCoder said:
It depends on how you view multiplication. The fastest known "practical" multiplication algorithm is n log n (fast Fourier transform + convolution + inverse transform). Karatsuba's method is n^lg(3).
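For reference, a minimal Python sketch of Karatsuba's method (a standard textbook formulation, not DemoCoder's code): it replaces the four sub-multiplications of the schoolbook approach with three, which is where the n^lg(3) bound comes from.

    def karatsuba(x, y):
        # Multiply nonnegative integers with three recursive sub-multiplications
        # instead of four, giving O(n^lg 3) digit operations instead of O(n^2).
        if x < 10 or y < 10:
            return x * y              # base case: single-digit multiply
        m = max(len(str(x)), len(str(y))) // 2
        high_x, low_x = divmod(x, 10 ** m)
        high_y, low_y = divmod(y, 10 ** m)
        z0 = karatsuba(low_x, low_y)
        z2 = karatsuba(high_x, high_y)
        # One multiply recovers both cross terms: z1 = low_x*high_y + high_x*low_y.
        z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
        return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0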
So O(n^2) != O(1)
Yes, that's exactly what I was referring to. However, traditionally you only count the cost of one kind of operation, not the individual arithmetic costs (e.g. the cost of adds and mults), so you're essentially right.
Replace n^2 with any non-recursive polynomial of n and the result is still the same.