OpenGL guy said:
"If anyone here disagrees that f = n^2 is O(1) then I'll be very confused."

At first I didn't get how you could think that, but then I realized the misunderstanding.
Basic said:
"bloodbob: You have some small calculation errors in post #10 and #12 (constants, and signs), and a bigger one in the final step in post #12. Considering the small steps you took between the first steps, it's a major leap to say that "n^3+(2n-1)+(4n-4)+(6n-9)..." is O(n^3). It seems like you missed the negative n^3 term in it."

Yeah, there are some errors, but it doesn't really matter. The mistake isn't a -n^3 though, or at least I hope not.
bloodbob said:
"Yeah there are some errors but it doesn't really matter. The mistake isn't a -n^3 though, or at least I hope not."

It wouldn't have mattered if the "(2n-1)+(4n-4)+(6n-9)..." part could be neglected.
Basic said:
"So f = n^2 is O(n^2), but the time it takes to calculate f is O(1) (kind of, see DemoCoder's post)."

The former just isn't right. Big O is always about algorithmic complexity. Every f(x) that can be expressed wholly without Sigma or Pi (accumulation, successive multiplication) is always O(1). That includes all functions used as examples in this thread so far.
OpenGL guy said:
"If anyone here disagrees that f = n^2 is O(1) then I'll be very confused."

I took his f as the full complexity, not as the actual function, since he said:

"I understand you take out the multiplier of the largest growing function, and stick the fastest growing part inside a Big O.

f = n^2 + f(n-1)

Big O:
f = O(n^2)"
main()
{
    recurse(1000);
}

recurse(int n)    // complexity: f(n) = n^2 + f(n-1)
{
    if (n > 0)
    {
        for (int outer = 0; outer < n; outer++)    // the two loops: n^2 steps
            for (int inner = 0; inner < n; inner++)
                do something;
        recurse(n - 1);    // f(n-1); inside the if, so the recursion stops at n = 0
    }
}
zeckensack said:
"Big O is always about algorithmic complexity."

I seem to remember rather clearly from the mathematics that I have studied that the Big-O notation is frequently used in calculus and discrete mathematics, also outside just algorithmic complexity measurements.
arjan de lumens said:
"I seem to remember rather clearly from the mathematics that I have studied that the Big-O notation is frequently used in calculus and discrete mathematics also outside just algorithmic complexity measurements."

I must admit I'm a bit rusty, but I don't recall ever seeing that applied. I strictly associate O(whatever) with computer science, because during the math lessons I took (only four "basic" semesters at university, because I didn't require more; might be different later on) there was always a different way to express what you "allow" as another meaning of O notation: the limit ("limes").
When trying to look up the Big-O notation at e.g. Mathworld, I found
http://mathworld.wolfram.com/AsymptoticNotation.html, which seems to define it only in terms of functions, not algorithms. From what I can find, the notation was not invented specifically to deal with algorithmic complexity - it is merely very convenient to use for that purpose.
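For reference, the standard function-based definition that the Mathworld page uses (no algorithm in sight) can be written out:

```latex
% f is O(g) as x -> infinity iff some constant multiple of g
% eventually bounds |f|:
f(x) = O\big(g(x)\big)
\quad\Longleftrightarrow\quad
\exists\, C > 0,\; x_0 \;:\; |f(x)| \le C\,|g(x)| \;\text{ for all } x > x_0
```

Algorithmic complexity is just the special case where f(n) counts the operations an algorithm performs on an input of size n.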
zeckensack said:
"E.g. for f(x):=3*x³-x, lim(f(x)) for x->infinity is 3*x³"

Err... I think you would actually say lim(3*x³-x) as x->infinity = infinity. That is why O is a useful notation.
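Simon F's point, spelled out with the standard definition: the plain limit diverges, yet Big-O still pins down the growth rate:

```latex
\lim_{x\to\infty} \big(3x^3 - x\big) = \infty,
\qquad\text{but}\qquad
\left|\,3x^3 - x\,\right| \le 3x^3 \;\;(x \ge 1)
\;\Longrightarrow\; 3x^3 - x = O\big(x^3\big)
```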
OpenGL guy said:
"If anyone here disagrees that f = n^2 is O(1) then I'll be very confused."

bloodbob said:
"If that is the case, then how can you claim that an algorithm that takes n^2 operations is order 1 in complexity??? It's like saying bubble sort is an O(1) algorithm because it takes n^2 operations."

f := n² does not take n² operations to compute. It takes exactly one operation, regardless of how big n is. Even for f := 10*n², which takes two operations, O is still 1, not 2, because the number of operations does not grow with larger n.
bloodbob said:
"And for a dataset of n, if the complexity is n^2+(n-1)^2+(n-2)^2+...+(n-[n-1])^2 steps, the order of complexity should be O(n^3). It's not O(n^2), due to the fact that the complexity formula that kiler gave us is dependent on the value of n."

I say it's O(n). You compute 1² for #1, square and add 2² for #2, square and add 3² for #3, etc. For each consecutive element, you do two operations extra (except for the first element). I.e. the number of required operations grows linearly with n.
bloodbob said:
"If I was wrong about kiler's meaning, then it should be O(n), since you have n recursions and a constant number of steps in each recursion, though you could optimise the algorithm to be O(1)."

Yes.
zeckensack said:
"Barring arjan de lumens and my reply ... I think this is about algorithmic complexity, because of the code ... f:=n² does not take n² operations to compute."

Did you even bother to read??? I took his f as the full complexity, not as the actual function, since he said...
Simon F said:
"Err... I think you would actually say lim(3*x³-x) as x->infinity = infinity. That is why O is a useful notation."

Dang ... you're right. Nice new perspective.
bloodbob said:
"Did you even bother to read???"

Yes. I misunderstood some parts. In particular, I lost you when you added up complexity terms, not value terms. And so I also couldn't relate your code snippet to anything above, but rather thought it was a new example of a high-complexity algorithm. My apologies. There is a point that still stands, though.
bloodbob said:
"If that is the case, then how can you claim that an algorithm that takes n^2 operations is order 1 in complexity???"

He didn't make that claim. That was why I thought you misunderstood, and why I wrote my reply.
bloodbob said:
"When he said f:=n^2-f(n-1) he meant O(f)=n^2+O(f(n-1))."

It's easy to get confused when what is written is not what is meant. And just as frankly, I don't see how you reach that interpretation from what K.I.L.E.R originally asked, to which the answer is O(n) in terms of algorithmic complexity, without a doubt in my mind.
zeckensack said:
"It's easy to get confused when what is written is not what is meant. And just as frankly, I don't see how you reach that interpretation from what K.I.L.E.R originally asked."

First, he uses f and O; n is the problem size. O is obviously the order of complexity, and f generally just means function, so what would be a function of the problem size? Well, it would obviously be the complexity/term function. This is the only logical answer, so if I'm wrong I blame kiler.

I'd hope that kiler has some basic idea of how the order of complexity actually relates to complexity. So when he's talking about the largest "growing function"/term, he is talking about complexity. And when he then goes and pulls out n^2 from f, it would also indicate that f is the complexity:

"I understand you take out the multiplier of the largest growing function, and stick the fastest growing part inside a Big O."
bloodbob said:
"First, he uses f and O; n is the problem size. O is obviously the order of complexity, and f generally just means function, so what would be a function of the problem size? Well, it would obviously be the complexity/term function. This is the only logical answer, so if I'm wrong I blame kiler."

I don't read it that way at all. For example, KILER could have been giving a function that determines a series where it makes more sense to use "n" as your variable instead of "x".
"Maybe I'm just crazy."

The jury's still out on that one.
K.I.L.E.R said:
"I'm talking about algorithmic functions and not mathematical ones. I'm bringing a computer algorithm down into a math function and measuring its complexity in O."

Clarification please. Are you doing this:
int recurse(int n)    // computes f(n) = n^2 + f(n-1)
{
    if (n <= 0)
        return 0;     // base case (missing in the original)
    return n * n + recurse(n - 1);
}
or this:

void recurse(int n)
{
    if (n <= 0)
        return;    // base case (missing in the original)
    for (int outer = 0; outer < n; outer++)
    {
        for (int inner = 0; inner < n; inner++)
        {
            doSomething(outer, inner, n);
        }
    }
    recurse(n - 1);
}
// this isn't needed but I'll put it anyway
void doSomething(int x, int y, int z)
{
    // square the entry in place (in C, ^ is XOR, so spell the square out)
    someglobalarray[z][y][x] = someglobalarray[z][y][x] * someglobalarray[z][y][x]
                             - someotherglobalarray[z][y][x];
    // Yeah, I'm a dirty C bastard who writes bad code
}
Well then you're just silly for using f(n) and O together and confusing me.

K.I.L.E.R said: