Big O notation

OpenGL guy said:
If anyone here disagrees that f(n) = n^2 is O(1) then I'll be very confused.
At first I didn't get how you could think that, but then I realized the misunderstanding.

You're talking about the complexity to calculate the function f(n), but what we're talking about here is when f(n) is the complexity of some algorithm.

So f(n) = n^2 is O(n^2), but the time it takes to calculate f(n) is O(1) (kind of, see DemoCoder's post).


bloodbob:
You have some small calculation errors in post #10 and #12 (constants, and signs). And a bigger one in the final step in post #12. Considering the small steps you took between the first steps, it's a major leap to say that "n^3+(2n-1)+(4n-4)+(6n-9)..." is O(n^3). It seems like you missed the negative n^3 term in it.
 
Basic said:
bloodbob:
You have some small calculation errors in post #10 and #12 (constants, and signs). And a bigger one in the final step in post #12. Considering the small steps you took between the first steps, it's a major leap to say that "n^3+(2n-1)+(4n-4)+(6n-9)..." is O(n^3). It seems like you missed the negative n^3 term in it.
Yeah, there are some errors but it doesn't really matter. The mistake isn't a -n^3 though, or at least I hope not.
 
Last edited by a moderator:
bloodbob said:
Yeah, there are some errors but it doesn't really matter. The mistake isn't a -n^3 though, or at least I hope not.
It wouldn't have mattered if the "(2n-1)+(4n-4)+(6n-9)..."-part could be neglected.
(Btw, it should've been n^3-(2n-1)-(4n-4)-(6n-9)...)

And you're right that the mistake isn't -n^3. It's -2/3*n^3 + O(n^2). So the conclusion is correct, except that you shouldn't draw that conclusion without proving that the "-2/3" part isn't -1.

Sorry for nitpicking, it's an "occupational injury" from all the math exams I've marked.
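For the record, the arithmetic behind that correction (my derivation; the thread does not spell it out): the k-th subtracted term is 2kn - k^2, so

```latex
n^3 - \sum_{k=1}^{n-1}\bigl(2kn - k^2\bigr)
  = n^3 - 2n\cdot\frac{(n-1)n}{2} + \frac{(n-1)n(2n-1)}{6}
  = \frac{n^3}{3} + \frac{n^2}{2} + \frac{n}{6}
  = n^3 - \tfrac{2}{3}n^3 + O(n^2)
```

which is exactly the "-2/3*n^3 + O(n^2)" above.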
 
Basic said:
So f(n) = n^2 is O(n^2), but the time it takes to calculate f(n) is O(1) (kind of, see DemoCoder's post).
The former just isn't right. Big O is always about algorithmic complexity. Every f(x) that can be expressed wholly without Sigma or Pi (accumulation, successive multiplication) is always O(1). That includes all functions used as examples in this thread so far.

What you're doing there is polynomial classification. f(x)=x² is a second order polynomial function. But you don't use big O notation for that, that's just confusing.

Here's a math type friendly example for O notation:
a(0):=1
a(n+1):=a(n)*2, n>=0

O(n)=n, because if you want to compute the 1000th element, you need to do 1001 operations, 2001 ops for #2000 etc. Number of ops scales "almost" linearly with n. That's what this notation is about.

If you were going to classify this, you'd quickly see that a(n)=2^n.
And then you could go ahead with this insight and optimize the "algorithm" by simplification, making it O(1), because it is, again, only a single term.
 
OpenGL guy said:
If anyone here disagrees that f(n) = n^2 is O(1) then I'll be very confused.
I took his f(n) as the full complexity, not as the actual function, since he said
I understand you take out the multiplier of the largest growing function, and stick the fastest growing part inside a Big O

f(n) = n ^ 2 + f(n-1)

Big O:

f(n) = O(n^2)
Clearly you could have cases with such a complexity.
Code:
void do_something(void) { /* one unit of work */ }

void recurse(int n)  // complexity = n^2 + f(n-1)
{
    if (n > 0)
    {
        for (int outer = 0; outer < n; outer++)  // n^2 iterations of the inner body
            for (int inner = 0; inner < n; inner++)
                do_something();

        recurse(n - 1);  // f(n-1), inside the if so the recursion terminates
    }
}

int main(void)
{
    recurse(1000);
    return 0;
}
If that is the case then how can you claim that an algorithm that takes n^2 operations is order 1 in complexity??? It's like saying bubble sorting is an O(1) algorithm because it takes n^2 operations.
Edit: n = the problem size; if it was x^2 then that would be O(1)


So frankly I do disagree, since otherwise all I learnt at university was for nothing.

And for a dataset of n, if the complexity is n^2+(n-1)^2+(n-2)^2+..+(n-[n-1])^2 steps, the order of complexity should be O(n^3). It's not O(n^2), due to the fact that the complexity formula that kiler gave us is dependent on the value of n.

If I was wrong about kiler's meaning then it should be O(n), since you have n recursions and a constant number of steps in each recursion, though you could optimise the algorithm to be O(1).
 
zeckensack said:
Big O is always about algorithmic complexity.
I seem to remember rather clearly from the mathematics that I have studied that the Big-O notation is frequently used in calculus and discrete mathematics, outside of algorithmic complexity measurements as well.

When trying to look up the Big-O notation at e.g. MathWorld, I found:
http://mathworld.wolfram.com/AsymptoticNotation.html which seems to define it only in terms of functions, not algorithms. From what I can find, the notation was not invented specifically to deal with algorithmic complexity - it is merely very convenient to use for that purpose.
 
arjan de lumens said:
I seem to remember rather clearly from the mathematics that I have studied that the Big-O notation is frequently used in calculus and discrete mathematics, outside of algorithmic complexity measurements as well.

When trying to look up the Big-O notation at e.g. MathWorld, I found:
http://mathworld.wolfram.com/AsymptoticNotation.html which seems to define it only in terms of functions, not algorithms. From what I can find, the notation was not invented specifically to deal with algorithmic complexity - it is merely very convenient to use for that purpose.
I must admit I'm a bit rusty, but I don't recall ever seeing that applied. I strictly associate O(whatever) with computer science, because during the math lessons I took (only four "basic" semesters at university, because I didn't require more; might be different later on) there was always a different way to express what you "allow" as another meaning of the O(n) notation: the limes, i.e. the limit.

E.g. for f(x):=3*x³-x, lim(f(x)) for x->infinity is 3*x³ or, more colloquially, f(x) approaches 3*x³ toward infinity. Using that, there's no need for O(n) in this context, which I find very confusing, so I like being able to avoid it, even though I now see that it is common in some places.

The "problem" with the limes is that it does not eliminate constant factors, like 3 in my example. "Your" O(n) of that function would be x³ (or n³ or whatever) while "my" limes is 3*x³. So it's not the same but close enough.
 
zeckensack said:
E.g. for f(x):=3*x³-x, lim(f(x)) for x->infinity is 3*x³
Err... I think you would actually say lim(3*x³-x) as x->infinity = infinity. That is why O is a useful notation.
 
Barring arjan de lumens and my reply ... I think this is about algorithmic complexity, because of the code :p
OpenGL guy said:
If anyone here disagrees that f(n) = n^2 is O(1) then I'll be very confused.
bloodbob said:
If that is the case then how can you claim that an algorithm that takes n^2 operations is order 1 in complexity??? It's like saying bubble sorting is an O(1) algorithm because it takes n^2 operations.
f(n):=n² does not take n² operations to compute. It takes exactly one operation, regardless of how big n is. Even for f(n):=10*n², which takes two operations, O(n) is still 1, not 2, because the number of operations does not grow with larger n.

Bubble sort OTOH does take a growing number of operations depending on the size of the field to be sorted (which is n, aka "the size of the problem", btw), so its complexity must depend on n itself. Dunno if it's n² but that looks pretty reasonable.
bloodbob said:
And for a dataset of n, if the complexity is n^2+(n-1)^2+(n-2)^2+..+(n-[n-1])^2 steps, the order of complexity should be O(n^3). It's not O(n^2), due to the fact that the complexity formula that kiler gave us is dependent on the value of n.
I say it's O(n). You compute 1² for #1, square and add 2² for #2, square and add 3² for #3 etc. For each consecutive element, you do two operations extra (except for the first element). I.e. the number of req'd operations grows linearly with n.

The value of the term does grow much faster, but that's a separate issue.
bloodbob said:
If I was wrong about kiler's meaning then it should be O(n) since you have n recursions and a constant number of steps in each recursion though you could optimise the algorithim to be O(1).
Yes.
 
zeckensack said:
Barring arjan de lumens and my reply ... I think this is about algorithmic complexity, because of the code :p
f(n):=n² does not take n² operations to compute.
Did you even bother to read???
I took his f(n) as the full complexity, not as the actual function, since he said....

When he said f(n):=n^2+f(n-1) he meant Complexity(f(n))=n^2+Complexity(f(n-1)).

Seriously, if you looked at the code I posted, I can't see how in hell you could do that in n steps. You're deluded.

And how can you say something that takes "n^2+(n-1)^2+(n-2)^2+..+(n-[n-1])^2" steps is order O(n)??? You've quoted that text and then said you think that's O(n). Simply doing what kiler did and looking at the largest term, it's clearly at least O(n^2).
 
Simon F said:
Err... I think you would actually say lim(3*x³-x) as x->infinity = infinity. That is why O is a useful notation.
Dang ... you're right. Nice new perspective.
 
bloodbob said:
Did you even bother to read???
Yes. I misunderstood some parts. In particular, I lost you when you added up complexity terms, not value terms. And so I also couldn't relate your code snippet to anything above, but rather thought it was a new example of a high-complexity algorithm. My apologies. There is a point that still stands, though.

OpenGL guy might have misunderstood something, too, but what he said was correct in itself. I read your previous post as disagreeing with him, and thought you misunderstood him as well. From earlier:
bloodbob said:
If that is the case then how can you claim that an algorithm that takes n^2 operations is order 1 in complexity???
He didn't make that claim. That was why I thought you misunderstood, and why I wrote my reply.

bloodbob said:
When he said f(n):=n^2+f(n-1) he meant O(f(n))=n^2+O(f(n-1)).
It's easy to get confused when what is written is not what is meant. And just as frankly, I don't see how you reach that interpretation from what K.I.L.E.R originally asked, to which the answer is O(n) in terms of algorithmic complexity, without a doubt in my mind.
 
zeckensack said:
It's easy to get confused when what is written is not what is meant. And just as frankly, I don't see how you reach that interpretation from what K.I.L.E.R originally asked, to which the answer is O(n) in terms of algorithmic complexity, without a doubt in my mind.
First, he uses f(n) and O(n), where n is the problem size. O is obviously the order of complexity, and f generally just means function, so what would be a function of the problem size? It would obviously be the complexity/term function. This is the only logical answer, so if I'm wrong I blame kiler.

I understand you take out the multiplier of the largest growing function, and stick the fastest growing part inside a Big O.
I'd hope that kiler has some basic idea of how the order of complexity actually relates to complexity. So when he's talking about the largest "growing function"/term, he is talking about complexity. So when he then goes and pulls out n^2 from f(n), it would also indicate that f(n) is the complexity.

Maybe I'm just crazy.
 
bloodbob said:
First, he uses f(n) and O(n), where n is the problem size. O is obviously the order of complexity, and f generally just means function, so what would be a function of the problem size? It would obviously be the complexity/term function. This is the only logical answer, so if I'm wrong I blame kiler.
I don't read it that way at all. For example, KILER could have been giving a function that determines a series where it makes more sense to use "n" as your variable instead of "x".
Maybe I'm just crazy.
Jury's still out on that one ;)

Anyway, without some more feedback from KILER (like what context is this taken from), this whole thing is kinda moot. I still feel that people are making this way too complicated...
 
I'm talking about algorithmic functions and not mathematical ones.
I'm bringing a computer algorithm down to a math function and measuring its complexity in O.

I think I'm understanding this a lot better.
I'm looking for a conclusive answer to this because I wish to apply this knowledge to a few algorithms I have.
 
K.I.L.E.R said:
I'm talking about algorithmic functions and not mathematical ones.
I'm bringing a computer algorithm down to a math function and measuring its complexity in O.
Clarification please. Are you
A) bringing the algorithm down to a math function, or
B) bringing the number of steps needed for the algorithm down to a math function?

If A), then the code for the thing you posted up the top would be
Code:
int recurse(int n)
{
    if (n <= 0)
        return 0;                   // base case so the recursion terminates
    return n * n + recurse(n - 1);  // n * n, since ^ is XOR in C
}
or if
B) it could be something like this

Code:
void doSomething(int x, int y, int z);

void recurse(int n)
{
    if (n <= 0)
        return;  // base case so the recursion terminates

    for (int outer = 0; outer < n; outer++)
    {
        for (int inner = 0; inner < n; inner++)
        {
            doSomething(outer, inner, n);
        }
    }

    recurse(n - 1);
}

// this isn't needed but I'll put it anyway
void doSomething(int x, int y, int z)
{
    someglobalarray[z][y][x] = someglobalarray[z][y][x] * someglobalarray[z][y][x] - someotherglobalarray[z][y][x];
    // Yeah I'm a dirty C bastard who writes bad code
}
 
The entire point of this thread is to give me knowledge. ;)
I think you missed the point of the entire thread. :LOL:

"Well then you're just silly for using f(n) and O(n) together and confusing me."

So if I used f(n) and then O(k), would that be alright?
 