In math, big-O denotes a set. For example, O(n) is the set of functions that grow no faster than n; it includes 10n, 1, log n, etc. Roughly speaking, if the limit of f(x)/g(x) as x -> infinity is a finite value (possibly zero), then f = O(g).
So, in math terms, writing:
f(n) = n^2 + f(n-1)
means
f(n) = n^2 + (n-1)^2 + ... + 1 = n(n+1)(2n+1)/6
which is a function in O(n^3), i.e. f = O(n^3) (strictly, the "=" should be "belongs to the set," but it is conventional to write "=" in big-O notation).
This means that if you have an algorithm that performs n^2 steps on input of size n and then applies itself to input of size n-1, its complexity is O(n^3).
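As a sketch (the function `work` is a hypothetical algorithm invented here to illustrate the recurrence), counting the steps such an algorithm takes reproduces the recurrence and its closed form:

```python
def work(n):
    """Hypothetical algorithm matching the recurrence f(n) = n^2 + f(n-1):
    do n^2 unit steps of "real work", then recurse on n-1.
    Returns the total number of steps performed."""
    if n == 0:
        return 0
    steps = n * n               # n^2 steps at this level
    return steps + work(n - 1)  # plus all steps from smaller inputs

# The total step count equals the closed form n(n+1)(2n+1)/6, which is O(n^3).
n = 10
assert work(n) == n * (n + 1) * (2 * n + 1) // 6
```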
Take a real world example. The complexity of merge sort of n items is:
f = 2 * f(n/2) + n
because merge sort divides the n items into two halves of n/2 items each, merge sorts each half, and merges the results (which takes about n steps). Solving the recurrence for f gives:
f(n) = n log n (for n = 2^m; log is base 2)
which is an O(n log n) function. Therefore, we say that merge sort is an O(n log n) algorithm.
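A minimal merge sort showing where each term of the recurrence f(n) = 2 f(n/2) + n comes from:

```python
def merge_sort(items):
    """Sort a list: two recursive calls on halves (the 2*f(n/2) term),
    then a merge pass of about n steps (the +n term)."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # f(n/2)
    right = merge_sort(items[mid:])   # f(n/2)
    # Merge the two sorted halves in about n comparisons.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```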
However, if your algorithm's job is to compute the value of f, that is a completely different question. You can compute f directly from the definition:
f(n) = n^2 + f(n-1)
which clearly takes n steps (assuming computing n^2 takes only one step). Therefore, the complexity of computing f this way is O(n). If you do some algebra and replace f(n) by the closed form n(n+1)(2n+1)/6, it takes only one step to compute, so the complexity is O(1).
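The two ways of computing f, side by side (function names are my own):

```python
def f_recursive(n):
    """Compute f by the definition f(n) = n^2 + f(n-1): takes O(n) steps."""
    return 0 if n == 0 else n * n + f_recursive(n - 1)

def f_closed(n):
    """Compute f by the closed form n(n+1)(2n+1)/6: takes O(1) steps."""
    return n * (n + 1) * (2 * n + 1) // 6

# Both compute the same value; only the number of steps differs.
assert f_recursive(100) == f_closed(100)
```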