Post your code optimizations

DiGuru said:
Yes. But I'd rather type a bit more than have to guess about how my code might be executed.
Well, that's why I would avoid at all costs placing side effects into my operator overloads. One can, for example, define addition so that it modifies the values of both variables added! For example:
Code:
struct Int { int v; };  // wrapper needed: operators can't be overloaded for two built-in ints

int operator +(Int &a, Int &b)
{
  a.v++;
  b.v--;
  return a.v + b.v;
}
That's what I mean when I say it has to be done right: operator overloading can be done very, very badly indeed. But it is quite possible to overload all operators so that they do what the programmer would think is obvious, and prevent order of execution from leading to errors.

For example, when I was browsing around the net for increment/decrement overloads, I even noticed that one example used "void" as the return type: this would probably be the best thing to do, as it would prevent the use of the operator in an expression where its result is undefined.
 
Basic said:
I assume that you thought that if i starts as 2, then that expression [(i++) + (i++)] could be 2+3 or 3+2, both equal to 5.
No, definitely not. As I said before, I was taught that the post-increment was only evaluated after the complete statement. So the result would be 4 and after the statement i = 4. I verified that this is also Visual C++'s behaviour. I'm interested if anyone knows a compiler that behaves differently.

There's no practical situation I can think of where I would use code like this, but either way they really should fix this by making it well-defined. Any definition will do, but following C# would obviously make the most sense of all.
 
Basic said:
I'd say that it's an error-prone and bad coding style to use expressions with side effects together with the variables they change in the same expression. So if there's an error in the syntax, it's that it allows it at all.
That could add some quite unexpected limitations to a language. E.g. if you're working with pointers or references, for the expression
Code:
g(f(x), y)
you would need to declare f() to have no side effects, otherwise it could change the object y points to.
 
Nick:
Analog Devices VisualDSP++ (first one I tried). Or rather, the value of the expression is as if they had done one of the increments when the addition is done. The actual code is better optimized than that.

Xmas:
True.
But a big fat warning on the obvious cases would be nice. I.e., if the compiler can see the side effect when looking at the expression, then warn. If it's hidden in a function, then tough luck. I.e., read "side effects" as "++" in the same expression.
 
Xmas said:
That could add some quite unexpected limitations to a language. E.g. if you're working with pointers or references, for the expression
Code:
g(f(x), y)
you would need to declare f() to have no side effects, otherwise it could change the object y points to.

That's an optimization problem, hence things like global aliasing assumption options for compilers, which already exist. Of course, the order arguments are pushed to the stack will cause problems in the example you've given: args are pushed right to left in C (opposite to Pascal), while your example sort of assumes left-to-right evaluation of the function call's argument expressions. Sure, it's possible to do, but it would require more stack space to evaluate left to right and then push right to left.
 
This whole thing about side effects is why it's best to not use global variables, and only pass by pointer or by reference without a const specifier when absolutely necessary.

Furthermore, when you do have a function that alters its arguments, it's probably best to always define such a function as returning void, so that this problem about side effects doesn't creep up.
 
RussSchultz said:
It sounds like it isn't.

It's news to me, also.
HOLY CRAP! We just ran into this issue:

Somebody had put the following into their code
Code:
avg_ptr = avg_base = malloc(sizeof(buffer));
It worked on one platform, but not on another. Lots of headscratching by others until I noticed this line.

On our embedded compiler, it was evaluating malloc() first.
On Win32...it wasn't.
 
You mean the embedded compiler did this:
Code:
avg_ptr = avg_base;
avg_base = malloc(sizeof(buffer));
and Win32 did this:
Code:
avg_base = malloc(sizeof(buffer));
avg_ptr = avg_base;
?
 
Chalnoth said:
No, it's undefined behavior, completely up to the compiler's discretion.

If that were true, assignment of an assignment would never be useful.
Are you certain that the "=" operator is not defined to be right associative?
To be honest I've never had cause to check the standard for this, and I don't generally write x = y = z = 0; or similar.
 
Just looked it up: according to the C99 standard, assignments must be evaluated right to left, as I would expect.
So it's a compiler bug.
 
For certain, our embedded compiler is C99. I'm not sure what MsDevStudio is.

I've done further research, and it seems I've jumped the gun. MsDev worked right in a test environment. I'll have to dig closer.

/false alarm. :)
 
Humus said:
You mean the embedded compiler did this:
Code:
avg_ptr = avg_base;
avg_base = malloc(sizeof(buffer));
and Win32 did this:
Code:
avg_base = malloc(sizeof(buffer));
avg_ptr = avg_base;
?
ERP's right, that's a compiler bug. Assignment operators have right associativity (grouping or "stickiness", if you like). Again, it's not really the (run-time) evaluation order, it's just that the original expression means
Code:
avg_ptr = (avg_base = malloc(sizeof(buffer)));

//and not
(avg_ptr = avg_base) = malloc(sizeof(buffer));
(Note that the second one is only valid in C++, since assignment operators don't return lvalues in C.)
Even the second one above should be equivalent to
Code:
avg_ptr = avg_base;
avg_ptr = malloc(sizeof(buffer));

//or
_tmp = malloc(sizeof(buffer));
avg_ptr = avg_base;
avg_ptr = _tmp;

//but not
avg_ptr = avg_base;
avg_base = malloc(sizeof(buffer));
 
Chalnoth said:
I don't think that C99 is the standard C implementation.
Associativity of the assignment operator did not change from C89 to C99. "a = b = c;" has always been valid and well-defined C code.
 
Basic said:
See the expression as a tree. It's defined where the branches are located, but it's not defined in which order they are evaluated. (Other than, of course, that the subexpressions into an operator must be evaluated before that operator is. :))

(A) + (B) + (C) where A, B and C are subexpressions must be calculated as
((A) + (B)) + (C), but A, B and C can be calculated in any order.

Yes, this makes sense in the case where the subexpressions are from other statements, grouped with parentheses, or function calls, but the context of discussion was order of evaluation within a statement (x = i++ * i++, f(a,b,c), etc.)

if I have the following:

Code:
a op b op c op d op e op f op g op h

if the compiler were to evaluate a op b op c op d, and then e op f, and then combine the results, it could get a different result than if it performs the operations strictly left to right. It is not just side effects that are the problem.

Sure, if the author of the source code writes (a op b) op (c op d) op (e op f) each of those three groups can be evaluated in any order, but that wasn't my point. Hell, it isn't even just floating point where this is an issue. You can get overflow/underflow in integers too if you write an expression that you know deals with values whose sum or product could be greater than max integer, but which, if evaluated strictly, yields subtotals and subproducts that are always less than max int.

I am personally FOR letting the compiler decide as long as there is a process by which one can ensure semantically equivalent results when compiled with different options or on a different compiler.

I like a Haskell-style language like Concurrent Clean that is by default non-strict and lazy, but where the programmer can assert control when they desire and demand strict, immediate evaluation.
 
DemoCoder said:
You can get overflow/underflow in integers too if you write an expression that you know deals with values whose sum or product could be greater than max integer, but which, if evaluated strictly, yields subtotals and subproducts that are always less than max int.
As integers mod 2^N form a ring with operators + and *, you can do whatever you like with adds/subs/muls in any order. Divide, OTOH, is another matter.



(EDIT: had group but that only has 1 binary operator... thanks Maven)
 
Simon F said:
As integers mod 2^N form a group, you can do whatever you like with adds/subs/muls in any order. Divide, OTOH, is another matter.
They are a ring IIRC, so why bother only with groups if you are talking about a more concise algebraic element... ;)
 