How well does Visual Studio .NET 2003's compiler optimise?

K.I.L.E.R

Retarded moron
Veteran
One of my classes uses C. I asked the teacher if I could do my fancy stuff and use C++ instead, because I'm so much more productive with OOP.

Anyway, my question is: "Do I really have to do all those manual optimisations I used to do?"
I assume the compiler is smart enough to know which optimisations go where and when to apply them.

Should I panic and do something really stupid, or should I just happily code away?
Thanks.
 
AFAIK, VC++ 2003 is a very good optimising compiler, but like all compilers it's not perfect. You still have to tweak here and there, use inlines where appropriate, and write relevant code in ways that are clearly vectorisable to get the most out of it. However, for simple code I'm sure it'll optimise pretty well.
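As an illustration of "clearly vectorisable" (a minimal sketch, with hypothetical names): a plain indexed loop over arrays, with no function calls, branches, or early exits in the body, is the shape the optimiser handles best.

```cpp
#include <cstddef>

// A loop the optimiser can vectorise: plain indexed access, fixed stride,
// nothing in the body but arithmetic. The compiler can map this directly
// onto SIMD multiply/add instructions.
void scale_add(float* out, const float* a, const float* b,
               std::size_t n, float k)
{
    for (std::size_t i = 0; i < n; ++i)
        out[i] = a[i] * k + b[i];
}
```

The same loop with a virtual call or an early `return` in the body would usually defeat auto-vectorisation.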
 
If you're worried about your code, profile it. Profiling is more than simply checking for performance; it can tell you interesting things about your code, your coding practices, and your bugs.
 
Visual Studio 2003 has a pretty good C++ compiler. The Intel compiler is even better.

Is what you are doing really so performance critical that you really have to worry about performance?
 
N00b said:
Is what you are doing really so performance critical that you really have to worry about performance?
I'm wondering the same thing. Very few things at the uni required me to worry about optimizations.

epic
 
There are at most one or two spots in any program worth optimizing: things like sorting and tight loops that actually process the data. I optimized a single (stored SQL) procedure last week together with a co-worker (actually my manager), which improved a single run of my program from 20 minutes down to 30 seconds.

But that doesn't happen very often. Generally, I just don't care. The programs are fast enough as they are.
 
Done similar things myself. Reduced initialization of something I was working on from 10 minutes down to less than 30 seconds by improving the way the data was indexed.
 
Colourless said:
Done similar things myself. Reduced initialization of something I was working on from 10 minutes down to less than 30 seconds by improving the way the data was indexed.
Ditto - in an early version of the DC VQ compressor I found that 1 instruction was taking 5~10% of the time! There was some double indirection (i.e. ptr to a ptr) in the data structure and it thrashed the memory/cache system.
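The double-indirection problem described above can be sketched like this (hypothetical functions, not the actual DC VQ code): every access through a pointer-to-pointer costs an extra dependent load, while a flat contiguous layout keeps the walk sequential and cache friendly.

```cpp
#include <vector>
#include <cstddef>

// Double indirection: each row is a separately allocated block, so the
// inner loop chases rows[y] before it can touch the data. Scattered
// allocations can thrash the cache.
long sum_indirect(int** rows, std::size_t h, std::size_t w)
{
    long s = 0;
    for (std::size_t y = 0; y < h; ++y)
        for (std::size_t x = 0; x < w; ++x)
            s += rows[y][x];
    return s;
}

// Flat layout: one contiguous allocation; the index is computed instead
// of loaded, and the access pattern is purely sequential.
long sum_flat(const std::vector<int>& data, std::size_t h, std::size_t w)
{
    long s = 0;
    for (std::size_t y = 0; y < h; ++y)
        for (std::size_t x = 0; x < w; ++x)
            s += data[y * w + x];
    return s;
}
```

Both return the same result; only the memory behaviour differs, which is exactly the kind of thing a profiler (rather than inspection) tends to reveal.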
 
Some guidelines for optimization:

* Premature optimization is bad. Make it work, then make it fast.
* For most applications there is a "fast enough".
* CPUs are getting increasingly faster, so some performance problems will take care of themselves.
* A good algorithm beats a great implementation. Always.
* If you could squeeze out another 5% of performance but would have to sacrifice maintainability, think twice. In all but the rarest cases you will want to go for maintainability. Your co-workers will thank you.
* If you really need to make your application fast: Profile, profile, profile and then profile some more. Usually 80-90% of your cpu cycles are burnt in just 5-10% of the code. Concentrate on that 5-10%, forget about the rest.
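A minimal sketch of the "measure before you rewrite" idea, assuming modern `<chrono>` (a real profiler attributes time across the whole program automatically; this just times one suspected hot spot):

```cpp
#include <chrono>
#include <cstdio>

// Times a block of code; prints the elapsed time when it goes out of
// scope. Useful for confirming a suspected hot spot before and after
// a rewrite. (Sketch only; not a substitute for a real profiler.)
class BlockTimer {
public:
    explicit BlockTimer(const char* label)
        : label_(label), start_(std::chrono::steady_clock::now()) {}

    long long elapsed_us() const {
        return std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::steady_clock::now() - start_).count();
    }

    ~BlockTimer() {
        std::printf("%s: %lld us\n", label_, elapsed_us());
    }

private:
    const char* label_;
    std::chrono::steady_clock::time_point start_;
};
```

Usage: wrap the candidate 5-10% in a scope, e.g. `{ BlockTimer t("sort phase"); /* work */ }`, and compare the numbers before and after your change.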
 
Saem said:
The free lunch is over.
I guess you are referring to this article by Herb Sutter. A good read indeed. But even if Moore's Law slows down, we will still get CPUs with faster single-threaded performance. At least for a couple of years.
 
K.I.L.E.R said:
I believe the pressure will move not onto programmers but smarter compilers.
I think that the pressure will move mostly onto software architects, frameworks, and middleware rather than programmers. In that particular order. I have heard the "smarter compilers" phrase so often in the last 15 years, yet I have seen only incremental advancements (related to optimization and performance), nothing really spectacular.
 
Can you explain your point about architects and middleware in a bit more detail?


N00b said:
I think that the pressure will move mostly onto software architects, frameworks, and middleware rather than programmers. In that particular order. I have heard the "smarter compilers" phrase so often in the last 15 years, yet I have seen only incremental advancements (related to optimization and performance), nothing really spectacular.
 
OK. Right now most programs are single-threaded. Since the pace of Moore's Law is slowing down (i.e. single-threaded performance will no longer double every 18-24 months), other ways are needed to increase performance. Multi-core is one way to do this. But in order to take advantage of multi-core CPUs you need to change the architecture of your application from single-threaded to multi-threaded.
That means:
* Identifying the parts of your application that can run in parallel.
* Breaking up your application into tasks, where at least some of them can be processed in parallel.
* Building a scalable application architecture where each programmer can work on his task as if he were working on a single-threaded problem (best case, of course).
All this is usually the software architect's work.
Frameworks and middleware support you in that they either provide a finished (middleware) architecture (application servers, J2EE, COM+, etc.) or building blocks for various problems (synchronization, threading, message queuing) in an accessible and easy way.
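A miniature version of that idea, assuming modern `std::thread` (a 2003-era codebase would use Win32 threads or a middleware library instead): one data-parallel task is split so that each worker only ever sees its own slice and its own result slot, i.e. a plain single-threaded sub-problem with no locks and no shared mutable state.

```cpp
#include <thread>
#include <vector>
#include <numeric>
#include <cstddef>

// Splits a sum over `data` across `workers` threads. Each worker gets a
// contiguous slice and writes only to its own slot in `partial`, so no
// synchronization beyond join() is needed.
long parallel_sum(const std::vector<int>& data, unsigned workers)
{
    std::vector<long> partial(workers, 0);
    std::vector<std::thread> pool;
    std::size_t chunk = data.size() / workers;

    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end = (w + 1 == workers) ? data.size() : begin + chunk;
        pool.emplace_back([&, w, begin, end] {
            // The "architecture" in miniature: this lambda is an ordinary
            // single-threaded problem from the worker's point of view.
            partial[w] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0L);
        });
    }

    for (auto& t : pool) t.join();
    return std::accumulate(partial.begin(), partial.end(), 0L);
}
```

Identifying that the sum *can* be split this way, and making the split invisible to the per-slice code, is the architect's job the post describes.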


To summarize this a bit:
Humans *really* like to think single-threaded and cannot handle complexity very well. And most programmers are no different.
The larger the application, the less you want the average programmer to care about threading and synchronization issues.
So you need a good software architecture that hides complexity and lets the average programmer just concentrate on his task.
This is where the software architects come in.
Middleware and frameworks help create scalable, robust software architectures by providing a base architecture, building blocks, or solutions to certain problems.

I hope I haven't confused you.
 
It makes sense.
Thanks.

This doesn't really take into account long term technological progress.
What happens if we end up with photon electronics that progresses like Moore's law? If it becomes cheaper than multiple cores then wouldn't software just end up becoming single threaded again?
 
Die space has kept increasing, so even with photonic electronics I doubt it would shrink seriously. In the end we'd just end up with multi-cores powered by photonic electronics.
But let's pretend we did end up with single-core photonic CPUs. Then I think new software would maybe end up single-threaded again. As for old software, you know what they say: never change running software (or its architecture). ;)
 