Quake2 C->Managed C++ (.NET)

Hey guys,

I know this belongs elsewhere, but I think it would interest a lot of people, and it would probably end up getting missed.

I'm an absolute convert to C#/.NET, having now worked within the framework on a daily basis for the last year. One of my favorite sites, Code Project, sent out an email containing a link to a new article concerning transforming the Quake2 source code into managed C++ (if you're not up on any of this, it's basically .NET's version of C++).

The article talks about what it took to make the project work, some of the compromises, and, more interestingly, the performance penalty.

In the end, it looks like the managed code is able to run at around 85% of the speed of the native code. Bear in mind that the developers completely ripped out all of the assembly from the source code. That's something that can still be done in managed code...

The other kool thing these guys did was tack on a new feature to Quake2...and all of it was done in managed code.

Anyhow, here's the link. I would like to see what others think about this. I'm going to definitely download it when I get home tonight.

http://www.codeproject.com/managedcpp/Quake2.asp
 
I saw this early this morning on MSDN http://msdn.microsoft.com/

I'm not too strong in C++, but it should be fun to play with.

I find it pretty interesting that the performance penalty is only about 15%. I would have guessed larger, especially since, if memory serves me correctly, Id had someone dedicated to hand-optimizing assembly in order to get about the same amount of performance gain.
 
The 15% performance penalty is indeed impressive, since Quake II ought to be CPU bound on modern systems.

If Quake 2 had been written in C++ to start with instead of C this project would have been much more difficult.
 
Note that the Quake2 code is NOT running in managed mode. It's just compiled so it can interface to managed .NET stuff. Most of the work is done the same way as before. So it's not really that surprising if the performance hit is on the same order as the assembly optimizations that were torn out.

This is not an indication of how a program written for managed mode will run (other than "still no proof that it would suck horribly").

Still an interesting experiment.
 
Yeah, I'd be more interested if the code was rewritten to use garbage collection and compiled to CLI code. Compiling to native isn't really demonstrating anything.
 
Not to mention, how many years has it been since Quake 2 was compiled and how much have compilers progressed since then?

They might even end up taking advantage of SSE and what not.

In my eyes, 15% is insanely huge when you consider those two factors.
 
Compiler technology hasn't changed that much. Compiling with SSE might help a little, but C++.NET can also compile to SSE (as can the new Java 1.4.2).

The 15% loss is most likely due to indirect dispatch overhead and other housekeeping that the managed runtime has to do, minus whatever part of it comes from recoding some of the assembly-language stuff into C.
 
I read, then reread, the article, and once I had the code on my machine I realized what the deal was.

I think the author did make some statements that were somewhat misleading. Don't get me wrong, I still think it's pretty kool that they did this work and all. But they should have worded things a little differently.

The one thing I do think is worthwhile about this is that what the guy did is not too far off the beaten path from how I might approach such a project. That is to say, design the core engine (if necessary) around C/C++ for speed...and once that's working, design everything else around managed code.
 
For those of us lost in the embedded world, having somewhat let the state-of-the-art development platforms pass us by, what exactly is meant by "managed code"?
 
AFAIK, managed code is where the programmer is not responsible for deallocation of memory.

It might be a .NET term, in which case this might involve the CLI, which is basically a bytecode that is then compiled into native code for the platform it's being run on.
 
Managed Code has several features:

o compiled to intermediate representation (think Java byte codes), therefore it runs on multiple platforms (e.g. you could execute the same code on a PocketPC). It can then be JITed at runtime to native, or pre-compiled to native
o memory management is handled by garbage collection
o null pointer dereferences and array bounds exceptions are checked and caught immediately (e.g. can't do damage)
o Code access security - can't access objects or memory areas that you aren't allowed to
 
In order of me likey:
I like the bytecode thing.
I like the JIT thing
I like the checking/security things
Memory management and garbage collection can suck my sweaty white nards.

Of course, I'm mostly speaking from an embedded viewpoint. Or a non-bloated web experience viewpoint. But there it is.
 
They really wrote the article to make it sound like the whole thing was running Managed. I guess that was just propaganda.

I thought it would take a lot of work to get any non-trivial C program running in Managed C++ (complete rewrite). From the article they made it sound as though the Quake codebase was written in such a highly disciplined way that they didn't have to do this.
 