XNA performance (CPU).

It's also about type safety and code safety, access control and a bunch of other things.

If you don't want things like buffer overflows, stack corruption, let alone things like wiping out vpointer tables, memory fragmentation, bad pointer casting, etc., you have to abstract memory access. That's what a GC does. You don't have direct access to the memory, and because of that, you can't directly allocate/deallocate memory.

It may sound terrible and slow, but it's actually damn fast when you use it right, and saves you a *lot* of time. The problem is the usage patterns are quite different.

It also lets code move with architecture changes, as with JIT-compiled languages in general. Hence the same code works perfectly well on a PPC, a PC, a cellphone, x64, whatever. E.g. no need to worry about 64-bit pointers.
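To make the "different usage patterns" point concrete, here's a minimal C# sketch (the particle type, counts and names are made up for illustration, not from this thread): allocate up front and reuse the same storage every frame, so the per-frame path never hands the GC anything new to track.

Code:
using System.Collections.Generic;

struct Particle            // a struct: stored inline in the list, no per-item GC object
{
    public float X, Y, VelX, VelY;
}

class ParticleSystem
{
    // Allocated once, up front, with enough capacity for the worst case.
    private readonly List<Particle> particles = new List<Particle>(10000);

    public void Update(float dt)
    {
        // No 'new' in the per-frame path: the backing array is reused, so the
        // garbage collector is handed nothing new to track each frame.
        for (int i = 0; i < particles.Count; i++)
        {
            Particle p = particles[i];
            p.X += p.VelX * dt;
            p.Y += p.VelY * dt;
            particles[i] = p;   // structs copy by value, so write the update back
        }
    }
}

The same logic written with a fresh list and fresh objects every Update would work too - it just keeps the collector busy for no benefit.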

[edit]
the 0.1% was for the process, not the pc.
 
But what is the actual benefit of having a separate program root through memory and potentially cause huge performance loss?

You might say it's only 0.1%, but that's on top of all the other crap already running on a PC. Many little things have a way of adding up into large things...

Peace.

You get to not spend weeks of your life looking for other people's NULL pointer derefs, random memory corruption and writes to freed memory.

GC can be done extremely well. I'd swap a large portion of non-critical game code over to it tomorrow if I could come up with a mechanism in C++ that

a. works with the language for a subset of allocated memory.
b. doesn't randomly leak (i.e. like the existing heap walkers).
c. doesn't really suck.

The cost of the GC overhead would be negligible compared to the savings from optimisations in the performance critical code I could do because I wasn't chasing memory corruption issues.

I've been mulling some mechanisms over this last project, but the difficulty in C++ is identifying the top-level references (the roots) to trace from.
 
You get to not spend weeks of your life looking for other people's NULL pointer derefs, random memory corruption and writes to freed memory.

Five dollars says you're getting the "damned lazy developers, why can't they just produce per-platform optimized code in time for yesterday's deadline" chant within 5 replies.
 

Which is exactly why I feel we will see A-level managed titles coming out sooner than most people expect.
With slow optical drives, if anything one of the hardest problems is filling up that entire 512MB of memory. So using a language that isn't quite so potentially memory efficient, yet saves you bucket loads of time developing? Tough choice.

:p

It's not game related, but I think it helps illustrate ERP's point: this is a post I made on thedailywtf a while ago, about a bug I had to deal with in some C++ code we paid for.
 
Graham said:
if anything one of the hardest problems is filling up that entire 512MB of memory.
:LOL:

In the very early days of PS2 development, we let artists experiment without limitations and we ended up with a level or two that required 128MB to run. Using up memory is never particularly hard (and seeing those made me really wish for at least a 64MB PS2 at the time).

So using a language that isn't quite so potentially memory efficient, yet saves you bucket loads of time developing?
I've got mixed feelings about that - on one hand I completely agree with ERP (and it's why we're pushing our scripting language development hard, even on lower-end platforms - but that's still restricted to certain areas of the codebase only, and our garbage collection is handled in a simple and predictable manner).
On the other hand - I'm always reminded of stories from certain PC-centric codebases that either got ported to consoles or are used as console middleware. If C++ codebases already come with scary things like half a million allocations just at application start (picture what that means for a 32MB machine), or tens of thousands of dynamic reallocations PER frame (this one is from cutting-edge tech right now, so memory is less of an issue, but still), I have this fear of what would happen if you converted an entire codebase to a VM language and let the people responsible for the above run amok with it...
 
I know from experience that it's easy to get excited about these new languages when you're working on these kinds of "toy" programs or proof-of-concept demos.

They just never seem to scale well to a larger fully functional system. You start out with "oh this is within x% of my performance goal. Not bad. I can live with this". But then as the code base gets bigger and bigger, the performance problems just start getting magnified bigger and bigger in a worse than linear fashion.

I think we are still far away from the day lack of developer man hours is holding us back more than the hardware itself. And until then, whatever extra time is needed to describe a system in a lower level language is time well spent.

It is still good people are at least doing research in this area though. Because it will be needed sometime in the distant future.
 
It is still good people are at least doing research in this area though. Because it will be needed sometime in the distant future.
IMO, better languages are needed NOW, not just in the distant future. Also, decent language level support for parallelism and asymmetric cores, please.
 
I think we are still far away from the day lack of developer man hours is holding us back more than the hardware itself.

This I cannot agree with.

IMHO developer productivity is what is holding back the game during its preproduction and implementation, the period where the game grows in scope, finds new things to do, new gameplay elements; hardware is holding it back during the wrap-up optimization phase where the game stays the same in scope but improves in performance.

What do you prefer: more polished, 60 fps, more-of-the-same games, or more "innovative" (forgive me for the beaten-to-death word), larger-in-scope, less-than-30 fps games? Given the overwhelming enthusiasm for the Wii, I think trading off performance to gain flexibility, more design iterations and experimentation is the wise choice. And that means less bare-to-the-metal C++ in the game code and more dynamic languages.
 

I understand the concern, but I feel this is only a problem for those who do not understand the situations where a GC can be a performance problem. If you are boxing 20,000 structs a frame, then yes, you will see a hit (I've accidentally done this; fixing it gave a ~15% perf boost). That said, it's just as easy to kill performance with unmanaged memory allocation - if you don't understand its pitfalls.
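For illustration, a minimal sketch of the accidental-boxing trap being described (the struct and collection names here are hypothetical, not from the post): pushing value types through a non-generic collection copies each one into a fresh heap object, and that is exactly the kind of per-frame garbage that hurts.

Code:
using System.Collections;           // non-generic ArrayList
using System.Collections.Generic;   // generic List<T>

struct Enemy { public float X, Y, Health; }

class BoxingExample
{
    // Boxed: ArrayList stores object references, so every Add() copies the
    // struct into a brand new heap allocation for the GC to collect later.
    ArrayList boxed = new ArrayList();

    // Not boxed: List<Enemy> keeps the structs inline in its backing array.
    List<Enemy> unboxed = new List<Enemy>();

    public void Spawn(Enemy e)
    {
        boxed.Add(e);     // hidden heap allocation per call - 20,000 of these a frame adds up
        unboxed.Add(e);   // no per-item allocation (beyond occasional array growth)
    }
}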

problems just start getting magnified bigger and bigger in a worse than linear fashion.

This is only sort of true for the Compact Framework, which by choice is not designed for such projects.
The CPU usage of its GC is roughly linear in the number of objects allocated, so you are looking at a limit of around 100,000 allocated objects for smooth operation. The generational GC in the full framework doesn't behave linearly like that. It scales very well, and there are more than enough examples of gigantic projects out there that use it, e.g. any aspx site, like MSDN.
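As a rough way to check which collector you're actually exercising, the full framework exposes per-generation collection counts via GC.CollectionCount (not available on the Compact Framework); a minimal sketch, with the frame-loop naming purely illustrative:

Code:
using System;

class GcMonitor
{
    private int lastGen0, lastGen2;

    // Call once per frame to see whether this frame triggered a collection.
    public void EndOfFrame()
    {
        int gen0 = GC.CollectionCount(0);   // cheap, frequent collections
        int gen2 = GC.CollectionCount(2);   // full collections - the expensive ones

        if (gen2 > lastGen2)
            Console.WriteLine("Gen 2 (full) collection this frame - look for long-lived garbage");
        else if (gen0 > lastGen0)
            Console.WriteLine("Gen 0 collection this frame - short-lived allocations only");

        lastGen0 = gen0;
        lastGen2 = gen2;
    }
}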
 