Limiting factors in crowd rendering

stewacide

I'm having a bit of an argument on another forum I hope the pros here can settle:

I'm of the opinion that the number of zombies in a scene in Dead Rising (whether 360 or Wii) is limited by performance considerations. That is, more zombies could easily be dropped into a scene, but aren't because of what it would do to the frame-rate.

He's of the opinion that the number of zombies is a hard limit in memory. The implication being, if the Wii (or 360) had more memory - everything else being equal - the developers would be able to put more zombies on screen.

Who's right?
 
It's a bit of both. Of course more memory could (and maybe would) help. But memory alone doesn't do anything other than store stuff. After that you still need the horsepower to render the zombies.
 
The cutscenes will be recordings of the 360 version. The actual Wii gameplay looks like the last screens shown above.
 
You need more polygons for more zombies. Wii just doesn't have the polygonal power of the 360. That's why I'd rather they do Lost Planet on Wii instead. At least that game wasn't about pushing as many enemies on screen as Dead Rising.
 
Originally asked this in the Dead Rising Wii thread - http://forum.beyond3d.com/showthread.php?t=49055 - but nobody seems to be reading it ;) ...

Expanded:

More specifically, I think it's the load of all those polygons on the GPU and all that physics on the CPU which limits the number of zombies. Memory's not a constraint because they're just recycling the same assets for each cookie-cutter zombie: the marginal memory 'cost' of each additional zombie is negligible compared to the processing load.

To quote him: "Graphically, the limiting factor for drawing zombies will be the memory, not the fillrate or any sort of processing power. The zombies will be most definitely instanced and therefore be solely limited on the size of gpu-side memory. As many models/textures they can fit on the gpu will be the amount of types of zombies on screen"

...I'd reject what he said out-of-hand as nonsensical except he claims to go to game developer school or something (I'm just a nerd :LOL:)
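
To put some back-of-envelope numbers on the instancing argument above (every size below is made up purely for illustration, in C++): the shared zombie assets cost a few megabytes exactly once, while each extra zombie only adds a few dozen bytes of its own state.

Code:
#include <cstdio>
#include <cstddef>

// Shared, loaded once: a ~10k-vertex skinned mesh, two 512x512 RGBA
// textures and some animation data. All sizes are illustrative guesses.
constexpr std::size_t kSharedAssetBytes =
      10'000 * 32         // vertices: position/normal/UV/bone weights
    + 2 * 512 * 512 * 4   // textures
    + 1'000'000;          // animation clips

// Paid per zombie: just its own state.
struct ZombieInstance {
    float position[3];
    float heading;
    int   animClip;
    float animTime;
    int   health;
};

int main() {
    const int counts[] = {100, 500, 2000};
    for (int count : counts) {
        std::size_t total = kSharedAssetBytes + count * sizeof(ZombieInstance);
        std::printf("%4d zombies: %.2f MB total, of which %.1f KB is per-instance\n",
                    count, total / 1e6, count * sizeof(ZombieInstance) / 1e3);
    }
}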
 
You are right - as long as the zombies are cookie-cutter material, memory won't be a problem.

Depending on your particular zombie situation, the bottleneck will be in one of the following:

- vertex processing on the GPU
- skinning on the CPU (on the Wii, which has a very primitive GPU skinning solution and is likely to use the CPU for skinning)
- skeletal animation calculation on the CPU
- drawcall-related costs on the CPU
- zombie AI/behavior - if you do something stupid, that is, make the AI too clever (see the schematic frame loop after this list for where each of these costs lands)
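
To make that list concrete, here is a minimal, stubbed-out sketch of a per-frame crowd loop in C++. All of the function names are placeholders of my own, not anything from Dead Rising; the point is simply that every item above is work repeated per zombie per frame, while the shared assets are paid for only once.

Code:
#include <vector>

struct Zombie { /* transform, animation state, AI state ... */ };

void updateAI(Zombie&) {}             // pathing, target selection
void updateAnimation(Zombie&) {}      // advance/blend clips, build bone matrices
void skinOnCPU(Zombie&) {}            // only needed where the GPU can't skin (e.g. Wii-class hardware)
void submitDrawCall(const Zombie&) {} // per-draw-call CPU overhead

void simulateAndRenderFrame(std::vector<Zombie>& zombies) {
    for (Zombie& z : zombies) {   // each pass below is O(number of zombies)
        updateAI(z);
        updateAnimation(z);
        skinOnCPU(z);
        submitDrawCall(z);        // the GPU then pays the vertex/fill cost per zombie
    }
}

int main() {
    std::vector<Zombie> crowd(800);   // arbitrary crowd size
    simulateAndRenderFrame(crowd);
}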
 
Seconding everyone else here: instanced zombies should have very little impact on memory, but you still have to draw what's on screen - there's no dodging that part. So you are mostly right: this is an issue of performance, not of memory.
 
I heard there was a PS2 game with like 65,000 enemies on screen. It's kind of like Dynasty Warriors, but the models are bare bones of course. They're a bunch of little bug creatures or something. I'll go find a youtube video.
 
Isn't this rather self-explanatory?

You have to draw everything on screen; the more enemies, the more resources are needed.

The bottlenecks from adding more enemies can come from lots of different places, not just memory: more enemies can mean more CPU power to calculate their AI and physics, more shader calculations, and so on.
 
Did anyone mention (skeletal) animation yet ?
On a number of games I worked on, it was a/the limiting factor.
 
It is probably billboarding. With that number of enemies there's only a limited, discrete set of different, distinguishable poses they can have. And it boils down to what Shifty Geezer said before - 2D sprites.
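
For what it's worth, an imposter/billboard crowd usually boils down to something like the following C++ sketch: quantise the animation phase and the viewing angle, then index into a pre-rendered sprite atlas. The step counts and names here are illustrative assumptions, not taken from that PS2 game.

Code:
#include <cmath>
#include <cstdio>

constexpr int   kPoseSteps  = 16;   // distinct animation poses baked out as sprites
constexpr int   kAngleSteps = 8;    // distinct viewing directions per pose
constexpr float kTwoPi      = 6.2831853f;

// Pick the sprite cell for one enemy given its animation phase (0..1)
// and the camera-relative viewing angle in radians.
int spriteIndex(float animPhase01, float viewAngleRad) {
    int pose = int(animPhase01 * kPoseSteps) % kPoseSteps;
    float a  = std::fmod(viewAngleRad, kTwoPi);
    if (a < 0.0f) a += kTwoPi;
    int angle = int(a / kTwoPi * kAngleSteps) % kAngleSteps;
    return angle * kPoseSteps + pose;   // cell in a kAngleSteps x kPoseSteps atlas
}

int main() {
    std::printf("sprite cell: %d\n", spriteIndex(0.37f, 1.2f));
}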
 
Did anyone mention (skeletal) animation yet ?
On a number of games I worked on, it was a/the limiting factor.

You can cheat a bit on that, like having ~50000 critters share ~100 or so animations. You animate, say, 100 animations on the CPU. Then you skin those 100 animations onto 100 different sets of vertex data, again on the CPU. Finally, when drawing all ~50000 critters, each batch of 500 is drawn using one of the already-skinned sets of vertices. All you have to do before drawing each one is set up a register with that critter's translation and orientation, then draw. GPU work is minimized that way as well, since no GPU-side skinning is needed. It's hard to spot the repeats in a chaotic scene, but repetition will be visible in a more sedate scene.
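
A minimal sketch of that pose-caching trick, using the same illustrative numbers (100 shared poses, ~50,000 critters). The skinning and draw calls are stubbed out, because the point is the shape of the work: animate and skin on the order of 100 things per frame, then draw 50,000 critters that each only need a cached pose index and their own transform.

Code:
#include <array>
#include <cstdio>
#include <vector>

constexpr int kPoses    = 100;      // shared animation states
constexpr int kCritters = 50'000;   // crowd size
constexpr int kVerts    = 500;      // vertices per critter mesh (made up)

struct Vec3 { float x, y, z; };
using SkinnedMesh = std::vector<Vec3>;   // one pre-skinned vertex set

struct Critter { int pose; Vec3 position; float heading; };

// CPU skinning stub: a real engine would apply the bone matrices of
// pose p to the bind-pose mesh here.
SkinnedMesh skinPose(int p) { return SkinnedMesh(kVerts, Vec3{float(p), 0.0f, 0.0f}); }

// Draw stub: a real renderer would load a constant register with the
// critter's translation/orientation, then issue the draw.
void drawCritter(const SkinnedMesh&, const Critter&) {}

int main() {
    // 1. Animate + skin the shared poses once per frame: O(kPoses), not O(kCritters).
    //    (This cache is also where the extra memory cost of the trick goes.)
    std::array<SkinnedMesh, kPoses> poseCache;
    for (int p = 0; p < kPoses; ++p) poseCache[p] = skinPose(p);

    // 2. Every critter just references one cached pose plus its own transform.
    std::vector<Critter> crowd(kCritters);
    for (int i = 0; i < kCritters; ++i) {
        crowd[i].pose     = i % kPoses;
        crowd[i].position = Vec3{float(i % 200), 0.0f, float(i / 200)};
        crowd[i].heading  = 0.0f;
    }

    // 3. Drawing is cheap per critter: pick the cached vertices, set the
    //    transform, submit. No per-critter animation or skinning.
    for (const Critter& c : crowd) drawCritter(poseCache[c.pose], c);

    std::printf("skinned %d poses, drew %d critters\n", kPoses, kCritters);
}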
 
Can we call this definitively then: so long as you're re-using assets you'll run out of cycles long before memory becomes a factor?

A good example came to me after I posted this: Elebits. In the creator mode the number of individual objects you can add seems pretty well unlimited, although the frame-rate degrades progressively as you add more and more objects (you'll be counting seconds-per-frame before the hard-coded object limit kicks in ;) ). The game did, of course, limit the number of object types (i.e. assets) you could have loaded at once due to memory constraints.
 
You can cheat a bit on that, like having ~50000 critters share ~100 or so animations. You animate, say, 100 animations on the CPU. Then you skin those 100 animations onto 100 different sets of vertex data, again on the CPU. Finally, when drawing all ~50000 critters, each batch of 500 is drawn using one of the already-skinned sets of vertices. All you have to do before drawing each one is set up a register with that critter's translation and orientation, then draw. GPU work is minimized that way as well, since no GPU-side skinning is needed. It's hard to spot the repeats in a chaotic scene, but repetition will be visible in a more sedate scene.

I didn't try CPU skinning; the poor CPU was already dying computing the animation states.
Your method does increase memory use quite a bit, though.

Animation states sharing is the first solution you come by when you are working on a crowd and animating it is a limiting factor. ;)
 
Wow.
That snippet makes a great argument for why we need more GPU and CPU processing power - imagine all those critters individually drawn and directed by AI! For the paltry price of 100-1000 times or so of computing power we would get a vastly superior gaming experience, and would push the boundaries of realism far beyond the meager abilities of last gen tech. Hallelujah! The innovations made possible by the advances in graphics technology never cease to amaze and astound!

:)

Sorry, couldn't help myself, I'm on vacation and thus not fully accountable. It's difficult not to be affected by a more relaxed perspective on the everyday proceedings. It'll be over soon.
 
For the paltry price of 100-1000 times or so of computing power we would get a vastly superior gaming experience, and would push the boundaries of realism far beyond the meager abilities of last gen tech.

Ummm... no. You won't notice the difference.
 