RSX: Vertex input limited?

What does it do?

You can find some info here:

[image: kaigai_8.jpg]
 
Is the background OS expected to provide support functions then, such as file access? Does that mean some of that reserved OS footprint is being used for game operations that would otherwise need to be coded if it weren't present? And is that how the XB360's OS operates too, providing a backbone of functions?
 
Question for the devs: instead of a comparison of the GPU architectures of the two systems...

What would have been ideal in each system? What could or should have been done additionally in hardware that would have made coding a whole lot easier without incurring too much extra cost for the manufacturer, or been a more reasonable tradeoff versus the featureset currently included?
 
I should also add that those were figures for one particular project. The other one has proof-of-concept levels that are enormous, with really huge draw distances and very few occluded polys (lots of wide-open area), loads of full-screen effects, and so on... and we stick around 40 fps on that one -- 40 on the 360, that is. The PC is lucky to get 30, and there's no PS3 build yet.

As a point of reference, what GPU is that using? And is it an issue of capability or optimisation?
 
As a point of reference, what GPU is that using? And is it an issue of capability or optimisation?
Either GeForce 7800 cards or ATI X1900s... doesn't make much difference because we're mainly CPU-limited (occasionally bandwidth-limited) on the PC for that particular project. Animation and physics sims are among the big hogs of CPU time in these gigantic levels, and no, we don't use any middleware.
 
Either GeForce 7800 cards or ATI X1900s... doesn't make much difference because we're mainly CPU-limited (occasionally bandwidth-limited) on the PC for that particular project. Animation and physics sims are among the big hogs of CPU time in these gigantic levels, and no, we don't use any middleware.

I guess the question then becomes: which CPU? And if dual core, are you using both cores, either partially or fully?

Also, how do you think a quad core would affect things? Is it even possible to leverage 4 x86 cores for that kind of project?
 
I guess the question then becomes: which CPU? And if dual core, are you using both cores, either partially or fully?

Also, how do you think a quad core would affect things? Is it even possible to leverage 4 x86 cores for that kind of project?

The reason you end up CPU-limited on the PC but not on console has little to do with the raw speed of the CPU; it's largely down to API overhead.
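To put some illustrative numbers on that point, here's a toy model of per-draw-call overhead. The costs are assumptions picked for illustration, not measurements from any real driver:

```cpp
#include <cstdio>

// Hypothetical per-call CPU costs in microseconds. These are assumptions for
// illustration: a PC API draw call pays user/kernel transitions and driver
// validation, while a console draw just writes a few words into a command buffer.
constexpr double kPcCostPerDrawUs      = 20.0;
constexpr double kConsoleCostPerDrawUs = 0.5;

double cpuMsForDraws(int drawCalls, double costPerDrawUs) {
    return drawCalls * costPerDrawUs / 1000.0;  // microseconds -> milliseconds
}

int main() {
    const int drawCalls = 2000;  // a busy scene for the era
    std::printf("PC:      %.1f ms of CPU spent issuing draws\n",
                cpuMsForDraws(drawCalls, kPcCostPerDrawUs));
    std::printf("Console: %.1f ms of CPU spent issuing draws\n",
                cpuMsForDraws(drawCalls, kConsoleCostPerDrawUs));
    return 0;
}
```

With assumed costs like these, a 2,000-draw scene eats ~40 ms of CPU on the PC path before any game code runs, versus ~1 ms on the console path -- which is the shape of the problem being described.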
 
I guess the question then becomes: which CPU? And if dual core, are you using both cores, either partially or fully?

Also, how do you think a quad core would affect things? Is it even possible to leverage 4 x86 cores for that kind of project?
I figure it's worth echoing what ERP said before anything else, but to answer your questions: it again seems to be the case with any CPU. We're not really multithreading anything on the PC, and even the extent of the work we have in that respect on the 360 is pretty localized to things like audio, physics, and AI. So on the PC we're in hell whether it's a dual-core A64 or a single-core P4. Bear in mind that neither of these projects is likely to be out until mid-2008. In a funny way, in spite of having a very incomplete codebase for the PS3, there are already enough SPE-directed tasks to give it something of an edge over the state of the code on other platforms -- even though the only SPE code we have at the moment covers the really obvious candidates.

Assuming we can ultimately get around to making more cores fly on the 360 and PS3, and we follow in kind on the PC, you could theoretically see a benefit across 4 or 6 or 8 cores, but I can't say whether it'll be terrific or horrific... We'll cross that bridge when we come to it.

There are definitely cases where raw CPU power, just for our end of the code, is a problem on the PC but not on the consoles, but they come down mainly to just throwing a lot of activity into a scene.
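As a rough illustration of the coarse task-level split described above (one system per thread, the "really obvious candidates"), here's a minimal C++ sketch; the update functions are hypothetical stand-ins, not anything from an actual codebase:

```cpp
#include <thread>
#include <vector>
#include <cstdio>

// Hypothetical stand-ins for the per-system work mentioned above.
void updatePhysics() { std::puts("physics step"); }
void updateAudio()   { std::puts("audio step"); }
void updateAI()      { std::puts("ai step"); }

int main() {
    // One coarse task per core: the "really obvious candidates" approach.
    std::vector<std::thread> workers;
    workers.emplace_back(updatePhysics);
    workers.emplace_back(updateAudio);
    workers.emplace_back(updateAI);
    for (auto& t : workers) t.join();  // frame barrier: wait for all systems
    return 0;
}
```

Scaling past a handful of coarse tasks to 4/6/8 cores means breaking each system into finer-grained jobs, which is where the real engineering cost sits.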
 
Does anything change to a significant degree with Vista, SMM?
Can't say I've bothered to try or even look up anybody's examples. The only DX10 tests I've even seen were all running on the reference rasterizer, so it would have taken a miracle for anything to run at better than 5 fps.

If I were to guess whether the API overhead will decrease enough to make a big difference, I'd have to say... I'm not holding my breath. I'm not capable of expecting things to go any direction but downhill.
 
I figure it's worth echoing what ERP said before anything else, but to answer your questions: it again seems to be the case with any CPU. We're not really multithreading anything on the PC, and even the extent of the work we have in that respect on the 360 is pretty localized to things like audio, physics, and AI. So on the PC we're in hell whether it's a dual-core A64 or a single-core P4. Bear in mind that neither of these projects is likely to be out until mid-2008. In a funny way, in spite of having a very incomplete codebase for the PS3, there are already enough SPE-directed tasks to give it something of an edge over the state of the code on other platforms -- even though the only SPE code we have at the moment covers the really obvious candidates.

Assuming we can ultimately get around to making more cores fly on the 360 and PS3, and we follow in kind on the PC, you could theoretically see a benefit across 4 or 6 or 8 cores, but I can't say whether it'll be terrific or horrific... We'll cross that bridge when we come to it.

There are definitely cases where raw CPU power, just for our end of the code, is a problem on the PC but not on the consoles, but they come down mainly to just throwing a lot of activity into a scene.

Thanks for the responses. So the bottom line really is that regardless of how much raw power the CPU has, you can't really get to it on a PC, while on a console it's fully accessible?
 
Does anything change to a significant degree with Vista, SMM?

Theoretically yes, the Vista driver model has less overhead. However, PC drivers will still be black boxes, with the driver writers sticking in God knows what workarounds for software that does stupid stuff, so who knows how much of that potential improvement we'll actually see.
 
To make sure I understand you correctly: does that mean a ~28.1MB backbuffer (720p, 4xMSAA, 32-bit colour and Z) is then resolved to a ~3.5MB frontbuffer (or smaller for 24-bit?) that is used as the framebuffer displayed on screen?

How much memory would 720p with 4xMSAA and Z take in terms of backbuffer and frontbuffer on Xenos? I imagine this would need to be heavily tiled?
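For what it's worth, the arithmetic behind those figures can be checked directly. The 32-bit colour + 32-bit Z layout is an assumption here; actual formats vary per title:

```cpp
#include <cstdio>
#include <cmath>

int main() {
    const double w = 1280, h = 720, samples = 4;  // 720p, 4xMSAA
    const double colourBytes = 4, zBytes = 4;     // per sample, assumed formats
    const double mb = 1024.0 * 1024.0;

    double backbuffer  = w * h * samples * (colourBytes + zBytes);
    double frontbuffer = w * h * colourBytes;     // resolved to 1 sample/pixel
    std::printf("backbuffer:  %.1f MB\n", backbuffer / mb);   // ~28.1 MB
    std::printf("frontbuffer: %.1f MB\n", frontbuffer / mb);  // ~3.5 MB

    // Xenos renders into 10 MB of eDRAM, so colour+Z at 720p/4xMSAA has to be
    // split across ceil(28.1 / 10) screen tiles (predicated tiling).
    double edram = 10.0 * mb;
    int tiles = static_cast<int>(std::ceil(backbuffer / edram));
    std::printf("Xenos tiles: %d\n", tiles);      // 3
    return 0;
}
```

This reproduces the ~28.1MB backbuffer and ~3.5MB resolved frontbuffer from the question above, and confirms that colour+Z at 720p/4xMSAA would have to be split into three screen tiles on Xenos.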
 
What is the actual status of RSX's vertex shaders, given the great games out today (Ratchet & Clank's developers reportedly use more of RSX than anyone else) and devs now having much more knowledge of this hardware?
 
It's a resurrected thread, but it's an interesting one anyway... Personally, I would like to know if there are any devs who have learnt more or changed their minds since this discussion.
 
Good god, of all the threads to resurrect! I know people keep bringing up Ratchet and Clank. Believe me, I love my PS3, and I'm a *huge* Blu-ray movie fan. The Ratchet demo was good fun, and the Groovitron is one of the greatest game weapons ever invented.

But... R&C, to me, looks like a "relatively" average-poly game. There are other things they do to make the game look very pretty, but a high vertex count, in my opinion, is not one of them, nor does it have to be for that type of game. I'd wager that just one of our baseball players has more vertices than all of their on-screen characters at a given time combined, based on what I see.
 
But... R&C, to me, looks like a "relatively" average-poly game. There are other things they do to make the game look very pretty, but a high vertex count, in my opinion, is not one of them, nor does it have to be for that type of game.
Yeah, I get the same impression: because of the typical distance between the camera and the characters, which save for a few exceptions aren't that huge, the game gives the impression of being high-poly since the on-screen spatial density of verts is high.
 
Are the cut scenes real-time? If so, the characters and models look very high-poly in them.
 
Yeah, I get the same impression: because of the typical distance between the camera and the characters, which save for a few exceptions aren't that huge, the game gives the impression of being high-poly since the on-screen spatial density of verts is high.
Don't want to be anal... but what's the difference between giving the impression of being high-poly and being high-poly for real? It seems like the latter is considered smarter/cooler... when it should be the other way around.
R&C characters look very detailed to me, though their design certainly helps.
 