1280x720/60fps - How many polygons are "enough"?

expletive

I had posted an earlier question regarding the fill rate and it was concluded that both consoles had plenty of fill rate.

What I am wondering is whether there's a similar number for polygons, where you've reached a technical or perceptual threshold at which 'enough is enough'.

How many polys can each console produce without any CPU help? Assuming 60fps, are those numbers enough?

If you bring the two CPUs, and the bandwidth to/from them, into the equation, how many can each system theoretically produce? How many can they reasonably produce (i.e. no one would use all 3 Xenon cores for triangles)?

So at the end of the day, does either console have a polygon advantage measured against how many are 'enough' (that is, if there's ever enough :) )?

J
 
Considering how games have different offscreen passes with geometry, I'm not sure this question has a very straightforward answer. If every game used the same algorithms, a single number might be attained.
 
From what I'm hearing these GPUs (or Xenos at least) will be setup limited to 500 million vertices/second, so that's as good as it gets no matter how much CPU you throw into the equation.

About ten million polygons a frame isn't too bad, seeing as that's at least 3 times as many polygons as pixels. But these things will be shader limited so you won't get to use all that. A flat shaded cartoon renderer might get more though and produce smoother models.
 
Pardon my limited knowledge, but what does "shader limited" mean with regard to the maximum polygon output?

Is it a trade-off between normal mapping and actual polygons? It seems to me that normal-mapped characters (normal mapping being a form of bump mapping) have fewer polygons than devs would otherwise seem to have at their disposal.
 
Shading is what makes the polys look like more than flat faces (pixel shading), or weird fancy stuff I don't understand (vertex shading!). It's adding textures and lighting effects such as normal mapping, or calculating lights, or displacement mapping. A shader is a series of instructions, and better-looking shaders tend to be longer programs, meaning they take more time to execute.

You've seen screenshots with normal-mapped textures, and they look fantastically detailed, saving lots of polygons. But at the same time you still have edges. Ears are a good example: every ear of a game character has about 3 straight edges defining its curve, and normal mapping can't hide that. In an ideal world those ears would be smooth.
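To make "longer shader = less throughput" concrete, here's a back-of-envelope sketch in Python. Every number in it (the overdraw factor, the 48-billion-ops headline figure) is an illustrative assumption, not a measured spec:

```python
# Back-of-envelope: how many shader ops are available per shaded pixel.
# All figures below are illustrative assumptions, not measured specs.

pixels_per_frame = 1280 * 720      # 720p render target: 921,600 pixels
overdraw = 2.5                     # assumed average shading passes per pixel
fps = 60

shaded_pixels_per_sec = pixels_per_frame * overdraw * fps

shader_ops_per_sec = 48e9          # the "48 billion shader ops" headline figure

ops_budget_per_pixel = shader_ops_per_sec / shaded_pixels_per_sec
print(f"~{ops_budget_per_pixel:.0f} shader ops per shaded pixel")  # ~347
```

A longer shader program eats straight into that per-pixel budget, which is why a flat-shaded cartoon renderer could afford to spend the savings on more geometry.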
 
Shifty Geezer said:
From what I'm hearing these GPU's (or Xenos at least) will be setup limited to 500 million vertices/second, so that's as good it gets no matter how much CPU you throw into the equation.

About ten million polygons a frame isn't too bad, seeing as that's at least 3 times as many polygons as pixels. But these things will be shader limited so you won't get to use all that. A flat shaded cartoon renderer might get more though and produce smoother models.

I've heard some people say you can use the Cell to increase polygon count in the PS3. Is this true? If so, why is it true on PS3 and not on 360?

Also, how can you tell these will be shader limited? On the 360, with 10 million polys and 48 billion shader ops, it doesn't seem like it would be.

Thanks

J
 
expletive said:
I've heard some people say you can use the Cell to increase polygon count in the PS3. Is this true? If so, why is it true on PS3 and not on 360?

That is because either those people are talking crap, or you aren't listening closely.
When "people" talk about using Cell to accelerate rendering, they usually mean something like using some SPEs to apply post-processing effects, or to rasterise into separate buffers from the GPU (because you don't want two differently behaved rasterisers working on the same scene), e.g. for shadow buffering.
Now, Shifty Geezer was talking about Xenos being setup-limited to 500 million vertices/s (whether that number is any good, I don't know). This number would mean it simply cannot draw more than that, no matter what XeCPU or Cell do. Such a number exists for RSX as well.
But what both of these CPUs can do is render (i.e. transform & rasterise) completely on their own, with the caveat that the XeCPU probably cannot directly access the frame buffer.
 
[maven] said:
That is because either those people are talking crap, or you aren't listening closely.
When "people" talk about using Cell to accelerate rendering, they usually mean something like using some SPEs to apply post-processing effects, or to rasterise into separate buffers from the GPU (because you don't want two differently behaved rasterisers working on the same scene), e.g. for shadow buffering.

You could use a CPU for vertex work too. A CPU can help on either end of the pipe: you've discussed the end of the pipe (post-processing), but there's also work that could be done before you feed data into the pipe. And yes, in some instances, parallel rasterisation of independent elements.
 
Titanio said:
You could use a CPU for vertex work too. A CPU can help on either end of the pipe, you've discussed the end of the pipe (post processing)
That's because I don't foresee a particular lack of vertex processing capabilities in either GPU, and because Shifty Geezer was talking about being set-up limited.

And yes, in some instances, parallel rasterisation of independent elements.
Welcome to hell. I am not saying it is impossible, but how many games nowadays still have visible T-junctions?
 

[maven] said:
That's because I don't foresee a particular lack of vertex processing capabilities in either GPU.

Xenos has no limit on vertex shader use, no? Theoretically it could dedicate all 48 ALUs to vertex shading, if setup allowed, for a vertex transform rate of 6 billion per second! The problem is the real-world situation of matching the vertex rate of competing hardware, since a large vertex shader load means pixel shading/rendering speed is automatically reduced, so the frame rate goes down.
 
[maven] said:
That's because I don't foresee a particular lack of vertex processing capabilities in either GPU, and because Shifty Geezer was talking about being set-up limited.

We've seen how several models have been constrained in terms of geometry complexity, with sharp edges where there should be none, even in titles from top-tier devs.

GT5, PGR3 anyone?
 
zidane1strife said:
We've seen how several models have been constrained in terms of geometry complexity, with sharp edges where there should be none, even in titles from top-tier devs.

GT5, PGR3 anyone?
Using first-gen software (stuff not even released, at that) as a metric is misleading. When games are developed with the hardware in mind (and in front of the developers!), things should shift a lot. This is one of the biggest reasons 1st-gen and 2nd-gen games look so different. You cannot compare starting a project on a 9800 Pro or 6800 with working on the real thing.

Further, this generation will benefit down the road when procedural synthesis is used. The setup limit on vertex transformation is not the only issue: large, detailed meshes are large in size, which is bad because they fill up memory and consume a lot of memory bandwidth. The CPUs in both consoles are designed with procedural data generation in mind, and this will save memory and memory bandwidth. For the same reason we will also begin seeing, for the first time in a practical sense, displacement mapping.

These types of techniques are difficult to test when you are fighting the system just to get the game out on time, are on a crunch schedule, and either have a 9800 Pro/X800 with minimal memory bandwidth from the system, or have a Cell with 5% of the bandwidth to the GPU that you will have on the real console. By the time you get a beta kit with finalized hardware, you're well beyond the design and testing stage and don't have time to research, develop, test, and deploy a completely new method for geometry generation.

Every gen is like this; only a small handful of titles from the first year hold up graphically by the last year of the console's mainstream life--and that is usually due to art and a solid budget, not because the console's power was tapped to any great degree.

Expect the poly count to go up significantly in the next 2 years.
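The procedural-synthesis idea above (store a seed, regenerate the data on demand, instead of keeping large meshes in memory) can be sketched in a few lines of Python. `grass_patch` and its parameters are hypothetical, purely for illustration:

```python
import random

def grass_patch(seed, count, size=100.0):
    """Deterministically regenerate `count` (x, z, rotation) placements
    from a single stored seed, instead of storing them all in memory."""
    rng = random.Random(seed)
    return [(rng.uniform(0, size), rng.uniform(0, size), rng.uniform(0, 360))
            for _ in range(count)]

# The same seed always reproduces the same patch, so only the seed
# (a few bytes) needs to live in memory or cross the bus.
assert grass_patch(42, 1000) == grass_patch(42, 1000)
```

The trade is memory and bandwidth for CPU cycles, which is exactly where SPE- or Xenon-style vector throughput is supposed to help.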
 
Shifty Geezer said:
From what I'm hearing these GPU's (or Xenos at least) will be setup limited to 500 million vertices/second, so that's as good it gets no matter how much CPU you throw into the equation.

About ten million polygons a frame isn't too bad, seeing as that's at least 3 times as many polygons as pixels. But these things will be shader limited so you won't get to use all that. A flat shaded cartoon renderer might get more though and produce smoother models.

Just an FYI, the spec for 360 is 500 million triangles/sec and not vertices. I'm assuming you meant triangles because at 60fps this equates to roughly 8.3 million polys per frame.

http://www.xbox.com/en-US/xbox360/factsheet.htm

However, in a 1280x720 frame you have roughly 1 million pixels, so you get about 8x as many polys as pixels (not 3x as you state above).

Am I calculating incorrectly?

(BTW, I'm still not clear if 8 million polys, or 8x polys vs pixels, is 'plenty'. :) )
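The arithmetic can be checked in a few lines of Python. Note that against the exact pixel count (921,600 rather than a rounded-up 1 million), the peak ratio actually comes out closer to 9x:

```python
# Sanity-checking the fact-sheet figures quoted above.
tri_per_sec = 500e6                 # 500 million triangles/second (peak)
fps = 60

tri_per_frame = tri_per_sec / fps   # ~8.33 million triangles per frame
pixels = 1280 * 720                 # 921,600 pixels at 720p

ratio = tri_per_frame / pixels      # peak triangles per pixel
print(f"{tri_per_frame/1e6:.2f}M tris/frame, {ratio:.1f} per pixel")
```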

J
 
expletive said:
Just an FYI, the spec for 360 is 500 million triangles/sec and not vertices. I'm assuming you meant triangles because at 60fps this equates to roughly 8.3 million polys per frame.

In modern GPUs,

1 vertex ≈ 1 triangle (for a well-optimized mesh)

in a 1280x720 frame, you have roughly 1 million pixels so you get about 8x as many polys as pixels (not 3x as you state above).

Am I calculating incorrectly?

(BTW, I'm still not clear if 8 million polys, or 8x polys vs pixels, is 'plenty'. :) )

Yes, the theoretical limit is 8x as much as the number of pixels at 720p.

That said, that is peak setup--one vertex per cycle. Considering the rendering phases of a frame, it is unrealistic to expect to hit peak numbers--although, unlike past generations, the new hardware will get closer than ever before. But polys/sec is almost a worthless metric nowadays, since there are so many variables in play.

Another thing to consider is that a lot of polygons are culled in the Z pass. And while the vertex shading abilities of both console GPUs are above and beyond their respective setup limits, the bottom line is that transformation and vertex shading are not really the bottleneck; software design, memory, and memory bandwidth are.

Like Laa-yosh said, HOS (higher-order surfaces) can help a lot: a TON of detail up close, adaptively tessellated with a progressive LOD as it recedes into the distance. So it is uber-detailed up close and not so much in the far distance (no point wasting polygons on an object that covers 20 pixels in total!)
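The distance-based LOD idea can be sketched minimally as below; the distance thresholds and triangle counts are made up for illustration:

```python
# Made-up LOD table: (max distance, triangle count of that model version).
LODS = [
    (10.0, 20000),       # close-up: full-detail mesh
    (50.0, 4000),
    (200.0, 500),
    (float("inf"), 50),  # barely a few pixels on screen
]

def triangles_for(distance):
    """Pick the triangle budget for an object at the given distance."""
    for max_dist, tris in LODS:
        if distance <= max_dist:
            return tris

print(triangles_for(5.0), triangles_for(500.0))  # full detail vs. far away
```

Adaptive tessellation of a higher-order surface does the same thing continuously rather than with discrete model versions, but the budgeting logic is the same.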
 
An optimized poly mesh manages about 1 vertex per triangle. We've covered this before in this forum, so a search will explain, but as a quick explanation consider a triangle ribbon...

Code:
   o     o     o     o     o
  / \   / \   / \   / \   / \
 /   \ /   \ /   \ /   \ /   \
o_____o_____o_____o_____o_____o

To add another triangle to the end, you only need to add one vertex and reuse two existing vertices. 'Vertex' and 'triangle' are generally interchangeable when talking about polygons.
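The "one new vertex per triangle" property of a ribbon can be shown with a couple of toy functions (a sketch, assuming one single unbroken strip):

```python
def verts_independent(triangles):
    """Each triangle submitted on its own: 3 vertices apiece."""
    return 3 * triangles

def verts_strip(triangles):
    """One unbroken ribbon: 2 vertices to start, then 1 per extra triangle."""
    return triangles + 2

# For long strips the vertex count approaches the triangle count,
# which is why 'vertex' and 'triangle' get used interchangeably.
for n in (1, 10, 1000):
    print(n, verts_independent(n), verts_strip(n))
```

At 1000 triangles the strip needs 1002 vertices versus 3000 for independent triangles, so the vertex/s and triangle/s figures effectively converge.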
 
Shifty Geezer said:
An optimized poly mesh manages about 1 vertex per triangle. We've covered this before in this forum, so a search will explain, but as a quick explanation consider a triangle ribbon...

Code:
   o     o     o     o     o
  / \   / \   / \   / \   / \
 /   \ /   \ /   \ /   \ /   \
o_____o_____o_____o_____o_____o

To add another triangle to the end, you only need to add one vertex and reuse two existing vertices. 'Vertex' and 'triangle' are generally interchangeable when talking about polygons.


Got it. I think in the end it was the 8x vs 3x (polys to pixels) that threw me off.

J
 