Max poly-count per pixel?

DVRW

Newcomer
First of all, sorry for my bad English.

Every time new GPUs launch, a lot is said about their poly-processing capacity, hundreds of millions of polygons. What is the polygon limit for a given pixel area? How many polygons can we actually notice on an HD TV (1280x720)?

With the fast advance of graphics computation and the slow progress in the TV/monitor area, in a few years the poly-count could hit a limit because of the low resolution.

I read that theoretically a billion polygons are needed to produce a photo-realistic scene. What resolution would be required to actually see all the detail generated by this 1 billion polygons?
 
I think the theoretical max would be: pixelCount * multisampleCount

So on a 1280x720 display with 4x MSAA, you could theoretically see 3,686,400 polygons. That's purely theory, though, as it assumes one unique polygon for each multisample point.
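
In case it helps, here is that calculation written out (purely a restating of the numbers above; the variable names are mine):

# A quick sketch of the "one unique polygon per multisample point" upper bound
# described above; variable names are mine.
width, height = 1280, 720          # HD TV resolution from the question
msaa_samples = 4                   # 4x MSAA

pixel_count = width * height       # 921,600 pixels
max_visible_polys = pixel_count * msaa_samples

print(max_visible_polys)           # 3,686,400 -- the figure quoted above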
 
I found an interesting paper about the limits of human vision in relation to pixels and triangles. Dividing the visible amount of triangles by the visible amount of pixels, the result is roughly 6 triangles per pixel.

If that is correct, it is as I imagined: unless you get a much higher resolution monitor, GPUs/game consoles will soon be able to process more triangles than we are able to see.

Link: http://www.itn.liu.se/~matco/TNM053/Papers/deering.pdf
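
A rough extrapolation of that ~6 triangles per pixel figure to the resolutions mentioned in this thread (my own arithmetic, not numbers from the paper):

# Hypothetical: total "noticeable" triangles if the ~6 per pixel figure holds.
triangles_per_pixel = 6            # rough figure read from the paper above

for name, (w, h) in {"640x480": (640, 480), "1280x720": (1280, 720)}.items():
    print(name, w * h * triangles_per_pixel)
# 640x480  -> 1,843,200 triangles
# 1280x720 -> 5,529,600 triangles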
 
Well, GPUs won't be able to calculate more polygons than we can see anytime soon. They're just not designed for it. GPUs are essentially designed to render (relatively) large polygons and use pixel shader "magic" to make each pixel look good.

This is in direct contrast to, say, high-end movie rendering where the hardware is designed to exclusively render sub-pixel-sized polygons. The performance characteristics are pretty different, and though sub-pixel polygons will look better, they're going to be much harder to get off the ground for realtime applications.
 
One billion vertices per second peak isn't really necessary in practice. However, 40 million vertices per second with a 50-instruction vertex shader is pretty cool, and actually useful. And there's usually a strong correlation between the two.
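
To make the correlation concrete, here is a toy sketch of the trade-off (the instruction-issue rate is just back-derived from my own numbers above, not a spec of any real GPU):

# Toy model: vertex throughput scales roughly inversely with vertex shader
# length for a fixed instruction-issue rate. 2e9 instr/s below is simply
# 40M verts/s * 50 instructions, i.e. an assumption, not a real spec.
instr_per_second = 40_000_000 * 50

for shader_length in (1, 10, 50, 100):
    print(shader_length, int(instr_per_second / shader_length))
# 1 instr  -> 2,000,000,000 verts/s
# 50 instr ->    40,000,000 verts/s (the useful figure above)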
 
sunscar said:
What is it for REYES, like four polys per pixel?

I think I remember 2 but who knows what resolution they render at.

I think there is more benefit to doing super high-res rendering and then downsizing than to packing more polys into a given pixel.
 
sunscar said:
What is it for REYES, like four polys per pixel?
IIRC**, it's tied to the sub-pixel resolution, so it is basically just enough to go over the Nyquist limit (i.e. 2x2 samples per pixel). I believe the sample positions are jittered to convert any aliasing into high frequency noise.....

(** but I could be wrong)
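
To illustrate what I mean by jittered sample positions, here is a tiny sketch of a 2x2 stratified pattern (purely illustrative, not PRMan's actual sampler):

import random

# One pixel, 2x2 stratified ("jittered") samples: each sample gets a random
# offset inside its own quadrant, so regular aliasing turns into noise.
def jittered_samples(subdiv=2):
    samples = []
    for sy in range(subdiv):
        for sx in range(subdiv):
            samples.append(((sx + random.random()) / subdiv,
                            (sy + random.random()) / subdiv))
    return samples

print(jittered_samples())   # four (x, y) offsets inside the unit pixel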
 
For what it's worth, I also vaguely remember REYES using micro quads that are slightly less than half a pixel on each side, i.e. slightly more than 2x2 polys per pixel. I think this is a user setting in PRMan though.
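
If the setting I'm thinking of is RenderMan's ShadingRate (roughly the micropolygon area in pixels, as I remember it), the mapping would look something like this sketch:

import math

# Hedged sketch: if ShadingRate ~= micropolygon area in pixels, then
# micropolys per pixel ~= 1 / ShadingRate and edge length ~= sqrt(ShadingRate).
for shading_rate in (1.0, 0.5, 0.25):
    per_pixel = 1.0 / shading_rate
    edge = math.sqrt(shading_rate)
    print(shading_rate, per_pixel, round(edge, 3))
# 1.0  -> ~1 micropoly/pixel, edge ~1.0 px
# 0.25 -> ~4 micropolys/pixel, edge ~0.5 px (the "half a pixel on a side" case)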
 
GameCat said:
For what it's worth, I also vaguely remember REYES using micro quads that are slightly less than half a pixel on each side, i.e. slightly more than 2x2 polys per pixel. I think this is a user setting in PRMan though.
Come to think of it, that would make sense as you need to exceed the Nyquist limit. I suppose I could dig out the Reyes paper from my collection of Siggraphs ... but I'm not curious enough at the moment.
 
Chalnoth said:
Well, GPUs won't be able to calculate more polygons than we can see anytime soon. They're just not designed for it. GPUs are essentially designed to render (relatively) large polygons and use pixel shader "magic" to make each pixel look good.

A respected developer recently observed that the optimal triangle size for GPUs has remained in the vicinity of 10-20 pixels for the past few years, and is likely to stay there for the foreseeable future.
 
REYES was originally designed to render 4 micropolygons per pixel, to be below the Nyquist limit. Anyway, it turned out that with proper shaders that do not alias too badly, you can usually get away with 1 micropolygon per pixel, which is the baseline for all movie rendering you can see atm. (Shaders are only executed per vertex, not per pixel.) More micropolygons are only needed if the shader has very high frequencies or if you do extreme displacement.

The main difference is that you always get this single polygon per pixel, no matter how far the object is from the camera (at least when using high-level primitives). In realtime engines, OTOH, you usually have 10-20+ pixel-sized polys close to you and 10-20 polys per pixel in the distance, and although you push the same poly rate as an offline renderer (let's say 1,000,000 polys @ 640x480 ;), you still see jaggies.
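
Working out that last example (just restating the figures in my post):

# Average polygon density for the "1,000,000 polys @ 640x480" example above.
polys = 1_000_000
width, height = 640, 480

print(round(polys / (width * height), 2))   # ~3.26 polys per pixel on average
# The average looks REYES-like, but the distribution is very uneven in a
# realtime scene (10-20+ pixels per poly up close, 10-20 polys per pixel far
# away), so edges still alias.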
 
I keep hearing that 4 micropolygons per pixel is below the Nyquist limit, which is right, but to be just below the Nyquist limit the width and height only need to be just less than 2 pixels each (since you are sampling once per pixel, the highest frequency you can capture is half that, or in other words a frequency corresponding to a width of 2 pixels).

Or are they using some different interpretation of the Nyquist Limit?
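
For reference, here is the arithmetic behind the two readings as I see them (my own framing of the question, not an answer):

# One visibility/shading sample per pixel along each axis.
samples_per_pixel_axis = 1.0

nyquist_freq = samples_per_pixel_axis / 2.0    # 0.5 cycles per pixel
min_period = 1.0 / nyquist_freq                # 2 pixels

print(nyquist_freq, min_period)
# Reading 1: micropolygon width/height just under 2 pixels sits at the limit,
#            which is what I argue above.
# Reading 2: half-pixel micropolygons (4 per pixel) give 2x2 geometric samples
#            per pixel, i.e. oversampling relative to the pixel grid.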
 