1280x720/60fps - How many polygons are "enough"?

Acert93 said:
A TON of detail up close and adaptively tessellate it with a progressive LOD as it goes into the distance. So it is uber detailed up close and not so much in the far distance (no point wasting polygons on an object that is the size of 20 total pixels!)

HOS is generally not like that; what you're talking about is more like Multires or some other form of progressive mesh technology.

HOS basically means that you represent a curved surface with a relatively low number of control points. Generally a curve can be described with 3 or 4 points; depending on the HOS implementation, this might be enough to render a perfectly smooth curved surface.
Possible forms of HOS are quadrilateral or triangular patches (Bézier or NURBS, for example) or some kind of subdivision surface.

This is not for adding or removing detail - it only smooths out jagged curves on a surface. For example, a cube is turned into a somewhat squashed sphere after several iterative steps of a Catmull-Clark subdivision scheme. It won't turn into anything more complex no matter how many times you subdivide it, and it won't get any simpler either, because the original control points remain no matter how far you get from it.
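To make the "smoother, but never more detailed" point concrete: Catmull-Clark needs full mesh connectivity, but its 2D analogue, Chaikin's corner-cutting scheme, shows the same behavior in a few lines. A minimal Python sketch, purely illustrative and not from any engine:

```python
# Minimal sketch: Chaikin corner-cutting, a 2D analogue of subdivision
# surfaces. Each pass replaces every edge with two points at 1/4 and 3/4
# along it. The square gets rounder with each iteration, but no new
# detail ever appears: the result is fully determined by the 4 control
# points, and the limit curve is a quadratic B-spline.

def chaikin(points, iterations):
    for _ in range(iterations):
        refined = []
        n = len(points)
        for i in range(n):  # closed polygon: wrap around at the end
            (x0, y0), (x1, y1) = points[i], points[(i + 1) % n]
            refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        points = refined
    return points

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
for level in range(4):
    print(level, len(chaikin(square, level)), "points")
# 4, 8, 16, 32 points: smoother every time, never any more complex.
```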

This is why I'd expect HOS to be used first on cars, roads and other stuff, where you want nice and clean curves without any additional geometric detail. The best way to add that would be displacement mapping, but that won't work well enough on this gen of hardware. I don't know about combining HOS and normal mapping on characters - the perfectly smooth silhouettes would probably look worse than the current lowpoly ones.
 
Gotcha. I was more thinking adaptive meshes. Hardware-wise, I was thinking more along the lines of the fact that current GPUs don't natively do HOS (I think even Xenos converts them to triangles), but you could use HOS in your base art and use CELL/Xenon to adaptively tessellate them (this was described in Ars Technica's procedural synthesis paper). I guess by not being very accurate and explaining it in terms of the hardware I had in mind, I sounded like a boob! :D Thanks for clarifying, Laa-yosh (good chance I still don't understand what I am talking about haha)
 
Tessellating HOS is a complete topic in itself :)
You can have finite iterations, or adaptive iterations based on some criteria. Generally, most renderers tessellate to triangles; only Pixar's PRMan works with quadrilateral micropolygons AFAIK.

I'd expect some more fixed-tessellation HOS this gen, as it's been used in several PS2 and Xbox titles before. It'll be CPU-based, but the architectures seem to support this a lot better. The adaptive stuff on the X360 does not seem to have a bright future if they haven't even had any tech demos to show it off...
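For what it's worth, "fixed tessellation" just means evaluating the surface on a regular grid at a chosen level, regardless of the camera. A minimal CPU-side sketch for one bicubic Bézier patch; the function names are mine, not from any console SDK:

```python
# Minimal sketch of fixed-level tessellation: evaluate a bicubic Bezier
# patch on a regular (n+1) x (n+1) grid and emit an indexed triangle
# list. This is the kind of CPU-friendly work described above.

def bezier3(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier (per component) via de Casteljau."""
    a = [p0[k] + (p1[k] - p0[k]) * t for k in range(3)]
    b = [p1[k] + (p2[k] - p1[k]) * t for k in range(3)]
    c = [p2[k] + (p3[k] - p2[k]) * t for k in range(3)]
    d = [a[k] + (b[k] - a[k]) * t for k in range(3)]
    e = [b[k] + (c[k] - b[k]) * t for k in range(3)]
    return [d[k] + (e[k] - d[k]) * t for k in range(3)]

def tessellate_patch(ctrl, n):
    """ctrl: 4x4 grid of 3D control points; n: segments per side."""
    verts = []
    for j in range(n + 1):
        v = j / n
        for i in range(n + 1):
            u = i / n
            # Collapse each row of control points at u, then evaluate
            # the resulting column curve at v.
            col = [bezier3(*row, u) for row in ctrl]
            verts.append(bezier3(*col, v))
    tris = []
    for j in range(n):
        for i in range(n):
            k = j * (n + 1) + i
            tris.append((k, k + 1, k + n + 1))
            tris.append((k + 1, k + n + 2, k + n + 1))
    return verts, tris  # 16 control points -> 2*n*n triangles

# Example: 8 segments per side -> 81 verts, 128 triangles.
flat = [[(float(i), float(j), 0.0) for i in range(4)] for j in range(4)]
verts, tris = tessellate_patch(flat, 8)
print(len(verts), "verts,", len(tris), "tris")
```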
 
Laa-Yosh said:
Tessellating HOS is a complete topic in itself :)
You can have finite iterations, or adaptive iterations based on some criteria. Generally, most renderers tessellate to triangles; only Pixar's PRMan works with quadrilateral micropolygons AFAIK.
Off topic : Have you ever heard of the software RealSoft (or Real3D as it was originally called)? It's a raytracer that renders without tessellation (or maybe micropolys, but there's no triangulation, only pure curves). At a graphics conference many years ago, Pixar were raving about their wonderful, unique curves rendering that they were first to introduce, when a visitor piped up that Real3D was already doing this. They were not amused!

On topic : It's from this rendering of HOS that I imagined something similar could be achieved in realtime situations. But as you've explained in other threads, there are other troubles, such as generating UV coordinates.
 
Laa-Yosh said:
Tessellating HOS is a complete topic in itself :)
You can have finite iterations, or adaptive iterations based on some criteria. Generally, most renderers tessellate to triangles; only Pixar's PRMan works with quadrilateral micropolygons AFAIK.

I'd expect some more fixed-tessellation HOS this gen, as it's been used in several PS2 and Xbox titles before. It'll be CPU-based, but the architectures seem to support this a lot better. The adaptive stuff on the X360 does not seem to have a bright future if they haven't even had any tech demos to show it off...
Since we are on HOS...

Xenos supports some higher-order surfaces, but not NURBS/Bézier curves (to my knowledge). How do the HOS it does support fall into the grand scheme of things? Useless extras (i.e. not worth the effort/tradeoffs, useless shapes, and/or too many adjacent issues with stuff like lighting/shadowing), or something you would think could get some use down the road?

To my knowledge that has not been discussed much. Not much seems to have been said about them being used realistically (beyond the fact that it takes 2 cycles to work with them versus 1 for each regular vertex).
 
Shifty Geezer said:
Off topic : Have you ever heard of the software RealSoft (or Real3D as it was originally called)? It's a raytracer that renders without tessellation (or maybe micropolys, but there's no triangulation, only pure curves).

Theoretically, HOS like NURBS and subdivision surfaces describe a perfectly smooth surface, called the limit surface. You get this surface by solving a mathematical formula with infinite iterations - which is why it's not achievable in practice.
So you can choose to perform 1-5 or so iterations for the whole model (most renderers do this), or you can adaptively adjust the number of iterations; either way you get reasonably close to the limit surface.
Or you can calculate the formula for each pixel/sample and thus skip tessellation completely.

I think PRMan uses adaptive tessellation...
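To put numbers on the "iterations vs. limit surface" idea, here is a small Python sketch for the curve case (uniform cubic B-spline), where the limit position of a control point also has a simple closed form. The masks are standard; everything else is illustrative:

```python
# One cubic B-spline subdivision step on a closed control polygon, plus
# the closed-form "limit mask" that projects a control point onto the
# limit curve directly. Deviation from the limit shrinks by roughly 4x
# per iteration, which is why a handful of iterations already looks
# smooth, while the limit curve itself needs infinitely many.

def subdivide(pts):
    n = len(pts)
    out = []
    for i in range(n):
        pm, p, pp = pts[i - 1], pts[i], pts[(i + 1) % n]
        # Repositioned vertex point, then the new edge midpoint.
        out.append(tuple((pm[k] + 6 * p[k] + pp[k]) / 8 for k in range(2)))
        out.append(tuple((p[k] + pp[k]) / 2 for k in range(2)))
    return out

def limit_position(pts, i):
    pm, p, pp = pts[i - 1], pts[i], pts[(i + 1) % len(pts)]
    return tuple((pm[k] + 4 * p[k] + pp[k]) / 6 for k in range(2))

pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
for level in range(5):
    err = max(abs(pts[i][k] - limit_position(pts, i)[k])
              for i in range(len(pts)) for k in range(2))
    print(f"iteration {level}: {len(pts):3d} points, "
          f"max deviation from limit {err:.5f}")
    pts = subdivide(pts)
```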
 
I suppose a reason we've heard little on Xenos is that it hasn't been available. Hopefully, with the hardware in devs' hands, we'll see some examples.
 
Shifty Geezer said:
I suppose a reason we've heard little on Xenos is that it hasn't been available. Hopefully, with the hardware in devs' hands, we'll see some examples.
Ha! Good point :LOL: Cannot very well try something on hardware that has no way of supporting it!
 
polygon setup

expletive said:
(BTW, I'm still not clear if 8 million polys, or 8x polys vs. pixels, is 'plenty'. :) )

J

Remember, that is the setup limit, not the actual geometry transform rate. The actual polygon limit is determined by the developer's need for VS-PS balance and by real-world sharing of vertices.
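A back-of-the-envelope illustration of the difference, in Python; the setup rate below is a hypothetical placeholder, not a quoted spec for either console:

```python
# Frame budget from a headline triangle setup rate: divide by the frame
# rate, then discount for the fraction of frame time actually spent
# pushing geometry. All numbers here are hypothetical.

setup_rate = 500_000_000   # triangles/sec, placeholder peak setup limit
fps = 60
geometry_share = 0.5       # fraction of the frame spent on geometry

tris_per_frame = setup_rate / fps * geometry_share
print(f"{tris_per_frame / 1e6:.1f}M triangles per frame")    # 4.2M

# Coverage check against the thread's 1280x720 target:
pixels = 1280 * 720
print(f"{tris_per_frame / pixels:.1f} triangles per pixel")  # ~4.5
```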
 
expletive said:
I had posted an earlier question regarding the fill rate and it was concluded that both consoles had plenty of fill rate.

What I am wondering is if there's a similar number of polygons where you've reached a technical or perceptual threshold where 'enough is enough'?

How many polys can each console produce without any CPU help? Assuming 60fps, are these numbers enough?

If you bring the 2 CPUs and the bandwidth to/from them into the equation, how many can each system theoretically produce? How many can they reasonably produce (i.e. no one would use all 3 Xenon cores for triangles)?

So, at the end of the day, does either console have a polygon advantage measured against how many are 'enough' (that is, if there's ever enough :) )?

J

As normal mapping techniques evolve, the number of polygons needed for games will grow nowhere near as much as it used to between generations.
 
scatteh316 said:
As normal mapping techniques evolve, the number of polygons needed for games will grow nowhere near as much as it used to between generations.

I doubt that. I think they will get more sophisticated in how geometry is applied - for example, to show the movement of muscles under the skin and to allow better real-time deformation. Also, new sophisticated geometry-based effects will increase the physics power required, so it's no surprise the PPU is coming to the PC market.
 
Since the 360 uses unified shaders, is it pretty much a zero-sum game between vertices and shader ops? For example, if a scene has 1 million polygons per frame, does that directly cut into Xenos' shader op ability? Or does the way scenes are rendered not cause it to be one or the other?

Just wondering, how many vertices can the PS3 'set up'?

J
 
Zero Sum Game

expletive said:
Since the 360 uses unified shaders, is it pretty much a zero-sum game between vertices and shader ops?

Yes. You mean between vertex and pixel shader ops, no? Each shader can only do one or the other at a time.

For example, if a scene has 1 million polygons per frame, does that directly cut into Xenos' shader op ability? Or does the way scenes are rendered not cause it to be one or the other?

Yes. More vertex shader ops = fewer pixel shader ops... zero-sum game.

Just wondering, how many vertices can the PS3 'set up'?

We do not know enough about RSX to know this. A 550MHz G70 is capable of 1.1B triangles/sec with 100% efficient vertex sharing; real-world rates are much lower.
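For the curious, "100% efficient vertex sharing" is doing a lot of work in that peak figure. A quick Python sketch of why, using an indexed grid mesh as the best case (the numbers are just geometry, not console specs):

```python
# In a large regular grid rendered as an indexed triangle list, interior
# vertices are shared by many triangles, so the mesh approaches 2
# triangles per transformed vertex. An unindexed triangle soup pays 3
# vertices per triangle instead (1/3 of a triangle per vertex).

def grid_mesh_stats(n):
    """n x n quad grid as an indexed triangle list."""
    vertices = (n + 1) * (n + 1)
    triangles = 2 * n * n
    return vertices, triangles

for n in (2, 16, 256):
    v, t = grid_mesh_stats(n)
    print(f"{n:3d}x{n:<3d} grid: {t:6d} tris, {v:6d} verts, "
          f"{t / v:.2f} tris per transformed vertex")
# The ratio approaches 2.0 as n grows; a triangle soup sits at 0.33.
# So a peak quoted with perfect sharing can be ~6x above the worst case.
```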
 
Laa Yosh said:
It'll be CPU-based, but the architectures seem to support this a lot better
Are you sure about that? :p
At any rate, PS2 setup was pretty much ideal for supporting any kind of real-time tessellated surfaces, so there's little room for improvement aside from, well - just making everything go faster.

But as you mentioned - you really want something more general-purpose than HOS if it's ever going to see widespread use. Even for cars, I question how much real benefit there is to using HOS. If the only thing you gain is smoother extreme close-ups, well...
 
just need to verify...

Aren't normal maps made from very highly detailed models consisting of millions of polygons?

Isn't that where the industry is trying to head??? The ultimate goal, aside from the coveted "Hologram age".

Where one can render things on the fly without needing "shortcuts" such as bump maps and normal maps...
 
LunchBox said:
just need to verify...

Aren't normal maps made from very highly detailed models consisting of millions of polygons?

Isn't that where the industry is trying to head??? The ultimate goal, aside from the coveted "Hologram age".

Where one can render things on the fly without needing "shortcuts" such as bump maps and normal maps...

Normal maps are baked from models with millions of polygons, yes. I'm not sure what you mean by "Isn't that where the industry is trying to head???" - the "millions of polys" or the normal maps?
I think the industry is trying to head towards displacement maps, or real geometry, though I could be wrong. Normal maps will stick around for a looong time, and they should, given the performance.
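The real bake casts rays from the low-poly surface to the million-poly source model and records its normals, which is too much machinery for a forum post. A much simpler cousin of the idea, deriving a tangent-space normal map from a height map by finite differences, fits in a few lines of Python (illustrative only):

```python
# Illustrative sketch: turn a height map into tangent-space normals via
# central differences. Production normal maps are baked from a high-poly
# source model instead, but the encoding of the result is the same.
import math

def height_to_normals(height, scale=1.0):
    """height: 2D list of floats; returns a 2D list of unit normals."""
    h, w = len(height), len(height[0])
    normals = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Central differences, clamped at the texture borders.
            dx = (height[y][min(x + 1, w - 1)]
                  - height[y][max(x - 1, 0)]) * 0.5 * scale
            dy = (height[min(y + 1, h - 1)][x]
                  - height[max(y - 1, 0)][x]) * 0.5 * scale
            # A heightfield z = f(x, y) has normal ~ (-df/dx, -df/dy, 1).
            inv = 1.0 / math.sqrt(dx * dx + dy * dy + 1.0)
            normals[y][x] = (-dx * inv, -dy * inv, inv)
    return normals

# Each normal would then be packed into RGB as 0.5 * n + 0.5 per channel.
```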
 
ihamoitc2005 said:
Yes. You mean between vertex and pixel shader ops, no? Each shader can only do one or the other at a time.

Yes. More vertex shader ops = fewer pixel shader ops... zero-sum game.

We do not know enough about RSX to know this. A 550MHz G70 is capable of 1.1B triangles/sec with 100% efficient vertex sharing; real-world rates are much lower.

Are both functions always happening on every clock, or does geometry generally happen first, then shading? So that when the shading actually happens there isn't a constraint between shader and vertex ops. Sorry, I just don't fully understand the rendering pipeline and the design of Xenos.

J
 
You'll generally run both vertex and pixel processing simultaneously, though in things like a vertex-only shadow pass you can set all pipes to vertex shading. Presumably, doing the whole rendering process in two passes - all vertex work followed by all pixel work - would be less efficient due to saving out and loading in data to RAM rather than passing it straight from pipe to pipe.
 
Shifty Geezer said:
You'll generally run both vertex and pixel processing simultaneously, though in things like a vertex-only shadow pass you can set all pipes to vertex shading. Presumably, doing the whole rendering process in two passes - all vertex work followed by all pixel work - would be less efficient due to saving out and loading in data to RAM rather than passing it straight from pipe to pipe.

OK, and just to clarify: if a pipe is doing a vertex op, it can't do a pixel op, or can it do one of each simultaneously?

J
 
expletive said:
OK, and just to clarify: if a pipe is doing a vertex op, it can't do a pixel op, or can it do one of each simultaneously?

J
In a "traditional" GPU design you have Vertex Shader Units (VS) and Pixel Shader Units (PS).

PS do pixel shading; VS do vertex shading; the two shall never twine. In a Unified Shader Architecture (USA) you have Shader ALUs (for lack of a better word); these ALUs can work on either vertex or pixel shader code. Xenos does this through a 3 shader array (each with 16 ALUs) and a dynamic scheduler can allocate each thread to VS or PS work so overall the three arrays can either be doing all VS, combo PS/VS, or all PS.

In a normal scene of data the vertex and shader load changes. Currently GPUs have a 4:2, 8:4, 16:10, etc... PS:VS ratio. Some games and scenes in a game (and even stages of a frames rendering) are more VS dependant, others are more PS dependant. So while PS and VS are both working at the same time in general the load between them changes. Sometimes VS are maxed out and some PS pipes are sitting idle, and vice versa. The goal is to have as much pixel shading and vertex shading work going on at any given time to maximize the static balance between PS and VS units.
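We don't know Xenos' actual scheduling policy, so what follows is only a toy model of the dynamic-allocation idea, with every number invented: three arrays that each cycle grab work from whichever queue (vertex or pixel) is deeper, instead of obeying a fixed split.

```python
# Toy model of unified-shader scheduling. Three arrays each retire up to
# WORK items per cycle from the deeper of two queues. The incoming load
# shifts between vertex-heavy and pixel-heavy phases, as within a real
# frame; the dynamic split keeps the arrays busy through both phases.
import random

random.seed(1)
ARRAYS = 3
WORK = 16
vs_q = ps_q = done = 0

for cycle in range(1000):
    vertex_heavy = (cycle % 100) < 30  # early part of each "frame"
    vs_q += random.randint(24, 48) if vertex_heavy else random.randint(0, 8)
    ps_q += random.randint(0, 8) if vertex_heavy else random.randint(24, 48)
    for _ in range(ARRAYS):
        # Greedy: work on the deeper queue; idle only if both are empty.
        if vs_q >= ps_q and vs_q > 0:
            take = min(WORK, vs_q)
            vs_q -= take
        elif ps_q > 0:
            take = min(WORK, ps_q)
            ps_q -= take
        else:
            take = 0
        done += take

print(f"retired {done} items; leftover vs={vs_q} ps={ps_q}")
```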
 