John Carmack's "100 passes per polygon" achievable

Brimstone said:
Aren't displacement mapping and the upcoming PPP in DX10 linked together? Displacement mapping will become more useful in the real world once VPUs get PPPs. If John Carmack were to update the Doom 3 engine design to take advantage of a PPP, the in-game models could be significantly more detailed because the stencil buffer wouldn't have to render all the polygons for the dynamic shadows. Or at least that's the way I understand it.
Carmack would still want those extra polygons to be rendered into the stencil buffer because a low-poly shadow wouldn't look correct. I believe he's made statements concerning this before, although I don't have a link.
 
Regarding the advancement of graphics, I think we're talking about a trend for the near future only, since graphics will still rely on texturing for several years. I don't think anyone's advocating that progress towards simulating game worlds with more interactive building blocks should be impeded, or that the advancement of modeling will do nothing but increase the number of passes and the amount of shading indefinitely.

I've never understood how some interpret high polygon rates and heavy shading as being mutually exclusive focuses in today's hardware. Using a fixed reference point like the console market, we see that the dynamic per-pixel modeling the Xbox does to distinguish itself from the older PS2 doesn't preclude it from also approaching modeling by similarly using larger amounts of geometry (as should be expected from newer hardware). These won't be mutually exclusive directions for the future of graphics advancement, either, as we'd certainly want to avoid the awkwardness of the occasionally insufficient geometry in Doom 3 while also not being limited to the cartoonish smoothness exhibited by a lot of PS2 games.

It'll be interesting to see how various solutions approach more extensive passes and shaders. I thought the rendering pipeline in the PowerVR architecture didn't really impose a set limit on the potential number of textures per pass (speed was just the limiting factor), and that the limits on Kyro, for example, were really just set by the programming environment.

Fox5 said:
BTW, I thought bump mapping didn't rely on texture passes and was applied before the textures were?
You do set-up work beforehand, but at some stage you inevitably crunch through the geometry to vary the underlying shading with the bump map.
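
To make that concrete, here's a minimal sketch of per-pixel bump-mapped diffuse shading; the types and names are illustrative, not any particular API. The normal fetched from the bump/normal map feeds the lighting math itself, which is why the map has to be folded in when the surface is shaded, not simply "before the textures":

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Illustrative only: shade one pixel with a tangent-space normal map.
// 'mapNormal' is the normal fetched from the bump/normal map for this texel,
// 'lightTS' is the light direction transformed into tangent space.
float bumpDiffuse(const Vec3& mapNormal, const Vec3& lightTS) {
    // The per-texel normal replaces the interpolated surface normal, so the
    // bump map participates in the shading itself -- it isn't a colour layer
    // applied "before the textures".
    return std::max(0.0f, dot(mapNormal, lightTS));
}
```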
 
I'm neither agreeing nor disagreeing with John Carmack on this one. He clearly isn't the only software guy who pushes the way hardware is meant to go (except maybe with Nvidia, to an extent), so I'd take what he suggests or wants together with what others want.

The quality of a pixel must go up for graphics to improve to a new level of greatness, but geometry will also have to go up to get a leap like the one from the last generation to the current one. Low amounts of geometry aren't the wave of the future, and at least where consoles are going, the higher the better. The two go hand in hand. If next gen it comes down to a choice between a heavily shaded game and a heavily geometry-based game, I'd assume the one that makes the best use of both resources will come out graphically superior. It doesn't matter if there are 100 passes per polygon in the distant future if the polygons are the size of Doom 3's models' polygons; if that were to happen, then adding more geometry would drag the framerate down badly. From what I can tell, in both the console and PC worlds geometry levels are going to go up in an insane way. It won't be long until we see games with at least 100 million polygons per second with most features turned on. I do think storing all that will be a problem, and HOS and some displacement mapping will do a world of good for geometry in the future.


Ahh, we'll see when the time comes.
 
Lazy8s said:
I've never understood how some interpret high polygon rates and heavy shading as being mutually exclusive focuses in today's hardware. Using a fixed reference point like the console market, we see that the dynamic per-pixel modeling the Xbox does to distinguish itself from the older PS2 doesn't preclude it from also approaching modeling by similarly using larger amounts of geometry (as should be expected from newer hardware).

I'd respond that you're confusing two different concepts:

  • Heavy shading, lighting, and other such advanced tasks (though obviously not all of them) are going to be computationally limited, and in many instances their performance will be bounded by the polygon count of the models and the complexity of the worlds; in many cases the computational requirements will scale upwards extremely fast. Doom III is the perfect example, for the reasons many have stated, and it will remain so until the physical limits imposed by Moore's Law allow an IC with enough logic to run such apps at acceptable levels.
  • Which brings us to point two: your Xbox-PS2 comparison is a poor one, and you admit its flaws, which confuses me as to why you used it. The Xbox is built on a 150nm process, whereas the PS2 came out two years earlier on a 250nm process. I'd hope to God that such an increase in transistor density would buy you added computational performance.

Everything in this realm is about trade-offs and finding the region where you get what you consider the best level of output for the resources you have to work with. It just happens that some, like me, think the trade-off is a wee bit too nonlinear, and that you give up a tad too much if your architecture is geared for nothing but shading. But hey, if your architecture is plastic enough and developers have the freedom to make the choice without massive penalties for redistributing resources, more power to you.
 
I don't see very many reasons to do 100 passes per se.

I do see many reasons to execute potentially hundreds of pixel shader ops per pixel, and that really isn't very far off.

To me at least, I think the big graphics differentiator on the next-gen consoles will be the quality of the shading and lighting models. Polygon counts will increase significantly, but I don't see this providing a massive leap visually.
 
To me at least, I think the big graphics differentiator on the next-gen consoles will be the quality of the shading and lighting models. Polygon counts will increase significantly, but I don't see this providing a massive leap visually.

I agree somewhat, and I believe next-gen systems will indeed deliver excellent results; from what I've seen running in real time on other systems, 1B+ verts with *real-time* ray-tracing seems possible next-gen...

But I think animation and physics will also prove decisive...
 
To me at least, I think the big graphics differentiator on the next-gen consoles will be the quality of the shading and lighting models. Polygon counts will increase significantly, but I don't see this providing a massive leap visually.
Quite true.
However, with all the talk of increased polycounts, I thought I'd mention that the geometry we use for certain other areas that aren't necessarily directly visual still tends to be (a lot) lower than the actual drawn stuff :p
Collision models in particular are on the order of hundreds of times less detailed than the rendered geometry, and I don't see that differential decreasing any time soon. Of course this is in no small way also thanks to the visual benefits of higher-accuracy collision models being pretty subtle stuff (and tied in with a bunch of other issues that aren't adequately solved yet).
 
jvd said:
Can't you already apply up to 6 passes per polygon on the Neon 250, or was it the Kyro?
Kyro has native hardware support for 8 passes. Although possibly not enabled in the drivers, it was theoretically possible to have a huge number of passes (with extra polygons) on the Neon 250 (and DC) because it had "scratch pad" pixel registers that could be used to combine results in interesting ways.

(I don't know if Kyro also had this feature, but then, 8 layers was probably already enough. The most I think any game of the era used was 5 layers.)
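
As a rough illustration of what those scratch-pad pixel registers buy you (a hypothetical software model, not the actual PowerVR interface): each pass writes an intermediate per-pixel result into a register instead of blending straight into the framebuffer, and a final pass can combine the registers in ways fixed blend modes can't.

```cpp
#include <array>

// Hypothetical model of "scratch pad" pixel registers: each pass deposits an
// intermediate per-pixel result, and a final combine mixes them freely rather
// than being limited to fixed framebuffer blend modes.
struct PixelScratch {
    std::array<float, 4> reg{}; // per-pixel scratch registers

    void pass(int n, float layerValue) { reg[n] = layerValue; }

    float combine() const {
        // An arbitrary combine, e.g. base * lightmap + detail * gloss.
        return reg[0] * reg[1] + reg[2] * reg[3];
    }
};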
 
london-boy said:
As I said, I think displacement mapping is the way to go until we get over the polygon way of thinking...

However, displacement mapping uses up geometry, so we're back to square one.
There's a good quote in "Advanced Animation and Rendering Techniques" (Watt and Watt) which basically says you rarely need displacement mapping.
 
Simon F said:
london-boy said:
As I said, I think displacement mapping is the way to go until we get over the polygon way of thinking...

However, displacement mapping uses up geometry, so we're back to square one.
There's a good quote in "Advanced Animation and Rendering Techniques" (Watt and Watt) which basically says you rarely need displacement mapping.

Really? Do you have time to explain a bit further? Pretty please... :D
 
I'm no 3D animation expert, but I too can't find many advantages in displacement mapping.
In 3D modelling it is a more useful technique, but in games I see little use.
The end result is still polygons, so why not just model directly in polygons instead of using texture passes for displacement maps?
There might still be some cool uses for 'animated bumpy surfaces'.
 
rabidrabbit said:
I'm no 3D animation expert, but I too can't find many advantages in displacement mapping.
In 3D modelling it is a more useful technique, but in games I see little use.
The end result is still polygons, so why not just model directly in polygons instead of using texture passes for displacement maps?
There might still be some cool uses for 'animated bumpy surfaces'.


A displacement map is a "map" (not much different from a texture map) from which the hardware generates geometry. So instead of having a 2-million-poly character stored in memory (which will eat loads of bandwidth too, once you want to send it to the GPU), you can have a simpler model to which you apply a displacement map, which the GPU uses to give it as much geometric detail as the original model "on the fly". Think of having to store the models for the elephants in LOTR (the movie) in memory and moving them around, compared to generating them on the fly on the chip itself from a displacement map. You'll need a lot of horsepower of course, but everything else is saved, especially memory and bandwidth.

That's how it works, unless I've got some things wrong. I only have experience with it from working with Maya (and not THAT much anyway), but I guess it would definitely help real-time applications a whole lot, once the horsepower is there to deal with it in reality.
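
A minimal CPU-side sketch of that idea, with all names illustrative (real hardware would tessellate the base mesh first so there are enough vertices, then displace during that stage): each vertex of the simple model is pushed along its normal by a height sampled from the displacement map, so the dense geometry is produced on the fly rather than stored.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct Vertex { Vec3 pos; Vec3 normal; float u, v; };

// 8-bit greyscale displacement map; 'scale' converts texel values to
// world-space offsets. All names here are illustrative.
struct HeightMap {
    int w, h;
    std::vector<unsigned char> texels;
    float sample(float u, float v) const {
        int x = static_cast<int>(u * (w - 1));
        int y = static_cast<int>(v * (h - 1));
        return texels[y * w + x] / 255.0f;
    }
};

// Displace each base-mesh vertex along its normal by the sampled height.
// The dense mesh never has to be stored or streamed to the GPU.
void displace(std::vector<Vertex>& mesh, const HeightMap& map, float scale) {
    for (Vertex& v : mesh) {
        float hgt = map.sample(v.u, v.v) * scale;
        v.pos.x += v.normal.x * hgt;
        v.pos.y += v.normal.y * hgt;
        v.pos.z += v.normal.z * hgt;
    }
}
```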
 
rabidrabbit said:
I'm no 3D animation expert, but I too can't find many advantages in displacement mapping.
In 3D modelling it is a more useful technique, but in games I see little use.
The end result is still polygons, so why not just model directly in polygons instead of using texture passes for displacement maps?
There might still be some cool uses for 'animated bumpy surfaces'.
Well, displacement mapping is only one of many ways to compress high-frequency geometric detail, so if other, more efficient and general methods become viable, like wavelet compression, it naturally won't be an attractive technique any more.
But until that happens, it's the only realistic way to do surfaces with parallax movement, like tarmac, stone walls or bark.

Displacement mapping can save huge amounts of storage space and bandwidth, up until the point of tessellation or rasterisation.

There are many ways to implement the technique, some of which don't involve polygons at all but instead rely on z-checking normal maps.
An approach I believe has already been demonstrated in another forum on this board.
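
For the "z-checking normal map" style mentioned above, here's a rough sketch of the usual image-space idea (heightfield ray-marching, as in relief/parallax-occlusion-style techniques; the step count and names are assumptions, and it reuses the HeightMap type from the earlier sketch): march the view ray through the height map until it drops below the stored height, then shade at that displaced texel.

```cpp
// Illustrative heightfield ray-march: step the (tangent-space) view ray
// across the height map until it passes below the stored surface height,
// which yields per-pixel parallax without adding any triangles.
struct Hit { float u, v; };

Hit marchHeightfield(const HeightMap& map, float u, float v,
                     float dirU, float dirV, float depthScale) {
    const int kSteps = 32;              // linear search resolution (assumed)
    const float stepH = 1.0f / kSteps;
    float rayHeight = 1.0f;             // start at the top of the volume
    for (int i = 0; i < kSteps; ++i) {
        if (map.sample(u, v) >= rayHeight)  // ray dipped below the surface
            break;
        u += dirU * depthScale * stepH;
        v += dirV * depthScale * stepH;
        rayHeight -= stepH;
    }
    return {u, v};                      // shade using this displaced texel
}
```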
 
How much smaller (approximately) is a displacement map compared to a mesh?
Let's take a tree trunk for example, where the bark is either done with a displacement map or modelled. The bark is not a repetitive, tiled texture.

I have no idea, that's why I'm asking.
 
rabidrabbit said:
How much smaller (approximately) is a displacement map compared to a mesh?
Let's take a tree trunk for example, where the bark is either done with a displacement map or modelled. The bark is not a repetitive, tiled texture.

I have no idea, that's why I'm asking.


I guess it will always depend on the content... Also remember that displacement mapping would solve a "problem" of current methods: today, bump maps can create the "illusion" of geometry, but once you get close enough, you can see it's flat. There are ways around that (one of which was recently posted in another forum here at B3D), but displacement maps would give you "real" 3D detail instead of "faking" it with bump maps.
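
For a rough sense of scale, a back-of-envelope comparison with made-up but plausible numbers (not a measurement): a fully modelled bark surface with a million vertices at 32 bytes each is about 30 MB, while a low-poly trunk plus a 1024x1024 8-bit displacement map is about 1 MB, i.e. roughly 30x smaller before any texture compression.

```cpp
#include <cstdio>

int main() {
    // Hypothetical numbers for a tree trunk: dense mesh vs. base mesh + map.
    const long denseVerts = 1'000'000, bytesPerVert = 32;  // pos + normal + uv
    const long baseVerts  = 2'000;
    const long mapBytes   = 1024L * 1024 * 1;              // 8-bit height map
    const long meshBytes  = denseVerts * bytesPerVert;
    const long dispBytes  = baseVerts * bytesPerVert + mapBytes;
    std::printf("dense mesh: %ld MB, displacement: %ld MB\n",
                meshBytes >> 20, dispBytes >> 20);         // ~30 MB vs ~1 MB
    return 0;
}
```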
 
You don't control much with displacement mapping: neither the necessary density (detail concentration) nor the polycount. So the efficiency isn't there, and things can get messy, very "dirty work". Also, the only really good displacement mapping I know of is micropolygon-based.

Some kind of subdivision surfaces would be better, I think.
I'm also not sure how you can make and solve collisions based on a displacement bitmap (?).
 
_phil_ said:
I'm also not sure how you can make and solve collisions based on a displacement bitmap (?).
Probably just convert it to geometry (a simplified model could be used) and handle it that way.
 
Almost all collision detection meshes are far less detailed than the models you see on screen. If you ever write a CD scheme you'll understand why: it's O(n^2) with respect to triangle count.
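
A quick sketch of why the O(n^2) term bites (the brute-force narrow phase; real engines add broad-phase culling first, and an AABB overlap stands in here for an exact triangle-triangle test): every triangle of one mesh is tested against every triangle of the other, so halving both triangle counts cuts the work to a quarter.

```cpp
#include <vector>

// Axis-aligned bounding box standing in for a triangle's exact test; a real
// engine would use an exact tri-tri intersection test here instead.
struct TriBounds { float min[3], max[3]; };

static bool overlap(const TriBounds& a, const TriBounds& b) {
    for (int i = 0; i < 3; ++i)
        if (a.max[i] < b.min[i] || b.max[i] < a.min[i]) return false;
    return true;
}

// Brute-force narrow phase: every triangle of one mesh against every
// triangle of the other -- O(n*m), i.e. O(n^2) for similar counts. This is
// why collision meshes stay far coarser than render meshes.
bool meshesCollide(const std::vector<TriBounds>& a,
                   const std::vector<TriBounds>& b) {
    for (const TriBounds& ta : a)
        for (const TriBounds& tb : b)
            if (overlap(ta, tb)) return true;
    return false;
}
```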
 
akira888 said:
Almost all collision detection meshes are far less detailed than the models you see on screen. If you ever write a CD scheme you'll understand why: it's O(n^2) with respect to triangle count.

Couldn't you do collision detection with a first level of tessellation (coming from, say, subdivision surfaces or another form of HOS) and then render after you have tessellated the model many more times?

As in the previous page showing what displacement mapping + on-the-fly tessellation can do: you could do collision detection on the most basic model, and then, after updating the vertices' positions, submit the mesh to the T&L processor, tessellate it, and then light and texture it.
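
Sketched out, that split might look like this (purely illustrative; 'subdivide' and 'collide' are stubs standing in for whatever HOS scheme and collision query are actually in use): physics only ever touches the coarse control mesh, and only the renderer sees the refined vertices.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
using Mesh = std::vector<Vec3>; // simplified: just vertex positions

// Stubs standing in for a real refinement step and a real collision query.
Mesh subdivide(const Mesh& coarse) { return coarse; /* refine here */ }
bool collide(const Mesh&, const Mesh&) { return false; /* query here */ }

void frame(const Mesh& controlMesh, const Mesh& world) {
    // 1. Physics/collision runs against the coarse control mesh only.
    if (collide(controlMesh, world)) {
        // respond: adjust control-mesh vertex positions, velocities, etc.
    }

    // 2. Rendering path: refine the same control mesh several times, then
    //    hand the dense result to T&L for lighting and texturing.
    Mesh renderMesh = controlMesh;
    for (int level = 0; level < 3; ++level)
        renderMesh = subdivide(renderMesh);
    // submit renderMesh to the T&L / shading pipeline...
}
```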
 