Are we going to see true disp mapping?

scificube

Regular
I've been perusing some discussions here recently that focus on VTF/DB/R2VB.

There seems to be a hope that dynamic branching will take off on the X360 and that this will spur movement to use it in PC games. At first I was apt to think this was the most probable course, but then I thought about it some more.

Will cross platform X360/PS3 titles cock this up?

From what I gather, VTF and DB are not so hot on the G70 lineage of Nvidia GPUs. This is due to the large batch sizes of the pixel shaders, and with VTF the vertex pipes' texture units aren't as adept at hiding latency as Xenos's shared TMUs are. I've no reason to think this isn't the case, but I've seen some conflicting posts on the subject. It seems that every time I check, G70 batch sizes change... from over 1000, to 800, and just today I saw a poster claim it was actually 100, which wouldn't make it as good as Xenos's 64... or is it 48, as I've also seen today... 'tis a bit difficult to nail things down.

Well, anyway, I'll just assume the worse case for the moment. With that in mind, wouldn't it be a bad idea to craft a game engine around capabilities which would make it difficult to port to the PS3? I'm just thinking cross-platform games will be yet more common given the rising cost of development. Yet there is such a thing as an exclusive title... but would there be enough of them to provide the "push" for PC developers, and will VTF and DB be used in a way which is portable to PC HW, given that Nvidia DX9 hardware supposedly won't like using these features so much and that ATI elected to use R2VB instead of VTF as on Xenos? I've read that PC devs will tend to shy away from putting all the DX9 hardware out there at a serious disadvantage, and I tend to agree... at least for a couple of years until DX10 takes over the PC scene.

What I see is VTF and DB becoming viable towards the middle to the end of the X360's life-cycle and not really being conducive to porting games to the PS3 at that point in time either.

Or is it? Could G70's DB in the pixel shader be better than first advertised, or if not, how could RSX handle a game with heavy reliance on DB in the pixel shader? Static branching wouldn't seem to make sense... so it would seem there is no practical way of dealing with this. With respect to VTF, I've read here that G70 supports something similar to R2VB in OpenGL. If this is true, then perhaps there is a chance something could be done with games that rely heavily on Xenos's VTF.

It would seem the biggest candidate for VTF is displacement mapping as a form of geometry compression. So I was thinking then well...maybe there's some hope for displacement mapping making it this round on consoles, but then I started thinking again...

Don't all those displaced vertices have to go through the setup engine? And aren't the setup engines for both Xenos and RSX fairly easy to overload? And isn't the rest of the architecture in both cases not so efficient with really tiny triangles?

Perhaps I "think" I see problems and, again, I've just gotten lost. But if not, what is the argument for using disp maps at this time? Perhaps disp mapping isn't used for really fine geometric detail then, but maybe it could still be a win for not-so-fine detail, and then again for dynamic detail, as how things are displaced could be altered in real time. I have to wonder, though, whether such things couldn't be handled well enough with procedural geometry/adaptive tessellation on Cell, and maybe to a lesser extent Xenon... well, Xenos does have that tessellator thingie to play with... but from what I understand it's fixed-function HOS kind of stuff there...

Well... sorry for the kind of flow-of-thought post here... but I'm not the best at being succinct even when I try. Would someone care to comment on these things in general, and specifically on whether it's reasonable or unreasonable to expect we'll see true displacement mapping on a console this iteration?
 
I asked Tamasi what the batch size of G7x was at the latest editors' day, and his reply was "> 200 quads". Basically, I would say this is confirmation that it probably is 256 quads, or 1024 pixels.

I'd suggest that this also pertains to 256 cycles per quad pipeline, with each quad pipeline working on a different batch (unlike NV40, which appeared to have a batch size of 4096 pixels, i.e. all 16 pixel pipelines per cycle for 256 cycles).
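To make the arithmetic explicit, here's a quick sanity check using only the figures quoted above:

```python
# Sanity check of the batch-size arithmetic above (figures as quoted in-thread).
QUAD = 4  # pixels per quad

g70_batch_quads = 256
g70_batch_pixels = g70_batch_quads * QUAD        # per quad pipeline
print(g70_batch_pixels)                          # 1024

# NV40: one batch spread over all 16 pixel pipelines for 256 cycles.
nv40_batch_pixels = 16 * 256
print(nv40_batch_pixels)                         # 4096
```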
 
Thanks Dave :smile: ...so... do you think disp mapping is going to make the cut given the circumstances of the situation? (Given that DB, I think, is now a goner beyond X360 exclusives!)

_phil_: I'm not sure Cell isn't actually pushing what's happening with MotorStorm's terrain deformation... instead of a static pattern as with Warhawk's procedural water surface, it's dynamic in MotorStorm. I got laughed at when I suggested Cell might make MotorStorm's tracks possible back when the E3 trailer was hotly discussed, but maybe I wasn't out in the boonies like some of my peers suggested.
 
scificube said:
_phil_: I'm not sure Cell isn't actually pushing what's happening in Motorstorm.

That would actually be real good to know (and beneficial to the thread, I guess). What is actually working in the PS3 to allow displacement mapping in Motorstorm? Is it Cell, RSX, or Cell and RSX in conjunction (like with Warhawk)?
 
I thought "Displacement Mapping" was simply the way with which we can extrapolate geometry out of a 2D image, which is useful for compression issues.

And I thought Motorstorm basically had tracks made of geometry that gets displaced as the car goes through it, creating grooves and holes in it according to player interaction. Nothing to do with what we call "displacement mapping", and it's something that has been done even on older hardware (mainly water-racing games).

Aren't those two things different? They're both "displaced geometry", but they're two different things, aren't they?
 
london-boy said:
I thought "Displacement Mapping" was simply the way with which we can extrapolate geometry out of a 2D image, which is useful for compression issues.

And I thought Motorstorm basically had tracks made of geometry that gets displaced as the car goes through it, creating grooves and holes in it according to player interaction. Nothing to do with what we call "displacement mapping", and it's something that has been done even on older hardware (mainly water-racing games).

Aren't those two things different? They're both "displaced geometry", but they're two different things, aren't they?

Hmmmm, I'm thinking you're right. I believe I heard displacement mapping was being used in Motorstorm (now I'm not too sure), so I assumed that the deforming of the geometry was using displacement mapping in some form. I'm thinking I just misunderstood what's actually going on with the deformations.
 
london-boy said:
I thought "Displacement Mapping" was simply the way with which we can extrapolate geometry out of a 2D image, which is useful for compression issues.

And I thought Motorstorm basically had tracks made of geometry that gets displaced as the car goes through it, creating grooves and holes in it according to player interaction. Nothing to do with what we call "displacement mapping", and it's something that has been done even on older hardware (mainly water-racing games).

Aren't those two things different? They're both "displaced geometry", but they're two different things, aren't they?

I would guess so.

I would imagine the tracks in MotorStorm are a big mesh underneath it all, and as you say, when the tires collide with polys in the mesh they are displaced and then tessellated to smooth out the curves. I would say what's happening in MotorStorm is a bit more advanced than what's happened in water-games to date for two reasons: the changes to the mesh are dynamic, and they're persistent. In, say, the WaveRace games on the N64 or GCN, the water surface was displaced in a predetermined fashion, and the racers really didn't affect how the water would be displaced. I've always been more apt to look at that as canned animation vs. the procedural alteration I "think" I've only just seen now with next-gen consoles. There are instances like Metroid Prime's water where the surface was dynamically altered, but I don't think it was tessellated in real time, nor were the changes persistent... as that wouldn't make much sense of course.

Perhaps it's not so much a completely new thing, but rather that it's never been attempted on this scale before, and in this way, and that's what makes it really stand out to me.

-------------

I'm curious about another idea though...given how I don't know exactly how VTF works just yet this may sound silly...

What if Cell were generating heightmaps (it should have the data you need, right? normals, depth, position) on the fly, and then RSX used these to alter the terrain's surface via VTF?

Is this a plausible way this could be done?

-----------

No one wants to give their two cents on whether disp maps are going to happen and how :cry:
 
Displacement mapping in itself isn't really a big deal - you take a bunch of vertices with UV texture coordinates and a bitmap, then move the vertices along their normals depending on the color/intensity value read from the texture.
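In pseudocode terms, the whole technique is just this. A minimal CPU-side sketch; the function name and point-sampling details are illustrative, not any particular engine's API:

```python
# Displace each vertex along its normal by the height sampled from a texture.
def displace(vertices, normals, uvs, heightmap, scale=1.0):
    """vertices/normals: lists of (x, y, z); uvs: list of (u, v) in [0, 1)."""
    h = len(heightmap)
    w = len(heightmap[0])
    out = []
    for (x, y, z), (nx, ny, nz), (u, v) in zip(vertices, normals, uvs):
        # Point-sample the height texture at this vertex's UV.
        d = heightmap[int(v * h) % h][int(u * w) % w] * scale
        out.append((x + nx * d, y + ny * d, z + nz * d))
    return out

# One vertex with an up-facing normal, reading a height of 0.5:
print(displace([(0.0, 0.0, 0.0)], [(0.0, 1.0, 0.0)], [(0.0, 0.0)],
               [[0.5]]))   # [(0.0, 0.5, 0.0)]
```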

The trick is about what you want to do with it. Taking a flat plane and creating a landscape from it is fairly standard nowadays, even with continuous LOD stuff.
The fun part begins when you want to use it for creating mid- to small-scale geometry detail. You can build a simple model from some sort of HOS and displace the tessellated vertices to practically compress geometry, by sort of replacing the minimum 3 + 2 (XYZ UV) float values per vertex with a single 8-bit height value. But here's the catch: if you want detail, you need vertices. Displaying the detail in a 1K*1K displacement map requires about 1 million vertices at least (more if you want filtering) - and a 1K texture map per ingame character is pretty standard nowadays (I've seen such detail for footmen in an upcoming RTS game). When used on a heavily detailed scene, proper displacement mapping may require tens of millions of polygons. Remember Epic's figures for the first UE3 demo scene, where the source art for the normal maps reached more than a hundred million polys?
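The arithmetic behind that compression claim, sketched out (assuming 32-bit floats and an 8-bit height sample, as above):

```python
# Rough arithmetic behind the "geometry compression" argument.
full_vertex_bytes = (3 + 2) * 4      # XYZ + UV as 32-bit floats = 20 bytes
height_sample_bytes = 1              # one 8-bit displacement value

map_size = 1024 * 1024               # a 1K*1K displacement map
verts_needed = map_size              # ~1 vertex per texel to show all detail

raw_mesh_mb = verts_needed * full_vertex_bytes / 2**20
disp_map_mb = map_size * height_sample_bytes / 2**20
print(verts_needed, raw_mesh_mb, disp_map_mb)   # 1048576 20.0 1.0
```

So the map itself is ~20x smaller than the raw mesh it encodes, but you still need the ~1M vertices at render time to show the detail, which is exactly the catch described above.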

So the main problem is the geometry, and the related speed and memory issues. Adaptive tessellation is a very complicated issue, prone to flickering; simple tessellation will result in unbearably high polygon counts and geometry aliasing on distant objects. Tessellated objects consume dozens of megabytes of memory. Today's hardware just isn't enough to light and shade tens of millions of polygons per scene. And the list goes on as you try to implement advanced solutions to speed up the process, for example with REYES and micropolygons.

In the end, the answer to your question is that yes, we are going to see displacement mapping, but more than likely for only very large scale details, and in combination with normal mapping. As quick examples, Lair seems to be a possible candidate for that on the dragons, much like King Kong's Trex on the X360; and Motorstorm might be using it for the terrain. Pixar-quality stuff will have to wait for the next decade.
 
Laa-Yosh said:
Displacement mapping in itself isn't really a big deal - you take a bunch of vertices with UV texture coordinates and a bitmap, then move the vertices along their normals depending on the color/intensity value read from the texture.

The trick is about what you want to do with it. Taking a flat plane and creating a landscape from it is fairly standard nowadays, even with continuous LOD stuff.
The fun part begins when you want to use it for creating mid- to small-scale geometry detail. You can build a simple model from some sort of HOS and displace the tessellated vertices to practically compress geometry, by sort of replacing the minimum 3 + 2 (XYZ UV) float values per vertex with a single 8-bit height value. But here's the catch: if you want detail, you need vertices. Displaying the detail in a 1K*1K displacement map requires about 1 million vertices at least (more if you want filtering) - and a 1K texture map per ingame character is pretty standard nowadays (I've seen such detail for footmen in an upcoming RTS game). When used on a heavily detailed scene, proper displacement mapping may require tens of millions of polygons. Remember Epic's figures for the first UE3 demo scene, where the source art for the normal maps reached more than a hundred million polys?

So the main problem is the geometry, and the related speed and memory issues. Adaptive tessellation is a very complicated issue, prone to flickering; simple tessellation will result in unbearably high polygon counts and geometry aliasing on distant objects. Tessellated objects consume dozens of megabytes of memory. Today's hardware just isn't enough to light and shade tens of millions of polygons per scene. And the list goes on as you try to implement advanced solutions to speed up the process, for example with REYES and micropolygons.

In the end, the answer to your question is that yes, we are going to see displacement mapping, but more than likely for only very large scale details, and in combination with normal mapping. As quick examples, Lair seems to be a possible candidate for that on the dragons, much like King Kong's Trex on the X360; and Motorstorm might be using it for the terrain. Pixar-quality stuff will have to wait for the next decade.

Thank you Laa-Yosh :smile: That was very helpful. I sort of figured there were innate problems with using displacement mapping at this point but you make it very clear this is the case. Thank you for your insight. (and yes I do remember the numbers about the source art poly count for the normal mapping for UE3.0 demos...or should we say Gears of War?)

I was also thinking that Lair may be using disp mapping on its dragons, but I really didn't notice it on King Kong's V-Rexes. I thought that was just good normal/bump mapping. After watching what was presented of Lair at GDC, it looked pretty clear disp mapping was used when they went from the wireframe to what the final model looked like. I suppose this is where the claims that Lair's dragons use upwards of 100,000 polys stem from.

-------------

The question I would ask is whether you think what's happening in Motorstorm and Lair is a product of Cell's or RSX's work, or if it's more of a collaboration... of what kind, and to what degree?


I wish I still had King Kong... I'd love to take a look at it again. Guess I'll have to check pics over at IGN or something. I want to do that before I ask the same of that game, but I'd imagine it's more likely to be Xenon and Xenos's tessellator getting the job done in any case on the X360, no?

------------------

I went back and checked around... I think it's bump/normal mapping on King Kong's V-Rex dinosaurs.

There's also something funny about Lair's dragons. During the initial portion of the video the dragons seem to show a good bit more fine detail to me than during the gameplay portions. Has anyone else noticed that? I want to think it's because the dragons at the beginning of the vid are from current work, while Factor 5 elected to just re-use their demo from last year's E3 for the real-time interactive stuff. I hope that's the case at least.
 
Lair isn't disp mapping, it's just high poly + normal mapping. From bump mapping to pure polygonal displacement mapping you have a wide range of techniques that are pixel shader only. While you don't need to handle collision, you can push the pixel-only techs pretty far:
normal mapping --> relief mapping --> parallax mapping --> parallax occlusion --> ..... -->

http://www.divideconcept.net/d2k4/render/engine-arbitrary-model-divx.avi
.. involving more or less GPU raytracing.
The last one doesn't render any more polygons than the original surface, but is still doing internal micropolygon stuff.
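To illustrate the simplest entry in that chain: plain parallax (offset) mapping just nudges the texture coordinate along the view direction in proportion to the sampled height. A minimal sketch; the function name and scale constant are illustrative:

```python
def parallax_offset_uv(uv, view_ts, height, scale=0.04):
    """Shift a texture coordinate along the tangent-space view
    direction by the sampled height.  No extra geometry is
    generated - it is a purely per-pixel trick."""
    u, v = uv
    vx, vy, vz = view_ts            # tangent-space view vector, vz > 0
    offset = height * scale
    return (u + vx / vz * offset, v + vy / vz * offset)

# Looking straight down (view = (0, 0, 1)) produces no shift:
print(parallax_offset_uv((0.5, 0.5), (0.0, 0.0, 1.0), 1.0))  # (0.5, 0.5)
```

A grazing view direction shifts the lookup further, which is what creates the illusion of depth on a flat polygon.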
 
_phil_ said:
Lair isn't disp mapping, it's just high poly + normal mapping. From bump mapping to pure polygonal displacement mapping you have a wide range of techniques that are pixel shader only. While you don't need to handle collision, you can push the pixel-only techs pretty far:
normal mapping --> relief mapping --> parallax mapping --> parallax occlusion --> ..... -->

http://www.divideconcept.net/d2k4/render/engine-arbitrary-model-divx.avi
.. involving more or less GPU raytracing.
The last one doesn't render any more polygons than the original surface, but is still doing internal micropolygon stuff.

Coolz! :cool:

What's going on is just over my head at this point so I can't really begin to comment on whether we'll see stuff like this or not.

----------------------------------------

Welp...looks like it's back to the drawing board...again...

Doesn't look like we'll get too much disp mapping, and certainly not the higher quality stuff. It's not like we can't get by without it for the moment.

Hopefully in school I'll learn the difference between parallax offset mapping and parallax occlusion mapping...and what the heck relief mapping is. It's gonna be fun!
 
It would seem the biggest candidate for VTF is displacement mapping as a form of geometry compression. So I was thinking then well...maybe there's some hope for displacement mapping making it this round on consoles, but then I started thinking again...
whilst this is prolly true, i fail to see how dynamic branching is gonna be of a major benefit to displacement mapping
 
scificube said:
With respect to VTF I've read here that G70 supports something similar to R2VB in Ogl
Unless your graphics RAM is segmented there's no science to R2VB, it's just a case of memory aliasing - treat output from one unit as input to another (heck, you don't even need programmable pixel pipelines for this - people have done it on PS2).
It's the APIs and their (often lacking) abstractions that make it sound like a bigger deal than it actually is. That said, on a platform like the PC, where said APIs are the only means to access the hardware, things will continue being limited or complicated with respect to certain 'features' that are trivial from a hardware point of view.
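The aliasing idea in miniature: one buffer, two views of the same bytes. A toy sketch, not any real graphics API - a pixel pass writes RGBA32F texels, and the vertex fetch then reads the very same bytes as XYZW positions, with no copy in between:

```python
import struct

buf = bytearray(4 * 4 * 2)                      # room for two RGBA32F texels

# "Pixel shader" pass: write two texels into the render target.
struct.pack_into('4f', buf, 0,  1.0, 2.0, 3.0, 1.0)
struct.pack_into('4f', buf, 16, 4.0, 5.0, 6.0, 1.0)

# "Vertex fetch" pass: alias the same texels as float4 vertex positions.
verts = [struct.unpack_from('4f', buf, off) for off in (0, 16)]
print(verts)   # [(1.0, 2.0, 3.0, 1.0), (4.0, 5.0, 6.0, 1.0)]
```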

Laa-Yosh said:
Adaptive tessellation is a very complicated issue, prone to flickering; simple tessellation will result in unbearably high polygon counts and geometry aliasing on distant objects. Tessellated objects consume dozens of megabytes of memory.
Well, it's not trivial, but adaptive tessellation is really the key here - storing tessellation results in memory is a horrific way to do things (both memory- and performance-wise), and moreover you need a means to control average polygon size on screen - not only because of aliasing issues (which can be horrific on their own) but also because small polys are very GPU-unfriendly these days, so you really don't want to tessellate too small.
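A sketch of that kind of control: pick a subdivision count so each edge projects to roughly a fixed number of pixels on screen. All names and constants here are illustrative:

```python
import math

def tess_level(edge_len, distance, screen_h=720, fov_deg=60.0,
               target_px=8.0):
    """How many times to split an edge so each piece projects to
    roughly target_px pixels.  Nearby geometry gets subdivided
    heavily; distant geometry stays coarse."""
    # Approximate projected size of the edge in pixels (simple
    # perspective scaling, edge assumed parallel to the screen).
    projected = edge_len * screen_h / (2.0 * distance *
                                       math.tan(math.radians(fov_deg) / 2.0))
    return max(1, math.ceil(projected / target_px))

print(tess_level(10.0, 5.0))    # near: heavily subdivided (156)
print(tess_level(10.0, 500.0))  # far: barely subdivided (2)
```

Clamping the result like this is what keeps distant objects from aliasing while also avoiding the tiny, GPU-unfriendly polygons mentioned above.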
 
zed said:
whilst this is prolly true, i fail to see how dynamic branching is gonna be of a major benefit to displacement mapping

I didn't imply DB was. DB is more of an aside I wanted to squeeze into the discussion.
 
Fafalada said:
Unless your graphics RAM is segmented there's no science to R2VB, it's just a case of memory aliasing - treat output from one unit as input to another (heck, you don't even need programmable pixel pipelines for this - people have done it on PS2).
It's the APIs and their (often lacking) abstractions that make it sound like a bigger deal than it actually is. That said, on a platform like the PC, where said APIs are the only means to access the hardware, things will continue being limited or complicated with respect to certain 'features' that are trivial from a hardware point of view.

Is it really that simple? I'm not sure my question got answered in any case though. I was more looking to get a grasp of whether R2VB could be used on RSX as a sort of lesser equivalent of Xenos's VTF for cross platform games. The question may not be worth answering other than for academic purposes given it seems there are problems with using disp mapping at this time and I'm not really aware of any other techniques that would make VTF "really" valuable.

Just to be sure...I think you're saying R2VB or something similar is nothing G70 or really lesser hardware couldn't handle. Thanks for answering this for me...if I understand you correctly.

-------------

Here's a question about R2VB... or really buffers (or a texture) in general, I guess. What is to stop a unit from putting the results of some shader operation into a buffer and then reading those results back into the shader again? I'm just thinking that if you can branch in a shader, why can't you write some stuff out to memory (system/VRAM) and then loop back to the top of the shader if you wanted, to get something like memexport going? Is the key with memexport having access to system RAM? If that's the case, then couldn't you hide vertex data in AGP textures on a PC, or is this something APIs/drivers prevent programmers from manipulating?

Just tossing some thoughts out there....
 
I don't know much about anything, but vertex texture fetch is used for the water in Far Cry on the 360.

I'm guessing the wave simulator outputs to a texture, and the water material then displaces its vertices using VTF.

The same could be true for Motorstorm: Cell could calculate the tire marks based on pressure and whatnot, then output to a texture which displaces the ground material using VTF.
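As a toy sketch of that flow (all names illustrative): the simulation stamps ruts into a heightmap "texture", and the vertex fetch just reads it back, so the deformation persists in the texture between frames:

```python
SIZE = 8
heightmap = [[0.0] * SIZE for _ in range(SIZE)]   # the shared texture

def stamp_tire(hm, x, y, depth):
    """What the CPU-side simulation would do: carve a persistent rut."""
    hm[y][x] = min(hm[y][x], -depth)              # ruts only get deeper

def fetch_displaced_height(hm, x, y):
    """What the vertex shader's texture fetch would return."""
    return hm[y][x]

stamp_tire(heightmap, 3, 3, 0.2)
stamp_tire(heightmap, 3, 3, 0.5)    # deeper pass wins; the change persists
print(fetch_displaced_height(heightmap, 3, 3))   # -0.5
```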
 