Cascades - NVIDIA's first DX10 demo available

Ok, those pictures actually do look spiffy. The previous ones (irrespective of what was going on in the background) looked pretty old school. I'm sure it looked better in motion, and I'm sure it was more impressive knowing the GPU was doing everything. They still looked kinda lame.

Yeah, it's kinda funny that in this high-tech hardware age we still don't get the option to see HQ HD tech demo videos at release ;)
 
Here I enter this thread hoping to read a highly fascinating discussion of how this demo is technically done, but instead someone says it's crap and everyone spends about a page getting in a huff about it.

Let's get back on track here. I'd like to talk about what makes moving work we used to do on the CPU onto the GPU so appealing to the industry. Is it really faster? Does it really allow you to do more? Is it practical to do at this time? From what I understand, geometry shading isn't very fast on these first-generation DX10 GPUs, am I wrong?
 
That's mostly because what the demo is doing has already been fairly well sorted out elsewhere ( http://www.geisswerks.com/ryan/whats....html#cascades ). Sure, nobody's talking about the real technical details of it, but this isn't the "How To" section either, so it makes sense.

Now, to answer your questions - what makes it appealing to do these things on the GPU is that they're commonly faster (sometimes immeasurably so), and they free up the CPU for work that cannot currently be done on the GPU.
 
That's mostly because what the demo is doing has already been fairly well sorted out elsewhere ( http://www.geisswerks.com/ryan/whats....html#cascades )
No it is not.

That only tells you what the demo does, but it doesn't tell you why it is special, or which functions are new to DX10 and couldn't have been done with DX9!

For instance: the developer talks about really cool displacement mapping. But that is not something new. Instancing is not new either.
Having the GPU create geometry is new in DX10, but modifying geometry was already possible.
(And no, trinibwoy, that is NOT well spelled out in the article!)

Now, to answer your questions - what makes it appealing to do these things on the GPU is that they're commonly faster (sometimes immeasurably so), and they free up the CPU for work that cannot currently be done on the GPU.

Do you have URLs that show how fast this is on the GPU compared to the CPU?
It wouldn't be the first time that we later find out that everybody keeps on using the CPU for a certain feature because the GPU isn't faster or is too busy with other tasks.
(Or maybe you want your hardware physics engine to create the geometry. It seems these things might typically be used for particle and fluid effects, which those physics hardware guys have used as a selling point for their hardware.)
 
No it is not.

1.) That only tells you what the demo does, but it doesn't tell you why it is special, or which functions are new to DX10 and couldn't have been done with DX9!

For instance: the developer talks about really cool displacement mapping. But that is not something new. Instancing is not new either.
Having the GPU create geometry is new in DX10, but modifying geometry was already possible.
(And no, trinibwoy, that is NOT well spelled out in the article!)



2.) Do you have URLs that show how fast this is on the GPU compared to the CPU?
It wouldn't be the first time that we later find out that everybody keeps on using the CPU for a certain feature because the GPU isn't faster or is too busy with other tasks.
(Or maybe you want your hardware physics engine to create the geometry. It seems these things might typically be used for particle and fluid effects, which those physics hardware guys have used as a selling point for their hardware.)



1.) True, it doesn't explain to the letter how it works, or how it's special, or even explain very well what is new and what is not. BUT, it does give a rough idea of what it's doing... My point was that the mechanics behind this demo are fairly well understood and explained on the web (not necessarily on a single page). Now, as for what is new and what is not, you already sort of answered that question to a degree. The ability to create and destroy geometry is not a trivial thing. This is the type of thing that will see us moving away from parallax mapping, bump mapping, and various other surface effects used to represent geometric detail. It's the type of thing that will allow us to adjust the complexity of our meshes dynamically according to screen proximity, etcetera. THAT is what is so new and significant. You could fake a lot of this on DX9 hardware, or you could do it correctly on the CPU, either dog slow or with relatively modest geometry, and then have the GPU shade it. DX10 hardware, on the other hand, can do all this without the aid of the CPU, and without designers having to resort to approximations. (A rough sketch of the proximity-based LOD idea follows after point 2.)

2.) Actually, no, I have no links, only experience - art is my background, but there are plenty of academics here who might be able to point you to something.
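
To make the proximity point from 1.) concrete, here's a minimal CPU-side sketch of distance-based LOD selection. The function name, the thresholds, and the pinhole-camera approximation are all my own illustration, nothing from the demo:

    #include <cmath>

    // Pick a level of detail from an object's projected size: roughly how
    // many pixels its bounding sphere covers on screen.
    int selectLod(float boundingRadius, float distance,
                  float screenHeightPx, float fovY) {
        // Projected diameter in pixels (pinhole-camera approximation).
        float pixels = (2.0f * boundingRadius / distance)
                     * (screenHeightPx / (2.0f * std::tan(fovY * 0.5f)));
        if (pixels > 400.0f) return 0;  // full-detail mesh
        if (pixels > 100.0f) return 1;  // medium
        if (pixels > 25.0f)  return 2;  // low
        return 3;                       // billboard / impostor
    }

The interesting bit with DX10 is that a decision like this can be taken per primitive on the GPU, with the geometry shader actually emitting more or fewer triangles, instead of the CPU swapping whole meshes.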
 
That only tells you what the demo does, but it doesn't tell you why it is special, or which functions are new to DX10 and couldn't have been done with DX9!

Not trying to be rude, but if you were really interested in why it's special you would go do some research instead of harping on it constantly in this thread. There are several good articles on DX10 out there that would provide the info you are seeking.
 
Opinions are like assholes, everyone has them.

The primary difference being that it is generally frowned upon to show the former to the world in public. :???: Please use the more polite form of this in future. (elbows, noses, bellybuttons).
 
I think you should demonstrate new tech and demonstrate it the best you can. If you want just the inner workings, it's best to put a bunch of numbers on the screen.
The demo is visually impressive IN MOTION, because you can create cascades anywhere you want on the rock and see the water flow. It is impressive (at least imo), but like it or not, you can't judge it from the screenshots. The detail texturing + parallax occlusion mapping also looks quite damn nice when getting very close to the rock.
 
The demo is visually impressive IN MOTION, because you can create cascades anywhere you want on the rock and see the water flow. It is impressive (at least imo), but like it or not, you can't judge it from the screenshots. The detail texturing + parallax occlusion mapping also looks quite damn nice when getting very close to the rock.
Agreed - creating waterfalls is the best part of the demo by far :)

The per-pixel displacement mapping (whatever form of it they're using - is it the same POM implementation that ATI used in Toy Shop?) also does look very nice, although it's still disconcerting to me when I zoom in and the terrain morphs slightly because the effect is distance-limited...
 
Here I enter this thread hoping to read a highly fascinating discussion of how this demo is technically done, but instead someone says it's crap and everyone spends about a page getting in a huff about it.

My guesses:
The surface is defined by an iso-surface, and the geometry is generated in a geometry shader that does marching tetrahedra (similar to marching cubes), where the input primitive is probably a single point per tetrahedron. I assume they use streamout to store the results and only recompute when you move up or down enough to require new geometry.

With a surface that's mathematically defined, the iso function can be used to compute the surface normal at any given point, and at points not on the surface you can get a "field normal vector" or whatever you want to call it. That can be used to animate the flying things so they don't fly into the geometry. The surface function can also be used to raytrace the geometry for lighting, for instance for an ambient occlusion term. This can also be streamed out and stored so you don't need to recompute it. The function could also be used for the water simulation. Collision detection against the function would be fairly straightforward, and normals can be easily computed at any given point.
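
As a rough illustration of the iso-surface part (the density function here is completely made up; I obviously don't know what the demo actually uses), the core idea in plain C++ would be something like:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Made-up density function: positive outside the "rock", negative inside.
    float iso(const Vec3& p) {
        float r = std::sqrt(p.x*p.x + p.y*p.y + p.z*p.z);
        return r - 10.0f + 0.5f * std::sin(4.0f * p.x) * std::sin(4.0f * p.z);
    }

    // The surface is the set where iso(p) == 0. The gradient of iso gives
    // the surface normal there, and a "field normal vector" everywhere else.
    Vec3 fieldNormal(const Vec3& p) {
        const float e = 0.01f;  // central-difference step
        Vec3 n = {
            iso({p.x + e, p.y, p.z}) - iso({p.x - e, p.y, p.z}),
            iso({p.x, p.y + e, p.z}) - iso({p.x, p.y - e, p.z}),
            iso({p.x, p.y, p.z + e}) - iso({p.x, p.y, p.z - e})
        };
        float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
        return { n.x / len, n.y / len, n.z / len };
    }

In the demo the equivalent evaluation would happen in the shaders, but the point is the same: one function gives you the surface, the normals, and a field to steer the flying things with.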

Some nifty ideas, but they're of kind of limited use in real applications, since you're dealing with a mathematical function that has a number of useful properties, while games use polygon models. Everything this demo does would be massively harder to implement if the input is a polygon soup. In a real game the closest thing would probably be using a heightmap for terrain and having water flow over it. Should not be too hard to do.
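
And for the heightmap version, a toy CPU sketch of the kind of flow step I mean (the grid size, the rate, and the update rule are all made up for illustration):

    #include <algorithm>
    #include <vector>

    const int W = 256, H = 256;
    std::vector<float> terrain(W * H);      // fixed terrain height
    std::vector<float> water(W * H, 0.0f);  // water depth on top of it

    // One relaxation step: push some water toward any neighbour whose
    // total surface height (terrain + water) is lower.
    void flowStep(float rate) {
        std::vector<float> next = water;
        for (int y = 1; y < H - 1; ++y) {
            for (int x = 1; x < W - 1; ++x) {
                int i = y * W + x;
                float here = terrain[i] + water[i];
                const int nbr[4] = { i - 1, i + 1, i - W, i + W };
                for (int k = 0; k < 4; ++k) {
                    float diff = here - (terrain[nbr[k]] + water[nbr[k]]);
                    if (diff > 0.0f) {
                        // Cap at a quarter of the available water per
                        // neighbour so a cell can never go negative.
                        float moved = std::min(water[i] * 0.25f,
                                               diff * 0.25f) * rate;
                        next[i]      -= moved;
                        next[nbr[k]] += moved;
                    }
                }
            }
        }
        water = next;
    }

On the GPU the two grids would live in textures and the step would be a full-screen pass; that part was doable on DX9-class hardware already, it's the geometry side that's new.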
 
For the record, the demo can actually look pretty damn impressive when you play around with the textures, lighting and detail textures.

Here are a few shots I took, you need to picture this with the moving water.

http://img.photobucket.com/albums/v68/pjbliverpool/Cascades1.jpg
http://img.photobucket.com/albums/v68/pjbliverpool/Cascades2.jpg
http://img.photobucket.com/albums/v68/pjbliverpool/Cascades3.jpg

Thank you for the screen shots. Looks very interesting to me.
 
Some nifty ideas, but they're of kind of limited use in real applications, since you're dealing with a mathematical function that has a number of useful properties, while games use polygon models. Everything this demo does would be massively harder to implement if the input is a polygon soup. In a real game the closest thing would probably be using a heightmap for terrain and having water flow over it. Should not be too hard to do.

That was very interesting. Thanks!

So, the way I understand this, it could be used in a real game to create a large world from a mathematical function without the CPU really having to care about it. Are those things that would typically consume a lot of CPU cycles, PCIe bandwidth and/or CPU memory? Will it free up the CPU to do other stuff in a significant way, or will the overall impact be fairly minimal?
 
That was very interesting. Thanks!

So, the way I understand this, it could be used in a real game to create a large world from a mathematical function without the CPU really having to care about it. Are those things that would typically consume a lot of CPU cycles, PCIe bandwidth and/or CPU memory? Will it free up the CPU to do other stuff in a significant way, or will the overall impact be fairly minimal?

Sounds great for on-the-fly creation of multiplayer maps in RTS games. :)
 
The ability to create and destroy geometry is not a trivial thing. This is the type of thing that will see us moving away from parallax mapping, bump mapping, and various other surface effects used to represent geometric detail. It's the type of thing that will allow us to adjust the complexity of our meshes dynamically according to screen proximity, etcetera. THAT is what is so new and significant. You could fake a lot of this on DX9 hardware, or you could do it correctly on the CPU, either dog slow or with relatively modest geometry, and then have the GPU shade it. DX10 hardware, on the other hand, can do all this without the aid of the CPU, and without designers having to resort to approximations.

I hope you are right that this will mean a move away from parallax mapping etc. But will it really?

I had those same hopes in the past with the TruForm II demos, which showed the same thing.
Those already showed mesh complexity being adjusted dynamically according to screen proximity, etc. (I think it was a demo for the R300.)

Unfortunately I don't know of a single game that uses it.
Are there people here who know why it was never adopted in real games?
And given that experience, what do they think about the adoption of these features in DX10 games?

Was it really a limitation of DX9 that prevented the adoption, one that is now lifted with DX10, or are there other factors that determine it?

Past experience has made me skeptical about the adoption of techdemo features in real games...
 
Well, for one, geometry shading is a core aspect of DX10, like pixel and vertex shading, so it's not going to be ignored like past IHV-specific functionality (TruForm).

If I understand this correctly, instancing in DX9 is pretty limited in what it can do - you send over the definition of an object, then a list of parameters that slightly modify it to create different versions. But I think the type of modification is relatively fixed across all instances - it's not fully programmable. With DX10 you have a lot more flexibility in how you manipulate that object data.
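
A CPU-side caricature of that difference (this is not real D3D code, just to show fixed 1:1 modification versus programmable amplification):

    #include <functional>
    #include <vector>

    struct Vertex { float x, y, z; };
    using Mesh = std::vector<Vertex>;

    // DX9-style instancing, caricatured: every instance is the same mesh
    // run through a 1:1 per-vertex modification, so the output always has
    // exactly as many vertices as the input.
    Mesh instanced(const Mesh& base, float dx, float dy, float dz) {
        Mesh out = base;
        for (Vertex& v : out) { v.x += dx; v.y += dy; v.z += dz; }
        return out;
    }

    // DX10-style geometry shading, caricatured: a programmable stage may
    // emit zero, one, or many vertices per input, so geometry can be
    // created or destroyed on the fly.
    Mesh geometryStage(const Mesh& in,
                       const std::function<void(const Vertex&, Mesh&)>& gs) {
        Mesh out;
        for (const Vertex& v : in)
            gs(v, out);  // gs decides how much geometry to append, if any
        return out;
    }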
 
I think a more important question is: will G80 and R600 be fast enough to properly make use of any of the major innovations DX10 brings, or are they just high-speed DX9 chips with checkbox features? I mean, if that Cascades demo runs at 20 to 40 fps, I don't see G80 having the power to run an actual DX10 game.
 
I think a more important question is: will G80 and R600 be fast enough to properly make use of any of the major innovations DX10 brings, or are they just high-speed DX9 chips with checkbox features? I mean, if that Cascades demo runs at 20 to 40 fps, I don't see G80 having the power to run an actual DX10 game.

When real D3D10 games arrive in the future, GPUs like G80/R600 will most likely be only a tad more powerful than the IGPs of that time (if those still exist by then). Developers are just starting to work with D3D10; it'll take at least 2 years until we see anything that could deserve being called a full D3D10 application, and even that is quite optimistic considering how long ISVs really need to get a triple-A title on shelves.

Or take a different perspective: is an R300 (which was the first D3D9.0 GPU on shelves) really powerful enough to run games like Crysis or UT3?

A tech demo is just that, a tech demo, and there's a huge difference between a game with a few D3D10 performance optimisations (the immediate future) and full D3D10 games (the distant future).
 
I think a more important question is: will G80 and R600 be fast enough to properly make use of any of the major innovations DX10 brings, or are they just high-speed DX9 chips with checkbox features? I mean, if that Cascades demo runs at 20 to 40 fps, I don't see G80 having the power to run an actual DX10 game.
I don't see why not. The Cascades demo runs at fair framerates at 1920x1200 on my 8800 GTX. Given that I could drop the res in an actual DX10 game, a game developer could up the load on the graphics card by a good factor of 3-4 before making the game seriously unplayable.

But this sort of use isn't going to happen for a couple more years in games anyway. Though I do rather hope that more games end up going OpenGL this time around, given that these features can be used in XP or Vista in OpenGL.
 