Why ATI demos don't run on NV hardware, and Rain in ToyShop
Hey guys!
My name is Natasha Tatarchuk, and I work on the ATI demo team along with Thorsten (I was the tech lead for the ToyShop demo). Thanks so much, you guys, for the awesome things you've said about our demo. It's sweet to hear such good things about a demo we put so much hard work into.
Now let me answer some of your questions:
psurge said:
Hi Thorsten - just wanted to say you guys did a great job on the Toyshop demo - that's the best looking real-time rain I've seen yet. It's also very nice to see POM in action on a non-trivial scene.
Thanks so much, man!
psurge said:
This is maybe the wrong thread for this, but I saw a slide on the demo where it was stated that much of the rain was not rendered using particle systems, but was composited into the scene. Would it be possible to elaborate on how this was done? Unfortunately I don't have an r520 to try the demo out on, but judging by the video, it looks like the rain drops are volumetrically lit... is this the case, and if so, could you share how it was done?
Serge, thanks for the interest in that. I won't go into the exact implementation details, but I developed a new technique for rendering the rain using post-process compositing to create the impression of falling rain. The basic premise is to simulate the lighting effects of the environment on the rain (the rain responding correctly to the lightning flashes, the subtle refraction through the water droplets, and the mist created by all of the raindrops in the air) in a single shader that is composited over the scene. The rain motion is controlled with velocity / direction vectors, and I do some additional tricks to make sure that the raindrop movement doesn't visibly repeat. While it has its drawbacks, it works quite well for many scenarios. We also use particles for additional rain effects - for example, raindrop splashes and raindrops falling off various objects (you can see them coming off the ledges of the toy shop, the neon sign, etc.).
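To give a flavor of the idea (this is just a minimal sketch I'm improvising here, not our actual shader - all of the texture names, layer counts, and constants below are made up), a post-process rain composite in HLSL might look something like this:

// Hypothetical sketch of a post-process rain composite.
sampler2D sceneTex;            // the rendered scene
sampler2D rainTex;             // tiling texture of rain streaks (alpha = opacity)
float2 rainDirection;          // screen-space fall direction of the rain
float  rainSpeed;              // scroll speed
float  lightningBrightness;    // 0 normally, > 0 during a lightning flash
float  time;

float4 RainCompositePS(float2 uv : TEXCOORD0) : COLOR0
{
    float rain = 0.0;
    // Scroll several layers at different scales and speeds so the
    // streak pattern never visibly repeats.
    for (int i = 0; i < 3; i++)
    {
        float layer = 1.0 + i * 0.7;   // fake depth of this layer
        float2 layerUV = uv * layer + rainDirection * (time * rainSpeed / layer);
        rain += tex2D(rainTex, layerUV).a / layer;   // far layers are fainter
    }
    // Subtle refraction: nudge the scene lookup where streaks are dense.
    float3 scene = tex2D(sceneTex, uv + rainDirection * rain * 0.004).rgb;
    // Streaks pick up a little ambient light and flare during lightning.
    float3 rainColor = float3(0.7, 0.8, 0.9) * (0.15 + lightningBrightness);
    return float4(scene + rain * rainColor, 1.0);
}

The real shader does considerably more (the environment lighting, the lightning response, the mist), but the structure - scroll a streak texture in the rain direction across several pseudo-depth layers, then composite over a slightly refracted scene - is the core of the post-process approach.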
We are having developer days next week, where we will present the breakdown of many of the ToyShop effects in a variety of presentations, and we will post the decks afterwards. We will also have some whitepapers that we've written on various algorithms (like POM) for you guys on our website when we post the executables. But let us know if there are any algorithms you are interested in hearing about and we'll try our best to break them down for you. If you want to go to the dev days, email
devrel@ati.com for details.
Now on to the fun subject of what will happen when we actually release the exes...
Mariner said:
I'm sure if it's possible to do, someone will write a 'wrapper' of some description - that's always happened in the past hasn't it?
Like Thorsten and Chris said (by the way, both Thorsten and Chris work on the demo team, so they've got the 'insider' knowledge too), we use several features that aren't supported on NVIDIA hardware and that are extremely useful for our demo (the 1010102 formats, which allow huge memory savings with good HDR quality; the depth texture formats; 3Dc compression; and vertex data formats without which this demo would simply not be able to fit into graphics memory; etc.).
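Just to give a rough back-of-the-envelope sense of why those formats matter (my quick numbers here, not official measurements):
- A 1010102 render target is 32 bits per pixel versus 64 bits for a 16-bit-per-channel float target, so an HDR buffer takes half the memory and bandwidth.
- 3Dc stores the two channels of a tangent-space normal map at 8 bits per texel, a 4:1 saving over an uncompressed 32-bit normal map.
- dec3n packs a normal into 4 bytes instead of the 12 bytes of three floats, a 3:1 saving on that vertex element.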
Also, the parallax occlusion mapping technique really takes advantage of the excellent dynamic branching that X1K cards have.
While developing this technique, I ran performance tests on GeForce 6 / 7 series (NV4x / G70) cards versus the R520 generation, and on a typical parallax occlusion mapped scene, performance on the GeForce 6/7 generation is around 30-50% of the R5XX cards, depending on the situation.
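For those wondering why dynamic branching matters so much here: the heart of POM is a ray march through the height field, and the win comes from being able to stop marching the moment the ray hits the surface. A simplified sketch (hypothetical names, nowhere near the full shader, which also refines the intersection point):

// Simplified parallax occlusion mapping inner loop (made-up names).
// The data-dependent 'break' below is exactly the kind of dynamic
// branch the X1K handles well.
sampler2D heightTex;   // height field stored in the alpha channel
float heightScale;     // how far the surface is extruded

float2 ParallaxOcclusionUV(float2 uv, float3 viewTS)  // normalized tangent-space view vector
{
    // More steps at grazing angles, fewer when looking straight on.
    int    numSteps  = (int)lerp(32, 8, saturate(viewTS.z));
    float  stepSize  = 1.0 / numSteps;
    float2 uvStep    = -viewTS.xy / viewTS.z * heightScale * stepSize;

    float  rayHeight = 1.0;   // ray enters at the top of the height field
    float2 curUV     = uv;
    for (int i = 0; i < numSteps; i++)
    {
        rayHeight -= stepSize;
        curUV     += uvStep;
        float h = tex2Dlod(heightTex, float4(curUV, 0, 0)).a;
        if (h >= rayHeight)   // the ray has dipped below the surface
            break;            // dynamic branch: stop marching early
    }
    return curUV;   // the full technique refines this hit point further
}

Without the early out, every pixel pays for the worst-case number of texture fetches; with it, most pixels exit after a handful of steps - but only if the hardware's branching is fine-grained enough to actually skip that work.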
So if you wanted to make this demo look exactly as it stands right now and run as smoothly as it does (the average frame rate is around 25-28 fps), as R300King! wanted, it simply would not happen on the current latest-generation hardware from NVIDIA. Even fitting this demo into memory wouldn't work without 1010102, 3Dc, and the additional vertex data formats that we use (dec3n, for example). The demo uses a huge amount of texture and vertex data.
Of course, one could think up alternative ways to implement some of the algorithms that we have used in this demo. For example, you could use relief mapping instead of parallax occlusion mapping. The relief mapping technique performs well on both ATI and NVIDIA hardware because it doesn't utilize dynamic branching, relying instead on a fixed sequence of dependent texture reads. However, in my quality comparisons of the two techniques, relief mapping displayed visual artifacts on our current dataset.
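For contrast with the POM loop above, here is what a relief-mapping style search looks like (again a hypothetical sketch with made-up names, not anyone's shipping shader): a fixed number of linear steps followed by a binary search, with no data-dependent exit - just a chain of dependent texture reads.

sampler2D heightTex;   // same hypothetical height field as above
float heightScale;

float2 ReliefMapUV(float2 uv, float3 viewTS)
{
    float2 range = -viewTS.xy / viewTS.z * heightScale;  // full UV offset at depth 1
    float  depth = 0.0;                                  // 0 = top, 1 = bottom

    // Linear search: always 16 steps, hit or no hit. The conditional
    // compiles to a predicated move, not a branch, so every pixel
    // does the same amount of work on any hardware.
    for (int i = 0; i < 16; i++)
    {
        float surfaceDepth = 1.0 - tex2Dlod(heightTex, float4(uv + range * depth, 0, 0)).a;
        if (depth < surfaceDepth)
            depth += 1.0 / 16.0;
    }
    // Binary search refines the intersection: 8 more dependent reads.
    float stepSize = 0.5 / 16.0;
    for (int i = 0; i < 8; i++)
    {
        float surfaceDepth = 1.0 - tex2Dlod(heightTex, float4(uv + range * depth, 0, 0)).a;
        depth += (depth < surfaceDepth) ? stepSize : -stepSize;
        stepSize *= 0.5;
    }
    return uv + range * depth;
}

Both approaches do a similar number of fetches in the worst case; the difference is that POM can quit early while this version can't, which is why relief mapping behaves more uniformly across vendors.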
But since we too love comparing things across all possible platforms, we will post some of our demo effects as RenderMonkey workspaces, and you can certainly run them on both platforms.
wireframe said:
Anyone know if Luna and other 7800 demos function on X1000 hardware?
We have tried and they don't.
pjbliverpool said:
Here's a juicy question for you, how would this demo run on Xenos?
The demo could run on Xenos, and I believe it would run quite well - but we haven't tested that theory.
Again, thank you guys. It's awesome to read this lively discussion and to see the interest this demo has generated!