New ATi Demo

My reference to AI was due to the fact that real games use lots of physics and AI. My understanding of geometry instancing is that it's meant to remove a lot of the CPU overhead of rendering the same object several times. This reduction in CPU overhead means that games can employ more objects in realtime without cutting into the CPU time left for other necessary AI and physics tasks. This demo, as DemoCoder pointed out, looks heavily precomputed in path and behaviour; if it's using 100% CPU just to do a very dumb display of lots of characters on screen, forget seeing that many characters in any real game for a long while.
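To put rough numbers on that argument, here's a toy Python sketch. The per-call cost and frame budget are invented for illustration, not measured:

```python
# Toy model of CPU-side draw-call overhead. Numbers are made up:
# suppose each draw call costs ~20 microseconds of driver/API work,
# and a 60 fps frame gives ~16,700 microseconds of CPU time total.
CALL_OVERHEAD_US = 20
FRAME_BUDGET_US = 16_700

def cpu_time_naive(num_objects):
    # One draw call per object.
    return num_objects * CALL_OVERHEAD_US

def cpu_time_instanced(num_objects, batch=1000):
    # One draw call per batch of identical objects.
    calls = -(-num_objects // batch)  # ceiling division
    return calls * CALL_OVERHEAD_US

n = 800
print(cpu_time_naive(n))      # 16000 us: nearly the whole frame budget
print(cpu_time_instanced(n))  # 20 us: the rest is left for AI and physics
```

The exact constants don't matter; the point is that per-object submission cost scales linearly with object count, while instanced submission is effectively flat.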
 
Why are fanATIcs so insistent on downtalking every single NV40 feature, even one that's really a generic efficiency improvement that fixes problems with DX and small batches?

Why are nvidiots always claiming that a person is a fanATIc when they point out something that casts Nvidia in a bad light? :rolleyes:
 
mozmo said:
My reference to AI was due to the fact that real games use lots of physics and AI. My understanding of geometry instancing is that it's meant to remove a lot of the CPU overhead of rendering the same object several times. This reduction in CPU overhead means that games can employ more objects in realtime without cutting into the CPU time left for other necessary AI and physics tasks. This demo, as DemoCoder pointed out, looks heavily precomputed in path and behaviour; if it's using 100% CPU just to do a very dumb display of lots of characters on screen, forget seeing that many characters in any real game for a long while.
the problem is, your only point of reference is one nVidia demo with even less AI....
That, and the fact that you are wrong.

Oh Democoder - why is it that nVidiots feel compelled to bag on ATI demos?
Oh wait - you don't like having labels applied that way (in such a broad brush)? Then don't do it yourself - stop making nonsense arguments that "anyone who points out anything slightly anti-Nvidia MUST be an ATI fanboy".
 
the problem is, your only point of reference is one nVidia demo with even less AI....
That, and the fact that you are wrong.
Wrong about what, specifically? Do you disagree that reducing CPU overhead in draw calls will enable more in-game characters and objects in real games at decent framerates?

Oh Democoder - why is it that nVidiots feel compelled to bag on ATI demos?
Oh wait - you don't like the label applied that way? Then don't do it yourself.
How was he bagging the demo? If anything he was just making observations; he only had a go at ATI fans for claiming that something MS deems an important improvement (i.e. geometry instancing) is irrelevant based on one ATI demo, which says nothing about what benefits geometry instancing can or can't bring. Just more FUD imo.
 
DemoCoder said:
Why are fanATIcs so insistent on downtalking every single NV40 feature, even one that's really a generic efficiency improvement that fixes problems with DX and small batches?

Well, regarding this particular feature, I guess the thing is that it's quite hard to get excited about. It's a performance thing only, and only useful in a fairly atypical scenario. For it to matter you need to have loads of objects, and each object must be very simple; otherwise the overhead of the draw calls will be negligible relative to the actual drawing. In this demo, for instance, I don't think instancing would improve performance noticeably - maybe a percent or two. Sure, it has a lot of characters, but each character is fairly complex, so the bottleneck is not going to be the number of draw calls but the vertex shader. And in the case where your objects are fairly simple, it's not going to cost you a lot either to simply pack many copies into the same vertex buffer to reduce the number of draw calls.
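That trade-off can be sketched with back-of-envelope numbers (all constants here are invented; Python is just being used as a calculator):

```python
# Rough model: frame time ~ max(CPU submission cost, GPU vertex work).
CALL_NS = 20_000   # invented CPU cost per draw call, in nanoseconds
VERT_NS = 1        # invented GPU cost per vertex, in nanoseconds

def frame_time_ns(objects, verts_per_object, instanced):
    calls = 1 if instanced else objects
    cpu = calls * CALL_NS
    gpu = objects * verts_per_object * VERT_NS
    return max(cpu, gpu)

# 10,000 very simple objects (100 verts each): draw calls dominate,
# so instancing is a huge win.
print(frame_time_ns(10_000, 100, False))  # 200,000,000 ns - CPU-bound
print(frame_time_ns(10_000, 100, True))   #   1,000,000 ns - now GPU-bound

# 300 complex characters (20,000 verts each): the vertex shader dominates
# either way, so instancing buys almost nothing.
print(frame_time_ns(300, 20_000, False))  # 6,000,000 ns
print(frame_time_ns(300, 20_000, True))   # 6,000,000 ns
```

With complex meshes the vertex work swamps the call overhead, which is exactly why a demo full of detailed soldiers wouldn't see much from instancing.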
 
Humus said:
DemoCoder said:
Why are fanATIcs so insistent on downtalking every single NV40 feature, even one that's really a generic efficiency improvement that fixes problems with DX and small batches?

Well, regarding this particular feature, I guess the thing is that it's quite hard to get excited about. It's a performance thing only, and only useful in a fairly atypical scenario. For it to matter you need to have loads of objects, and each object must be very simple; otherwise the overhead of the draw calls will be negligible relative to the actual drawing. In this demo, for instance, I don't think instancing would improve performance noticeably - maybe a percent or two. Sure, it has a lot of characters, but each character is fairly complex, so the bottleneck is not going to be the number of draw calls but the vertex shader. And in the case where your objects are fairly simple, it's not going to cost you a lot either to simply pack many copies into the same vertex buffer to reduce the number of draw calls.
Yeah, that last bit seems to be one of the better choices for things like particle systems. My current particle system implementation re-populates a vertex array each frame with the quads necessary for each particle and then renders that array in a single set of relevant calls. It proved much faster than using a display list to hold the data for each particle and then creating a transformation matrix for each particle (I'm not sure that method was much better than basic immediate mode). Of course, my current method could certainly be optimized further...

Anyway, how is this demo accomplished? Do you simply have a massive number of copies of the mesh in a single buffer in memory? That would certainly make things easier. I know that in OpenGL one could simply stream an array containing model positions into video memory each frame, which would have a minimal impact on bandwidth usage; bind that array as per-instance data and use it to modify vertex positions in the vertex shader. It'd be just as if each vertex in the meshes had an extra vec3 attribute. Drawing would then involve a single glMultiDrawArrays call. Of course, this doesn't solve the issue of animation - which I'm not too familiar with anyway...
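A CPU-side sketch of that buffer layout, with plain Python standing in for the actual GL buffer-building (the mesh and positions are made up; the idea is that a vertex shader could then just add the per-instance position to each vertex):

```python
# Build one big vertex buffer holding N copies of the mesh, plus a
# parallel per-vertex "instance position" attribute, so a vertex shader
# could simply compute: position = vertex + instance_pos.

base_mesh = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]  # one triangle

def build_buffers(instance_positions):
    vertices, inst_attr = [], []
    for pos in instance_positions:
        for v in base_mesh:
            vertices.append(v)     # unmodified copy of the mesh
            inst_attr.append(pos)  # same offset for every vertex of this copy
    return vertices, inst_attr

positions = [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0)]
verts, attr = build_buffers(positions)
print(len(verts))  # 6: two copies of the 3-vertex mesh
print(attr[3])     # (5.0, 0.0, 0.0): the second instance's offset
```

The memory cost is the obvious downside: every copy of the mesh is duplicated, which is what the hardware frequency-divider approach avoids.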
 
pocketmoon66 said:
Runs, with a bug, on a 9800 Pro (with the existing X800 demo wrapper) - it shows the shadows only!

You can switch to show ambient occlusion lighting only - you'll see the scene in B&W complete with soldiers, or switch to 'vector' which shows plain coloured soldiers (indicating instance id I guess)

Like a particle system on legs :)

Anyway, minimum fps around 34, mostly around 45ish.

Another demo you don't need a X800 for :devilish:

Oh and the Sushi error (for Colourless, of course):

//=====================================================
// ATI Sushi Error Log Created 6/1/2004 11:51 pm
//=====================================================
[SSGenericAPI_D3D.cpp] (line 1312): No D3D Error message. You might be specifying more than one component on a source register.
[oCrowdSkRandom.ssh] (line 288): Error creating Pixel Shader
[Main.cpp] (line 881): Normal Application Exit


Which sounds fixable...

I wouldn't say you don't need an X800 when you CAN'T GET COLOR... to me at least that is somewhat important...
 
Althornin said:
Oh Democoder - why is it that nVidiots feel compelled to bag on ATI demos?

And why do you always feel compelled to make nonsense arguments? I didn't "bag" on ATI's demo.
 
Humus said:
Well, regarding this particular feature, I guess the thing is that it's quite hard to get excited about. It's a performance thing only, and only useful in a fairly atypical scenario. For it to matter you need to have loads of objects, and each object must be very simple; otherwise the overhead of the draw calls will be negligible relative to the actual drawing. In this demo, for instance, I don't think instancing would improve performance noticeably - maybe a percent or two. Sure, it has a lot of characters, but each character is fairly complex, so the bottleneck is not going to be the number of draw calls but the vertex shader. And in the case where your objects are fairly simple, it's not going to cost you a lot either to simply pack many copies into the same vertex buffer to reduce the number of draw calls.

I think the latter is probably what was done in the demo (packing multiple copies). I don't think geometry instancing is about reducing 1000 DIP calls into 15. I think it's about reducing 10,000 calls into 100. I expect it to be used more for terrain rendering than for character models, although Battle for Middle Earth seems like a contender due to lots and lots of identical simple models.

There are probably other uses for the vertex frequency divider besides "geometry instancing" that people haven't figured out yet.
 
Humus said:
For it to matter you need to have loads of objects, and each object must be very simple.
Something like drawing grass or leaves, no? NV did a grassy field demo back in GF2 days; I wonder if they'd do a rewrite for NV40.
 
DemoCoder said:
I think the latter is probably what was done in the demo (packing multiple copies). I don't think geometry instancing is about reducing 1000 DIP calls into 15. I think it's about reducing 10,000 calls into 100. I expect it to be used more for terrain rendering than for character models, although Battle for Middle Earth seems like a contender due to lots and lots of identical simple models.

There are probably other uses for the vertex frequency divider besides "geometry instancing" that people haven't figured out yet.

I think geometry instancing can be used for multiple characters even if they aren't identical. Maybe the NV40 can instance a base model and modify it so each instance looks different.
 
DemoCoder said:
Why are fanATIcs so insistent on downtalking every single NV40 feature, even one that's really a generic efficiency improvement that fixes problems with DX and small batches?

Oh please! :rolleyes: I said that geometry instancing is something visually impressive that R420 is *lacking* in comparison to NV40, but that this demo shows the same kind of effect. Where is that factually incorrect?

You are so defensive of anything that you construe to be vaguely critical of Nvidia, that you are turning into a bad cliche. You've claimed in the past that you are looking to balance out the fanATIcs, but you are doing this by becoming an Nvidiot yourself. I notice you never jump up to defend ATI in the same way when we get an influx of idiot Nvidia fans.
 
DemoCoder said:
There are probably other uses for the vertex frequency divider besides "geometry instancing" that people haven't figured out yet.
I think one really exciting use would be for more organic environments. Imagine an island covered in trees, with each tree swaying in the wind slightly out of phase.
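One way to get that out-of-phase sway from a single instanced draw would be to derive a phase offset from the instance index. A toy Python version of the math a vertex shader might do (the 0.618 multiplier, amplitude, and speed are arbitrary illustrative choices):

```python
import math

def sway_offset(instance_id, time, amplitude=0.1, speed=2.0):
    # Derive a per-tree phase from the instance id, so that no two
    # trees move in sync even though they share one mesh and one draw call.
    phase = (instance_id * 0.618) % (2 * math.pi)
    return amplitude * math.sin(speed * time + phase)

# Two neighbouring trees sampled at the same moment:
print(sway_offset(0, 1.0))
print(sway_offset(1, 1.0))  # different, thanks to the per-instance phase
```

In a real shader the same idea would apply per vertex, typically weighted by height so the trunk stays put while the canopy sways.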
 
Just tried it on my FX5900 also...

Without colorless's nv3xruby d3d9.dll:

This demo requires support of DirectX 9.0b Pixel Shaders v2.0b.
The graphics processor and/or drivers in this system does not meet this requirement...
etc


With colourless's nv3xruby d3d9.dll:

ATI Sushi Error Log Created 6/2/2004 10:34 am

[AwFn.cpp] (line 3418): D3DAw Error: AwCreateRenderableTexture - Unable to create color texture object
[StartEnd.cpp] (line 2047): Error creating color buffer "cDensity"!
[Main.cpp] (line 881): Normal Application Exit
 
Bouncing Zabaglione Bros. said:
I notice you never jump up to defend ATI in the same way when we get an influx of idiot Nvidia fans.

I've defended ATI in the past. It just so happens that there are far more fanATIcs than nvidiots on this board, and more often than not Nvidia is being attacked (especially in the NV3x era). If the sheer number of defensive posts on each side is your criterion for bias, then you'd have to conclude other personalities here are biased too, e.g. Dave, since it seems like the majority of his "defensive" messages (e.g. arguing against something, or "correcting" info) fall into threads which attack an ATI feature (e.g. 3Dc vs DXTC, HDR on R300/HL2 vs HDR filtering/blending, Video Processor vs pixel shader "encoding" acceleration, non-bridged PCIE, etc.).

With the exception of the colored mipmap fiasco recently, there just aren't many things people attack about ATI. There's no new features really to criticize, and almost everything seems focused on criticizing NV40 features, PCIE, power, noise, or the color of JenSen's underwear.

And unlike others, I've never attacked the R300 or R420 hardware.
 
With the exception of the colored mipmap fiasco recently, there just aren't many things people attack about ATI. There's no new features really to criticize, and almost everything seems focused on criticizing NV40 features, PCIE, power, noise, or the color of JenSen's underwear

Well, I can't say much about NV40's features, as it's a good feature set. They just aren't as huge a deal as some on this board want to make them out to be (radar, chrisray and others).

PCIE: yes, they spent all this time on SM3, but they couldn't have native PCIE support?

Noise: well, come on, man, it's a personal PC. It sits a few feet away from where we are using the computer. I don't want a jet engine taking off next to me while trying to hear snipers in Far Cry. Do you?

And come on, brown underwear? That's just wrong!!!!

Seriously though, what do you want people to say about the NV40?


The R420 lacks SM3.0 but has 3Dc and temporal AA. Temporal AA, for any that have ATI cards and have hacked it to work, is a very, very nice feature. I've only had a 9700 Pro to play with it, but Quake 3 with 4x temporal AA is very, very nice to look at.

But the R420 for me personally fit what I wanted the most: 2-3x the performance of my 9700 Pro, especially in shader-limited games, which, I hate to break it to you, are all finally coming out in the next few months.

It doesn't require a new power supply . It is single slot and has a low noise cooling system.

I don't see how you can fault this card .

Personally, I have gotten tired of seeing you respond to threads as of late, because it's either playing up Nvidia or putting down ATI.
 