Nvidia Instancing demo on x800

Re: Instancing

Vysez said:
Scott_Arm said:
I find it strange that instancing is only being added to specs now .... you'd think this would have been an obviously beneficial technology early on. So many games repeat geometry.

On ATI hardware? If yes, then it's because ATI's instancing is some kind of software trick (that mainly uses the Radeon hardware and very little CPU, if I understood correctly), and logically it wasn't available from day one...
Maybe ATI decided to "work on it" when they saw Nvidia creating a buzz around instancing with their "little dude rendering" :LOL:

I don't believe it's a trick. I believe it is actual instancing, just like Nvidia's. It's just that ATI doesn't have VS 3.0, and thus it can't be turned on, since games need to check the caps for VS 3.0.
 
I personally think it's a "trick", but not a bad one. I'm betting that ATI never really gave it much thought until they saw NV doing it, and then they realized they could do the same basic thing by using their silicon in a slightly different way than they originally intended.

With a small bit of help from the CPU (via the driver), the card is capable of performing instancing by somehow altering how data is sent to a completely alternate piece of silicon.

That's my uneducated guess :)
 
It's not something you can just hack in after the fact - ATI designed the hardware to do it, just like they designed R3x0 to do centroid sampling well in advance.

The only problem is getting it used in an API where instancing is tied to VS3.0. I expect that (as happens now with a lot of games), developers will just check device IDs to enable an instancing codepath.

Of course it's always possible that ATI will lobby MS and get instancing decoupled from VS3 and added to some sort of SM2.0d profile.
 
jvd said:
If it was a trick or a hack we would have found out by now.
Since when has that been the case?

Regardless, the only real thing left to check is the CPU dependency of ATI's algorithm vs. nVidia's (though some more exact performance comparisons would be nice, too, over a larger range of batch sizes).
 
Chalnoth said:
jvd said:
If it was a trick or a hack we would have found out by now.
Since when has that been the case?

Regardless, the only real thing left to check is the CPU dependency of ATI's algorithm vs. nVidia's (though some more exact performance comparisons would be nice, too, over a larger range of batch sizes).

Humus's trick was announced :) The DXT5 trick was announced.
 
Re: Instancing

Vysez said:
Scott_Arm said:
I find it strange that instancing is only being added to specs now .... you'd think this would have been an obviously beneficial technology early on. So many games repeat geometry.

On ATI hardware? If yes, then it's because ATI's instancing is some kind of software trick (that mainly uses the Radeon hardware and very little CPU, if I understood correctly), and logically it wasn't available from day one...
Maybe ATI decided to "work on it" when they saw Nvidia creating a buzz around instancing with their "little dude rendering" :LOL:

To clarify: I'm surprised this wasn't added to DirectX much earlier, i.e. in DX7 or DX8, regardless of how it was going to be implemented. Maybe it wasn't feasible at the time?
 
madmartyau said:
AFAIK the AI and physics are clamped at 60 fps, not the actual framerate of the game.

There's no good reason to render above 60 FPS, however. Any frames rendered between engine updates will be the exact same rendering.

It would make for a transparent and very funny benchmark cheat though. Between updates, the card can just redisplay the same frame at 500000 fps. :LOL:
 
3dilettante said:
madmartyau said:
AFAIK the AI and physics are clamped at 60 fps, not the actual framerate of the game.

There's no good reason to render above 60 FPS, however. Any frames rendered between engine updates will be the exact same rendering.

It would make for a transparent and very funny benchmark cheat though. Between updates, the card can just redisplay the same frame at 500000 fps. :LOL:
it should be really good with temporal
 
jvd said:
3dilettante said:
There's no good reason to render above 60 FPS, however. Any frames rendered between engine updates will be the exact same rendering.
it should be really good with temporal
I'd be highly surprised if camera movement was limited to 60 fps. You may have the same animations of everything, but I don't see why that means it'll always be the same frame if rendered above 60 fps.
 
Chalnoth said:
jvd said:
3dilettante said:
There's no good reason to render above 60 FPS, however. Any frames rendered between engine updates will be the exact same rendering.
it should be really good with temporal
I'd be highly surprised if camera movement was limited to 60 fps. You may have the same animations of everything, but I don't see why that means it'll always be the same frame if rendered above 60 fps.

I'm just going with what they are saying.

But if the physics engine is capped at 60, I don't see why you would render over it. Wouldn't that just break the timing?
 
That may be why you'd want to just limit the framerate. There may be artifacts if you're moving while animation and physics are still at 60 fps...
 
Chalnoth said:
That may be why you'd want to just limit the framerate. There may be artifacts if you're moving while animation and physics are still at 60 fps...

So there won't be any rendering of the same frames, correct? If so, then temporal will not work.
 
Well, not that it won't work, but it won't be quite as good. Besides, you'd have to be running the game at a steady 120 or 180 fps (with the appropriate refresh rate) for this to have a possibility of significantly impacting temporal AA. I don't think any system will be capable of that for a long time...
 
Chalnoth said:
Well, not that it won't work, but it won't be quite as good. Besides, you'd have to be running the game at a steady 120 or 180 fps (with the appropriate refresh rate) for this to have a possibility of significantly impacting temporal AA. I don't think any system will be capable of that for a long time...
I play Far Cry with 75 Hz vsync and I don't get any flicker, and I only play at 1027x768. So with Doom 3, 2x temporal would be possible, or even 4x with my overclocks.
 
My problem with temporal isn't so much the flicker, but the fact that each alternating pattern is a less optimal AA layout than the sparse pattern that is normally used, so I'm just not sure it's better than normal sparse AA when you're looking at a scene in motion. Myself, I noticed a good deal of edge crawling when I tested it in UT2k4.
 
Chalnoth said:
My problem with temporal isn't so much the flicker, but the fact that each alternating pattern is a less optimal AA layout than the sparse pattern that is normally used, so I'm just not sure it's better than normal sparse AA when you're looking at a scene in motion. Myself, I noticed a good deal of edge crawling when I tested it in UT2k4.

What mode were you using?

I normaly play at 1027x 7680 with 4 or 6x temporal.

I don't notice any flicker (and I do look for it). I will check for edge crawling next time I have a chance. But the final image quality is much better than 8x on my father's 6800GT (well, not for alpha textures, but for the rest of the scene).


It's definitely worth using, and it should get better as time goes on.
 
jvd said:
I normaly play at 1027x 7680 with 4 or 6x temporal.

You must be tired and making some typos, or that's an interesting resolution you're capable of achieving temporal AA at :p (So how tall is that monitor, btw?)
 