For The Last Time: SM2.0 vs SM3.0

Chalnoth said:
DaveBaumann said:
You sure? ;)
Simply put, Dave, I claim that they didn't emulate geometry instancing unless they were able to draw more than one soldier with a single draw call.

Who's talking about emulation, and who's said that this is something that has already occurred? After all, Vertex Instancing isn't available anywhere yet. [Edit: Whoops, as Dean points out, with the exception of OGL!]
 
Cryect said:
Basically, unless you are going to change your displacement mapping in some way, I don't see why you would use the vertex shaders except for laziness
LOD. We don't want to store displacement mipmaps in vertex attributes..
 
Just in case the idea of hardware being able to 'gain' capability sounds like ATI fans imagining things ;-) I'd just like to point out I reckon I could do a fair version of SetVertexFrequency on NV2A (and presumably GF3 and GF4).

Not sure I could support the indexed version, but the non-indexed instance geometry API would be possible on Xbox.

NV2A can assign user default values to vertex registers not used by the stream hardware, so you could (in theory; it's been 6 months since I used an Xbox 1) insert a packet into the push buffer to change it after every instance, then send another drawprim packet (for non-indexed data this would be very quick).
I.e. A 2 instanced call setting a matrix for each instance would become
SetVertexShaderConstants()
SetVertexStream()
SetPixelShaderStates()
LoadDefaultVertexRegister( Matrix[0] )
DrawPrim
LoadDefaultVertexRegister( Matrix[1] )
DrawPrim

You can see the second call is very cheap compared to the first, and as this would be in the push buffer it would use little CPU (just a little extra copying when setting up the push buffer). This would be efficient (vertices would likely be in the pre-vertex cache) and cheap, with none of the problems with vertex constant renaming caused by vertex constant uploads...
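The cost asymmetry above can be sketched in code. This is a minimal simulation, not Xbox code: the packet names are hypothetical stand-ins for the push-buffer commands named in the post, and the point is simply that each extra instance adds only two small packets after the one-time state setup.

```cpp
#include <string>
#include <vector>

// Hypothetical stand-ins for the push-buffer packets described above.
// The expensive state setup happens once; each instance then costs only
// a default-register patch plus a draw packet.
std::vector<std::string> buildPushBuffer(int numInstances) {
    std::vector<std::string> packets;
    // One-time setup (the expensive part)
    packets.push_back("SetVertexShaderConstants");
    packets.push_back("SetVertexStream");
    packets.push_back("SetPixelShaderStates");
    // Per-instance part (cheap): patch the matrix, then draw
    for (int i = 0; i < numInstances; ++i) {
        packets.push_back("LoadDefaultVertexRegister(Matrix[" +
                          std::to_string(i) + "])");
        packets.push_back("DrawPrim");
    }
    return packets;
}
```

For a two-instance call this yields the seven-packet sequence listed above: three setup packets, then two packets per instance.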
 
DaveBaumann said:
HDR effects can be, and have been/will be, supported down to DX8.1 capabilities, and so there can be workarounds

Please, people, can we stop calling simple glow/bloom effects "HDR" stuff? These are completely different things that can be used together, but without each other as well...
 
DeanoC said:
And how well do you know ATI's hardware?

Most hardware doesn't match the vertex declaration/stream API at all well. Imagine trying to match D3D9 vertex streams to PS2 VU1: you would miss a large set of capabilities (VU1 is roughly VS 2.0 but has vertex frequency support).

That's why we all watch OpenGL extensions; things often appear there before being added to D3D.
Let me just put it this way:
I can run the demo on my GeForce 6800.

No low-level stuff is going on in ATI's Crowd demo, it's all DX9b.
 
Chalnoth said:
DaveBaumann said:
You sure? ;)
Simply put, Dave, I claim that they didn't emulate geometry instancing unless they were able to draw more than one soldier with a single draw call.

Err, you could quite easily store multiple soldiers in the same vertex buffer and draw them with a single call... not quite instancing, but it does reduce the number of draw calls. So instead of looping through one and the same buffer, you just manually repeat the data in the vertex buffers...

So you could just generate batches, store say 20 identical models in the same vertex buffer, and you draw the whole scene in batches of 20 soldiers. Want less draw calls, well more identical models into the vertex buffer...
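Kristof's batching idea can be made concrete with a small sketch (illustrative only; the function names and the translate-only transform are my assumptions, and a real batch would carry full per-instance matrices and index data). It bakes N translated copies of one model into a single vertex buffer and shows how the draw-call count falls with batch size.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Bake translated copies of one model into a single vertex buffer so the
// whole batch can be drawn with one draw call (pseudo-instancing).
// A real implementation would apply full per-instance matrices.
std::vector<Vec3> bakeBatch(const std::vector<Vec3>& model,
                            const std::vector<Vec3>& positions) {
    std::vector<Vec3> vb;
    vb.reserve(model.size() * positions.size());
    for (const Vec3& p : positions)
        for (const Vec3& v : model)
            vb.push_back({v.x + p.x, v.y + p.y, v.z + p.z});
    return vb;
}

// With `batchSize` identical models per buffer, drawing `count` soldiers
// takes ceil(count / batchSize) draw calls.
int drawCalls(int count, int batchSize) {
    return (count + batchSize - 1) / batchSize;
}
```

With batches of 20, a 100-soldier scene is 5 draw calls instead of 100; the trade-off, as discussed below, is vertex-buffer memory and shader-constant pressure.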

Then again I did not read the thread so not sure what we are discussing here.

K-
 
DaveBaumann said:
N-Patches are not really supported in hardware for R300 and up, and there is no requirement for support of this or adaptive tessellation in DX9. IMO, anything to do with tessellation is a bit of a waste of time before DX10 anyway (and even then it's not a certainty that the tessellation engine will be a requirement or even supported).

Dave, anybody, I was wondering: why exactly are they a waste of time? I am not trying to be an ass, but when ATI first introduced it I was really excited; then it kind of fizzled. When I actually forced its use, I got errors and gaps in models and things. So the questions are: why is it a waste? Will it work better in the future? And why did it not work so wonderfully in the present?



As to the original topic: if the question is about which to buy based on eye candy, then it really doesn't matter; by the time either one is the minimum for games in general, they will both be obsolete.
 
If Crytek implements HDR, I wonder if they will also implement NV40's new full GI algorithm, which doesn't work without PS 3.0.

Note: The part about GI is FUD.
 
Kristof said:
Err, you could quite easily store multiple soldiers in the same vertex buffer and draw them with a single call...
VS constant memory is so small... one can't skin a decent number of soldiers with just one call (and we can't load new bones from a texture... emh.. :) )
Using the CPU to skin the soldiers and fill the VB just to avoid multiple DIP calls doesn't seem like a good idea (maybe in a demo, but not in a game, where the CPU could do a lot of more interesting things...)

ciao,
Marco
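nAo's constant-memory objection can be checked with back-of-the-envelope arithmetic. The sketch below is illustrative: the 256 float4 registers are the VS 2.0 minimum, the 4x3 bone matrix (3 registers) is a common choice, and the 30-bone / 16-reserved-register figures are assumptions, not numbers from the thread.

```cpp
// How many skinned characters fit in one draw call, given a fixed
// constant-register budget? With VS 2.0's minimum of 256 float4 registers,
// a 4x3 matrix (3 registers) per bone, and ~30 bones per character,
// constant memory runs out after only a couple of soldiers.
int soldiersPerCall(int constantRegisters, int reservedForOther,
                    int bonesPerSoldier, int registersPerBone) {
    int forBones = constantRegisters - reservedForOther;
    return forBones / (bonesPerSoldier * registersPerBone);
}
```

With 256 registers, 16 reserved for transforms and lighting, and 30 bones at 3 registers each, that is (256-16)/90 = 2 soldiers per call, which is why batching skinned characters the way Kristof describes hits a wall so quickly.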
 
pat777 said:
HDR
full GI algorithm

:( :( :( :(

* tries to decide whether it's time to jump out the window *

This is no longer a misuse of CG terms; it's abusing them as much as possible.
 
nAo said:
Kristof said:
Err, you could quite easily store multiple soldiers in the same vertex buffer and draw them with a single call...
VS constant memory is so small... one can't skin a decent number of soldiers with just one call (and we can't load new bones from a texture... emh.. :) )
But you could maybe draw ten sets of legs in one draw call, switch states, and then draw ten torsos and ten pairs of arms in a second draw call. That may still be unrealistic numbers for "state of the art" content, I don't know. Just use whatever granularity fits the limitations ;)
 
nAo said:
Kristof said:
Err, you could quite easily store multiple soldiers in the same vertex buffer and draw them with a single call...
VS constant memory is so small... one can't skin a decent number of soldiers with just one call (and we can't load new bones from a texture... emh.. :) )
Using the CPU to skin the soldiers and fill the VB just to avoid multiple DIP calls doesn't seem like a good idea (maybe in a demo, but not in a game, where the CPU could do a lot of more interesting things...)
We use the method Kristof is talking about for static objects, and we tried skinning on the CPU on a separate thread, but it was too slow on PC.

Interestingly, the current vertex frequency stuff doesn't help very much for lots of soldiers either; it's rare that they are all using the same animations (at least in our armies...), so the actual sharing of bone data is fairly low, and without vertex texturing you can't load enough data to be useful.

I wonder if bones in a vertex texture plus vertex frequency for the other instance data might be a win in PC land (if you can hide the latency by doing lots of vertex ops), but I haven't tried it.
 
I do really wonder how one might manage to render soldiers in this fashion. I would imagine that what you'd primarily want to use it for is, for example, an RTS game where you control units of soldiers instead of individual soldiers. In such a game, all soldiers may end up acting as one.

But yes, the uses of object instancing for moving objects are limited. It's more useful for static (or nearly static) objects, such as grass or foliage.
 
poly-gone said:
Vertex stream frequency is ignored for indexed geometry and for shaders below 3.0, so how did ATI implement their geometry instancing? Plus, without indexed geometry, you'd be wasting too much memory. So how have the hardware folks overcome this limitation?

Geometry Instancing is supported on VS 3.0 cards for all shader versions from 1.1 to 3.0.

Thomas
 
DaveBaumann said:
As for HL2, well, they don't have any choice now, do they? We are talking of ATI/Valve, and they make a nice couple... :rolleyes:

Let's deal with the facts, shall we? And this is not "no one" as you initially stated. Valve are already supporting HDR to an integer texture for the FX series, as well as float for ATI's shader 2.0 parts; we don't know if they are going to use blending or not yet.

The problem is that Valve is too much of a "friend" of ATI. I read again about the lighting in the Source engine and it still freaks me out: 1920 different shaders... And the results are not that brilliant, but they work well as a marketing thing...
But, again, on HDRI... They say in the PDF that they render the lighting equation of HL2 in a single pass. So they just need to write it down to an FP buffer and voilà. For this kind of situation, FP blending is not needed, because there's nothing to blend to or blend with. How much overbright or how good the glares they will get with this supposed HDRI is beyond me.
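The single-pass point is worth spelling out: if all lights are summed inside one shader invocation, the unclamped HDR result can be written straight to a float render target and tone-mapped in a later pass, so no framebuffer blending is ever required. A minimal sketch of the tone-mapping step, using Reinhard's operator purely as an illustrative choice (nothing in the thread says Valve uses it):

```cpp
// After the single lighting pass has written unclamped HDR values to an
// FP buffer, a tone-mapping pass compresses them into displayable range.
// Reinhard's operator, used here illustratively:
float toneMap(float hdr) {
    return hdr / (1.0f + hdr);  // maps [0, inf) into [0, 1)
}
```

FP blending only becomes necessary when lighting is accumulated across multiple additive passes into the float target, which is the case the single-pass design avoids.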

It is like saying that Quake3 already used PS2.0 but only took advantage of a small part of it called PS1.1.... :rolleyes:
 
The problem is that Valve is too much of a "friend" of ATI.

Hey I could say the same for Crytek

How much overbright or how good the glares they will get with this supposed HDRI is beyond me.

It's not beyond you; we already have functional HDR tech demos that work on R300 and up cards. If you want to find something out, just consult those demos and their tech docs.
 
reever said:
The problem is that Valve is too much of a "friend" of ATI.
Hey I could say the same for Crytek
Which is why they're supporting 3Dc, of course. :rolleyes:

Anyway, yes, you can do HDR on pre-SM3 hardware if you do all lighting in the vertex shader, but this isn't feasible for all situations, particularly not in PS 2.0.
 