How far can you realistically reduce texture usage?

Tim said:
You don't have to make different paths; the NVIDIA cards have no problem using render to vertex buffer. It might even be preferable, considering the lackluster implementation of vertex texturing (a lackluster implementation is of course better than no support at all).
Is that so? Have you got a link to any demos/docs for that (just out of curiosity!)?

I'll admit that what I know is based on reading around online - I'm not lucky enough to have my hands on both types of hardware ;)

Under D3D9, I was under the impression it was just selecting a custom FOURCC - so I suppose there's no reason why the NV drivers can't expose the same functionality.

Cheers,
Jack
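
As a rough sketch of the FOURCC idea Jack mentions (a minimal sketch only, assuming the extension is advertised through MAKEFOURCC('R','2','V','B') and checked as a surface format - the exact code and arguments are whatever the vendor defines):

#include <d3d9.h>

// Probe whether the driver advertises an R2VB-style FOURCC.
// The FOURCC value and the usage/resource-type arguments here are assumptions.
bool SupportsR2VB(IDirect3D9* d3d, UINT adapter, D3DFORMAT adapterFormat)
{
    const D3DFORMAT FOURCC_R2VB = (D3DFORMAT)MAKEFOURCC('R', '2', 'V', 'B');

    HRESULT hr = d3d->CheckDeviceFormat(adapter, D3DDEVTYPE_HAL, adapterFormat,
                                        0, D3DRTYPE_SURFACE, FOURCC_R2VB);
    return SUCCEEDED(hr);
}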
 
JHoxley said:
Is that so? Have you got a link to any demos/docs for that (just out of curiosity!)?

I'll admit that what I know is based on reading around online - I'm not lucky enough to have my hands on both types of hardware ;)

Under D3D9, I was under the impression it was just selecting a custom FOURCC - so I suppose there's no reason why the NV drivers can't expose the same functionality.

It seems I was a bit confused; for some reason I thought the functionality was already exposed on NVIDIA hardware, but that does not seem to be the case.

Dave writes this in his R520 review:
"Render to Vertex Buffer should be supportable by any DirectX9 (SM2.0 or 3.0) board, should the vendors choose to expose it. As pre-VS3.0 hardware does not have any vertex texture sampler support only one R2VB sampler can be specified, however with ATI's VS3.0 hardware the driver exposes up to five samplers and we believe that this could be equally supportable in NVIDIA's NV4x/G7x series."


As I see it, NVIDIA really has no reason to expose this functionality - if they did, they would pretty much nullify ATI's disadvantage from lacking vertex texturing.
 
Anteru said:
I remember reading an ATI presentation where they said that you could render directly to a vertex buffer (meaning you could generate or modify geometry) and that this extension would be available to developers soon. Does anyone know any details about that?

It's been in the drivers since Cat 5.9.
 
JHoxley said:
Yeah, it's called 'R2VB' (Render 2/To Vertex Buffer). It's annoyed more than enough devs - sure, the AAA top-grade devs can afford to make an ATI X1k-specific path and a GeForce 6/7 path, but a lot of us don't have the time/resources for that. It means that most people aren't going to use it much, because you'd have to implement a fairly advanced effect in two completely different ways :devilish:

The overlap is quite significant, so it should be fairly straightforward in most cases to support both.
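
For what it's worth, the path selection itself is only a couple of caps checks at startup. A sketch, reusing the hypothetical SupportsR2VB() probe above together with D3D9's standard vertex-texture query (the enum and function names are illustrative only):

#include <d3d9.h>

enum RenderPath { PATH_VTF, PATH_R2VB, PATH_CPU_FALLBACK };

RenderPath ChoosePath(IDirect3D9* d3d, UINT adapter, D3DFORMAT adapterFormat)
{
    // GeForce 6/7: D3DFMT_R32F can be queried as a vertex texture.
    if (SUCCEEDED(d3d->CheckDeviceFormat(adapter, D3DDEVTYPE_HAL, adapterFormat,
                                         D3DUSAGE_QUERY_VERTEXTEXTURE,
                                         D3DRTYPE_TEXTURE, D3DFMT_R32F)))
        return PATH_VTF;

    // Radeon X1k: the driver advertises the R2VB FOURCC instead.
    if (SupportsR2VB(d3d, adapter, adapterFormat))
        return PATH_R2VB;

    return PATH_CPU_FALLBACK;
}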

JHoxley said:
I don't know about ATI's R2VB extension, but I've heard that vertex texturing on the NVIDIA hardware is still fairly limited - both feature-wise (only one format) and in terms of performance, which is less than ideal.

R2VB can use any render target format that can meaningfully be accessed as a vertex buffer.
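
As an illustration of what "meaningfully accessed" can mean: a D3DFMT_A32B32G32R32F render target lines up one texel per vertex if the stream is declared as FLOAT4 positions. A sketch only - the declaration obviously depends on what your pixel shader actually writes:

#include <d3d9.h>

// One four-float texel of the render target = one FLOAT4 position.
IDirect3DVertexDeclaration9* CreatePositionDecl(IDirect3DDevice9* dev)
{
    const D3DVERTEXELEMENT9 elems[] =
    {
        // stream, offset, type, method, usage, usage index
        { 0, 0, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
        D3DDECL_END()
    };

    IDirect3DVertexDeclaration9* decl = NULL;
    dev->CreateVertexDeclaration(elems, &decl);
    return decl;
}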
 
But by rendering to a vertex buffer, wouldn't you have to lock the vertex buffer? And then you'd have to recreate the D3D device. So at the end of it all you get a similar penalty to nV's implementation of VTF.
 
Razor1 said:
But by rendering to a vertex buffer, wouldn't you have to lock the vertex buffer? And then you'd have to recreate the D3D device. So at the end of it all you get a similar penalty to nV's implementation of VTF.
Well, with nVidia's implementation of vertex texturing, you have to have instructions in the shader that can do the latency hiding for you. So there's typically a cost to doing the texture fetch within the shader for each vertex. Any context switching costs will be upfront costs, and will not be incurred per-vertex, but at a much higher granularity.
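
For comparison, the VTF side described in the reply above is just a texture bound to a vertex sampler, which the vertex shader then fetches from per vertex (paying the fetch latency each time). A sketch, with 'displacementTex' standing in for whatever R32F texture was generated elsewhere:

#include <d3d9.h>

void BindVertexTexture(IDirect3DDevice9* dev, IDirect3DTexture9* displacementTex)
{
    // Bind to the vertex pipeline's sampler rather than a pixel sampler.
    dev->SetTexture(D3DVERTEXTEXTURESAMPLER0, displacementTex);

    // GeForce 6/7 vertex textures are unfiltered, so point sampling only.
    dev->SetSamplerState(D3DVERTEXTEXTURESAMPLER0, D3DSAMP_MINFILTER, D3DTEXF_POINT);
    dev->SetSamplerState(D3DVERTEXTEXTURESAMPLER0, D3DSAMP_MAGFILTER, D3DTEXF_POINT);
    dev->SetSamplerState(D3DVERTEXTEXTURESAMPLER0, D3DSAMP_MIPFILTER, D3DTEXF_NONE);
}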
 
Razor1 said:
But by rendering to a vertex buffer, wouldn't you have to lock the vertex buffer? And then you'd have to recreate the D3D device. So at the end of it all you get a similar penalty to nV's implementation of VTF.

No. There are no locks or synchronizing events on the CPU side associated with rendering into a vertex buffer. The only clear overhead involved is the turnaround time to finish rendering to the VB before it can be made available to the vertex shader. Even that time is small, and if you design your rendering stream so that you do other rendering work between generating the rendered VB and using it as a vertex buffer, even that small overhead can be eliminated.
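
The ordering described above might look roughly like the sketch below. The r2vb_* calls and the g_* objects are hypothetical placeholders for the vendor-specific binding mechanism and your own resources; the only point being made is where in the frame each step goes:

#include <d3d9.h>

// Hypothetical placeholders - not a real API.
extern IDirect3DSurface9*           g_r2vbSurface;       // render target backing the "vertex buffer"
extern IDirect3DTexture9*           g_r2vbTexture;       // the same resource, viewed as a texture
extern IDirect3DSurface9*           g_backBuffer;
extern IDirect3DVertexDeclaration9* g_positionDecl;
extern IDirect3DPixelShader9*       g_generateVerticesPS;
extern UINT                         g_vertexCount;

void DrawFullscreenQuad(IDirect3DDevice9* dev, IDirect3DPixelShader9* ps);   // hypothetical
void RenderShadowMaps(IDirect3DDevice9* dev);                                // hypothetical
void r2vb_bind_render_target_as_stream(IDirect3DDevice9* dev, UINT stream,
                                       IDirect3DTexture9* tex);              // hypothetical
void r2vb_unbind_stream(IDirect3DDevice9* dev, UINT stream);                 // hypothetical

void RenderFrame(IDirect3DDevice9* dev)
{
    // 1. Pixel shader pass that writes vertex data into the R2VB render target.
    dev->SetRenderTarget(0, g_r2vbSurface);
    DrawFullscreenQuad(dev, g_generateVerticesPS);

    // 2. Unrelated work here (shadow maps, reflections, ...) hides the turnaround time.
    RenderShadowMaps(dev);

    // 3. Reinterpret the rendered data as stream 0 and draw with it.
    dev->SetRenderTarget(0, g_backBuffer);
    r2vb_bind_render_target_as_stream(dev, 0, g_r2vbTexture);
    dev->SetVertexDeclaration(g_positionDecl);
    dev->DrawPrimitive(D3DPT_POINTLIST, 0, g_vertexCount);
    r2vb_unbind_stream(dev, 0);
}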
 
Ah ok, thx guys. Weird though - on MS's developer site they mentioned that when rendering to a vertex buffer you end up losing the D3D device.

I was under the impression that to make changes to a vertex buffer, the buffer has to be locked.
 
Razor1 said:
Ah ok, thx guys. Weird though - on MS's developer site they mentioned that when rendering to a vertex buffer you end up losing the D3D device.
That really doesn't make any sense. A lost device scenario should not have anything to do with this...

Have you got a link/reference for this one?

Razor1 said:
I was under the impression that to make changes to a vertex buffer, the buffer has to be locked.
Aspects of GPU/CPU programming are the same as in regular concurrent/multiprogramming environments, so if the CPU wants to access a GPU resource it has to acquire a lock on it so that they don't screw each other over. So, yes, traditionally you will lock/unlock a VB to modify it - but that's because the CPU is the one doing the work. R2VB and VTF are GPU-based - the GPU owns the resources it's manipulating, thus the lock is not required.

It's the same as render-to-texture.

Cheers,
Jack
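
To make Jack's contrast concrete: the familiar CPU-side pattern below is where the Lock/Unlock pair comes from. With R2VB (or VTF) the GPU both writes and reads the data, so nothing like this appears anywhere in the frame. (A sketch; assumes a dynamic vertex buffer created with D3DUSAGE_DYNAMIC.)

#include <d3d9.h>
#include <cstring>

// Classic CPU fill: the CPU must acquire the resource before writing to it.
void FillVertexBufferFromCPU(IDirect3DVertexBuffer9* vb, const void* src, UINT bytes)
{
    void* dst = NULL;
    if (SUCCEEDED(vb->Lock(0, bytes, &dst, D3DLOCK_DISCARD)))   // CPU takes ownership
    {
        std::memcpy(dst, src, bytes);                           // CPU writes the vertices
        vb->Unlock();                                           // hand the buffer back to the GPU
    }
}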
 
http://discuss.microsoft.com/SCRIPTS/WA-MSD.EXE?A2=ind0303d&L=directxdev&D=1&P=12553

Here it is,

And it's our very own Deano C :LOL:

>You can have a separate render-target and vertex buffer and then
>lock and copy via CPU, which does allow it but is a lot slower
>and uses more RAM.

I'm not sure I'd call that "Rendering to a vertex buffer" - which is what
the thread asks about.

But we're in danger of getting lost in pure semantics here. We all agree
that it's a neat trick and can be useful, and that DirectX on PC doesn't
allow you direct access to the technique at this time.


There was actually another place where I saw this....

Ah thx JHoxley :)
 
Razor1 said:
Here it is
I think it's safe to say that I wouldn't base my understanding of the current situation and methods on a developer conversation that's three years out of date - chances are that things have moved on since then. ;)

Using the CPU to copy rendered data from a texture to a vertex buffer would indeed cause a lock, synchronization and horribly slow performance. That's exactly why you don't do it that way...
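
For contrast, the slow path being warned against looks something like the sketch below: GetRenderTargetData stalls until the GPU has finished rendering, and the two locks serialise the CPU and GPU. (Assumes a system-memory surface of matching size/format and tightly packed rows.)

#include <d3d9.h>
#include <cstring>

void CopyRenderTargetToVBViaCPU(IDirect3DDevice9* dev,
                                IDirect3DSurface9* renderTarget,   // video memory
                                IDirect3DSurface9* sysMemCopy,     // D3DPOOL_SYSTEMMEM, same size/format
                                IDirect3DVertexBuffer9* vb,
                                UINT bytes)
{
    // Forces the GPU to finish before the data can be read back.
    if (FAILED(dev->GetRenderTargetData(renderTarget, sysMemCopy)))
        return;

    D3DLOCKED_RECT rect;
    if (SUCCEEDED(sysMemCopy->LockRect(&rect, NULL, D3DLOCK_READONLY)))
    {
        void* dst = NULL;
        if (SUCCEEDED(vb->Lock(0, bytes, &dst, 0)))
        {
            std::memcpy(dst, rect.pBits, bytes);   // assumes rows are tightly packed
            vb->Unlock();
        }
        sysMemCopy->UnlockRect();
    }
}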
 
Tim said:
As I see it, NVIDIA really has no reason to expose this functionality - if they did, they would pretty much nullify ATI's disadvantage from lacking vertex texturing.
Actually, I think ATI should figure out a way to support VTF behind the scenes (though complex dependent texturing would be a pain). VTF gets a monstrous speed boost in a unified architecture, so ATI should try to get developers using VTF now so that it gets used significantly in R6xx's lifetime. Maybe the driver could render single-pixel point sprites to a texture in one pass and then stream it into the vertex shader in the next pass, repeating if necessary until you have the data to do the actual rendering pass.

BTW, Humus, when is the new Radeon SDK coming out? Didn't you say it would be released sometime around now? I really want to see some R2VB examples.
 
Maybe ATI's pinning its hopes on XB360 to get devs used to the idea of vertex texturing, as well as dynamic branching.

Jawed
 
Razor1 said:
But by rendering to a vertex buffer, wouldn't you have to lock the vertex buffer? And then you'd have to recreate the D3D device. So at the end of it all you get a similar penalty to nV's implementation of VTF.

Razor1 said:
Ah ok, thx guys. Weird though - on MS's developer site they mentioned that when rendering to a vertex buffer you end up losing the D3D device.

I was under the impression that to make changes to a vertex buffer, the buffer has to be locked.

To begin with, locking a vertex buffer does not make you lose the D3D device. If that were the case, it would be impossible to even use a vertex buffer, as you have to lock it to fill it with any data in the first place.

But anyway, you don't actually render to a vertex buffer per se, you render to a texture. This texture's memory is then reinterpreted as a vertex buffer. There's no actual vertex buffer object involved.
 