Report says that X1000 series not fully SM3.0?

991060 said:
The texture filtering algorithm depends on the size and position of the fragments on the screen.

When rendering to a VB, you're effectively rendering to an Nx1 render target; I don't know how the filtering unit can do any reasonable work in this situation.
This is exactly the same for real vertex texturing. If you want meaningful texture LOD in a vertex texture fetch, you must compute that LOD yourself. There is no automatic LOD at the vertex level.

And you can do the same thing on the fragment level. If you match up the "texels" from your "source vertex buffer texture" to the "pixels" you're going to write into the new vertex buffer, you have a 1:1 mapping texels to pixels automatically. And you really should do that, because you want to generate one new set of vertex attributes for every input vertex, after all. Minification/magnification don't make a whole lot of sense there.

In the 1:1-mapping case, no matter how fancy your filtering settings are, you'll get unfiltered samples from the base mipmap level. To sample from anywhere else you need LOD bias and or explicit (computed) LOD. This, too, is exactly on par with what you get at the vertex level.
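A quick sketch of what that explicit LOD computation looks like, in plain Python standing in for shader code (the step size is an assumed, application-chosen value, not anything from a real API):

```python
import math

def explicit_lod(texel_step_u, texel_step_v, tex_w, tex_h):
    """Mip level = log2 of the footprint, in texels, of one sampling step.

    This is the computation a vertex shader would have to do itself,
    since there is no automatic LOD at the vertex level.
    """
    footprint = max(abs(texel_step_u) * tex_w, abs(texel_step_v) * tex_h)
    return max(0.0, math.log2(footprint))

# 1:1 mapping: one step covers exactly one texel -> LOD 0 (base level).
assert explicit_lod(1 / 256, 1 / 256, 256, 256) == 0.0
# Stepping over 4 texels at a time -> sample from mip level 2.
assert explicit_lod(4 / 256, 4 / 256, 256, 256) == 2.0
```

In the 1:1 case the computed LOD is always 0, which is why you only ever see unfiltered base-level samples there.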
991060 said:
Also, a vertex is a mathematical definition; it occupies no space or area in 3D space or on the screen. The fact that a fragment can cover more than one texel is the reason we need texture filtering in the first place. For a vertex, why do we need such a capability?
Linear filtering is useful for making smooth transitions.

Say you put a heightfield into a vertex texture and apply it to a highly tessellated rectangle. If you scroll the heightfield around over the surface by smoothly varying the texcoords (which could easily be done by adding an FP32 VS constant to the texcoords before the lookup), the heightfield entries will, over time, "pop" around without filtering. A high peak surrounded by lows can't smoothly move to a position between two vertices. It can either be at vertex A or at vertex B. It cannot be halfway across unless there is another vertex between those two, but then we could recursively continue ad infinitum. That is no solution.

So you really want a linear filter to produce smooth in-betweens, and also to hide the limited resolution of your height field.
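A minimal sketch of that point (plain Python, no real graphics API): with linear filtering, a fractional texcoord offset yields in-between heights instead of "popping" from one stored entry to the next.

```python
import math

def linear_sample_1d(heights, u):
    """Linearly filtered fetch from a 1D heightfield, clamped at the edges."""
    x = u * len(heights) - 0.5   # texel centers sit at integer + 0.5
    x0 = math.floor(x)
    fx = x - x0
    def h(i):
        return heights[max(0, min(len(heights) - 1, i))]
    return h(x0) * (1 - fx) + h(x0 + 1) * fx

heights = [0.0, 0.0, 1.0, 0.0]  # a single peak at entry 2
n = len(heights)
center_of_texel_1 = (1 + 0.5) / n

# Unshifted, the vertex reading texel 1 sits at the bottom...
assert linear_sample_1d(heights, center_of_texel_1) == 0.0
# ...and after scrolling by half a texel it sits halfway up the slope:
assert linear_sample_1d(heights, center_of_texel_1 + 0.5 / n) == 0.5
```

With nearest sampling, that same half-texel scroll would return either 0.0 or 1.0 and nothing in between, which is exactly the popping described above.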

Beyond basic linear filtering, making the case for mipmapping on vertex textures is more complicated, but I'd rather have that capability as well.
991060 said:
What I mean by the loss of topology is that you can't use the original index buffer (by which the topology of the mesh is determined) when doing R2VB.
You don't need it. Rearranging the vertices in the order given by the index buffer ...
a) would produce an incompatible "R2VB" result. You'd have to write the rearranged original vertices as well, to match them up.
b) can lead to reprocessing of duplicates. The output buffer would be larger (in terms of vertices) than the input buffer.
You don't want that.

But no problem. You can process the vertices in any order, because vertex processing can't access neighbours anyway.

By virtue of the 1:1-mapping, the produced vertex buffer has the same ordering as the input buffer, so you can still use your original index buffer for rendering.
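An illustrative sketch of that 1:1 mapping (plain Python modeling the passes; none of these names come from a real API): vertex i of the source buffer is processed into texel i of an Nx1 target, so the output preserves the input ordering and the original index buffer remains valid.

```python
def r2vb_pass(src_vertices, transform):
    """Render-to-vertex-buffer modeled as a per-texel map over an Nx1 target.

    The "index buffer" for this pass is implicitly 0, 1, 2, ..., n-1,
    so no vertex is duplicated or reordered.
    """
    return [transform(src_vertices[i]) for i in range(len(src_vertices))]

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
index_buffer = [0, 1, 2, 1, 3, 2]  # two triangles, reused afterwards

out = r2vb_pass(src, lambda v: (v[0] * 2.0, v[1] * 2.0))

# Same ordering as the input, so the original indices still address
# the intended vertices in the next pass:
assert out[index_buffer[0]] == (0.0, 0.0)
assert out[index_buffer[3]] == (2.0, 0.0)
```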
 
zeckensack, you're right about the usage of filtering in vertex texturing. I simply overlooked the situation in which the mapping between vertices and vertex texture texels is not 1:1. Yet it raises another question: how do you compute the correct LOD for arbitrary meshes? That kind of computation requires knowledge of neighbouring vertices, which is not available in existing HW/APIs.


About the topology thing, I think we're talking about the two separate stages of doing R2VB. To preserve a compatible texture layout (the texture will be used as the vertex buffer in the next pass), you have to use an index buffer with content like "0, 1, 2, 3, ... n", and make sure each vertex is transformed to the correct position in the render target (i.e. the nth vertex is transformed to the nth fragment in the resulting render target). Then in the next pass, the original index buffer should be used, as you've said.
 
It seems to me that an important point being overlooked in all this is the delay that happened with R520, and what it suggests about the demand for this feature, particularly given that ATI says they can make it available with driver tricks.

We had reliable reports and indicators (readmes and such) that ISVs had R520 boards to play with in March. If there had been a hue and cry, 1) it would have leaked into the public sphere long before now, because ISVs know as well as the rest of us how to bring pressure to bear, and 2) ATI would have had ample time to code that driver workaround to make it transparent. It didn't leak. They didn't code it. Therefore there was no hue and cry, even after the ISVs found out about it.
 
Here's a review where vertex shading performance was tested with T&L, VS 1.1, VS 2.0 and VS 2.X/3.0 in RightMark. They also did some dynamic branching shader tests:

http://www.behardware.com/articles/592-3/ati-radeon-x1800-xt-xl.html

Unlike NVIDIA, ATI seems to have forgotten about Vertex Texturing support, which we thought was required for Vertex Shader 3.0 support. ATI says the opposite and this is quite odd. If we take a closer look, we see that ATI reports vertex texturing to DirectX, but doesn't authorize it for any texture formats. This looks suspicious and may be a clever way to avoid DirectX specifications and announce Vertex Shader 3.0 support without the use of Vertex Texturing. Either way, it's unclear and, in practice, Vertex Texturing isn't really important except for 2-3 technological demonstrations.
 
geo said:
They didn't. Therefore there was no hue and cry even after the ISVs found out about it.

word!

As far as I know the XL was the card everyone played and worked with (albeit at lower clocks / fewer pipes). But an instruction is just that: an instruction, right? A function call can be intercepted and split into several smaller hardware-supported calls and the odd software call to deliver the same result, maybe even at a higher speed.

Gawd, it's not like they took out TruForm or something like that!
 
A little off-topic:

OpenGL guy, I must apologize. The new ATI SDK contains the GI information (and some more) I was looking for on the developer page.
 
Could somebody tell me where I could find it officially stated (preferably by MS) that vertex texture fetch is not a must but an optional feature in SM3.0? Thanks in advance.
 
no-X said:
Could somebody tell me where I could find it officially stated (preferably by MS) that vertex texture fetch is not a must but an optional feature in SM3.0? Thanks in advance.
Here you can see that vertex texture fetch is a non-optional feature of VS 3.0. Clearly Microsoft has made an exception in the case of ATI's Radeon X1x00.
 
That page says nothing. It simply leads to pages that say "SM3.0 supports vertex texturing". Which R1K does :p Shame there are no surface formats to go along with that support though :LOL:
 
Certainly if you read here

It seems like VT is a must. The loophole is that DX is allowed to report that NO texture formats are supported, but to me this is kind of a violation of the spirit of the shader model.

It's like if I shipped you a 3D accelerator that only supported Gouraud shading because it reports *no* texture formats as supported, even though PS 2.0 suggests that the TEX instruction is a must.

So you support TEX vacuously. It can never be used, because no texture formats are supported, ergo, it passes the implementation unit test!
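A toy sketch of that vacuous pass (plain Python; the function names are illustrative, modeled only loosely on D3D9's per-format capability checks): every per-instruction requirement is met, yet the feature can never be exercised.

```python
# Hypothetical capability model: the caps report VS 3.0 (which implies
# the TEX instruction exists in the vertex shader), but the per-format
# query returns False for every vertex-texture format.
VERTEX_TEXTURE_FORMATS_SUPPORTED = set()  # empty: no formats at all

def supports_vs30():
    return True  # the caps bit is set

def supports_vertex_texture_format(fmt):
    return fmt in VERTEX_TEXTURE_FORMATS_SUPPORTED

# The "unit test" for the shader model passes...
assert supports_vs30()
# ...yet TEX in the vertex shader is unusable in practice, because no
# format can ever be bound as a vertex texture:
assert not any(supports_vertex_texture_format(f)
               for f in ("R32F", "A32B32G32R32F", "A8R8G8B8"))
```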
 
DemoCoder said:
It's like if I shipped you a 3D accelerator that only supported Gouraud shading because it reports *no* texture formats as supported, even though PS 2.0 suggests that the TEX instruction is a must.

The comparison is right, but Microsoft has closed this loophole for pixel shaders: you have to support at least three texture formats (4-4-4-4, 1-5-5-5, 8-8-8-8) there.
 
I just think that if MS had really intended for VT to be optional, they would have said so and made TEX in VS 3.0 optional. The fact that it's only optional implicitly, through the lack of texture formats, suggests to me that optional VT for SM3.0 was not the original intention when DX9 was authored.
 
All this hand-wringing is very, ugh, educational, but I don't see many games with VT giving NVidia SM3 cards an undisputed advantage in IQ.

The same thing goes for dynamic branching.

I tend to think the reason these two key differentiating features over SM2 are not more widely used in actual games you can buy is that their performance on the 18-month-old SM3 parts out there in the market stinks.

Is there a raft of VT-based games coming down the river in the next year?

Which do developers consider is actually more important? VT or DB?

Is VT much use in comparison to the geometry shading functionality of DX10? In other words, is VT without GS an exciting concept, or will capabilities really take a giant leap forwards when DX10 hits?

Or is it simply a crying shame that VT is a third-class citizen?

How long before we see VT in XB360 games?

Jawed
 
The point is, what was the intent of the original spec, and whether it has been subverted via a loophole.

Practical consequences? None.

But using loopholes leaves me with a queasy feeling.
 
I can imagine there were plenty of arguments about the practicalities of VT (it being a bad fit for vertex processing, with nothing to hide texture-fetch latency), and that if ATI saw a "unified architecture with out-of-order scheduling" as the only reasonable solution, ATI would have argued against VT simply on the grounds of its impracticality.

M$ went ahead anyway. ATI got what it wanted. NVidia got its marketing check box, eventually.

Does the PowerVR implementation hide texture latency?

Jawed
 
Jawed said:
Is VT much use in comparison to the geometry shading functionality of DX10? In other words, is VT without GS an exciting concept, or will capabilities really take a giant leap forwards when DX10 hits?

Vertex texturing doesn't only allow more advanced displacement mapping; DM isn't the only effect developers could realize with it. But if we concentrate on DM exclusively, IMHO it'll make even more sense with a geometry shader or programmable primitive processor, and far more still in a unified shader core where vertex or geometry textures aren't a latency headache any more.

Allow me to re-phrase ATI's marketing catchphrase for the X1k family: IMO it shouldn't read "SM3.0 done right" but "SM3.0 done better". Personally, I'd rather give the first label to Xenos.
 
Jawed said:
I'm assuming that does hide texture latency - but can you play games on one?

Jawed

It's a unified shader core, Jawed; I wouldn't expect you of all people to even ask me that. Can you play games on a Eurasia? That depends on what the target market is. Since you can already play games on MBX, it's the next step for the coming years in the PDA/mobile space, and not only there.
 
I'd rather ask than jump in with two left feet - since I don't know much about it, and Simon is saying so little.

Jawed
 