New demo

Chalnoth said:
But the FX does support float buffers in OpenGL, doesn't it? If anything, I think the problem is a difference in format. nVidia supports a packed buffer that supports up to 128 bits per pixel, into which you can place whatever information you desire. It may not be easy to translate this format to Microsoft's (which is obviously based on ATI's definition of a float buffer).

Chalnoth, you just don't ever stop, do you? ATi did not 'design' DirectX9. All the vendors had input - including NV initially - and it was built upon a lot of information from developers too... just as DirectX Next is doing now, with mailing lists up and running and discussions beginning on rendering techniques and items to be included...

Just because NV *can* do 128 bits per pixel doesn't mean that they will open up their drivers in such a way as to allow for the use of the render target types that all developers ask for... case in point, naturally, is HL2 - HDR is *being worked on* for NV cards because the floating point render target method Valve implemented initially won't work for NV cards, as this method is not opened up in the drivers yet - indeed, on this very board, debate had previously ensued over whether it *was* possible for them to provide this functionality or not... the same was seen in the rthdrbl (is that right?) demo...
 
Well, I wouldn't care about float buffers if they at least supported high precision fixed point render targets.
Remember that both ATI's HDR demo and HL2's HDR extension use high precision fixed point render targets.
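The fixed-point HDR idea above can be sketched in a few lines: map a chosen HDR intensity range onto a 16-bit integer texel and decode it on read-back. This is a minimal illustration, not the actual scheme from ATI's demo or HL2's extension; `HDR_SCALE` is a hypothetical scene-dependent constant.

```python
# Hedged sketch: storing HDR intensities in a 16-bit fixed-point
# render target by mapping a chosen HDR range onto [0, 65535].
# HDR_SCALE is a hypothetical constant, not a value from any real demo.

HDR_SCALE = 8.0  # assumed maximum representable intensity

def encode_fixed16(intensity):
    """Quantize an HDR intensity into a 16-bit fixed-point texel."""
    clamped = min(max(intensity / HDR_SCALE, 0.0), 1.0)
    return round(clamped * 65535)

def decode_fixed16(texel):
    """Recover the HDR intensity from a 16-bit fixed-point texel."""
    return (texel / 65535.0) * HDR_SCALE

# A bright value well above 1.0 survives the round trip with small error.
bright = 3.7
recovered = decode_fixed16(encode_fixed16(bright))
assert abs(recovered - bright) < 0.001
```

The cost of the fixed-point route is the fixed scale: anything above `HDR_SCALE` clamps, which is the kind of trade-off a true float buffer avoids.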

FP is overrated; the FX's problem is not that it has no FP support, it's the lack of any high precision RT.

But it looks like I'm repeating myself.
 
Chalnoth said:
Xmas said:
Basically everything in DX is optional. You sure can get WHQL certification for DX7 hardware.

GFFX doesn't support float buffers because the conditional FP16 format isn't in DX9 yet.
But the FX does support float buffers in OpenGL, doesn't it? If anything, I think the problem is a difference in format. nVidia supports a packed buffer that supports up to 128 bits per pixel, into which you can place whatever information you desire. It may not be easy to translate this format to Microsoft's (which is obviously based on ATI's definition of a float buffer).

DirectX has always required texture support to be orthogonal. You must support a texture format without (many) restrictions to claim support under Direct3D. This rule has existed since Direct3D was created; the only relaxation was the conditional non-POW2 flag, because enough devs asked for it. NVIDIA is probably getting a similar conditional flag to help support its weird restrictions.

You can blame one set of people for no floating point support on NVIDIA under DirectX9, and that's NVIDIA. They chose to build hardware knowing it wouldn't be able to support a standard feature of Direct3D (they aren't the only vendor hit by this restriction in the past; indeed, they can consider themselves special, as they're getting a conditional flag added).

NVIDIA's GFFX texture set is very odd compared to ATI's. ATI supports a wide range of formats with little/no restrictions. NVIDIA doesn't have 16-bit integer support, doesn't support mipmaps, has odd texture coordinate restrictions, etc.

Direct3D's support is exactly how it should be. OpenGL doesn't support any of these formats yet unless you use IHV extensions, in which case you're getting code that will not work on any other cards, so you can restrict to whatever you want. When OpenGL gets ARB support, that probably won't work on the GFFX either (I can't see the ARB allowing the limited set that the GFFX supports into the main API).
 
jimbob0i0 said:
Just because NV *can* do 128 bits per pixel doesn't mean that they will open up their drivers in such a way as to allow for the use of the render target types that all developers ask for...

I would guess that most developers don't ask for wrap mode on render targets. Heck, I don't even know why you'd want that. It's nice for orthogonality with other textures, but for render targets I don't see any situation in which it is useful. Mipmapping is useful in some situations, but seldom critical. Normalized coordinates are convenient, but hardly critical under any circumstance either.
 
Humus said:
jimbob0i0 said:
Just because NV *can* do 128 bits per pixel doesn't mean that they will open up their drivers in such a way as to allow for the use of the render target types that all developers ask for...

I would guess that most developers don't ask for wrap mode on render targets. Heck, I don't even know why you'd want that. It's nice for orthogonality with other textures, but for render targets I don't see any situation in which it is useful. Mipmapping is useful in some situations, but seldom critical. Normalized coordinates are convenient, but hardly critical under any circumstance either.

Wrap mode on render target textures is useful, usually to replace operations that could otherwise be done on the CPU and uploaded (like fake noise or compositing terrain textures; both benefit from higher precision formats). When you look at textures as generalised arrays, having wrap mode (essentially free modulus array indexing) is a handy feature. Not fatal, of course (you can do the wrap in the shader), but nice. If the restrictions were just for render targets (not textures), which would be fairly reasonable, then a blit to a non-render-target texture of the same format would have fixed the problem, but alas that's not how the hardware works.
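The "free modulus array indexing" point can be sketched with a hypothetical 1D texture. This is a simulation of the addressing math only, not of any real hardware; the texture contents are made up.

```python
# Hedged sketch: wrap-mode texture addressing behaves like modulus
# array indexing, which is what makes it "free" when the hardware
# supports it. 'texture' is a hypothetical 1D texture of width 4.

texture = [10, 20, 30, 40]
WIDTH = len(texture)

def sample_wrap(index):
    """Wrap addressing: out-of-range indices wrap around (modulus)."""
    return texture[index % WIDTH]

def sample_clamp(index):
    """Clamp addressing: out-of-range indices stick to the edge texel."""
    return texture[min(max(index, 0), WIDTH - 1)]

assert sample_wrap(5) == 20    # 5 % 4 == 1
assert sample_wrap(-1) == 40   # Python's % keeps the result positive
assert sample_clamp(5) == 40   # clamped to the last texel
```

Doing "the wrap in the shader", as mentioned above, amounts to performing this modulus per fetch instead of letting the addressing unit do it.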

It's unfortunate more than anything that they didn't notice that their restrictions would run afoul of the way Direct3D handles textures. 99% of the uses are fine on the NVIDIA hardware.

Thinking about it, in Direct3D 9 there is a way of specifying per-format restrictions (currently used for render targets, automipmapping and a few others), so MS could have added a few bits and this trouble would have been avoided...
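The per-format restriction mechanism described here boils down to a bitmask query per format. The sketch below models that idea in Python; the flag names echo D3D9-style usage queries, but the caps table and format names are hypothetical illustrations, not real driver data.

```python
# Hedged sketch: per-format capability bits of the kind described above.
# The caps table is invented for illustration only.

USAGE_RENDERTARGET  = 0x1
USAGE_AUTOGENMIPMAP = 0x2
USAGE_WRAP          = 0x4  # the kind of extra bit MS "could have added"

# Hypothetical caps table: format name -> supported usage bits.
FORMAT_CAPS = {
    "A8R8G8B8":      USAGE_RENDERTARGET | USAGE_AUTOGENMIPMAP | USAGE_WRAP,
    "A16B16G16R16F": USAGE_RENDERTARGET,  # float format with restrictions
}

def check_format(fmt, usage):
    """Return True only if the format supports every requested usage bit."""
    return (FORMAT_CAPS.get(fmt, 0) & usage) == usage

assert check_format("A8R8G8B8", USAGE_WRAP)
assert not check_format("A16B16G16R16F", USAGE_WRAP)
```

With bits like these, an app could ask up front whether wrap or automipmapping works on a given float format instead of the format being all-or-nothing.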
 
Humus said:
I would guess that most developers don't ask for wrap mode on render targets. Heck, I don't even know why you'd want that. It's nice for orthogonality with other textures, but for render targets I don't see any situation in which it is useful. Mipmapping is useful in some situations, but seldom critical. Normalized coordinates are convenient, but hardly critical under any circumstance either.
Since you can't display a floating point render target, the only real way to use them effectively within D3D is as texture inputs to a later pass. There is no real discrimination, and nor should there be IMHO, between a floating point texture that has been previously used as a render target, and a pre-authored floating point texture map that might require WRAP addressing behaviour, mipmapping or whatever. Hence the argument that you don't need wrap mode, mipmapping or normalised coordinates for floating point render targets is somewhat specious, because the more stringent requirements on the addressing come from the API requirement for support for pre-authored textures, not rendered ones.
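The "texture input to a later pass" pattern can be sketched as two passes over plain arrays: a scene pass writes HDR values into a float target, and a second pass samples that target and maps it to a displayable range. The tonemap operator `x / (1 + x)` is a common textbook choice, shown purely for illustration; nothing here is from a specific engine.

```python
# Hedged sketch of the two-pass pattern: a float render target is only
# useful as a texture input to a later pass, since it can't be displayed.

def scene_pass(width):
    """Pass 1: render HDR intensities into a float render target."""
    return [0.5 * x for x in range(width)]  # values exceed 1.0

def tonemap_pass(float_rt):
    """Pass 2: sample the float target as a texture and map to [0, 1)."""
    return [v / (1.0 + v) for v in float_rt]

hdr = scene_pass(8)      # contains values like 3.5, not displayable
ldr = tonemap_pass(hdr)  # every value now fits a displayable range
assert max(hdr) > 1.0
assert all(0.0 <= v < 1.0 for v in ldr)
```

The second pass is exactly where the addressing restrictions bite: the float buffer is being read as an ordinary texture.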

[EDIT]Fundamentally I don't understand the reason for so much limitation on texture addressing at all - why is there any difference between addressing a floating point texture map and a fixed point one? There are understandable differences in the difficulty of filtering higher precision components, but surely there should be no meaningful differences for addressing?[/EDIT]
 
andypski said:
Fundamentally I don't understand the reason for so much limitation on texture addressing at all - why is there any difference between addressing a floating point texture map and a fixed point one? There are understandable differences in the difficulty of filtering higher precision components, but surely there should be no meaningful differences for addressing?

No.. You mean that THE superior Nvidia filtering, textbook quality and so on, has large failure lists?

I don't believe it!

;)
 
andypski said:
[EDIT]Fundamentally I don't understand the reason for so much limitation on texture addressing at all - why is there any difference between addressing a floating point texture map and a fixed point one? There are understandable differences in the difficulty of filtering higher precision components, but surely there should be no meaningful differences for addressing?[/EDIT]

A faulty implementation qualifies as a good reason, no? Apparently the logic for proper addressing is not functioning. Or, lord forbid, it's simply not there -- they may have intentionally traded off the required transistors for, say, yet another combiner unit(tm). Whatever the causes might be, the outcome is a recorded mistake.
 
DeanoC said:
It's unfortunate more than anything that they didn't notice that their restrictions would run afoul of the way Direct3D handles textures. 99% of the uses are fine on the NVIDIA hardware.

Thinking about it, in Direct3D 9 there is a way of specifying per-format restrictions (currently used for render targets, automipmapping and a few others), so MS could have added a few bits and this trouble would have been avoided...

Yup, that's pretty much what I meant. Full orthogonality is nice, but seldom critical. I guess nVidia counted on MS to let them expose it one way or another.
 
andypski said:
Since you can't display a floating point render target, the only real way to use them effectively within D3D is as texture inputs to a later pass. There is no real discrimination, and nor should there be IMHO, between a floating point texture that has been previously used as a render target, and a pre-authored floating point texture map that might require WRAP addressing behaviour, mipmapping or whatever. Hence the argument that you don't need wrap mode, mipmapping or normalised coordinates for floating point render targets is somewhat specious, because the more stringent requirements on the addressing come from the API requirement for support for pre-authored textures, not rendered ones.

[EDIT]Fundamentally I don't understand the reason for so much limitation on texture addressing at all - why is there any difference between addressing a floating point texture map and a fixed point one? There are understandable differences in the difficulty of filtering higher precision components, but surely there should be no meaningful differences for addressing?[/EDIT]

The same restrictions apply to all floating point textures on the GFFX, regardless of whether they are render targets or not. But only for render targets do the restrictions make any sense. Well, the texture addressing restriction doesn't make sense under any circumstances, but the others do. ;)
 
Humus said:
Yup, that's pretty much what I meant. Full orthogonality is nice, but seldom critical. I guess nVidia counted on MS to let them expose it one way or another.
I have a feeling that what we might be seeing is a bid for the control of the direction of 3D graphics. The following is total speculation:

Microsoft started to feel that nVidia was dictating the direction of 3D graphics. nVidia felt that the API exists only to expose hardware features, and does not exist to dictate the future of 3D graphics to hardware companies.

From nVidia's perspective, they put far more work into developing the hardware than Microsoft puts into developing the software. It is much harder to make last minute changes in the hardware than it is to make last minute changes in the software. Therefore, Microsoft should change the API to expose whatever features nVidia supports.

But Microsoft didn't like this attitude, and believed that they should control the direction of 3D graphics. Therefore, Microsoft strategically decided to support ATI's design decisions with the R300. The decisions that ATI and nVidia made are different, and not necessarily worse, but the decisions that Microsoft made deliberately hurt the NV3x graphics cards' performance, image quality, and feature support in Direct3D.

I think that a more sane company would, after looking at the hardware features of both architectures, have supported the "lowest common denominator" of both, with possible extensions to support the special features of each. This would be better for developers and gamers as a whole, allowing the best support for current features under Direct3D.
 
Ok, let's imagine the following scenario:
nVidia releases the FX cards and puts developer docs on their site about how to use floating point buffers under D3D on FX cards.

It explains that due to the rules of D3D the standard formats are unavailable, but they have their own proprietary FourCC formats that can be used with restrictions.
- or -
It explains that due to the rules of D3D the standard formats are not reported as available but they can still be used with restrictions.
- or -
It explains that due to the rules of D3D the standard formats are only available if the program calls a special function from a library they provide, in which case they are available with restrictions.

They did none of those...
There's always a solution, but it seems like they just didn't want a solution badly enough.

Those solutions are not great - but they are far better than having no solution at all. Developers would use them.

Oh, I wrote this assuming the conspiracy theory is true - which I don't believe. ;)
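The proprietary FourCC route in the first option relies on packing a four-character tag into a 32-bit format code. The sketch below mirrors the MAKEFOURCC macro from the DirectX headers (four ASCII characters packed little-endian into a 32-bit value); the 'NVF1' tag is made up for illustration, while 'DXT1' is a real, well-known FourCC.

```python
# Hedged sketch: packing a FourCC format code, in the style of the
# DirectX MAKEFOURCC macro. The 'NVF1' tag is hypothetical.

def make_fourcc(c0, c1, c2, c3):
    """Pack four ASCII characters into a 32-bit FourCC code (little-endian)."""
    return ord(c0) | (ord(c1) << 8) | (ord(c2) << 16) | (ord(c3) << 24)

code = make_fourcc('N', 'V', 'F', '1')
# Round-trip: the little-endian bytes spell out the tag again.
assert code.to_bytes(4, 'little').decode('ascii') == 'NVF1'
```

Because FourCC codes live outside the enumerated D3DFORMAT values, a vendor can mint one without Microsoft's involvement - which is precisely why it's the only one of the three options that stays within the API's rules.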
 
Hyp-X said:
It explains that due to the rules of D3D the standard formats are unavailable, but they have their own proprietary FourCC formats that can be used with restrictions.
- or -
It explains that due to the rules of D3D the standard formats are not reported as available but they can still be used with restrictions.
- or -
It explains that due to the rules of D3D the standard formats are only available if the program calls a special function from a library they provide, in which case they are available with restrictions.
I'm not sure any of these solutions are viable within the Direct3D framework. There's a reason why there aren't specific, proprietary extensions in Direct3D.

And I'm pretty sure the second one would break WHQL certification.

Anyway, as for the conspiracy theory angle, all I'm doing is drawing conclusions from Microsoft's culture, a culture that they have shown time and again. That culture is that they will do anything for control of the computer market. This is linked to a huge number of monopolistic practices that Microsoft has been involved with in the past.
 
Chalnoth said:
Anyway, as for the conspiracy theory angle, all I'm doing is drawing conclusions from Microsoft's culture, a culture that they have shown time and again. That culture is that they will do anything for control of the computer market. This is linked to a huge number of monopolistic practices that Microsoft has been involved with in the past.
no, you are drawing conclusions from a body orifice - not your mouth, ears, or nose.

Your idea has been refuted in the past. I don't feel like re-hashing all of that - we've been there, done that on these boards.

Your ideas always seem to flip-flop based on your IHV preference. "Microsoft should code for the lowest common denominator" always seems to be there to benefit nVidia, but never ATI. In those cases, it's always "ATI is lacking blah blah blah". It's gotten quite humorous. Remember the 32-bit FP performance (a la your predictions before the GFFX was released - and your statements that FP24 was insufficient - quickly reversed to "FP16 is sufficient, the GFFX is better cause it's flexible!")?
 
Chalnoth said:
Anyway, as for the conspiracy theory angle, all I'm doing is drawing conclusions from Microsoft's culture, a culture that they have shown time and again. That culture is that they will do anything for control of the computer market. This is linked to a huge number of monopolistic practices that Microsoft has been involved with in the past.
Compelling.
I still prefer the "NVIDIA screwed it up big time" pov.
 
Chalnoth said:
But Microsoft didn't like this attitude, and believed that they should control the direction of 3D graphics. Therefore, Microsoft strategically decided to support ATI's design decisions with the R300. The decisions that ATI and nVidia made are different, and not necessarily worse, but the decisions that Microsoft made deliberately hurt the NV3x graphics cards' performance, image quality, and feature support in Direct3D.
Decisions such as including a partial precision modifier to improve the API's support for hardware that had really slow full-precision performance? That choice really hurt nVidia's performance a lot, compared to some other very valid choices Microsoft could have made, like not having a partial-precision modifier at all.
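The partial precision trade-off mentioned here can be demonstrated concretely: FP16 has a 10-bit mantissa versus FP32's 23 bits, so storing a value at half precision loses several orders of magnitude of accuracy. Python's `struct` module supports IEEE half floats, which makes a small, self-contained demonstration possible; this illustrates the format, not any particular GPU's behaviour.

```python
# Hedged sketch: what the _pp (partial precision) modifier trades away.
# We round-trip a value through FP16 and FP32 and compare the error.

import struct

def to_fp16(x):
    """Round-trip a float through IEEE 16-bit (half) precision."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

def to_fp32(x):
    """Round-trip a float through IEEE 32-bit (single) precision."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

value = 1.0 / 3.0
err16 = abs(to_fp16(value) - value)
err32 = abs(to_fp32(value) - value)
assert err16 > err32   # FP16 loses far more precision
assert err16 < 1e-3    # but the absolute error near 1.0 is still small
```

That last line is the whole argument for _pp: for many colour computations the FP16 error is invisible, so an API hook that lets slow-at-full-precision hardware opt into it is a genuine concession.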

A decision to severely restrict addressing modes for floating point textures that are freely available on other textures is, IMHO, a worse decision in every way than one that makes the addressing and treatment of these textures as orthogonal as possible to prior behaviour. The hardware to address texture maps is basically identical for high-precision and low-precision textures, so why introduce an artificial difference where none is required? These surfaces are there to be used as textures - why would a hardware designer think that basic addressing modes like WRAP would not be necessary? Or that it should only be possible to address these textures using un-normalised coordinates? It makes no sense to me at all.
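The normalized-versus-un-normalized point is easy to make concrete: converting between the two is a single multiply by the texture dimension, which is why an "un-normalized only" restriction saves the hardware so little. `WIDTH` below is a hypothetical texture size.

```python
# Hedged sketch: normalized [0, 1) coordinates versus un-normalized
# texel indices. The conversion is one multiply per coordinate.

WIDTH = 256  # hypothetical texture width

def normalized_to_texel(u):
    """Map a normalized coordinate in [0, 1) to an integer texel index."""
    return int(u * WIDTH)

# A shader stuck with un-normalized addressing must perform this
# multiply itself before every fetch.
assert normalized_to_texel(0.25) == 64
assert normalized_to_texel(0.999) == 255
```

The catch is that the shader must know `WIDTH`, so code written against normalized coordinates stops being resolution-independent once the restriction forces the conversion into the shader.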
 
Hyp-X said:
It explains that due to the rules of D3D the standard formats are unavailable, but they have their own proprietary FourCC formats that can be used with restrictions.
Certainly possible, although it requires holding up your hands and saying "Our formats are not as good as the standard ones", which is probably not a very attractive option. ;)

I'm also not sure if FourCC formats can be attached as render targets - I don't see why not, but it wouldn't be the first time that an API restricts something like this.

It explains that due to the rules of D3D the standard formats are not reported as available but they can still be used with restrictions.
Depending on the behaviour of the run-time this might fail validation and result in rendering errors, and the behaviour of the runtime towards this could not be guaranteed from one API release to the next, so things might suddenly stop working. Not a great option.
It explains that due to the rules of D3D the standard formats are only available if the program calls a special function from a library they provide, in which case they are available with restrictions.
Possible, and easier for developer support than the FourCC method, but this is not really legal within the constraints of the API while the FourCC method is.
 