NV40: 6x2/12x1/8x2/16x1? Meh. Summary of what I believe

Isn't that more of a sign towards the future, though? After all, we're talking about NVIDIA's integer support in NV3x, which isn't FX16 - the signals from future versions of DX indicate that FX16 will be there, so it's not much of a surprise that OGL1.5/2.0/GLSLANG will have been designed with that in mind, as there is likely to have already been a consensus between at least ATI and NVIDIA that FX16 will occur in the future.
 
I agree. What it says is that people view integer support as an important feature, that GLSLANG supports declaring integers, and that future hardware can be designed to run these integer operations in parallel with the FP ops (current HW will just execute them in FP registers).

NV30 isn't GLSLANG compliant, but the idea of having some integer registers and some integer ALUs around isn't bad for some algorithms that GPUs need to run. The integer ALUs are cheaper, and you don't need as many, since they are most likely going to be used for looping and flow control.

I just wanted to point out that the integer units aren't "wasted", and that although it started out as a proprietary NVidia extension, future versions of DX and OGL will include integers (FX16).
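To make that concrete, here's a minimal, hypothetical GLSL-style sketch (my own example, not anything from NVIDIA or the ARB spec): the language lets you declare integers and use them for loop counters and flow control, and it's up to the compiler/hardware whether those end up in FP registers or on dedicated integer ALUs.

Code:
// Hedged sketch (hypothetical shader): GLSL allows integer declarations for
// loop counters and flow control. NV3x-class hardware can simply carry these
// in FP registers; future hardware could run them on small dedicated integer
// ALUs in parallel with the FP colour math.
uniform sampler2D tex;

const int NUM_TAPS = 4;                  // integer declaration in GLSL

void main()
{
    vec4 sum = vec4(0.0);
    for (int i = 0; i < NUM_TAPS; i++)   // integer used only for flow control
    {
        sum += texture2D(tex, gl_TexCoord[0].xy + float(i) * vec2(0.01, 0.0));
    }
    gl_FragColor = sum / float(NUM_TAPS);
}

Note that the int never touches the colour math itself, which is why a handful of cheap scalar integer ALUs would be enough for this kind of work.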
 
In the context of this discussion, though, it bears little relevance - the FX units of NV3x aren't supported by the core APIs in conjunction with floating point operations.
 
DaveBaumann said:
In the context of this discussion, though, it bears little relevance - the FX units of NV3x aren't supported by the core APIs in conjunction with floating point operations.

Thank you, Dave. I was like, wtf, I don't ever remember the FX's integer formats being supported in OpenGL.
 
Well, I read the context as "integer units are wasted" -> "but NVidia allows them to run in parallel" -> "but integers aren't in OGL/DX". NVidia had the right idea early, but the ARB adopted FX16 (*after* NV30 had shipped, I might add, so it is not a case of NVidia "not adhering to spec", since there was no spec), so NVidia's future cards need to move up to FX16.
 
DemoCoder said:
Well, I read the context as "integer units are wasted" -> "but NVidia allows them to run in parallel" -> "but integers aren't in OGL/DX". NVidia had the right idea early, but the ARB adopted FX16 (*after* NV30 had shipped, I might add, so it is not a case of NVidia "not adhering to spec", since there was no spec), so NVidia's future cards need to move up to FX16.

We just looked at it in a different way; it's no problem with me. I freely admit when I'm wrong, as you all know far more than I do.

But as it stands, OpenGL doesn't support NVIDIA's integer method, so I don't see the problem with DX9 not supporting it either. Which is where my comments come from.
 
DemoCoder said:
Well, I read the context as "integer units are wasted" -> "but NVidia allows them to run in parallel" -> "but integers aren't in OGL/DX". NVidia had the right idea early, but the ARB adopted FX16 (*after* NV30 had shipped, I might add, so it is not a case of NVidia "not adhering to spec", since there was no spec), so NVidia's future cards need to move up to FX16.

Sorry - how is adopting a specification that wasn't in either core API's current release, and that subsequently isn't supported in future APIs, the "right idea early"? Surely the right idea would be to adopt the correct format when the APIs were adopting it as well? It's kinda like saying that ATI got the correct future format spot on with R200 (being FX16)! It would appear that NVIDIA didn't even concur that it was the right idea, seeing as they removed the format from subsequent parts in that generation for better support of the current APIs.
 
DaveBaumann said:
Sorry - how is adopting a specification that wasn't in either core API's current release, and that subsequently isn't supported in future APIs, the "right idea early"? Surely the right idea would be to adopt the correct format when the APIs were adopting it as well? It's kinda like saying that ATI got the correct future format spot on with R200 (being FX16)! It would appear that NVIDIA didn't even concur that it was the right idea, seeing as they removed the format from subsequent parts in that generation for better support of the current APIs.

:oops:
What on earth are you trying to say? Do you know how innovation works? Do you know why one of the most important aspects in design/engineering is to be able to foresee what's going to be the next big thing? I am going to close my eyes now and pretend you never said the above/assume you were drunk/just woke up.
 
DoS said:
What on earth are you trying to say? Do you know how innovation works? Do you know why one of the most important aspects in design/engineering is to be able to foresee what's going to be the next big thing? I am going to close my eyes now and pretend you never said the above/assume you were drunk/just woke up.

Since when does not turning out to be ultimately right mean that Dave was accusing nVidia of not trying to be innovative?
 
What on earth are you trying to say? Do you know how innovation works? Do you know why one of the most important aspects in design/engineering is to be able to foresee what's going to be the next big thing?

For starters here, wrt FX12 support in NV30, are we talking about "the next big thing" or "old legacy support"? IMO there is a fine line between them, and clearly it's not the "next big thing", as both DirectX Next and OGL call for 16-bit integer support, not FX12.

Design innovation also calls for making the right decisions at the right time. It's clear that DX9 shaders do not call for integer support; however, for compatibility, previous versions need to be supported. Which is the more innovative approach to DX9 support: the one that takes legacy support literally and includes integer units, or the one that decides to remove integer support entirely and handle it internally in FP? I would suggest that for this instance the latter approach appears to have been the better one; clearly NVIDIA agrees, as that's exactly what they did subsequently with the rest of their PS2.0 generation.

However, that doesn't mean that innovation stops just because one generation doesn't support a format. When DX Next is about in a few years' time I fully expect everyone to support FX16 and FP32; their innovation is going on now and they are ratifying it with the API shapers.
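A rough sketch of why handling the legacy formats internally in FP works, assuming FX12 is a 12-bit fixed-point format with a range of roughly [-2, 2): any such value is exactly representable in an FP32 mantissa, so a classic DX8-era modulate2x like the hypothetical GLSL shader below gives the same result whether it runs on a dedicated FX12 unit or in the FP pipeline.

Code:
// Hedged sketch (hypothetical shader): a legacy modulate2x blend expressed as
// ordinary FP math. Every FX12-style fixed-point input value fits exactly in
// an FP32 mantissa, so evaluating this in FP loses no precision versus a
// dedicated integer/fixed-point unit.
uniform sampler2D baseMap;
uniform sampler2D lightMap;

void main()
{
    vec4 base  = texture2D(baseMap,  gl_TexCoord[0].xy);
    vec4 light = texture2D(lightMap, gl_TexCoord[1].xy);

    // modulate2x: multiply, double, saturate - the classic PS1.x combine
    gl_FragColor = clamp(base * light * 2.0, 0.0, 1.0);
}

Even the product of two 12-bit fixed-point values still fits within FP32's 24-bit significand, so nothing is lost by dropping the dedicated units.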
 
DaveBaumann said:
For starters here, wrt FX12 support in NV30, are we talking about "the next big thing" or "old legacy support"? IMO there is a fine line between them, and clearly it's not the "next big thing", as both DirectX Next and OGL call for 16-bit integer support, not FX12.

Design innovation also calls for making the right decisions at the right time. It's clear that DX9 shaders do not call for integer support; however, for compatibility, previous versions need to be supported. Which is the more innovative approach to DX9 support: the one that takes legacy support literally and includes integer units, or the one that decides to remove integer support entirely and handle it internally in FP? I would suggest that for this instance the latter approach appears to have been the better one; clearly NVIDIA agrees, as that's exactly what they did subsequently with the rest of their PS2.0 generation.

However, that doesn't mean that innovation stops just because one generation doesn't support a format. When DX Next is about in a few years' time I fully expect everyone to support FX16 and FP32; their innovation is going on now and they are ratifying it with the API shapers.

Now we understand each other.
Still, IMO there were obviously some very good reasons for the engineers of the industry leader (which has revolutionised 3D graphics and has in the past shown the way to API makers and game devs on numerous occasions) to take the design decisions they did. It didn't work out for them for various reasons (underestimating ATi also helped a lot), but beating every choice they made to death is meaningless and unfair.

CAN WE PLEASE HAVE THE NEXT GEN CARDS NOW?
:LOL:
 
DoS said:
Still, IMO there were obviously some very good reasons for the engineers of the industry leader (which has revolutionised 3D graphics and has in the past shown the way to API makers and game devs on numerous occasions) to take the design decisions they did.

What does 3dfx have to do with this? :D
 
DaveBaumann said:
No, 3dfx aren't industry leaders anymore - clearly he's talking about Intel!

Clearly you're both wrong. He's talking about Microsoft.
 
Xmas said:
Wrong. GLslang supports 16-bit signed integers, but that doesn't mean it supports FX16.
Huh?

That statement makes no sense. FX16 doesn't really even exist. FX12 was a term coined by nVidia to describe their 12-bit integer format. Nobody has a 16-bit integer format that is called FX16.
 