That is what I think he is trying to say
Heathen said: That is what I think he is trying to say
I take it I was pretty close to what you wanted to say, huh? Haha
jvd said: I think his point is that the OpenGL panel did not vote this in. So you saying it's a shame it's not in D3D is stupid, as it's not in OpenGL.
DaveBaumann said: In the context of this discussion, though, it bears little relevance - the FX units of NV3x aren't supported by the core APIs in conjunction with floating point operations.
DemoCoder said: Well, I read the context as "integer units are wasted" -> "but NVidia allows them to run in parallel" -> "but integers aren't in OGL/DX". NVidia had the right idea early, but ARB adopted FX16 (*after* NV30 was shipped, I might add, so it is not a case of NVidia "not adhering to spec", since there was no spec), so NVidia's future cards need to increase to FX16.
DemoCoder said: Wrong, GLSLANG supports FX16 integers.
Wrong. GLslang supports 16-bit signed integers, but that doesn't mean it supports FX16.
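(For readers untangling this exchange: the disagreement hinges on "16-bit signed integer" and "FX16 fixed point" not being the same thing. Below is a minimal sketch of the distinction in C, assuming a hypothetical s1.14 fixed-point layout purely for illustration - neither API spec mentioned in this thread defines that exact encoding.)

/* Sketch only: the s1.14 layout is an assumed, illustrative encoding,
 * not something GLslang, DX9, or NV3x documentation is being quoted on. */
#include <stdio.h>
#include <stdint.h>

/* A plain 16-bit signed integer stores whole numbers in [-32768, 32767].  */
/* A 16-bit fixed-point ("FX16"-style) value reuses the same bits as a     */
/* fraction: assume 1 sign bit, 1 integer bit, 14 fraction bits (s1.14),   */
/* giving a representable range of roughly [-2.0, 2.0).                    */
static double fx16_to_double(int16_t raw)
{
    return (double)raw / (double)(1 << 14);   /* divide by 2^14 */
}

int main(void)
{
    int16_t raw = 0x3000;                              /* 12288 as an integer */
    printf("as int16 : %d\n", raw);                    /* prints 12288        */
    printf("as s1.14 : %f\n", fx16_to_double(raw));    /* prints 0.75         */
    return 0;
}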
DaveBaumann said: Sorry - how is adopting a specification that wasn't in either core API's current release, and subsequently not supported in future APIs, the "right idea early"? Surely the right idea would be to adopt the correct format when the APIs were adopting it as well? It's kinda like saying that ATI got the correct future format spot on with R200 (being FX16)! It would appear that NVIDIA didn't even concur that it was the right idea, seeing as they removed the format from subsequent parts in that generation for better support of the current APIs.
DoS said: What on earth are you trying to say? Do you know how innovation works? Do you know why one of the most important aspects in design/engineering is being able to foresee what's going to be the next big thing? I am going to close my eyes now and pretend you never said the above / assume you were drunk / just woke up.
DaveBaumann said: For starters here, wrt FX12 support in NV30, are we talking about "the next big thing" or "old legacy support"? IMO there is a fine line between them, and clearly it's not the "next big thing", as both DirectX Next and OGL call for 16-bit integer support, not FX12.
Design innovation also calls for making the right decisions at the right time. It's clear that DX9 shaders do not call for integer support; however, for compatibility, previous versions need to be supported. Which is the more innovative approach to DX9 support - the one that takes legacy support literally and includes integer units, or the one that decides to remove integer support entirely and handle it internally in FP? I would suggest that for this instance the latter approach appears to have been the better one - clearly NVIDIA agrees, as that's exactly what they did subsequently with the rest of their PS2.0 generation.
However, that doesn't mean that innovation stops just because one generation doesn't support a format. When DX Next arrives in a few years' time, I fully expect everyone to support FX16 and FP32 - their innovation is going on now, and they are ratifying that with the API shapers.
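(A minimal sketch of the "handle it internally in FP" approach described above: a legacy fixed-point multiply-add carried out in ordinary floating point and then clamped back to an assumed [-2, 2] legacy range. The range, the function names, and the single-clamp structure are illustrative assumptions, not a description of any actual driver or hardware path.)

#include <stdio.h>

/* Clamp back to the assumed legacy fixed-point range of [-2, 2]. */
static float clamp_legacy_range(float x)
{
    if (x < -2.0f) return -2.0f;
    if (x >  2.0f) return  2.0f;
    return x;
}

/* Emulate a legacy fixed-point mad (a * b + c) entirely in floating point. */
static float legacy_mad_in_fp(float a, float b, float c)
{
    return clamp_legacy_range(a * b + c);
}

int main(void)
{
    /* 1.5 * 1.5 + 0.5 = 2.75, which the assumed legacy range clamps to 2.0. */
    printf("%f\n", legacy_mad_in_fp(1.5f, 1.5f, 0.5f));
    return 0;
}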
DoS said: Still, IMO there were obviously some very good reasons for the engineers of the industry leader (which has revolutionised 3D graphics and has in the past shown the way to API makers and game devs on numerous occasions) to take the design decisions they did.
DaveBaumann said:No, 3dfx aren't industry leaders anymore - clearly he's talking about Intel!
Xmas said: Wrong. GLslang supports 16-bit signed integers, but that doesn't mean it supports FX16.
Huh?