Chalnoth said:FX16 doesn't really even exist. FX12 was a term coined by nVidia to describe their 12-bit integer format. Nobody has a 16-bit integer format that is called FX16.
While we are nitpicking, FX12 is NOT an integer format...
Chalnoth said:Now that's about the most ludicrous thing I've ever heard. It's certainly not a floating-point format.
DemoCoder said: Wrong, GLSLANG supports FX16 integers.
Xmas said: Wrong. GLslang supports 16-bit signed integers, but that doesn't mean it supports FX16.
DaveBaumann said: Sorry - how is adopting a specification that wasn't in either core API's current release, and subsequently not supported in future APIs, the "right idea early"? Surely the right idea would be to adopt the correct format when the APIs were adopting it as well?
digitalwanderer said: There ain't nothing wrong with adding new features, but they really should have worried about having their card be capable of performing the existing standards before they started tripping the cinematografantastic.
DemoCoder said: So you believe IHVs should design their hardware *ONLY* to what's in the spec, and not add any new features that aren't already standardized, but instead wait 12 months to 2 years for it to be hashed out in committee?
Demirug said: FX12 is a fixed point format.
Chalnoth said: Now that's about the most ludicrous thing I've ever heard. It's certainly not a floating-point format.
The difference is superficial as far as I am concerned. It only affects how you interpret numbers, not how (most) operations are actually performed.
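To make that "interpretation only" point concrete, here is a minimal sketch in Python, assuming FX12 is a signed 12-bit fixed-point value with 10 fractional bits (the roughly [-2, 2) range usually attributed to it); the exact layout here is an illustrative assumption, not a statement about any particular chip:

    FRAC_BITS = 10                    # assumed number of fractional bits
    SCALE = 1 << FRAC_BITS            # 1.0 is stored as the integer 1024

    def to_fx12(x):
        """Encode a float as a signed 12-bit fixed-point integer, with clamping."""
        raw = int(round(x * SCALE))
        return max(-2048, min(2047, raw))    # 12-bit two's-complement range

    def from_fx12(raw):
        """Interpret the stored integer as a fixed-point value."""
        return raw / SCALE

    a, b = to_fx12(0.5), to_fx12(1.25)
    print(from_fx12(a + b))                  # 1.75  -- a plain integer add
    print(from_fx12((a * b) >> FRAC_BITS))   # 0.625 -- integer multiply plus a shift

Addition and subtraction are ordinary integer operations; only the multiply needs the extra shift, which is where the "interpretation" shows up.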
DemoCoder said: So you believe IHVs should design their hardware *ONLY* to what's in the spec, and not add any new features that aren't already standardized, but instead wait 12 months to 2 years for it to be hashed out in committee?
C'mon DemoCoder, they screwed up....quit trying to justify it.
Sorry, I thought you were trying to defend their 16/32 design aspect.
Hey, if you've read my posts since the NV30 was released, I have more than criticised them for the mistakes they made and I was probably the most disappointed of anyone on this board. But I like to distinguish between *bad implementations* and *bad ideas*, and I prefer to criticize the idea instead of engaging in mindless corporate shilling or bashing.
Many people on this board seem to practice guilt-by-association: since NV30 had problems, or since Derek Perez is evil, every architectural idea coming from NVidia must therefore be just as bad. To me, integer units are a great idea. I also like NVidia's gradient instructions, pack/unpack, and predicates. Whether or not they are usable in the NV30 is not relevant, since I believe they are fine ideas that should be part of the standard and other IHVs should adopt them.
NVidia created a pipeline with DDX/DDY, integer ops (but a bad implementation, limited to FX12 only), pack/unpack, and predicates before there was a spec. NVidia was able to get predicates and DDX/DDY into 2.0 extended and 3.0; pack/unpack didn't make it. [A rough sketch of what a gradient instruction computes follows this post.]
I give them credit for that. I give ATI credit for adding centroid sampling when it wasn't in the spec, then successfully lobbying to have it included. Even if their first implementation was busted, I still would have given them credit. Uber buffers? Another good idea whose time has come.
I mean, come on Dave, you can't expect the development teams working today, who are designing for 18 months from now, to wait for Microsoft or ARB to tell them what to do, and you can't expect features designed for 18 months from now to be agreed upon by everyone, especially since it gives away what the IHVs are working on.
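On the gradient instructions mentioned a few posts up: DDX/DDY conceptually return how a shader value changes between adjacent pixels of a 2x2 quad. A rough CPU-side sketch of that idea in Python (the function name and the 2x2 layout are illustrative assumptions, not how any particular GPU implements it):

    def quad_gradients(quad):
        """quad: [[top_left, top_right], [bottom_left, bottom_right]] values.
        Returns (ddx, ddy) as simple per-pixel finite differences."""
        (tl, tr), (bl, br) = quad
        ddx = tr - tl    # coarse horizontal rate of change (br unused in this variant)
        ddy = bl - tl    # coarse vertical rate of change
        return ddx, ddy

    # Example: a value that grows by 0.25 per pixel in x and 0.1 per pixel in y.
    print(quad_gradients([[0.0, 0.25], [0.1, 0.35]]))   # -> (0.25, 0.1)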
DemoCoder said: (DX8 ps1.0, 1.1-1.3, and 1.4 are the best examples of this: specs which exist purely to "map" IHV-specific extensions and features. Each revision was pretty much one IHV.)
Was this the case when DX9.0 was drafted? Recalling the post below, it seems that in the case of DX9.0 certain issues were settled long enough beforehand that changes could possibly have been made. Or is that reading too much into it?
DemoCoder said: Irrelevant. You think ATI and NVidia are going to divulge to each other what they are working on for the NV50/60 and R500/600 and agree now, 2 years out, what features should be in the standard? That's not how it works, Dave. Companies do not want to divulge their future designs early in standards working groups. These chips are being designed, and the major architectural features "locked in", over a year before the next round of DirectX first drafts are even published.
Dave H said: Very interesting point about from-scratch vs. evolutionary designs. Getting back to the original issue: do you think the decision to base DX9 around FP24 was sealed (or was at least evident) early enough for Nvidia to have redesigned the NV3x fragment pipeline accordingly without taking a hit to their release schedule? (And of course the NV30 was realistically planned for fall '02, before TSMC's process problems.) Obviously a great deal of a GPU design has to wait on the details of the API specs, but isn't the pipeline precision too fundamental to the overall design? Or is it?
sireric said: The FP24 availability was at least 1.5 years before NV30 "hit" the market. I'm sure MS would have been very reasonable about informing them earlier, or even working with them on some compromise. I have no idea what actually "happened". I'm not sure how that fits in with their schedule, but I think it's safe to say that they had time, if they wanted it. I don't believe in the TSMC problems. I agree that low-k was not available, but the 130nm process was clean by late spring '02, as far as I know. The thing that people fail to realize is that MS doesn't come up with an API "out of the blue". It's an iterative process with the IHVs, and all of us can contribute to it, though it's controlled, at the end, by MS.
Dave H said: Or, since you might not be able to speak to Nvidia's design process: was R3x0 already an FP24 design at the point MS made the decision? If they'd gone another way--requiring FP32 as the default precision, say--do you think it would have caused a significant hit to R3x0's release schedule? Or if they'd done something like included a fully fledged int datatype, would it have been worth ATI's while to redesign to incorporate it?
sireric said: No, it wasn't. I don't think FP32 would have made things much harder, but it would have cost us more, from a die-cost perspective.
[A rough numeric comparison of these formats follows this exchange.]
...and apparently the answer is no. Which brings to mind the question of why Nvidia stuck with a 4x2 for NV30 and NV35, if not because they didn't have the transistor budget to do an 8x1. Two ideas spring to mind. First, that they were so enamored of the fact that they could share functionality between an FP32 PS 2.0 ALU and two texture-coordinate calculators that they went with an nx2 architecture. Second, that they planned to stick with a 128-bit wide DRAM bus after all; that NV35 is not "what NV30 was supposed to have been", but rather the quickest way to retrofit improved performance (particularly for MSAA) onto the NV30 core; and that if NV35's design seems a little starved for computational resources compared to its impressive bandwidth capabilities (particularly w.r.t. PS 2.0 workloads), that's because it was just the best they could do on short notice.
I can't speculate too much, but I agree with some of your posts. In the end, the GF4 was a 4x? architecture (was it x1 or x2 -- I don't remember). A natural evolution of that architecture would still be a 4x2. A radical change there might be more than their architecture could handle.
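For a sense of what the FP16/FP24/FP32 trade-off discussed above means numerically, here is a back-of-the-envelope Python comparison using the commonly cited layouts (FP16 = 1 sign/5 exponent/10 mantissa bits, FP24 = 1/7/16, FP32 = 1/8/23); whether the GPU formats handle rounding and specials exactly like IEEE is not assumed here, only the bit widths are compared:

    FORMATS = {
        "FP16": (5, 10),   # (exponent bits, mantissa bits)
        "FP24": (7, 16),
        "FP32": (8, 23),
    }

    for name, (exp_bits, man_bits) in FORMATS.items():
        bias = 2 ** (exp_bits - 1) - 1
        epsilon = 2.0 ** -man_bits      # spacing of representable values near 1.0
        max_val = 2.0 ** (bias + 1)     # rough magnitude ceiling, ignoring specials
        print(f"{name}: ~{man_bits + 1} significant bits, "
              f"eps ~ {epsilon:.1e}, max ~ {max_val:.1e}")

On those assumptions FP24 carries roughly 17 significant bits against FP32's 24, which is the precision gap the exchange above is really about.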
You think ATI and NVidia are going to divulge to each other what they are working on for the NV50/60 and R500/600 and agree now, 2 years out, what features should be in the standard? That's not how it works, Dave. Companies do not want to divulge their future designs early in standards working groups. These chips are being designed, and the major architectural features "locked in", over a year before the next round of DirectX first drafts are even published.
Moreover, DirectX isn't the only relevant standard. ARB can't be bullied around by just 2 vendors the way ATI and NVidia can influence the Microsoft monopoly.
-DC
Sitting member on 3 W3C, 2 IETF, 2 OMA, and 2 JCP working groups.
ATI has features in the R300 which went beyond DirectX 9.0. Clearly, the process of developing the R300 wasn't "let's design DX9 on paper with MS first, then implement the HW later".
3dLabs designed and shipped hardware that became the basis for their OpenGL2.0 proposals, which will become the basis for their next hardware, which will end up fully compliant with the spec.
DemoCoder said:So you believe IHVs should design their hardware *ONLY* to what's in the spec, and not add any new features that aren't already standardized, but instead wait 12 months to 2 years for it to be hashed out in committee? Perhaps you think we should get rid of the OpenGL extension mechanism that allows vendors to expose features in their HW that Microsoft won't let them, and that the only features that can be exposed in OpenGL are those that ARB adds to the Core?
Hey, why even have R&D? Let other IHVs and Microsoft do it for you, then just implement their spec, right?
I've been sitting on standards committees for years, and it's simply not how it works. What you do is lobby to get your vision and your features supported, and then once the spec reaches a working draft, you start trying to do two things:
1) alter your design to track the working draft
2) expose features which did not make it into the working draft in a nice and friendly way
But this is late in the game, and it is not always possible to make radical changes, so you end up non-compliant.
If we had to live in a world with "design by committee", we'd be f*cked.
This idea that you don't add any features that aren't in current or future specs is a naive fairy tale.
DemoCoder said: And the biggest reason: 3dLabs put forth one hell of a nice, clean design.