Why doesn't Nvidia simply lock NV3x cores at fp16

Craig Peeper from the MS DirectX team was also at ATI's Shader Day, and Lars specifically asked him about the FP precision specs of DX9. Craig confirmed that there was an initial typo suggesting the minimum requirement was FP16, but since the spec explicitly defined the partial precision mode as FP16, it would have been illogical to conclude that FP16 also satisfied the full precision mode, even with the typo.
 
demalion said:
Your commentary seems to treat accuracy as an acceptable casualty in a battle to propose your viewpoint. :-?

The partial precision hint, and using fp16, is DX 9 compliant. Period.
Forcing fp16 in place of fp32, without the _pp hint, is not. Period.
The NV3x is DX 9 compliant. Period.

Mixing and matching select bits of phrasing related to these without regard to veracity, to suit your opinion, is not a suitable basis for a "platform of attack", and this manufactured premise seems the only "factual basis" for everything said throughout. Therefore, the rest of your discussion based on this premise seems to be purely rhetorical.

My observation: Even when you do a lot of work with reinforcing wording and posturing to make it sound valid to you, it doesn't actually become any more valid when the premise is flawed. :-?
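To make the precision distinction above concrete: under DX9, the _pp hint permits FP16, while full precision demands more (FP24 minimum for ps_2_0). Here's a rough, hypothetical sketch (my own illustration, not anything from the spec or from shader hardware) using Python's stdlib `struct` to round through IEEE 754 half and single precision, showing why silently substituting FP16 where full precision is expected can visibly change results:

```python
import struct

def round_fp16(x: float) -> float:
    # Round-trip through IEEE 754 half precision (10-bit mantissa).
    return struct.unpack('e', struct.pack('e', x))[0]

def round_fp32(x: float) -> float:
    # Round-trip through IEEE 754 single precision (23-bit mantissa).
    return struct.unpack('f', struct.pack('f', x))[0]

# Accumulate a small increment 1000 times at each precision,
# as a long shader computation might.
x32 = 0.0
x16 = 0.0
for _ in range(1000):
    x32 = round_fp32(x32 + 0.001)
    x16 = round_fp16(x16 + 0.001)

# FP32 stays very close to the ideal 1.0; FP16 drifts noticeably,
# because its coarse mantissa rounds away part of each addition.
print(abs(x32 - 1.0), abs(x16 - 1.0))
```

In a shader, the analogous choice is whether an instruction carries the _pp modifier: with it, the driver may legally drop to FP16; without it, substituting FP16 is exactly the kind of unrequested precision loss sketched here.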

Well, the problem was that after I made the original post I saw the flaw in it and knew someone would call me on it...yea, it was pretty weak...:) As to why I proceeded with such an inane, lengthy rebuttal of myself I have no clue. I will ask everyone to pardon my excesses here...I am truthfully sitting here with a fever, a hacking cough, and liberal doses of analgesics. (Not offered in the way of excuse, but apology...;))
 
kyleb said:
lol ya it didn't quite seem like the WaltC i know; get well soon! :)

Thanks, man...;) This stuff makes me feel like I've had ten strong cups of coffee...or something...
 
Detailed recognition of a problem and apology? That's about all that can conceivably be asked for on the issue at this point.
 
WaltC said:
People are upset to discover that the hardware nVidia has been touting, pushing, and selling into the market all year long as "DX9" hardware is not actually DX9 compliant. That is the root and branch of the complaint. Just talking about the fp pipeline and nothing else--fp16 is not DX9 compliant. Period. nVidia knew this long before nV3x shipped, just as did ATi. The difference was that one company chose to make an API-compliant product and the other did not.

I seriously doubt the specs are solidified far enough ahead for a company to make such considerations ... it is more a luck of the draw issue.

I personally think it wouldn't be all that bad for m$ to just plonk down a specification designed in pretty much a vacuum without any hardware in the pipeline ... but at the moment success depends partly on the hardware and partly on how much influence you can get over DX. I personally would like to see benchmarks with fixed 16 bpp precision to know how much of their present predicament is because of failing in that second part. The evidence seems to suggest that they just plain failed on both fronts.
 
demalion said:
Detailed recognition of a problem and apology? That's about all that can conceivably be asked for on the issue at this point.

Well, I really had no business sounding off on the "non-DX9 compliant" line...which is a gross oversimplification, if not a misrepresentation of the situation. That better?...;) Yep, cthellis42 had it pegged right...

And now that I see dig's post in response to my own ramblings...I say thanks much for the compliment...but I did skew off tangent here...so it's not time to stand and applaud....yet. 8)

OK, D, have I satisfied your instinct for persecution?...*chuckle* Just kidding...
 
MfA said:
I seriously doubt the specs are set far enough ahead for a company to make such considerations ... it is more a luck of the draw issue.

I personally think it wouldn't be all that bad for m$ to just plonk down a specification designed in pretty much a vacuum without any hardware in the pipeline ... but at the moment success depends partly on the hardware and partly on how much influence you can get over DX. I personally would like to see benchmarks with fixed 16 bpp precision to know how much of their present predicament is because of failing in that second part. The evidence seems to suggest that they just plain failed on both fronts.

I intended to mention this somewhere else but then forgot about it...I have a clear recollection of Scott Sellers at 3dfx, when the company was still in business, talking about how, although DX8 had just been released, DX9 would be the "important" step forward, and he talked about 3dfx's input with M$ at the time for DX9. I recall he said something like, "Some of the things we're doing and working on really won't be relevant until DX9." I thought it was extremely interesting that even as 3dfx was just releasing its first DX8 drivers (which were also its last, I believe) he'd be talking about DX9--at a time when DX8 had clearly yet to get off the ground in terms of developer support. I didn't understand it then, but I certainly do today...

I suppose it's possible that it might boil down to luck of the draw, but I find it unlikely that ATi hit the nail on the head out of sheer lucky guessing...I think nVidia did make some unfortunate guesses, however, about its own architecture relative to the .13 micron process, and about what competitors were likely to be able to field in terms of architecture. The more that becomes evident about the nV3x architecture, the stranger nVidia's insistence on .13 for nV30 becomes, at least to me.
 
MfA said:
They foresaw they needed a higher clock.

Right, which was architecture-related...had they had an 8x1 architecture, the need for the higher clock would have been nowhere near as critical.
 