NV30 processor result evangelism

andypski said:
There have been examples in the past where the spec says how things are supposed to be, but the de facto standard ends up being whatever a particular IHV happened to implement. Then other IHVs come along and implement things correctly, and get 'bugs' in existing software because people have written their software expecting the other IHV's incorrect behaviour. The really daft thing is that the IHVs that implement things correctly are the ones who get blamed for driver bugs.

Well, we had the PowerVR guys saying almost exactly the same thing not so long ago. So, you and PowerVR... by a process of elimination... ;)
 
Chalnoth said:
Very possible, but remember that nVidia still has the larger marketshare, and the release of a low-cost DX9 part (NV34) will essentially guarantee that nVidia will have more DX9 parts than ATI.

Just because they have NV34 does not mean that is going to happen; they need significant OEM wins for it to happen. I suspect that 9600-class products will work their way down to that end of the market very quickly as TSMC ramps up 130nm further, and the low-end 9600s aren't exactly slouches.
 
Seen any recent WHQL drivers from Nvidia for the NV30? No, didn't think so.

I was thinking along the lines of something more drastic, especially if MS and Nvidia get into a staring match.

"Microsoft Corp, today announce that it does not consider the NV3X class of video accelerators suitable of DirectX9 use. Blah Blah Blah" That sort of thing.
 
I find it hard to recommend any NVIDIA products except for ultra low-end parts like the GF4MX for people on very tight budgets.

I'm actively selling and recommending the ATI cards at the moment, and it seems that people are catching on that ATI is where it's at.

I can only see more of this happening, even as the GFFX 5200 is released. It is a big letdown, and even the average consumer will eventually cotton on.

If you consistently execute poorly, what else can you expect? No fairies will save you (ask 3dfx).
 
I think an interesting point here is that the NV3x model for shader execution differs significantly from the model that is implied by both DX9 PS2.0 and the OpenGL 2.0 high-level shading language.

Presumably, NVidia had a lot of pull with both standards-setting organizations. Did they just not lobby hard enough for the execution model they wanted to support (big benefits from using reduced precision/ints where possible and a fast integer register combiner stage)? Did the other stakeholders gang up against NVidia because they threatened to dominate the graphics market?

Or (conspiracy theory here) did NVidia consciously allow these standards to go forward "broken" for their hardware, in the hope that developers, eyeing NVidia's marketshare, would abandon the IHV-independent standards and embrace the NVidia-controlled Cg platform? If so, this would have been a much more effective strategy if NV30 had been released on time and in a clear market leadership position.
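
As a rough aside on the precision point: the PS2.0 model assumes fairly high-precision floating point throughout the pixel pipeline, while NV3x's fastest paths are fp16 and fixed-point. The numpy sketch below is purely my own illustration (nothing to do with real shader hardware); it just shows that for colour-style math such as a clamped N.L diffuse term, fp16 is usually indistinguishable from fp32 once the result is quantised to an 8-bit framebuffer, which is exactly the kind of case where a reduced-precision path pays off.

```python
import numpy as np

# Toy diffuse (N.L) term for a batch of "pixels", evaluated at two precisions.
# Purely illustrative -- real PS2.0 hardware pipelines work very differently.
rng = np.random.default_rng(0)

normals = rng.normal(size=(10000, 3)).astype(np.float32)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

light = np.array([0.3, 0.8, 0.52], dtype=np.float32)
light /= np.linalg.norm(light)

def diffuse(n, l, dtype):
    # Clamp the dot product to [0, 1], as a fixed-function-style diffuse term.
    d = np.clip((n.astype(dtype) * l.astype(dtype)).sum(axis=1), 0, 1)
    return d.astype(np.float32)

full = diffuse(normals, light, np.float32)  # stand-in for full-precision shading
half = diffuse(normals, light, np.float16)  # stand-in for reduced-precision shading

# Express the worst-case difference in 8-bit framebuffer steps;
# for this kind of math it comes out well under one step.
print("max error in 8-bit colour steps:", np.abs(full - half).max() * 255)
```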
 
I think nVidia stuck to their 'proprietary' business model. It just so happened that doing things their "own way" resulted in something less functional than the competition, not just in features but performance as well. They wanted the GF FX to strengthen development ties to their specific architecture, so they committed to both keeping it unique to themselves and making it similar to their past design.

Another idea that goes along with that is their commitment to a unified driver model, where keeping the architecture similar lets them leverage past optimization effort in the "new" architecture. In this light, it could be said that the architectural elements they tried to carry forward were just too limited.

Finally, I think there was some perception that they set the trend, and that the market would follow where they led. That would be consistent with their whole approach to the NV30. They may have learned better with the 8500 from ATi (the engineers, at least, and the leadership if they didn't get too caught up in their success), but I think it is a case of the subtle semantic difference between "momentum" and "inertia" in describing their evolution. I'm still wondering, as I have been for a while, how their engineering will adapt.

All of the above tie into their developer relations strategy, IMO.

I don't think any of these are anyone's fault besides nVidia's. I think this follows from what had been going on for some time before the R300 launch. I also think this is a good opportunity for competition to return to the 3D space beyond just two parties, and I hope certain companies are up to taking advantage of the opportunity. *poke* Kristof and friends
 
Folks, enabling ClearType in your display properties will allow you to read that last part... heh
 