Heathen said:Of course they study the implications. That doesn't mean they judge correctly. That's what I was talking about. And the fact that companies don't often judge correctly is borne out almost daily.
Yeah, it's a shame nvidia misjudged the situation so badly; never thought I'd hear you admit it though, Chal.
Seriously though, you still show no real appreciation of the complexities of project management. The fact that the R300 and its derivatives achieved such popularity on their own strengths proves ATi made a significant number of correct decisions. Defining 'correctly' is very ambiguous; doing it in black and white is next to impossible unless you get a complete implosion of the company that made that decision, and even then there are still benefits for the industry (whatever industry that is), assuming they're willing to learn.
In our specific case it could actually be argued that Nvidia is in a better theoretical position than ATi because they have more to learn from the NV3* project than ATi has to learn from the R3** project.
The only important questions are:
1) Can they learn from what's occurred? (So far consumers seem more bothered about performance and IQ than about theoretical objections to the differences between FP24 & FP32; see the precision sketch after this post.)
2) Can they (or, more importantly, are they willing to) apply these lessons to next-generation parts?
If the answer to either question is no then nvidia is in trouble; ATi's issues are less apparent but no less important for them to answer. Hopefully both companies are up to the challenge, as resting on your laurels and believing things are rosy, while dissing the opposition, is the easiest thing to do in the world.
A stupid person has more to learn than a clever person, but who is in the better position?
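To put rough numbers on the FP24/FP32 point above: the difference is essentially mantissa width, and the bit layouts assumed here (FP16 = 1 sign / 5 exponent / 10 mantissa bits, FP24 = 1/7/16, FP32 = 1/8/23) are the commonly cited ones, so treat this as an illustrative sketch rather than a statement about either vendor's exact hardware behaviour:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Assumed mantissa widths for the shader float formats discussed above. */
    const char *name[]      = { "FP16", "FP24", "FP32" };
    const int   mant_bits[] = { 10, 16, 23 };

    for (int i = 0; i < 3; ++i) {
        double eps = pow(2.0, -mant_bits[i]);          /* relative rounding step */
        printf("%s: ~%.1f decimal digits (eps = %g)\n",
               name[i], mant_bits[i] * log10(2.0), eps);
    }
    return 0;
}

That works out to roughly 3, 5, and 7 decimal digits of precision respectively; with frame buffers at 8 bits per channel, the FP24-versus-FP32 gap rarely shows on screen, which fits the observation that buyers judged the cards on speed and visible IQ.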
nelg said:I think that it is wrong to believe that nV’s acceptance of failure with the nV30 lies solely with its performance. IMHO, it was just one part of a strategy that they were going to use to wrestle API design away from Microsoft. They thought that they would do it their way with CG and multiple precision. We all know how it ended, but the nV30 was only one piece of the puzzle.
Finally!
DemoCoder said:
Cart before the horse. Why is it with NVidia people have to come up with such asinine theories? So NVidia spent hundreds of millions of dollars, not to design a chip to sell for money, but with the goal of stealing control of an API?
I guess the GF2 was just an attempt to control OpenGL with proprietary NV extensions eh?
DemoCoder said:Cart before the horse. Why is it with NVidia people have to come up with such asinine theories? So NVidia spent hundreds of millions of dollars, not to design a chip to sell for money, but with the goal of stealing control of an API?
I guess the GF2 was just an attempt to control OpenGL with proprietary NV extensions eh?
Designing a graphics chip is all about budgeting transistors and deciding how to allocate them most efficiently. For our R300 DirectX 9 architecture (used in the RADEON 9500, 9600, 9700, and 9800 series), our designers resisted the temptation to add a lot of unnecessary features that we felt game developers would not use, or that would not noticeably improve speed or image quality. Instead, they kept very close to the base DX9 specification and added more pipelines, more shader units, and better compression technology. Not only did this allow our products to easily outperform the competition, but it allowed us to do it with lower clock speeds, with fewer transistors, and without requiring a lot of additional software optimization (which is really appreciated by game developers).
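As a back-of-the-envelope illustration of that trade-off, peak pixel fill rate is roughly pixel pipelines multiplied by core clock, so a wider design can lead even at a lower clock. The pipeline counts and clock speeds below are the commonly quoted figures for the 9700 Pro and FX 5800 Ultra, used here only as illustrative assumptions (NV30's 4x2 arrangement is counted as 4 colour pipes):

#include <stdio.h>

struct gpu { const char *name; int pixel_pipes; int core_mhz; };

int main(void)
{
    /* Commonly quoted (assumed) pipeline counts and core clocks. */
    struct gpu parts[] = {
        { "RADEON 9700 Pro (R300)",       8, 325 },
        { "GeForce FX 5800 Ultra (NV30)", 4, 500 },
    };

    for (int i = 0; i < 2; ++i) {
        /* peak single-textured fill rate in megapixels per second */
        long fill = (long)parts[i].pixel_pipes * parts[i].core_mhz;
        printf("%-32s %d pipes x %d MHz = %ld Mpixels/s\n",
               parts[i].name, parts[i].pixel_pipes, parts[i].core_mhz, fill);
    }
    return 0;
}

Real performance also hinges on memory bandwidth and per-clock shader throughput, but the raw arithmetic shows how extra pipelines can offset a clock-speed deficit.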
DemoCoder said:
Cart before the horse. Why is it with NVidia people have to come up with such asinine theories? So NVidia spent hundreds of millions of dollars, not to design a chip to sell for money, but with the goal of stealing control of an API?
I guess the GF2 was just an attempt to control OpenGL with proprietary NV extensions eh?
It's a quirk of fate that Nvidia had anything to do with the Xbox - or any other Microsoft product. In Nvidia's early days, Huang tried to end-run Microsoft with a proprietary programming interface. The decision, he now admits, almost killed Nvidia. In a move of desperation, he directed his engineers to build GPUs to work with Microsoft's Direct3D standard. It not only saved the company, but established a partnership that eventually led to the contract to develop the chipset for the Xbox - a contract worth as much as $500 million a year.
thatdude90210 said:martrox said:then just what was CG and the lack of DX9 compatibility in the FX series all about?
I always thought it was to cover up the lack of PS power of the NV30, and more importantly the whole FX line, since the NV30 was supposed to be the base for their chips for at least 2 years.
AndrewM said:DemoCoder, I agree with you completely. Unfortunately some people don't understand what Cg is, let alone what nvidia were trying to accomplish with it. It was a good step forward in developer tools and support. Nvidia is still supporting Cg, as well as the other high level languages. The quality of FXComposer just goes further to prove their dev relations support.
Also, I totally agree with the API/standards/specs thing.