Roy Taylor: "Only Brits care about ATI"

You're forgetting the chain of events that forced prices downwards.

In the alternative history:
  • G80 retains enthusiast class from October to February, with price holding up since there is no competition
  • G94 launches in January at ~$250 alongside RV670 at the same or a bit less
  • G92 launches in February/March at $400/$350 in its initial 9800GTX/GT guises. The last few G80s are still available...
  • The prices of G94/92 hold up because G92 never had to compete against a $250 or less RV670. There would even be space for a 9800GS between 9600GT and 9800GT ;)
Jawed

The price of the G94 (9600 GT) holds up at $250 ?
I'm sorry, but that is completely wrong. Last time I checked, it was selling at $150 on Newegg.

I wouldn't call a full $100 price cut in just a quarter "holding up". For a little more than $250 you can now get a 9800 GTX...
 
Jawed said:
In the alternative history: ...
I have NO idea why anyone would think G94 was ever supposed to come out before G92? The former is an A11, the latter is an A12... Now, I agree a much better strategy all around would have been:
1) Release 256-bit G94.
2) Release 64-bit G98.
3) Release 320-bit G92.
4) Release 128-bit G96.
5) Release 512-bit GT200.

Of course, rumours were that G98 and G96 would come out sooner, so presumably they had problems there and perhaps needed a completely new tape-out. I still think G94 coming first and G92 coming with a 320-bit bus would have been a much better strategy, both in terms of inventory management (all eventualities) and to achieve a coherent performance line-up. Oh well, whatever.

I also don't think the plan ever was for G94 to cost as much as $250. $200, perhaps, who knows... Which reminds me: where is the 9600GTS? Hmmm.
 
When RV670 was reviewed and already available, NV took a big hit to their margins, because they had to respond to it. G92 was cutting into their margins right from the start. G94 was released months later and is still well above 200mm², while RV670 is only ~190mm². Now, if NV had released G94 when ATI released RV670, this wouldn't have been much of a problem. However, NV only had G92 ready by that time, and they needed to bring it to market as fast as they could. ~190mm² vs. ~330mm² isn't exactly what NV had in mind, and they didn't realize ATi was ready until it hit them by surprise.

Business-wise, what matters is what you get in return. So, all in all, ATI gave them a hard time, but only because NV vastly underestimated ATi. This should not happen twice. This is competition and ATi did a good job this time.
You have to bear in mind that even ATI was taken by surprise by RV670's release date. ATI was working on the assumption that they would need an A12 silicon revision, because ATI chips invariably did. (R600 even went to A13). The fact that the A11 version was actually bug-free surprised everybody, ATI included. One can hardly blame Nvidia for not anticipating an ATI launch that ATI wasn't anticipating either.

I think it's likely that Nvidia was bounced into releasing G92 earlier than originally intended. I suspect that's what led to the current naming fiasco (8800GT outperforming 640MB 8800GTS, etc.)
 
One can hardly blame Nvidia for not anticipating an ATI launch that ATI wasn't anticipating either.
You also can't anticipate everything that happens in this world. If one of the famous Taiwan earthquakes hits and supply (of PCBs) is critically constrained for several months because of damage to fab equipment, there's not much you can do. You could probably expect something like that to happen, since it's not something that's completely unusual.

What I'm trying to say is that NV cannot anticipate everything, but since NV works in this business and knows that ATi is a hidden shark that can still (relatively speaking) fight back if given the chance (market share, pricing, etc.), NV has to adjust its strategy to that if it hasn't already.

NV wants to earn money, and new ASICs are always a costly and time-consuming process, but bringing G92 to market first probably wasn't the right decision in this case. If NV can really learn from that, this isn't going to be much of a problem.

nicolasb said:
I think it's likely that Nvidia was bounced into releasing G92 earlier than originally intended.
Yes, that's pretty likely and also confirmed: in the last CC Marv said that they were focused on getting the product out, and that was because they had no other ASIC ready that could have seriously competed with RV670 without damaging their great margins.
 
I have NO idea why anyone would think G94 was ever supposed to come out before G92? The former is an A11, the latter is an A12... Now, I agree a much better strategy all around would have been:
1) Release 256-bit G94.
2) Release 64-bit G98.
3) Release 320-bit G92.
4) Release 128-bit G96.
5) Release 512-bit GT200.

I've always thought that the G92 was technically a mid-end part. So I suppose they never opted for a 320-bit-or-wider memory interface and 768+ MB of memory because there was no need.
All in all it meant that the performance level of the 8800GTX/Ultra just became much more affordable through G92. Much like how in the past a 7600GT would be much cheaper than a 6800 card, but deliver similar performance... The G92 was just 'too fast' to really be a mid-end part...
But the actual high-end part with a 320+ bit bus and more than 512 MB of memory, to truly replace the 8800GTX/Ultra, simply never surfaced. I suppose the 9800GX2 was more cost-effective for nVidia than developing another G92 variant, and with no pressure from AMD at all, nVidia had full control of how fast the fastest card would be. They'd only be outperforming themselves, so what's the point?
 
I've always thought that the G92 was technically a mid-end part. So I suppose they never opted for a 320-bit-or-wider memory interface and 768+ MB of memory because there was no need.
Too bad the end result was truly subpar, though. You don't make a chip viable for the mid-range market just by saying it's aimed at it; you're also supposed to, you know, come up with a coherent and balanced design...

If they wanted G92 to be mid-range, what they should really have done is make it a 6C/256-bit SKU, and G94 should have been 3C/192-bit. If clocked sufficiently high, G92 would still have been competitive with all RV670 SKUs and G94 would have hit an intriguing gap between 256-bit and 128-bit GPUs.
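
(To put a rough number on that 256-bit vs. 128-bit gap: peak memory bandwidth scales linearly with bus width at a given memory clock. The sketch below is purely illustrative; the 2.0 GT/s effective GDDR3 rate is an assumed round figure, not any specific SKU's spec.)

# Rough peak memory bandwidth by bus width, assuming the same hypothetical
# 2.0 GT/s effective GDDR3 data rate on every configuration.
def peak_bandwidth_gb_s(bus_width_bits, effective_rate_gt_s=2.0):
    return (bus_width_bits / 8) * effective_rate_gt_s  # bytes per transfer * transfers per ns

for width in (512, 320, 256, 192, 128, 64):
    print(f"{width:>3}-bit bus: ~{peak_bandwidth_gb_s(width):.0f} GB/s")

A 192-bit part would land squarely between the 256-bit and 128-bit points, which is the gap being referred to.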

Anyway there's no point speculating here. There are a billion different possible line-ups that could have resulted in better financial performance had all the data been known in advance, and there also are plenty that would have likely been better no matter what. Talking about it now isn't going to change anything, and last I heard NVIDIA isn't quite dead because of it just yet... :)
 
Too bad the end result was truly subpar, though. You don't make a chip viable for the mid-range market just by saying it's aimed at it; you're also supposed to, you know, come up with a coherent and balanced design...

If they wanted G92 to be mid-range, what they should really have done is make it a 6C/256-bit SKU, and G94 should have been 3C/192-bit. If clocked sufficiently high, G92 would still have been competitive with all RV670 SKUs and G94 would have hit an intriguing gap between 256-bit and 128-bit GPUs.

Sorry, but I think this is completely arbitrary. Also, I don't see what's wrong with G92. It's given us an excellent array of affordable cards, with performance levels that AMD simply cannot match. Just because AMD can't deliver this kind of performance doesn't make it any less mid-end, let alone 'subpar'.
I have an 8800GTS 320, which was mid-end when I bought it... the upper fringe of mid-end, but mid-end nonetheless. These cards are now nearly 2 years old, and the new G92-based cards are cheaper and offer ~30% more performance, which I still consider a natural progression for mid-end over time. We all know that it is technically possible to create a 'thoroughbred' high-end G92 core with the 320+ bit bus and 768+ MB memory, which would make it a logical successor to the 8800Ultra, but the 9800GTX just isn't that card. It doesn't have to be. It also isn't a worthy upgrade-path from my 8800GTS 320 for that reason. Something that hasn't happened before.
 
Too bad the end result was truly subpar, though. You don't make a chip viable for the mid-range market just by saying it's aimed at it; you're also supposed to, you know, come up with a coherent and balanced design...

That's arguable - I'd say you make it rather viable for the mid-range if you can sell as many as you can make, firmly maintain your status as market leader, and still end up turning a healthy profit.

The bulk of the argument here seems to be that they could've made even more money if they had been able to scale up to the current level of performance over a longer period of time, getting rid of older inventory along the way. And that certainly seems to be true, but I think people aren't giving them much credit for having dealt with the minor setback rather well, and still coming up with a nice balance sheet for the first quarter.
 
Sorry, but I think this is completely arbitrary. Also, I don't see what's wrong with G92. It's given us an excellent array of affordable cards, with performance levels that AMD simply cannot match. Just because AMD can't deliver this kind of performance doesn't make it any less mid-end, let alone 'subpar'.
Oh come on. Just compare the perf/mm² of G92 and G94. Compare the ASPs/mm² after taking the relative volumes of the different SKUs into consideration. Look at the quite low performance penalties of reducing the number of clusters (I can't remember which site did that; if you really want, I can find the link).

I'm not saying it's a bad chip. And it's very far from a fiasco, both financially and technically; the compression efficiency is especially impressive. However it's also pretty clearly suboptimal, and that's the point. I can't see a single scenario under which G92 would have been an optimal design. The engineers aren't to blame, the ones who came up with the roadmap are.
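
(To make the perf/mm² comparison concrete, here is a toy calculation. The die sizes are the approximate figures already thrown around in this thread; the relative performance values are made-up placeholders just to show the arithmetic, not benchmark results.)

# Toy perf/mm² comparison. Die sizes are the rough figures from this thread;
# the relative performance values are hypothetical placeholders.
chips = {
    #         (approx. die size in mm², assumed relative performance)
    "G92":   (330, 1.00),
    "G94":   (225, 0.70),
    "RV670": (190, 0.75),
}
for name, (die_mm2, rel_perf) in chips.items():
    print(f"{name}: {100 * rel_perf / die_mm2:.2f} relative perf per 100 mm²")

If the smaller chips come out ahead on that metric (and on ASP per mm²), that is what "suboptimal" means here, regardless of how well the big chip sells.
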
Florin said:
but I think people aren't giving them much credit for having dealt with the minor setback rather well, and still coming up with a nice balance sheet for the first quarter.
Oh, certainly from September onward I believe there was nothing that they or anyone else could have done to handle the problem better. The only problem is that their roadmap put them in a position of vulnerability, and although I'll admit that hindsight is 20/20, I firmly believe that it would have been possible in the appropriate timeframe to have taken substantially better decisions with the data available at that time.
 
Oh come on. Just compare the perf/mm² of G92 and G94. Compare the ASPs/mm² after taking the relative volumes of the different SKUs into consideration. Look at the quite low performance penalties of reducing the number of clusters (I can't remember which site did that; if you really want, I can find the link).

I fail to see the relevance in any of that. The chip performs well and is sold at reasonable prices.
On top of that, the chip is relatively low on power-consumption (so basically that nullifies all reasons why one would like to have a smaller die and smaller manufacturing process):
http://www.anandtech.com/video/showdoc.aspx?i=3209&p=12

As you can see, the 8800GT actually uses less power than the 3870, while delivering considerably better performance. And although the 8800GTS does use a bit more power, its performance is in line with the power consumption.
Really, these chips beat AMD at just about everything, except perhaps die size, but that's because AMD was pushed to move to 55 nm early. It is quite obvious that AMD has to stretch the silicon to the max in order to get competitive performance (and fails at that). nVidia outperforms AMD in absolute performance and performance-per-watt with just 65 nm. So yes, the die is larger... and? They don't have to push the silicon too hard, so they probably get better yields than AMD anyway, which would explain why their profit margins are still healthier than AMD's are.
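
(For what it's worth, performance-per-watt is just a ratio, so it's easy to sanity-check claims like this yourself. The function below is a trivial sketch; the example inputs are hypothetical placeholders, not numbers from the linked AnandTech review.)

# Performance per watt is average performance divided by whatever power figure
# you trust (card-only or whole-system, as long as it is the same kind of
# measurement for every card). Example inputs are made-up placeholders.
def perf_per_watt(avg_fps, load_power_w):
    return avg_fps / load_power_w

print(perf_per_watt(avg_fps=60.0, load_power_w=105.0))  # hypothetical card A
print(perf_per_watt(avg_fps=45.0, load_power_w=110.0))  # hypothetical card B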

I agree they could have done even better... but then again, that is usually the case... At some point you just have to get a product out the door as well, in the real world.
But I think any kind of comparison to AMD is just preposterous. AMD loses everywhere... They don't make healthy profits, their chips aren't energy-efficient despite the 55 nm advantage, their cards aren't that attractive in price... and worst of all, they still can't outperform my aging 8800GTS with any single GPU they have. 55 nm just made their chips 'bearable' instead of the complete powerhog that the 2900 was. Other than that, the chips are still as unimpressive, and since most people can easily afford an 8800GT or better, they aren't an attractive option anyway.
Again, who cares about die size? I care about the advantages of a smaller die size, if they actually exist... In this case they don't.

I think your argument is similar to arguing that Intel shouldn't have put out 4 MB L2-cache Core 2 Duos, since the 2 MB models are nearly as fast and would be cheaper to produce because the die could be considerably smaller. Apparently the difference is not interesting when you have a solid design and good yields on your production line. You get diminishing returns on faster models anyway.
 
I fail to see the relevance in any of that. The chip performs well and is sold at reasonable prices.
Sigh, I'll just quote myself again and emphasize the key parts:
Arun said:
I'm not saying it's a bad chip. And it's very far from a fiasco, both financially and technically; the compression efficiency is especially impressive. However it's also pretty clearly suboptimal, and that's the point.

On top of that, the chip is relatively low on power-consumption
Die area and power consumption are a fundamental trade-off in several parts of the chip design process: optimizing for one tends to come at the expense of the other. There is no glory in achieving lower power consumption at three times the cost for a given performance point. I'm not saying that's the case here, it very clearly isn't, but I think that extreme example makes the point: there is no systematic link between die size and power consumption. It's perfectly possible to sacrifice cost in favour of lowering power consumption.

I agree they could have done even better... but then again, that is usually the case... At some point you just have to get a product out the door as well, in the real world.
Once again, you're missing the point. This has NOTHING to do with the decisions that were taken once the product had to get out the door; it's the roadmap decisions taken long before that which are at issue.
But I think any kind of comparison to AMD is just preposterous. AMD loses everywhere... They don't make healthy profits, their chips aren't energy-efficient despite the 55 nm advantage, their cards aren't that attractive in price... and worst of all, they still can't outperform my aging 8800GTS with any single GPU they have. 55 nm just made their chips 'bearable' instead of the complete powerhog that the 2900 was. Other than that, the chips are still as unimpressive, and since most people can easily afford an 8800GT or better, they aren't an attractive option anyway.
Again, who cares about die size?
The manufacturer does; more specifically, the fabless company cares about minimizing (Wafer Cost * Die Size)/(Yield * Average SKU ASP). Clearly NVIDIA would appreciate being able to improve one or more of these factors given their margin problems in Q1...
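
(A small sketch of what that ratio looks like in practice; every number below is a made-up placeholder purely to show how die size and yield feed into per-die cost relative to ASP, not an estimate of anyone's actual costs.)

# Cost per good die relative to the average selling price of the SKUs built
# from it. All inputs are hypothetical placeholders.
def cost_to_asp_ratio(wafer_cost_usd, die_area_mm2, usable_wafer_area_mm2,
                      yield_fraction, average_sku_asp_usd):
    dies_per_wafer = usable_wafer_area_mm2 / die_area_mm2   # ignores edge losses
    good_dies_per_wafer = dies_per_wafer * yield_fraction
    cost_per_good_die = wafer_cost_usd / good_dies_per_wafer
    return cost_per_good_die / average_sku_asp_usd

# A big die sold at a high ASP vs. a smaller die sold at a lower ASP:
print(cost_to_asp_ratio(5000, 330, 56000, 0.65, 230))
print(cost_to_asp_ratio(5000, 225, 56000, 0.75, 160))

The point of writing it out is just that die size and yield act on cost multiplicatively, while ASP is the only term pulling the ratio back down.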

EDIT: CPUs aren't comparable to GPUs because it's harder to improve single-threaded performance, and sadly for Intel and AMD it still matters a lot. So reducing perf/mm² is often acceptable to improve raw performance; in fact, if it wasn't, we would be running 486 chips with thousands of cores instead! More cache can also improve system-wide power efficiency in certain cases by minimizing memory bandwidth requirements.
 
You can repeat all you want, I just don't see your point. Any design is suboptimal, that's why we get product refreshes and new architectures so often. G92 is a very successful product, by your own admission. I don't see why you even feel the need to discuss its 'weak' points.
 
You can repeat all you want, I just don't see your point. Any design is suboptimal, that's why we get product refreshes and new architectures so often.
Duh. I really don't know what you're trying to argue with; what I'm saying is that some of the roadmap decisions 18+ months ago were bad and some of today's problems could easily have been avoided. Your argument basically boils down to "I don't care". Well, great, I'm happy for you that you don't, and I can perfectly understand that you wouldn't think it's worth discussing.

However, this discussion had the implicit assumption that it was indeed worth discussing, and that it might be insightful to consider what led to NV's margin problems in Q4/Q1, and how that ties in with Roy Taylor's potential overconfidence in the original post. If you don't care about any of this and you think NVIDIA's margins are 'good enough', great - but then once again, I really don't understand why you're taking part in this thread. Is there something I'm missing?
 
I thought this discussion was supposed to be about how nobody cares about ATi anymore, but somehow you think it's more relevant to go on about G92. Perhaps you just forgot about the actual topic completely?
Fact is, there is very little to care about with ATi at this moment, and currently G92 is the biggest reason for that. If you think G92 suffers from problems and bad decisions, then you must think ATi's current product line-up is REALLY useless...
So why don't you get back on topic and tell us how badly ATi planned and executed its roadmap, which led to the predicament they're currently in (only 12-13% market share in DX10, according to the article).
 
Any design is suboptimal, that's why we get product refreshes and new architectures so often.

Only if you live in a context-free universe where everything is possible at any moment.

If you live in the real world, where process constraints, platforms, and eco-system partners impact your decisions, then those factors will be much more decisive in driving product refreshes and new architectures than any perceived "suboptimal" nature of your current lineup for the context assumptions that its designers had.
 
Only if you live in a context-free universe where everything is possible at any moment.

If you live in the real world, where process constraints, platforms, and eco-system partners impact your decisions, then those factors will be much more decisive in driving product refreshes and new architectures than any perceived "suboptimal" nature of your current lineup for the context assumptions that its designers had.

So you agree with me then.
 
No, about me saying "At some point you just have to get a product out the door as well, in the real world."
 
Yes, I'd agree with that. I'd also agree that people's perceptions of "suboptimal" are driven by the facts on the ground on that day, rather than the day many months before when the designers consulted the ouija board as to what the facts on the ground would be on release day.

Which is why sometimes it really sucks to be a designer.
 