Roy Taylor: "Only Brits care about ATI"

Discussion in 'Graphics and Semiconductor Industry' started by INKster, May 10, 2008.

  1. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12
    The price of the G94 (9600 GT) holds up at $250?
    I'm sorry, but that is completely wrong. Last time I checked, it was selling at $150 on Newegg.

    I wouldn't call a full $100 price cut in just a quarter "holding up". For a little more than $250 you can now get a 9800 GTX...
     
  2. Karoshi

    Newcomer

    Joined:
    Aug 31, 2005
    Messages:
    181
    Likes Received:
    0
    Location:
    Mars
    Jawed said:
    In the alternative history: ...
     
  3. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    I have NO idea why anyone would think G94 was ever supposed to come out before G92. The former is an A11, the latter is an A12... Now, I agree a much better strategy all around would have been:
    1) Release 256-bit G94.
    2) Release 64-bit G98.
    3) Release 320-bit G92.
    4) Release 128-bit G96.
    5) Release 512-bit GT200.

    Of course, rumours were that G98 and G96 would come out sooner, so presumably they had problems there and perhaps needed a completely new tape-out. I still think G94 coming first and G92 coming with a 320-bit bus would have been a much better strategy both in terms of inventory management (all eventualities) and to achieve a coherent performance line-up. Oh well, whatever.

    I also don't think the plan ever was for G94 to cost as much as $250. $200, perhaps, who knows... Which reminds me: where is the 9600GTS? Hmmm.
     
  4. nicolasb

    Regular

    Joined:
    Oct 21, 2006
    Messages:
    421
    Likes Received:
    4
    You have to bear in mind that even ATI was taken by surprise by RV670's release date. ATI was working on the assumption that they would need an A12 silicon revision, because ATI chips invariably did. (R600 even went to A13). The fact that the A11 version was actually bug-free surprised everybody, ATI included. One can hardly blame Nvidia for not anticipating an ATI launch that ATI wasn't anticipating either.

    I think it's likely that Nvidia was bounced into releasing G92 earlier than originally intended. I suspect that's what led to the current naming fiasco (8800GT outperforming 640MB 8800GTS, etc.)
     
  5. Sunrise

    Regular

    Joined:
    Aug 18, 2002
    Messages:
    306
    Likes Received:
    21
    You also can't anticipate everything that happens in this world. If one of the famous Taiwanese earthquakes strikes and PCB supply is critically constrained for several months because of damage to fab equipment, there's not much you can do. Then again, you could probably expect something like that to happen, since it's not completely unusual.

    My point is that NV cannot anticipate everything, but since NV works in this business and knows that ATi is a hidden shark that can still (relatively speaking) fight back if given the chance (market share, pricing etc.), NV has to adjust its strategy accordingly if it hasn't already.

    NV wants to earn money, and new ASICs are always a costly and time-consuming undertaking, but bringing G92 to market first probably wasn't the right decision in this case. If NV really learns from that, this isn't going to be much of a problem.

    Yes, that's pretty likely and also confirmed: in the last CC, Marv said that they were focused on getting the product out because they had no other ASIC ready that could have seriously competed with RV670 without damaging their great margins.
     
  6. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    I've always thought that the G92 was technically a mid-end part. So I suppose they never opted for a 320+-bit memory interface and 768+ MB of memory because there was no need.
    All in all it meant that the performance level of the 8800GTX/Ultra just became much more affordable through G92. Much like how in the past a 7600GT would be much cheaper than a 6800 card, but deliver similar performance... The G92 was just 'too fast' to really be a mid-end part...
    But the actual high-end part with a 320+ bit bus and more than 512 MB of memory, to truly replace the 8800GTX/Ultra, simply never surfaced. I suppose the 9800GX2 was more cost-effective for nVidia than developing another G92 variation, and with no pressure from AMD at all, nVidia had full control of how fast the fastest card would be. They'd only be outperforming themselves, so what's the point?
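    As a rough illustration of why bus width and memory size go hand in hand on these GDDR3 boards (the chip density below is just an assumed typical value, not any specific board's configuration):

```python
# Rough sketch: why frame-buffer size tends to track bus width on GDDR3
# boards. Assumes one 32-bit chip per channel and 512 Mbit (64 MB) chips,
# which are illustrative values, not a confirmed board configuration.

CHIP_CAPACITY_MB = 64  # 512 Mbit GDDR3 chip (assumed typical density)

def framebuffer_mb(bus_width_bits):
    channels = bus_width_bits // 32      # one chip per 32-bit channel
    return channels * CHIP_CAPACITY_MB

for bus in (256, 320, 384):
    print(f"{bus}-bit bus -> {framebuffer_mb(bus)} MB")
# 256-bit -> 512 MB, 320-bit -> 640 MB, 384-bit -> 768 MB
```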
     
  7. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    Too bad the end result was truly subpar, though. You don't make a chip viable for the mid-range market just by saying it's aimed at it; you're also supposed to, you know, come up with a coherent and balanced design...

    If they wanted G92 to be mid-range, what they should really have done is make it a 6C/256-bit SKU, and G94 should have been 3C/192-bit. If clocked sufficiently high, G92 would still have been competitive with all RV670 SKUs and G94 would have hit an intriguing gap between 256-bit and 128-bit GPUs.
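    As a very rough back-of-the-envelope of the bandwidth steps involved (the GDDR3 data rate below is just an assumed round number, not any real SKU's memory clock):

```python
# Back-of-the-envelope memory bandwidth for the hypothetical bus widths
# above. The 2.0 Gbps effective GDDR3 data rate is an assumed round
# number, not a real SKU spec.

DATA_RATE_GBPS = 2.0  # per pin, effective (assumption)

def bandwidth_gb_s(bus_width_bits):
    return bus_width_bits * DATA_RATE_GBPS / 8  # total Gbit/s -> GB/s

for bus in (128, 192, 256):
    print(f"{bus}-bit: {bandwidth_gb_s(bus):.0f} GB/s")
# 128-bit: 32 GB/s, 192-bit: 48 GB/s, 256-bit: 64 GB/s
```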

    Anyway there's no point speculating here. There are a billion different possible line-ups that could have resulted in better financial performance had all the data been known in advance, and there also are plenty that would have likely been better no matter what. Talking about it now isn't going to change anything, and last I heard NVIDIA isn't quite dead because of it just yet... :)
     
  8. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    Sorry, but I think this is completely arbitrary. Also, I don't see what's wrong with G92. It's given us an excellent array of affordable cards, with performance levels that AMD simply cannot match. Just because AMD can't deliver this kind of performance doesn't make it any less mid-end, let alone 'subpar'.
    I have an 8800GTS 320, which was mid-end when I bought it... the upper fringe of mid-end, but mid-end nonetheless. These cards are now nearly 2 years old, and the new G92-based cards are cheaper and offer ~30% more performance, which I still consider a natural progression for mid-end over time. We all know that it is technically possible to create a 'thoroughbred' high-end G92 core with a 320+ bit bus and 768+ MB memory, which would make it a logical successor to the 8800Ultra, but the 9800GTX just isn't that card. It doesn't have to be. For that reason it also isn't a worthy upgrade path from my 8800GTS 320, which is something that hasn't happened before.
     
  9. Florin

    Florin Merrily dodgy
    Veteran Subscriber

    Joined:
    Aug 27, 2003
    Messages:
    1,649
    Likes Received:
    220
    Location:
    The colonies
    That's arguable - I'd say you make it rather viable for the mid-range if you can sell as many as you can make, firmly maintain your status as market leader, and still end up turning a healthy profit.

    The bulk of the argument here seems to be that they could've made even more money if they had been able to scale up to the current level of performance over a longer period of time, getting rid of older inventory along the way. And that certainly seems to be true, but I think people aren't giving them much credit for having dealt with the minor setback rather well, and still coming up with a nice balance sheet for the first quarter.
     
  10. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    Oh come on. Just compare the perf/mm² of G92 and G94. Compare the ASPs/mm² after taking the relative volumes of the different SKUs into consideration. Look at the quite low performance penalties from reducing the number of clusters (I can't remember which site tested that; if you really want, I can find the link).
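    To make that concrete, a comparison of the kind I mean would look something like the sketch below; every figure in it is a made-up placeholder, not real G92/G94 data:

```python
# Sketch of the comparison: perf per mm^2 and volume-weighted ASP per
# mm^2. Every figure below is a made-up placeholder, not real G92/G94
# die sizes, prices or volumes.

def perf_per_mm2(relative_perf, die_area_mm2):
    return relative_perf / die_area_mm2

def weighted_asp_per_mm2(skus, die_area_mm2):
    # skus: list of (asp_usd, relative_volume) for SKUs built on this die
    total_volume = sum(vol for _, vol in skus)
    weighted_asp = sum(asp * vol for asp, vol in skus) / total_volume
    return weighted_asp / die_area_mm2

big_die_skus = [(299, 1.0), (199, 2.0), (169, 1.5)]  # placeholders
small_die_skus = [(149, 3.0)]                        # placeholders

print(perf_per_mm2(1.00, 330), perf_per_mm2(0.75, 240))   # placeholder perf
print(weighted_asp_per_mm2(big_die_skus, 330))    # hypothetical ~330 mm^2 die
print(weighted_asp_per_mm2(small_die_skus, 240))  # hypothetical ~240 mm^2 die
```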

    I'm not saying it's a bad chip. And it's very far from a fiasco, both financially and technically; the compression efficiency is especially impressive. However it's also pretty clearly suboptimal, and that's the point. I can't see a single scenario under which G92 would have been an optimal design. The engineers aren't to blame, the ones who came up with the roadmap are.
    Oh, certainly from September onward I believe there was nothing that they or anyone else could have done to handle the problem better. The only problem is that their roadmap put them in a position of vulnerability, and although I'll admit that hindsight is 20/20, I firmly believe that it would have been possible in the appropriate timeframe to have taken substantially better decisions with the data available at that time.
     
  11. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    I fail to see the relevance in any of that. The chip performs well and is sold at reasonable prices.
    On top of that, the chip is relatively low on power consumption (so basically that nullifies all the reasons why one would want a smaller die and a smaller manufacturing process):
    http://www.anandtech.com/video/showdoc.aspx?i=3209&p=12

    As you can see, the 8800GT actually uses less power than the 3870, while delivering considerably better performance. And although the 8800GTS does use a bit more power, its performance is in line with the power consumption.
    Really, these chips beat AMD at everything, except perhaps die size, but that's because AMD was pushed to move to 55 nm early. It is quite obvious that AMD has to stretch the silicon to the max in order to get competitive performance (and fails even at that). nVidia outperforms AMD in absolute performance and performance per watt with just 65 nm. So yes, the die is larger... and? They don't have to push the silicon too hard, so they probably get better yields than AMD anyway, which would explain why their profit margins are still healthier than AMD's.
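    Performance per watt here is just frame rate over board power, by the way; something like the sketch below, with made-up numbers rather than AnandTech's actual figures:

```python
# Performance per watt as argued above: average frame rate divided by
# measured board power. The fps and wattage figures are made-up
# placeholders, not numbers taken from the AnandTech review.

def perf_per_watt(avg_fps, board_power_w):
    return avg_fps / board_power_w

cards = {
    "card A (placeholder)": (60.0, 105.0),  # (avg fps, watts under load)
    "card B (placeholder)": (48.0, 110.0),
}
for name, (fps, watts) in cards.items():
    print(f"{name}: {perf_per_watt(fps, watts):.2f} fps/W")
```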

    I agree they could have done even better... but then again, that is usually the case... At some point you just have to get a product out of the door as well, in the real world.
    But I think any kind of comparison to AMD is just preposterous. AMD loses everywhere... They don't make healthy profits, their chips aren't energy-efficient despite the 55 nm advantage, their cards aren't that attractive in price... and worst of all, they still can't outperform my aging 8800GTS with any single GPU they have. 55 nm just made their chips 'bearable' instead of the complete powerhog that the 2900 was. Other than that, the chips are still as unimpressive, and since most people can easily afford an 8800GT or better, they aren't an attractive option anyway.
    Again, who cares about die size? I care about the advantages of a smaller die size, if they actually exist... In this case they don't.

    I think your argument is similar to arguing that Intel shouldn't have put out 4 MB L2 cache Core 2 Duos since the 2 MB models are nearly as fast and would be cheaper to produce, because the die could be considerably smaller. Apparently the difference is not interesting when you have a solid design and good yields on your production line. You get diminishing returns on faster models anyway.
     
    #51 Scali, May 13, 2008
    Last edited by a moderator: May 13, 2008
  12. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    Sigh, I'll just quote myself again and emphasize the key parts:
    Optimizing either die area *or* power consumption is a fundamental trade-off in several parts of the chip design process. There is no glory in achieving lower power consumption at three times the cost for a given performance point - I'm not saying that's the case here, it very clearly isn't, but I think that extreme example is fairly clear: there is no systematic link between die size and power consumption. It's perfectly possible to sacrifice cost in favour of lowering power consumption.

    Once again, you're missing the point. This has NOTHING to do with the decisions that were
    Scali said:
    But I think any kind of comparison to AMD is just preposterous. AMD loses everywhere... They don't make healthy profits, their chips aren't energy-efficient despite the 55 nm advantage, their cards aren't that attractive in price... and worst of all, they still can't outperform my aging 8800GTS with any single GPU they have. 55 nm just made their chips 'bearable' instead of the complete powerhog that the 2900 was. Other than that, the chips are still as unimpressive, and since most people can easily afford an 8800GT or better, they aren't an attractive option anyway.
    The manufacturer does; more specifically, the fabless company cares about minimizing (Wafer Cost * Die Size) / (Yield * Average SKU ASP). Clearly NVIDIA would appreciate being able to improve one or more of these factors given their margin problems in Q1...
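    Spelled out, that metric looks something like the sketch below; all the inputs are made-up placeholders, not real foundry or NVIDIA figures:

```python
# Sketch of the metric: the cost of a good die relative to the average
# selling price of the SKUs built from it. All inputs are placeholders,
# not real foundry or NVIDIA figures.

import math

WAFER_AREA_MM2 = math.pi * 150**2  # 300 mm wafer, ignoring edge loss

def cost_to_asp_ratio(wafer_cost_usd, die_area_mm2, yield_fraction,
                      average_sku_asp_usd):
    gross_dies = WAFER_AREA_MM2 / die_area_mm2      # ignores scribe lines
    good_dies = gross_dies * yield_fraction
    cost_per_good_die = wafer_cost_usd / good_dies
    return cost_per_good_die / average_sku_asp_usd  # lower is better

# Placeholder comparison of a large vs. a small die at the same wafer cost
print(cost_to_asp_ratio(5000, 330, 0.6, 180))
print(cost_to_asp_ratio(5000, 240, 0.7, 140))
```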

    EDIT: CPUs aren't comparable to GPUs because it's much harder to improve single-threaded performance there, and sadly for Intel and AMD it still matters a lot. So reducing perf/mm² is often acceptable to improve raw performance; in fact, if it wasn't, we would be using 486 chips with thousands of cores instead! More cache can also improve system-wide power efficiency in certain cases by minimizing memory bandwidth requirements.
     
  13. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    You can repeat it all you want, but I just don't see your point. Any design is suboptimal; that's why we get product refreshes and new architectures so often. G92 is a very successful product, by your own admission. I don't see why you even feel the need to discuss its 'weak' points.
     
  14. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    Duh. I really don't know what you're trying to argue with; what I'm saying is that some of the roadmap decisions 18+ months ago were bad and some of today's problems could have easily been avoided. Your argument basically boils down to "I don't care". Well great, I'm happy for you that you don't, and I can perfectly understand that you wouldn't think it's worth discussing.

    However, this discussion had the implicit assumption that it was indeed worth discussing, and that it might be insightful to consider what led to NV's margin problems in Q4/Q1, and how that ties in with Roy Taylor's potential overconfidence in the original post. If you don't care about any of this and you think NVIDIA's margins are 'good enough', great - but then once again, I really don't understand why you're taking part in this thread. Is there something I'm missing?
     
  15. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    I thought this discussion was supposed to be about how nobody cares about ATi anymore, but somehow you think it's more relevant to go on about G92. Perhaps you just forgot about the actual topic completely?
    Fact is, there is very little to care about with ATi at this moment, and currently G92 is the biggest reason for that. If you think G92 suffers from problems and bad decisions, then you must think ATi's current product line-up is REALLY useless...
    So why don't you get back on topic and tell us how badly ATi planned and executed its roadmap, which led to the predicament they're in currently (only 12-13% market share in DX10, according to the article).
     
  16. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    213
    Location:
    Uffda-land
    Only if you live in a context-free universe where everything is possible at any moment.

    If you live in the real world, where process constraints, platforms, and ecosystem partners impact your decisions, then those factors will be much more decisive in driving product refreshes and new architectures than any perceived "suboptimal" nature of your current lineup for the context assumptions that its designers had.
     
  17. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    So you agree with me then.
     
  18. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    213
    Location:
    Uffda-land
    About you living in a context-free universe?
     
  19. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    No, about me saying "At some point you just have to get a product out of the door as well, in the real world."
     
  20. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    213
    Location:
    Uffda-land
    Yes, I'd agree with that. I'd also agree that people's perceptions of "suboptimal" are driven by the facts on the ground on that day, rather than the day many months before when the designers consulted the ouija board as to what the facts on the ground would be on release day.

    Which is why sometimes it really sucks to be a designer.
     