Nvidia GT300 core: Speculation

Discussion in 'Architecture and Products' started by Shtal, Jul 20, 2008.

Thread Status:
Not open for further replies.
  1. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    Those are closely related.

    Regardless, they aren't salvage parts. They can be compared to 8800GTX/Ultra, not GTS.

    That doesn't make sense, because G80 wasn't the one trying to compete.
    2900XT was supposed to equal or outperform G80, but fell well short of that goal. So getting the clockspeed as high as possible, at the cost of extreme power consumption was their alternative.

    8800GTS PCBs are smaller than the GTX/Ultra ones:
    Regardless, it doesn't change the argument. 384 bit is still considerably less than 512 bit.
     
  2. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,090
    Likes Received:
    694
    Location:
    O Canada!
    Arcane magics.

    :razz:
     
  3. Creig

    Newcomer

    Joined:
    Nov 20, 2006
    Messages:
    57
    Likes Received:
    1
    The problem for Nvidia is that they assumed ATI was going to continue trailing them in performance. Don't forget that it was literally only days before launch that the 4800 series was revealed to have 800 SPs. Up until then, ATI kept feeding disinformation rumors claiming it would be only a 480 SP part.

    Nvidia also continued with their "make a big, faster single chip and disable portions of it for lower end SKUs" strategy which hasn't really worked out all that well for them this time around. Their 1.4 billion transistor, 65nm GPU took up a LOT of die space per wafer. In addition, their initial yields were reported to be an abysmal 40%. When you have a large die coupled with low yields, you end up with a very expensive GPU to produce.
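    The die-size/yield argument above can be put into rough numbers. This is a back-of-the-envelope sketch only: the wafer cost is a placeholder guess, GT200 is taken at its reported ~576 mm^2 with the 40% yield mentioned above, the RV770-class figures (~256 mm^2, ~70% yield) come from the discussion below, and the dies-per-wafer formula is a common edge-loss approximation, not foundry data.

    ```python
    import math

    def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
        """Rough gross die count: wafer circle area over die area, minus edge loss."""
        r = wafer_diameter_mm / 2
        return int(math.pi * r**2 / die_area_mm2
                   - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

    def cost_per_good_die(wafer_cost, wafer_diameter_mm, die_area_mm2, yield_rate):
        """Wafer cost spread over the dies that actually work."""
        gross = dies_per_wafer(wafer_diameter_mm, die_area_mm2)
        return wafer_cost / (gross * yield_rate)

    # Illustrative: $5000/wafer is a guess, 300mm wafers assumed for both.
    gt200 = cost_per_good_die(5000, 300, 576, 0.40)   # big die, low yield
    rv770 = cost_per_good_die(5000, 300, 256, 0.70)   # small die, good yield
    print(f"GT200-class: ${gt200:.0f} per good die")
    print(f"RV770-class: ${rv770:.0f} per good die")
    ```

    Even with identical wafer pricing, the big-die/low-yield combination comes out several times more expensive per sellable chip, which is the whole point being made.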

    ATI, on the other hand, chose to aim their single-GPU cards at the enthusiast segment and below. To address the high-end market, they continued to produce their dual-GPU cards. The RV770 was built on a smaller process (55nm), had over 400 million fewer transistors than the GT200 and was said to have very good yields right from day 1 (around 70% IIRC). In fact, I don't think it even required a re-spin. When was the last time ATI managed to get a GPU out the door without at least 3 re-spins?

    I think it was a simple matter of ATI gambling on an overall simpler design philosophy instead of continually trying to one-up their competition with an ever increasingly large and complex GPU. And it worked.
     
  4. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,455
    Likes Received:
    471
    Sorry, but the majority of your arguments are off. Yields have nothing to do with power requirements. The clock speed of the R600 was very conservative because of the process's power leakage and the related power requirements. The target clock for the top version was 850MHz. Search for reviews, interviews and sample photos.

    As for PCB price - the difference is in the number of layers. E.g. 2 extra layers on a complex (high-end card) PCB means about a $5 difference. But both cards had 12-layer PCBs. Whether ATi used some of that space for an additional 128 bits of memory bus, or nVidia used it for NVIO routing, is irrelevant.

    But the GTS had an NVIO chip - the XT didn't. The GTS had 640MB of VRAM, the XT only 512MB. That makes quite a difference.

    And as for PCB length - that is also almost irrelevant (at least compared to the price difference given by the number of layers). Anyway - the HD2900XT was also shorter than the 8800GTX, so the GTS has no advantage there either.

    I still don't see any well-founded argument why the HD2900XT would be more expensive to produce than the GF8800GTS-640.
     
  5. Rayne

    Newcomer

    Joined:
    Jun 23, 2007
    Messages:
    91
    Likes Received:
    0
    What do you think about the memory interface on the GT300 ?

    I think that we can assume that they will use GDDR5, but, what about the bus width & the ROP count ?

    And how will the ROP count affect SSAA performance?

    Also, what is the best balance of bandwidth vs. price? 256-bit with that Samsung 7 GHz GDDR5, or 384/512-bit with the sort of GDDR5 used in the 4870 cards?
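    For reference, a quick sketch of the peak-bandwidth arithmetic behind these options. Assumptions: "7 GHz" GDDR5 means 7 Gbps effective per pin, and the 4870's GDDR5 runs at roughly 3.6 Gbps effective per pin (900 MHz command clock, quad data rate).

    ```python
    def bandwidth_gbs(bus_width_bits, data_rate_gbps):
        """Peak bandwidth in GB/s: pin count * per-pin data rate / 8 bits per byte."""
        return bus_width_bits * data_rate_gbps / 8

    configs = {
        "256-bit @ 7.0 Gbps (fast GDDR5)": bandwidth_gbs(256, 7.0),
        "384-bit @ 3.6 Gbps (4870-class)": bandwidth_gbs(384, 3.6),
        "512-bit @ 3.6 Gbps (4870-class)": bandwidth_gbs(512, 3.6),
    }
    for name, bw in configs.items():
        print(f"{name}: {bw:.1f} GB/s")   # 224.0, 172.8, 230.4 GB/s
    ```

    So a 256-bit bus with very fast GDDR5 lands in the same ballpark as 512-bit with 4870-class chips, while saving PCB routing and pins - which is exactly the trade-off being asked about.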

    I would love to see a big jump in memory bandwidth for my personal needs. :)
     
  6. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,059
    Likes Received:
    3,119
    Location:
    New York
    IMO there is no dichotomy between having a small fast chip for $300 and a big faster chip for $500. People seem to think that one precludes the other. All that happened this round is that AMD's $300 chip was faster than Nvidia's. It says nothing about the inherent high-end viability of a large single die vs multiple smaller dies.

    How do we know for sure that RV870 won't find itself going head-to-head with GT314 this time around? Unlikely scenario but it could happen if one party picks up a significant perf/watt advantage (it was pretty much on par this round).
     
  7. ShaidarHaran

    ShaidarHaran hardware monkey
    Veteran

    Joined:
    Mar 31, 2007
    Messages:
    4,027
    Likes Received:
    90
    Intentions don't change the effect of actions.

    The result of their actions was just as I described. As far as them "hurting the industry", if by that you mean NV's margins then I agree. Otherwise AMD's own GPG seems to be at or above water so I'd say they're doing alright.
     
  8. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,059
    Likes Received:
    3,119
    Location:
    New York
    I don't necessarily agree with razor's argument but I get the gist of what he's saying. A price-cutting strategy only works if you have a sustainable competitive advantage and can outhustle the competition. So AMD is golden if they're simply smarter and more productive (doing more with less) than everyone else. They can keep prices low and still put out good products. Let's hope that's the case because if not they will get left behind and simply branded as the cheaper/lower-quality option.

    As a matter of fact they have to execute flawlessly for a while. They don't have a pile of cash to fall back on when times are rough. It's almost certain that Nvidia poured a lot more cash into R&D in the past year or so than AMD did since the acquisition. I think they're actually lucky that Nvidia got sidetracked by this whole CUDA business else it could have been messy. On the other hand I think Jawed pointed out the potential cash cow that GPGPU could be in the future if it takes off. So AMD could miss that boat if they don't shift some focus that way as well.

    And in terms of hurting the industry - well if everybody follows AMD into low-ASP/low-margin territory eventually things will slow down.
     
  9. aaronspink

    Veteran

    Joined:
    Jun 20, 2003
    Messages:
    2,641
    Likes Received:
    64
    And there are historical reasons CPUs don't emphasize denormal support - mainly, that it is a waste of logic.
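    For anyone unfamiliar with the term, a quick Python illustration of what denormals (subnormals) are: nonzero values below the smallest normal float, which IEEE 754 keeps representable via gradual underflow - the extra hardware path being called a waste of logic above.

    ```python
    import sys

    smallest_normal = sys.float_info.min   # 2**-1022, smallest positive normal double
    subnormal = smallest_normal / 2        # below the normal range, yet not zero

    print(subnormal > 0)                    # True: gradual underflow keeps it nonzero
    print(subnormal * 2 == smallest_normal) # True: the halving was exact
    ```

    Handling these values at full speed requires dedicated hardware; many CPUs instead trap to microcode (or offer flush-to-zero modes), which is the design trade-off being discussed.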
     
  10. leoneazzurro

    Regular

    Joined:
    Nov 3, 2005
    Messages:
    518
    Likes Received:
    25
    Location:
    Rome, Italy
    I think the major hint of success for the ATI cost strategy comes from the VGA makers themselves: Nvidia lost several of its former exclusive partners in one year, even long-term and renowned ones like Gainward and XFX (also Albatron recently, IIRC). Normally this happens when a GPU maker doesn't have competitive parts or when vendors are not really happy with margins. Given the GXXX parts' very good performance results, in my opinion it's clear what the reason must be.

    EDIT: the present ATI mobile lineup is also IMHO better positioned than Nvidia's: the 4650-4670 are up against the 9600-9700M GT and are undoubtedly faster. It's a pity the 4850M is not widely used, as it should lie between the GTX260M and GTX280M performance-wise, and judging by the MSI gaming notebook line and the competitors, it should also cost less. But the 4860M should really be a hit (also a pity there are not many designs based on this chip): a 140 mm^2 40 nm chip with 85% of the performance of its desktop counterpart, a 128-bit interface, and power consumption that shouldn't be that high for a gaming laptop chip.
     
    #870 leoneazzurro, Apr 25, 2009
    Last edited by a moderator: Apr 25, 2009
  11. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,716
    Likes Received:
    2,137
    Location:
    London
    I don't see how AMD can be "blamed" for dragging ASPs/margins downwards. If NVidia chooses to produce cards that don't have decent price-performance, there's simply no alternative. Before RV770 launched there was a widespread "meh" in response to GTX280, e.g.:

    http://www.techreport.com/articles.x/14934/17

    It's amazing that people forget that even without HD4870 there was widespread apathy in reaction to GT200.

    Jawed
     
  12. Speccy

    Newcomer

    Joined:
    Oct 11, 2002
    Messages:
    86
    Likes Received:
    6
    Equally so the 9800 GTX, which had dropped below $300 even prior to RV770's introduction, yet featured a similar die size at that point in time.
     
  13. ShaidarHaran

    ShaidarHaran hardware monkey
    Veteran

    Joined:
    Mar 31, 2007
    Messages:
    4,027
    Likes Received:
    90
    They have shifted their design focus to target smaller, more efficient chips, knowing full well they do not have the cash to spend on R&D for monolithic designs nor can they afford failure. I think they're on the right track.

    AMD's GPG margins were around the 40% mark according to the most recent Quarterly results, if I'm not mistaken. Is this not the desired level and average for the industry? I'll grant that there were stories about AIB partners being unhappy with margins but did we ever receive verification?
     
  14. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    This is, IMHO, hitting the bull's eye. It's not every day that an IHV can pump out a chip with 2.5x the ALUs on only a somewhat bigger die. The real advantage of the sweet-spot strategy lies elsewhere. Assume that in generation n the two competitors have competitive products. If nVidia launches the high-end part first in gen n+1 and AMD launches the mid-range part first in gen n+1, then AMD will have a newer product in the most profitable part of the market competing with an older product, until the high end can be shrunk down.
     
  15. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    People don't exactly leap over each other to buy Intel Extreme Edition CPUs either.
    If you can get nearly the same performance for a much lower price, most people won't pay the premium.

    However, the Steam survey seems to show that most people just stuck with their 8800s so far.
     
  16. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    I hadn't actually been able to see the Steam survey results because it was blocked at work...
    But I'd like to comment on that now:
    You compare the GTX285 with the 2900... But the GTX285 has only been around since January. So in just 3 months the GTX285 managed to sell about 60% more than the 2900 did in its entire lifetime. Hardly an indication that GTX285 sales are merely comparable to the 2900's.
    Even the expensive enthusiast GTX295 has outsold the mainstream 2900.

    And if a GTX280 high-end card is as common as a value mainstream part like the 3870, that's quite an achievement.

    In fact, there is only one range of ATi cards in the top 12. And that's the combined 4800 series.
    That only reinforces the idea that nVidia's brand is incredibly strong. And that ATi is still fighting against the 8800, years after it was introduced.
     
  17. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,455
    Likes Received:
    471
    The chart doesn't show how many cards were sold during their lifetime, but how many users are using them now. The R600 has been EOL for 1.5 years - many users have already sold theirs. Despite that, today's best-performing single-chip high-end board, which is available much cheaper than the old HD2900XT was, is not markedly more common. That's interesting especially because the HD2900XT was noisy, power hungry, unpopular, more expensive and buggy (broken AA resolve in hardware). I'm convinced that ATi made a bigger profit on the unpopular HD2900XT than nVidia has on the GTX285.
    The HD2900XT was never sold at mainstream prices - maybe the HD2900GT was, for about 6 weeks, while inventories were being cleared before the launch of RV670. Both the HD2900XT and GF8800GTS-640 were high-end-priced parts performing about 20% under the enthusiast GTX. The mainstream parts were the GF8600 and HD2600.
    The GTX280 hasn't been a high-end part for months now. It's significantly cheaper than the HD3870 was in '07 and early '08.
    Was. nVidia is only able to sell GT200 at mainstream prices. The brand name isn't strong enough to sell it at high-end prices. That's reality.

    What was the 8800? I did some quick research and discovered that there are at least 12 different products (different GPUs, 192/256/320/384-bit memory buses, 256/320/512/640/768/1024 MB memory configs, etc.) which were launched and sold over a two-year period - from the cheap mainstream 8800GS or GT-256 to the enthusiast 8800Ultra. If nVidia decided to call an entire portfolio of products "8800" to get that name to the top positions in the charts, then it's no wonder the name is there.

    Anyway, I won't continue this line of discussion - it's already OT and not related to future products.
     
  18. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    I fail to see the logic in that comparison.
    Here's the logic I see:
    GTX285 has only been on the market for a VERY short time, sales are still picking up.
    Aside from that, many shops were getting rid of their GTX280 stock at very low prices, so potential GTX285-buyers would often go for the GTX280 instead of the GTX285 (just look at the Steam survey, GTX280 continued to increase even in the last months, where GTX285 was launched).
    Once the GTX280 is completely gone, all sales will go to GTX285.
    We'll check Steam survey again in a few months. GTX285 will probably have taken a bigger share then.

    It was about the same price as the 8800GTS, which was the mainstream price range.

    Flawed logic. They performed only 20% less, but they didn't cost 20% less, they were WAY cheaper, more like 50% less.
    The price dictates the market segment they're in, not the performance.

    8600GTS was only marginally cheaper than 8800GTS.
    Back when I bought my 8800GTS I think the prices were something like this:
    8600GTS: 200 euro
    8800GTS320: 250 euro
    8800GTS640: 300 euro
    8800GTX: 550 euro
     
    #878 Scali, Apr 25, 2009
    Last edited by a moderator: Apr 25, 2009
  19. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    I am not so sure about that. I've checked the price of the last listed HD 2900 XT (it's a Sapphire, so probably quite representative) and the cheapest still-available GTX280 (a Point of View card).

    http://geizhals.at/?phistgfx=344251&loc=de&age=365&width=640
    http://geizhals.at/?phistgfx=255143&loc=de&age=2000&width=640
    (you can make out, which is which from the dates)

    Considering only the first year after launch, the GTX280 stayed above the maximum the HD 2900 ever sold for, for over half a year, and even after that it wasn't much cheaper - probably also because the HD 2900 XT was EOL'ed and no longer in the self-regulating market.

    Other factors would include wafer prices - considering that AMD was using the not-so-high-volume 80nm while Nvidia used mass-volume 65nm two years later. Nvidia also had to pay for twice as much memory, which also was faster, but then memory prices went down quite a bit in the meantime.

    I wouldn't give too much weight to the argument that the HD 2900 XT was positioned as a performance solution instead of high-end, because the whole design of the card and GPU, apart from clock rates and memory, clearly indicates it was intended for the high end.
     
  20. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    Also, these are euro prices for American products. Prices of all American products have come down in the past year, because the dollar is very weak now.
    I'd like to see a US chart next to this; I bet the US prices didn't drop nearly as much as the European prices did over the last year.

    Depends on how you look at it.
    For ATi it was their fastest and most expensive part at the time, so in that sense it was high-end.
    But consumers look at nVidia as well, and compared to nVidia it certainly wasn't the fastest... so to them it wasn't high-end.
    Hence nVidia dictated prices, and ATi had to price it according to nVidia's performance, and sell it cheaper than intended.
     