Nvidia GT300 core: Speculation

The problem was extreme power requirements, not yields.

Those are closely related.

This argument is groundless. Yield issues were never mentioned in relation to R600, there were no problems with R600 supplies, and both the performance and cost-reduced derivatives had good OC headroom.

Regardless, they aren't salvage parts. They can be compared to 8800GTX/Ultra, not GTS.

This is the same as arguing that G80 yields were probably bad because the majority of the GPU was clocked at 1500MHz to stay competitive :rolleyes:

That doesn't make sense, because G80 wasn't the one trying to compete.
The 2900XT was supposed to equal or outperform G80 but fell well short of that goal, so pushing the clock speed as high as possible, at the cost of extreme power consumption, was their alternative.

Sorry, but the GeForce 8800 GTS used the same (384-bit) PCB as the GTX.

8800GTS PCBs are smaller than the GTX/Ultra ones:
[image: 8800 GTX vs 8800 GTS PCB length comparison]

Regardless, it doesn't change the argument: 384-bit is still considerably less than 512-bit.
 
NV's current problem is that G92 is old and can't compete with RV770, and that GT200 is relatively weak and was never really able to compete with RV770. Again, the problem is that NV doesn't have a proper RV770 competitor, not that it has a big high-end GT200. A big high-end chip in itself isn't a problem; it's a way into the ultra-high-end/workstation/server/GPGPU markets. It's not as if having a high-end chip prevents you from doing a two-chip AFR card with two of your middle-class chips. You still can, and you have more options.

The problem for Nvidia is that they assumed ATI was going to continue trailing them in performance. Don't forget that it was literally only days before launch that the 4800 series was known to have 800 SPs. Up until that time, ATI kept feeding disinformation rumors that said it was to be only a 480 SP part.

Nvidia also continued with their "make a big, faster single chip and disable portions of it for lower end SKUs" strategy which hasn't really worked out all that well for them this time around. Their 1.4 billion transistor, 65nm GPU took up a LOT of die space per wafer. In addition, their initial yields were reported to be an abysmal 40%. When you have a large die coupled with low yields, you end up with a very expensive GPU to produce.
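To put rough numbers on the large-die/low-yield point: here is a back-of-the-envelope sketch using the standard dies-per-wafer approximation. The wafer cost is a placeholder assumption, and the die areas (GT200 ~576 mm², RV770 ~256 mm²) are commonly cited figures rather than anything established in this thread; the 40% and 70% yields are the ones quoted above.

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Classic approximation: usable wafer area divided by die area,
    minus a correction term for partial dies lost at the wafer edge."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r * r / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_cost: float, dpw: int, yield_rate: float) -> float:
    """Spread the whole wafer cost over only the dies that work."""
    return wafer_cost / (dpw * yield_rate)

WAFER_COST = 5000.0  # placeholder assumption, not a known figure

# GT200 on a 300 mm wafer: ~576 mm^2 die, reported initial yield ~40%
gt200_dpw = dies_per_wafer(300, 576)
gt200_cost = cost_per_good_die(WAFER_COST, gt200_dpw, 0.40)

# RV770 for comparison: ~256 mm^2 die, ~70% yield per the post above
rv770_dpw = dies_per_wafer(300, 256)
rv770_cost = cost_per_good_die(WAFER_COST, rv770_dpw, 0.70)

print(gt200_dpw, round(gt200_cost, 2))
print(rv770_dpw, round(rv770_cost, 2))
```

Whatever the real wafer price, the ratio is what matters: the smaller die with the better yield comes out roughly 4x cheaper per good die under these assumptions.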

ATI, on the other hand, chose to design their single GPU cards towards enthusiasts and lower. To address the high-end market, they continued to produce their dual-GPU cards. The RV770 was built on a smaller process (55nm), had over 400 million fewer transistors than the GT200 and was said to have very good yields right from day 1 (around 70% IIRC). In fact, I don't think it even required a re-spin. When was the last time ATI managed to get a GPU out the door without at least 3 re-spins?

I think it was a simple matter of ATI gambling on an overall simpler design philosophy instead of continually trying to one-up their competition with an ever larger and more complex GPU. And it worked.
 
Sorry, but the majority of your arguments are off. Yields have nothing to do with power requirements. The clock speed of R600 was very conservative because of the process's power leakage and the related power requirements. The target clock for the top version was 850MHz. Search for reviews, interviews and sample photos.

As for PCB price - the difference is in the number of layers. E.g. two extra layers on a complex (high-end card) PCB mean a difference of about $5. But both cards had 12-layer PCBs. Whether ATi used some of that space for an additional 128 bits of memory bus, or nVidia used it for NVIO routing, is irrelevant.

But the GTS had an NVIO chip - the XT didn't. The GTS had 640MB of VRAM, the XT only 512MB. That makes quite a difference.

And as for PCB length - that is also almost irrelevant (at least compared to the price difference given by the number of layers). Anyway, the HD2900XT was also shorter than the 8800GTX, so there is no advantage for the GTS there either.

I still don't see any well-founded argument for why the HD2900XT would be more expensive to produce than the GF8800GTS-640.
 
What do you think about the memory interface on the GT300?

I think we can assume they will use GDDR5, but what about the bus width & the ROP count?

& how will the ROP count affect SSAA performance?

Also, what is the best balance of bandwidth vs. price? 256-bit with that Samsung 7 GHz GDDR5, or 384/512-bit with the sort of GDDR5 used on the 4870 cards?

I would love to see a big jump in memory bandwidth for my personal needs. :)
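The bus-width question is just arithmetic: peak bandwidth is (bus width in bits / 8) times the per-pin data rate. A quick sketch, assuming 7 Gbps for the Samsung "7 GHz" GDDR5 and 3.6 Gbps for the GDDR5 used on the HD 4870:

```python
def bandwidth_gbps(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: pins * per-pin rate, divided by 8 bits/byte."""
    return bus_width_bits * data_rate_gbps / 8

# 256-bit bus with 7 Gbps ("7 GHz effective") GDDR5
print(bandwidth_gbps(256, 7.0))   # 224.0 GB/s

# 384-bit and 512-bit buses with the HD 4870's 3.6 Gbps GDDR5
print(bandwidth_gbps(384, 3.6))   # 172.8 GB/s
print(bandwidth_gbps(512, 3.6))   # 230.4 GB/s
```

So a 256-bit bus with the fastest GDDR5 lands in roughly the same place as a 512-bit bus with 4870-class memory; the trade-off is memory chip cost versus PCB/pin cost, not raw bandwidth.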
 
Nvidia also continued with their "make a big, faster single chip and disable portions of it for lower end SKUs" strategy which hasn't really worked out all that well for them this time around. Their 1.4 billion transistor, 65nm GPU took up a LOT of die space per wafer. In addition, their initial yields were reported to be an abysmal 40%. When you have a large die coupled with low yields, you end up with a very expensive GPU to produce.

IMO there is no dichotomy between having a small fast chip for $300 and a big faster chip for $500. People seem to think that one precludes the other. All that happened this round is that AMD's $300 chip was faster than Nvidia's. It says nothing about the inherent high-end viability of a large single die vs multiple smaller dies.

How do we know for sure that RV870 won't find itself going head-to-head with GT314 this time around? Unlikely scenario but it could happen if one party picks up a significant perf/watt advantage (it was pretty much on par this round).
 
No, I didn't say that, did I? It's great they did that, but for the industry as a whole, for them, and for nV, they just hurt them for no reason at all. You really think the lower prices were meant to benefit consumers? You think that's how corporations operate, to make us happy?

Intentions don't change the effect of actions.

The result of their actions was just as I described. As far as them "hurting the industry", if by that you mean NV's margins then I agree. Otherwise AMD's own GPG seems to be at or above water so I'd say they're doing alright.
 
The result of their actions was just as I described. As far as them "hurting the industry", if by that you mean NV's margins then I agree. Otherwise AMD's own GPG seems to be at or above water so I'd say they're doing alright.

I don't necessarily agree with the veracity of razor's argument but I get the gist of what he's saying. A price cutting strategy only works if you have a sustainable competitive advantage and can outhustle the competition. So AMD is golden if they're simply smarter and more productive (doing more with less) than everyone else. They can keep prices low and still put out good products. Let's hope that's the case because if not they will get left behind and simply branded as the cheaper/lower-quality option.

As a matter of fact they have to execute flawlessly for a while. They don't have a pile of cash to fall back on when times are rough. It's almost certain that Nvidia poured a lot more cash into R&D in the past year or so than AMD did since the acquisition. I think they're actually lucky that Nvidia got sidetracked by this whole CUDA business else it could have been messy. On the other hand I think Jawed pointed out the potential cash cow that GPGPU could be in the future if it takes off. So AMD could miss that boat if they don't shift some focus that way as well.

And in terms of hurting the industry - well if everybody follows AMD into low-ASP/low-margin territory eventually things will slow down.
 
The DP is more fully featured than in the past, and denormal support is massively faster than it is on an x86, but not every bit of the specification is there.

And there are historical reasons CPUs don't emphasize denormal support - mainly, that it is a waste of logic.
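For context on why denormal support costs logic: denormals (subnormals) fill the gap between zero and the smallest normal float by allowing an unnormalized mantissa, which needs extra hardware paths to handle at full speed. A small illustration with IEEE 754 doubles in Python:

```python
import sys

# Smallest positive *normal* double: exponent at its minimum, implicit leading 1
smallest_normal = sys.float_info.min
print(smallest_normal)            # 2.2250738585072014e-308

# Halving it does NOT give zero: gradual underflow produces a denormal instead
denorm = smallest_normal / 2
print(denorm)                     # a positive subnormal, below smallest_normal

# 5e-324 is the smallest positive subnormal double; below it, we hit zero
print(5e-324)
print(5e-324 / 2)                 # underflows to 0.0
```

Without denormals, anything below `smallest_normal` would flush straight to zero; supporting that gradual underflow range at full speed is the part that costs extra logic.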
 
I think the major hint of success for the ATI cost strategy comes from the VGA makers themselves: Nvidia lost several former exclusive partners in one year, even long-term and renowned ones like Gainward and XFX (also Albatron recently, IIRC). Normally this happens when a GPU maker doesn't have competitive parts or when vendors are not really happy with margins. Given the GXXX's very good performance results, in my opinion it's clear what the reason could be.

EDIT: the present ATI mobile lineup is also IMHO better positioned than Nvidia's. The 4650/4670 are up against the 9600/9700M GT and are undoubtedly faster. It's a pity the 4850M is not used as widely as it should be: it should sit between the GTX260M and GTX280M performance-wise, and judging by the MSI gaming notebook line and the competitors, it should also cost less. But the 4860M should really be a hit (also a pity there are not many designs based on this chip): a 140 mm^2 40 nm chip with 85% of the power of its desktop counterpart, a 128-bit interface, and power consumption that shouldn't be that high for a gaming laptop chip.
 
I don't see how AMD can be "blamed" for dragging ASPs/margins downwards. If NVidia chooses to produce cards that don't have decent price-performance, there's simply no alternative. Before RV770 launched there was a widespread "meh" in response to GTX280, e.g.:

http://www.techreport.com/articles.x/14934/17

The trouble is, things are pretty decidedly not equal. More often than not, the GeForce 9800 GX2 is faster than the GTX 280, and the GX2 is currently selling for as little as 470 bucks, American money. Compared to that, the GTX 280's asking price of $649 seems mighty steep. Even the GTX 260 at $399 feels expensive in light of the alternatives—dual GeForce 8800 GTs in SLI, for instance—unless you're committed to the single-GPU path.

Another problem with cards like the 9800 GX2 is simply that they've shown us that there's more performance to be had in today's games than what the GTX 260 and 280 can offer. One can't escape the impression, seeing the benchmark results, that the GT200's performance could be higher. Yet many of the changes Nvidia has introduced in this new GPU fall decidedly under the rubric of future-proofing. We're unlikely to see games push the limits of this shader core for some time to come, for example. I went back and looked, and it turns out that when the GeForce 8800 GTX debuted, it was often slower than two GeForce 7900 GTX cards in SLI. No one cared much at the time because the G80 brought with it a whole boatload of new capabilities. One can't exactly say the same for the GT200, but then again, things like a double-size register file for more complex shaders or faster stream-out for geometry shaders may end up being fairly consequential in the long run. It's just terribly difficult to judge these things right now, when cheaper multi-GPU alternatives will run today's games faster.
It's amazing that people forget that even without HD4870 there was widespread apathy in reaction to GT200.

Jawed
 
It's amazing that people forget that even without HD4870 there was widespread apathy in reaction to GT200.
Equally so the 9800 GTX, which had dropped below $300 even prior to RV770's introduction, yet featured a similar die size at that point in time.
 
I don't necessarily agree with the veracity of razor's argument but I get the gist of what he's saying. A price cutting strategy only works if you have a sustainable competitive advantage and can outhustle the competition. So AMD is golden if they're simply smarter and more productive (doing more with less) than everyone else. They can keep prices low and still put out good products. Let's hope that's the case because if not they will get left behind and simply branded as the cheaper/lower-quality option.

As a matter of fact they have to execute flawlessly for a while. They don't have a pile of cash to fall back on when times are rough. It's almost certain that Nvidia poured a lot more cash into R&D in the past year or so than AMD did since the acquisition. I think they're actually lucky that Nvidia got sidetracked by this whole CUDA business else it could have been messy. On the other hand I think Jawed pointed out the potential cash cow that GPGPU could be in the future if it takes off. So AMD could miss that boat if they don't shift some focus that way as well.

They have shifted their design focus to target smaller, more efficient chips, knowing full well they do not have the cash to spend on R&D for monolithic designs nor can they afford failure. I think they're on the right track.

And in terms of hurting the industry - well if everybody follows AMD into low-ASP/low-margin territory eventually things will slow down.

AMD's GPG margins were around the 40% mark according to the most recent Quarterly results, if I'm not mistaken. Is this not the desired level and average for the industry? I'll grant that there were stories about AIB partners being unhappy with margins but did we ever receive verification?
 
IMO there is no dichotomy between having a small fast chip for $300 and a big faster chip for $500. People seem to think that one precludes the other. All that happened this round is that AMD's $300 chip was faster than Nvidia's. It says nothing about the inherent high-end viability of a large single die vs multiple smaller dies.

This is, IMHO, hitting the bull's eye. It's not every day that an IHV can pump out a chip with 2.5x the ALUs at only a somewhat bigger die size. The real advantage of the sweet-spot strategy lies elsewhere. Assume that in generation n the two competitors have competitive products. If nV launches the high-end part first in gen n+1 and AMD launches the mid-range part first in gen n+1, then AMD will have a newer product competing with an older product in the most profitable part of the market until the high end can be shrunk down.
 
It's amazing that people forget that even without HD4870 there was widespread apathy in reaction to GT200.

People don't exactly leap over each other to buy Intel Extreme Edition CPUs either.
If you can get nearly the same performance for a much lower price, most people won't pay the premium.

However, the Steam survey seems to show that most people just stuck with their 8800s so far.
 
Anyway, sales of the GTX285 are extremely low. If you look at the Steam stats, the card is as common as the old lackluster HD2900XT, which hasn't been available for more than a year. Every ultra-value board from one or two generations back is twice as frequent in gaming PCs...

Even the one-generation-old GTX280, which is now sold at a bargain price (the local price is lower than the HD4890's), is only as common as the 1.5-generations-old HD3870...

http://store.steampowered.com/hwsurvey/directx/

Are you sure, that this kind of strategy is really good?

I hadn't actually been able to see the Steam survey results because it was blocked at work...
But I'd like to comment on that now:
You compare the GTX285 with the 2900... But clearly the GTX285 has only been around since January. So in only 3 months the GTX285 managed to sell about 60% more than the 2900 in its entire lifetime. Hardly an indication that GTX285 sales are comparable to the 2900.
Even the expensive enthusiast GTX295 has outsold the mainstream 2900.

And if a GTX280 high-end card is as common as a value mainstream part like the 3870, that's quite an achievement.

In fact, there is only one range of ATi cards in the top 12. And that's the combined 4800 series.
That only reinforces the idea that nVidia's brand is incredibly strong. And that ATi is still fighting against the 8800, years after it was introduced.
 
I hadn't actually been able to see the Steam survey results because it was blocked at work...
But I'd like to comment on that now:
You compare the GTX285 with the 2900... But clearly the GTX285 has only been around since January. So in only 3 months the GTX285 managed to sell about 60% more than the 2900 in its entire lifetime.
The chart doesn't show how many cards were sold during their lifetime, but how many users use them now. R600 has been EOL for 1.5 years - many users have already sold theirs. Despite that, today's best-performing single-chip high-end board, which is available much cheaper than the old HD2900XT was, is not markedly more frequent. That's interesting especially because the HD2900XT was noisy, power hungry, unpopular, more expensive and buggy (broken AA resolve in hardware). I'm convinced that ATi made a bigger profit on the unpopular HD2900XT than nVidia is making on the GTX285.
Even the expensive enthusiast GTX295 has outsold the mainstream 2900.
The HD2900XT was never sold at mainstream prices - maybe the HD2900GT was, for about 6 weeks while inventories were being cleared before the launch of RV670. Both the HD2900XT and GF8800GTS-640 were high-end-priced parts performing about 20% below the enthusiast GTX. The mainstream parts were the GF8600 and HD2600.
And if a GTX280 high-end card is as common as a value mainstream part like the 3870, that's quite an achievement.
The GTX280 hasn't been a high-end part for months. It's significantly cheaper than the HD3870 was in '07 and early '08.
That only reinforces the idea that nVidia's brand is incredibly strong. And that ATi is still fighting against the 8800, years after it was introduced.
Was. nVidia is only able to sell GT200 at mainstream prices. The brand name isn't strong enough to sell it at high-end prices. That's reality.

What was the "8800"? I did some quick research and discovered that there are at least 12 different products (different GPUs, 192/256/320/384-bit memory buses, 256/320/512/640/768/1024 MB memory configs etc.) which were launched and sold over a 2-year period - from the cheap mainstream 8800GS or GT-256 to the enthusiast 8800Ultra. If nVidia decided to call an entire portfolio of products "8800" to get that name to the top positions in the charts, then it's no wonder the name is there.

Anyway, I won't continue this line of discussion; it's already OT and not related to future products.
 
Despite that, today's best-performing single-chip high-end board, which is available much cheaper than the old HD2900XT was, is not markedly more frequent.

I fail to see the logic in that comparison.
Here's the logic I see:
The GTX285 has only been on the market for a VERY short time; sales are still picking up.
Aside from that, many shops were getting rid of their GTX280 stock at very low prices, so potential GTX285 buyers would often go for the GTX280 instead (just look at the Steam survey: the GTX280 continued to increase even in recent months, after the GTX285 launched).
Once the GTX280 is completely gone, all sales will go to GTX285.
We'll check Steam survey again in a few months. GTX285 will probably have taken a bigger share then.

The HD2900XT was never sold at mainstream prices

It was about the same price as the 8800GTS, which was in the mainstream price range.

Both the HD2900XT and GF8800GTS-640 were high-end-priced parts performing about 20% below the enthusiast GTX.

Flawed logic. They performed only 20% less, but they didn't cost 20% less; they were WAY cheaper, more like 50% less.
The price dictates the market segment they're in, not the performance.

The mainstream parts were the GF8600 and HD2600.

8600GTS was only marginally cheaper than 8800GTS.
Back when I bought my 8800GTS I think the prices were something like this:
8600GTS: 200 euro
8800GTS320: 250 euro
8800GTS640: 300 euro
8800GTX: 550 euro
 
The HD2900XT was more expensive than the GTX280 for the majority of its lifespan.
I am not so sure about that. I've checked the price of the last listed HD 2900 XT (it's a Sapphire, so probably quite representative) and the cheapest still-available GTX280 (a Point of View).

http://geizhals.at/?phistgfx=344251&loc=de&age=365&width=640
http://geizhals.at/?phistgfx=255143&loc=de&age=2000&width=640
(you can make out which is which from the dates)

Considering only the first year after launch, the GTX280 stayed above the maximum the HD 2900 ever sold for over half a year, and even after that it wasn't much cheaper - probably also because the HD 2900 XT was EOL'ed and no longer in the self-regulating market.

Other factors would include wafer prices - considering that AMD was using the not-so-high-volume 80nm process while Nvidia used mass-volume 65nm two years later. Nvidia also had to pay for twice as much memory, which was also faster, though memory prices went down quite a bit in the meantime.

I wouldn't give too much weight to the argument that the HD 2900 XT was positioned as a performance solution instead of high-end, because the whole design of the card and GPU - everything except the clock rates and memory - clearly indicates it was intended for high end.
 
I am not so sure about that. I've checked the price of the last listed HD 2900 XT (it's a Sapphire, so probably quite representative) and the cheapest still-available GTX280 (a Point of View).

http://geizhals.at/?phistgfx=344251&loc=de&age=365&width=640
http://geizhals.at/?phistgfx=255143&loc=de&age=2000&width=640
(you can make out which is which from the dates)

Considering only the first year after launch, the GTX280 stayed above the maximum the HD 2900 ever sold for over half a year, and even after that it wasn't much cheaper - probably also because the HD 2900 XT was EOL'ed and no longer in the self-regulating market.

Also, these are euro prices for American products. Prices of all American products have come down in the past year because the dollar is very weak now.
I'd like to see a US chart next to this; I bet the US price didn't drop nearly as much in the last year as the European prices did.

I wouldn't give too much weight to the argument that the HD 2900 XT was positioned as a performance solution instead of high-end, because the whole design of the card and GPU - everything except the clock rates and memory - clearly indicates it was intended for high end.

Depends on how you look at it.
For ATi it was their fastest and most expensive part at the time, so in that sense it was high-end.
But consumers look at nVidia as well, and compared to nVidia it certainly wasn't the fastest... so to them it wasn't high-end.
Hence nVidia dictated prices, and ATi had to price it according to nVidia's performance, and sell it cheaper than intended.
 