AMD: R9xx Speculation

I wouldn't be surprised to see the 1920SP part launch as 67xx and the 28nm parts as 68xx, as Gipsel suggests (I think I said that a couple of months ago as well :/)

What would be the reason to do that? A new series should always get a little extra buzz.

Actually, one reason I can think of is to keep 40nm card sales going for longer in parallel with 28nm. If 28nm is supply constrained, I guess that would make sense.
 
With the current deafening silence, it could also be a new series of re-branding coming up. Other than lifting the current limitations on Eyefinity (the DP connector requirement) and UVD, I don't think there will be a lot of new features on the new parts that would warrant that big an overhaul.

;)
 
I guess that is the current consensus of the rumors. Southern Islands will still belong to the R800 generation (think of the RV610->RV620 or RV630->RV635 update, minus the shrink, plus maybe some small tweaks), and only NI will bring us the big changes.
 
In the end, price will be the biggest factor. They can have a card 30% faster than the 5870, as long as they sell it for that much more :rolleyes:.
It will be rather interesting to see if AMD has the balls to start a price war and completely destroy the GF100 and GF104 :?: (they could do it already with the 5800 cards if they were satisfied with much lower margins).
 
It is expected to be under 400mm² and to perform ~20% better than the GTX 480? That would be nice, but isn't this scenario overly optimistic?

What performance metric are we talking about? The 5870 already performs close to a GTX 480 and sometimes surpasses it in certain aspects.
 
How? RV870 and GF104 are similar in size and performance. Demand for the HD 5850/5870 is still higher than supply. They could lower the price, but it wouldn't sell more cards, because they don't have them. They could create another 8800 GT: a cheap, fast card that nobody could buy.
 
That's assuming that in late fall, when these cards might come out, they will still be supply limited.

For all we know, GlobalFoundries may be ready to start producing some of the ATI cards.
 
I thought the latest news was that the first Fusion parts shipping this year would also be on 40nm bulk, but I don't see TSMC having enough capacity to do that on top of their current order load.

edit: Oh, I should check DigiTimes way more often than I do...

http://www.digitimes.com/news/a20100720PB201.html

Orders for AMD's Ontario chips, which will be the first Fusion APU the vendor brings to market, are expected to be a key contributor to Taiwan Semiconductor Manufacturing Company's (TSMC's) revenue growth during the second half of 2010, the Chinese-language Commercial Times quoted unnamed industry watchers as saying in a July 20 report.

The paper indicated that TSMC should have begun manufacturing the chips using 40nm bulk technology in the third quarter.

Sales from 40/45nm topped 14% of TSMC's total wafer sales in the first quarter of 2010. TSMC will update the ratio for the second quarter at its upcoming investors conference on July 29.

AMD CEO and president Dirk Meyer, during a Q&A session following the company's recent earnings release, confirmed that one of its forthcoming APUs codenamed Ontario will be built in 40nm bulk technology supplied by TSMC.

AMD plans to ship Ontario APUs in the fourth quarter of 2010, ahead of schedule. However, it will not start shipping Llano APUs, built at Globalfoundries using 32nm SOI, until 2011.

"In reaction to Ontario's market opportunities and a slower than anticipated progress of 32nm yield curve, we are switching the timing of the Ontario and Llano production ramps," Meyer said.

So next Friday could tell us a bit more about the current 40nm utilization at TSMC.
 
No, but you can at least guess a range. With a transistor density comparable to GF100's, GF104 would be about 345mm²; with a density comparable to RV870's, about 305mm². That's close to G70, G92, R580 and RV870...
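The range above is just arithmetic on transistor densities. A minimal sketch, using commonly cited approximate figures (GF100 ≈ 3.0B transistors in ≈ 529mm², RV870/Cypress ≈ 2.15B in ≈ 334mm², GF104 ≈ 1.95B transistors); treat the constants as rough estimates, not official specs:

```python
# Rough die-size estimate for GF104 from the transistor density of known chips.
# All figures are approximate published numbers, not official specs.
gf104_transistors = 1.95e9

# density = transistors per mm^2
gf100_density = 3.0e9 / 529    # ~5.7 Mtransistors/mm^2
rv870_density = 2.15e9 / 334   # ~6.4 Mtransistors/mm^2

size_at_gf100_density = gf104_transistors / gf100_density  # ~344 mm^2
size_at_rv870_density = gf104_transistors / rv870_density  # ~303 mm^2

print(round(size_at_gf100_density), round(size_at_rv870_density))
```

Small differences from the quoted 345/305mm² figures come down to which transistor counts you plug in.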
 
It's a chicken-and-egg circle.
Once the hardware provides the performance and capabilities, there will be programs and applications that take advantage of them.

I'm not really sure (I don't have any real knowledge of this), but do video encoders use DP or SP?

I don't know. As a user who doesn't play any games, I'm eager to see some applications beyond gaming that would benefit from the GPU,
on a much larger scale.

I've been away from the keyboard for a few days, so apologies for the belated response.
Short response is - you'll never see it, if by "In much larger scale" you mean that it will become relevant to the general public.

The reason being that, when viewed as a general co-processor, the GPU is a horribly, horribly inefficient use of transistors (and thus money and power). At the introduction of Nehalem, Intel made quite a bit of propagandistic noise about how new power-drawing features had to pay for themselves 2:1 in terms of overall performance or they wouldn't be implemented. And this makes perfect sense, as the majority of x86 processors are used in environments where power limits apply, so performance/Watt is critical. Viewed in that light, a GPU is incredibly inefficient even if, by magic, all x86 software were rewritten from scratch, due to the very limited set of problems it can be applied to. In the real world, the level of utilization would be so low as to be hard to calculate.

So why is it so popular a topic on these boards? Well, GPGPU was something that was pushed by ATI and NVIDIA in order to try to strengthen their market legs outside gaming, or at least appear to investors as if they did. (I take the somewhat cynical view that this was done to increase the likelihood of being bought out under conditions favourable to their shareholders.) So it received a lot of PR attention. It was new, and therefore interesting to those who take an interest in these things.
But it was only ever a valid proposition on the condition that the GPU was already in the system "for free", and any extra use you could put it to was gravy. And this is only true for the core gamer market. For all other users it's essentially true that once you can drive the interface, little more 3D performance is needed. This is why Intel integrated graphics will always suck in the view of the denizens of this forum, and why it should do so. The way the market is moving, the ever increasing focus on power efficiency and cost of the whole system makes it unlikely that a large part of the market will ever have high-power GPUs.
 
Huh?

GPGPU is real and has been demonstrated time and again to offer a proven improvement in the computing experience for the end user. The GPU as a co-processor is not a panacea for all that ails x86, but it is a great tool for software developers to use to improve their products' features and performance. With Direct2D browsers, GPU-accelerated Office 2010, GPU-accelerated media transcoding (Windows 7 drag-and-drop for compatible Media Foundation devices), and DXVA-enabled media players, there is already a suite of applications in place.

OpenCL and DirectCompute make it easier for developers to offload their parallel processing workloads from the CPU to the GPU. Like anything, there is a lag time from hardware capability to mass adoption and mainstream use. The same was true of FP co-processors and multiple threads.

Focusing on high-power GPUs is looking at the wrong market. Look at the benchmarks for mainstream graphics cards accelerating performance in, say, MediaShow. A $100 card can immensely improve the performance of a PC with a $100 processor in it. That's the goal, not beating a $1300 processor with a $500 card.
 
Maybe you should look here:

http://www.realworldtech.com/page.cfm?ArticleID=RWT090909050230&p=2

Your views on perf/W of GPUs might change a bit.
 
That's bigger than I expected. Transistor density is significantly lower compared even to GF100...

It might have something to do with doubled up vias… or perhaps GF104's apparent ability to clock quite high—though obviously not high enough.
 
It's not too different from GF100's. Maybe the difference is just that there's comparatively more logic and less cache?
GF104 compared to GF100 has:
2/3 of trans
3/4 normal alus
1/1 SFUs / TMUs
2/3 L2 cache (and rops, mem i/o)
1/2 L1 cache (and register file)
1/2 PolyMorph Engines / Raster Engines
Hard to tell though if that's overall really less simple stuff (like cache) percentage-wise...
In any case it's not surprising transistor density is lower than rv870 - that was already very easily seen with g92b / rv770.
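For reference, the ratios in the list above follow from commonly cited full-chip unit counts (assumed here, approximate: GF100 with 16 SMs / 512 ALUs / 48 ROPs / 768 KB L2, GF104 with 8 SMs / 384 ALUs / 32 ROPs / 512 KB L2); a quick sketch to check them:

```python
# Commonly cited unit counts for the full chips (approximate, for illustration).
# L1, register file, PolyMorph and raster engines all scale with SM/GPC count,
# so the "sms" ratio stands in for those 1/2 entries.
gf100 = {"transistors": 3.0e9, "alus": 512, "tmus": 64,
         "l2_kb": 768, "rops": 48, "sms": 16}
gf104 = {"transistors": 1.95e9, "alus": 384, "tmus": 64,
         "l2_kb": 512, "rops": 32, "sms": 8}

for key in gf100:
    ratio = gf104[key] / gf100[key]
    print(f"{key}: {ratio:.2f}")
```

The printed ratios match the list: ~2/3 transistors and L2/ROPs, 3/4 ALUs, 1/1 TMUs, 1/2 for everything tied to SM count.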

Well, 5870 is smaller and performs better than GF104.
I would argue not by that much, though (for a full chip with "reasonable" achievable clocks; the 5870 doesn't really overclock much without additional voltage, which makes power draw much worse). Whether NVIDIA could actually produce a full chip in quantity is another question (I have no idea, really).
So if GF104 is basically 10% bigger and 10% slower, that wouldn't be too bad. I would say definitely better than any GT2xx comparison against RV7xx. Though I suspect GF106 against Juniper will look worse again for NVIDIA (I still think Cypress didn't scale too well compared to Juniper).
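Taking the "10% bigger and 10% slower" figure at face value, the implied perf/area deficit is easy to work out (these are the post's own rough assumptions, not measurements):

```python
# Relative to HD 5870 (Cypress) = 1.0, using the post's rough assumptions.
gf104_rel_area = 1.10   # "10% bigger"
gf104_rel_perf = 0.90   # "10% slower"

perf_per_area = gf104_rel_perf / gf104_rel_area
print(f"GF104 perf/area vs Cypress: {perf_per_area:.2f}")
```

That works out to roughly 0.82, i.e. an ~18% perf/area deficit, which is indeed a much smaller gap than the GT2xx-vs-RV7xx era.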
 