AMD: R7xx Speculation

There are certain rendering operations that consume much more bandwidth than others. ~70GB/s would be fine for normal rendering but would start lagging when AA/AF gets cranked up.
I'm aware of this but still, that's almost double the bandwidth, while the differentiation between Pro and XT sounds rather minor. Judging from the past, I can't think of a single instance where this has been the case.
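
To put rough numbers on the AA claim above, here is a back-of-the-envelope sketch in Python; the resolution, overdraw, frame rate and byte counts are all illustrative assumptions, not RV770 figures:

```python
# Back-of-the-envelope framebuffer traffic -- every number here is an
# illustrative assumption, not a measured or leaked RV770 figure.
width, height = 1920, 1200   # assumed resolution
fps = 60                     # assumed target frame rate
overdraw = 3                 # assumed average overdraw factor
bytes_color = 4              # 32-bit colour write
bytes_z = 8                  # 32-bit Z, read + write

for msaa in (1, 4):
    # MSAA multiplies colour and Z traffic roughly by the sample count
    # (ignoring the compression that real GPUs lean on heavily).
    per_frame = width * height * overdraw * (bytes_color + bytes_z) * msaa
    print(f"{msaa}x AA: ~{per_frame * fps / 1e9:.0f} GB/s raw framebuffer traffic")
# 1x AA: ~5 GB/s raw framebuffer traffic
# 4x AA: ~20 GB/s raw framebuffer traffic
```

Texture and geometry traffic come on top of this, but the roughly 4x jump in raw framebuffer traffic shows why a bus that is comfortable without AA can become the bottleneck once AA is cranked up.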
 
=>Sunrise: 0.8 ns is 2.5 GHz, theoretically.

=>Berek-Halfhand: If Fudzilla (or anyone else for that matter) were to tell who their sources are, soon there would be none. Right now, there is a number of people who know GT200 specs but are under NDA. So all I can tell you is that some of the rumours about GT200 are true and others are false.
 
Just divide 1000 by 0.8; it should come out to 1250, unless my Allendale has an FDIV bug.
Where's that frying pan when you need it? :LOL:
(I hadn't even considered calculating it rather than just looking it up on Samsung's semiconductor page, where there's a table stating "0.8ns - 1100MHz"...)

Ok, so theoretically 1250MHz (2500MHz effective) can be reached, but my argument still holds: if you want those chips available in volume, it would probably be very expensive, because this is apparently already at the very edge of their achievable frequencies. And since Samsung doesn't even list it on their page, I'm not exactly convinced they can supply it at that rated speed.

If we just completely ignore this one, the bandwidth difference between the PRO and the XT still makes no sense to me.
 
It's actually 0.83ns ≈ 1200MHz => 2400MHz DDR.
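
For anyone following along, both figures come from taking the reciprocal of the cycle time and doubling it for the DDR data rate; a minimal sketch:

```python
# Convert a DRAM cycle time in nanoseconds to its clock and effective
# DDR data rate. Pure arithmetic, nothing vendor-specific.
def cycle_time_to_clocks(ns: float) -> tuple[float, float]:
    clock_mhz = 1000.0 / ns          # a 1 ns period is 1000 MHz
    return clock_mhz, 2 * clock_mhz  # DDR transfers twice per clock

for ns in (0.8, 0.83):
    clock, ddr = cycle_time_to_clocks(ns)
    print(f"{ns} ns -> {clock:.0f} MHz clock, {ddr:.0f} MHz effective")
# 0.8 ns -> 1250 MHz clock, 2500 MHz effective
# 0.83 ns -> 1205 MHz clock, 2410 MHz effective
```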

And maybe they just want to keep costs in check for the 4850. Let the 4870 be the expensive blow-out part that fights for the crown and let the 4850 remain cost-competitive with whatever Nvidia comes up with.
 
=>Sunrise: Well, Samsung is not the only GDDR manufacturer out there. Qimonda lists 2.4GHz GDDR3 on their website. Sure, such memory is expensive, but what's the alternative? 2.8GHz GDDR4? Those are just as expensive, and the clock speed can't be compared to GDDR3.
By the way, AFAIK there are some cheaper versions of GDDR5, working at less than 4 GHz. Perhaps RV770XT will use something like that.
 
=>Sunrise: Well, Samsung is not the only GDDR manufacturer out there.
No, but it is the one that has always led the race when it came to GDDR memory. With GDDR5, however, it looks like that has changed.

Qimonda lists 2.4GHz GDDR3 on their website. Sure, such memory is expensive, but what's the alternative? 2.8GHz GDDR4? Those are just as expensive, and the clock speed can't be compared to GDDR3.
Well, didn't ATi already use 1125MHz (2250MHz) on their reference 3870? Also, while latencies might of course be a factor (that's why I mentioned it), GPUs tend to cope with them pretty well.

Lukfi said:
By the way, AFAIK there are some cheaper versions of GDDR5, working at less than 4 GHz. Perhaps RV770XT will use something like that.
AFAIK those are the 1.8GHz parts from Qimonda. Maybe ATi doesn't even want (or need) to exploit bandwidth like that, and would rather save on power while still having at least 50% more bandwidth (compared to the Pro) for their XT SKU.
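
If we assume a 256-bit bus on both SKUs and read the "1.8GHz" GDDR5 parts as 3.6 Gbps per pin (my interpretation, not a confirmed spec), the 50% figure checks out:

```python
# Peak bandwidth = per-pin data rate * bus width / 8 bits per byte.
# The 256-bit bus and per-pin rates are assumptions, not confirmed specs.
def bandwidth_gbs(gbps_per_pin: float, bus_bits: int) -> float:
    return gbps_per_pin * bus_bits / 8

gddr3 = bandwidth_gbs(2.4, 256)  # 2.4 Gbps GDDR3 on a hypothetical Pro
gddr5 = bandwidth_gbs(3.6, 256)  # 3.6 Gbps GDDR5 on a hypothetical XT
print(f"GDDR3: {gddr3:.1f} GB/s, GDDR5: {gddr5:.1f} GB/s, "
      f"+{(gddr5 / gddr3 - 1) * 100:.0f}%")
# GDDR3: 76.8 GB/s, GDDR5: 115.2 GB/s, +50%
```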
 
I'm aware of this but still, that's almost double the bandwidth, while the differentiation between Pro and XT sounds rather minor. Judging from the past, I can't think of a single instance where this has been the case.
The memory clock difference between the HD3870 and HD3850 was also (more than) twice as large as the core clock difference (36 vs. 16 percent).
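
For reference, using the HD3870's 775 MHz core / 1125 MHz memory against the HD3850's 670 MHz core / 828 MHz memory, the deltas check out:

```python
# Reference clocks in MHz: (HD3870, HD3850).
clocks = {"core": (775, 670), "mem": (1125, 828)}

for name, (hi, lo) in clocks.items():
    print(f"{name}: +{(hi / lo - 1) * 100:.0f}%")
# core: +16%
# mem: +36%
```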

Now imagine - as I do - that AMD has more functional units in RV770 and some might be disabled in the lower-end models. That plus an (IMHLO) hefty core clock difference would make the XT model, or whatever it is called, quite a bit more bandwidth-hungry than can be satisfied with GDDR3.

Additionally, do not forget the supposedly beefed-up (or should I say fixed?) ROPs. With 4 Z/clk you'd also need more bandwidth than with just two. And you don't want your fastest model to be bandwidth-impaired...
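
To illustrate the ROP point with made-up numbers (the ROP count and clock here are hypothetical, since RV770's real configuration is unknown):

```python
# Raw depth-fill bandwidth = ROPs * Z samples/clk * clock * 4 bytes per
# 32-bit Z sample. ROP count and clock are hypothetical; real traffic
# would be lower thanks to Z compression.
def z_fill_gbs(rops: int, z_per_clk: int, clock_mhz: int) -> float:
    return rops * z_per_clk * clock_mhz * 1e6 * 4 / 1e9

for z in (2, 4):
    print(f"{z} Z/clk: {z_fill_gbs(16, z, 750):.0f} GB/s raw")
# 2 Z/clk: 96 GB/s raw
# 4 Z/clk: 192 GB/s raw
```

Compression would cut the real traffic well below those raw figures, but doubling the per-clock Z rate still doubles the appetite.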
 
The memory clock difference between the HD3870 and HD3850 was also (more than) twice as large as the core clock difference (36 vs. 16 percent).
In absolute terms, and compared to the core clocks, that is certainly the case. However, if you look at it strictly by available bandwidth, it just looks like a very large difference. When you take into account that it's probably the lowest-end GDDR5 against the highest-end GDDR3, that huge difference disappears pretty fast.

Quasar said:
Now imagine - as I do - that AMD has more functional units in RV770 and some might be disabled in the lower-end models. That plus an (IMHLO) hefty core clock difference would make the XT model, or whatever it is called, quite a bit more bandwidth-hungry than can be satisfied with GDDR3. Additionally, do not forget the supposedly beefed-up (or should I say fixed?) ROPs. With 4 Z/clk you'd also need more bandwidth than with just two. And you don't want your fastest model to be bandwidth-impaired...
These go hand in hand with what I currently think will be the case. There's probably a relatively high delta between the Pro and the XT when taking functional units into account, and that's one of the few factors that could justify that difference. Also, GDDR5 fits perfectly with the idea that it saves you even more power when idle, which could be a sign that idle power has indeed been improved, partly because GDDR5 has some very nifty power-saving features.
 
Since Fuad insists on 256-bit vs. 512-bit buses in the RV770 Pro and XT respectively, I am trying to work up a theory that would explain such a gap.

So now I am thinking that the "Pro" model would face the 9600 price category (it would be a bit more expensive, but faster), while the "XT" model would go after the 9800 category. Obviously I'm expecting the upcoming 9800 cards to impress more in comparison to the 9600 than has been the case for the past few months (mostly due to bandwidth limitations, a problem which could be alleviated by using faster memory with G92b). At the high end, RV770 would probably yield the performance crown to the GT200, but ATi would be trying to dominate the juicier market (say, covering and competing in all of $150-$400 with only RV770).

(It's all a supposition here, but) ATi could even subdivide each price domain by being able to fuse resistors to disable ALU/TMU group(s), so we could end up with multiple SKUs for Pro and XT models.

This approach would be economical and quite similar to the 3870/3850 one, but with (I'm guessing) more differentiated and better positioned products. Nvidia might still beat them perf-wise or sell more cards but, in the end, ATi would have much better optimized design and production costs. Nvidia has admitted lately that selling G92 for cheap was costing them dearly.
 
More from FUDzilla: http://www.fudzilla.com/index.php?option=com_content&task=view&id=7321&Itemid=1

We've learned that R700, a dual-RV770 product, won't launch alongside the RV770 products. As we've said before, both RV770 cards are scheduled for a late June launch, and it looks like the dual-chip card won't launch simultaneously.

R700 will launch at least one month later and you can probably blame the complicated design and drivers for such an inconvenient timing.

The way it looks now, GT200-based cards will launch first and then AMD will follow with its high-end card.
 
(It's all a supposition here, but) ATi could even subdivide each price domain by being able to fuse resistors to disable ALU/TMU group(s), so we could end up with multiple SKUs for Pro and XT models.
I don't really see the advantage of this. Perhaps if yields are really bad this could be useful, but why artificially limit the profit you get from a product when the cost of making it is the same?
 
I don't really see the advantage of this. Perhaps if yields are really bad this could be useful, but why artificially limit the profit you get from a product when the cost of making it is the same?


It allows you to address different market segments. You may have a chip that sells in the $300 range, but that doesn't get you any money from someone who only has $200 to spend. So you chop down the expensive chip to make it cheap, while keeping it limited enough that people will still buy the full version with everything enabled for $300.

Looking at the whole lifespan and development of a chip, it might even be cheaper (to a degree) to disable parts of a more expensive chip to address a cheaper market segment than to make a new chip that has to have 80 percent of the same work put into it again.

It's been the case for a while that chips that don't qualify for the top product can have parts disabled and then be sold more cheaply and still make money, rather than just being thrown away. But what happens if your yields are really good? You can't sell all those chips at top whack and just ignore every other market segment. You either have to disable good chips to have something to sell in the lower markets, design and build a more limited chip specifically for that market, or concede that whole market to your competitors.
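
A toy per-wafer revenue model makes that trade-off concrete; the yields and prices below are invented purely for illustration:

```python
# Per-wafer revenue, with and without harvesting partly defective dies
# as a cut-down SKU. All numbers invented for illustration.
dies_per_wafer = 100
yield_full = 0.60      # dies good enough for the full SKU
yield_partial = 0.25   # dies salvageable with some units disabled
price_full = 300
price_cut_down = 200

discard = dies_per_wafer * yield_full * price_full
harvest = discard + dies_per_wafer * yield_partial * price_cut_down
print(f"discard partials: ${discard:,.0f} per wafer")
print(f"harvest partials: ${harvest:,.0f} per wafer")
# discard partials: $18,000 per wafer
# harvest partials: $23,000 per wafer
```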
 
It allows you to address different market segments. You may have a chip that sells in the $300 range, but that doesn't get you any money from someone who only has $200 to spend. So you chop down the expensive chip to make it cheap, while keeping it limited enough that people will still buy the full version with everything enabled for $300.

Looking at the whole lifespan and development of a chip, it might even be cheaper (to a degree) to disable parts of a more expensive chip to address a cheaper market segment than to make a new chip that has to have 80 percent of the same work put into it again.

It's been the case for a while that chips that don't qualify for the top product can have parts disabled and then be sold more cheaply and still make money, rather than just being thrown away. But what happens if your yields are really good? You can't sell all those chips at top whack and just ignore every other market segment. You either have to disable good chips to have something to sell in the lower markets, design and build a more limited chip specifically for that market, or concede that whole market to your competitors.

I see the logic behind it. I think their current strategy of lowering clocks and changing memory configuration is better.

Why cripple a working chip in such a competitive market?
 
It allows you to address different market segments. You may have a chip that sells in the $300 range, but that doesn't get you any money from someone who only has $200 to spend.

...

But what happens if your yields are really good? You can't sell all those chips at top whack and just ignore every other market segment. You either have to disable good chips to have something to sell in the lower markets, design and build a more limited chip specifically for that market, or concede that whole market to your competitors.
Well, that last part is what I was after. Why use a big, expensive, mid-range chip and cut it down when it's not necessary? That's why you've got low-end chips, which are a lot smaller and cheaper to produce. Cutting down part of a perfectly functioning chip wouldn't seem like a sound decision to me, especially in AMD's case. They don't have the high-end market, so they've got to find a way to appeal to consumers. So why cut functionality away from a chip? Wouldn't it be smarter to lower the price of the whole product range, so consumers get a better perf/$ ratio? That's bound to get you more attention from press and consumers, hopefully resulting in more sales, a better name and something like a halo effect. I think AMD would benefit more from that path than from artificially creating new SKUs.

An example could be NVIDIA's 8800GT. It was hyped because it gave near-8800GTX performance for half its price. Perhaps NVIDIA could have sold it at a higher price point, or created two SKUs out of it, perhaps with fewer SPs or something like that. They didn't, however, and the result was an enormous amount of attention and praise, essentially drowning AMD's 3870 in the process. The 8800GT may have had lower margins than possible, but it made sure everyone talked about NVIDIA. Who knows how many extra sales this gave them, just from the halo effect and the brand awareness? And I'm not just talking about 8800GTs, I'm also talking about other cards in NVIDIA's lineup.
 