AMD: R8xx Speculation

How soon will Nvidia respond with GT300 to the upcoming ATI RV870 GPU lineup?

  • Within 1 or 2 weeks: 1 vote (0.6%)
  • Within a month: 5 votes (3.2%)
  • Within a couple of months: 28 votes (18.1%)
  • Very late this year: 52 votes (33.5%)
  • Not until next year: 69 votes (44.5%)

  • Total voters: 155
  • Poll closed.
Assuming they stay with a 256bit memory interface (which seems likely if a single PCB X2 is planned), that is about the best they can do given GDDR5 availability.
 
Then sure as hell, the bus width would be larger. A ~22% increase in mem bandwidth is not going to cut it.
Assuming they stay with a 256bit memory interface (which seems likely if a single PCB X2 is planned), that is about the best they can do given GDDR5 availability.
But hey, a single-PCB GTX 295 is possible with a 512-bit memory interface too.
 
rpg.314 said:
A ~22% increase in mem bandwidth is not going to cut it.

Why not?

http://www.anandtech.com/video/showdoc.aspx?i=3555&p=2

Here is a 23% bandwidth increase on the 4890 providing a whopping maximum of 3.8% performance increase. So increasing only the bandwidth is about 17% efficient.

On the next page, we have 17.6% clock increase providing a max of 13.3% performance increase, which is 76% efficient.

Thus, I'd say it makes sense to focus on what will give you the greatest ROI. Sure, 320-bit GDDR5 with ~180GB/s of bandwidth would have been nice, but I think it is way too early to conclude the 5870 is going to be seriously bandwidth starved.
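For anyone who wants to check those efficiency figures, they are just the ratio of the observed performance gain to the input gain. A quick sketch, using only the numbers from the AnandTech results quoted above:

```python
def scaling_efficiency(input_increase, perf_increase):
    """Ratio of observed performance gain to the input (bandwidth/clock) gain."""
    return perf_increase / input_increase

# HD 4890 numbers from the AnandTech article linked above
bw_eff = scaling_efficiency(0.23, 0.038)    # memory bandwidth only -> ~17%
clk_eff = scaling_efficiency(0.176, 0.133)  # core clock only -> ~76%

print(f"bandwidth scaling: {bw_eff:.0%}, clock scaling: {clk_eff:.0%}")
```

Which is where the "17% efficient" vs "76% efficient" comparison comes from: per percentage point of input, the clock bump bought roughly four times as much performance as the bandwidth bump.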
 
Edit: it's definitely trees

Starting to fill in the gaps:
x2: ??
performance: evergreen
mainstream: juniper
entry: ??
fusion: ??

Some people have suggested "cypress" is the name for the performance chip as well. Not sure who to believe; cypress may be the family name or the name of a third chip in the series. Big questions remain about replacing the mainstream chip so soon, and about what is happening at the entry level.

Plant people are going to be very angry about this, as "evergreen" is a much broader category than cypress. The names probably should have been swapped, with evergreen used as the family name.

Edit: The BSN site has a story where the evergreen name came from.
 
Why not?

http://www.anandtech.com/video/showdoc.aspx?i=3555&p=2

Here is a 23% bandwidth increase on the 4890 providing a whopping maximum of 3.8% performance increase. So increasing only the bandwidth is about 17% efficient.

On the next page, we have 17.6% clock increase providing a max of 13.3% performance increase, which is 76% efficient.

Thus, I'd say it makes sense to focus on what will give you the greatest ROI. Sure, 320-bit GDDR5 with ~180GB/s of bandwidth would have been nice, but I think it is way too early to conclude the 5870 is going to be seriously bandwidth starved.

When your competitor is doubling the bandwidth outright (GDDR3->GDDR5), IMHLO, a ~22% increase is definitely a recipe for being bandwidth starved. RV670->RV770 was already a massive jump in peak compute/mem ratio.
 
rpg.314 said:
When your competitor is doubling the bandwidth outright (GDDR3->GDDR5), IMHLO, a ~22% increase is definitely a recipe for being bandwidth starved.
I hate to break this to you, but your HLO has no basis in rational thought.

1. Whether or not the HD5870 turns out to be bandwidth starved has nothing to do with anything other than itself.

2. Some of the best bang/buck chips in history have been "bandwidth starved"

3. G3x0 is not the HD5870's competitor; the G3x0 will most likely be priced 150-200 dollars higher.

4. The idea that a product having less bandwidth than its competition makes it inherently inferior is incredibly myopic. 2900XT vs 8800GTS, 2900XT vs HD 4770

rpg.314 said:
Rv670->Rv770 was already a massive jump in peak compute/mem ratio.
And how did that work out?
 
1. Whether or not the HD5870 turns out to be bandwidth starved has nothing to do with anything other than itself.

Well, you gotta compare it with its competitors, don't you?

2. Some of the best bang/buck chips in history have been "bandwidth starved"

I'd like to know. No sarcasm intended.
3. G3x0 is not the HD5870's competitor; the G3x0 will most likely be priced 150-200 dollars higher.

My bad, got carried away by the rv770 vs gt200 fight.

4. The idea that a product having less bandwidth than its competition makes it inherently inferior is incredibly myopic. 2900XT vs 8800GTS, 2900XT vs HD 4770

Agreed, it's a complex balance of many factors, but hurting one important factor can tip the scales.
And how did that work out?

This time, wonderfully. Next time? There's no guarantee of success. And you can increase the ALU vs mem ratio only so much without disturbing the chip's balance.
 
I would think a small bandwidth increase would be limiting if RV870 really does increase the number of ROPs (double?) and TMUs (1.5-2x?).
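Back-of-envelope on that worry (the RV770 figures are the shipping HD 4870 specs; the doubled ROP count, 850 MHz clock, and +22% bandwidth for RV870 are just the rumors above, so treat this as purely illustrative):

```python
def rop_write_bw_gbps(rops, clock_ghz, bytes_per_pixel=4):
    """Peak color-write bandwidth the ROPs can demand (32-bit color, no blending)."""
    return rops * clock_ghz * bytes_per_pixel

# RV770 (HD 4870): 16 ROPs @ 0.75 GHz, 115.2 GB/s of GDDR5
rv770_demand = rop_write_bw_gbps(16, 0.75)      # 48.0 GB/s
# Rumored RV870: 32 ROPs, assumed 0.85 GHz clock, ~22% more bandwidth
rv870_demand = rop_write_bw_gbps(32, 0.85)      # 108.8 GB/s
rv870_bw = 115.2 * 1.22                         # ~140.5 GB/s

print(rv770_demand / 115.2)    # ~0.42 of memory bandwidth
print(rv870_demand / rv870_bw) # ~0.77 of memory bandwidth
```

So even before blending (which reads as well as writes, doubling the traffic) or any texture fetches, the rumored configuration would eat a much larger fraction of memory bandwidth than RV770 does.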
 
I would think a small bandwidth increase would be limiting if RV870 really does increase the number of ROPs (double?) and TMUs (1.5-2x?).

I thought those rumours spoke of 48 TMUs, which would be a 20% increase in unit count compared to RV770.

Besides, I know it may sound like a stereotype, but isn't it more important how a GPU handles its bandwidth than its theoretical raw amount?
 
I would think a small bandwidth increase would be limiting if RV870 really does increase the number of ROPs (double?) and TMUs (1.5-2x?).

Why would it be bandwidth starved, and if it is bandwidth starved in some workloads, does it really matter?

I'm sure I can create a workload that will cause ANY card to be bandwidth starved: maximum overdraw and maximum texturing. But I'm not sure that workload matters, because it's way outside the bounds of reasonable: if I really have that much overdraw, I'm going to be a hell of a lot better off as an application developer doing a pre-pass to load the depth values and take advantage of the built-in hardware.

And if I'm really using that much texturing, well... time to invest in better texture caching.

And if I'm BW limited at reasonable resolutions past, say, 75 or 100 FPS, then it doesn't matter anyway. Realistically, if the card can hit 60 FPS at max quality, anything else is really a waste of power.
 
And if I'm BW limited at reasonable resolutions past, say, 75 or 100 FPS, then it doesn't matter anyway. Realistically, if the card can hit 60 FPS at max quality, anything else is really a waste of power.
Basically, I agree. But I'd rather the card could sustain 60 fps at those settings than only hit it once in a while. :)
 
Some news from Spyre.

JUST BEFORE NOON Taipei time on Wednesday, ATI is going to drop a bombshell on the graphics world, showing off working 40nm DX11 silicon. Nvidia has yet to tape out a DX11 part or put a 40nm chip on the market. The cards that will be shown off are Evergreen/Cypress, what is commonly referred to as R870. That code name, however, is not used internally, so if you ever see any leaked specs or charts with R870 on them, they are flat out faked. There have been several of these bandied around, but they are all laughably off.
AMD promised that they would have DX11 cards out on the market by the Windows 7 launch, and barring a massive and as yet undetected problem, it looks like they will make that with ease. In the meantime, at the company's Computex press conferences, Nvidia distinctly avoided any mention of DX11 or parts that run it.
Mid-day tomorrow, the winners and losers for the next 2-3 quarters will become very, very clear.

http://www.rage3d.com/board/showthread.php?t=33946594
 
Terry (Catalystmaker) just tweeted it 3 hours ago, followed by a retweet by Ian (another AMD chap). Valid? Almost definitely.
 
Nice, hopefully they have some demo goodies in store for us too. Or is it a press-only, behind-closed-doors affair?
 
There seems to be no sign of an NDA about it. It would be pretty useless anyway, IMO; when you are about to throw stones at your competitor's smoke screen, you'd want the public to take notice.
 