AMD: R9xx Speculation

Like your schedule isn't affected by what your competition is doing :) I believe they can afford to wait a bit and tweak a few things so the match against the 570 is won without a doubt. And by the way, only a few people know for sure, so what you and I believe doesn't matter much in the end :D

There is a vast difference between thinking and believing.

I also believe that the delay was not due to the 580 and 570 appearing; their performance numbers were estimated quite well in the middle of last year, when a 512-CC 480 was expected to appear.

Don't you remember the "Cypress is not meant to go up against GF100" messages?

It would be idiotic to think that the launch of the 580 (and announcement of the 570) caught AMD completely by surprise, because by that line of thought AMD might just as well have to down-tune Cayman so that they don't look so ridiculously fast.
 
The rumor mills disagree with y'all, unless 'twas always 890 MHz.

Computerbase.de

Donanimhaber


Antilles - GTX580 / GF114x2?

Cayman - GTX570 / GF114?

You must be truly out of your mind if you think GTX580 would get anywhere close to Antilles when even HD5970 and HD6870 CF beat it :rolleyes:

edit:
And HD6950 against GTX560? Unless they ramp up the clocks to around the "GTX460 max factory OC" levels and enable the full GF104 (call it GF114 if you want) core, the GTX560 will have a hard time against the 6870 already, let alone the 6950
 
You must be truly out of your mind if you think GTX580 would get anywhere close to Antilles when even HD5970 and HD6870 CF beat it :rolleyes:

edit:
And HD6950 against GTX560? Unless they ramp up the clocks to around the "GTX460 max factory OC" levels and enable the full GF104 (call it GF114 if you want) core, the GTX560 will have a hard time against the 6870 already, let alone the 6950

Perhaps I worded that badly, but why so zealous to portray me as a delusional nVIDIA defender? I was only trying to extrapolate its price competitor/closest competition and relay the info from the article. Obviously it would appear AMD will have the better value. :p

edit: Don't agree about GTX560 vs. 6870. I believe the GTX560 will effectively replace the GTX470, with some overclocking headroom still available. The 6870 achieves this too, but will likely have less absolute performance. Against the 6950, though, I agree: it's likely to be no contest.
 
Last I checked, neither the AMD 6970 nor the 6990 has been released, and neither you nor Xbit has access to cards or benchmarks.

Can we take the melodramatics down a notch?
 
Sorry for my ignorance, but if it's true that the 6970 is 40% faster than the 6870, where does it leave the 6950? My guess is around the 5870 level, which is a bit faster than the 6870, and if so, who will buy the 6950 then?
I find it a bit weird that all these rumors compare the 6970 to the 6870 and not to last year's Cypress series, which beats Barts.
 
Sorry for my ignorance, but if it's true that the 6970 is 40% faster than the 6870, where does it leave the 6950?
40% faster than the 6870 means 30-35% faster than the 5870. I don't think the gap between the 6970 and 6950 will be more than 20-25%; in that case the 6950 would be a bit faster than the 5870 and roughly at GTX 480/570 level.
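
For what it's worth, here's a quick sanity check of that chain of percentages (a minimal sketch; the ~5% 5870-over-6870 gap is my own assumption from launch reviews, not anything from the rumors):

```python
# Back-of-the-envelope check of the relative-performance chain above.
# Assumption (mine, not from the thread): the 5870 is ~5% ahead of the 6870.
perf_6870 = 1.00               # baseline
perf_5870 = perf_6870 * 1.05   # assumed ~5% gap
perf_6970 = perf_6870 * 1.40   # rumored "40% faster than the 6870"

print(f"6970 vs 5870: +{(perf_6970 / perf_5870 - 1) * 100:.0f}%")  # ~33%

# If the 6950 trails the 6970 by 20-25%, it lands at or just above the 5870:
for gap in (0.20, 0.25):
    perf_6950 = perf_6970 * (1 - gap)
    print(f"6950 at -{gap:.0%} vs 6970: {perf_6950 / perf_5870:.2f}x the 5870")
```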

and if so, who will buy the 6850 then?
Assuming you meant 6950, I'd guess those who want a 2GB card with GTX 480 performance at <225 W for under $350.
 
This thread has gone to shit.

Simple fact is that AMD has had a faster card than the GTX 580 for months now. I don't see why AMD would aim its refresh parts at slower performance targets than the GTX 580. It really makes no sense at all unless you have a hardon for NVIDIA winning.

AMD has been producing chips on this 40nm process since last summer, and I don't believe they would suddenly have process problems. We see Barts delivering 90% of the performance of Cypress in 80% of the die area. I would think that Cayman, at 120-130% of the size of Cypress, would be 40-50% faster than Cypress.
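
A rough back-of-the-envelope version of that scaling argument (a sketch only; the 90%/80% Barts figures and the 120-130% Cayman area are the estimates above, and the assumption that Cayman keeps Barts-like perf/mm² is mine):

```python
# Perf/area scaling sketch. Inputs are the poster's estimates; the carry-over
# of Barts's efficiency to Cayman is an assumption, not a known fact.
barts_perf_per_area = 0.90 / 0.80   # ~1.125x Cypress's perf per mm^2

for area_ratio in (1.2, 1.3):       # Cayman at 120-130% of Cypress's area
    cayman_perf = barts_perf_per_area * area_ratio
    print(f"Cayman at {area_ratio:.0%} of Cypress area: "
          f"~{(cayman_perf - 1) * 100:.0f}% faster than Cypress")
# Prints ~35% and ~46%, roughly in line with the 40-50% guess.
```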

Regardless, we will know in a few weeks.
 
To me it's not about holding the performance crown, it's about creating products that offer excellent performance for the price. If AMD can produce a chip that is comparable in speed to the 480 or faster, yet smaller, more heat-efficient, and cheaper to produce, then that is a big win. I really don't think NVIDIA likes having to match AMD's pricing when, I assume, they need to spend more to achieve that price point's performance. Penis envy can be left to the kids to fight over.

This board is losing its appeal for its (once) technical discussions amid all this red-vs-green ranting bullshit.
 
Tell me something. Is everyone comfortable with entire pages of this thread being deleted? Because I'm not interested in this thread "achieving" 200+ pages on the back of tired old tangents like the one we just had. Everyone talking past each other about their personal gaming preference is not my idea of architectural discussion.
 
Simple fact is that AMD has had a faster card than the GTX 580 for months now. I don't see why AMD would aim its refresh parts at slower performance targets than the GTX 580. It really makes no sense at all unless you have a hardon for NVIDIA winning.

Because when they started, the 580 didn't exist (in fact, I don't think the 480 did). So they're setting their own line in the sand, and that line is defined not solely by competitor performance but by performance targets, cost targets, power and heat budgets, and requests from partners. There are many different influences pulling in many different directions.

You want a $400 card from AMD that outperforms a $500 card from NVIDIA? Yeah, we all do, but will AMD make one? If they could, all kinds of questions come up: is it responsible to the business and the shareholders to make a faster card for $100 less? Isn't it more responsible to price it competitively? What about the product pricing lineup? Do you leave gaps to fill in, or design for a product performance/value/power target and stick to it? Will you sell more cards and gain more market and mindshare by selling $400 GPUs instead of $500 GPUs? Barts launched into the $150-$250 price point because that's where the money is for the enthusiast gamer. $300+ is higher margin, much lower sales. Why then target an even smaller segment with $500+ products?

Aiming for the farthest yardstick with a single throw gives you R600s and GF100s - you miss the market.
 
Barts launched into the $150-$250 price point because that's where the money is for the enthusiast gamer. $300+ is higher margin, much lower sales. Why then target an even smaller segment with $500+ products?

Because when you target that market with a dual-GPU card, it can potentially reduce costs through economies of scale, boosting the margins of both products.
 
Because when you target that market with a dual-GPU card, it can potentially reduce costs through economies of scale, boosting the margins of both products.

Right, so aiming a single-GPU card at that price point (e.g. "Cayman must be faster than GF110!") makes even less sense. Antilles vs. GF110? See Hemlock vs. GF100.
 
Tell me something. Is everyone comfortable with entire pages of this thread being deleted?

I am. Remember folks, this is the AMD R9xx Speculation thread. Keep the other topics elsewhere.
 
Does anyone know anything about that scalable off-chip buffering and what it could possibly mean? I'm not sure whether it refers to tessellation or something else.
 
Tell me something. Is everyone comfortable with entire pages of this thread being deleted? Because I'm not interested in this thread "achieving" 200+ pages on the back of tired old tangents like the one we just had. Everyone talking past each other about their personal gaming preference is not my idea of architectural discussion.

I applaud it.
Thank you.
 
Right, so aiming a single-GPU card at that price point (e.g. "Cayman must be faster than GF110!") makes even less sense. Antilles vs. GF110? See Hemlock vs. GF100.
Who said anything about Cayman being above $499? That's Antilles territory.
Last I heard, the XT was ~$449 and the Pro ~$349, with Antilles priced where Hemlock was.

Does anyone know anything about that scalable off-chip buffering and what it could possibly mean? I'm not sure whether it refers to tessellation or something else.

Tessellation, since the slide mentions the features/changes in each generation of their tessellation engine.

Scalability is probably due to the increase in setup rate and/or the number of tessellation units.
No idea on the off-chip buffering.
 
Okie dokie. We're on the 20th here, so that means we're about 20 days from the slated release, right?
 