AMD: R9xx Speculation

Apparently that overclocking site is saying that AMD is going to raise prices in January. Makes little sense to me, but meh, looks like I'm sitting this one out since I wouldn't be able to order until after the holiday.
 
Didn't that have something to do with their taxes increasing? I don't really remember, but if supply is good I wouldn't expect prices to rise.
 
I dunno, the way it was told to me is that AMD was raising the prices, not that taxes would increase.

I dunno, I'm hoping a price drop from Nvidia will spur one from AMD!
 
Anand said:
The limit of NVIDIA’s design is that while Fermi can execute multiple kernels at once, each one must come from the same CPU thread. Independent threads/applications for example cannot issue their own kernels and have them execute in parallel, rather the GPU must context switch between them. With asynchronous dispatch AMD is going to allow independent threads/applications to issue kernels that execute in parallel. On paper at least, this would give AMD’s hardware a significant advantage in this scenario (context switching is expensive), one that would likely eclipse any overall performance advantages NVIDIA had.
Context parallel execution?!
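
For what it's worth, the distinction can be sketched in a few lines of CUDA (a minimal illustration, not AMD's or NVIDIA's actual scheduling logic; the busywork kernel is made up): kernels launched into separate streams from one host thread are eligible to overlap on Fermi, while kernels coming from independent applications/contexts are not and force a context switch.

```cpp
// Minimal sketch: two kernels from ONE host thread/context, on two streams.
// Fermi can co-schedule these if resources allow. What it cannot do, per the
// quote, is co-schedule kernels issued by two independent applications;
// those are handled by (expensive) context switches instead.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void busywork(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        for (int k = 0; k < 1000; ++k)
            data[i] = data[i] * 1.000001f + 0.5f;  // arbitrary ALU work
}

int main() {
    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);

    // Same CPU thread, same context, different streams: eligible to overlap.
    busywork<<<n / 256, 256, 0, s0>>>(a, n);
    busywork<<<n / 256, 256, 0, s1>>>(b, n);
    cudaDeviceSynchronize();

    cudaStreamDestroy(s0);
    cudaStreamDestroy(s1);
    cudaFree(a);
    cudaFree(b);
    printf("done\n");
    return 0;
}
```

If I read the quote right, AMD's asynchronous dispatch would extend that eligibility to kernels coming from separate host threads or processes, which Fermi has to serialize.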
 
A very inconsistent card. Honestly, Nvidia has nothing to worry about. An entire year after the 5870, this is a disappointing launch.

I posted a while back that ATI had a great chance to really drive it home with the 6900 series while Nvidia was struggling with Fermi. This is their answer, and from my viewpoint, they blew it.

What's worse for me as a consumer is that we won't see a 580 price drop now. Before anyone chimes in: Antilles has no appeal to me. Crossfire might scale well, but it tends to have issues in games and often needs patches/hacks/tweaks. I like plug and play.
 
I don't know. If drivers even out the performance, Nvidia might have to drop its prices. Reviews are mixed, but some of them paint it as being very close to the GTX 580. If drivers can give them another 5-10% across the board, Nvidia will have to drop.
 
The thing is that the 6970 costs almost the same as the 570, not the 580. And I think that anyone who really buys a GTX 580 doesn't need price drops to afford it. :LOL:
 
The difference between the 6950 and 6970 is not much. Has anyone tried raising PowerTune by 20% on a 6950 without overclocking, to see what happens (and it's not like OCing it by 80 MHz would be so difficult :LOL:)?
It seems to me that 80 MHz and 2 SIMDs don't make up for the 140 W vs. 190 W TDP, and that they limit the 6950 via the PowerTune TDP setting. In Anand's review, the 6970's 250 W PowerTune limit kicks in in only one game (Metro 2033).
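
To put rough numbers on that (a back-of-the-envelope sketch; the specs are the public launch figures and the power numbers are AMD's quoted typical board power, so treat them accordingly):

```cpp
// Theoretical shader throughput vs. quoted typical board power,
// HD 6950 vs. HD 6970 (launch specs assumed accurate).
#include <cstdio>

int main() {
    // 6950: 22 SIMDs x 64 ALUs = 1408 ALUs @ 800 MHz
    // 6970: 24 SIMDs x 64 ALUs = 1536 ALUs @ 880 MHz
    double gflops_6950 = 1408 * 2 * 0.800;  // 2 FLOPs/ALU/clock (MAD)
    double gflops_6970 = 1536 * 2 * 0.880;
    printf("6950: %.0f GFLOPS, 6970: %.0f GFLOPS, ratio %.2f\n",
           gflops_6950, gflops_6970, gflops_6970 / gflops_6950);  // ~1.20
    printf("power ratio: %.2f\n", 190.0 / 140.0);                 // ~1.36
    return 0;
}
```

So roughly 20% more theoretical throughput against a ~36% gap in quoted power, which fits the suspicion that the 6950's PowerTune ceiling, rather than its units and clocks, is doing much of the separating.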
 
Seems like the loss of the 32nm node really hurt ATI here. Not that the cards are bad for where they're priced, but with the shrink I think we'd really be looking at a repeat of the 58x0 series launch. I'd be interested in knowing just what their feature and performance targets were before they had to shift back to 40nm.

That being said, both companies' products are hardly amazing relative to the previous gen. Though some of the 6900 refinements are certainly interesting. I doubt much more can be done at 40nm for either party, so I guess it's down to the long wait for 28nm now?
 
I've just read Damien's review… either the drivers are really immature or Cayman is a clear failure. I mean you might as well just overclock an HD 5870 and you'd get similar performance and power.
 
Is there any chance of EQAA on the HD 6800 series, or does it require the hardware changes from Cayman?
 
The thing is that the 6970 costs almost the same as the 570, not the 580. And I think that anyone who really buys a GTX 580 doesn't need price drops to afford it. :LOL:

It costs that much because that is the price the performance justifies. Strong competition at the flagship level is always a good thing, not only for price wars but because it forces each company to keep pushing the limits and delivering bigger jumps with each generation.
 
Is there any chance of EQAA on the HD 6800 series, or does it require the hardware changes from Cayman?
While you could probably do all of it via shader programs (very expensive ones), you'd want to keep the ROPs doing that kind of work, so no, EQAA comes with the hardware changes in Cayman's ROPs.
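
For anyone wondering what the ROPs are being asked to do there (a purely conceptual sketch of an EQAA-style resolve, not AMD's actual hardware algorithm; the struct layout and sample counts are invented for illustration): each pixel stores fewer color fragments than coverage samples, and the resolve weights each stored color by how many coverage samples reference it. Cheap in fixed-function hardware, expensive to emulate per pixel in a shader.

```cpp
// Conceptual EQAA-style resolve: 8 coverage samples, 4 stored color fragments.
#include <cstdio>

struct Pixel {
    float color[4][3];  // 4 stored color fragments (RGB)
    int   link[8];      // 8 coverage samples, each linking to a fragment
};

void resolve(const Pixel &p, float out[3]) {
    int count[4] = {0, 0, 0, 0};
    for (int s = 0; s < 8; ++s)
        ++count[p.link[s]];                  // tally coverage per fragment
    for (int c = 0; c < 3; ++c) {
        out[c] = 0.0f;
        for (int f = 0; f < 4; ++f)
            out[c] += p.color[f][c] * count[f] / 8.0f;  // coverage-weighted blend
    }
}

int main() {
    Pixel p = {};
    // Hypothetical edge pixel: 6 of 8 samples hit fragment 0, 2 hit fragment 1.
    for (int s = 0; s < 8; ++s) p.link[s] = (s < 6) ? 0 : 1;
    for (int c = 0; c < 3; ++c) { p.color[0][c] = 1.0f; p.color[1][c] = 0.0f; }
    float out[3];
    resolve(p, out);
    printf("resolved: %.3f %.3f %.3f\n", out[0], out[1], out[2]);  // 0.750 x3
    return 0;
}
```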


On another note, my personal take on Cayman, apart from silly performance numbers:

I think AMD took a necessary architectural sidestep and laid the groundwork for architectural fine-tuning and scaling in generations to come. But they also learned a painful lesson, which I am sure they knew would come. Whereas the last few generations were optimized to the last transistor for performance on top of a less-than-thrilling foundation (R600), Cayman seems to be a much better starting point, albeit they now have to pay the price that Nvidia paid with Fermi in Q3/09-Q2/10. But AMD planned wisely and had very compelling parts to bridge this gap, whereas Nvidia stood there with its pants down for the better part of a year.


It also seems that the doubled geometry output doesn't help too much with tessellation. The bottleneck is still elsewhere. :rolleyes:
http://www.tweaktown.com/reviews/3735/sapphire_radeon_hd_6970_2gb_video_card/index7.html
It shows just the same gains as the 6800 cards.

Mostly drivers, I was told. Here's more:
http://www.pcgameshardware.de/aid,8...klasse-Grafikkarten/Grafikkarte/Test/?page=12
 
Any word on AF quality for Cayman?
Same as Barts (unfortunately).

I think AMD took a necessary architectural sidestep and laid the groundwork for architectural fine-tuning and scaling in generations to come. But they also learned a painful lesson, which I am sure they knew would come. Whereas the last few generations were optimized to the last transistor for performance on top of a less-than-thrilling foundation (R600), Cayman seems to be a much better starting point, albeit they now have to pay the price that Nvidia paid with Fermi in Q3/09-Q2/10. But AMD planned wisely and had very compelling parts to bridge this gap, whereas Nvidia stood there with its pants down for the better part of a year.
Agreed. They made many architectural changes that don't really pay off yet, but had to be done sooner or later.

For example, in my opinion the switch to VLIW4 was in preparation for doubling the ALU count per SIMD, but 40nm didn't allow for that yet (at least not without reaching GF100 levels of die area and power consumption).

I fully expect Cayman's 28nm successor to feature 128 ALUs per SIMD and more SIMDs as well. 32*128=4096 sounds nice to me.
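
Just to spell out the arithmetic behind that guess (Cayman's configuration is public; the 28nm figures are, of course, pure speculation):

```cpp
// Cayman today vs. the speculated 28nm successor.
#include <cstdio>

int main() {
    // Cayman: 24 SIMDs x 16 VLIW4 units x 4 ALUs = 1536 ALUs
    printf("Cayman:     %d ALUs\n", 24 * 16 * 4);
    // Speculated: 32 SIMDs x 128 ALUs (double-width SIMDs) = 4096 ALUs
    printf("Speculated: %d ALUs\n", 32 * 128);
    return 0;
}
```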
 
Same as Barts (unfortunately).

Other than the larger-than-it-should-be blur area, is it, in the end, unfortunate? If the GeForce screenshots (in the 5800 AF Broken thread) are correct, it seems Nvidia is just blurring the textures in the certain areas prone to shimmering.
Of course, choice would be nice and all, but I'd rather take shimmering in some games than blurred textures.
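
The trade-off being argued over can be sketched abstractly (a conceptual illustration of LOD bias, not either vendor's actual filtering hardware; the numbers are made up): pushing mip selection sharper under-samples the texture and shimmers in motion, pushing it softer over-filters and blurs.

```cpp
// Blur vs. shimmer as a LOD-bias trade-off (conceptual only).
#include <cmath>
#include <cstdio>

// du, dv: texel-space footprint of one screen pixel along each axis.
float mipLevel(float du, float dv, float lodBias) {
    float footprint = std::fmax(du, dv);        // isotropic approximation
    float lod = std::log2(footprint) + lodBias; // base LOD plus bias
    return std::fmax(lod, 0.0f);                // clamp to the base mip
}

int main() {
    float du = 3.0f, dv = 1.5f;  // hypothetical anisotropic footprint
    printf("unbiased:  %.2f\n", mipLevel(du, dv,  0.0f)); // ~1.58
    printf("sharpened: %.2f\n", mipLevel(du, dv, -0.5f)); // more detail, risks shimmer
    printf("blurred:   %.2f\n", mipLevel(du, dv, +0.5f)); // calmer, loses detail
    return 0;
}
```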
 
I think AMD took a necessary architectural sidestep and laid the groundwork for architectural fine-tuning and scaling in generations to come. But they also learned a painful lesson, which I am sure they knew would come. Whereas the last few generations were optimized to the last transistor for performance on top of a less-than-thrilling foundation (R600), Cayman seems to be a much better starting point, albeit they now have to pay the price that Nvidia paid with Fermi in Q3/09-Q2/10. But AMD planned wisely and had very compelling parts to bridge this gap, whereas Nvidia stood there with its pants down for the better part of a year.

Yeah, in a way the architecture change has been separated from the process node transition, so things should go smoother for AMD. That is not to say the 32nm cancellation worked out best for them; that may only be true for nV.
 
Any word on AF quality for Cayman?
Unchanged, unless 10.12 brings something we don't know about.

Edit: too slow...


Please Carsten, have your colleagues publish your reviews in English. It's been a while since the last time you guys did that; you used to write reviews and reports in both English and German. What happened?
 