AMD: Southern Islands (7*** series) Speculation/Rumour Thread

I think pricing is a lot more important:

HD5770, October 2009 = price $159
HD6870, October 2010 = price $240
HD7770, February 2012 = price $159

HD5670, January 2010 = price $99
HD6670, April 2011 = price $100

Why? Pricing is variable. People want AMD to make money and be successful, yet they complain when they do... :rolleyes:
 
I think pricing is a lot more important:

HD5770, October 2009 = price $159
HD6870, October 2010 = price $240
HD7770, February 2012 = price $159

HD5670, January 2010 = price $99
HD6670, April 2011 = price $100

Yes and...

4870 = 299 USD.
5870 = 369 USD.
6970 = 369 USD.
7970 = 549 USD.

4670 = 79 USD.
5670 = 99 USD.
6770 = 120 USD.
7770 = 159 USD.

Prices went up across the board. The 5xxx cards were already being criticized for their high prices relative to the 4xxx cards being phased out, so there wasn't much room to move prices without earning a lot of internet ire. I don't like it, but graphics card prices, while (IMO) too high this generation, are probably about where they would have been anyway if not for the 4xxx <-> GeForce 2xx price war.

The progression for the midrange lineup is actually more linear than that of the enthusiast lineup.
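
To put a rough number on "more linear", a quick sketch (mine, using only the launch prices listed above; purely illustrative):

Code:
# Step-to-step launch-price multipliers for each lineup. A "more
# linear" progression means multipliers that stay roughly constant.
enthusiast = [299, 369, 369, 549]   # 4870 -> 5870 -> 6970 -> 7970
midrange   = [ 79,  99, 120, 159]   # 4670 -> 5670 -> 6770 -> 7770

def multipliers(prices):
    return [round(new / old, 2) for old, new in zip(prices, prices[1:])]

print("enthusiast:", multipliers(enthusiast))  # [1.23, 1.0, 1.49]
print("midrange:  ", multipliers(midrange))    # [1.25, 1.21, 1.32]

The midrange steps all sit in the 1.2-1.3 band, while the enthusiast steps swing from 1.0 to 1.49.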

What's your point?

Regards,
SB
 
Back in 2008, they had 4 chips: RV710, RV730, RV740 and RV770, with just 2 of them serving the high-end market. In 2009, they had Cypress and Juniper. In 2010, they had Cayman, Barts and Juniper/Turks. Now they have Tahiti, Pitcairn and Cape Verde.
That's 3 chips instead of 2 covering the high end, with big differences in die size and power consumption, completely different from the RV770 days.
The new naming scheme makes perfect sense... and so does the pricing. AMD kept prices low for too long. They are just doing their job: earning money while there is no competitor.
 
I wonder in what other consumer realm you'd see a company offering less and asking for more, and still have consumers on forums thinking it's great.
I'm sorry that you don't see the reality of the situation.

You're comparing a top-end model from "last year" at its current depreciated price, and then lambasting a mid-range model released today because it doesn't stack up.

The reality, again, is that this is the same across every non-trivial electronics realm you can bring up. Televisions, audio equipment, cameras (both still and motion), you name it. The new product has new features that the old products do not; if you do not need those new features, then the new product isn't for you. Tada! The problem solves itself.
 
I'm sorry that you don't see the reality of the situation.

You're comparing a top-end model from "last year" at its current depreciated price, and then lambasting a mid-range model released today because it doesn't stack up.

The reality, again, is that this is the same across every non-trivial electronics realm you can bring up. Televisions, audio equipment, cameras (both still and motion), you name it. The new product has new features that the old products do not; if you do not need those new features, then the new product isn't for you. Tada! The problem solves itself.

Which new features does it provide over a 6850?

We need competition. Without it, we would still be stuck in the 3dfx days... Hurry up, Nvidia!
 
Is this the closest we're going to get in terms of a die-shot?

[attached die-shot image: wlVXh.jpg]


We could double-confirm the CU count of Pitcairn from this.

From the look of it, I'd guess the Pitcairn image is still a placeholder (RV770?) rather than an actual die shot.
 
They are just doing their job: to earn money, while there is no competitor.
In contrast to the HD 7900 series, the HD 7700 series has more than enough competition, even from older AMD products.

And I don't know if it's good for customer relations if you have to cut prices by ~30% after just a few months. ;)
 
Which new features does it provide over a 6850?
And again, you're not comparing the right things. The depreciated price of an old card does not make it equivalent to the MSRP of a brand-new card; I do not understand why this is difficult to grasp. Do you need a better example? Last year's top-end TV at its current depreciated price is not equivalent to today's brand-new mid-range TV at its launch MSRP.

At best, you could compare the 77xx series (due to price equivalence at release) to the 67xx series, which is effectively identical to the 57xx series. In that vein, I could rattle off the entire GCN marketing slide as "new features" that aren't on the old model: more simultaneous display outputs, far more compute power, far less power consumption, improved audio playback and multi-endpoint audio, blah-de-blah; the list is a page long in 12-point font.

We need competition. Without it, we would still be stuck in the 3dfx days... Hurry up, Nvidia!
NVIDIA isn't some sort of saint in this example, either. For every point in the timeline where you can attempt to single out ATI / AMD for making some money from the lack of competition, you can make at least that many similar points for NV doing the same -- and often at an even higher price point. Simply because "they can."

I'm not buying any of the new cards until NV comes out with their stuff, FWIW.
 
Yes and...

4870 = 299 USD.
5870 = 369 USD.
6970 = 369 USD.
7970 = 549 USD.

4670 = 79 USD.
5670 = 99 USD.
6770 = 120 USD.
7770 = 159 USD.

Prices went up across the board. The 5xxx cards were already being criticized for their high prices relative to the 4xxx cards being phased out, so there wasn't much room to move prices without earning a lot of internet ire. I don't like it, but graphics card prices, while (IMO) too high this generation, are probably about where they would have been anyway if not for the 4xxx <-> GeForce 2xx price war.

The progression for the midrange lineup is actually more linear than that of the enthusiast lineup.

What's your point?

Regards,
SB


My point is quite simple: it's easier to define any hierarchy by price.
AMD's strategy has clearly changed many times. Compared to the 5000 series, they added a new segment between the x700 and the x800 and renamed the x800 to x900, but I can't see the 6870 as a successor to the HD5770, whatever AMD may claim...


Although I see your point that prices in general are higher, and that the card equivalent to more or less half the high-end part is not the x700 anymore...

At the end of the day, I'm simply comparing the $160 card of today against the $160 card from October 2009:

4xxx    5xxx    6xxx    7xxx
                        7970
4870 -> 5870 -> 6970
                6870
        5770 ---------> 7770
4770 ---------> 6770
        5670 -> 6670

also works.
 
More simultaneous display outputs, far more compute power, far less power consumption, improved audio playback and multi-endpoint audio, blah-de-blah the list is a page long in 12-point font.

Because all 5770 users own 3 monitors, run compute software all day long, and watch full-HD videos 5 inches from the monitor to see the latest invisible video improvements.
 
R/W caches for compute, more LDS memory, larger L1 caches, much-improved tessellation performance, and a new scalar core that increases efficiency, to give some examples.

...manifested, for example, in 46% more performance in LuxMark 2.0's Room scene. :)
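
For anyone wondering what "more LDS memory" means to a developer: here's a toy sketch of my own (Python + pyopencl; assumes you have an OpenCL runtime installed, and nothing here is AMD-official). It's a work-group sum that stages data in __local memory, which is the OpenCL-level view of the LDS:

Code:
import numpy as np
import pyopencl as cl

# Work-group reduction that stages data in __local memory (backed by
# the LDS on AMD GPUs). Group size is fixed at 256 to match tile[].
KERNEL = """
__kernel void partial_sum(__global const float *x, __global float *out) {
    __local float tile[256];            /* lives in LDS */
    int lid = get_local_id(0);
    tile[lid] = x[get_global_id(0)];
    barrier(CLK_LOCAL_MEM_FENCE);       /* whole group sees the tile */
    for (int s = get_local_size(0) / 2; s > 0; s >>= 1) {
        if (lid < s) tile[lid] += tile[lid + s];
        barrier(CLK_LOCAL_MEM_FENCE);
    }
    if (lid == 0) out[get_group_id(0)] = tile[0];
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
prog = cl.Program(ctx, KERNEL).build()

n, group = 1 << 20, 256
x = np.random.rand(n).astype(np.float32)
mf = cl.mem_flags
x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
partial = np.empty(n // group, dtype=np.float32)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, partial.nbytes)

prog.partial_sum(queue, (n,), (group,), x_buf, out_buf)
cl.enqueue_copy(queue, partial, out_buf)
print(np.isclose(partial.sum(), x.sum(), rtol=1e-4))  # True

The older VLIW parts run this kind of kernel too, of course; the GCN argument is simply that there is more of this local memory per CU, and proper R/W caches around it, so more ambitious kernels stop being a contortion exercise.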
 
Because all 5770 users own 3 monitors, run compute software all day long, and watch full-HD videos 5 inches from the monitor to see the latest invisible video improvements.

The question was asked, and I answered. Is there any point of my answer that you can refute as "wrong"? If not, then I fail to see your point. Let me reiterate:
The new product has new features that the old products do not; if you do not need those new features, then the new product isn't for you. Tada! The problem solves itself.
 
...And it seems they finally have double-precision support in the mainstream :)
Anandtech said:
Even FP64 support is accounted for, however similar to how NVIDIA handles it on lower-end parts it’s a performance-limited implementation for compatibility and software development purposes, with FP64 performance limited to 1/16th FP32 performance.

All the HSA blah-blah would also seem a bit hollow without the same basic compute functionality across the board. I just wonder what they save by making it that much slower than Tahiti's (or how they cut it down).
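
Back-of-the-envelope, assuming the commonly quoted specs (640 SPs at 1 GHz for Cape Verde XT; 2048 SPs at 925 MHz and quarter-rate FP64 for Tahiti XT; these are my assumptions, not anything official), the 1/16 rate works out like this:

Code:
# Theoretical peak = shaders x 2 FLOPs per clock (FMA) x clock,
# scaled by the FP64 rate. Specs assumed as commonly quoted.
def peak_gflops(shaders, clock_mhz, fp64_rate):
    fp32 = shaders * 2 * clock_mhz / 1000   # GFLOPS
    return fp32, fp32 * fp64_rate

print("Cape Verde XT:", peak_gflops(640, 1000, 1 / 16))  # (1280.0, 80.0)
print("Tahiti XT:    ", peak_gflops(2048, 925, 1 / 4))   # (3788.8, 947.2)

So the mainstream part would still offer real, usable FP64 for development and compatibility, just an order of magnitude below Tahiti's throughput, which matches AnandTech's description.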

So could this GPU in reality have 12 CUs, with 2 disabled for now?!

The (miniature) die shot looks very much like 10 of something.
 
I'm sorry that you don't see the reality of the situation.

You're comparing a top-end model from "last year" at its current depreciated price, and then lambasting a mid-range model released today because it doesn't stack up.

The reality, again, is that this is the same across every non-trivial electronics realm you can bring up. Televisions, audio equipment, cameras (both still and motion), you name it. The new product has new features that the old products do not; if you do not need those new features, then the new product isn't for you. Tada! The problem solves itself.

The reality of the situation is that the old card looks like a better buy for the majority of consumers, who should care about gaming performance first and foremost. The new features aren't anything too special, although I do like the ZeroCore tech.

The main feature of a video card is its performance, and in those other examples you typically don't get better features and a cheaper price by buying the old model; here you do. The reality is that these cards at these prices aren't a good buy for the consumer, and the 77xx parts have plenty of competition from both vendors currently on the shelves. IMO, the only way these prices make sense is if AMD has plenty of old cards in the channel and wants to encourage people to buy those, and/or supply of the new cards is low.
 
It seems the whole "Fusion" thing is taking its toll.

They won't release low-end GCN parts because, yeah, that would completely kill the A-series APUs for the next 2 years :rolleyes:. A 30-40W GCN GPU that's much faster than the Fusion part, with a 70-80 mm² die and a next-gen architecture, would do exactly that.
The pricing and performance of all the cards probably reflect this too. If they pushed the GPUs like in the 4800 days, they could say goodbye to Fusion in the next cycle.

The big question remains what Nvidia will think about this :LOL:. It seems they will kill AMD's Fusion even before it gets into some meaningful form.
 
With a 1 GHz clock it's on par with a 6850. That's not much.
Except that the 6850 has 32 ROPs and a larger memory bus, so you aren't looking at just tessellation differences.
GZ007 said:
And it's even less if you think about the GTX 460 :rolleyes:.
Shifting the goalposts now? Fine, here's mine: the HD7770 uses ~45 watts less than the GTX 460 (~55 watts against the 1GB GTX 460), based on average power consumption from http://www.techpowerup.com/reviews/HIS/HD_7750_7770_CrossFire/21.html. It's not even a contest.
 
The main feature of a video card is its performance, and in those other examples you typically don't get better features and a cheaper price by buying the old model; here you do.

Really? So these are no better than the 5770 (and its rebranded sibling, the 6770)? At all? Because that's what they're replacing -- NOT the 6850, or 6870, or 6950, or 6970. Until you realize this, you're just spouting nonsense and comparing entirely unrelated product lines.

Last year's TOP END television may still do things better than today's brand new midrange television -- and likely do it at a lower cost so long as you don't mind it being already a year old. Does that make the new midrange device less competent or marketable? Nope.
 
Increased compute performance doesn't benefit current games (except for BF3 and a few others), but as the DX11 API gets used more, we will see more games using DirectCompute for their lighting (deferred lighting) and post-processing (screen-space ambient occlusion, antialiasing, etc.). Later we will surely see many more algorithms moving to the GPU, so the performance gap will become even wider in the future. Mid-range GCN-based hardware is a more future-proof choice than a 6850. It might be on par when running existing DX9/DX10 games, but in future games (and with future drivers) it will outperform the old architecture by a wide margin.

The 46% faster LuxMark score is one indication that a future lighting pipeline runs much faster on hardware that is designed for compute workloads.
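
To make the "post-processing on compute" point concrete: below is a toy stand-in of my own (NumPy on the CPU, purely illustrative; real SSAO implementations sample in 3D and differ in many ways) for the kind of per-pixel neighbourhood work an SSAO pass does over a depth buffer. On the GPU, each pixel becomes one compute thread.

Code:
import numpy as np

# Toy screen-space ambient-occlusion pass over a depth buffer.
# For each pixel, count how many neighbours are closer to the
# camera (i.e. occlude it), and darken accordingly.
def toy_ssao(depth, radius=2, bias=0.02):
    occlusion = np.zeros_like(depth)
    samples = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            samples += 1
            # A shifted copy of the buffer gives every pixel one
            # neighbour sample at offset (dx, dy) at once.
            neighbour = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
            occlusion += (neighbour < depth - bias).astype(depth.dtype)
    return 1.0 - occlusion / samples   # 1 = fully lit, 0 = fully occluded

depth = np.random.rand(4, 6).astype(np.float32)  # fake depth buffer
print(toy_ssao(depth))

Every pixel does the same independent arithmetic over its neighbourhood, which is exactly why this sort of pass maps so well onto DirectCompute, and why raw compute throughput starts to matter for image quality rather than just for GPGPU benchmarks.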
 