AMD: Southern Islands (7*** series) Speculation/ Rumour Thread

Where does this factoid come from?

From real-life impressions. Like this one:

[attached image: 68bw9k.jpg]


I bet this is not an exception but the norm for most PC stores. Sorry for the rotation; I just did it to save some space.

That is channel "add-in-board", not the overall discrete market.

What are the numbers for the overall discrete market?
 
From real-life impressions. Like this one:



I bet this is not an exception but the norm for most PC stores. Sorry for the rotation; I just did it to save some space.

Number of products for sale doesn't necessarily equate to sales. If it did, AMD would need to ship some 7750s in 64-bit and/or DDR3 configurations and buy some new stickers for their reference boards.
 
Thanks. Does the article explain why MilkyWay is so much faster on AMD hardware, and so slow on Kepler vs. Fermi? Does it use DP?
It uses double precision math.
Maybe not optimised at all.
It is not bad. I wrote the first GPU version for AMD in IL (for the HD3800/HD4800/HD5800 series) and arrived not that far off the theoretical maximum for the instruction mix in there (it's a lot of multiply-adds with some adds and muls you can't fuse and very few transcendentals, all in a really embarrassingly parallel and completely compute-bound algorithm with a tiny management overhead for loops and memory accesses [which are well ordered and not very abundant] and no thread divergence, so perfect even for the older AMD GPUs).
The OpenCL version the project guys wrote later is only about 30% slower than the old hand-tuned IL version on an HD6970 (it doesn't run on GCN and nobody wants to touch CAL/IL anymore to fix it). AMD GPUs have always dominated this (maybe save for Tesla cards, but nobody bothered to run it on such a GPU afaik).
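To give a rough feel for the kind of instruction mix described above, here is a minimal CPU-side sketch in plain C (illustrative only; the function, constants and loop are made up for this post and are not the actual MilkyWay@Home code). Each call of evaluate_point is independent, so on a GPU every work item would simply run one of them:

Code:
#include <math.h>
#include <stdio.h>

/* Illustrative sketch only, not the real MilkyWay@Home kernel:
 * a compute-bound, double-precision evaluation dominated by fused
 * multiply-adds, with one rare transcendental at the end. */
static double evaluate_point(double x, const double *coeff, int n)
{
    double acc = 0.0;
    for (int i = 0; i < n; ++i)
        acc = fma(acc, x, coeff[i]);   /* multiply-add, fused */
    return exp(-acc * acc);            /* the occasional transcendental */
}

int main(void)
{
    const double coeff[8] = { 1.0, 0.5, 0.25, 0.125,
                              0.0625, 0.03125, 0.015625, 0.0078125 };
    double sum = 0.0;
    /* embarrassingly parallel: every iteration is independent,
     * no divergence, hardly any memory traffic */
    for (int p = 0; p < 1000000; ++p)
        sum += evaluate_point(p * 1e-6, coeff, 8);
    printf("%f\n", sum);
    return 0;
}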
 

Makes sense, thanks!
 
How many times should it be repeated that NV has the upper hand in the discrete market with a 60% or higher market share?
How many times should it be repeated that AMD NEEDS lower prices in order to sustain interest in their products? When they are overpriced, significantly fewer people will tend to go for them.
How many people/gamers actually care about that compute performance?
What exactly is the compute performance difference between Pitcairn (costing around $300-350) and this same Tahiti? Is it even worth paying so much more for a marginal improvement in most compute scenarios?

And last but not least: IF YOU CARE so much about the company, you are always free to donate. But don't spread wrong ideas, and don't protect and justify the interests of this rich corporation. Please! :rolleyes:

Yet AMD actually continues to gain overall GPU share, well ahead of Nvidia now, due to Fusion-type products...

How many times should it be repeated that AMD NEEDS lower prices in order to sustain interest in their products?

Once again, neither AMD nor Nvidia is typically priced outside of exactly where their performance falls. There appears to be zero brand premium for either. If there were, the GTX 680 should be $750... Maybe I could just as well say Nvidia "had" to price it lower than the slower 7970 to get anybody to buy it? Makes just as much sense...

Even the 7970's supposed drop to $479 is not in line with its performance; it's almost the same price as the 680 despite being significantly slower.

I do agree there are definitely more Nvidia fans on message boards though; that's hard to argue against.
 
There is no game on the market that could not run on a 4870, and most PC gamers are not running above 1680x1050; even Diablo 3 will run fine on a 4870. Why do people pour thousands into a video card when they don't need to? I have always said PC gaming is like buying a 1080p HDTV and using a VCR as its video source. Very few game engines push the hardware; most PC games are optimized for consoles, and that will not change anytime soon, e.g. RAGE.

Steam says: http://store.steampowered.com/hwsurvey/

Nothing has changed; 60% of the installed user base is still DX8-DX10. I laugh at these debates today arguing about a few FPS between two products... oh, my video card gets 5 fps more at 5760x1080, big whoop, that's like 0.9% of gaming card sales.
 
If you're interested, here are some GPGPU benches on the GTX 580/680 and HD 6970/7970. It's in Finnish but the graphs are quite universal, as long as you remember "suurempi on parempi" means a bigger bar is better, while "pienempi on parempi" means a smaller bar is better :)

edit:
Oh yeah, the link :D http://muropaketti.com/artikkelit/naytonohjaimet/gpgpu-suorituskyky-amd-vs-nvidia,2
I do want to know the exact reason for this. I think the 680 is not that bad in its theoretical performance, and the 680 made some improvements to the scheduler according to the Kepler post on this forum. What change in the 680 causes this result? Cache bandwidth, I might guess? :rolleyes:
 
There is no game on the market that could not run on a 4870, and most PC gamers are not running above 1680x1050; even Diablo 3 will run fine on a 4870. Why do people pour thousands into a video card when they don't need to? I have always said PC gaming is like buying a 1080p HDTV and using a VCR as its video source. Very few game engines push the hardware; most PC games are optimized for consoles, and that will not change anytime soon, e.g. RAGE.

Steam says: http://store.steampowered.com/hwsurvey/

Nothing has changed; 60% of the installed user base is still DX8-DX10. I laugh at these debates today arguing about a few FPS between two products... oh, my video card gets 5 fps more at 5760x1080, big whoop, that's like 0.9% of gaming card sales.

Maybe this is the answer ;)

http://techreport.com/articles.x/21516/1
 
I do want to know the exact reason for this. I think the 680 is not that bad in its theoretical performance, and the 680 made some improvements to the scheduler according to the Kepler post on this forum. What change in the 680 causes this result? Cache bandwidth, I might guess? :rolleyes:

Improvement in the scheduler? Moving from a hardware to a software scheduler isn't usually considered an "improvement", is it?
 
Nothing has changed; 60% of the installed user base is still DX8-DX10. I laugh at these debates today arguing about a few FPS between two products... oh, my video card gets 5 fps more at 5760x1080, big whoop, that's like 0.9% of gaming card sales.
We have no idea how long hardware X stays in the survey once it gets in there, so the numbers don't really tell us much.
 
There is no game on the market that could not run on a 4870, and most PC gamers are not running above 1680x1050; even Diablo 3 will run fine on a 4870. Why do people pour thousands into a video card when they don't need to? I have always said PC gaming is like buying a 1080p HDTV and using a VCR as its video source. Very few game engines push the hardware; most PC games are optimized for consoles, and that will not change anytime soon, e.g. RAGE.

Steam says: http://store.steampowered.com/hwsurvey/

Nothing has changed; 60% of the installed user base is still DX8-DX10. I laugh at these debates today arguing about a few FPS between two products... oh, my video card gets 5 fps more at 5760x1080, big whoop, that's like 0.9% of gaming card sales.

Arguing about 5 fps on a 100 fps scale is really silly. Jousting about which beast is better, in regard to the 7970 and the 680, is also silly. People are grasping at tiny details when they try to prove their own opinion is the correct one.

Still, there is a very real reason why people shell out decent amounts of money for their hardware, this being high-quality gaming. Do you know how many recent games the 4870 could run even at 1080p/60 fps with high settings? Very few.

Have you tried Witcher 2, Alan Wake, Crysis 2, Battlefield 3, etc. lately, to see what they require in terms of processing power to hit a stable vsynced 60 fps? I've found that I would be gaming in a jerk fest if I didn't have a GTX 570 SLI solution. Not to mention people shooting for stereoscopic gaming.

Just explained my own reasons here. Of course, aside from the offered usability, it's always fun to see Heaven 3 run faster on a GTX 680 than on two 570s. We are not called enthusiasts for no reason.
 
http://hexus.net/tech/news/graphics/37969-amd-feels-geforce-cuts-radeon-hd-7900-series-pricing/

I guess the price drop is official now.

Starting today, AMD is slashing the cost of the range-topping Radeon HD 7970 and HD 7950 cards, and sweetening the deal by throwing in a trio of free games.

The promotion, expected to run for a limited time, is being dubbed 'Three for Free' and will offer buyers a chance to download DiRT Showdown (released in May), Nexuiz (released in May), and Deus Ex: Human Revolution (available now) for free.

A couple of games might prove enticing, but it's the price cuts that have been eagerly anticipated. Effective immediately, Radeon HD 7970 pricing has been slashed from a US MSRP of $549 to $470, while the Radeon HD 7950 falls from $449 to $399. Further down the ladder, it seems the Radeon HD 7770 will also see a price reduction, moving from $159 to $139, though 7700-series cards aren't participating in the Three for Free promotion.
 
Improvement in the scheduler? Moving from a hardware to a software scheduler isn't usually considered an "improvement", is it?
It's an improvement if you consider the reduced effort and power consumption; those drop off more sharply than the performance does. :smile:
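As a rough, hypothetical illustration of what is at stake (plain C, not GPU code, just to show the idea): with a serial dependency chain neither a hardware scoreboard nor a compile-time scheduler can hide the latency, while with independent chains the compiler can interleave the work up front, so the hardware no longer needs to track dependencies per instruction at run time.

Code:
#include <stdio.h>

/* (a) every step depends on the previous result; no scheduler,
 *     hardware or software, can overlap these operations */
static double chain_dependent(const double *x, int n)
{
    double acc = 0.0;
    for (int i = 0; i < n; ++i)
        acc = acc * x[i] + 1.0;
    return acc;
}

/* (b) four independent accumulators; the compiler can interleave
 *     them statically, which is the kind of work a software
 *     scheduler takes over from the hardware */
static double chain_independent(const double *x, int n)
{
    double a0 = 0.0, a1 = 0.0, a2 = 0.0, a3 = 0.0;
    for (int i = 0; i + 3 < n; i += 4) {
        a0 = a0 * x[i]     + 1.0;
        a1 = a1 * x[i + 1] + 1.0;
        a2 = a2 * x[i + 2] + 1.0;
        a3 = a3 * x[i + 3] + 1.0;
    }
    return a0 + a1 + a2 + a3;
}

int main(void)
{
    double x[16];
    for (int i = 0; i < 16; ++i)
        x[i] = 0.5;
    printf("%f %f\n", chain_dependent(x, 16), chain_independent(x, 16));
    return 0;
}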
 
Have you tried Witcher 2, Alan Wake, Crysis 2, Battlefield 3, etc. lately, to see what they require in terms of processing power to hit a stable vsynced 60 fps? I've found that I would be gaming in a jerk fest if I didn't have a GTX 570 SLI solution. Not to mention people shooting for stereoscopic gaming.
That's because you have all the eye candy turned up (I'm not talking about AA/AF), much of which makes a very small change in image quality. A 4870 is way more powerful than the Xbox or PS3, for example, and the latter two provide 90% of the non-input-related gaming experience that you get on a PC.

I don't think Doomtrooper is being that unreasonable from that perspective. However, everything is subjective, and you may really enjoy the incremental details that a 570 SLI provides.
 
Not until Nvidia puts some pressure on them in the ~$350 market.

Yes, I second this. Without knowing whether Nvidia is ready to launch a GTX 660 soon, there's no need to drop the price now, before knowing how much it costs and how it performs. (Basically this is not only a question of the 670, but of the entire lineup.)
 