AMD: Southern Islands (7*** series) Speculation/ Rumour Thread

Honestly Dave... I would choose the 650TiB if I was looking for a <$199 GPU... no offence... the 650TiB hits the sweet spot better than the 7790 IMHO. Bonaire should have come with 1.5GB and a 192-bit bus... or it should go cheaper... it is not like the 7790 overclocks better than the 650TiB... last I heard both of them can hit 1.2GHz on the core.
Well personally I'd look at an HD 7850 instead - not that I'm looking :).
But really, the 650 Ti Boost should compete with the HD 7850 and not the 7790. Those are very similarly sized chips, with most likely very similar costs for the cards overall, and both are slightly cut down from their full versions.
The difference is that the 650 Ti Boost is only cut down 20% on SMXs, whereas the 7850 is cut down both 20% on CUs and 14% on clocks. And it shows in both overclocking potential and perf/power.
Oh and btw, the 7850 is <$199 already anyway. Though I think it is indeed somewhat surprising that Nvidia undercut the HD 7850's price (by $10). But we'll see how that turns out. And it is obvious the 7790 has some potential for price cuts.
 
It looks like the card has two 8-pin power connectors.

I don't know, but until a few days ago, Cape Verde was the only 7700-series GPU. It's just an idea, but AMD is up to some strange things with its lineup.
Maybe Hainan will show up as a 7930 or a 7950 GE?

I thought about the possibility of Hainan in this dual-GPU card, but I'm not so sure. While it would neatly explain the codename change, the presence of the codename Aruba for a future dual-GPU part would mean Aruba is either 2x Curacao or 2x some chip likely further in the future. If it's the former, then since Aruba is probably going to be larger than Tahiti, with at least similar power consumption, there would have been little reason for a dual Tahiti not to be possible, at least within the TDP of Aruba. What's stumping me is how to explain the rumored clock speeds of ≥1050 MHz. Either AMD pulled some magic with Tahiti, the rumors are wrong, a significant number of SPs are disabled, or Malta isn't a dual Tahiti.
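To put rough numbers on the "magic" option (a back-of-the-envelope sketch where every input is my own assumption, not a confirmed spec), dynamic power scales roughly with frequency times voltage squared:

```python
# Back-of-the-envelope board power for a hypothetical dual Tahiti at 1050 MHz.
# All numbers below are assumptions for illustration, not official specs.
base_clock_mhz = 925     # HD 7970 reference core clock
base_power_w = 250       # commonly cited HD 7970 board power
target_clock_mhz = 1050  # rumored Malta clock

# Dynamic power ~ f * V^2; assume ~5% more voltage to hold the higher clock.
freq_ratio = target_clock_mhz / base_clock_mhz
volt_ratio = 1.05
per_gpu_w = base_power_w * freq_ratio * volt_ratio ** 2

print(f"~{per_gpu_w:.0f} W per GPU, ~{2 * per_gpu_w:.0f} W for two")
# -> ~313 W per GPU, ~626 W for two, far beyond the 375 W that two 8-pin
#    connectors plus the slot officially supply.
```

On those assumptions, a full-clock dual Tahiti only fits if binning and voltage do a lot of heavy lifting, which is why the disabled-SP or wrong-rumor options look more plausible to me.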
 
Honestly Dave... I would choose the 650TiB if I was looking for a <$199 GPU... no offence... the 650TiB hits the sweet spot better than the 7790 IMHO. Bonaire should have come with 1.5GB and a 192-bit bus... or it should go cheaper... it is not like the 7790 overclocks better than the 650TiB... last I heard both of them can hit 1.2GHz on the core.

I think the 7790 is an interesting product... at $110-129.
That's the price AMD is, I suspect, targeting for the Haswell crowd... :D

Well, if someone was planning on buying Bioshock Infinite anyway, then this card would for all intents and purposes be only $89.99. :D

I was tempted to pick one up just because of that, as I'm planning on buying Bioshock Infinite, which means I'd be spending less than 100 USD for the card itself. But I really would prefer a passively cooled card for my HTPC, and the darn 7750 is still over 100 USD for a passively cooled version.

Of course, if they aren't interested in that game or have it already, that doesn't do much.

Regards,
SB
 
Honestly Dave... I would choose the 650TiB if I was looking for a <$199 GPU... no offence... the 650TiB hits the sweet spot better than the 7790 IMHO. Bonaire should have come with 1.5GB and a 192-bit bus... or it should go cheaper... it is not like the 7790 overclocks better than the 650TiB... last I heard both of them can hit 1.2GHz on the core.

I think the 7790 is an interesting product... at $110-129.
That's the price AMD is, I suspect, targeting for the Haswell crowd... :D

The 7790 isn't a <$199 GPU, it's a <$149 GPU.
Isn't the 7850 <$199 too?
 
My two cents on the new AMD chips (Mars-384 and Bonaire): they're primarily mobile-oriented designs, no?

Currently the 8500M-8700M are served by Mars and the 8830M-8870M by Cape Verde, with the rest showing as "?"; Bonaire slots in really nicely above the 45W zone for the 8900Ms, taking the 8950M and, I reckon, even the 8970M. Spec-wise it's a downgrade, but apparently those 7970Ms could never really hit their rated performance due to throttling...

We'd probably see another Mobility Pitcairn as the flagship product again - this time fixed and all - but Bonaire seems to serve its greater purpose in high-end mobile (particularly because of its clockspeed/voltage granularity too).
 
A 7850 with boost would be more than enough for AMD in this space again.

I don't think they really need it. There are incredible deals already, if only people bothered to look.

http://www.newegg.com/Product/Product.aspx?Item=N82E16814150641
XFX 7850 2GB @ $179.99 - $20 rebate = $159.99 + Free delivery + Tomb Raider & Bioshock: Infinite

Further sweeten the deal by selling the codes if you own the games already.

There was also a similar deal for a Powercolor Tahiti LE 7870 @ $209 + Free delivery + Tomb Raider & Bioshock but the card is sold out now. There are similar deals on other 7870s though.
 
I agree they don't need to do it, but it must surely be one of the easier options if they did decide to put a bit more pressure back onto Nvidia. It's pretty unusual for AMD to be slower and more expensive (even though the gaming bundle still gives a pretty large value lead to the AMD cards).

I would otherwise anticipate a price drop on the 7870, which frankly has long been overdue anyway. That card has just never looked good value to me.
 
I am just wondering, is there any way to explain the beyond-100-percent scaling with CrossFire we sometimes see in reviews? Runt frames being counted toward fps would actually explain this, and it reflects badly on AMD. There are a couple of reviews coming out of the woodwork from TechReport and PCPer that highlight the flaws in CrossFire technology. Anandtech is also starting an article.

So Malta is unreleasable until they fix this frame-time/runt issue.

It's almost certain any CrossFire product will undergo this type of testing in the future... If Malta were released tomorrow and Anandtech did a review, for example, it would attract more bad publicity by highlighting the flaws of CrossFire. The flaw being that in some games (not because of CrossFire scaling directly), CrossFire isn't particularly faster than a single card in terms of gameplay experience.
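To make the runt issue concrete, here's a minimal sketch of the kind of filtering those reviews describe (my own illustration with an assumed time cutoff; the real tooling classifies runts by how few scanlines a frame occupies on screen):

```python
# Toy frame-time trace (ms) with the alternating long/short pattern reported
# in the CrossFire articles; the 1 ms runt cutoff is an assumption for this demo.
frame_times_ms = [15.0, 0.3, 14.8, 0.4, 15.2, 0.2, 30.1, 15.0]
RUNT_CUTOFF_MS = 1.0

runts = sum(1 for t in frame_times_ms if t < RUNT_CUTOFF_MS)
total_s = sum(frame_times_ms) / 1000.0

print(f"reported FPS (runts counted): {len(frame_times_ms) / total_s:.0f}")    # ~88
print(f"effective FPS (runts dropped): {(len(frame_times_ms) - runts) / total_s:.0f}")  # ~55
```

Same trace, same work, but nearly half the reported frames contribute nothing you can actually see.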
 
The CF results are so disastrously bad that I suspect they simply never bothered to verify that the final generated image made sense.

Which, in a way, is encouraging: it's usually easier to fix dramatic bugs than very subtle ones.

I wouldn't be surprised to see this addressed in an upcoming driver.
 
Yep, I seriously doubt there was anything malicious going on there. AMD was probably just unaware of how habitually bad the CF behavior was.
 
I am genuinely intrigued by the whole frame-time issue on Crossfire and, to be frank, I was always reluctant to accept that there is one. I've been using dual-GPU solutions since the 4870X2 and never looked back. 5850 CFX, 570 SLI and now 7950 CFX. All systems performed admirably. 60fps vsynced gameplay in all my games.

Reading more about the subject, I found out that vsync does indeed help a lot in reducing stuttering or making it imperceptible, and maybe that's why it never annoyed me. I do consider myself very picky as far as game motion is concerned.

I haven't played many games on my 7950 CFX system yet, since I got the cards very recently, but Crysis 3 ran wonderfully. I made a short clip of it, for anyone interested.

http://www.multiupload.nl/7XAMIRVZNI

I did see stuttering in Tomb Raider though, even with vsync enabled. Here's the video of that.

http://www.multiupload.nl/1LZLWBUVCI

Surprisingly, I found that if I enable the MSI Afterburner framerate limiter along with vsync, the stuttering goes away. Here's the result.

http://www.multiupload.nl/GMIH6GT0GE
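For anyone wondering why an external cap helps: a frame limiter simply refuses to start the next frame before a fixed interval has elapsed, which forces roughly even spacing between the two GPUs' frames. A minimal sketch of the idea (not how Afterburner actually implements it):

```python
import time

# Minimal sketch of a frame-rate limiter (not MSI Afterburner's actual code):
# by refusing to start the next frame early, it forces roughly even frame
# spacing, which is why it can mask AFR micro-stutter.
TARGET_FRAME_S = 1.0 / 60.0  # 60 fps cap to match vsync

def render_frame():
    pass  # stand-in for the real rendering work

next_deadline = time.perf_counter()
for _ in range(10):
    render_frame()
    next_deadline += TARGET_FRAME_S
    sleep_for = next_deadline - time.perf_counter()
    if sleep_for > 0:
        time.sleep(sleep_for)  # wait out the rest of the frame interval
```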

So up until now, for current games, I have 100% success with absolutely smooth gameplay on my 7950 CFX. I did some more testing on some older games as well. They too are absolutely pixel-smooth. Even the new Bioshock Infinite, when run with CFX, is still absolutely smooth, although each GPU sits at something like 30% load.

From where I stand, I believe the whole micro-stuttering thing is overblown. It's not that bad.

Also, to the people who say it's better to play on a single card, I can only say go watch Crysis 3 on a single 7950 and then on 7950 CFX, and then talk.
 
psolord is doing vsync'd gameplay.

Frankly, that's how it should be done if you aren't measuring your epenis.

Not for benchmark testing. It gives users a false impression of CrossFire performance.

If the shoe was on the other foot and it was Nvidia doing this, people would be reaching for their pitchforks. TechReport is not particularly Nvidia-friendly (remember, they exposed the Crysis 2 Nvidia fiasco), so whatever TechReport is showing that makes AMD look bad is hardly Nvidia-motivated. The fact is, people likely bought a second card based on this scaling. If reviews had shown the scaling that remains after runt frames are filtered out, only 10 percent in some cases, would people still have bought a second card? If they wouldn't have, and AMD did this on purpose, that's grounds for a class-action lawsuit.

Basically these errors double frame rates without the card doing as much work. E.g., because one frame takes 15 ms to render while another takes 0.3 ms (which should be impossible), the frame rate doubles, since it only takes 15.3 ms to render both, compared to 30 ms if both took 15 ms. Left undetected, this vastly increases the marketability of a CrossFire setup.
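Here's that arithmetic worked through, using the illustrative numbers above:

```python
# Two AFR frames per pair: properly paced vs. one real frame plus a runt.
real_ms, runt_ms = 15.0, 0.3

paced_pair_ms = 2 * real_ms        # both frames take 15 ms: 30 ms per pair
runt_pair_ms = real_ms + runt_ms   # real frame + runt: 15.3 ms per pair

print(f"properly paced: {2000.0 / paced_pair_ms:.0f} fps")  # ~67 fps
print(f"runts counted:  {2000.0 / runt_pair_ms:.0f} fps")   # ~131 fps, nearly double
```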

When I read Anandtech's explanation from AMD, I didn't buy the excuse. If the cost of producing low-latency frames is not a real frame but a runt frame, which is a visual artifact, why bother doing it? It's basically a cheat to produce a higher fps than should be possible (i.e. greater than 100 percent scaling). Do you think a video card company would simply look at fps and not look into these runt artifacts as part of final testing before drivers are released? I think that is naive.

AMD is simply able to get away with it because they are the underdog.
 
The cards are doing all the work. It's just that for runt frames the user doesn't see all of it.
They're still real frames that the system has simulated and submitted, but the final IO and display mechanisms operate on their own timing.

If there were a mechanism for the graphics subsystem to know that X% of a frame has become effectively pointless to the outside world, it could save itself a bit of effort and potentially increase performance even further.
 
Latency from simulation to display is also reduced, which is one of the critical factors as well. The point of increasing the graphics power is to ensure that the pipeline is as drained as possible so that input actions are represented on the screen as soon as possible; the more graphics-bound you are, the more the queue will fill and the higher the latency. Add more graphics capability and the more the queue is drained, the lower the latency between input and screen.
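A toy model of that queue argument (my own simplification, not how the driver actually schedules frames):

```python
# Toy model (assumptions mine): with N frames queued ahead of the newest
# input sample, that input shows up on screen roughly N frame-times later.
def input_to_display_ms(frame_time_ms: float, queued_frames: int) -> float:
    return frame_time_ms * queued_frames

print(input_to_display_ms(33.3, 3))  # GPU-bound at ~30 fps, full queue: ~100 ms
print(input_to_display_ms(16.7, 3))  # double the GPU power, same queue: ~50 ms
```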
 
Mintmaster said:
Frankly, that's how it should be done if you aren't measuring your epenis.
:rolleyes: It depends on the game type and whether one is playing competitively or not, but that's really beside the point...

Dave Baumann said:
The point of increasing the graphics power is to ensure that the pipeline is as drained as possible so that input actions are represented on the screen as soon as possible
The problem is that in the ping-pong scenario presenting itself in some titles, the frame is virtually invisible and thus visually meaningless. Latency isn't really helped either, because you get one frame immediately following the other and then a wait that is just as long as if the frames were optimally spaced. If the stuttering is bad enough, I imagine it is distracting to the point where it is far worse than an additional 8 ms of latency.
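To put invented numbers on that ping-pong pattern:

```python
# Invented presentation timestamps (ms): each runt lands almost on top of its
# predecessor, then a long gap follows, so it adds no perceived smoothness.
present_ms = [0.0, 16.0, 16.3, 32.0, 32.3, 48.0]

gaps = [round(b - a, 1) for a, b in zip(present_ms, present_ms[1:])]
print(gaps)  # [16.0, 0.3, 15.7, 0.3, 15.7] -- perceived pacing is still ~16 ms
```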
 
The way it currently works, you could introduce "softCF" for half the price: all it does is drop 50% of the frames for reduced latency. ;)
 