AMD: Sea Islands R1100 (8*** series) Speculation/Rumour Thread

Thanks for disabusing me of that notion. I'm also impressed that CrossFire gets so much closer to a factor of 2x single card performance than SLI this generation.

It has nearly always been the case since CFX was introduced back in the X1800 PE and X1900 XTX days. Nvidia put a lot of effort into catching up during the Fermi period, though. That said, neither CFX nor SLI is perfect: when you push for higher scaling, you inevitably have to set some things aside.

I remember that many years ago SLI scaling topped out at around 75%, and that was really the maximum under the best conditions, while CFX easily reached 90%+ in some situations. That's why, even when the single AMD cards were slower, in CFX they were always faster than the Nvidia opponent in SLI. Not to mention that the performance increase was close to linear for extreme overclocking. Outside professional use, where we were already running dual GPUs, my first dual-card setup was two watercooled 6600 GTs in SLI (and then 6800s, a revolution, because before that I was overclocking and benchmarking with the 9700 Pro Maya and later the 9800 Pro). Then I used the first CFX setup with the X1800 XTX PE 512MB (not a card that stuck around long; ATI replaced it three months after launch: December for the X1800 XTX PE, March for the X1900 XTX), and after that an X1900 XTX + X1950 XTX coupled in CFX. Beast cards, in a system with an extremely good A64 for sub-zero overclocking.
 
Crossfire appears to have other issues, however, according to this: http://www.pcper.com/reviews/Graphi...ance-Review-and-Frame-Rating-Update/Frame-Rat

I'd avoid both SLI and Crossfire like the plague personally.


Understand it's a game, and for most of these variations the pixels take around 25-30 ms to light up anyway (not grey-to-grey, but a full blue-to-blue or white-to-black transition), so you won't actually see this variation.

I'm not saying AMD doesn't need to work on this, of course. But I can compare systems here, including single-GPU Kepler and CFX, and I can't see a difference in smoothness between the two screens I'm watching. I hope PCPer will give us a proper review covering more than one game.

Anyway, Nvidia seems to have a good technique: delaying the frame render. I don't know yet whether it's really a good thing or a bad thing, but either way it has that impact on these tests...
 
Yeah, I don't know how much it's really affecting gameplay the way PCPer suggests. I've used Crossfire in the past, however, and I really wasn't impressed. I just couldn't handle the microstutter.
 
The PCPer measurements are interesting on their own, in that they demonstrate how certain frames are only visible for a couple of lines, but what's also very useful is to correlate the frame times recorded by FRAPS with the frame times recorded at the monitor.

If the game uses its reference time based on the moment things are handed off to the driver (without putting some low pass filter on it), then the difference between fraps times and real visibility would be another metric for how much stutter is introduced by imbalance between the 2 parallel GPUs. (I'm assuming here that single GPUs never, or hardly ever, show this 1 line only behavior.)
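Just to make that concrete: assuming you already have the FRAPS frametimes and the per-frame display times from the capture-card analysis exported as plain lists of milliseconds, a rough sketch of the comparison could look like this (names and thresholds are invented for illustration, nothing here is PCPer's actual tooling):

```
# Illustrative sketch: compare frame times reported by FRAPS (game/driver side)
# with frame times measured off the display (capture-card side).
# Both inputs are assumed to be lists of per-frame durations in milliseconds.

def stutter_metrics(fraps_ms, display_ms):
    n = min(len(fraps_ms), len(display_ms))
    # Per-frame difference between what the game thinks it produced
    # and what actually reached the screen.
    deltas = [display_ms[i] - fraps_ms[i] for i in range(n)]
    mean_delta = sum(abs(d) for d in deltas) / n
    worst_delta = max(abs(d) for d in deltas)
    # Frames visible for only a sliver of the refresh
    # (roughly the "couple of lines" case described above).
    runts = sum(1 for d in display_ms[:n] if d < 1.0)
    return {"mean_abs_delta_ms": mean_delta,
            "worst_abs_delta_ms": worst_delta,
            "runt_frames": runts}

# Example with made-up numbers:
print(stutter_metrics([16.7, 16.5, 17.0, 16.6],
                      [16.9, 30.1,  0.4, 19.2]))
```

The interesting metric would then be how much larger those deltas get on CFX/SLI compared to a single GPU.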
 
Anyway, Nvidia seems to have a good technique: delaying the frame render. I don't know yet whether it's really a good thing or a bad thing, but either way it has that impact on these tests...

Hmmm... a possibly ridiculous question from my sleep-deprived mind: would this have any effect on input lag?
 
If the game uses its reference time based on the moment things are handed off to the driver (without putting some low pass filter on it), then the difference between fraps times and real visibility would be another metric for how much stutter is introduced by imbalance between the 2 parallel GPUs. (I'm assuming here that single GPUs never, or hardly ever, show this 1 line only behavior.)
Exactly! People seem to be getting into two silly camps here based on "FRAPS" or colored frame-style analysis... in reality both of them matter. You need to know not only what is showing up and when on the display, but the *contents* of that frame (i.e. the simulation time used by the game engine to generate it). You can't ignore either of those in a full analysis.
 
Kaotik said:
Yes, it increases perceived input lag, and it varies all the time.
For higher framerates (60+), it shouldn't increase perceived lag in any significant way, since the worst case should be that it displays a full frame in sync with the monitor's refresh rate.

Though it would seem a better solution would be an in-engine framerate cap. That really shouldn't be terribly difficult to support and should be a standard option in any game's settings GUI...
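For what it's worth, the basic idea of a cap really is simple. Here's a minimal sketch of the concept (not taken from any real engine; the function names are just placeholders):

```
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS   # ~16.7 ms per frame

def run_capped_loop(update, render, running):
    # Minimal in-engine frame cap: after doing the frame's work, sleep off
    # whatever is left of the frame budget so frames are handed to the
    # driver at a steady pace instead of in bursts.
    prev = time.perf_counter()
    while running():
        start = time.perf_counter()
        update(start - prev)          # feed the real elapsed time to the simulation
        render()
        prev = start
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)
```

A real engine would probably busy-wait the last millisecond or so, since a plain sleep isn't that precise, but the principle is the same.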
 
Yes, it increases perceived input lag, and it varies all the time. Whether the amount of added lag is noticeable or not is anyone's guess.
http://techreport.com/review/21516/inside-the-second-a-new-look-at-game-benchmarking/11

It should depend mainly on the monitor and the situation... But then again, we're back to the problem of individual perception. If you look for something intently you will eventually find it, but if you just play, there are other things you'll notice before that.
 
Exactly! People seem to be getting into two silly camps here based on "FRAPS" or colored frame-style analysis... in reality both of them matter. You need to know not only what is showing up and when on the display, but the *contents* of that frame (i.e. the simulation time used by the game engine to generate it). You can't ignore either of those in a full analysis.
Do all game engines determine simulation time steps in exactly the same way FRAPS reports them? Personally, I've always taken an average over a few frames.
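For reference, the kind of averaging I mean looks roughly like this (a toy sketch, not any particular engine's code):

```
from collections import deque

class SmoothedTimestep:
    """Average the last few frame deltas instead of using the raw one,
    so a single late frame doesn't jerk the simulation forward."""
    def __init__(self, window=4):
        self.history = deque(maxlen=window)

    def step(self, raw_delta_ms):
        self.history.append(raw_delta_ms)
        return sum(self.history) / len(self.history)

ts = SmoothedTimestep()
for raw in [16.7, 16.6, 33.4, 16.7]:      # one hitched frame in the middle
    print(round(ts.step(raw), 2))          # the spike gets spread over several frames
```

An engine doing this will feed the simulation something different from the raw per-frame delta FRAPS records.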
 
Some early benchmarks for Sea Islands:

http://semiaccurate.com/2013/03/07/...d-range-gpus-hit-the-intertubes/#.UTqs1Rysh8E

According to Semiaccurate, it shines on tests where memory is emphasized. Not a big bump in low-midrange performance.

The bolded part depends on the driver, not on Sea Islands:
However it's worth noting that the newer OpenCL driver for Saturn 6641 clearly has a different approach to optimizing compute workloads; it shines in areas where memory usage is highlighted and regresses on the rest.
The benchmarks were done on the new chip with two different driver sets.
 
The bolded part depends on the driver, not on Sea Islands:

The benchmarks were done on the new chip with two different driver sets.

It sounded like the new set of drivers was made with Saturn in mind, but you're right. It makes me wonder if the new optimizations would translate back to Southern Islands. The hardware shader count and speeds aren't directly comparable either:

Saturn: 768 SPUs, 48 TMUs @ 1075 MHz
Cape Verde: 640 SPUs, 40 TMUs @ 1000 MHz

A gain of merely 10% (albeit across different driver sets) seems small next to the 20% bump in SPUs and the 7.5% bump in clock speed, which together would suggest roughly 29% more theoretical shader throughput.
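Back-of-the-envelope, using just the listed specs (nothing measured):

```
# Naive scaling estimate from the listed specs above, not benchmark data.
saturn     = {"spus": 768, "tmus": 48, "clock_mhz": 1075}
cape_verde = {"spus": 640, "tmus": 40, "clock_mhz": 1000}

shader_ratio = (saturn["spus"] * saturn["clock_mhz"]) / \
               (cape_verde["spus"] * cape_verde["clock_mhz"])
print(f"theoretical shader throughput ratio: {shader_ratio:.2f}x")   # ~1.29x
```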
 
As you may have heard, AMD has officially released Richland, the APU that replaces Trinity. It improves on Trinity's Turbo by taking temperature into account to exploit any available headroom, like Intel's Turbo or NVIDIA's Boost. More at the Tech Report: http://techreport.com/news/24482/amd-intros-35w-richland-mobile-apus

The reason I'm posting this here is that if AMD is doing this for APUs, I don't see why they wouldn't do the same for GPUs at some point either this year or in 2014. The obvious downside is that the new algorithm isn't deterministic anymore, but I guess AMD figured that an extra 2~10% or so in benchmarks was worth it.
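Conceptually the boost logic is probably something along these lines. This is purely a toy illustration, not AMD's actual algorithm; the limits and step sizes are invented:

```
# Toy sketch of a temperature-aware boost controller, purely illustrative.
# The idea: raise the clock while both power and temperature readings leave
# headroom, back off when they don't.

BASE_MHZ, MAX_BOOST_MHZ, STEP_MHZ = 900, 1000, 25
POWER_LIMIT_W, TEMP_LIMIT_C = 150, 95

def next_clock(current_mhz, power_w, temp_c):
    if power_w < POWER_LIMIT_W and temp_c < TEMP_LIMIT_C:
        return min(current_mhz + STEP_MHZ, MAX_BOOST_MHZ)
    return max(current_mhz - STEP_MHZ, BASE_MHZ)

clock = BASE_MHZ
for power, temp in [(120, 70), (130, 80), (145, 92), (155, 97), (140, 90)]:
    clock = next_clock(clock, power, temp)
    print(clock)   # climbs while there's headroom, drops when a limit is hit
```

Which is exactly why it stops being deterministic: two otherwise identical cards with different cooling or ambient temperatures will settle at different clocks.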
 
Reviewers should start doing hot-box tests, like some power supply reviewers do: run the system in a poorly ventilated case of yesteryear in a non-AC room and compare the results. It's a shame that people can't be certain they'll get the performance they see in reviews from the product they buy, but I guess it's better now that all the big players will be on the same playing field...
 