Skyrim is heavily CPU bound at "normal" settings.
At 2560x1600, there is a noticeable FPS increase on my GTX from stock to OC'd settings.
Having my 2600K at 4.6 GHz vs. stock does help, but the GPU OC is certainly showing me more benefit.
I'm quite sure the 250W figure is with the power limit at +20%, even though at least some reviews mention it being at 0%.
At least IMO even the official slides suggested as much, listing Max Board Power as 250W and "typical gaming power" at a mere 190W.
Can anyone confirm this?
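Quick sanity check on the arithmetic (my own numbers, not from any slide): if 250W already includes the +20% headroom, the default limit would be about 208W; if 250W is the 0% setting, +20% would allow roughly 300W.

```python
# Back-of-the-envelope only -- assumes the +/-20% applies directly to the limit.
quoted_limit_w = 250
print(round(quoted_limit_w / 1.2))  # ~208 W default if 250 W is the +20% figure
print(round(quoted_limit_w * 1.2))  # 300 W ceiling if 250 W is the 0% figure
```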
Depends entirely on Apple pushing (or not) for quadrupling the resolution on Macs. If they do push, the prices could tumble very quickly.
You are running a single card and not SLI, is that correct?
Some quick number crunches:
7970: 2,048 SP * 925 MHz = 1,894,400 SP*MHz
3.79 TFLOPS / 1,894,400 SP*MHz ~ 2*10^-6 TFLOPS/SP*MHz (2.000633)
4.31B transistors * 925 MHz = 3,986.75 BTrans*MHz
3.79 TFLOPS / 3,986.75 BTrans*MHz ~ 9.506*10^-4 TFLOPS/BTrans*MHz
6970: 1,536 SP * 880 MHz = 1,351,680 SP*MHz
2.70 TFLOPS / 1,351,680 SP*MHz ~ 2*10^-6 TFLOPS/SP*MHz (1.997514)
2.64B transistors * 880 MHz = 2,323.2 BTrans*MHz
2.70 TFLOPS / 2,323.2 BTrans*MHz ~ 1.162*10^-3 TFLOPS/BTrans*MHz
Also, pixel fillrate is directly proportional to the core clock here, since the ROP count is unchanged.
So, correct me if I'm wrong, but it looks like GCN itself bought them no raw throughput improvement (going by a pure TFLOPS metric, which I know doesn't correlate directly with game performance), because the gain in FLOPS tracks exactly with the increase in SP count and clock speed.
However, the FLOPS number is roughly a 40% increase, and 40% is right around what we're hearing in leaked benches.
So, theoretically, they could have achieved the same thing by shrinking Cayman and scaling the architecture up.
It also seems they are spending more transistors to get the same FLOPS, so there is obviously some extra functionality packed in there. Power gating, 3D support + other stuff?
I guess the takeaway is that GCN in itself isn't revolutionary, and this iteration is more about the die shrink, added features, lower idle power and DirectX 11.1 compliance?
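If anyone wants to poke at these numbers, here's a quick script that reproduces the crunching above. The 2 FLOPs/SP/clock factor (FMA) and the 32 ROPs on both chips are my assumptions from the commonly published specs, not something stated in this post.

```python
# Reproduces the number crunch above. Peak FLOPS assumes 2 FLOPs per SP per
# clock (FMA); the 32-ROP counts are the commonly published specs.
chips = {
    "HD 7970 (Tahiti)": {"sp": 2048, "mhz": 925, "btrans": 4.31, "rops": 32},
    "HD 6970 (Cayman)": {"sp": 1536, "mhz": 880, "btrans": 2.64, "rops": 32},
}

for name, c in chips.items():
    tflops = 2 * c["sp"] * c["mhz"] * 1e-6            # peak single-precision TFLOPS
    per_sp_mhz = tflops / (c["sp"] * c["mhz"])        # ~2e-6 by construction (FMA)
    per_btrans_mhz = tflops / (c["btrans"] * c["mhz"])
    fillrate = c["rops"] * c["mhz"] * 1e-3            # Gpixels/s, ROPs * core clock
    print(f"{name}: {tflops:.2f} TFLOPS, "
          f"{per_sp_mhz:.3e} TFLOPS/(SP*MHz), "
          f"{per_btrans_mhz:.3e} TFLOPS/(BTrans*MHz), "
          f"{fillrate:.1f} GP/s fill")
```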
Your math just reflects the fact that they use FMA units.
Probably not. The expectation some have for linear improvement is unreasonable. There were signs Cayman's scaling over its predecessors was petering out.
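To spell out the FMA point above: peak FLOPS is defined as 2 (multiply + add) x SPs x clock, so dividing it back by SPs x clock gives ~2*10^-6 for any chip built from FMA units; the ratio can't say anything about architectural efficiency. A trivial sketch:

```python
# Peak throughput for an FMA-based GPU is defined as 2 * SPs * clock, so the
# TFLOPS/(SP*MHz) ratio comes out at ~2e-6 no matter which chip you plug in.
def peak_tflops(sps, clock_mhz):
    return 2 * sps * clock_mhz * 1e-6  # 2 FLOPs (mul + add) per SP per clock

for sps, mhz in [(2048, 925), (1536, 880)]:       # Tahiti, Cayman
    print(peak_tflops(sps, mhz) / (sps * mhz))    # ~2e-06 both times
```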
Well, that's the big part. The new architecture is supposed to be much more efficient, no? I know the leaked benches don't show that every time, but... we will see. Plus I guess they played it "safe": new architecture, new process, a lot of new things...
VLIW4/5 was already getting good utilization for most graphics workloads; to see GCN's performance advantage over VLIW you need to look at things like compute shaders. Civ 5 uses compute shaders, so look at its performance improvement there. What else uses them?
Edit: a quick Google says Shogun 2 does as well.
Can you elaborate on that? (something that shows the arch hitting a brick wall with scaling?)
Looks more like a 45% increase overall to me, with some compute stuff obviously showing way better results. So from that point of view it does seem to be more efficient per FLOP. (Plus, as others mentioned, I don't think you could really scale Cayman up: give it twice the number of SIMDs and performance would probably increase by well under 50%.)
I'm looking at the Skyrim graph, and I'm 100% sure there aren't enough ROPs. Reports are that later this year we'll see 4K monitors @ 30-36", and 2880 @ 17". Giving Tahiti only 32 ROPs is a colossal mistake. With the rumoured price, the people on 1080p res probably won't go for this card, and we're already seeing graphs where it's no faster than a 580 @ 2560. Failure all round.
But it's OK! WINZIP is accelerated. AMD, pull your head out your arse plz.
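To put rough numbers on that (my arithmetic, using the 925 MHz reference clock and the 32-ROP figure): the resolutions being talked about carry two to four times the pixels of 1080p, so fill/ROP demand scales accordingly.

```python
# Rough pixel counts for the resolutions mentioned above, plus the peak fill
# from 32 ROPs at the 925 MHz reference clock (one pixel per ROP per clock --
# real throughput depends on blending, MSAA, formats, etc.).
base = 1920 * 1080
resolutions = ["2560x1600", "3840x2160", "3840x2400", "4096x2160"]

for r in resolutions:
    w, h = map(int, r.split("x"))
    print(f"{r}: {w * h / 1e6:.1f} MP, {w * h / base:.1f}x the pixels of 1080p")

print(f"peak fill: {32 * 925 / 1000:.1f} Gpixels/s")
```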
So..... how long do you guys think it will take for AMD to release 1.5GB versions of these babies (if ever)?
I doubt they'll ever do that. Some custom AIB 7950 parts might come with only 1.5GB, but I don't think 7970s ever will, not even as custom designs.
For one thing, the gap between the 6950 and 6970 is a lot smaller than theoretical FLOPS numbers would suggest. In fact, in practice a 6950 overclocked to the same frequency as the 6970 performs almost as well, which, by the way, was also true for the 5850 and 5870.
So you can't really assess how much of an efficiency improvement Tahiti brings, because you don't know how well (or how poorly) Cayman would have scaled. However, there's a very good chance that either Pitcairn or Cape Verde will end up with a FLOPS count very similar to some Evergreen/NI chip (could be Cypress, maybe Barts, Juniper or Cayman). Playing around with clocks could even bring it to perfectly equal numbers. This would allow for more straightforward efficiency comparisons. Until then, calculating FLOPS/transistor ratios won't tell you much.
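For reference, the theoretical gaps being talked about, using the commonly published SP counts and clocks (treat the exact specs as my assumption, not something from this thread):

```python
# Theoretical peak FLOPS gaps within a family (published SP counts and clocks).
def peak_tflops(sps, clock_mhz):
    return 2 * sps * clock_mhz * 1e-6   # FMA: 2 FLOPs per SP per clock

hd6970 = peak_tflops(1536, 880)         # ~2.70 TFLOPS
hd6950 = peak_tflops(1408, 800)         # ~2.25 TFLOPS
hd5870 = peak_tflops(1600, 850)         # ~2.72 TFLOPS
hd5850 = peak_tflops(1440, 725)         # ~2.09 TFLOPS

print(f"6970 vs 6950: +{hd6970 / hd6950 - 1:.0%} theoretical")
print(f"5870 vs 5850: +{hd5870 / hd5850 - 1:.0%} theoretical")
# A 6950 clocked up to 880 MHz is already ~92% of a 6970 on paper:
print(f"6950 @ 880 MHz: {peak_tflops(1408, 880) / hd6970:.0%} of a stock 6970")
```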
There were a number of games that had the 6970 uncomfortably close to the 5870. In some, the older chip was bracketed in performance by the 6970 and 6950, despite a 20% increase in transistor count. Barts was rather close to the 6950 in some places as well.
Each generation returned less than a 1:1 improvement for increases in execution resources, outside of areas where the previous generation tanked.
RV770 had some very large gains compared to RV670, but it should be noted that there were cases where it more than doubled in resources, and there were notable shortcomings in RV670.
As for "sure, it's more efficient in that you get more performance per watt, but that can be attributed to a die shrink": the power improvement for the node jump was given as being ~40%, with no improvement in transistor performance or an increase in transistor count.
The shrink alone would not be enough to get both the power savings and the increase in overall performance that GCN appears to get.
Remember what the original 22" Apple Cinema Display (1999) cost? It was a relatively strange resolution but was the precursor of later 1680x1050 monitors. Hint: it wasn't even remotely cheap. And it would still be over 5 years before those were somewhat affordable.
Apple pushed the 23" Cinema HD Display in 2003. Again, not very cheap or affordable. And again it was over 6 years until 23-24" (1080p or 1200p) monitors were mostly affordable, although at least they got under 1k within 4 years.
Apple then pushed the 30" Cinema HD Displays in 2004. And... they are still not affordable to most. The most recent 27" version (with a lower resolution, although higher pixel density) is still 1k USD.
So pardon me if I don't hold my breath for 4k displays to be even remotely affordable in the next 5+ years. Especially with there being so many different "4k" resolutions (3840x2400, 3840x2160, 4096x2160, just to name a few).
Don't take this to mean that I don't hope it happens. I've been wishing for a doubling of panel pixel density for years and years now. And while this wouldn't be a doubling (only 1.5x the current 30" pixel density for 30" 4k displays), it's at least progress in that arena.
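For what it's worth, the density math behind that 1.5x figure (assuming 30" 16:10 panels):

```python
# Linear pixel density (PPI) for a 30" 16:10 panel at the two resolutions.
import math

def ppi(width_px, height_px, diagonal_in):
    return math.hypot(width_px, height_px) / diagonal_in

current = ppi(2560, 1600, 30)   # ~101 PPI
four_k = ppi(3840, 2400, 30)    # ~151 PPI
print(f"{current:.0f} PPI -> {four_k:.0f} PPI ({four_k / current:.2f}x)")
```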
Regards,
SB