Hey Dave... from TR's FCAT article, they noted your Radeon is outputting some weird, inaccurate colors... hope you can fix that. I wonder if it is the same thing as the 360 Xenos gamma "correction" feature... please go back to true-calibrated colors, yay?
I also tried it on a 4850 and it runs very smoothly at 1680x1050 in a D3D9 mode (?). It even looks good. So good I'm not sure what is missing from D3D11 unless I look it up. But it won't allow >low texture resolution on D3D9 (or the 512mb card maybe). The textures still looked good though.
Nobody noticed anything "wrong" until Nvidia seeded the tools to do so.
It should come as no surprise that AMD was only recently made aware of these performance issues and was kind of stunned to find out how bad they were. The truth is that AMD could “hack” together a fix to make our frame time graphs look better but it would likely be a solution that introduced more problems than answers without doing the research required to get it right. The driver team has told me several times over the past two weeks that they should have a testable driver to fix the CrossFire problems “in 2 to 3 months.” Until then, buyers that consider a multi-GPU solution a goal or a requirement will want to seriously debate dropping Radeon cards from consideration.
http://www.pcper.com/reviews/Graphi...ils-Capture-based-Graphics-Performance-Tes-12
What this means for the upcoming Radeon HD 7990 Release
While we aren't actually testing a Radeon HD 7990 here, we are basically testing the exact same configuration with a pair of Radeon HD 7970s running in CrossFire. This setup emulates the ASUS ARES II and the PowerColor Devil 13 pretty closely and, from what we are hearing, will be very close to what you'll find on the official reference HD 7990 as well. This comparison today wasn't done in response to AMD's tease of information at GDC this week, but it is well timed nonetheless.
The results shouldn't surprise you, and they won't surprise AMD either - if released today, the HD 7990 would not perform well in our tests. AMD has told me that they are working on an option to meter frames in the way that NVIDIA does, while offering users the option to enable or disable it, but we are months away from that fix. Until then, any dual-GPU Radeon HD 7000-series card is going to show these problems, represented as runts and dropped frames. We have many more pages of results to go over for the HD 7950/7870/7850/etc. and those will be published in the coming days - but the story will look very similar, as you'll find.
In all honesty, when AMD told me they were planning this card release I recommended they hold off until the driver fix is in place - myself and other reviewers are going to be hitting them hard on these issues until then, and any dual-GPU option with the Radeon name is going to struggle to live up to the expectations placed on it.
Specifically on AMD hardware?
Why haven't reviewers been mentioning it up till now?
Lots of "nobodies" noticed the problems. AMD just chose to ignore them or brush them away.
AMD apologists (like yourself) denied they even existed.
The tools that Nvidia provided, produced with the help of Ryan Shrout at PCPer, finally showed that the problems are real. AMD has now even admitted they exist and will work on getting fixes done in 2 to 3 months.
Guru3D thread: AMD comments on stutter issue (H.H., 26/03/2013)
For a couple of weeks now I have been working on a method, as demonstrated on another website, using a framegrabber.
For the testing we have our traditional game PC with the dedicated graphics card installed. We start up a game or benchmark sequence. The game is rendered, passes several stages, and then each rendered frame is ready and sent to the monitor. It is precisely at that stage that we make a bypass.
We connect the dual-link DVI monitor output cable to a Dual Link DVI Distribution Amplifier (basically a high-resolution-capable DVI splitter). The graphics card goes to the input, and the splitter clones the signal to its two outputs. One output we connect to the monitor; the second output we connect to a framegrabber, a.k.a. a video capture card.
Ours is a single-channel, 4-lane PCI Express card with a maximum data rate of 650 MB/s and support for a maximum capture canvas of 4k x 4k (we wanted to be a little future proof) for all progressive and interlaced DVI/HDMI modes. This card alone was 1500 EUR.
We are not there yet, though, as of course we need to place the framegrabber into a PC. Fast is good, so we are using a Z77 motherboard with a Core i7 3770K processor. The encoding process is managed in real time by the processor on the frame grabber, so if the IO is handled fast enough we'll see less than 5% CPU utilization while capturing 2560x1440 @ 60 Hz streams in real time.
Now we need to save the rendered frames in real time, uncompressed, as an AVI file. And here's the problem:
- Capturing at 1920x1080 @ 60 Hz in real time shows IO writes of roughly 200~250 MB/s.
- Capturing at 2560x1440 @ 60 Hz in real time shows IO writes of roughly 400~475 MB/s.
Correct - that's ~450 MB each and every second, continuously (!)
The first time I noticed that, yes, I cursed and nearly vomited. At 2560x1440 the only way to tackle the real-time writes without clogging up system IO on the recording PC is to set up multiple SATA3 SSDs in RAID stripe mode. That would still create some CPU load, though, so there is an easier solution.
We contacted OCZ and asked them to send out a RevoDrive 3 X2. These PCIe x4 based products have their own hardware SSD and RAID controllers, thus removing a lot of overhead from the PC. They can write a sustained 500 MB/s quite easily. And with 450 MB/s writes (nearly a full GB for every 2 seconds of recording) you'll need some storage volume space as well, so we got the 700 EUR 480 GB version, which in theory will record 4-5 minutes before it's full.
But that's sufficient for our purposes. While doing all this high-end capturing we see a low CPU overhead of only 3-4%. Why am I so keen on low CPU utilization, you might ask? Because this is precise measuring and analyzing; we want to prevent accidentally dropping frames in the recording at all times. But yeah, at this point we have spent something like 3500 EUR on the frame grabber PC and the splitter alone.
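As a rough sanity check of those IO figures, here is a minimal sketch (not from the article) that computes the raw data rate of an uncompressed capture. It assumes roughly 2 bytes per pixel (e.g. a YUV 4:2:2 capture format), which is what makes the quoted MB/s numbers line up; the actual pixel format used by the capture card is not stated.

```python
# Raw bandwidth of an uncompressed video capture, assuming ~2 bytes per pixel.
def capture_rate_mb_s(width, height, refresh_hz, bytes_per_pixel=2):
    """Sustained data rate of an uncompressed capture stream in MB/s."""
    return width * height * bytes_per_pixel * refresh_hz / 1e6

for w, h in [(1920, 1080), (2560, 1440)]:
    print(f"{w}x{h} @ 60 Hz -> ~{capture_rate_mb_s(w, h, 60):.0f} MB/s")
# prints ~249 MB/s and ~442 MB/s, in line with the 200~250 and 400~475 MB/s above
```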
Once we have set up all the hardware we install the framegrabber. With the supported software we can now tap into the framegrabber and record the frames fired at us from the game PC.
Recording an AVI, and then what?
Good question. We have the ability to grab, and thus record, all frames fired at the framegrabber PC, and we record them in an AVI file. But that alone is not enough, as how are you going to analyze data from an AVI file?
So here the science starts. We leave the framegrabber PC to rest for a minute and go back to the game PC that is rendering our game, benchmark or whatever.
On the game PC we have installed a small overlay utility with extremely low CPU overhead. Here's what it does: each frame that the GPU renders gets a colored bar assigned to it, for example:
Frame 1 gets a yellow colored bar at the top left.
Frame 2 gets a green colored bar at the top left.
Frame 3 gets a red colored bar at the top left.
Frame 4 gets a purple colored bar at the top left.
Frame 5 gets a blue colored bar at the top left.
And so on... so each rendered frame will have a color tag; that's simple enough to understand, right? Now we go back to the framegrabber PC and record our gameplay. The output of the game PC, including the per-frame color tags from the overlay application, is now recorded. Once we look at the AVI file, we can indeed see that with each frame we pass there is a colored tag on the left side of the frame.
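To make the tagging scheme concrete, here is a minimal sketch of the bookkeeping (not the actual overlay utility, which isn't published): each presented frame simply takes the next color from a fixed, repeating palette, and draw_bar() stands in for a hypothetical overlay drawing call.

```python
# Minimal sketch of the per-frame colour tagging described above.
# The real overlay hooks the game's present call; draw_bar() is a hypothetical stand-in.
from itertools import cycle

PALETTE = ["yellow", "green", "red", "purple", "blue"]  # repeats every 5 frames
_next_tag = cycle(PALETTE)

def on_frame_presented():
    """Called once per rendered frame; assigns (and would draw) the next colour tag."""
    colour = next(_next_tag)
    # draw_bar(colour, x=0, y=0, width=16, height=64)   # hypothetical overlay call
    return colour

print([on_frame_presented() for _ in range(7)])
# ['yellow', 'green', 'red', 'purple', 'blue', 'yellow', 'green']
```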
Going deeper
But that is still not enough, right? So here's where I'll simplify the explanation a little bit. We now fire off a Perl script at the AVI. The Perl script analyzes the AVI file: each frame has a certain latency and a certain color, and that way it can differentiate and distinguish frames, and thus values, from each other. It outputs the data in an XML file. And once the data is in an XML file, we can chart it.
We fire off another Perl script to make a nice chart out of the XML data and boom... we have output that represents the frame experience you see and observe on your monitor.
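The site's actual Perl scripts aren't published, but the idea can be sketched roughly as follows (in Python with OpenCV): sample the color bar in every captured frame, match it to the overlay palette, and count how many captured frames each tag persists to estimate a per-frame display time. The bar position and the BGR palette values here are assumptions for illustration only.

```python
# Rough sketch of the AVI analysis step described above (not the site's Perl tooling).
# Assumptions: the colour bar sits in the top-left 64x16 pixel region and the
# overlay palette uses pure primary/secondary colours.
import cv2
import numpy as np

PALETTE = {                                   # assumed overlay colours, in BGR order
    "yellow": (0, 255, 255), "green": (0, 255, 0), "red": (0, 0, 255),
    "purple": (255, 0, 255), "blue": (255, 0, 0),
}
BAR_REGION = (slice(0, 64), slice(0, 16))     # rows 0-63, columns 0-15

def nearest_tag(mean_bgr):
    """Match the average bar colour to the closest palette entry."""
    return min(PALETTE, key=lambda name: np.linalg.norm(np.array(PALETTE[name]) - mean_bgr))

def frame_times(avi_path, capture_hz=60.0):
    """Yield (colour tag, milliseconds on screen) for each rendered frame in the capture."""
    cap = cv2.VideoCapture(avi_path)
    last_tag, count = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        tag = nearest_tag(frame[BAR_REGION].reshape(-1, 3).mean(axis=0))
        if tag == last_tag:
            count += 1
        else:
            if last_tag is not None:
                yield last_tag, count * 1000.0 / capture_hz
            last_tag, count = tag, 1
    if last_tag is not None:
        yield last_tag, count * 1000.0 / capture_hz
    cap.release()
```

A frame dropped by the display pipeline would show up as a color skipped in the repeating sequence, while runts would need finer, scanline-level analysis of tearing inside a captured frame rather than the whole-frame averaging above.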
------
So the above is just a quick part of that article. Unfortunately we are running into many issues, both software and hardware related. See, if we want to catch frame experience / stutter issues, then the above method is the only valid one. Unfortunately there is so much hardware and software involved that currently I see anomalies in the charts that should not be there. Even the RevoDrive 3 X2 is not fast enough, as we see IO issues causing frame drops... and that is the one thing that must not happen.
It will take a while (months) to get this refined. However, I am fighting another problem: my work week is already 60+ hours, and the methodology described above seriously EATS away time in tremendous amounts. So we're not sure how, if and when this new method will become effective. It would be a 2-3 page addition to (on top of) our current reviews, next to average framerates, for the sole purpose of hunting down graphics anomalies.
Okay... this post is too long... but that said, we are working on a method that is accurate: measuring at the DVI output is literally what you'll see on the monitor and thus on the charts. It is, however, ten times more complicated and very time consuming.
This is the problem with the current graphics cards.
I got a 6950 and unlocked it to 6970 speeds the day it released. I believe I paid $350 with a $50 rebate.
It's getting 17/23.
A new $350 investment would get me a 7950 and I would go to 25/33.
We really have had little progression in years. If you bought a 5870, what, 4 years ago now, you don't actually need to upgrade.
"Specifically on AMD hardware?"
Both vendors. Crossfire seems to be a bit more problematic.
"Why haven't reviewers been mentioning it up till now?"
Because it's very hard to explain and it's a lot of work to test. Much easier to just follow the reviewer guides given out by AMD and NVIDIA.
"Why haven't reviewers been mentioning it up till now?"
Because it's one big conspiracy, that's why.
Went from a 6970 *unlocked* to a 7970 *overclocked beyond 1 GHz* and... so far I'm gaming at twice the fps... it is okay... I think. Surprisingly pleasant given that the 7970 does not have twice the specs... and I got mine cheaper.
Yes, the price of AMD has gone up... quite a lot... since the 4870... (and down lately)... more surprising is that the 7970 GE is finally clearly outperforming the 680... yet Nvidia doesn't give a fuck about adjusting the 680 prices... goes to show how powerful Nvidia's image is...
Further surprising development... anyone noticed how the 580 is still trucking along? That's... one powerful card, even if it used to be reviewed unfavorably against the 6970... I wonder, is it time for AMD to go for a big-die strategy... when Nvidia can sell a Titan yet AMD has nothing to show for it... time for a change?
Speaking of the 6950... surprisingly, it is finally doing its job outperforming the 5870... my guess is the extra tessellation unit and 2 GB of RAM? AMD needs to add more tessellation units for their 8900 revision, IMO...
Lots of surprises I'm getting just by reading the graphs...
I really hope 22nm cards hit this year.
Don't, because you will be very disappointed.
"This is the problem with the current graphics cards."
I can understand if it's not enough of a jump for you to upgrade, but a 43% improvement in one generation is pretty good.