AMD Vega Hardware Reviews

Hats off to HardOCP for revealing the uselessness of blind gaming tests ... guess that's why they decided to use a 1080Ti.

How is it a useless test?

Those are people actually playing games. When you play games, do you look at the FPS counter or the action? If Vega is providing the same or better end-user experience, doesn't that mean anything? Are FPS graphs more important than hands-on experience?

I thought we moved past that line of thinking with microstuttering and mGPU issues.
 
1440p is a fairly popular resolution for gaming, and the blind tests AMD had been showing were against the 1080, where performance is currently equivalent. Regardless, the blind tests are representative of what was tested.

It was 3440x1440, so ~34% more pixels than standard 1440p, and 100 Hz. 4K would have been limited to 60 Hz, and I'm sure people would have complained about that as well. Honestly, 3440x1440 at a high refresh rate is the best bet for gaming right now. You get to see more with UW and still have better IQ and higher FPS.
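Quick sanity check on the pixel-count claim, as a minimal sketch (assuming the 1440p baseline is a standard 2560x1440 panel):

```python
# Pixel-count comparison: 21:9 ultrawide vs. a standard 16:9 1440p panel.
ultrawide = 3440 * 1440   # 4,953,600 pixels
qhd       = 2560 * 1440   # 3,686,400 pixels
print(f"{ultrawide / qhd - 1:.1%} more pixels")  # -> 34.4% more pixels
```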
 
Those are people actually playing games. When you play games, do you look at the FPS counter or the action? If Vega is providing the same or better end-user experience, doesn't that mean anything? Are FPS graphs more important than hands-on experience?
There's a lot of stuff missing. Why not add an RX 580 and a GTX 980 Ti in there as well and then ask them to sort them from worst to best?
 
They used a Ryzen 1700 CPU? Maybe that's why nVidia's performance allegedly suffers? (If they needed a frame cap, I'd give them the benefit of the doubt; they probably figured that out and used one.)
 
Isn't that an argument for benchmarks being superior to actual experience? Essentially, image quality doesn't matter so long as framerates are high. Actual performance so far hasn't been the point of these demonstrations, as no framerates are disclosed.
Benchmarks are admittedly imperfect attempts at measuring what the performance actually is, but they do have the benefit of providing some level of consistency, reproducibility, and concrete values for measurement and comparison. Flaws or unexpected quirks related to those tests can be more readily found and analyzed, with a fair amount of shared terminology and process.

You can derive conclusions of reasonable confidence when working with human perception, but if you're trying for a more scientific conclusion you will be faced with the daunting prospect that humans are gooey, twitchy, inconstant, non-deterministic, capricious, sometimes inarticulate, and often irrational instruments. Signal has to be extracted from a lot of measurements, and even slight quirks can create a signal that wouldn't exist if not for an unexpected influence added by the test. The set of variables is much wider and less understood, and it's especially challenging in that there are ways to influence a test that even a badly coded benchmark would be too rational to fall for.
Experimentation is in part about trying to combat the human tendency to fool oneself, and adding a human element to more parts of the experiment allows for an explosion in the number of ways one can fool or be fooled about how they are fooling or being fooled about being fooled, and so on.

For example, maybe it would have made a difference if there were "four" systems, with a randomized mapping of the two real systems to what the testers thought they were playing on. It might help avoid a player getting hung up on something subconsciously in one run and their brain subsequently reinforcing its conclusions on what it knows is a retry.
Also, it seemed like the testing order was sequential and fixed, which would need to be controlled for if this were a broader test. The brain going into the session on machine 1 isn't entirely the same one at the start of machine 2's run.
I could be misinterpreting the methodology section on that.
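Purely to illustrate the kind of randomization I mean (hypothetical rig and station names, nothing from the actual methodology):

```python
import random

REAL_RIGS = ["vega_rig", "gtx1080ti_rig"]   # placeholder names for the two real systems

def make_session(tester_id, seed=None):
    rng = random.Random(seed)
    # Present "four" stations, each secretly backed by one of the two real rigs.
    backing = REAL_RIGS * 2
    rng.shuffle(backing)
    stations = {f"station_{i + 1}": rig for i, rig in enumerate(backing)}
    # Shuffle play order per tester so run order is counterbalanced across the group.
    order = list(stations)
    rng.shuffle(order)
    return {"tester": tester_id, "stations": stations, "order": order}

print(make_session("tester_01", seed=42))
```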

They admit they lacked the time and resources to do more, which may leave us uncertain on a lot of things.
Benchmarks are meant to optimize time and resources for getting a measurement of some sort.
Human beings, on the other hand, are complications wrapped in an enigma smothered in biology and dusted with cheetos.
 
Well, I guess we already know what is going to dominate the conversations when Vega is out, in case it does not perform well in benchmarks.
 
The reported RX Vega retail price including VAT is 9000 SEK, ~$1100 (~$850 excluding VAT)... Apparently Vega tech is not cheap.

https://www.nordichardware.se/nyheter/radeon-rx-vega-prislapp-sverige.html
My understanding is that's ~$50 more than a 1080 costs in Sweden, so not unjustified for what is likely a faster card.

They admit they lacked the time and resources to do more, which may leave us uncertain on a lot of things.
For a bunch of gaming friends over for a BBQ, I'd say it works as a conversation starter. I'd agree with all the points you made, and a truly scientific experiment would randomize the testing more to control variables.

Benchmarks are meant to optimize time and resources for getting a measurement of some sort.
Human beings, on the other hand, are complications wrapped in an enigma smothered in biology and dusted with cheetos.
True, but we may be seeing initial testing in regards to display latency. Pushing FreeSync 2 with its faster, direct tone-mapping pass may be difficult to measure. Not saying that is the case here, but it was a significant improvement with the new APIs and possibly relevant to upcoming driver features.
 
They should have done some interesting experiments: have one group go double blind and pick the one they think is best, have one group where they're told which card is which, and one group where they're lied to and told the RX Vega is the 1080 Ti. I don't think it would lead to any conclusions about GPU performance, or conclusions about RX Vega in general, but it might tear down some gaming equivalents of the audiophile "golden ear." My personal feeling is that there's always a "good enough" level of performance, where if you go beyond it most people cannot tell the difference. When you're buying a high-end GPU you're buying future-proofing, or maybe the ability to play at a higher resolution at a lower performance target. Lots of people will drop tons of money because they think they can tell the difference between 120 FPS and 100 FPS, but can they really?

Actually, the best experiment would be to put the same GPU in both systems but tell people they're different. Lol.
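A rough sketch of how the group assignment above could look (made-up condition names, just to make the idea concrete):

```python
import random

CONDITIONS = [
    "double_blind",      # nobody is told which rig is which
    "told_truthfully",   # testers are told the real card in each rig
    "told_swapped",      # testers are told the Vega rig is the 1080 Ti (and vice versa)
]

def assign_groups(testers, seed=0):
    rng = random.Random(seed)
    shuffled = list(testers)
    rng.shuffle(shuffled)
    # Deal testers round-robin so the three groups stay roughly equal in size.
    return {cond: shuffled[i::len(CONDITIONS)] for i, cond in enumerate(CONDITIONS)}

print(assign_groups([f"tester_{n:02d}" for n in range(1, 13)]))
```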
 
My understanding is that's ~$50 more than a 1080 costs in Sweden, so not unjustified for what is likely a faster card.
Your understanding is way off. A GTX 1080 can be had for well under 6000 SEK in Sweden.
Finnish pricing is expected to be €950-1100+ depending on the model; we have 24% VAT. The cheapest GTX 1080s are around €550. Even the GTX 1080 Ti is cheaper, with the cheapest models going for a bit over €750 and most between €800 and €900.

Either AMD is pulling a stupidly daring PR stunt and will blow everyone away with 1080 Ti or higher performance, or the card will be a failure in the consumer space.
 
Looks like Vega is DOA for end users.
Everything will go to the pro market; they'll sell 1000 cards worldwide to review sites and AMD fans...
No one in their right mind will prefer it for gaming over a 1080/1080 Ti.
 
When you play games, do you look at the FPS counter or the action? If Vega is providing the same or better end-user experience, doesn't that mean anything? Are FPS graphs more important than hands-on experience?
If you look at Kyle's past GPU reviews you'll see his focus is entirely on user experience when comparing benchmark results, and he does NOT focus on providing benchmark tests using similar driver settings. It's probably where AMD got this "blind test" idea to begin with, why AMD chose Kyle, and why Kyle chose the 1080 Ti. Sounds like a match made in heaven, because any driver's feature settings can be modified to provide a similar end-user experience on any card.

Most enthusiasts make their purchase based on FPS because people want to play any game at maximum settings (highest possible quality), and being able to set driver settings to max means not spending hours configuring settings for the best playable end-user experience on a per-game basis.
 
In that case, shouldn't G-Sync work to avoid stuttering? There was a G-Sync monitor coupled with the 1080 Ti in the blind test.
It will remove it as long as FPS stays at or below the monitor's refresh rate; once FPS goes above it, microstutter comes back, so you'd need to use an FPS cap or an adaptive vsync technique, or use a monitor with a much higher refresh rate.
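Roughly what an FPS cap is doing, as a minimal sketch (illustrative only; real games and drivers do this far more precisely):

```python
import time

def run_capped(render_frame, cap_fps=100, frames=600):
    """Hold each frame long enough that FPS never exceeds cap_fps."""
    target = 1.0 / cap_fps
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()                     # whatever work produces the frame
        elapsed = time.perf_counter() - start
        if elapsed < target:
            time.sleep(target - elapsed)   # stay at or below the VRR ceiling

# e.g. run_capped(lambda: None, cap_fps=100)
```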
How/why is this a bad setup for Nvidia? The HW was identical except for the graphics card and the G-Sync vs FreeSync monitors, and the OS was reinstalled by Brent.
Upgrade the panels to 144 Hz, or even 200 Hz. The 1080 Ti is running out of sync here while Vega is staying within the VRR range; of course it will look smoother to some.
They used a Ryzen 1700 CPU? Maybe that's why nVidia's performance allegedly suffers?
The choice of CPU matters very little in Doom using Vulkan.
 
Aren't these pre-release prices always hugely speculative and not representative of what we see when the product is available?
Yes, they're placeholders 90% of the time. The article itself says the distributors who talked to them are very suspicious of those numbers.
Why would an 8 GB RX Vega cost 10% more than the 16 GB Vega FE that carries Pro drivers?

We have that MSI rep going "on record" in a forum claiming Vega consumes a lot of power but will bring great performance for the price. We also have AMD insisting on comparing the RX Vega with the GTX 1080 in these weird demos they've been doing, while claiming the RX Vega system is $300 cheaper. $300 is close to the usual G-Sync-over-FreeSync premium, so we're probably looking at a similar price between the cards.
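Back-of-the-envelope version of that reasoning (all figures here are rough estimates, not confirmed prices):

```python
system_price_gap = 300   # claimed RX Vega system advantage over the GTX 1080 system, USD
gsync_premium    = 300   # rough G-Sync-over-FreeSync monitor premium, USD
card_price_gap   = system_price_gap - gsync_premium
print(card_price_gap)    # ~0 -> the cards themselves probably cost about the same
```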
 
I know a guy who works for OCUK. He says AMD cards are typically overpriced before the official numbers are revealed. It's not at all unusual to see ludicrously high prices before AMD's reveal.

Take these prices with a grain of salt.
 
It will remove it as long as FPS stays at or below the monitor's refresh rate; once FPS goes above it, microstutter comes back, so you'd need to use an FPS cap or an adaptive vsync technique, or use a monitor with a much higher refresh rate.

Yes, but we are talking about above 100 Hz here, so microstutter should be practically invisible, especially with e.g. triple buffering and/or Fast Sync enabled, which I think Kyle has done.
 
Those are people actually playing games. When you play games, do you look at the FPS counter or the action? If Vega is providing the same or better end-user experience, doesn't that mean anything? Are FPS graphs more important than hands-on experience?
Yes? Of course they are. So we should expect all the review sites to only test subjective playing experience and give a RealFeel™ score out of 10 on how they thought the performance was for them personally? Screw frame times, how did it feeeeeeeel??

No one in their right mind will prefer it for gaming over a 1080/1080 Ti.
If I wanted to buy a 1080, I would have ages ago. I don't want another Nvidia card, and I especially don't think the architecture will scale well to DX12/Vulkan and compute. So if Vega is truly that bad for performance and price when it's released, then I'll likely wait again; perhaps Volta will be better.
 
How is it a useless test?

Those are people actually playing games. When you play games, do you look at the FPS counter or the action? If Vega is providing the same or better end-user experience, doesn't that mean anything? Are FPS graphs more important than hands-on experience?

I thought we moved past that line of thinking with microstuttering and mGPU issues.

Really?! Just lol. Things are getting beyond silly. For the performance we're seeing, the 1080 Ti will sustain higher game frame rates for a lot longer. In 2018, when a game arrives where it can do 60 FPS and the RX Vega 35-40, which will be better for "playing games"?
 