AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

Is it certain that Quadros can't run games? They might not be as well optimized, but what necessarily stops them?

Traditionally this wasn't the case. There was a time when Quadros didn't even have up-to-date DirectX drivers. Fermi was the first with 11.0, and it still took 2 1/2 years to implement it, even though Nvidia certainly had a head start well before the official release date.
 
Doesn't really prove the point either way, though; I'm sure some of the cheaper ones probably can't.

[Image: 3DMark Time Spy results from the linked review]

Neither card targets gamers, but if you want to take a break from work and squeeze in some leisure time, either card will be up to the task. This is also an important indicator of DX12-based rendering horsepower.
https://hothardware.com/reviews/nvidia-quadro-p4000-and-p2000-pro-workstation-gpu-review?page=6
 
Traditionally this wasn't the case. There was a time when Quadros didn't even have up-to-date DirectX drivers. Fermi was the first with 11.0, and it still took 2 1/2 years to implement it, even though Nvidia certainly had a head start well before the official release date.
Are you sure you're remembering this correctly? Back "then" I used to game on my Quadro cards (nV10, nV28, G70 and the last was a GT200) quite regularly. Maybe they missed out on the latest optimizations for newly launched titles, but standard game acceleration has always been there.
 
What they say in the article is not necessarily true... the P4000 seems to deliver equal or better performance at $850, based on their own reported benchmark data.
To approach the performance of the Radeon Vega Frontier Edition with an Nvidia product, you’d have to step up to at least a $2,000 Quadro P5000. A P6000 card, which is kind of the professional equivalent of a Titan Xp, is nearly $6,000.
Edit: One thing I hope reviewers do is check the AMD and Nvidia drivers' power management mode features... when installed, neither GeForce nor Quadro drivers default to the highest performance setting ("Prefer Maximum Performance").
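If anyone wants to verify this themselves on the Nvidia side, here's a minimal sketch (it assumes nvidia-smi is installed and on the PATH) that polls the performance state and clocks while a benchmark runs:

Code:
# Hedged example: print the current P-state and clocks under load.
# Assumes NVIDIA's nvidia-smi utility is installed and on the PATH.
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=pstate,clocks.sm,clocks.mem", "--format=csv"],
    capture_output=True, text=True,
)
print(out.stdout)

If the driver is left in a power-saving mode, the reported P-state and clocks will sit below the card's maximum even while a workload is running.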
 
What they say in the article is not necessarily true... the P4000 seems to deliver equal or better performance at $850, based on their own reported benchmark data.
I know; I've been following this thread a little and saw that before. But I mainly posted for those thirsty for some gaming performance leaks.
 
But I mainly posted for those thirsty for some gaming performance leaks.
And the article/video contains really nothing for gamers, apart from "it seems to play as well as a Titan Xp". It's a token gesture by AMD and nothing more, purely some marketing to show people that the cards exist, that they work, and that AMD wants prosumers to buy them. They know they're safe with a media outlet like PC Gamer, who will definitely follow any rules provided by AMD.
 
Are you sure you're remembering this correctly? Back "then" I used to game on my Quadro cards (nV10, nV28, G70 and the last was a GT200) quite regularly. Maybe they missed out on the latest optimizations for newly launched titles, but standard game acceleration has always been there.

I apologize, that wasn't fair. DirectX 11 actually shipped only 1 1/4 years after its reveal, and the GT200 has PS 4.1, which is feature level 10_1 (even though you can run 10_1 on DX11.0, it's a PITA - no [edit: decent, I shouldn't be so dismissive about this level ;)] compute, for example).
The point I was trying to convey is that you couldn't use a Quadro to evaluate real-time performance on the API that becomes relevant when you ship/finish your project, and your team can't program against it either, because it doesn't have the features - unless you do it in OpenGL. The ATI FirePro V5800 would have been a better choice.
 
WX 7100 being pretty much the same performance as Vega FE in the professional benchmarks tells me they're unfinished.
Or it could mean that professional workloads don't really scale well on GCN. AMD's pro line has always suffered non-linear scaling between iterations. Besides, AMD didn't hint at that being the case at all.
 
Maybe they just have no idea what they're talking about:

As for "Vega GPUs do not support Error Correcting Memory" (taken from MI25 specs):
16GB ultra high-bandwidth HBM2 ECC GPU memory.
With 2X data-rate improvements over previous generations on a 512-bit memory interface, next generation High Bandwidth Cache and controller, and ECC memory reliability; the Radeon Instinct MI25’s 16GB of HBM2 GPU memory provides a professional-level accelerator solution capable of handling the most demanding data intensive machine intelligence and deep learning training applications.

As for "do not support native double-precision":
The MI25 also provides 768 GFLOPS peak double precision (FP64) at 1/16th rate.
The GCN architecture has native 1/16-rate double-precision support; there is not a single GCN-based GPU without FP64. It seems they are confusing native double precision with fast (1/2-rate) double precision. Maybe they should at least read the specs of a product before publishing an article about it…
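As a quick arithmetic check (using only the published figures, plus the MI25's commonly quoted ~12.3 TFLOPS FP32 peak), the 1/16 rate falls straight out of the FP32 numbers:

Code:
# 1/16-rate FP64 derived from the published FP32 peaks
print(768 * 16 / 1000)  # 12.288 TFLOPS -> matches the MI25's ~12.3 TFLOPS FP32 peak
print(13.1e3 / 16)      # ~819 GFLOPS -> matches the Vega FE spec quoted further down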
 
Another thing that seems to be a bit different with the Vega launch is the talk about "peak performance" rather than sustained clocks. It looks like the air-cooled card has the same boost clock as the water-cooled one, meaning the air-cooled card will probably throttle down a bit under heavy loads, while the liquid-cooled one stays at 1600 MHz...
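As a back-of-the-envelope (using only the typical/peak engine clocks from AMD's spec sheet, quoted in a later post), the gap works out to roughly 14%:

Code:
# Gap between AMD's quoted "typical" and "peak" engine clocks
typical, peak = 1382, 1600  # MHz, from the Vega FE spec sheet
print(typical / peak)            # ~0.864, i.e. ~14% below peak
print(4096 * 2 * typical / 1e6)  # ~11.3 TFLOPS FP32 at typical clock, vs. 13.1 at peak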
 
I really hope for good performance. A 1600 MHz Fiji without bottlenecks would be awesome, and with all the tweaks involved it should be even better... But the delay and the strange communication (IMO) make me fear R600 all over again.
 
Or it could mean that professional workloads don't really scale well on GCN. AMD's pro line has always suffered non-linear scaling between iterations. Besides, AMD didn't hint at that being the case at all.
Double the memory bandwidth, nearly double the shader count, about 30% higher geometry performance from clock speed, double the ROPs, and you're telling me they couldn't get any more performance out of it than a WX 7100?
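For what it's worth, the public specs do support those ratios (assuming the usual WX 7100 figures: 2304 SPs at 1243 MHz peak, 224 GB/s, 32 ROPs):

Code:
# Spec ratios, Vega FE vs. WX 7100 (WX 7100 figures are the public Polaris 10 specs)
print(483 / 224)    # ~2.16x memory bandwidth
print(4096 / 2304)  # ~1.78x shader count
print(1600 / 1243)  # ~1.29x peak clock
print(64 / 32)      # 2.0x ROPs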
 
I apologize, that wasn't fair. DirectX 11 actually shipped only 1 1/4 years after its reveal, and the GT200 has PS 4.1, which is feature level 10_1 (even though you can run 10_1 on DX11.0, it's a PITA - no [edit: decent, I shouldn't be so dismissive about this level ;)] compute, for example).
The point I was trying to convey is that you couldn't use a Quadro to evaluate real-time performance on the API that becomes relevant when you ship/finish your project, and your team can't program against it either, because it doesn't have the features - unless you do it in OpenGL. The ATI FirePro V5800 would have been a better choice.
Yes, you couldn't project DX performance 1:1 onto the corresponding GeForce cards, since Quadros were configured a bit differently most of the time (clocks, shaders, memory controllers - my GT200 Quadro has DDR3, but 4 GiByte of graphics memory). But the German c't magazine, for example, usually tested at least the common 3DMark of the day alongside SPEC Viewperf and other "pro" applications. To my knowledge, they never really complained about missing DX API compliance or general performance (apart from what is to be expected given the configuration differences and a few missing optimizations).
 
Frontier Edition's product site has been updated and the drivers have been posted:
http://pro.radeon.com/en-us/product/radeon-vega-frontier-edition/
http://support.amd.com/en-us/download/frontier?os=Windows+10+-+64

From the first link:
Compute Units 64 nCU
Typical Engine Clock (MHz) 1382
Peak Engine Clock (MHz) 1600
Peak Half Precision Compute Performance 26.2 TFLOPS
Peak Single Precision Compute Performance 13.1 TFLOPS
Peak Double Precision Compute Performance 819 GFLOPS
Stream Processors 4096
Peak Triangle/s (BT/s) 6.4
10-bit Display Support Yes

Typical Board Power < 300W
PSU Recommendation > 850W
Required PCI Slots 2

Memory Data Rate 1.89 Gbps
Memory Speed (Effective) 945 MHz
Memory Size 16 GB
Memory Type HBC
Memory Interface 2048-bit
Memory Bandwidth 483 GB/s

H265/HEVC Decode 8 HD / 4K60
H265/HEVC Encode 1080p240 or 4K60 Mixed with Decode

OpenCL™ 2.0
OpenGL® 4.5
Vulkan® API
AMD Eyefinity Technology
AMD PowerTune Technology
ZeroCore Power Technology
5K Support (60Hz SST)
8K30 Support
AMD FreeSync™ Technology Support
TrueAudio Next technology
Bezel Compensation
Partially Resident Textures


Product Family Radeon
Product Line Radeon Vega Frontier Edition
Model Frontier Edition
Platform Desktop
Form Factor and Cooling Active, Dual Slot, Full Height
Additional Power Requirements Yes
OS Support Windows® 7, Windows® 10, Linux (64-bit)
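The headline numbers on that page are at least internally consistent; a quick sanity check (assuming nothing beyond the figures above and GCN's 2 FLOPs per SP per clock):

Code:
# Consistency check of the Vega FE product-page numbers
sp, peak_mhz = 4096, 1600
print(sp * 2 * peak_mhz / 1e6)       # 13.1 TFLOPS FP32
print(sp * 4 * peak_mhz / 1e6)       # 26.2 TFLOPS FP16 (double-rate packed math)
print(sp * 2 * peak_mhz / 16 / 1e3)  # ~819 GFLOPS FP64 (1/16 rate)
print(2048 * 1.89 / 8)               # ~483.8 GB/s memory bandwidth
print(6.4e9 / (peak_mhz * 1e6))      # 4.0 triangles per clock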
 
From the footnotes:
http://pro.radeon.com/en-us/product/radeon-vega-frontier-edition/
Radeon™ Vega Frontier Edition is AMD’s Fastest Radeon™ VR Ready Creator Graphics Card. Testing conducted by AMD Performance Labs as of May 24th, 2017 on a test system comprising of Intel E5-1650 v3 @ 3.50 GHz, 16GB DDR4 physical memory, Windows 7 Professional 64-bit, Radeon™ Vega Frontier Edition/Radeon™ Pro Duo (Polaris)/ Radeon™ Pro WX 7100, AMD graphics driver 17.20 and LITEON 512GB SSD. Benchmark Application:
SteamVR Performance Test/VRMark
• Radeon™ Vega Frontier Edition SteamVR Performance Test Score: 11
• Radeon™ Pro Duo (Polaris) SteamVR Performance Test Score: 9.1
• Radeon™ Pro WX 7100 SteamVR Performance Test Score: 6.4

VRMark – Orange Room Scores
• Radeon™ Vega Frontier Edition VRMark: 8157
• Radeon™ Pro Duo (Polaris) VRMark: 6596
• Radeon™ Pro WX 7100 VRMark: 6588
[…] Performance may vary based on the use of the latest drivers. RPVG-009
(Formatted for better readability.)
• Orange Room score 23.x% higher than the WX 7100 (a downclocked RX 480) - again, a non-telling benchmark, with Orange Room being the less intensive of the two parts. And from the post below:
• 4 triangles per clock
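The "23.x%" figure checks out against AMD's own footnote scores:

Code:
print(8157 / 6588 - 1)  # ~0.238 -> Orange Room score ~23.8% above the WX 7100's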
 