AMD Radeon VII Announcement and Discussion

Isn't the embargo lifted tomorrow? Can't we just wait until then?
 
I assume it still has the same 'broken binning rasterizer' issues as Vega? Is there any information on this?

Look at this: https://www.anandtech.com/show/13923/the-amd-radeon-vii-review/15
I like that! :love:
But those compute benches always test just some minor function - almost like rolling dice. There really is a need for a proper benchmark here. You can compare GCN vs. GCN but not AMD vs. NV, similar to TFLOP numbers.
 
For now, I'd say there are diverging reports.
Maybe it's capped, maybe it's uncapped, maybe it's uncapped but AMD doesn't want to publicize it much. Maybe AMD is testing the waters by sending different info to different people to gauge interest in FP64, to decide whether to enable it in the firmware/drivers or not.
Who knows...

I'm a wizard.

AMD said:
Given the broader market Radeon VII is targeting, we were considering different levels of FP64 performance. We previously communicated that Radeon VII provides 0.88 TFLOPS (DP=1/16 SP). However based on customer interest and feedback we wanted to let you know that we have decided to increase double precision compute performance to 3.52 TFLOPS (DP=1/4SP).
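
A quick sanity check on those two figures (a minimal sketch; the ~14.1 TFLOPS single-precision peak is back-calculated from AMD's statement, not quoted by them):

```python
# Back-calculate the single-precision peak implied by AMD's two FP64 figures.
fp64_at_1_16 = 0.88  # TFLOPS, originally communicated DP rate (1/16 SP)
fp64_at_1_4 = 3.52   # TFLOPS, revised DP rate (1/4 SP)

sp_from_old = fp64_at_1_16 * 16  # 14.08 TFLOPS
sp_from_new = fp64_at_1_4 * 4    # 14.08 TFLOPS

# Both figures imply the same ~14.1 TFLOPS SP peak, so the change is
# purely a rate-cap change, not a different chip configuration.
print(sp_from_old, sp_from_new)
```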


(Looks like AMD couldn't make up its mind and finalized the BIOS super late...)
The BIOS may be several months old. Changing a single value in the configuration doesn't mean the microcode wasn't final.
Regardless, this might be a hint at the possibility of a simple BIOS hack enabling 1:2 FP64 throughput.
 
Didn't look so good for the Radeon VII in a Swedish review at least. The mid-range Navi might perform worse, but perhaps with a better price/performance-per-watt ratio.
 
I assume it still has the same 'broken binning rasterizer' issues as Vega? Is there any information on this?

Look at this: https://www.anandtech.com/show/13923/the-amd-radeon-vii-review/15
I like that! :love:
But those compute benches always test just some minor function - almost like rolling dice. There really is a need for a proper benchmark here. You can compare GCN vs. GCN but not AMD vs. NV, similar to TFLOP numbers.
Compute benches are utterly useless and ridiculous (especially the ones I'm seeing in most reviews). The type of workload being run has a gigantic impact on the processing time depending on the GPU architecture. I can run the same one on an AMD GPU & an NV GPU and have the AMD GPU get totally obliterated by the NV, but have a different workload (usually a more complex one) run 2x as fast on GCN GPUs... Those reviews are as good as useless.
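
To put some numbers on that point, here's a toy roofline-style sketch (all the peak figures and kernel sizes are hypothetical, chosen only to show the effect): the same two GPUs swap places depending on how much arithmetic the kernel does per byte of memory traffic.

```python
# Toy roofline model: kernel time is bounded by whichever is slower,
# raw compute or memory traffic. All peak numbers here are hypothetical,
# picked only to illustrate the flip - they aren't measured specs.

def kernel_time(flops, bytes_moved, peak_tflops, peak_gbps):
    compute_s = flops / (peak_tflops * 1e12)     # time if compute-bound
    memory_s = bytes_moved / (peak_gbps * 1e9)   # time if bandwidth-bound
    return max(compute_s, memory_s)

gpu_a = dict(peak_tflops=14.0, peak_gbps=450.0)   # more ALU, less bandwidth
gpu_b = dict(peak_tflops=10.0, peak_gbps=1000.0)  # less ALU, more bandwidth

n = 100_000_000
workloads = {
    "streaming (prefix-sum-like)": dict(flops=2 * n, bytes_moved=8 * n),
    "arithmetic-heavy": dict(flops=2000 * n, bytes_moved=8 * n),
}

for name, w in workloads.items():
    t_a = kernel_time(w["flops"], w["bytes_moved"], **gpu_a)
    t_b = kernel_time(w["flops"], w["bytes_moved"], **gpu_b)
    print(f"{name}: GPU A {t_a*1e3:.1f} ms, GPU B {t_b*1e3:.1f} ms")

# The streaming kernel favours GPU B (~0.8 ms vs ~1.8 ms); the heavy
# kernel favours GPU A (~14.3 ms vs ~20 ms). Same GPUs, opposite winner.
```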

Also of note, it fares quite a bit better under Linux..
https://www.phoronix.com/scan.php?page=article&item=radeon-vii-linux&num=11
"When looking at the geometric mean of all the OpenCL benchmarks carried out, the Radeon VII was 12% faster than the GeForce RTX 2080 and a 52% improvement in compute performance compared to the Radeon RX Vega 64."
 
Those reviews are as good as useless.
Yes, exactly. I can only trust my own numbers, but my GPUs are finally out of date. Curious about Turing - integer and floating point ops in parallel could be some gain.

These benchmarks are surely better: https://www.tomshardware.com/reviews/amd-radeon-vii-vega-20-7nm,5977-4.html
These are some real workloads, not just a prefix sum or some simple diffusion.
But I doubt those guys spend the necessary work on optimization as is common in games, and if so, unlikely for both vendors or even multiple gens. So useless too :(
 
I'd like to see the average frequencies... If the Vega 64 is averaging 1350-1450 and the Radeon VII 1800, some of the gain can come from that alone. I'm wondering whether the core frequency or the memory bandwidth matters more in those gains.
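
Rough arithmetic on the clocks alone, assuming the averages eyeballed above:

```python
# If performance scaled purely with core clock, this is the gain the
# clock bump alone would buy. Averages are rough guesses from the post.
vega64_mhz = 1400.0      # midpoint of the 1350-1450 range above
radeon_vii_mhz = 1800.0

clock_only_gain = radeon_vii_mhz / vega64_mhz - 1
print(f"clock alone: +{clock_only_gain:.0%}")  # ~ +29%

# Anything measured beyond ~29% has to come from elsewhere, mostly the
# roughly doubled memory bandwidth (~484 GB/s HBM2 -> 1 TB/s).
```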
 
Yes, exactly. I can only trust my own numbers, but my GPUs are finally out of date. Curious about Turing - integer and floating point ops in parallel could be some gain.

These benchmarks are surely better: https://www.tomshardware.com/reviews/amd-radeon-vii-vega-20-7nm,5977-4.html
These are some real workloads, not just a prefix sum or some simple diffusion.
But I doubt those guys spend the necessary work on optimization as is common in games, and if so, unlikely for both vendors or even multiple gens. So useless too :(
Nope, their LuxMark results for the Radeon VII are totally bogus (compared to the ones in AMD's review guide, AnandTech, and even my 4-year-old Fury X). For example, the Neumann workload should be around 30K on the Radeon VII, not 19K; Lobby should be around 7.7K, not 6.2K, etc... Never trust benchmarks unless you run them yourself.
 
Performance is in line with what AMD showed at CES.
Drivers aren't great yet, and Wattman just isn't working with the card.

Though it seems the Radeon VII is even more over-volted than Vega 64:

[image: core voltage comparison chart]


These results are ridiculous. It's like AMD is shooting itself in the foot with unnecessarily high core voltages.
Perhaps when auto-undervolting works in the drivers, most cards will actually get efficiency numbers similar to Turing's.



AnandTech has done iso-frequency benchmarks
I wasn't expecting to see large differences between the two GPUs at iso frequencies, but I'm even more surprised by that huge >30% difference in GTA V. That 5-year-old game seems to swallow bandwidth like a whale.
 
I'd like to see the average frequencies... If the Vega 64 is averaging 1350-1450 and the Radeon VII 1800, some of the gain can come from that alone. I'm wondering whether the core frequency or the memory bandwidth matters more in those gains.
Unfortunately unless someone has pulled out a late surprise, we're all going to be waiting a bit. AMD's SMU changes mean that the usual logging tools don't work.
 
Yeah, my Vega FE chugs along at 950mV compared to the standard 1200mV.

Will be interesting seeing more tests with undervolting.
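
Back-of-the-envelope on what such an undervolt is worth, assuming the classic dynamic-power relation P ∝ f·V² and ignoring leakage:

```python
# Dynamic CMOS power scales roughly with f * V^2. At the same clock,
# dropping core voltage from 1200 mV to 950 mV cuts dynamic power by:
v_stock = 1.200      # volts, stock Vega FE core voltage from the post
v_undervolt = 0.950  # volts, the undervolt mentioned above

savings = 1 - (v_undervolt / v_stock) ** 2
print(f"~{savings:.0%} less dynamic power at the same frequency")  # ~37%
```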
 
The thing is, AMD never really pushes RTRT because GCN cannot do RTRT. It is interesting to note that when AMD GPUs eventually support RTRT, it will somehow become an ultra-important talking point.

Exactly^
They are leveraging their strengths! But somehow AMD never seems to use their marketing to sell gimmicks. And Nvidia's (proprietary) DLSS and RT cores will eventually be displaced by industry standards: DX12's async DXR & DirectML in games.



As for which card is better at $699..? Which game do you play, and at what resolution? Nvidia & AMD trade blows back & forth in every review, in every game. My take? They are equal (unless you need that 1/4-rate FP64).

I would like to make a note: Nvidia is heavily promoting Battlefield 5 (I got a free copy with my RTX 2080); they use BF5 in all their marketing, and Nvidia's logo is even on BF5's own marketing material. (And Nvidia's BF5 drivers are polished.)

And look at all the BF5 reviews released today...
 