AMD Radeon VII Announcement and Discussion

And that Nvidia's (proprietary) DLSS and RT cores will be replaced by Industry standards with DX12's async DXR & DirectML in games.

By the time that happens, RT will indeed have advanced far enough that RT cores won't be like they are now. I suspect that will take a while though; meanwhile Nvidia will have had 3-4 years with primitive RT hardware at the least.
 
Report puts it at very low availability

So, yeah, just some binned MI50/60 cards that aren't actually being made as consumer-focused products at all. The whole thing is just a way for AMD to scoop up some extra cash from cards not suited for higher price points.
Stock is definitely super low... but that Gibbo dude @Overclockers UK is a known liar (just look at his previous claims during the crypto craze: they lie about stock and performance before launch to jack up the prices of the GPUs they're selling).
 
By the time that happens, RT will indeed have advanced far enough that RT cores won't be like they are now. I suspect that will take a while though; meanwhile Nvidia will have had 3-4 years with primitive RT hardware at the least.

By the time what happens..?

It already happened. Nvidia themselves are showing the Metro ray-tracing demo using DXR (DirectX Raytracing) instead of their own proprietary RTX.
DirectX 12 is here to stay.
DirectX Raytracing is here to stay.
DirectML (machine learning) is here to stay.

These^ are the industry standards.
 
RTX is NVIDIA's execution backend for DXR. They are both stages of the overall execution model. It's like saying DirectX is going to replace CUDA Cores.
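
To make that layering concrete, here is a minimal C++ sketch (my own illustration, not anything from this thread or any particular engine): the application only asks D3D12 whether DXR is available, and whichever vendor backend answers that query, Nvidia's RTX stack today or an AMD driver later, stays hidden behind the API.

```cpp
// Minimal sketch (illustrative only): an application queries ray tracing
// support through plain D3D12/DXR. Which vendor backend services the request
// (e.g. Nvidia's RTX stack) is hidden behind the driver.
#include <windows.h>
#include <d3d12.h>

bool SupportsDxr(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;

    // Tier 1.0 or better means the runtime/driver expose DXR, regardless of
    // whether the acceleration happens on RT cores or somewhere else.
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```

If that ever returns true on AMD hardware, nothing in the application needs to change; only the driver underneath does.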

I was going to reply, but then I thought about how dumb his comment sounded; no counter-post was needed for it. You explained it nicely though :)
 
NO, CUDA Cores are how Nvidia processes DirectX. (You are making my point.)
I think some confusion occurred when you said "And that Nvidia's (proprietary) DLSS and RT cores will be replaced by Industry standards with DX12's async DXR & DirectML in games." .

It's not obvious if you're referring to industry hardware standards (like programmable shaders replacing fixed function shaders), or industry software standards (like Glide being supplanted by DX8, and OGL), or both together.
 
I think some confusion occurred when you said "And that Nvidia's (proprietary) DLSS and RT cores will be replaced by Industry standards with DX12's async DXR & DirectML in games." .

It's not obvious if you're referring to industry hardware standards (like programmable shaders replacing fixed function shaders), or industry software standards (like Glide being supplanted by DX8, and OGL), or both together.


Remember, the VII is nothing more than an MI50.


And as much as you would like to think that RT cores are programmable, it sounds like DICE sure did an awful lot of "tuning" of Battlefield for Nvidia's RTX chip to work so well. (It just works?)


Tom's : https://www.tomshardware.com/news/battlefield-v-ray-tracing,37732.html

QUOTE
"DICE is also putting in a lot of work to make sure PC platforms don’t bottleneck Nvidia’s GeForce RTXes. During our interview, Dave James of PCGamesN asked Holmquist what DICE was doing to minimize the impact of ray tracing on Battlefield V.

"So, what we have done with our DXR implementation is we go very wide with a lot of cores to offload that work," Holmquist replied. "So we’re likely going to require a higher minimum spec and recommended spec for using RT, and that was the idea from the start. It won’t affect the gameplay performance, but we might need to increase the hardware requirements a little bit. And going wide is the best way for the consumer in this regard because you can have a four-core or six-core machine. It's a little bit easier these days for the consumer to go wide with more threads than have higher clocks."

It sounds like the engine is optimized for six-core CPUs with simultaneous multi-threading but may work well on 4C/8T processors as well.

GeForce RTX owners should get the option to turn ray tracing off. However, there is no DXR (DirectX Ray Tracing) fallback path for emulating the technology in software on non-RTX graphics cards. And when AMD comes up with its own DXR-capable GPU, DICE will need to go back and re-tune Battlefield V to support it.

Holmquist clarifies, “…we only talk with DXR. Because we have been running only Nvidia hardware, we know that we have optimized for that hardware. We’re also using certain features in the compiler with intrinsics, so there is a dependency. That can be resolved as we get hardware from another potential manufacturer. But as we tune for a specific piece of hardware, dependencies do start to go in, and we’d need another piece of hardware in order to re-tune.”



We also pressed the DICE team on performance differences between its DirectX 11 and DirectX 12 code paths, the former of which is considered faster on Nvidia hardware. Holmquist continued, "We did optimize some paths of DX12, but since most of this work is in the DXR API, what we did was that we made sure none of that was bottlenecking our throughput. So, playing DX12 performance will be similar to what we had in Battlefield 1."
ENDQUOTE

It sounds like it was more DICE (the game developer) that tuned BF5 to work with Nvidia's RT cores; the RT cores just do what the RT cores do. Aren't they compute, not programmable?
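
For context on where the programmable/fixed-function line sits, here is a rough, hypothetical host-side C++ sketch of a plain DXR dispatch (not DICE's code; the pipeline state object and shader-table address are assumed to exist already): the shaders you bind are the programmable part, and whatever answers TraceRay() underneath, RT cores on Turing or something else on other hardware, is the part the API never names.

```cpp
// Rough, hypothetical host-side sketch of a plain DXR dispatch (not DICE's
// code). The pipeline state object and shader-table address are assumed to
// have been created elsewhere; miss and hit-group tables are omitted for
// brevity.
#include <windows.h>
#include <d3d12.h>

void DispatchRaysForFrame(ID3D12GraphicsCommandList4* cmdList,
                          ID3D12StateObject* rtPipeline,
                          D3D12_GPU_VIRTUAL_ADDRESS raygenRecord,
                          UINT64 raygenRecordSize,
                          UINT width, UINT height)
{
    // The *programmable* part: the raygen/miss/hit shaders the game supplies,
    // bound through the raytracing pipeline state object.
    cmdList->SetPipelineState1(rtPipeline);

    D3D12_DISPATCH_RAYS_DESC desc = {};
    desc.RayGenerationShaderRecord.StartAddress = raygenRecord;
    desc.RayGenerationShaderRecord.SizeInBytes  = raygenRecordSize;
    desc.Width  = width;
    desc.Height = height;
    desc.Depth  = 1;

    // The fixed-function part: the BVH traversal triggered by TraceRay()
    // inside those shaders. The API never names the hardware; on Turing it
    // lands on RT cores, elsewhere it is the driver's problem.
    cmdList->DispatchRays(&desc);
}
```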
 
That would be the RTX 2080 since it's faster than the Radeon VII in most tests, and it has RT and Tensor cores while the Radeon VII doesn't.
Not only that, but ComputerBase summarized the state of this GPU in their review "too loud, too slow and too expensive, but with 16 GB HBM2"
https://www.computerbase.de/2019-02/amd-radeon-vii-test/3/#diagramm-performancerating-2560-1440

PCGH also complained about its almost RTX 2070-like performance in many 1440p results
http://www.pcgameshardware.de/Radeon-VII-Grafikkarte-268194/Tests/Benchmark-Review-1274185/2/

And so did TPU: at 1080p the card is no faster than the 2070, and at 1440p it's only slightly faster
https://www.techpowerup.com/reviews/AMD/Radeon_VII/28.html

TechReport also found that MSAA performance still isn't fixed on Vega, even with the mammoth 1 TB/s of bandwidth.
After bringing up this issue with AMD, the company advised us that using 8X MSAA in Forza Horizon 4 will reduce performance on its products more than it will on GeForces. That suggests the 64-ROP complement that's graced high-end Radeons since the Hawaii days is perhaps no longer enough to handle MSAA at 4K in this title.

My understanding is that MSAA delivers a one-two punch because it both relies on fixed-function hardware and can be quite memory-bandwidth intensive, even with modern color compression techniques. It appears that the Radeon VII's terabyte per second of theoretical bandwidth can't overcome bottlenecks elsewhere in the GPU.

https://techreport.com/review/34453/amd-radeon-vii-graphics-card-reviewed/4
 
Report puts it at very low availability

So, yeah, just some binned MI50/60 cards that aren't actually being made as consumer-focused products at all. The whole thing is just a way for AMD to scoop up some extra cash from cards not suited for higher price points.
The chips may have been binned that way, but it's using a different BIOS, PCB, etc.

HSA, TrueAudio, primitive shaders, etc...

That would be the RTX 2080 since it's faster than the Radeon VII in most tests, and it has RT and Tensor cores while the Radeon VII doesn't.
HSA a gimmick, really? I could see that with TrueAudio and possibly primitive shaders, but HSA?

Remember, the VII is nothing more than an MI50.

It sounds like it was more DICE (the game developer) that tuned BF5 to work with Nvidia's RT cores; the RT cores just do what the RT cores do. Aren't they compute, not programmable?
I wouldn't call MI50 an accurate description of the VII just because the chip is the same.
RT cores are fixed-function hardware; "compute" would rather refer to the programmable shaders, not FF.
 
Performance is in line with what AMD showed at CES.
Drivers aren't great yet, and Wattman just isn't working with the card.

Though it seems the Radeon VII is a lot more over-volted than (even) Vega 64:

[Chart: Radeon VII vs. Vega 64 core voltage comparison]


These results are ridiculous. It's like AMD is shooting itself in the foot by using unnecessarily high core voltages.
Perhaps when auto-undervolting works in the drivers, most cards will actually get efficiency numbers similar to Turing.




I wasn't expecting to see large differences between the two GPUs at ISO frequencies, but I'm even more surprised by that huge >30% difference in GTA V. That five-year-old game seems to be swallowing bandwidth like a whale.

Interesting. It may not be all that stable at the voltage tested here, but beyond this, since 7nm is still pretty new, it's likely that there's a lot of variability, hence the safety margin chosen by AMD. It will be interesting to see where things go from here as the process matures, and perhaps with better voltage management.
 
RTX is NVIDIA's execution backend for DXR. They are both stages of the overall execution model. It's like saying DirectX is going to replace CUDA Cores.
For nvidia, RTX is a broader term than an execution backend for DXR.
The pedantic semantics here are the distinction between "RTX = hardware blocks and software stacks that define Turing's exclusive features" and "RTX effects that are being implemented in games".
Nvidia defines DLSS in games as an "RTX technology", and that has nothing to do with DXR.
There's also Scott Herkelman hinting at Battlefield V using a not-so-purely-DXR implementation, which puts into question whether future non-Nvidia GPUs will ever be able to run BFV's ray tracing at all. And even in that Tom's Hardware interview, DICE admits they're dependent on Nvidia-specific intrinsics at the moment.
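
As an illustration of why that kind of dependency forces a re-tune, here is a hypothetical C++ sketch of the usual gating pattern (my own example, not anything DICE has published): the portable DXR path runs everywhere, and a vendor-specific path (intrinsics, driver extensions) is only switched on when the adapter's vendor ID matches the hardware it was tuned on.

```cpp
// Hypothetical sketch of the usual gating pattern (my own example, not
// anything DICE has published): the portable DXR path runs everywhere, while
// a vendor-specific path (intrinsics, driver extensions) is only enabled on
// the hardware it was tuned for.
#include <windows.h>
#include <dxgi.h>

constexpr UINT kVendorIdNvidia = 0x10DE;  // PCI vendor ID for Nvidia

bool UseNvidiaTunedRtPath(IDXGIAdapter1* adapter)
{
    DXGI_ADAPTER_DESC1 desc = {};
    if (FAILED(adapter->GetDesc1(&desc)))
        return false;

    // Take the tuned, intrinsics-based path only on matching hardware; every
    // other GPU falls back to the vanilla DXR code path, which is exactly the
    // "re-tune for new hardware" dependency described above.
    return desc.VendorId == kVendorIdNvidia;
}
```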

Interesting. It may not be all that stable at the voltage tested here, but beyond this, since 7nm is still pretty new, it's likely that there's a lot of variability, hence the safety margin chosen by AMD. It will be interesting to see where things go from here as the process matures, and perhaps with better voltage management.
It seems one of the few really new things in Vega 20 is the new system management unit, and its support in the current Windows drivers is, shall we say, experimental (read: completely broken).
The Radeon VII seems to consume some 30 W less than Vega 64, and those who managed to undervolt the new card shaved off another 40 W without a performance penalty, putting it on the same level as an RTX 2080.
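
Quick sanity check on that 40 W number, using rough assumptions rather than measurements: dynamic power scales roughly with f · V², so a ~10% undervolt at unchanged clocks cuts dynamic power by about 1 − 0.9² ≈ 19%. If the GPU core itself is pulling somewhere around 200-220 W (an assumed figure; the ~300 W board number also covers HBM2 and VRM losses), 19% of that lands right around the reported 40 W, which is what you'd expect if the stock voltage really is set that conservatively.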



I wonder why AMD launched a card with such rushed drivers on Windows. It's not like Nvidia was moving huge volumes of RTX 2080 cards, and the card came as a surprise to many, so there wasn't much fan hype to answer to.
Surely a one- or two-week delay would have been better? Perhaps AMD is trying to distance the Radeon VII from the mid-2019 Navi GPUs as much as possible.
 
I wonder why AMD launched a card with such rushed drivers on Windows. It's not like Nvidia was moving huge volumes of RTX 2080 cards, and the card came as a surprise to many, so there wasn't much fan hype to answer to.
Surely a one- or two-week delay would have been better? Perhaps AMD is trying to distance the Radeon VII from the mid-2019 Navi GPUs as much as possible.

Alternatively, they are simply committing the right amount of resources to a low-volume, low-margin Instinct recycling program thrown together by since-replaced leadership.
 