If you want to have my 7970 you can have it, no need to wait
If I actually had a desktop computer to put it into, I'd definitely take you up on that!
AMD Radeon RX 6800 XT RDNA 2 Graphics Card 3DMark Time Spy Extreme Benchmark on Par With NVIDIA's GeForce RTX 3080
A user who got early access to the AMD Radeon RX 6800 XT reveals that the graphics card scores around 8500 points in the 3DMark Time Spy Extreme benchmark (graphics score). A stock GeForce RTX 3080 scores around 8900-9000 points in the same benchmark, which puts the RX 6800 XT roughly on par with NVIDIA's $699 US offering. The Radeon RX 6800 XT will retail at $649 US, $50 less than the GeForce RTX 3080.
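To put the quoted numbers in perspective, here's a quick points-per-dollar comparison. The scores are the leaked/rumored figures from the post (taking the midpoint of the 8900-3080's range), not official results:

```python
# Price/performance from the figures quoted above (leaked, not official).
cards = {
    "RX 6800 XT": {"tse_graphics_score": 8500, "msrp_usd": 649},
    "RTX 3080":   {"tse_graphics_score": 8950, "msrp_usd": 699},  # midpoint of 8900-9000
}

for name, d in cards.items():
    ppd = d["tse_graphics_score"] / d["msrp_usd"]
    print(f"{name}: {ppd:.2f} points per dollar")
```

By this rough metric the RX 6800 XT comes out slightly ahead (~13.1 vs ~12.8 points per dollar), though the absolute score is a few percent lower.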
https://www.igorslab.de/en/amd-rade...ar-mode-turbo-mode-and-silent-mode-exclusive/

Looking at the code, four modes stand out: Quiet, Balanced, Rage, and Turbo. Each preset is implemented via the power limit, the target temperature, and the fan control with Acoustic Limit and Target (RPM). Interestingly, Turbo Mode, which AMD didn't talk about at all, is even a bit more aggressive. I'm just glad it wasn't called Uber Mode again. If you look at the 95 °C temperature limit in the MorePowerTool (MPT), you can almost get scared, because fireflies aren't exactly in season.
It seems the board partners have a free hand in determining the fan parameters, since they know their coolers and their performance (including possible reserves) best, but the +6% power limit premium was probably given as a guideline, or appears in other BIOSes purely by chance. In the end, though, these modes do nothing that you could not already have done since the release of the MorePowerTool (MPT).
A reference card with the officially stated 300 watts would then land, with Rage Mode, at the almost 320 watts I extrapolated, and the board partner card shown above in the MPT would be far, far above that. Let's wait and see; I promised you a hot autumn. At least this one is currently available and can be plugged in.
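The Rage Mode estimate follows directly from the +6% power limit premium mentioned above applied to the 300 W reference figure:

```python
# Reproduce the Rage Mode estimate: reference total board power plus the
# ~6% power-limit premium seen in the BIOS data.
reference_tbp_watts = 300     # officially stated total board power
rage_mode_premium = 0.06      # +6% power limit guideline

rage_mode_tbp = reference_tbp_watts * (1 + rage_mode_premium)
print(f"Rage Mode estimate: {rage_mode_tbp:.0f} W")  # 318 W, i.e. "almost 320 W"
```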
I fear for Cyberpunk, to be honest. CD Projekt Red has a history of using NVIDIA-centric optimizations that hurt performance on AMD GPUs, like HairWorks in The Witcher 3.
What ray tracing games are out now that use NVIDIA proprietary technology?
Brian: The vast majority of games released with ray tracing support use the industry standard Microsoft DirectX Ray Tracing (DXR) API. Three exceptions we are aware of include Quake II RTX, Wolfenstein: Youngblood, and JX3, which use NVIDIA ray tracing extensions for Vulkan.
(...)
I have seen stories that say that Cyberpunk 2077 ray tracing will only work on NVIDIA GPUs. Why is that?
Brian: Cyberpunk 2077 uses the industry standard DirectX Ray Tracing API. It will work on any DXR-compatible GPU. Nothing related to Cyberpunk 2077 ray tracing is proprietary to NVIDIA.
Thank you for clarifying this - BAR size can still be less than 4 GB even with 64-bit base address registers, but since GCN supports at least a 40-bit virtual address space, I guess it made sense to support the full range of sizes (up to 2^39 = 512 GiB).

Any power-of-two up to the max supported by the PCIe spec. AMD GPUs going at least as far back as GCN1 (GFX6) support both 64-bit BARs and BAR resizing.
Flushing the cache and moving entire pages does incur significant overhead compared to a directory-based coherence protocol over PCIe like CXL - but I guess the latter is unlikely now, considering AMD's description of Smart Access Memory.

There's really not a lot of overhead in flushing AMD's HDP cache on each command buffer submission.
I'm not sure how they could get away with not flushing, because the driver can't know if the app wrote into a mapped portion of the BAR prior to the submission.
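That "can't know, so flush anyway" reasoning can be sketched as a toy model (the class and method names here are purely illustrative, not real driver code): the CPU may have written through the BAR mapping at any point before a submission, and the kernel driver has no visibility into those stores, so it must conservatively flush on every submit.

```python
# Toy model of conservative flushing: the driver cannot observe CPU writes
# into BAR-mapped memory, so it flushes the HDP read cache unconditionally
# on every command-buffer submission. Names are hypothetical.
class ToyDriver:
    def __init__(self):
        self.flush_count = 0

    def flush_hdp_cache(self):
        # Cheap for a small write-through cache, hence "not a lot of overhead".
        self.flush_count += 1

    def submit(self, command_buffer):
        # No way to know whether the app wrote through the BAR mapping
        # before this point, so assume it did.
        self.flush_hdp_cache()
        # ... hand command_buffer to the GPU ...

drv = ToyDriver()
for cb in ["cb0", "cb1", "cb2"]:
    drv.submit(cb)
print(drv.flush_count)  # one flush per submission -> 3
```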
What's more, ray queries aren't available with the Nvidia extension either, and they're necessary functionality to support inline raytracing, so why would AMD support an extension that doesn't even expose the fast paths in their hardware/drivers just for a couple of titles?

How is losing performance by double-digit percentages supposed to be politically acceptable for AMD?
Is it AMD’s position that inline is always faster on their hardware? According to Microsoft there’s a trade off and inline isn’t always the best option (e.g. due to higher register pressure or suboptimal scheduling).
I don't think Intel would deliberately support an extension that does more harm than good, and apparently they have no problem with implementing Nvidia's extensions.

Here's a list of current changes for anyone interested, so the cross-vendor ray tracing extension obviously isn't a superset of the proprietary Nvidia extension. What's more, ray queries aren't available with the Nvidia extension either, and they're necessary functionality to support inline raytracing, so why would AMD support an extension that doesn't even expose the fast paths in their hardware/drivers just for a couple of titles?
Intel plans to keep using Khronos' open-source ray tracing extensions as much as possible. However, they might look at implementing Nvidia's in-house extensions if they find more and more games using Nvidia's extensions rather than Khronos' open-source solution.
Well then it's up to MS to release Ultimate Direct RayTrace DX12.x that will support inline RT. AMD follows the full DX Ultimate path that MS has laid out. It's Nvidia that is doing its own thing.

No, but based on Dave Oldcorn's presentation, I fail to see how the shader binding table approach is ever better in their case. They explicitly state that inline ray tracing is the "ideal API" (I assume on their end) to implement these "common raytracing techniques" ...
Well then it's up to MS to release Ultimate Direct RayTrace DX12.x that will support inline RT. AMD follows the full DX Ultimate path that MS has laid out. It's Nvidia that is doing its own thing.
Would depend on the new features Nvidia added to Ampere, what Turing supports, what MS defines in DXR 1.1, and whether Turing has the feature set. It most likely could, just the feature won't be as advanced as the Ampere version, for obvious reasons.
So console games start as "DXR 1.1"? Won't run on Turing?...
If it supports DXR 1.0, then I don't see why it wouldn't support DXR 1.1.