Ignore Ampere.
Look at performance relative to the 5700 XT: from as low as 30% faster to about 130%, averaging somewhere around 70% faster (50%, 67%, and 86% faster at 1080p, 1440p, and 4K respectively in Hardware Unboxed's results).
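Just to make the averaging explicit (trivial, but it shows where the ~70% figure comes from; the numbers are the quoted Hardware Unboxed per-resolution averages, nothing I've re-measured):

```cpp
// Average of the quoted per-resolution uplifts over the 5700 XT.
#include <cstdio>

int main() {
    const double pct_faster[] = {50.0, 67.0, 86.0};  // 1080p, 1440p, 4K
    double sum = 0.0;
    for (double p : pct_faster) sum += p;
    std::printf("mean uplift: %.1f%% faster\n", sum / 3.0);  // ~67.7%
    return 0;
}
```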
Just to clarify, I didn't mean there are no scaling differences, just that it's another factor compounding the impression.
I notice some quite impressive multi-monitor idle power draw (with 7 MHz memory, which only gives about a quarter of the required bandwidth - quick math below the link).
https://www.techpowerup.com/review/amd-radeon-rx-6800-xt/31.html
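A rough back-of-envelope on that "quarter of the required bandwidth" remark. The assumptions are mine, not from the review: the reported 7 MHz is the real memory clock, the full 256-bit bus stays active, and transfers are still double data rate in that state; scanout is figured for a single 4K60 display at 32 bpp.

```cpp
// Idle memory bandwidth at 7 MHz vs. what 4K60 scanout needs.
#include <cstdio>

int main() {
    const double mem_clock_hz = 7e6;            // 7 MHz as reported
    const double bus_bytes    = 256.0 / 8.0;    // 256-bit bus
    const double idle_bw      = mem_clock_hz * 2.0 * bus_bytes;  // DDR assumption

    const double scanout_bw   = 3840.0 * 2160.0 * 4.0 * 60.0;    // 4K60, 32 bpp

    std::printf("idle memory bandwidth: %.2f GB/s\n", idle_bw / 1e9);     // ~0.45
    std::printf("4K60 scanout needs   : %.2f GB/s\n", scanout_bw / 1e9);  // ~1.99
    std::printf("ratio                : %.2f\n", idle_bw / scanout_bw);   // ~0.23
    return 0;
}
```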
I guess that's a strong hint that they are simply presenting the screen from Infinity Cache. Maybe they're basically running the GPU with the Infinity Cache as main memory - I wonder how much they can extend this to other "2D" usage scenarios. Obviously they are not doing it for video playback right now, where video memory runs at full speed (see the same page above).
Also it seems that the (at least) 64 MB requirement for 2×4K is too much for this mode (rough numbers below the link):
https://www.computerbase.de/2020-11..._leistungsaufnahme_desktop_youtube_und_spiele (again probably falling back to full speed instead of some 2D memory clock)
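For the 2×4K point, the raw framebuffer footprint works out like this (assuming plain uncompressed 32 bpp scanout surfaces; whether the driver would actually try to keep all of that resident in the 128 MB Infinity Cache is pure speculation on my part):

```cpp
// Framebuffer footprint behind the "(at least) 64 MB for 2x4K" figure.
#include <cstdio>

int main() {
    const double MiB   = 1024.0 * 1024.0;
    const double fb_4k = 3840.0 * 2160.0 * 4.0;   // one 4K surface, 32 bpp

    std::printf("1x 4K, single buffered: %.1f MiB\n", 1 * fb_4k / MiB);  // ~31.6
    std::printf("2x 4K, single buffered: %.1f MiB\n", 2 * fb_4k / MiB);  // ~63.3
    std::printf("2x 4K, double buffered: %.1f MiB\n", 4 * fb_4k / MiB);  // ~126.6
    // Double buffering two 4K displays roughly fills the whole 128 MB cache
    // by itself, before the GPU has cached anything else.
    return 0;
}
```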
When things settle a bit I'd like to see some tests that try to "break" Infinity Cache, for lack of a better term. I feel it's a novel alternative to raw bandwidth, but I'm wondering what other limitations (in terms of scaling) there are beyond just resolution.
For instance, from this information it seems like behavior changes between one and two monitors, at least in terms of power consumption, presumably because of where the GPU needs to fetch data from. Would that hypothetically also have a performance impact in multitasking scenarios, especially with dual monitors? Personally I'm a dual-monitor user and a multitasker, even with GPU usage while gaming these days, so it's something that would be relevant to me. But beyond my own case I think that scenario is pretty common now, such as having video (YouTube, a stream) open while gaming.
The other thing is: are we confident that future memory requirements won't also have an impact? If datasets grow, the hit rate would correspondingly drop. At least I'm assuming (unless there's clear information otherwise?) that the amount of data in the IC isn't solely a function of display resolution.
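To put a number on why hit rate matters so much: in a toy two-level model, the GDDR6 only has to absorb the misses, so sustainable bandwidth scales roughly like DRAM bandwidth divided by the miss rate, capped by whatever the cache itself can deliver. The 512 GB/s figure is the 6800 XT's actual GDDR6 bandwidth (256-bit @ 16 Gbps); the cache ceiling here is just an illustrative placeholder, not an AMD spec.

```cpp
// Toy model: effective bandwidth vs. Infinity Cache hit rate.
#include <cstdio>

int main() {
    const double dram_bw  = 512.0;    // GB/s, 256-bit GDDR6 @ 16 Gbps
    const double cache_bw = 1600.0;   // GB/s, assumed ceiling for illustration

    const double hit_rates[] = {0.8, 0.65, 0.5, 0.3};
    for (double hit : hit_rates) {
        double effective = dram_bw / (1.0 - hit);   // DRAM serves only misses
        if (effective > cache_bw) effective = cache_bw;
        std::printf("hit rate %.0f%% -> ~%.0f GB/s effective\n",
                    hit * 100.0, effective);
    }
    return 0;
}
```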
Is NVIDIA still bringing more FEs? The FE is the only 3070 ever sold at MSRP, AIB cards are priced way higher, and the FE was so limited that only a few select NVIDIA storefronts had any to sell (not betting my head on this, but I think it was mentioned somewhere that only two European NVIDIA stores had any).
AMD has confirmed it will keep producing reference cards until sometime in Q1/21 and is expected to restock its own store too (which is guaranteed to sell at MSRP).
As far as I know Nvidia is also restocking FEs. But I think the reality is that for both AMD and Nvidia the reference cards carry much lower margins than supplying AIBs for custom designs (at least in the retail DIY channel), and exist more to hit certain targets. The heatsinks and designs are essentially more expensive to build despite the cards being cheaper than AIB customs. Take the 6800/XT for example: both use vapor chambers, and I wouldn't be surprised if they're the only ones to do so despite being the cheapest against AIBs price-wise. Nvidia also seems to want a specific kind of design-language differentiation from the rest of the market, which drives up costs (that heavily accented "gamer" design from AIBs is actually simpler and cheaper).
I wouldn't describe it as a mistake. I'm sure they would have loved to show something if it were possible. The reality is that MLSS is extremely hard to do. God knows how long Nvidia, with all their ML expertise and tensor cores to rely on, worked on DLSS before it launched. And even then it took another year before it was usable.
There's an advantage to being second to market with something as subjective as this: AMD's solution doesn't actually need to match Nvidia's, it just needs to be perceived as an alternative. For instance, some people already consider CAS a viable alternative to DLSS. As we can see in the DLSS thread, opinions vary on just how much better DLSS really is.
If anything, I wonder whether a widely adopted implementation that trades some IQ for performance gains would actually be preferable.
@Frenetic Pony Most of the games released with DXR so far don't support DXR 1.1, which introduces some significant changes. I expect shifts in how RT is used and how it performs going forward, especially as console devs dig into it and it's no longer relegated to niche hardware. I'm sure the performance hit will remain large, but adoption should be pretty widespread.
I'm actually curious to see if there are different performance deltas between AMD and Nvidia as more games adopt DXR 1.1, which AMD seemed to be heavily involved in.
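On the API side, the tier split is something an app can query directly. A minimal sketch (Windows/D3D12, recent SDK, link against d3d12.lib, untested here) of checking whether the DXR 1.1 additions - inline raytracing via RayQuery from any shader stage, indirect DispatchRays through ExecuteIndirect, and growing state objects with AddToStateObject - are exposed:

```cpp
// Query the raytracing tier: TIER_1_1 is what gates the DXR 1.1 features.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("no D3D12 device\n");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                &opts5, sizeof(opts5));

    if (opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_1)
        std::printf("DXR 1.1 supported\n");
    else if (opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0)
        std::printf("DXR 1.0 only\n");
    else
        std::printf("no DXR support\n");
    return 0;
}
```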
I'm just going to risk saying it, but once things settle there might need to be an examination to confirm whether there are any IQ differences between implementations, as opposed to just performance differences.