AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

That leaves 13.5GB for games, of which the majority is to be used by the GPU. Therefore, 12GB should be sufficient for getting VRAM parity with most multiplatform titles in the long run. 8GB should not.
So what you're saying right now is that a 3060 will run "most multiplatform titles in the long run" better than either a 3070 or a 3080?
 
The ray tracing in MM on PS5 doesn't scream 2080/Ti in any way imaginable.


....

Sorry, I wasn't clear in that sentence. What I was thinking was that MM on PS5 already has reflections, and we have roughly a 2080(Ti) with the 6800XT. So Navi 21 has enough performance to do more than shadows, but then there are lower-end cards and consoles that developers have to take into account.

Hope this clears it up a bit more.
 
The reflections are running at 1/2 the FPS and 1/4 the resolution, so not a stellar example, that one.

See my clarification. Typing on the phone when a call interrupts your thought process resulted in a confusing statement.

With the 6800(XT), which was referred to as AMD's RT card, we should be able to see results closer to full FPS at half resolution in a title like that.
 
With the 6800(XT), which was referred to as AMD's RT card, we should be able to see results closer to full FPS at half resolution in a title like that.
Yeah, I think we can expect to see some good use of RT from talented devs like Insomniac, R*, Nixxes, etc. They do have to put additional effort into PC ports though, which has been hit or miss. I don't expect to see anything good from Ubisoft.
 
Sorry, I wasn't clear in that sentence. What I was thinking was that MM on PS5 already has reflections, and we have roughly a 2080(Ti) with the 6800XT. So Navi 21 has enough performance to do more than shadows, but then there are lower-end cards and consoles that developers have to take into account.

Hope this clears it up a bit more.

Yes, a 6800XT is double the PS5 GPU; I'd expect better RT performance from that.
 
MS doesn't decide anything about when some silicon is ready to launch in GPUs; GPU vendors do that, and MS then standardizes their proposals - if that's even possible. The inclusion of DXR in DX was the result of a GPU vendor coming to MS and saying that they will have RT h/w in their next-gen GPUs which they would like to be accessible via DX.

Yes, and MS only includes that in DX if multiple hardware vendors (in this case at least 2) can deliver on those features. Nothing I wrote contradicts what you just replied with. It's highly likely that MS had been discussing the potential for RT hardware acceleration with both NV and AMD for over a decade, as they've been doing R&D WRT RT in various forms for over 2 decades now.

Similar to how tessellation wasn't included in DX until it was possible for NV to support it, despite AMD having supported hardware-accelerated tessellation for years. NV was also unlikely to support tessellation unless MS pushed them on it. MS had been using tessellation (in the X360) prior to it being included in DX.

RT is no different. It isn't in the best interest of MS or consumers if only one hardware vendor can support a key feature of DX. This wasn't an optional feature flag, but a cornerstone of that version of DX.

Regards,
SB
 
Don't start skewing my statements. I made no performance comparison claims between the GTX980 and the 2013 consoles.
I don't know who set out to look for games that ran "much better" on the 2013 $400 consoles with 1.3/1.8TFLOPs than nvidia's $550 card from 2014 with 5TFLOPs, but if they did then it was a pretty stupid quest to start with.

The 4GB cards (R9 290, GTX 980/970, Fury) aged badly from 2015 onwards at the resolutions they were marketed for (1440p and 4K), as typical VRAM occupancy at 1440p rose quite drastically between games released up to 2015 and what we had by 2017.
At the Ultra preset:
1440p Battlefield 4: 2280MB
1440p Battlefield 5: 5490MB

4K Battlefield 4: 2988MB
4K Battlefield 5: 6990MB
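
For scale, here's a minimal sketch of how much those occupancy figures grew, using only the numbers quoted above (the ratios are my own arithmetic):

Code:
# Growth in Ultra-preset VRAM occupancy between Battlefield 4 and
# Battlefield 5, using only the figures quoted above (values in MB).

vram_mb = {
    "1440p": {"Battlefield 4": 2280, "Battlefield 5": 5490},
    "4K":    {"Battlefield 4": 2988, "Battlefield 5": 6990},
}

for res, games in vram_mb.items():
    growth = games["Battlefield 5"] / games["Battlefield 4"]
    print(f"{res}: {growth:.2f}x increase")  # ~2.41x at 1440p, ~2.34x at 4K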

The practical difference is that while the Fury X was beating the 4GB 290X by almost 40% in Battlefield 4 at 1440p, in Battlefield 5 it was losing by 24% to the 390X (the same GPU as the 290X but with 8GB of VRAM).

So don't worry, the VRAM usage spike due to 8th-gen consoles didn't just affect your precious GTX980. Perhaps it affected AMD cards even more.

Fair enough. If you're arguing that VRAM usage at high settings/resolutions will go up beyond 8GB or even 10GB over the course of the next few years, then I don't disagree at all. I thought you were making the argument that 8 or 10GB would hamstring today's high-end GPUs relative to the new consoles, which I think is far from certain based on how 4 and 6GB cards performed last generation - although it can't be discounted entirely, as I mentioned above, due to SSDs acting as memory amplifiers.

The Series X has 16GB total, out of which 2.5GB are allocated for the OS. The PS5 should be similar.

That leaves 13.5GB for games, of which the majority is to be used by the GPU. Therefore, 12GB should be sufficient for getting VRAM parity with most multiplatform titles in the long run. 8GB should not.

I read this after writing the above response to your earlier post. So it seems I was in fact correct that you are making this claim. So I ask you to support it with a precedent from the last generation... show me one multiplatform game that proves a significantly more powerful GPU with only 50% of the consoles' memory (4GB) is insufficient to maintain parity with those consoles. The GTX 980 or 290X seem like appropriate starting points given your post above.

No, you shouldn't. Most gaming PCs won't have the same up-to-6GB/s I/O as the Series X, much less the up-to-22GB/s I/O of the PS5.

Since PC game devs can't make games for the PC that require a >3GB/s NVMe, you can count on VRAM allocation on PC dGPUs being larger than on consoles.

Just because developers can't guarantee a specific base performance level for PC I/O doesn't mean they can't take full advantage of higher-performing systems. Streaming bandwidth requirements are trivially easy to scale. Simply halving texture resolution (from 4K to 2K) would drop streaming requirements by almost 75%. And there are plenty of other ways to scale things too (LOD levels, draw distance, etc...). Developers could accommodate max settings on PCs with less VRAM provided they have a fast SSD, or with slower SSDs provided they have more VRAM. And if you have neither, you simply drop texture resolution or some other scalable element. And that's without considering the impact of system RAM acting as an additional cache between the SSD and VRAM.
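
To put rough numbers on the texture point, here's a minimal sketch of the arithmetic (my own illustration; it assumes plain RGBA8 texels for simplicity, and real engines use block compression, but the relative scaling is the same):

Code:
# Rough illustration of how per-texture streaming volume scales with
# resolution. Halving the resolution quarters the texel count.

BYTES_PER_TEXEL = 4  # RGBA8, uncompressed, for simplicity

def texture_bytes(width: int, height: int) -> int:
    """Bytes needed for a single mip level at the given resolution."""
    return width * height * BYTES_PER_TEXEL

full = texture_bytes(4096, 4096)  # "4K" texture
half = texture_bytes(2048, 2048)  # "2K" texture

print(f"4K texture: {full / 2**20:.0f} MiB")          # 64 MiB
print(f"2K texture: {half / 2**20:.0f} MiB")          # 16 MiB
print(f"reduction:  {(1 - half / full) * 100:.0f}%")  # 75%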
 
Has anyone seen a game use more than 16GB system RAM? Dishonored 2 was the first game that forced me to go up from 8GB because it was causing disk swapping. Having lots of room for system file caching certainly has benefits too.
 
I read this after writing the above response to your earlier post. So it seems I was in fact correct that you are making this claim. So I ask you to support it with a precedent from the last generation... show me one multiplatform game that proves a significantly more powerful GPU with only 50% of the consoles' memory (4GB) is insufficient to maintain parity with those consoles. The GTX 980 or 290X seem like appropriate starting points given your post above.
The 8GB GPU of today is 25% faster than a PS5. The 980 was 200% faster than a PS4 (before the Nvidia performance tax brought it down to <2x). Not really a legit comparison.
 
The 8GB GPU of today is 25% faster than a PS5. The 980 was 200% faster than a PS4 (before the Nvidia performance tax brought it down to <2x). Not really a legit comparison.

I wasn't comparing to any specific contemporary GPU. I was making the more general point that having 50% of the consoles' VRAM didn't appear to act as a hard barrier to parity last generation, irrespective of GPU power. If it did, then we'd see GPUs like the 980 falling behind the last-generation consoles regardless of how much more powerful they are. Could the lack of VRAM be impacting their relative performance even at console settings? Yes, perhaps. But it's not a hard barrier that justifies making a binary statement like "8GB (or 10GB) is insufficient for parity in multiplatform titles this generation". It may well turn out to be, but there's no precedent at the moment for making such a definitive statement.
 
World of Warcraft Ray Tracing benchmark, another AMD backed title: RTX 3080 is 34% faster than 6800XT @4K.

https://www.pcgameshardware.de/Worl...ecials/WoW-Raytracing-Vergleich-Test-1362425/

Interestingly enough, the 6800XT is 44% faster than the 3080, but at 720p! Very strange indeed, as the 3080 appears to become CPU-limited quickly.
To me, the interesting part is not that the RTX 3080 is 34% faster at 4K. It was already faster without RT;

RT Shadows OFF:
RTX 3080: 86.7 avg, 67.0 min
RX 6800XT: 69.6 avg, 57.0 min

The RTX 3080 is already 25% faster here... Now let's compare their own internal scaling with RT;

RTX 3080
RT Shadows Off; 86.7 avg, 67.0 min
RT Shadows Fair; 74.6 / 56.0
RT Shadows Good; 74.3 / 55.0
RT Shadows High; 67.1 / 49.0

That translates in performance percentage to;
Fair; 86.0% / 83.6%
Good; 85.7% / 82.1%
High; 77.4% / 73.1%

The same for the 6800XT gives us;

6800XT
RT Shadows Off; 69.6 avg, 57.0 min
RT Shadows Fair; 60.2 / 49.0
RT Shadows Good; 60.0 / 48.0
RT Shadows High; 50.2 / 40.2

That translates in performance percentage to;
Fair; 86.5% / 86.0%
Good; 86.2% / 84.2%
High; 72.1% / 70.5%

So in actuality, RDNA2 is scaling about equally with Ampere here (and better on minimums), except at the High setting, where it falls behind. And even at High, it's not THAT much worse... That is what I find the most interesting about this.
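
For anyone who wants to check the math, here's a quick sketch of how those percentages fall out of the PCGH figures quoted above:

Code:
# Relative performance retained with RT shadows enabled, expressed as a
# percentage of the RT-off baseline (avg / min FPS from the numbers above).

results = {
    "RTX 3080":  {"Off": (86.7, 67.0), "Fair": (74.6, 56.0),
                  "Good": (74.3, 55.0), "High": (67.1, 49.0)},
    "RX 6800XT": {"Off": (69.6, 57.0), "Fair": (60.2, 49.0),
                  "Good": (60.0, 48.0), "High": (50.2, 40.2)},
}

for gpu, fps in results.items():
    base_avg, base_min = fps["Off"]
    for setting in ("Fair", "Good", "High"):
        avg, mn = fps[setting]
        print(f"{gpu} {setting}: {avg / base_avg:.1%} avg, {mn / base_min:.1%} min")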

nVidia will definitely push to crank RT higher to put their cards in the best light, even when not necessary. I'm getting GameWorks/Tessellation vibes.
 
Yeah, because RT is not necessary now apparently.
I have held and still hold the position that no current hardware is strong enough to use RT properly.

You can go look at the image quality comparison between Off, Fair, Good and High. All three look better than Off, but considering the performance drop, does the High setting justify the loss over Good? Good justifies it over Fair, because the framerate cost is basically zero. The framerates are playable.

And then you have to remind yourself that WoW is not a recent game at all and has extremely outdated graphics. Any game with graphics that are up to modern standards that also enables RT inevitably means either atrocious performance or having to lower settings or resolution. I won't pay $700+ to play at 1080p/60, if that.
 
That leaves 13.5GB for games, of which the majority is to be used by the GPU. Therefore, 12GB should be sufficient for getting VRAM parity with most multiplatform titles in the long run. 8GB should not.

That is not how this works at all. The RAM is shared between the CPU and GPU, so the CPU will take a considerable amount of those 13.5 GB. In fact, Microsoft expects most games to use 3.5 GB as DRAM; that is why they connected 10 GB via the faster bus to act as video memory. Going by that, 10 GB of VRAM is enough for parity.

However, that is not the end of the story. Next-gen games will definitely use more than 3.5 GB, as CPU-related tasks will increase dramatically given that the new Ryzen CPUs are a much more performant baseline than the old Jaguar cores. Actually, Watch Dogs Legion perfectly demonstrates that this is already the case even for cross-gen games. Watch Dogs Legion has to run at medium texture settings with DXR at dynamic 4K (mostly between 1440p and 1600p) on the Series X, while a 16 GB DRAM + 8 GB VRAM PC has zero issues running the game with the high-res texture pack installed and DXR at those resolutions; the high-res texture pack makes a big difference in the overall visual quality of the game. If the whole 10 GB were accessible purely as VRAM on the Series X, the game could run the high-res texture pack with ease, just like the 3080, but that is obviously not the case. This means the CPU takes more RAM than expected, leaving less memory as VRAM since it's a shared configuration; on PC you obviously don't have that issue, as the CPU has separate DRAM.
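
As a back-of-the-envelope check, here's a minimal sketch using Microsoft's published Series X memory split (10 GB "GPU optimal" at 560 GB/s plus 6 GB "standard" at 336 GB/s, with 2.5 GB of the standard pool reserved for the OS):

Code:
# Back-of-the-envelope Series X memory budget from the published split.

GPU_OPTIMAL_GB = 10.0  # fast pool (560 GB/s), intended primarily for GPU data
STANDARD_GB    = 6.0   # slower pool (336 GB/s)
OS_RESERVE_GB  = 2.5   # carved out of the standard pool

game_standard = STANDARD_GB - OS_RESERVE_GB     # left for CPU-side game data
game_total    = GPU_OPTIMAL_GB + game_standard  # everything a game can touch

print(f"standard pool left for the game (CPU side): {game_standard} GB")  # 3.5 GB
print(f"total game-visible memory: {game_total} GB")                      # 13.5 GB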
 
This has been proven false several times already, even on the new consoles.
I disagree. But this is not the place to discuss this topic, and even if it were, I'm not interested. The fact that scaling seems about equal up to a certain point is a much more interesting topic to discuss, because it actually pertains to the RDNA2 architecture, rather than pointless repetition that RT is the next messiah of gaming. After all, this is the 6000 series thread.
 