So what you're saying right now is that the 3060 will run "most multiplatform titles in the long run" better than either the 3070 or the 3080?
No, hitting VRAM capacity and forcing the GPU to swap critical data with system memory results in lost cycles, but it doesn't immediately kill performance.
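To put rough numbers on that: here is a minimal back-of-the-envelope sketch, assuming theoretical PCIe 4.0 x16 bandwidth and the RTX 3080's published memory bandwidth, of what fetching one spilled asset from system RAM costs versus reading it locally (the 64MB asset size is a hypothetical example):

```python
# Back-of-the-envelope sketch with assumed figures (not measurements):
# what fetching one spilled asset over PCIe costs versus reading it
# from local VRAM.

PCIE4_X16_GBPS = 32.0      # ~32 GB/s theoretical PCIe 4.0 x16
GDDR6X_3080_GBPS = 760.0   # ~760 GB/s RTX 3080 memory bandwidth

asset_gb = 64.0 / 1024     # hypothetical 64MB texture spilled to system RAM

pcie_ms = asset_gb / PCIE4_X16_GBPS * 1000
vram_ms = asset_gb / GDDR6X_3080_GBPS * 1000

print(f"PCIe fetch: {pcie_ms:.2f} ms vs VRAM read: {vram_ms:.3f} ms")
# ~1.95 ms vs ~0.08 ms: an occasional spill eats into a 16.7 ms (60 fps)
# frame budget without destroying it -- "lost cycles", not an instant cliff.
```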
I don't think having 20% more VRAM is ever going to make a 12GB 3060 run faster than a 10GB 3080, considering the latter has over twice the resources for everything else.
You're trying to attribute to me a dishonest hyperbole you invented to diminish my previous statements. In the particular case of the 3060, however, I was very clear about which card it might surpass in the long term, and it isn't the 3080 or the 3070:
For example, this might put the 12GB 3060 oddly positioned against the 8GB 3060 Ti for higher resolutions in the long run.
So I ask you to support it with a precedent from the last generation... show me one multiplatform game that proves a significantly more powerful GPU with only 50% of the consoles' memory (4GB) is insufficient to maintain parity with those consoles.
I'm not going to look for a nonsensical comparison to pursue an argument that you made up.
Yes, the 4GB 5 TFLOPs GTX 980 from 2014 is always faster than the 1.3 TFLOPs XBOne from 2013 when rendering at 1080p and below. What's the point here?
I wrote that 4GB dGPUs didn't age well from 2015 onwards, and ever since then you've been trying to attribute to me the completely fabricated and nonsensical claim that the 2013 consoles perform better than a PC with a GTX 980.
I'm not going to play that game.
I disagree. But this is not the place to discuss this topic, and even if it were, I'm not interested. The fact that scaling seems about equal up to a certain point is a much more interesting topic to discuss, because it actually concerns the RDNA2 architecture, rather than pointless repetition that RT is the next messiah of gaming. After all, this is the 6000 series thread.
Failing to comply with the RT-performance-is-all-that-matterz narrative is going to get you harassed to no end in this forum.
Just a friendly warning.
That is not how this works at all. The RAM is shared between CPU and GPU, so the CPU will take a considerable amount of those 13.5 GB. In fact, Microsoft expects most games to use the 3.5 GB as standard (CPU-side) memory; that is why they connected 10 GB via the faster bus to act as video memory. Going by that, 10 GB of VRAM is enough for parity.
1 - All non-OS RAM is shared and the proportion is dynamic, yes.
2 - The GPU takes the bulk of the RAM available to the game. Feel free to visit the gamedev presentations, specifically the post-mortem analyses, to confirm.
3 - On the Series X, the GPU also has access to the slower 3.5GB of GDDR6 available for games. It's wrong to think of the 10GB as a hard limit for VRAM (see the sketch after this list).
4 - If you're aiming for parity between a PC equipped with a ~100W desktop CPU + at least 16GB of DDR4 RAM + a 300W 10GB / 30 TFLOPs RTX 3080, and a ~220W 10.28 TFLOPs console, then you already support my argument.
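For reference, a quick sketch of the publicly documented Series X memory layout; the per-pool figures are Microsoft's published specs, and the point is that the game-visible budget is 13.5 GB, not a hard 10 GB:

```python
# Xbox Series X memory layout as published by Microsoft: 16GB GDDR6,
# split into a 10GB "GPU-optimal" pool (~560 GB/s) and a 6GB "standard"
# pool (~336 GB/s), with 2.5GB of the slow pool reserved for the OS.

FAST_POOL_GB = 10.0
SLOW_POOL_GB = 6.0
OS_RESERVED_GB = 2.5

game_visible_gb = FAST_POOL_GB + (SLOW_POOL_GB - OS_RESERVED_GB)
print(f"Game-visible memory: {game_visible_gb} GB")  # 13.5 GB

# Nothing stops a title from spilling GPU data into the slow pool; it
# just streams at ~336 GB/s instead of ~560 GB/s, so the 10GB figure is
# a bandwidth boundary, not a hard VRAM cap.
```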
A high-end PC with a high-end RTX 3080 will (or should) always be expected to render at higher resolutions, with higher-resolution textures, higher LODs, higher everything, all of which will obviously consume more VRAM. If you accept that, then you agree that in the long term those 10GB are bound to become a bottleneck for the dGPU.
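As a rough illustration of why "higher everything" costs VRAM, here is a minimal sketch of how the render-target footprint alone scales with resolution; the 28 bytes-per-pixel figure is a hypothetical total for a deferred pipeline's buffers, purely for illustration:

```python
# Minimal sketch of how the render-target footprint alone scales with
# resolution. The 28 bytes/pixel is a hypothetical sum for a deferred
# pipeline (G-buffer + depth + HDR color + post targets); textures and
# LODs scale on top of this.

def render_targets_mb(width: int, height: int, bytes_per_pixel: int = 28) -> float:
    return width * height * bytes_per_pixel / (1024 ** 2)

for label, (w, h) in {"1440p": (2560, 1440), "4K": (3840, 2160)}.items():
    print(f"{label}: {render_targets_mb(w, h):.0f} MB of render targets")
# 1440p: ~98 MB, 4K: ~221 MB -- and that's before the much larger
# texture pool, which is exactly where the higher-quality PC assets live.
```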