Current Generation Games Analysis Technical Discussion [2022] [XBSX|S, PS5, PC]

8GB is fine for a 3060 Ti. It's a high mid-range card for 1080p/1440p. My problem is with those high-end $500+ cards sporting 8GB and 10GB but advertised for 1440p/4K.

We're also 6 years removed from the RX 480 with 8GB of RAM. That's 8GB of GDDR5 in 2016, with a 256-bit bus, for 250+ GB/s of bandwidth. On a $229 card.

For high-end and enthusiast cards, the amount of VRAM usable by the consoles is a good reference point. Out of 16GB, the PS5 has around 13.5GB usable for games, I believe, so anywhere from 12 to 16GB is adequate for an 80 series. For a 70, 10GB is good enough.

Yup, this is how I've calculated (back-of-napkin/pulled-out-of-my-ass level calculations here) what PC GPUs will eventually need as we get deep into a console cycle. You needed at least 4GB cards in the PS4 era, 8GB to have the same texture quality as Xbox One X titles, and now 10+GB to be 'safe' in the majority of titles going forward, I believe.

It is not surprising to me at all to see some games require more than 8GB at ~4K now, especially with RT. Hell, last-gen games at max texture settings at 4K can stress 8GB cards! Dishonored 2 and Deus Ex: Mankind Divided with Ultra Textures at 4K with max settings will definitely bump up against that 8GB limit and likely cause some additional stutters that you wouldn't get with 10+GB. I think Shadow of War also recommends 10+GB for its max texture pack at 4K. Horizon Zero Dawn really needs more than 8GB to display its textures in full at 4K.

All that being said, I think the writing is on the wall now. You never know with Nvidia, but I think the majority of the 40-series line will have 10+, if not 12+ GB as standard for the $400+ segment. To me it would be insane to release a 4070 with 8GB of VRAM; even 10GB would be highly questionable.
 
For high-end and enthusiast cards, the amount of VRAM usable by the consoles is a good reference point. Out of 16GB, the PS5 has around 13.5GB usable for games, I believe, so anywhere from 12 to 16GB is adequate for an 80 series.

PS5 has 12.5 Gigs for Games at 448 GB/s.

Series X has 13.5 Gigs for Games, 10 Gigs at 560 GB/s and 3.5 Gigs at 336 GB/s.

As davis.anthony pointed out, the consoles also need to use that memory for program space, so it's not 100% VRAM only.
 
PS5 is around 12.5GB but that also includes what would be 'system RAM' on a PC.

From what I've gathered, I believe both PS5 and Series X are presenting games a full 10 GB buffer for video memory operations. Series X has a literal memory split, with a fast 10 GB partition and a slower 6 GB partition. Also note that those consoles do not have to use as much system RAM as we do. From what I remember, video games on PC copy a lot of textures to both RAM and VRAM. That's practically the price we pay for not having a unified memory system.

"They actually use some read-back information, which we definitely need to copy stuff around [from system memory to video memory, for example] which they can just read [from unified memory on PlayStation]. But those are fairly minor changes. It's just an extra copy." (Michiel Roza, Nixxes, Digital Foundry article: https://www.eurogamer.net/digitalfo...an-remastered-on-pc-the-nixxes-tech-interview)

It's harmless, but it is an extra load that does not exist on consoles. Notice how recent PS4 ports like God of War don't even run properly on 8 GB RAM systems; almost all games from the last 3 years need 16 GB to run safely. Let's admit it: the PS4 has a total budget of about 5.5 GB. When you match PS4 settings in God of War, you get 4-4.5 GB of VRAM usage (pretty reasonable), but also a whopping 4-6 GB of RAM usage. The PS4 clearly does not have a total 8-10.5 GB memory budget. Many sources state that 2-2.5 GB of RAM is reserved by the system itself for multitasking and background operations. So that 4-6 GB of RAM usage is most likely the game keeping lots of copied assets ready to be transferred to VRAM. Forza Horizon 5 uses 6 GB of RAM even on low settings (which I believe mirror the Xbox One), while still utilizing 4.5 GB of VRAM, which again makes no sense given the total budget those consoles can allocate.
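To make that "extra copy" concrete, here is a minimal D3D12-style sketch of the path a PC port typically takes when pushing data to the GPU: the same bytes sit in a CPU-visible upload heap in system RAM before a GPU copy lands them in VRAM. The helper name and the simplified buffer-only path are my own illustration of the idea, not anything from Nixxes' actual code.

[CODE]
#include <windows.h>
#include <d3d12.h>
#include <cstring>

// Hypothetical helper: on PC the same bytes briefly live twice -- once in an
// UPLOAD heap (CPU-visible system RAM) and once in a DEFAULT heap (VRAM).
// That second hop is the "extra copy" a unified-memory console avoids.
ID3D12Resource* UploadBufferToVram(ID3D12Device* device,
                                   ID3D12GraphicsCommandList* cmdList,
                                   const void* data, UINT64 size,
                                   ID3D12Resource** stagingOut)
{
    D3D12_HEAP_PROPERTIES uploadHeap = {}; uploadHeap.Type = D3D12_HEAP_TYPE_UPLOAD;
    D3D12_HEAP_PROPERTIES defaultHeap = {}; defaultHeap.Type = D3D12_HEAP_TYPE_DEFAULT;

    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width = size;
    desc.Height = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels = 1;
    desc.SampleDesc.Count = 1;
    desc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    ID3D12Resource* staging = nullptr; // lives in system RAM
    ID3D12Resource* vram = nullptr;    // lives in dedicated VRAM
    device->CreateCommittedResource(&uploadHeap, D3D12_HEAP_FLAG_NONE, &desc,
                                    D3D12_RESOURCE_STATE_GENERIC_READ, nullptr,
                                    IID_PPV_ARGS(&staging));
    device->CreateCommittedResource(&defaultHeap, D3D12_HEAP_FLAG_NONE, &desc,
                                    D3D12_RESOURCE_STATE_COPY_DEST, nullptr,
                                    IID_PPV_ARGS(&vram));

    // Copy #1: CPU memcpy into the mapped upload heap (system RAM).
    void* mapped = nullptr;
    D3D12_RANGE noRead = {0, 0};
    staging->Map(0, &noRead, &mapped);
    std::memcpy(mapped, data, size);
    staging->Unmap(0, nullptr);

    // Copy #2: GPU copy across PCIe into VRAM.
    cmdList->CopyBufferRegion(vram, 0, staging, 0, size);

    *stagingOut = staging; // keep alive until the copy has executed on the GPU
    return vram;
}
[/CODE]

On a console with unified memory the second step, and the duplicated system RAM allocation that goes with it, simply isn't needed, which is the point being made above.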

So at best, if we go by the Series X's design, a 3.5 GB memory budget should be enough for various other operations like sound, physics and so on. I'm not highly knowledgeable about this, actually, so I'm just theorizing based on what I've read on this forum and in many other articles. Anyone is welcome to correct me if I'm wrong and I would be grateful.

So back to my original topic: I believe both the Series X and PS5 present games a full 10 GB for GPU operations and 3.5 GB for CPU operations. Now comes the 8 GB vs 10 GB discussion. This 80% VRAM allocation cap is not exclusive to Spider-Man. We do not have all of the VRAM available to games on our gaming systems. Windows itself will always use somewhere between 150 MB and 1 GB depending on your screen resolution and how many screens you have. Then come Discord, Spotify and other "Electron" apps that use hardware acceleration, which uses a bit of VRAM to... accelerate those apps, since they are being accelerated by... the GPU. I believe most developers actually cap VRAM usage at around 85-90% of what you have; Nixxes decided on 80%. RDR2 will also refuse to use more than 7.2 GB of VRAM out of an 8 GB budget. It literally reports 700 MB of "other apps" usage even when you don't have any other apps open, and that always amuses me. Borderlands 3 also hard-caps around 7.4 GB from what I've observed. Finally, Forza Horizon 5 literally tells you that you only have a 6.8 GB VRAM budget when you go way overboard. It currently has a bug where, once the engine breaches 6.8 GB of VRAM on 8 GB GPUs, it creates rainbow artifacts. It has yet to be fixed, and it is a widespread issue, widespread enough that it is eye-opening to see how common 8 GB GPUs are, and how NVIDIA played a dangerous game by flooding the market with such a huge number of 8 GB GPUs.
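For what it's worth, the "usable budget" that engines work against is something Windows actually reports per process, and it is already smaller than the physical VRAM before a developer applies any cap of their own. A small standalone sketch (the 80% soft cap is just an assumption mirroring the Spider-Man figure above, not anything official):

[CODE]
#include <windows.h>
#include <dxgi1_4.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

int main()
{
    IDXGIFactory4* factory = nullptr;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    IDXGIAdapter1* adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        IDXGIAdapter3* adapter3 = nullptr;
        if (SUCCEEDED(adapter->QueryInterface(IID_PPV_ARGS(&adapter3)))) {
            DXGI_QUERY_VIDEO_MEMORY_INFO local = {};
            adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &local);

            // Budget = what the OS currently lets this process use; it already
            // excludes whatever DWM, Discord, browsers etc. are holding.
            const double softCap = 0.80; // hypothetical engine-side cap, as discussed above
            printf("Adapter %u: OS budget %.2f GB, with an 80%% cap %.2f GB, in use %.2f GB\n",
                   i,
                   local.Budget / 1073741824.0,
                   (local.Budget * softCap) / 1073741824.0,
                   local.CurrentUsage / 1073741824.0);
            adapter3->Release();
        }
        adapter->Release();
    }
    factory->Release();
    return 0;
}
[/CODE]

On a typical 8 GB card with a high-resolution desktop and a couple of accelerated apps open, the printed budget is usually already below 8 GB before the game allocates anything, which lines up with the 6.5-7.4 GB range described here once an engine's own cap is applied.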



https://www.reddit.com/r/forza/comments/w9hxmx/i_dont_know_what_happened_but_i_managed_to_get/
https://www.reddit.com/r/forza/comments/wgm2f7/is_this_the_proper_forza_experience/
https://www.reddit.com/r/forza/comments/wpbj2x/anybody_else_know_the_causefix_behind_these/,
https://www.reddit.com/r/forza/comments/vhx0nz/some_weird_graphic_bugs_on_my_3060ti/
https://www.reddit.com/r/ForzaHorizon/comments/wib7v1/are_anyones_textures_like_this/
https://www.reddit.com/r/ForzaHorizon/comments/wt35aq/forza_horizon_5_weird_foliage_textures/
https://www.reddit.com/r/ForzaHorizon/comments/wkuxrl/any_fixes_for_messed_up_grass_texture/
https://www.reddit.com/r/ForzaHoriz...res_are_fucked_in_hot_wheels_anyone_know_how/
https://www.reddit.com/r/ForzaHoriz...ts_causing_my_grass_textures_to_look_so_crap/
https://www.reddit.com/r/ForzaHorizon/comments/wqkr8x/odd_textures_pc_since_update/,
https://www.reddit.com/r/ForzaHorizon/comments/w4nc66/anyone_else_having_these_rock_textures/
https://www.reddit.com/r/ForzaHorizon/comments/vt2oj8/buggy_and_glitched_textures_after_update/
https://www.reddit.com/r/ForzaHoriz...nk_i_unlocked_the_bifrost_in_the_new_dlc_lol/ (one of the funniest one)
https://www.reddit.com/r/ForzaHorizon/comments/wcpxpb/does_anyone_know_why_im_having_these_texture/
https://www.reddit.com/r/ForzaHorizon/comments/w9753i/texture_glitch/
https://www.reddit.com/r/ForzaHorizon/comments/wzmpel/texture_glitch/
https://www.reddit.com/r/ForzaHoriz...g_this_weird_texture_corruption_and_stutters/
https://www.reddit.com/r/ForzaHorizon/comments/wbdcbt/whats_going_on_here_the_road_also_sometimes/,
https://www.reddit.com/r/ForzaHoriz...getting_texture_issues_on_the_road_and_grass/
https://www.reddit.com/r/ForzaHorizon/comments/vh4f0e/forza_horizon_5_texture_bugs/
https://www.reddit.com/r/ForzaHorizon/comments/w82bb1/hot_wheels_lsd_edition/
https://www.reddit.com/r/ForzaHorizon/comments/vqy2md/texture_bug_kinda_scared_me_a_bit/

This also happens in Tiny Tina's Wonderlands once you breach 6.5 GB, but it is not as widespread as this one (since it is not played as much as Forza Horizon 5 is). These are very glaring issues. Almost all of these issues happen exclusively on 6-8 GB GPUs. Most of these people try using ultra/extreme textures, assuming their fresh, new-gen, upper mid-range or maybe lower high-end GPUs can handle it. The fact that this is merely a cross-gen game, however, is terrifying.

So all in all, we don't have a complete "8 GB" or "10 GB" budget to ourselves. Consoles actually do have that exclusivity. With 8 GB you're looking at something between 6.5 and 7.4 GB of available budget; at least, that is how developers seem to decide how their engines run on 8 GB GPUs. You cannot manipulate this behaviour, you're practically at the mercy of the developer. Cyberpunk 2077 actually goes hardcore, it can literally use the entire 8 GB if you have free memory available. And it did, for me. I found Cyberpunk 2077 to be successful in this regard. Doom Eternal is also great in this regard, fully utilizing whatever budget you have.

I believe creating artificial caps like 80%, 85% or 90% can hurt people who do not run any other software at all while gaming. They will have a minimal amount of bloat, and will usually have a whopping 7.6-7.7 GB of budget. But as I said, until seeing this VRAM allocation cap behaviour, I also used to think like you. Now my views have changed completely on this. I believed that with a 7.5-7.8 GB budget you could cut corners here and there and still get good graphics, but now I'm looking at a 6.4-6.8 GB budget, which is not that great. Having that 1 GB can be a crucial difference. Even ray tracing in most cases consumes an extra 1-1.5 GB of VRAM. I respect developers trying to leave VRAM headroom for background operations, but for people who don't need it, it is just an empty resource lying there.

By the way, I'm quite shocked this Forza Horizon 5 issue has never been discussed extensively in these threads. :D
 
From what I've gathered, I believe both PS5 and Series X are presenting games a full 10 GB buffer for video memory operations. [...] By the way, I'm quite shocked this Forza Horizon 5 issue has never been discussed extensively in these threads. :D
Fantastic post.

One thing I'll add is that there's a reason why these games keep a free buffer of memory: the graphics memory is virtualized. The game doesn't know if you will make an unexpected turn and suddenly need to upload a gigabyte of data into VRAM. Leaving that bit of free memory means that the game can upload the gigabyte without having to first download a gigabyte back into system memory. So while it does mean a lower overall framerate, it also helps to prevent stutters caused by data thrashing over the PCIe bus. It's basically the same reason your PC will start writing to the pagefile before system RAM gets filled up: the application may otherwise crash.
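As a rough sketch of that idea (the function name and the 512 MB headroom figure are made up for illustration; the DXGI/D3D12 calls themselves are the real residency knobs a PC engine has to play with):

[CODE]
#include <d3d12.h>
#include <dxgi1_4.h>
#include <vector>

// Keep a slice of the OS-reported budget free so a sudden burst of uploads
// never forces the driver to demote resources to system RAM mid-frame.
void BalanceResidency(ID3D12Device* device, IDXGIAdapter3* adapter,
                      std::vector<ID3D12Pageable*>& coldTextures,  // not needed soon
                      std::vector<ID3D12Pageable*>& hotTextures)   // wanted next
{
    const UINT64 kHeadroomBytes = 512ull * 1024 * 1024; // ~0.5 GB kept free (assumed)

    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    adapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);

    if (info.CurrentUsage + kHeadroomBytes > info.Budget) {
        // Over the soft limit: demote cold resources ourselves now, instead of
        // letting the driver pick victims at a worse moment (the stutter case).
        if (!coldTextures.empty())
            device->Evict((UINT)coldTextures.size(), coldTextures.data());
    } else if (!hotTextures.empty()) {
        // Plenty of headroom: promote the textures we expect to need shortly.
        device->MakeResident((UINT)hotTextures.size(), hotTextures.data());
    }
}
[/CODE]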
 
Not trying to go too far in the defense of nxgamer's arguments, but if the consensus in the thread is "the 2070 underperforms because it has a lower vram ceiling than the ps5" doesn't that mean.... the 2070 is underperforming compared to the ps5? Last I checked vram was part of the hardware, not some entirely separate magical thing.
 
Not trying to go too far in the defense of nxgamer's arguments, but if the consensus in the thread is "the 2070 underperforms because it has a lower vram ceiling than the ps5" doesn't that mean.... the 2070 is underperforming compared to the ps5? Last I checked vram was part of the hardware, not some entirely separate magical thing.
Which is true, but that's a bit disingenuous because it's using settings tailored for the PS5. For instance, it runs the game with a paltry 4X AF even in Fidelity Mode, whereas the 2070 has absolutely no issue with 16X. That'd be like taking a scenario in which there are heavy RT effects where the 2070 shines over the PS5, using those results to prove that RTX 2070 > PS5 and calling it a day. You have to be honest and point out the differences in their strengths and structures. The 2070 is the closest match to a PS5 in the NVIDIA camp, but it still isn't a PS5 GPU, so trying to argue which one is more powerful overall is a fool's errand because they're not 100% the same and are from different manufacturers. It's simply agreed that in general circumstances, where one doesn't have a specific aspect bottlenecked by the game they're running, they should perform in the same ballpark.

In this game specifically, with PS5 settings, the PS5 performs much better than the 2070 due to using settings that take advantage of the PS5's strengths, not because of some insane architectural deficiency inherent to the PC environment.
 
From what I've gathered, I believe both PS5 and Series X are presenting games a full 10 GB buffer for video memory operations. Series X has a literal memory split, with a fast 10 GB partition and a slower 6 GB partition. Also note that those consoles do not have to use as much system RAM as we do.

True but they also don't have as much system RAM as we do.

So back to my original topic: I believe both the Series X and PS5 present games a full 10 GB for GPU operations and 3.5 GB for CPU operations.
We have already established PS5 has less than XSX.
This 80% VRAM allocation cap is not exclusive to Spider-Man.
It's not, but it's nowhere near as common as you're making it out to be either. In raw percentage terms it's nothing.
We do not have all of the VRAM available to games on our gaming systems.
Yes we do, Dying Light 2 on my 3060ti pushes 7.9GB of VRAM with task manager showing 7.7GB for the game itself.
Windows itself will always use somewhere between 150 MB and 1 GB depending on your screen resolution and how many screens you have.
I have never seen it use close to 1GB.....
Then come Discord, Spotify and other "Electron" apps that use hardware acceleration, which uses a bit of VRAM to... accelerate those apps, since they are being accelerated by... the GPU.

I believe most developers actually cap VRAM usage at around 85-90% of what you have;
To claim that, you need to produce evidence that over 60% of games do that.
Nixxes decided on 80%.
The game may have been patched by now and the cap raised.

There are VRAM issues in some games yes, but the majority are completely fine.

Why don't you factor in next-generation streaming systems, which will be much more efficient with memory use, when talking about GPUs in the future?
 
Not trying to go too far in the defense of nxgamer's arguments, but if the consensus in the thread is "the 2070 underperforms because it has a lower vram ceiling than the ps5" doesn't that mean.... the 2070 is underperforming compared to the ps5? Last I checked vram was part of the hardware, not some entirely separate magical thing.
There is absolutely some truth in 'console magic/optimization', even if some people here are very reluctant to acknowledge it.

The problem with NXGamer is that he uses very flawed methodology to go about demonstrating things. Really, more to the point, he has a strong bias to start with that leads him to want to demonstrate certain things in the first place, instead of just doing some objective testing and then drawing conclusions from that.

I might also accuse him of simply not understanding things well enough to know how to do good testing or good analysis, but I honestly don't believe that. I think he's entirely knowledgeable enough to avoid this if he wanted to. He just can't drop the PlayStation fanboy mindset, though. It always pokes through and colors his judgements.
 
But how do you determine what's an adequate amount? I don't, and have never, questioned whether my 3060ti has enough VRAM.

I don't think there's a right or wrong answer to this question/argument. It's a trade-off, and which side of that trade-off you sit on is going to depend on each consumer's individual circumstances. i.e. is it better to have a high-performing GPU at a lower price point that will work fine in 90% of circumstances but may need to make unpalatable trade-offs in a small percentage of games due to VRAM limitations, or is it better to have the same GPU at a higher price point that will never be VRAM limited but in 90% of circumstances will also be wasting that extra VRAM? For those to whom cost is not a major concern, I'm sure the latter would be preferable, but if you're highly cost conscious and already pushing your budget, then that extra 2GB on a 3080, for example, might be the difference between being able to afford a 3080 that would have been fine in the vast majority of cases or having to drop down to a 3070 that will always be slower anyway.

One thing does seem clear from the discussions in this thread though, and that is that an 8GB GPU this generation will be VRAM limited at console settings in an unknown number of games. And that's certainly troublesome if you've purchased a 2080, 2080S, 3060Ti, or 3070 and (quite reasonably) expected that GPU to be able to match or exceed the consoles point for point on settings for the rest of the generation.

The beauty of PC gaming though, as pointed out already, is its flexibility. With a 3060Ti you may not be able to use the same quality textures in a small handful of games, but you'll be able to ramp other settings and/or framerate up past the console version.

The game doesn't know if you will make an unexpected turn and suddenly need to upload a gigabyte of data into VRAM. Leaving that bit of free memory means that the game can upload the gigabyte without having to first download a gigabyte back into system memory.

Couldn't the new transfer into VRAM just overwrite the existing data rather than having to copy it back to system RAM first?

Not trying to go too far in the defense of nxgamer's arguments, but if the consensus in the thread is "the 2070 underperforms because it has a lower vram ceiling than the ps5" doesn't that mean.... the 2070 is underperforming compared to the ps5? Last I checked vram was part of the hardware, not some entirely separate magical thing.

I covered this in an earlier post, but the issue isn't what has been compared, or the result (well, aside from the CPU-limited performance RT GPU comparisons anyway); it's the fact that the extent of the PS5 win is put down to architectural advantages and console efficiencies rather than what it actually is: a severe VRAM bottleneck. To make things worse, that VRAM-bound result is then extrapolated to more powerful GPUs to suggest the PS5 is performing in line with them as well.

If the VRAM bottleneck had simply been called out (which, to be fair, he may not even have realised), then a far more favorable comparison using more 2070-optimised settings could have been made. Rather than a video which pretty misleadingly concludes that the PS5 is far more powerful than the 2070 because of console architectural advantages and efficiencies, he would have had a video that accurately highlights the PS5's advantage over the 2070 in VRAM capacity while showing what can be done on the PC side to mitigate it with a more PC-optimised mix of settings. That could have produced very comparable results to the PS5 experience in what would ultimately have been a far more balanced review.

I want to see the exact console-matched settings comparison as much as anyone, so I've no problem with that comparison. But those settings-matched comparisons are always optimal for the console, so the PC GPU is effectively always starting from a position of disadvantage. If that results in a GPU with usually comparable performance ending up running significantly worse (to the point of being unplayable), then I think there's a duty to inform the viewer why it's happening and why it doesn't have to be the case with a few simple settings tweaks, rather than just making the blanket conclusion of "consoles better".
 
There is absolutely some truth in 'console magic/optimization', even if some people here are very reluctant to acknowledge it.
This is definitely true but I also think you’re far more likely to find those advantages on the CPU side than on the GPU. My 2080 Ti in the same scene outperforms his 2070 by over 45fps (248%).

I think a much more interesting exercise would have been to investigate the CPU-specific optimizations and how the hardware-accelerated decompression helps. You need an extremely beefy CPU for this game. The CPU in the PS5 isn’t all that strong so there is something happening there. Much more so than with the GPU.
 
I don't think there's a right or wrong answer to this question/argument. It's a trade-off, and which side of that trade-off you sit on is going to depend on each consumer's individual circumstances. [...]

I'm in the boat of reducing textures in that 10% of games but gaining in other areas.

I did genuinely look at a 12GB 3060 over the 8GB 3060ti.

But I ultimately decided to go with the 8GB Ti version as I felt the extra VRAM in the non-Ti model wasn't worth the 20-25% drop in performance I would be getting.

Although I will likely have replaced my 3060ti before any VRAM limitations start to seriously affect my gaming experience.

I covered this in an earlier post, but the issue isn't what has been compared, or the result; it's the fact that the extent of the PS5 win is put down to architectural advantages and console efficiencies rather than what it actually is: a severe VRAM bottleneck. [...]
Exactly this, it's how he presents his findings that I have issue with and how he doesn't attempt to recommend how best to use the 2070.

His conclusion to his video should have been....

"With matched settings PS5 offers higher performance and texture quality over the 2070, this is due to the 8GB VRAM on the 2070 as that is less than what PS5 can allocate, however the 2070 offers it's own advantage and this relates to ray tracing.

If you own a 2070 and have a CPU that's only a few years old I recommend you run the textures on high which is below PS5 but run ray tracing at the very high quality settings and have more detailed and higher resolution reflections, at the cost of texture detail.

If your CPU was released around the same time as my 2700x I recommend to keep ray tracing geometric detail the same as PS5 but increase the resolution of those reflections giving you cleaner reflections with slightly reduced texture detail."

That imo would have been the perfect conclusion and spoke abut the strengths and weaknesses of either platform.
 
True but they also don't have as much system RAM as we do.
Still, it doesn't matter that we can have endless system RAM; the game is tailored around the console's total memory budget.

We have already established PS5 has less than XSX.

It's not, but it's nowhere near as common as you're making it out to be either. In raw percentage terms it's nothing.

Yes we do, Dying Light 2 on my 3060ti pushes 7.9GB of VRAM with task manager showing 7.7GB for the game itself.

I have never seen it use close to 1GB.....
I said from 150 MB to 1 GB: 150-300 MB for 1080p, 300-400 MB for 1440p and 500-1000 MB for 4K. It can also grow with more elements on screen. It depends. So you never seeing it close to 1 GB does not make it any less true regarding DWM's VRAM usage on higher-end monitors. Even now, as of today, my completely clean desktop at 4K uses 427 MB of VRAM. I'm sure other 1440p/4K users can back me up on this.

To claim that, you need to produce evidence that over 60% of games do that.
60% is a brutal number. What I care about is AAA games. I can use a sample of 20 big AAA games and that would be "personally" enough for me to see a "pattern" where devs cap the allocation. If you want to see 20 big 2019-2022 AAA games with artificial caps around 85-90%, I can show them to you. I already said that Cyberpunk 2077 and Doom Eternal use almost all available VRAM, so DL2 might be in that group.
The game may have been patched by now and the cap raised.
I'm on the latest patch and it still hasn't been raised.

There are VRAM issues in some games yes, but the majority are completely fine.
That's the trick with NVIDIA: they give just enough VRAM that it appears to be fine, until it isn't. Of course 8 GB is going to be fine, even 1-2 years down the line. In the end, however, horrible things could happen to it 5 years later. You may think you could get away with small texture quality drops, but that may not be the case. Look at the GTX 770 / AC Unity requirements back in 2014 and the amusing remarks some people made, saying the GTX 770 had enormously speedy VRAM compared to the PS4, that its lack of VRAM wouldn't matter, and that it would still destroy the PS4. Fast forward to today: the card is enormously bottlenecked by its tiny buffer in games the PS4 handles casually, like Spider-Man, God of War and RDR2, and in all recent AAA games you practically have to use super ugly low-res textures to fit something meaningful into its small buffer.

Why don't you factor in next-generation streaming systems, which will be much more efficient with memory use, when talking about GPUs in the future?

I will factor them in once I see them in action. I'm sceptical, since I've never seen Windows or NVIDIA give special care or optimizations to older hardware. They invent something new and always leave old hardware behind. To my knowledge, Sampler Feedback Streaming could be a huge VRAM optimization that Xbox consoles will use, but only the sampler feedback part of it exists on DX12 Ultimate compliant cards currently; for the "streaming" part you need dedicated hardware, a special texture streaming block that exists on Xbox consoles but does not exist on any consumer GPU. It is problematic. NVIDIA and AMD also don't have any decompression block on their GPUs. Even sampler feedback is not fully supported on Ampere, forcing Microsoft to leave it at "feature level 0.9" whereas RDNA2 cards have it at "feature level 1.0". Even before the thing is used, Ampere is already lagging behind, probably due to planned... you know what. It is just not pleasant to see these oddities. It feels like Pascal-Async all over again. So be sceptical that those next-gen things will have useful results on older-gen cards. Even older desktop motherboards could prove a challenge in this regard, if they decide to invent new tech for newer-gen motherboards. Who knows?
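For reference, the "feature level 0.9 vs 1.0" split being described is exposed as a capability tier you can query directly from D3D12. A short sketch, assuming you already have an ID3D12Device:

[CODE]
#include <d3d12.h>
#include <cstdio>

// Query which sampler feedback tier the GPU exposes. Per the post above,
// some DX12 Ultimate parts report TIER_0_9 and others TIER_1_0; older GPUs
// return NOT_SUPPORTED.
void PrintSamplerFeedbackTier(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 opts7 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7,
                                           &opts7, sizeof(opts7)))) {
        printf("OPTIONS7 not recognised by this runtime/driver\n");
        return;
    }
    switch (opts7.SamplerFeedbackTier) {
        case D3D12_SAMPLER_FEEDBACK_TIER_NOT_SUPPORTED:
            printf("Sampler feedback: not supported\n"); break;
        case D3D12_SAMPLER_FEEDBACK_TIER_0_9:
            printf("Sampler feedback: tier 0.9\n"); break;
        case D3D12_SAMPLER_FEEDBACK_TIER_1_0:
            printf("Sampler feedback: tier 1.0\n"); break;
        default:
            printf("Sampler feedback: unknown tier\n"); break;
    }
    // Note: this only reports sampler feedback support; the dedicated
    // streaming/decompression hardware discussed above is a separate,
    // console-side block and is not exposed here.
}
[/CODE]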
 
This is definitely true but I also think you're far more likely to find those advantages on the CPU side than on the GPU. [...]

True. Nixxes also briefly touched upon that. They are supposedly working on CPU optimizations (among other things), so it's not really the final product yet. Optimizations are promised, at least.

That's the trick with NVIDIA: they give just enough VRAM that it appears to be fine, until it isn't. [...]

2GB indeed was not enough, half of the consoles' VRAM budget at the time. Fortunately it's not as bad today, where consoles sit around 10GB of VRAM (a guess); most GPUs in the same performance class as the consoles aren't sitting at half the VRAM budget, at least.
 
By the way, do not forget that this person tried to pass off the game's normal system RAM usage as VRAM usage. He doesn't even fully understand the tool he uses (CapFrameX):

[Image: waQyA5V.jpg]

[Image: jyo5Z8A.jpg]

I also wondered if he renamed the parameters, but that's not it either. As you can see, while his RX 6800 shows 15 GB of usage, his "Used mem game" is showing 6.05 GB, most likely normal system RAM usage. That did not stop him from saying his GPU was using a full 8 GB, however. At this point I found the post really frantic, and beyond hopeless. The fact that the actual VRAM usage for the 2070 is nowhere to be seen is also somewhat suspicious.

I mean, in general, his videos are an editorial mess. Every overlay he uses on different PCs is different, everything looks weird, we cannot see full GPU utilization most of the time, the overlay is cut in half in side-by-side comparisons; it is uneven and unregulated. This is not how you do content; you must give it some care and attention. It just looks like a mess cobbled together. The stats he shows are an incoherent blob, it just looks like a huge mess overall. He could've shown shared VRAM usage; instead we just see "Bus speed" at 100 MHz for some unknown reason. We don't even get to see core clocks properly most of the time.
 
Hello! I was responding to the discussion, which always criticises NXG's contributions rather than discussing technical points. I've not watched your content and have no personal opinion, so I can hardly be biased! I'm only biased in favour of genuine technical discussion held at a competent level which is fact-based and can discuss methods and datapoints.

IF NXG tech breakdowns are as low-grade as everyone here states (interpreting scans of the discussion rather than being involved in depth with said discussion), they shouldn't be posted here. There is no gate-keeping of info on the internet, which is as it should be, but that doesn't make everything out there worthy of discussion. No-one should be posting anything that they consider to be poor data. People should only paste stuff in this thread if they believe it to be valuable content, at which point the discussion should be about the content. Ideally there'd be consensus on what content is worthwhile and what isn't so discussions don't constantly get bogged down in arguments about reputation and biases.

I respect your appearance here and hopefully you can talk about methods and results in a way that contributes to the discussion. I will pay more attention to this thread for the time being to ensure the discussion is healthy and productive.

Edit: One suggestion, based on how people have spoken of NXG videos in the past, coupled with your confusion here: I wonder if there's a communication issue, and the information you are presenting is being interpreted in a different way? Perhaps people should question what precise information you were meaning to share and what they heard? NX Gamer is here to talk to directly and to ask questions about their positions without assumptions being needed. The discussion floor is open...
The message never came across as that, it came across as do not post that here or discuss 'his' work.

I appreciate the response and clarity here though, so thank you. And yes, I can confirm I am the NXgamer from the videos; that is said tongue in cheek just because it sounds silly to me.
View attachment 6910

A usual case where you lose half your framerate: it literally uses 4.5 GB worth of normal RAM as a substitute for VRAM. The fact that dedicated + shared GPU memory always amounts to something like 10 GB is proof that PS5-equivalent settings simply require a 10 GB total budget.

Naturally, using this much normal RAM as substitute VRAM tanks the card's performance, just like it does in Godfall, and just like it happened back in 2017 with AC Origins. This is a widely known thing: if your GPU starts spilling into normal RAM, you lose huge amounts of frames. Your compute power goes to waste because the GPU actively waits for slow DDR4 memory to catch up. If you have, as I said, enough VRAM budget, this does not happen, and you simply get your compute's worth of performance, whatever GPU you have.

DDR4 RAM is not as speedy as GDDR6 VRAM. Maybe DDR5 at 6400 MHz would lessen the tanking effect, but at that point, if you can afford DDR5/12th-gen Intel platforms, you wouldn't be gaming on an 8 GB 2070/3070 either...

So if you have a 3070/2070, you do not play with tanked framerates where almost 50% of your compute power is wasted on stalling. You instead lower the resolution to 1440p, or suck it up and use high textures. That way you get the full power of your GPU, and in the case of 1440p you will even have higher framerates, upwards of 70+ instead of hovering around 40 or so.
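A small diagnostic in the spirit of those screenshots: DXGI splits GPU memory into a local (dedicated VRAM) and a non-local (shared system RAM) segment, so you can see directly when the working set has spilled over PCIe. The 1 GB threshold is an arbitrary assumption for illustration.

[CODE]
#include <dxgi1_4.h>
#include <cstdio>

// Returns true if a meaningful chunk of the GPU working set is sitting in the
// shared (system RAM) segment, i.e. the spill-over that tanks framerates.
bool IsSpillingToSystemRam(IDXGIAdapter3* adapter)
{
    DXGI_QUERY_VIDEO_MEMORY_INFO local = {}, shared = {};
    adapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &local);
    adapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_NON_LOCAL, &shared);

    printf("dedicated in use: %.2f GB, shared in use: %.2f GB\n",
           local.CurrentUsage / 1073741824.0,
           shared.CurrentUsage / 1073741824.0);

    // More than ~1 GB of GPU data living in system RAM usually means the GPU
    // is stalling on PCIe, which is the framerate collapse described above.
    return shared.CurrentUsage > (1ull * 1024 * 1024 * 1024);
}
[/CODE]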

@davis.anthony

That's my point. The PS5's VRR fidelity mode is able to do very high textures at native 4K and get upwards of 40 fps. That is respectable. The 2070 would be able to do so too, if it had enough budget, but that did not stop @Michael Thompson from using it as a comparison. I can actually get 55+ frames at native 4K with my 3070 with RT enabled and high textures all the time. Using very high textures, however, tanks the performance a lot. 4K DLSS Quality gets me upwards of 70 frames, but I'm still unable to use very high textures, sadly, without it tanking the performance. I have literally linked a video where a 3060 gets a 36+ fps average at native 4K with ray tracing. Not quite a match for the PS5, but still respectable, and it proves that the 2070 is hugely underperforming due to VRAM constraints (yes, I'm being a parrot, but I have to get my point across).

The 2070/3060 Ti, and to some extent the 3070, are seen as highly capable 1440p cards. I wouldn't assign the 3060 Ti to 1080p; even in this game at native 1440p it is able to get upwards of 70 fps with RT enabled. Unless you chase 1440p/144, these cards are a perfect match for 1440p. However, I expect these VRAM issues to start creeping in at 1440p too, so yeah, 8 GB seems like a value that will be required at 1080p by true next-gen titles 2-3 years from now. A 3060 Ti at 1080p is a safe bet for the next 3-4 years, but the 8 GB 3070 Ti will really experience a lot of "enough grunt but not enough memory" situations.

Now you, @Michael Thompson, claim that no one can know if the PS5 would perform better with more memory. To answer this, you can look at the 24 GB RTX 3090 and the 12 GB 3080 Ti. The fact that both have enough VRAM and perform the same proves that the game does not request more than 10 GB of memory. If it did scale and perform better with even more memory, as you suggest, the 3090 would take the ball and run away. No, the game is specifically designed for the 10 GB budget the PS5 can allocate from its total 16 GB pool. You know this very well, more than I do, as a matter of fact. So yeah, trying to compare 8 GB GPUs that are artificially handicapped to a 6.4 GB pool against the fully allocated 10 GB pool of the PS5 will naturally see your 2070 tank to half of what the PS5 is capable of. Congratulations, you made a great discovery about VRAM bottlenecks. And no, a 36 GB PS5 would not render the game faster; it just has the budget the game needs, because the game is specifically engineered towards that budget. My example of the 3090 not outperforming the 3080 Ti in any meaningful way proves this.
Great detail and I appreciate your effort here and for tagging me in it.

Rather than break down your post into chunked replies, I will summarise here so it is quick, as I am doing this on my break currently.

VRAM - Yes, it is an issue and I covered this years ago in multiple videos on how it CAN and WILL impact more than just performance, and I am actually glad that so many here are now talking about that point, which is something I have talked about for so long and which never seemed to sink in. Even those here that disagree with me are really agreeing, including yourself, which is a positive. I covered it here in this video and others (such as my God of War and PS5 launch videos): VRAM makes a difference, RAM makes a difference, and data is paramount in all computing work, of which games are likely among the most demanding and performance-impacting, bar rocketry, aviation, planes etc.

But we need to keep in focus that VRAM is hardware and is part of the GPU. My test(s) are designed to test the GPU at the same settings as the console (or as close as possible), which WILL and SHOULD include the VRAM. As such the results here (within the Fidelity mode only, mind, as I clearly signpost in the video) represent what all 2070 owners, and to some degree 3070 owners, will see when matching those settings. If a 12GB 2070 existed I would have tested it, so long as I had it. I did test and show this with the 16GB RX 6800, which will allocate 13.5GB of VRAM for the game, showing it can use more and that it does not suffer anywhere near the level the 2070 does. You may also realise that this scenario is exactly what the I/O, SSD, cache and other areas within the PS5 are designed to resolve or at least mitigate. RAM is expensive, and as such it is either cut down (as GPU cards often do) for cost, or maximised (which is what the consoles do, and specifically the PS5). As such the results here are not meant to show the PS5 in a positive light (although many think that is my aim, it is not), or to show the PC in a bad light (again not my aim, although many think it is). As in everything I do, it is just data and information for owners and reflects how THIS game, in THIS mode, on THIS machine will run for everyone (give or take). Also, dropping settings lower, be they textures, hair, particles, whatever, is not a like-for-like test. Had I dropped textures to High the IQ impact would still happen, and then reductions in CPU load (texture decompression is reduced), bandwidth and cache would all help. Which brings me on to my next point, the VRAM 'Bug', as you call it, on PC.

Part 2 below:-
 

BUG - This is not a bug at all. I shared a shot earlier showing that the game has to reserve and duplicate RAM in system RAM for VRAM, and on the 2070 that is approx 7.6GB of system RAM, which will be a heavy portion of mirrored and/or duplicated/active buffers etc. in flight. For this game, but really any game, the VRAM pool also needs a buffer to hide latency, unknown unknowns so to speak, which would otherwise cause the game to require a swapping of the pools to load the new data, or worse still crash out. As such the amount reserved will be specific to your entire system and VRAM budget (as noted with my RX 6800); this will include a NON-reserved/allocated chunk of address space so that the game does not stutter, stall or crash from a data race or memory stomp issue. This is why reducing the RT and/or textures on the 2070 helps reduce the mip issue, as the space is not all being used and so does not sit outside of VRAM. Which brings me to my penultimate point.

Architecture - I see that you are saying this has nothing to do with the PS5 architecture or console vs PC, but it really does. The point is that consoles have a unified RAM architecture (hUMA) and the PC does not, well, dGPUs mostly do not. If they did, then changes would not need to be made to the source to compensate (on top of whatever work the driver will be doing, although DX12 puts memory allocation almost entirely on the developer now compared to DX11). The fact is the PC has to 'waste' RAM, bandwidth, CPU and GPU cycles etc. where the console does not, and the results show this. This is not to say things cannot be changed or improved, but they cannot be solved, only worked around or powered past (within reason). Hence why more VRAM helps, faster PCIe helps, a faster CPU helps, faster system RAM helps, etc.

Final piece is the CPU - The 2070 is hardly ever CPU bound in the Fidelity mode test, but I do state that it is almost always either memory bound or CPU bound in the others. I also state the RX 6800 is nearly always CPU bound even in Fidelity mode due to the high CPU demands. Which brings me back to the point above: as I stated, and to be clear as Mark Cerny did at the reveal, the dedicated decompression blocks, aligned SSD speeds and entire architectural design here mean the CPU demands on PC, in the current technical sphere in that space, will be far higher than in previous generations. I discussed this before the PS5 launched, stating that achieving identical PS5 results would need higher requirements, and that doubling a 30fps title to 60fps would demand extremely high ones.
 
BUG - This is not a bug at all. [...]
The problem is that you're fixated on the idea that developers have to find solutions for, or adhere to, an 8 GB VRAM limit. They do not have to. In reverse, 8 GB users do not have to match PS5 settings like for like either. I can use high textures in this game, use very high geometry and get better RT IQ than the PS5 while not incurring a VRAM constraint bottleneck. Also, it took nothing more than ample Pascal and Maxwell stock to push the GTX 780/770 down to minimum-requirements, lower-end GPU status. People will flock to new and shiny GPUs if NVIDIA can supply them in volume. You may well see the RTX 2070 regarded as a low-end GPU just 4 years later, which would coincide with actual next-gen title releases. 4 years from now, the 2070 will indeed be a lower-end GPU, the 3070 an upper-lower-end GPU. This is not a joke. Even the RTX 3050 from 2021 can, in some cases, match a 1070 Ti from 2016. You know the 1070 Ti was a high-end GPU, and it took a mere 5 years for it to drop to low-end status.

Even this year we may get 12 GB 4060s, and we already have 12 GB 3060s. Can you tell me whether devs or gamers bothered with 2 GB 770s? That GPU too had insufficient VRAM for the exact reasons you talk about, "costs" and "practicality". That did not stop gamers from abandoning them. They quickly saw that if they stayed on those GPUs: 1) they would get enormously lower performance compared to newer cards, and 2) they would get ugly texture quality. Naturally, if a similar thing happens, 8 GB GPUs will be long forgotten and left behind. That is what I'm trying to tell you. At that point, if you have an 8 GB 3070, you're not entitled to PS5-level textures anyway. At that point, say 2026, when an actual next-gen PS5-only game lands on PC, you will have a 12 GB RTX 6050 that roflstomps a theoretical RTX 4070. This is the PC space; people always move on. I remember back in 2013 how high end the 770 and 780 seemed to me. In a mere 3 years the entire gaming community somehow abandoned the entirety of the Kepler/Fermi lineup because NVIDIA had ample stock of Pascal and Maxwell. The 1060 alone single-handedly provided a huge performance jump over the GTX 770, and also had enough VRAM to comfortably run all PS4 ports and 3rd-party games, and it still does to this day. If you had held this mentality back then, you would have made the exact same videos for the GTX 770, pitting it against the PS4 in AC Unity and showing how lackluster its performance was against the PS4. I took a look at videos, and it renders around 34-35 frames; the 4 GB model actually gets upwards of 40 frames. It is the exact same situation.

There's no solution. You speaking about it won't change any facts. Devs won't care. The 2070 being perceived as better than the PS5 is not relevant either. You also use the 750 Ti as a point of reference, but that GPU too is buckled by its VRAM in most cases. The solution for 8 GB cards is easy: play at 1440p, reduce texture resolution, simply wait for next-gen GPUs, or, if you do not care about it, do not use RT at all. You may see the RTX 2070 as high end, but I bet you it will be a low-end GPU, as I said, just 2 years later when a mere RTX 5050 roflstomps it. It is just the nature of PC hardware. It is not even expensive or mind-breaking to upgrade to a new 60-series card every 2 years. You can also sell your old card each time.

You being fixated on the 2070 is like someone being fixated on the GTX 760 back in 2017. Imagine having a 4-year-old GTX 760, supposedly midrange in 2013 but super low end by 2017, and thinking that you're entitled to good graphics and that architectures must shift around it. They won't; it has only a 2 GB buffer. The GTX 1060 is right there, providing 3x-4x the performance while having 6 GB of VRAM, which solves the entire myriad of problems in one package. It is also cheap, and you can make it cheaper still by selling your 760 and moving on. If, for example, VRAM constraints reach a point where I find textures unpleasant in games, I will sell my 3070, get a 5060/5070 and move on. Those cards will most likely have 12/16 GB VRAM configurations. 12 GB 3060s/4060s/5060s alone would create such a huge presence that all of a sudden most people would see 8 GB 3070s/2070 Supers as relics. It is what it is.

You somehow act as if buying an RTX 2070 entitles you to PS5-equivalent performance for a long period of time, or you make it seem like all RTX 2070 users should feel betrayed for experiencing this. Well, the writing was always on the wall with VRAM. I'm sure 90% of RTX 2070 owners do not even know about your video. Most of them play at native 1440p, which pretty much gets you a 60+ framerate experience. Even the most casual RTX 2070 user will enable DLSS Quality on top of that 1440p output, which makes it almost impossible to run into huge VRAM bottlenecks. Even then, all the RTX 2070 friends I have always had something above a Ryzen 3600, and some of them have even upgraded to a 5600X, in which case they won't experience the dreadful 2700 bottleneck you show us. Most of them do not even care how their GPU performs against the PS5. They get good performance and are happy. Using an 8 GB 2070 in a native 4K configuration is an extreme niche, however you put it.
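For what it's worth, the VRAM relief from 1440p + DLSS Quality is easy to ballpark, since Quality mode renders internally at roughly two-thirds of the output resolution per axis; the bytes-per-pixel and render-target counts below are illustrative assumptions, not engine data.

```cpp
// Quick arithmetic sketch of why 1440p + DLSS Quality eases VRAM pressure: most
// screen-sized render targets shrink with the internal resolution. The per-pixel
// size and target count are assumptions for illustration only.
#include <cstdio>

int main()
{
    const int outW = 2560, outH = 1440;
    const double scale = 2.0 / 3.0;                  // DLSS Quality factor per axis

    const int inW = static_cast<int>(outW * scale);  // ~1706
    const int inH = static_cast<int>(outH * scale);  // ~960

    const double bytesPerPixel = 8.0;                // assumed average across targets
    const int    targets       = 6;                  // assumed number of screen-sized targets

    const double nativeMiB = outW * outH * bytesPerPixel * targets / (1024.0 * 1024.0);
    const double dlssMiB   = inW  * inH  * bytesPerPixel * targets / (1024.0 * 1024.0);

    printf("Screen-sized targets: ~%.0f MiB at native 1440p vs ~%.0f MiB with DLSS Quality\n",
           nativeMiB, dlssMiB);
    return 0;
}
```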

The fact that you still compare a 750 Ti to the PS4 is actually funny to me. Who stays on a 750 Ti, and why? Even for 160 bucks you could get a 1650 Super. People should not be that cheap. 160 bucks back then used to get you a 750 Ti. Sell it for 50-60 bucks, add 100 bucks on top for a new 1650 Super, get 4 GB of VRAM and an enormous performance increase, and enjoy games at proper fidelity levels. How hard can it be? The GTX 770/750 Ti situation, since 2018, has reached a point where it is really pointless to compare them to the PS4 and draw conclusions. No one in their sane mind would stay with those cards, because why would they? Even a 1050 Ti roflstomps a GTX 770 in modern games, and that is a very, very cheap low-end GPU. We're not even talking about super-expensive cards here. We're actually talking about cards that were SUPER expensive back then being stomped by SUPER cheap cards. The GTX 770 cost a fortune in 2013 for a lot of gamers. If someone bought it and then expected good performance from its 2 GB buffer until the end of the generation, they were met by the cheap 1050 Ti, which had a 4 GB buffer and practically ran all VRAM-constrained games better while costing far less. Why would devs even care about the GTX 770 when the 1050 Ti exists? They won't care about the 2070 or 3070 once 12 GB 4060s and 12 GB 5060s flood the market, and the 12-16 GB 4070/5070 will be the first proper entry-level 4K GPUs. Yes, they will still only be entry-level 4K GPUs, which is what the 3070 has become at this point. It is no wonder the PS5 is also an entry-level 4K GPU, as we have seen with The Last of Us remake.
 
As such the results here (within the Fidelity mode only, mind, as I clearly sign-post in the video) represent what all 2070 owners, and to some degree 3070 owners, will see when matching those settings.
No they won't, because once again your CPU is piss poor in Spider-Man and does not represent the performance people with better CPUs will have.

Your results only represent what people with similar CPU performance to yours will see.

Why are you still not understanding that?
 
The problem is that you're fixated on the idea that developers have to find solutions for, or adhere to, an 8 GB VRAM limit. [...] There's no solution. You speaking about it won't change any facts. Devs won't care.
I am sorry, but your reply has just gone off on a tangent and sounds like "yeah well, in 3 years the PS5 will be low end along with the 2070", plus other things I have not said or implied.

e.g. the "there's no solution" point above: I say the EXACT opposite, both in the video and in my post. You can POWER PAST the issue with more VRAM etc., as has always been the case on PC. But the reality is that an 8GB GPU owner is going to have to lower settings because of their hardware. The fact that you and others do not like that, or play semantics, does not make it less true; my videos are based on evidence-based reporting and analysis, not on selling a product, making things look better or playing an angle.
 