Will GPUs with 4GB VRAM age poorly?

That 480 4GB with a copy of Civilization VI for $169 is incredibly tempting. Not sure I can justify another $70 just for 8GB when I only have 1080p screens.
 
At that price you could even buy one now and easily sell it for ~$130 if/when you get a higher-resolution monitor and need more VRAM.

Damn, you guys get really good GPU prices. The cheapest RX 480 4GB here is €240 and the cheapest 8GB model is €300.
 
4 GB isn't going to be the end of the world. The PS4 Pro only has another 512 MB, and while the PR claims of 5+ GB for games are there, they are "conditionally there", meaning the system OS can snap some of that up at any time.

If you're on a budget, 4 GB looks perfectly fine for this generation, especially if you're saving $70 with otherwise no performance drop.
 
The ALU-to-memory-access ratio will shift even more towards the ALUs in the future. It has to; bandwidth just isn't following the sharp rise in computational power.
This could mean much larger L2 SRAM caches on die relative to current levels. HBM3 is expected to be released in 2020; 8 GB of the highest-performing HBM3 theoretically offers up to 4 TB/s of bandwidth. Hopefully HBM2/3 costs will be low enough for $250-300 cards. If HBM2/3 can be affordably placed in the PS5/Xbox 0.25, we could see the needed bandwidth increase nearly match the ALU increase.
 
I very much doubt HBM or some technology similar to it will be cheap any time soon. We've got to wait until well past the turn of the decade for that miracle to happen.
 
But by that time we had the GTX 680/7970, and 3 GB/4 GB cards were a reality (4 GB GTX 680s). However, 1.5 GB cards were more than enough, and you only needed more if you planned to use resolutions above 1080p, at least in those three games (GTA V, Max Payne 3, Rage).
The 4 GB GTX 680 was released much later. Most GTX 680s on the market are 2 GB models, and the 670 was also very popular. The 2 GB GTX 670/680 has become a very common minimum requirement in current games.

Most sites downplayed the advantage of AMD's 3 GB of memory vs Nvidia's 2 GB. 3 GB didn't show any real-world advantages at that time. Similarly, later reviews of 4 GB models didn't show any gains, except at the highest settings and very high resolutions (and with a high amount of MSAA <- not something that current games support anymore). But unless you liked sub-60 fps gaming, these minor gains at max settings + max resolution were mostly a curiosity. At that time most people called high-memory graphics cards a marketing trick.

Rage was running very well on PS3 (mostly locked 60 fps), and the PS3 only had 256 MB of graphics memory. I remember the PC port being bad (at launch), but I don't remember it being this bad. Virtual texturing is a huge reduction in GPU memory usage. IIRC id Software used an 8k*8k texture atlas (per layer) on PC to hold all their virtual texture data (8k*4k on consoles). With three DXT-compressed material layers this is only 192 MB total (for all texture data).
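A quick back-of-envelope for that 192 MB figure (a sketch; the atlas dimensions and layer count come from the post, the DXT5-class 1 byte per texel rate is my assumption, since the post only says "DXT compressed"):

```python
# Back-of-envelope for the physical texture atlas size quoted above.
# Assumption (mine): each material layer is stored at a DXT5/BC3-class
# rate of 8 bits (1 byte) per texel.

atlas_w, atlas_h = 8192, 8192        # PC atlas per layer (8k*8k)
layers = 3                           # e.g. albedo, normal, specular
bytes_per_texel = 1                  # DXT5/BC3 ~ 8 bits per texel

total_bytes = atlas_w * atlas_h * bytes_per_texel * layers
print(total_bytes / (1024 ** 2), "MiB")   # -> 192.0 MiB
```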

A quote from Geforce.com about enabling the highest texture cache size in Rage: "Be aware, however, that you will almost certainly require a video card with 1.5GB of Video RAM to enable 8K textures". With normal settings a 1 GB graphics card was just fine.
In 2013 things got out of hand with the release of the likes of COD Ghosts. From then on we enter the territory where video games use crazy amounts of VRAM irrespective of the user's resolution, and use it just to store textures. At that time we saw the need for 3 GB and 4 GB cards. BF4 recommended 2GB+ cards for large maps, Arma 3 did the same, and then Thief did the same thing as well! Then we had Watch_Dogs, Wolfenstein, Titanfall, Daylight, Dead Rising 3, Ryse, COD Advanced Warfare, Dying Light, Dragon Age Inquisition, Far Cry 4, The Evil Within, Just Cause 3, COD Black Ops 3 and Evolve all do the same, sometimes even pushing the 3 GB limit to 4 GB! Then we pushed further into the territory of 4GB+ games like Shadow of Mordor, Batman: Arkham Knight, Assassin's Creed Unity, Rise of the Tomb Raider, Mirror's Edge Catalyst, Rainbow Six Siege, Doom, GTA V, and many of the AAA games released in 2016: Dishonored 2, Gears of War 4, Forza Horizon 3, Deus Ex: Mankind Divided, COD Infinite Warfare. Some of these games already push 7GB+ VRAM utilization!
Every single game in this list runs well on a 2 GB card at 1080p. Resolutions higher than 1080p might not be viable and you can't select the highest texture quality settings. You might also see some occasional stuttering from VRAM swapping (overcommit).

I am not claiming that a 2 GB graphics card is perfect for playing modern games. You need to compromise. I am arguing that currently there are close to zero gains from having more than 4 GB if you play at 1440p. There are games that overcommit video memory and "load" up to 7 GB of textures at once. This still doesn't mean anything. DirectX 11 + the graphics driver automatically swap textures between CPU<->GPU memory. Even if a game overcommits 7 GB of textures, it still touches significantly less than 1 GB of texture memory per frame. The game will certainly fill your 8 GB card if that much memory is available, but there's no real frame rate advantage. As long as the active texture set doesn't change too quickly, the driver can transparently swap all required textures in and out without stalls.
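A rough sanity check on the "less than 1 GB touched per frame" point (every factor below is my own deliberately pessimistic guess, purely for illustration):

```python
# Pessimistic estimate of the unique texture data a single 1440p frame samples.
# All factors here are assumptions chosen for illustration.

pixels = 2560 * 1440                 # visible pixels at 1440p
material_layers = 4                  # albedo, normal, roughness, AO...
bytes_per_texel = 1                  # BC-compressed, ~8 bits per texel
overdraw_and_mips = 3                # fudge factor: overdraw + extra mip taps

touched = pixels * material_layers * bytes_per_texel * overdraw_and_mips
print(touched / (1024 ** 2), "MiB")  # ~42 MiB, far below the resident 7 GB
```

The driver of course pages whole resources rather than individual texels, so the real number is higher, but it illustrates why the per-frame working set is tiny compared to what's merely resident.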

Good 2GB vs 4GB benchmark:
http://www.eurogamer.net/articles/digitalfoundry-2015-nvidia-geforce-gtx-960-2gb-vs-4gb-review

Result: AC: Unity is the only game that shows noticeable gains from 2 GB -> 4 GB (at both 1080p and 1440p, on both Radeon and GeForce). The other games show practically zero gains from going to 4 GB, even at 1440p + ultra quality. I am not convinced that an 8 GB card is a necessary requirement for 1440p gaming. Future games might obviously change this.
What does it help if the engine doesn't do it? I guess he's suggesting that taking the matter into your own hands and buying more VRAM, instead of praying that _all_ engines are smart with memory (to equal effect), gives guaranteed improvements.
DX11 + the graphics driver handle overcommitted graphics memory surprisingly well. It has become common practice to overcommit graphics memory on PC. DX12/Vulkan don't support automatic overcommitment; you need to program the CPU<->GPU memory swapping on your own. That's the main reason why some DX12/Vulkan ports need more GPU memory than the DX11 versions.
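For illustration, here's the kind of bookkeeping that moves from the driver into the game under DX12/Vulkan, reduced to a toy LRU sketch (pure Python, with hypothetical upload/evict callbacks standing in for the real heap and copy-queue work; not any actual D3D12/Vulkan API):

```python
from collections import OrderedDict

class TextureResidency:
    """Toy LRU residency manager: the sort of CPU<->GPU swapping a
    DX12/Vulkan title has to implement itself. `upload` and `evict` are
    hypothetical callbacks standing in for real heap/copy-queue work."""

    def __init__(self, budget_bytes, upload, evict):
        self.budget = budget_bytes
        self.upload = upload            # copy texture from system RAM to VRAM
        self.evict = evict              # release its VRAM (data stays in RAM)
        self.resident = OrderedDict()   # texture_id -> size, in LRU order
        self.used = 0

    def touch(self, tex_id, size):
        """Call before a frame samples tex_id; makes it resident if needed."""
        if tex_id in self.resident:
            self.resident.move_to_end(tex_id)          # mark recently used
            return
        # Evict least-recently-used textures until the new one fits.
        while self.used + size > self.budget and self.resident:
            old_id, old_size = self.resident.popitem(last=False)
            self.evict(old_id)
            self.used -= old_size
        self.upload(tex_id)
        self.resident[tex_id] = size
        self.used += size
```

Under DX11 the driver effectively runs a loop like this behind your back; skip it in a DX12/Vulkan port and the easy fallback is keeping everything resident, which is exactly the extra memory use mentioned above.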
 
This could mean much larger L2 SRAM caches on die relative to current levels. HBM3 is expected to be released in 2020; 8 GB of the highest-performing HBM3 theoretically offers up to 4 TB/s of bandwidth. Hopefully HBM2/3 costs will be low enough for $250-300 cards. If HBM2/3 can be affordably placed in the PS5/Xbox 0.25, we could see the needed bandwidth increase nearly match the ALU increase.
How is HBM2/3 going to help your 8GB RX 480 or 8GB 1080? :) A GTX 480 (first NV GDDR5 board) had 7.5 flops per byte, a GTX 680 was at 16 flops per byte and a GTX 980 is at 20 flops per byte. With the GTX 1080 we're at 25.7 flops per byte (it's kinda fun thinking about this, since a "flop" needs 12 bytes of input and produces 4 bytes of output). Of course Tesla P100 with HBM2 shifts it back a bit to 13.2 flops per byte. But P100 is strongly compute centric, meaning double precision arithmetic and the larger register file take their (transistor) toll. If you were to scale GP102 up to GP100 size you could probably get a GPU with 40 SMs and 13 TFLOPS, which combined with HBM2 would net you 18 flops per byte. Honestly I'm amazed that some people want to see P100 in gaming.
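Those ratios fall straight out of peak FP32 rate divided by memory bandwidth. A quick check using the usual spec-sheet numbers (my figures; minor rounding differences from the ones above are expected):

```python
# flops per byte = peak FP32 rate / memory bandwidth.
# Spec-sheet base clocks (GHz; the GTX 480 uses its shader "hot" clock)
# and bandwidths (GB/s).

cards = {
    # name:       (fp32 cores, clock GHz, bandwidth GB/s)
    "GTX 480":    (480,  1.401, 177.4),
    "GTX 680":    (1536, 1.006, 192.2),
    "GTX 980":    (2048, 1.126, 224.0),
    "GTX 1080":   (2560, 1.607, 320.0),
    "Tesla P100": (3584, 1.328, 720.0),
}

for name, (cores, ghz, bw) in cards.items():
    gflops = cores * 2 * ghz    # 2 flops per FMA per clock
    print(f"{name}: {gflops / bw:.1f} flops per byte")
```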
If you have been following NV's hints, next year Volta will increase compute density quite a bit again, which will at worst make even the HBM2 parts match current GDDR5X Pascal ratios and at best overshoot that. And I doubt we'll get an all-HBM2 lineup next year.

I guess the memory craze started with the R9 290X (4GB) -> R9 390X (8GB). But there was a point there: Hawaii and older GCN GPUs are not able to texture from compressed render targets, meaning there is an implicit decompress step for every render target later used as a texture, which does take up additional memory. That's no longer the case with newer chips.
 
That article is a year old, to be fair... A test with newer games would be better.
If 2 GB was acceptable for 1440p + ultra details one year ago, 4 GB should be acceptable for 1440p + ultra details this year. I don't believe game memory consumption has doubled in just one year.

Consoles are still the same. AAA games are developed mainly on consoles. On consoles you can spend around 2-3 GB on graphics memory (garlic), depending on your CPU memory needs. PS4 Pro runs 1440p / 4K checkerboard just fine with 3 GB or less of graphics memory. More complex game worlds (GTA) of course need more CPU memory (leaving even less for the GPU). I don't believe we'll see a sudden jump in GPU memory requirements, as all consoles are sticking with the same memory amount until Scorpio comes out late next year.

8 GB is obviously needed if you want to run some high quality PC texture mods, but the sole reason for this is that mods are not very well optimized; texture streaming should handle this case fine. You'd also like to have 8 GB for multi-monitor setups and for 4K. 8 GB is also required for game development (running multiple game processes simultaneously and/or an editor with uncompressed texture data, no streaming, etc). The 12 GB Titan X is a popular card among game developers: it allows you to run unoptimized code & unoptimized data with multiple 1440p screens inside the development tools.
 
A quote from Geforce.com about enabling the highest texture cache size in Rage: "Be aware, however, that you will almost certainly require a video card with 1.5GB of Video RAM to enable 8K textures". With normal settings a 1 GB graphics card was just fine.
Sadly, it wasn't fine. At normal settings the game suffered texture pop-ins and generally slow loading of textures, even with the slightest turn of the camera; we had to load the highest textures to avoid these problems.

Every single game in this list runs well on a 2 GB card at 1080p. Resolutions higher than 1080p might not be viable and you can't select the highest texture quality settings. You might also see some occasional stuttering from VRAM swapping (overcommit).
At 1080p you couldn't select the highest texture setting without suffering massive hitching and stutters; you couldn't even do that at 720p! You had to select a lower texture level and suffer texture pop-ins or blurry textures depending on the game. For high-end cards, that is not acceptable.
I am arguing that currently there are close to zero gains from having more than 4 GB
Doom, Mirror's Edge Catalyst, COD Black Ops 3, COD Infinite Warfare, Dishonored 2, Gears of War 4 and Rise of the Tomb Raider (in extended gameplay sessions) use more than 4 GB of VRAM at the highest texture/shadow/reflection levels. They stutter like crazy or suffer frame drops when running on 4 GB cards; again, that is not acceptable.

4 GB should be acceptable for 1440p + ultra details this year. I don't believe game memory consumption has doubled in just one year.
The problem is: PC games use much higher texture resolutions than consoles and use more memory-heavy effects (alpha transparency effects, higher LOD, higher reflection and shadow resolution, etc.). That, combined with a bit of overcommitment, often results in abnormally high VRAM consumption. Consoles don't suffer that issue because developers are much more careful and less liberal in their approach there.
 
Doom, Mirror's Edge Catalyst, COD Black Ops 3, COD Infinite Warfare, Dishonored 2, Gears of War 4 and Rise of the Tomb Raider (in extended gameplay sessions) use more than 4 GB of VRAM at the highest texture/shadow/reflection levels. They stutter like crazy or suffer frame drops when running on 4 GB cards; again, that is not acceptable.

You can add at least Deus Ex: MD and Rainbow Six: Siege to the list.
 
Sadly, it wasn't fine. At normal settings the game suffered texture pop-ins and generally slow loading of textures, even with the slightest turn of the camera; we had to load the highest textures to avoid these problems.

Don't think that was a result of not enough VRAM. You could enable CUDA texture transcoding and any noticeable "pop-in" would be significantly reduced. In my case, even with a GTX 950, it's nearly imperceptible.
 
Don't think that was a result of not enough VRAM. You could enable CUDA texture transcoding and any noticeable "pop-in" would be significantly reduced. In my case, even with a GTX 950, it's nearly imperceptible.
Using 4k textures resulted in significantly more blurring and pop-ins than 8k textures, and 8k consumed upwards of 1.5 GB of VRAM. Here is the comparison made in NV's guide:
-With 8k textures enabled the game was extremely crisp and nearly every object was highly detailed.
-Unexpectedly, the 4k textures were of an extremely low quality, a result we verified by visiting other outdoor areas.
-With the configuration file removed the Auto-Balancer was given free reign. The result was textures of a higher resolution, compared to the forced 4K textures, but well below that of the forced 8K textures. As such, we’d estimate that this represents 5-to-6k.
http://international.download.nvidi.../ragetexturetweak/RageComparison-Animated.gif

http://www.geforce.com/whats-new/ar...-resolution-textures-with-a-few-simple-tweaks
 
I remember my 1GB 560 Ti had some problems with Rage, only at 2720x1536 in some game areas though. Let's not talk about how my 6970 2GB fared with that game though. ;)

GPU transcode just minimizes pop-in. You can still spin around and see it in Rage, Wolf and Evil Within.
 
I remember my 1GB 560 Ti had some problems with Rage, only at 2720x1536 in some game areas though. Let's not talk about how my 6970 2GB fared with that game though. ;)

GPU transcode just minimizes pop-in. You can still spin around and see it in Rage, Wolf and Evil Within.

Sure, but with it enabled and the proper max ppf setting (and VT compress in Wolfenstein) you can get it to the point where it's extremely minimal.
 
Doom, Mirror's Edge Catalyst, COD Black Ops 3, COD Infinite Warfare, Dishonored 2, Gears of War 4 and Rise of the Tomb Raider (in extended gameplay sessions) use more than 4 GB of VRAM at the highest texture/shadow/reflection levels. They stutter like crazy or suffer frame drops when running on 4 GB cards; again, that is not acceptable.
My experience with a GTX 980 isn't like this. It doesn't stutter like crazy in these new games (at 1440p). The GTX 980 / 980 Ti were still Nvidia's flagship GPUs half a year ago (before the GTX 1080 was released last summer).

The highest (ultra) settings in many PC games are just silly: a huge additional performance / memory cost, but only a minor IQ advantage over the second-highest settings. Obviously if a game does stuff like brute-force rendering shadow maps at 16k*16k, it will tank any GPU, no matter how fast or how much memory it has. The same goes for ultra high quality (separately downloadable) texture packs (and mods); 2x2 higher texture resolution takes 4x more memory. If you want these features and are willing to sacrifice the frame rate, then an 8+ GB card obviously makes sense.
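To put rough numbers on those two examples (simple arithmetic; the 32-bit depth format for the shadow map is my assumption):

```python
# Memory cost of the "silly ultra" examples above.

# A brute-force 16k*16k shadow map, assuming a 32-bit depth format (D32):
shadow_bytes = 16384 * 16384 * 4
print(shadow_bytes / (1024 ** 3), "GiB")   # -> 1.0 GiB for a single shadow map

# Doubling texture resolution in both dimensions (2x2) quadruples memory:
base_pack_mb = 100                          # arbitrary example pack size
print(base_pack_mb * 2 * 2, "MB")           # -> 400 MB for the same textures
```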

I am a 60+ fps gamer. If a game doesn't deliver 60 fps I tune down the settings, so I have been talking about quality settings that reach a stable 60 fps in all levels. The RX 480 and GTX 980 can't achieve a stable 60+ fps in some games with "super ultra" graphics settings at 1440p. 4 GB of memory is just fine if you don't use the absolute highest brute-force settings. I checked some newer 4 GB vs 8 GB reviews: differences can only be seen in some games at ultra high graphics settings, but in those cases the game fails to deliver a stable 60 fps anyway. So I personally don't have any need to upgrade my 4 GB GPU to an 8 GB model right now.

Only 4% of gamers have more than 4 GB of graphics memory (newest Steam Hardware Survey). Game developers will certainly ensure that the other 96% of gamers have a great gaming experience. I wouldn't worry about 4 GB right now. But 8 GB is of course more future-proof for a new computer (especially if you prefer visuals over frame rate). And remember to stay away from the 3 GB GTX 1060 model :)
 
The brand new Pascal GTX 1050 has only 2 GB of memory :(

It is faster than the current-gen consoles (except PS4 Pro), but isn't going to be able to play all future console games at console quality settings because of memory limitations. IMHO, 3 GB is still borderline acceptable for a brand new mid-tier GPU, but 2 GB isn't going to cut it anymore.
 
Hell, 2 gigs wasn't enough on a GTX 660. That's what I've got and I hit both RAM and GPU limits (thank you, GPU-Z!) all the time in console games.

Fact: Evolve kind of sucks when you're running at a super unstable frame rate at all low except res (I don't cut that because it makes shit blurry, and in Evolve, that could get your ass killed). Doom 4 is much better at this, but still doesn't run super great even at all low.
 