Will GPUs with 4GB VRAM age poorly?

Display resolution, off-screen or intermediate buffer resolution, and textures are going to be the primary consumers of memory, and these are going to vary wildly depending on the target platform and even user settings.

A game that runs on Xbox 360 and Xbox One is obviously going to have target resolution differences. The likelihood is that the 360 version is running a lower texture resolution as well. Being DX9, the lighting on the 360 version may not be as complicated as on the Xbox One version, which in turn could have ramifications for the other render buffers that you don't see / know about.

Even on PC, some of the offscreen buffers may or may not scale with resolution and/or the AA solution, etc.

Refer back to the Killzone PS4 demo postmortem and you can see an interesting breakdown of GPU memory use for a title targeted specifically at 1080p: http://www.redgamingtech.com/killzone-shadow-fall-demo-post-mortem/
 
Over 4GB of VRAM usage at 1080p is very much a worry for me. I'm intending to build my new HTPC soon to target 1080p gaming with a mid-range card, preferably a 470. I understand this is likely more of a corner case, but I'm hoping future titles won't continue this trend.
 
Well, that's what I mean. The game itself may not need much to run ("non-graphics-related memory"), so a large portion of the 5GB may have been for GPU consumption.

Things might even be worse on PC, given how memory is managed more directly on consoles, but I dunno.
 
It looks like video games have been in a frenzy for bigger VRAM since the introduction of the new consoles; however, the precedent predates even those.

For example, Call of Duty games have always shown a preference for storing massive amounts of high-res textures/shadows in the GPU's RAM, ever since the days of Modern Warfare 2. The series has now reached the point where even 4GB of VRAM is not enough at 1080p.

DOOM demonstrated the same behavior back in 2004 with DOOM 3, whose Ultra texture setting required a massive amount of VRAM for its time. In fact most id Tech titles did the same, such as Rage and Wolfenstein. The 2016 DOOM just reiterated the situation with an Uber-Ultra visual option that surpasses the capabilities of 4GB cards, again at 1080p.

Mirror's Edge Catalyst is new to this, however, with its Hyper setting: at 1080p it can eat more than 5GB without mercy. It wasn't the only newcomer, though. Shadow of Mordor, GTA V, Dying Light, AC Unity, Batman: Arkham Knight and many others have all shown they can eat through 4GB of VRAM, some at 1080p, some at 1440p and some at 4K. The only difference is that Fury cards were able to withstand those with driver optimizations; now we have 5 cases in which they simply can't.
 
More VRAM = less texture streaming, less swapping. In certain games and applications, this can translate into a very noticeable performance gain. Textures use fixed compression ratios, so for the same compression method, doubling the dimensions means quadrupling the size.
It's not only about back buffer sizes...
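A quick sketch of that fixed-ratio point, purely as a sanity check. The bits-per-texel figures are the standard ones for these block-compressed formats; the mip-chain overhead is the usual ~1/3 geometric-series approximation.

```python
# Block-compressed formats such as BC1 and BC7 store a fixed number of bits per
# texel, so doubling both dimensions quadruples the footprint regardless of content.
BITS_PER_TEXEL = {"BC1": 4, "BC7": 8, "RGBA8 (uncompressed)": 32}

def texture_bytes(width, height, fmt, with_mips=True):
    base = width * height * BITS_PER_TEXEL[fmt] // 8
    return base * 4 // 3 if with_mips else base  # full mip chain adds ~1/3

for size in (1024, 2048, 4096):
    row = ", ".join(f"{fmt}: {texture_bytes(size, size, fmt) / 2**20:.1f} MB"
                    for fmt in BITS_PER_TEXEL)
    print(f"{size}x{size} -> {row}")
```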

People should remember the sad story of the G80 320MB vs 640MB... In less than two years the first became completely unsuitable for playing the latest games at the same quality and frame rate as its fatter twin.
 
I still remember people debating (2013) whether a 2 GB video card was enough to last this console generation. Nvidia's top-of-the-line cards were equipped with less memory than AMD's (the 7970 had 3 GB). The GTX 580 had 1.5 GB and the 680 had 2 GB (a 4 GB model was released later). Some people just couldn't believe a GTX 680 wasn't going to be good enough to outlast the consoles.

Consoles have 5 GB of usable memory (shared between CPU and GPU). A PC GPU with 4 GB is going to be good enough to run all console ports (including the mid-gen refreshes) at 1080p. No resolution drops, locked 60 fps (vs 30 fps on consoles) and higher quality settings. Only 3% of PC gamers have a display larger than 1080p (http://store.steampowered.com/hwsurvey).

Of course if you have a 4K display + a card that can smoothly run games at 4K then go for 6 GB+... but there aren't even any 4 GB cards that reach 60+ fps @ 4K, so it's hard to make the wrong choice here.
 
I remember not being very confident about 2GB being enough, but are there many examples of games running at around console settings, where a GTX 680 can't perform adequately?

Of course these mid gen upgrades are well outside the scope of the original discussion.
 
If a texture is 4x4 then it is 16 texels, if it's 8x8 then it's 64 texels. Geometry!
I meant the fixed compression ratios.
Consoles have 5 GB of usable memory (shared between CPU and GPU). A PC GPU with 4 GB is going to be good enough to run all console ports (including the mid-gen refreshes) at 1080p. No resolution drops, locked 60 fps (vs 30 fps on consoles) and higher quality settings. Only 3% of PC gamers have a display larger than 1080p (http://store.steampowered.com/hwsurvey).
True, but that didn't become a problem until developers decided to push the combination of texture/shadow/reflection resolution in PC ports well above and beyond the consoles, necessitating more than 4GB @1080p.
 
Of course these mid gen upgrades are well outside the scope of the original discussion.
But then I wouldn't be surprised if MS decided to stick to 8GB (of GDDR5X) to keep cost in check. (Neo has, as far as we know, the same 8GB of GDDR5 as the PS4.)
 
True, but that didn't become a problem until developers decided to push the combination of texture/shadow/reflection resolution in PC ports well above and beyond the consoles, necessitating more than 4GB @1080p.
Of course you could crank up shadow/texture resolution to exceed even a 12 GB card's limits. But this mostly benefits 1440p and 4K; gains are minimal at 1080p. Remember that we are talking about 4 GB cards = mainstream (all high-end cards nowadays are 8 GB+). Brute-force scaling all settings (especially shadow map resolution) to maximum has a huge performance impact (for mainstream cards), but a very small image quality improvement (especially at 1080p).

Modern shadow mapping algorithms don't need limitless memory to look good. Virtual shadow mapping needs only roughly as many shadow map texels as there are pixels on the screen for a pixel-perfect 1:1 result. A single 4k * 4k shadow tile map is enough for pixel-perfect 1080p (= 32 MB). Algorithms using conservative rasterization need even less texel resolution to reach pixel-perfect quality (https://developer.nvidia.com/content/hybrid-ray-traced-shadows).
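Back-of-the-envelope check of that 32 MB figure, assuming a 16-bit depth format (a 32-bit depth format would double it):

```python
# A single 4k x 4k shadow tile map at an assumed 16 bits per depth texel.
shadow_res = 4096
bytes_per_texel = 2  # assumed 16-bit depth
size_mb = shadow_res * shadow_res * bytes_per_texel / 2**20
print(f"{shadow_res}x{shadow_res} @ {bytes_per_texel * 8}-bit depth = {size_mb:.0f} MB")  # 32 MB
```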

Similarly, increasing texture resolution doesn't grow GPU memory cost without limit. Modern texture streaming technologies calculate the required pixel density for each object (GPU mipmapping is guaranteed not to touch more detailed data). Only objects very close to the camera will require more memory. Each 2x2 increase in texture resolution halves the distance at which the highest mip level needs to be loaded. At 1080p a properly working streaming system will not load the highest-mip textures most of the time (fewer screen pixels = the highest mips are needed much less frequently). Of course there's some overhead with larger textures, but it is not directly related to the texture asset data size.

There are also systems that incur (practically) zero extra memory cost for added content or added texture resolution. Virtual texturing reaches close to 1:1 memory usage (loaded texels to output screen pixels). You could texture every single object in your game with a 32k * 32k texture and still run the game on a 2 GB graphics card. Loading times would not slow down either. But your game would be several terabytes in size, so it would be pretty hard to distribute it :)
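A rough sketch of the streaming heuristic described above: estimate which mip an object needs from its projected screen size, so the top mip is only resident when the object is close enough to actually use that many texels on screen. The FOV, object size and resolution here are illustrative assumptions, not any particular engine's streaming metric.

```python
import math

def projected_pixels(object_size_m, distance_m, screen_height_px=1080, vfov_deg=60.0):
    """Approximate on-screen height in pixels of an object of the given world-space size."""
    half_fov = math.radians(vfov_deg) / 2.0
    return object_size_m * screen_height_px / (2.0 * distance_m * math.tan(half_fov))

def required_mip(texture_res, object_size_m, distance_m):
    """Lowest mip (0 = full res) whose texel count roughly matches the screen pixels covered."""
    pixels = max(projected_pixels(object_size_m, distance_m), 1.0)
    return max(0, math.ceil(math.log2(texture_res / pixels)))

# Doubling the texture from 4k to 8k only shifts each mip threshold one step closer;
# it does not double the memory that is actually resident at typical distances.
for tex in (4096, 8192):
    for d in (1, 4, 16, 64):
        print(f"{tex} texture, {d:>2} m away -> streams mip {required_mip(tex, 2.0, d)}")
```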
 
Isn't the next generation of consoles just a year away, anyway? Those will probably have more RAM.
 
Isn't the next generation of consoles just a year away, anyway? Those will probably have more RAM.
Nope, not really. It's a .5; the general idea being more compute for 4K/VR, but it still needs to run games from the .0 at ~1080p. At least for the PS4.5. It looks like the Xbox 1.5 is going to be a fair bit later to market but a lot more powerful, so MS might do something different to help attract a larger user base to the 1.5.
 
If a texture is 4x4 then it is 16 texels, if it's 8x8 then it's 64 texels. Geometry!

Dunno what he meant about the back buffers.
The back buffer size is related to the screen resolution, but usually a game stores 2 to 4 back buffers, so the screen resolution is not the most important factor when choosing the VRAM size of a GPU.
Yes, there are other factors than textures and back-buffer sizes, like multi-sample resources, G-buffers, etc. Some of them are still related to the screen resolution.
However, if we approximate the problem and just look at its input complexity, the answer will be O(n), where n is the size of the texture resources in bytes.
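A rough illustration of why the texture term dominates: even several full-resolution render targets at 1080p add up to well under 100 MB, while the texture pool is what actually fills a 4 GB card. The formats, target counts and pool size below are illustrative assumptions, not any specific engine's layout.

```python
WIDTH, HEIGHT = 1920, 1080

# Per-resolution render targets (bytes), using assumed formats and counts.
render_targets_bytes = {
    "back buffers (3x RGBA8)": 3 * WIDTH * HEIGHT * 4,
    "depth buffer (D32F)":     WIDTH * HEIGHT * 4,
    "G-buffer (4x RGBA16F)":   4 * WIDTH * HEIGHT * 8,
}
texture_pool_bytes = 3 * 2**30  # assumed 3 GB streaming pool

for name, size in render_targets_bytes.items():
    print(f"{name:26s} {size / 2**20:7.1f} MB")
print(f"{'all render targets':26s} {sum(render_targets_bytes.values()) / 2**20:7.1f} MB")
print(f"{'texture pool':26s} {texture_pool_bytes / 2**20:7.1f} MB")
```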
 
But then I wouldn't be surprised if MS decided to stick to 8GB (of GDDR5X) to keep cost in check. (Neo has, as far as we know, the same 8GB of GDDR5 as the PS4.)

But they still push the processing power beyond that of a GTX 680, so even if the memory amount were good enough, it would likely fall behind.
 