Why is the amount of graphics RAM suddenly so important?

Mendel

It seems to me that there is a relatively large performance difference going from 512MB to 1024MB of graphics RAM, for example with the 4870 512MB versus the 4870 1GB version.

This is also highlighted by some 4850 vs. 5750 benchmarks, like for example this AnandTech bench.

I seem to remember that when the transition from 256MB to 512MB was going on, there was practically no difference. There was an uncompressed-textures mode for Doom 3 that showed some, but not massive, difference, and the IQ difference was non-existent to negligible for anyone but the most extreme IQ purist.

Going back a little further, when the first 256MB cards arrived, the advantage over 128MB cards seemed even less significant, and I seem to remember ATI and Nvidia having to justify making such cards in some interview by referring to resolutions and AA settings that weren't playable anyway and that nobody had monitors to run.

So, what happened? Why is 1GB over 512MB suddenly such a significant advantage? Is it because of the current generation of consoles and ports of their games? Is it simply because people are using higher resolutions? Did textures suddenly get bigger? Or am I totally wrong, and this isn't in fact significant or different at all? (If someone can refresh my memory of how much frame buffer is generally needed for 1080p or 2560x1600, I would be delighted.)
 
We now have the compute power to do LOTS of antialiasing, which soaks up a lot of framebuffer. And very high resolution output devices are now at commodity prices, and that too soaks up a lot of framebuffer.

Even back in the 256MB -> 512MB days, if you played at high resolutions, then you would be able to notice the difference. So essentially, I chalk it up to economic factors allowing people to use up more of their framebuffer :)
 
So, what happened? Why is 1GB over 512MB suddenly such a significant advantage?


1GB cards have been around for almost two years now...
 
We now have the compute power to do LOTS of antialiasing, which soaks up a lot of framebuffer. And very high resolution output devices are now at commodity prices, and that too soaks up a lot of framebuffer.

Also multiple render targets. :) Deferred shading/lighting implementations. FP16 buffers as well.
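
For a rough sense of how quickly those extra targets add up, here is a back-of-the-envelope sketch; the four-target G-buffer layout and the 1920x1200 resolution are hypothetical examples of my own, not any particular engine's format:

```python
# Back-of-the-envelope G-buffer footprint for a deferred renderer.
def gbuffer_mb(width, height, target_bytes_per_pixel, depth_bytes=4):
    pixels = width * height
    total_bytes = pixels * (sum(target_bytes_per_pixel) + depth_bytes)
    return total_bytes / (1024 * 1024)

# Four RGBA8 render targets (4 bytes each) plus a depth buffer at 1920x1200:
print(gbuffer_mb(1920, 1200, [4, 4, 4, 4]))  # ~44 MB
# Promote two of those targets to FP16 (8 bytes per pixel each):
print(gbuffer_mb(1920, 1200, [8, 8, 4, 4]))  # ~62 MB
```

And that is before multisampling multiplies any of those surfaces, so deferred techniques stack on top of an already large backbuffer.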
 
Well, much to AlStrong's point, we have games that are really pushing the envelope with all the "cool" rendering technology that is now at our disposal in DirectX. I mean, now that we have ways of doing all this sweet stuff computationally and texturally, I can only imagine it's gonna get worse from here ;)
 
So, what happened? Why is 1GB over 512MB suddenly such a significant advantage? Is it because of the current generation of consoles and ports of their games?
That's definitely part of it, which is why you would have been a bit of a fool to buy one of the cheaper 256MB cards in the recent past instead of a slightly slower 512MB card at the same price. Also related is that PC games were low-poly before the recent generation of consoles, so you could keep vertex buffers in AGP/PCIe without any noticeable performance impact. Now vertices are carrying more data and are a much bigger portion of the workload, probably requiring placement in GPU RAM.
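
To put rough numbers on the "vertices carry more data" point, here is a quick sketch; the vertex layouts and scene sizes are illustrative assumptions of mine, not measurements from any game:

```python
# Hypothetical vertex layouts, in bytes per vertex.
OLD_VERTEX = 3*4 + 3*4 + 2*4          # position + normal + one UV set               = 32 bytes
NEW_VERTEX = 3*4 + 3*4 + 4*4 + 2*2*4  # ...plus a 4-float tangent and two extra UVs  = 56 bytes

def vertex_buffer_mb(vertex_count, bytes_per_vertex):
    return vertex_count * bytes_per_vertex / (1024 * 1024)

print(vertex_buffer_mb(500_000, OLD_VERTEX))    # ~15 MB: an older, low-poly scene
print(vertex_buffer_mb(2_000_000, NEW_VERTEX))  # ~107 MB: a modern, higher-poly scene
```

Pulling ~15MB across AGP/PCIe as it is drawn is tolerable; ~100MB of fatter vertex data is much happier sitting in local video memory, which is part of what eats into a 512MB card.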

So once you get to 300-400MB of video memory, high res can probably push you over the top. If you have 4xAA without FP16, you need 32 bytes per pixel just for the backbuffer. 2560x1600 then needs at least 140MB of space for RGBA8, 208MB for FP16. Add in buffers for fullscreen effects, and I can see why 1GB is needed for enthusiasts.
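
Reproducing that arithmetic as a quick sketch (my assumptions: 4 bytes per colour sample for RGBA8, 8 for FP16, 4 bytes of depth/stencil per sample, plus one single-sampled resolve copy; real drivers add further surfaces and alignment on top):

```python
# MSAA backbuffer footprint at a given resolution, ignoring driver-side extras.
def backbuffer_mb(width, height, samples, color_bytes, depth_bytes=4):
    pixels = width * height
    per_pixel = samples * (color_bytes + depth_bytes)  # multisampled colour + depth/stencil
    per_pixel += color_bytes                           # single-sampled resolve target
    return pixels * per_pixel / (1024 * 1024)

print(backbuffer_mb(2560, 1600, 4, 4))  # RGBA8 + 4xAA -> ~141 MB
print(backbuffer_mb(2560, 1600, 4, 8))  # FP16  + 4xAA -> ~219 MB
```

The exact totals shift depending on what you count (depth format, front buffer, whether the resolve is included), but they land in the same ballpark as the figures above, and that is before any fullscreen-effect buffers.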

Personally, I think those kinds of resolutions are insane. 8xAA at 1680x1050 is already more than I need, and when devs aim for playable resolutions beyond that, then IMO they are not putting enough effort into realism. By the same token, I hate it when AF/AA is ignored.
 
Personally, I think those kinds of resolutions are insane. 8xAA at 1680x1050 is already more than I need, and when devs aim for playable resolutions beyond that, then IMO they are not putting enough effort into realism. By the same token, I hate it when AF/AA is ignored.
Though, aren't the higher resolution TFT screens also larger? So you're essentially getting more immersion, at the same 'detail' level: something Eyefinity takes to the extreme.

Since I got my 24", 1920x1200 doesn't seem so large any more: it seems like a minimum requirement to be able to take advantage of the screen space available (and that space still isn't 'enough' as I would probably benefit from going up to a 26" or 27" model).
 
It doesn't seem to me that the significance of the amount of RAM has really increased. If you look at the hardware.fr results, there are a few cases where 1GB vs. 512MB helps, but really only in one case (Far Cry 2 on the HD 4850, 1920x1200 4xAA) does it matter, as all the other results are too slow to be playable anyway, even with the 1GB cards.
All low-end cards today come with 512MB (which is certainly enough for them) or 1GB. I think the only card where 512MB could be considered not really enough in scenarios you're likely to encounter is the HD 4870. Everything above it always comes with 1GB anyway (or 896MB, respectively). And you can get 2GB graphics cards easily - just as before, the biggest available memory size is pretty pointless.
And with new cards, the 1GB trend obviously continues - I haven't seen an HD 5750 with 512MB yet, though it is supposed to exist.
For somewhat more mainstream 2GB graphics cards, we'll need to wait for 2Gbit GDDR5 chips, which are nowhere to be seen (alternatively, low-end cards will probably get there sooner, even if pointlessly, with 2Gbit DDR3 chips, though right now those are too expensive) - Fermi-based cards will naturally bump the 1GB up to 1.5GB.
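
For reference, the way total memory follows bus width and chip density can be sketched directly; this assumes the usual one 32-bit GDDR5 chip per 32 bits of bus, with no clamshell configurations:

```python
# Total card memory from bus width and per-chip density (one x32 chip per 32-bit channel).
def card_memory_mb(bus_width_bits, chip_density_gbit):
    chips = bus_width_bits // 32
    mb_per_chip = chip_density_gbit * 1024 // 8  # Gbit -> MB
    return chips * mb_per_chip

print(card_memory_mb(256, 1))  # 256-bit bus, 1Gbit chips  -> 1024 MB (today's 1GB cards)
print(card_memory_mb(256, 2))  # same bus with 2Gbit chips -> 2048 MB (needs the missing parts)
print(card_memory_mb(384, 1))  # 384-bit bus, 1Gbit chips  -> 1536 MB (the Fermi bump to 1.5GB)
```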
 
Personally, if within a GPU generation I have the choice between X amount of RAM and 2*X RAM and the price difference is relatively small, I don't see much reason to opt for X.

Granted, in the majority of cases 2*X might not be needed, but it's a shame not to be able to get the most out of every possible corner case because the framebuffer is limiting.
 
Though, aren't the higher resolution TFT screens also larger? So you're essentially getting more immersion, at the same 'detail' level: something Eyefinity takes to the extreme.

Since I got my 24", 1920x1200 doesn't seem so large any more: it seems like a minimum requirement to be able to take advantage of the screen space available (and that space still isn't 'enough' as I would probably benefit from going up to a 26" or 27" model).

Yeah, for PC gaming it just depends on what the native res of your monitor is. And anything over 22" means more than 1680x1050.

So saying "anything over 1680x1050 is useless", while an admirable idea, unfortunately doesn't work out with fixed-pixel LCDs.

It also depends on the pricing sweet spot. When I bought a while back, 22" was the sweet spot, so that's what I got. Today I'm sure it has moved, with 24" and higher being very affordable (the $200-$300 range).
 
To be honest, I would have liked to see 1GB cards become the norm, with 2GB cards at the higher end.

It seems like the textures in a lot of games could be a lot better. Hopefully we get a DX11-engine game that pushes texture memory. When was the last time that happened? Doom 3 with ultra textures?
 
To be honest, I would have liked to see 1GB cards become the norm, with 2GB cards at the higher end.

It seems like the textures in a lot of games could be a lot better. Hopefully we get a DX11-engine game that pushes texture memory. When was the last time that happened? Doom 3 with ultra textures?
Oblivion/Stalker/Crysis with mods.
 
Personally, if within a GPU generation I have the choice between X amount of RAM and 2*X RAM and the price difference is relatively small, I don't see much reason to opt for X.

Granted, in the majority of cases 2*X might not be needed, but it's a shame not to be able to get the most out of every possible corner case because the framebuffer is limiting.
I fail to see the point. Granted, if you're going for the fastest card anyway, why not get the maximum amount of RAM, but otherwise it just drives the price up (potentially almost to the level of another, faster card with less RAM). So would you prefer an HD 4850 with 1GB to an HD 4870 with 512MB? Or really opt for a GTX 285 or HD 4890 with 2GB (does a 20% price difference still count as "relatively small")?
And in the case of lower-end cards, you often pay for more RAM not with money but with slower RAM instead; sometimes the difference isn't that large (as with the HD 4670 with 512MB GDDR3 vs. 1GB DDR3), but sometimes it's huge (when you're getting DDR2 instead of GDDR3).
 
I fail to see the point. Granted, if you're going for the fastest card anyway, why not get the maximum amount of RAM, but otherwise it just drives the price up (potentially almost to the level of another, faster card with less RAM). So would you prefer an HD 4850 with 1GB to an HD 4870 with 512MB?

If there were a dilemma, it would rather be a 4870/512MB vs. a 4870/1GB, as an example. And yes, my post was deliberately quite vague so as not to go into too much detail. In that example the price difference in the local market is merely 8%, or 9 Euros.

A framebuffer of only 512MB will be limiting in quite a few of today's cases, and that isn't worth a saving of 9 Euros.

Or really opt for a GTX 285 or HD 4890 with 2GB (does a 20% price difference still count as "relatively small")?

I don't think I'd ever face such a dilemma for that generation. Those models are exceptions anyway, and in such cases it's more likely that other factors will become limiting before the framebuffer does.

And in the case of lower-end cards, you often pay for more RAM not with money but with slower RAM instead; sometimes the difference isn't that large (as with the HD 4670 with 512MB GDDR3 vs. 1GB DDR3), but sometimes it's huge (when you're getting DDR2 instead of GDDR3).

It must be very hard to define where the primary bottlenecks for each GPU variant really lie in the majority of cases. I wouldn't buy a mainstream or low-end GPU, drive a 30" monitor, and expect it to deliver AA/AF at its native resolution. The dilemma in such cases comes down to which resolutions/settings each GPU can deliver playability at.
 
If there were a dilemma, it would rather be a 4870/512MB vs. a 4870/1GB, as an example. And yes, my post was deliberately quite vague so as not to go into too much detail. In that example the price difference in the local market is merely 8%, or 9 Euros.

A framebuffer of only 512MB will be limiting in quite a few of today's cases, and that isn't worth a saving of 9 Euros.
OK, if the price difference is that small (I initially thought it was about twice that), it makes sense.

I don't think I'd ever face such a dilemma for that generation. Those models are exceptions anyway, and in such cases it's more likely that other factors will become limiting before the framebuffer does.
Why are those exceptions? I think the HD 4870 (a somewhat high-end card with maybe not quite enough RAM) is rather the exception.
 
Well, then I'm the exception, because I have a 4870 512MB and I'm running a 30" screen with a native res of 2560x1600.

I'm installing a 5870 tonight though. :)
 
OK, if the price difference is that small (I initially thought it was about twice that), it makes sense.

The difference for those kinds of amounts is thankfully very reasonable.

Why are those exceptions? I think the HD 4870 (a somewhat high-end card with maybe not quite enough RAM) is rather the exception.
Because those are vendor initiatives rather than the IHVs' original configurations, actually. As I said, with a bit of research it's not hard to work out which amount of RAM is the better fit for each market segment.

Well, then I'm the exception, because I have a 4870 512MB and I'm running a 30" screen with a native res of 2560x1600.

I can't know what games you're playing and at what settings, but hand on heart, are you sure a 4870/1GB wouldn't have been a better choice?

I'm installing a 5870 tonight though. :)

Excellent choice; congratulations :)
 
I can't know what games you're playing and at what settings, but hand on heart, are you sure a 4870/1GB wouldn't have been a better choice?

It probably would have been, but at the time I bought that 4870, only 512MB models were available. The curse of being an early adopter. Upgrading to the same card with more RAM afterwards didn't make enough sense.
 
Though, aren't the higher resolution TFT screens also larger? So you're essentially getting more immersion, at the same 'detail' level: something Eyefinity takes to the extreme.
You would have a point if people stayed the same distance away from the screen with 30" screens. Maybe some do, but that seems nuts to me. As for Eyefinity, well, I just think that it's something you use to put some spark back into older games. I don't want devs to make their game playable at 12MP on RV870, I want them to use its power to make it realistic at 1-2MP.

Native resolution HUD/text (and maybe film grain) superimposed on an upscaled image rendered with AA looks fantastic to me, and I think most would agree.

Since I got my 24", 1920x1200 doesn't seem so large any more: it seems like a minimum requirement to be able to take advantage of the screen space available (and that space still isn't 'enough' as I would probably benefit from going up to a 26" or 27" model).
I'll agree with you for desktop usage, but not for games. Middling resolution with AA/AF and great lighting looks great to me.
 