Why can't mid-range graphics cards benefit from more video memory?

Greetings

First of all, I'm pretty sure this subject has been discussed many times before, so I'd like to ask forgiveness from anyone who might find this thread offensive or useless.

There are many games out there now that can use more than 1GB worth of texture data (modded games with extra HD texture packs: Skyrim, for example, can go way above 2GB), yet the same card in different memory configurations shows no performance difference at resolutions below 1440p. I'm talking about mid-range cards here, like the GTX 650 Ti for example.
I would like to know the exact technical cause of this: why is the difference only 1% or less? Overclocking the GPU or the memory delivers equal performance gains on 1GB and 2GB cards, so what is it then? Is it because memory management and texture streaming are so efficient in modern game engines or in the Nvidia drivers, or is it because the PCI-E bus is so fast that fetching from main memory simply won't "show" at the "low" fps these cards can produce?

Any valuable answer would be greatly appreciated. Thank you in advance.
 
You are not going to use all the textures at once; a large-world game like Skyrim streams them continuously. Perhaps with mid-range cards you also didn't use the highest texture detail. There are tools that show resource utilization in real time.
 

Thanks for the answer. I'm obviously talking about situations where the 2GB card has more than 1GB (1.5GB or above) loaded (according to MSI Afterburner or GPU-Z).
 
If you don't have enough texture memory, then the performance implications are very obvious. Stuttering is quite noticeable when PCIe is being used heavily for texturing. Try recent games on a 512MB card and you'll see. 1GB cards are impacted too and you can see this measured in Tech Report's recent reviews.
 

I would, of course, agree just by common sense, but the numbers I have tend to disagree. Could you please link one of the reviews you are referring to (preferably one where the very same mid-spec card was tested in both 1GB and 2GB configurations)? Thanks.

edit: my English:/
 
You are right in thinking 1GB cards aren't a problem in most cases. Games seem to target that capacity at the moment. The one time I saw bad stuttering with my 560 Ti was when I was playing Rage at 2720x1536 (downsampled resolution trick). It would be at 60fps most of the time, but in some areas it would drop to a fraction of that depending on which way I was looking. Bringing the resolution down a little solved it.

This review might be showing a significant disadvantage for the 7850 1GB vs the 7850 2GB in one game (Skyrim). But then the 560 1GB is mysteriously ahead? Hmmmm....
http://techreport.com/review/23690/review-nvidia-geforce-gtx-650-ti-graphics-card/8
 

Thank you for your answer, but I think that review rather proves my point instead of contradicting it. I'm sorry if it's not really clear what I'm asking, but I'm talking about situations where the same mid-range Nvidia Kepler chip uses 1GB vs more than 1GB of video memory (for example, a 1GB vs a 2GB version of the 650 Ti).
What puzzles me is that I can't detect any difference at 1080p (or perhaps I detect some, but it's less than 1%) while the amount of textures in use is clearly far more than 1GB.

I would like to know why that happens and how it is possible.
 
Many graphics engines (specifically those that use deferred rendering) consume a large amount of memory when applying anti-aliasing; examples include games like Battlefield 3, MOH Warfighter, Max Payne 3, etc. These can easily exceed 1GB even when playing at 720p.

And then there are the games that just increase texture resolution without actually increasing the detail of the textures. It's like taking a 320p picture and scaling it up to 1080p and calling it a high-resolution picture: it ends up forced, blurred and ugly. Games that use this technique just scale the textures up with little or no noticeable difference, and they eat large chunks of memory for nothing.

Some graphics settings also consume more memory, like tessellation and level of detail.

On the PC we are far from efficient and proper use of video memory.
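
To put a rough number on the deferred-rendering point above, here is a back-of-the-envelope sketch of G-buffer memory versus resolution. The layout (four 8-byte render targets plus a 4-byte depth/stencil buffer per sample) is a made-up example for illustration, not the layout of any particular engine:

```python
# Rough G-buffer memory estimate for a hypothetical deferred renderer:
# four 8-byte render targets plus a 4-byte depth/stencil buffer per sample.
BYTES_PER_SAMPLE = 4 * 8 + 4

def gbuffer_mib(width, height, msaa_samples):
    return width * height * msaa_samples * BYTES_PER_SAMPLE / 2**20

for (w, h) in [(1280, 720), (1920, 1080), (2560, 1440)]:
    print(f"{w}x{h} @ 4x MSAA: ~{gbuffer_mib(w, h, 4):.0f} MiB for the G-buffer alone")
```

Even at 720p that works out to well over 100 MiB before a single texture is counted, which is why MSAA on a deferred renderer eats memory so quickly.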
 
Video memory is mostly consumed by
1) frame buffers (z, back, front; multiplied by the AA factor)
2) textures.

Frame buffer sizes are directly proportional to screen resolution and AA mode.

Textures have mip-mapping, which means that the highest resolution versions of textures are only needed when either
A) running at high resolution, or
B) looking at the texture from very close range (and then only a small part of the texture might be visible, if the chip supports virtual texturing).

So the resolution used affects both texture and frame buffer sizes. And mid-range chips are typically used at lower resolutions than high-end cards.
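
To put rough numbers on that scaling, here is a minimal sketch; the buffer counts and the 4-byte formats are assumptions for illustration, not figures from any particular driver or game:

```python
# Frame buffer memory: front + back colour buffers plus a z/stencil buffer,
# each multiplied by the MSAA sample count (4 bytes per pixel assumed).
def framebuffer_mib(width, height, msaa_samples):
    color = width * height * 4 * msaa_samples * 2   # back + front
    depth = width * height * 4 * msaa_samples       # z/stencil
    return (color + depth) / 2**20

for (w, h) in [(1680, 1050), (1920, 1080), (2560, 1440)]:
    print(f"{w}x{h} @ 4x AA: ~{framebuffer_mib(w, h, 4):.0f} MiB of frame buffers")
```

Under these assumptions, going from 1080p to 1440p adds on the order of 70 MiB of frame buffer memory before the texture budget changes at all.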
 

In addition to the answers provided above me, there is one other thing to consider.

The types of things that stress VRAM also tend to stress other parts of the GPU. If they can't cope or keep up, all the VRAM in the world means nothing.
 
I'll give you an example: if your card can only do 30fps because of the power of its shaders, then you won't see an increase by adding more VRAM.
 
Textures have mip-mapping, which means that the highest resolution versions of textures are only needed when either
A) running at high resolution, or
B) looking at the texture from very close range (and then only a small part of the texture might be visible, if the chip supports virtual texturing).
That's not really true; at least it used not to be.
Typically textures are uploaded in full, with all mip levels (because really only the GPU knows whether it is going to use the highest mip level). (Of course, if you run at a lower resolution, turning down texture detail might make less of a difference, but that's not something games would likely do on their own just because you lower the resolution.)
Though yes, GPUs nowadays do have the ability to not always require full textures, but I don't think anyone has implemented that transparently in the driver.
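
For reference, a quick sketch of what uploading the full mip chain actually costs relative to the base level alone (plain geometric-series arithmetic, nothing engine-specific):

```python
# Sum the sizes of every mip level of a square RGBA8 texture down to 1x1.
# The whole chain converges to ~4/3 of the base level, so uploading all mips
# only costs about a third more memory than the top level by itself.
def mip_chain_mib(base_size, bytes_per_texel=4):
    total, size = 0, base_size
    while size >= 1:
        total += size * size * bytes_per_texel
        size //= 2
    return total / 2**20

base_mib = 4096 * 4096 * 4 / 2**20
print(f"4096x4096 base level: {base_mib:.1f} MiB, "
      f"full mip chain: {mip_chain_mib(4096):.1f} MiB")
```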
 
Thanks for all the answers. I used modded Skyrim because I know that the uploaded data is just high resolution textures.
I think I will borrow the cards again and do some tests with very low shader and effect settings, to stress the memory controller "only", if possible.
 

I think this is the best answer: the card is too slow anyway... it's not that in some rare scenarios you wouldn't notice a difference, but under normal conditions it's just irrelevant.
 

I understand, just as I also understand the limitations of the card's 128-bit wide bus and the limits imposed by the number of shaders and/or ROPs. I'm far from new to the subject and I fully understand what you mean, but:

1. The card is doing a lot more than 30fps on average at 1080p.
2. This is something I did not notice or observe in any of my previous tests; that's why I came here to ask those who know a lot about these things, in case somebody else has also noticed the same phenomenon and can tell me what is going on.
 
Any half-decent game engine will monitor GPU memory usage and drop the lowest mip levels (the most detailed ones) for textures on distant objects to prevent running out of memory. Since most objects will be far enough away that they would be using the higher mip levels anyway, there is no image quality loss. Rather, the cost is having to load higher resolution textures every now and then, whenever the camera gets particularly close to something, perhaps every few seconds or every few hundred frames. While this won't affect the average framerate very much, since only a handful of frames have to wait for a new set of high resolution textures to load, it could very easily be noticeable to the user as stutter.

On newer cards, even the stutter is mostly mitigated, since textures can be transferred asynchronously by DMA, allowing the GPU to keep rendering with the higher mip levels for a frame or two while the more detailed mips are loaded.
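
A toy sketch of the kind of budget-driven mip streaming described above; the names (StreamedTexture, enforce_budget), the eviction heuristic and the 96 MiB budget are all invented for illustration, and a real engine would be far more sophisticated:

```python
from dataclasses import dataclass

@dataclass
class StreamedTexture:
    name: str
    base_size: int          # width == height of mip level 0, in texels
    resident_top_mip: int   # most detailed mip currently in VRAM (0 = full resolution)
    wanted_top_mip: int     # most detailed mip the current camera distance asks for

    def resident_bytes(self, bytes_per_texel=4):
        # Sum the resident part of the mip chain, from the resident top mip down to 1x1.
        size, total = self.base_size >> self.resident_top_mip, 0
        while size >= 1:
            total += size * size * bytes_per_texel
            size //= 2
        return total

def enforce_budget(textures, budget_bytes):
    # First stream each texture to the detail level the renderer asked for...
    for t in textures:
        t.resident_top_mip = t.wanted_top_mip
    # ...then, while over budget, drop the most detailed resident mip of whichever
    # texture currently occupies the most memory.
    while sum(t.resident_bytes() for t in textures) > budget_bytes:
        biggest = max(textures, key=lambda t: t.resident_bytes())
        biggest.resident_top_mip += 1

textures = [StreamedTexture("rock", 4096, 0, 0),
            StreamedTexture("wall", 4096, 0, 0),
            StreamedTexture("distant_hill", 2048, 0, 3)]
enforce_budget(textures, budget_bytes=96 * 2**20)   # pretend the texture budget is 96 MiB
for t in textures:
    print(f"{t.name}: top mip {t.resident_top_mip}, {t.resident_bytes() / 2**20:.1f} MiB resident")
```

The point of the sketch is only that evictions happen occasionally and per texture, which is why the average framerate barely moves even though the working set exceeds the card's memory.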
 

This is exactly what I suspected, as I wrote in the first post:
Is it because memory management and texture streaming are so efficient in modern game engines...
and I also ran some additional tests, and the difference was less than 1% in fps, so most of the time it's simply pointless to have a 2GB card if it's not a very fast top-of-the-line GPU, or if the target resolution is not high enough (below 1440p).

This was exactly the answer I needed, thank you very much!
 