Is 4GB enough for a high-end GPU in 2015?

Albuquerque

How about we make a new thread discussing the benefits of >4GB of VRAM in games? It's one thing to discuss the technical limitations of HBM (and why they might exist and how to get around them), but debating the current need for >4GB is really a different discussion. Finally, I think we can dial down some of the attitudes; it's getting borderline hostile for a relatively straightforward discussion.

My personal opinion: it might be, but I'm not spending >$500 to chance it in this day and age. I'm targeting more than 4GB for my next exorbitantly priced video card, most likely the 6GB 980 Ti.
 

Why were 4GB cards at $500 okay a few months ago when Nvidia put out the 980 series? Hell, they had a $550 4GB card, a $400 3.5GB card, and a $300 2GB card, and people were foaming at the mouth to get them.

I wouldn't buy a card right now because the incoming process shrink and HBM2 would make everything out today obsolete.
 
a $300 2GB card, and people were foaming at the mouth to get them.
Actually, I waited for the 4GB GTX 960, and spent an extra 20% of the price on that. Not because I think 2GB is inadequate now, but because I think 4GB might be more future-proof.
 
Why is Witcher 3 the "exception rather than rule" when it comes to VRAM usage? Is it more the fault of game developers who "use huge succulent textures" instead of rendering quality shaders in-game?
Basically, these results indicate that 4GB should be more than enough even with truly exceptional graphics, if the game is properly developed.


We have seen some scenes use a little more and others a little less, but it's all cool really. We have not been able to pass 2 GB of graphics memory. Meaning the game doesn't use huge succulent textures, but rather renders quality shaders in-game, you can tell the programmers put a lot of work into this. Once you go to Ultra HD we'll close in at 2GB, but that's an incredibly small number for the game in terms of graphics memory.
http://www.guru3d.com/articles_pages/the_witcher_3_graphics_performance_review,9.html
 
Actually, I waited for the 4GB GTX 960, and spent an extra 20% of the price on that. Not because I think 2GB is inadequate now, but because I think 4GB might be more future-proof.

I'd argue the performance of the 960 itself would render the extra RAM moot.
 
Why were 4GB cards at $500 okay a few months ago when Nvidia put out the 980 series? Hell, they had a $550 4GB card, a $400 3.5GB card, and a $300 2GB card, and people were foaming at the mouth to get them.

I wouldn't buy a card right now because the incoming process shrink and HBM2 would make everything out today obsolete.
They weren't "okay" for me, which is why I didn't buy them. You have to understand that there's no single, logically conclusive answer to this for everyone. Some games at some settings are not going to need 4GB of VRAM; others are. You and I might even play the same games, but that doesn't mean we have the same expectations of that game, which means our needs for video cards will be different.

There will always be that segment of the population who want the newest and best, who purchase a card in basically every generation because they can. There will be others whose needs are perfectly served by 4GB, and so purchasing a new 4GB card wouldn't be a hindrance to them anyhow.

My expectations of video game settings bend in the direction of needing more than the 3GB of VRAM that I have today. Does that mean I'd be happy with a 4GB card? Perhaps I would today, but purely within my own opinion of value, I'm not convinced that "only" 4GB of VRAM will satisfy my needs in the not-too-distant future.

I upgrade cards every other generation at the earliest. Now that consoles have a far larger pool of video memory to draw against, I expect video memory utilization in the PC space to rise similarly. GTAV is probably the best current example; don't assume there won't be more.

My personal value opinion makes me lean toward 6GB for my next card, or perhaps 8GB if Fiji somehow A: makes sense to me and B: actually comes in that capacity.
 
Why were 4GB cards at $500 okay a few months ago when Nvidia put out the 980 series?
Because back then, with the exception of a few rare 8GB 290X cards, 4GB was the norm. With the 980 Ti, the Titan X, and the upcoming AMD 300 series, the goalposts have been moved. It doesn't matter whether it's useful or not: nobody wants to be the kid with a lesser toy than his friends.

I wouldn't buy a card right now because the incoming process shrink and HBM2 would make everything out today obsolete.
Then don't! ;)
 
I'd argue the performance of the 960 itself would render the extra RAM moot.
Right until there's some silly game I want to play that doesn't need the performance but does need the RAM.
Yes, ideally, games would not need excessive RAM – they'd be balanced according to the performance level. However, this is not an ideal world. Not to mention the possibility that some popular rendering technique comes along that needs RAM capacity and nothing else.
 
Because back then, with the exception of a few rare 8GB 290X cards, 4GB was the norm. With the 980 Ti, the Titan X, and the upcoming AMD 300 series, the goalposts have been moved. It doesn't matter whether it's useful or not: nobody wants to be the kid with a lesser toy than his friends.


Then don't! ;)

It would depend on what resolution you want to play at. All the "4GB is not enough" talk is about 4K; at 1080p it should be fine.

I would also wager that pricing is important. The 980 Ti is $650, so a 4GB card at $550 or less wouldn't be bad, and it would be priced in line with the original 4GB 980.
 
What facts/studies can we base a discussion on?

There are a few reviews of 8GB variants of the 290X, usually showing no difference except in a small number of games.

But any such data could be invalidated by what Joe Macri alludes to in the following, if his claims are verified:

When I asked Macri about this issue, he expressed confidence in AMD's ability to work around this capacity constraint. In fact, he said that current GPUs aren't terribly efficient with their memory capacity simply because GDDR5's architecture required ever-larger memory capacities in order to extract more bandwidth. As a result, AMD "never bothered to put a single engineer on using frame buffer memory better," because memory capacities kept growing. Essentially, that capacity was free, while engineers were not. Macri classified the utilization of memory capacity in current Radeon operation as "exceedingly poor" and said the "amount of data that gets touched sitting in there is embarrassing."

Strong words, indeed.

With HBM, he said, "we threw a couple of engineers at that problem," which will be addressed solely via the operating system and Radeon driver software. "We're not asking anybody to change their games."
http://techreport.com/review/28294/amd-high-bandwidth-memory-explained/2


I say we just wait for Fiji to be released, bench it and see what happens. It's the only way to know for sure.
 
My colleagues did some quick tests while "simulating" what they think an R9 390X might look like:
http://www.pcgameshardware.de/AMD-R...pecials/R9-390X-simuliert-Benchmarks-1161227/

Apart from the additional 4 GiB, there's a clock rate difference as well, mind you.

And while I'm at it, I want to remind you of this:
https://forum.beyond3d.com/posts/1849624/

"One additional aspect of the 4-GiB topic: games are largely streaming-based nowadays, meaning they do not load a whole level into local memory. Graphics drivers manage this according to the amount of memory present. If they have more wiggle room, as on a Titan X or FirePro W9100, they can afford to leave already-used assets untouched for a longer time. If such an asset is used again, they do not have to reload it from main memory, thus saving time and - depending on the capabilities of the engine - achieving smoother rendering. Having non-blocking DMA engines helps with that …

The amount of memory _really_ needed at a certain point in time cannot be measured this simply with tools like GPU-Z or MSI Afterburner."
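
To make the last point concrete, here's a toy simulation (made-up sizes and a simple least-recently-used policy, not how any real driver behaves): assets are uploaded on first use and kept resident while there's room, so the number a monitoring tool reports tracks how much the driver chooses to keep cached rather than what a frame actually needs, and the main benefit of extra capacity shows up as fewer PCIe uploads.

# Toy streaming model (hypothetical policy and sizes, not any real driver):
# assets are uploaded on first use and kept resident until the VRAM budget
# forces the least recently used ones out.
from collections import OrderedDict

def simulate(frames, vram_budget_mb, asset_size_mb=16):
    resident = OrderedDict()              # asset id -> size in MB, in LRU order
    reloads = 0
    for frame_assets in frames:           # each frame touches a set of asset ids
        for asset in frame_assets:
            if asset in resident:
                resident.move_to_end(asset)          # still warm, keep it
            else:
                reloads += 1                         # upload over PCIe
                resident[asset] = asset_size_mb
                while sum(resident.values()) > vram_budget_mb:
                    resident.popitem(last=False)     # evict least recently used
    return sum(resident.values()), reloads

# A level with ~255 distinct 16 MB assets, but any single frame only needs 32 of them.
frames = [set(range(i % 224, i % 224 + 32)) for i in range(2000)]

for budget_mb in (2048, 4096, 8192):
    resident_mb, reloads = simulate(frames, budget_mb)
    print(f"{budget_mb} MB budget: ~{resident_mb} MB reported as used, {reloads} PCIe uploads")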
 
I'd argue the performance of the 960 itself would render the extra RAM moot.

I don't agree with this. The 960 is about as powerful as my 670, and I can still nearly max out most games at 1080p (usually minus only high-end AA and the most performance-hungry Nvidia-exclusive options). However, a common theme in many recent games is that I have to turn textures down one level from max. 4GB would allow me to max the texture settings in those games with little additional impact on performance.

Also, don't forget that both the XBO and PS4 have far more memory available to the GPU than a 2GB 960, yet both feature much weaker GPUs.
 
I'm certainly not against the "wait and see" approach, and I'm sure there are obvious ways to increase memory utilization efficiency of the near-GPU pool of VRAM driven by a more intelligent driver. Nevertheless, I also have to suggest that this is going to be the first attempt of an iterative process; software of this type is never perfect on the first attempt.

I expect problems to crop up early and often as newer engines and games continue to bend what were previously held as "the rules." Not because AMD is inept (far from it), but rather because many game developers DO NOT follow best practice, and as such they will find creative ways to inadvertently break the system while the driver is continually adopted and adapted to work around these corner cases.

And if it really takes off? NVIDIA will be there too, maybe even with a driver that works on existing hardware for that matter. Truly, if the optimization can be kept only within the driver and OS stack, why couldn't it conceptually be applied to any reasonably new GPU architecture?
 
AMD says there is low-hanging fruit in their driver in terms of allocating memory, and that they've (finally!) assigned a couple of engineers to fix it. Considering that Nvidia has typically had GPUs with less RAM than AMD, and that they have far more engineers, I consider it extremely unlikely that they haven't already spent time on this long ago.
 

Typically, with more memory you just leave more in memory, regardless of whether it is currently being used or will ever be used again. I.e., textures that are no longer in use can be retained in memory, thus inflating the perception of how much memory is being used. Perhaps they'll be used again in the future and save on PCIe bandwidth; perhaps they'll never get used again.

A "dumb" approach would be to just leave everything in memory and start randomly swapping out textures (or whatever) whenever the memory gets full. A "smart" approach would be to analyse what textures are frequently used and retain those in memory while swapping out infrequently used textures when approaching the card's memory limit. The former will lead to more reliance on slow PCIE transfers, while the latter will presumably be less reliant on slow PCIE transfers.

I'd imagine that is part of the problem with AMD's memory optimizations at the moment. Nvidia, having increased memory capacity less often than AMD in the past, has probably already tackled this to an extent.
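
As a purely illustrative sketch of that difference (hypothetical policies and access pattern, nothing to do with AMD's or Nvidia's actual heuristics), compare random eviction against evicting the least frequently used texture when the budget is full: with a skewed access pattern, the random policy keeps throwing out hot textures and pays for it in extra PCIe uploads.

import random
from collections import Counter

def run(policy, accesses, capacity):
    # Count PCIe uploads for one eviction policy over a texture access trace.
    resident, freq, uploads = set(), Counter(), 0
    for tex in accesses:
        freq[tex] += 1
        if tex not in resident:
            uploads += 1                              # miss: upload over PCIe
            if len(resident) >= capacity:
                if policy == "random":                # "dumb": evict anything
                    victim = random.choice(tuple(resident))
                else:                                 # "smart": evict least frequently used
                    victim = min(resident, key=lambda t: freq[t])
                resident.discard(victim)
            resident.add(tex)
    return uploads

# Skewed trace: 20 hot textures used 80% of the time, plus many cold one-offs.
random.seed(1)
trace = [random.randrange(20) if random.random() < 0.8 else random.randrange(20, 2000)
         for _ in range(20000)]

for policy in ("random", "smart"):
    print(policy, "eviction:", run(policy, trace, capacity=64), "uploads")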

Regards,
SB
 
Why do people say 4K requires more VRAM? How much more would you need over 1080p? Most of the VRAM would be textures, I'd imagine, and both would use the same texture data. And what settings would people be playing at 4K where they get good framerates but are bottlenecked by the VRAM limit?
 
Well, so the math:
1920x1080 x 4 bytes RGBA x 4x MSAA ≈ 32MB.
3840x2160 x 4 bytes RGBA x 4x MSAA ≈ 127MB.

Difference: ~95MB. Multiply by 3 for triple buffering: ~285MB.

Add roughly another 95MB for the Z buffer (4 bytes per sample), and you're approaching 400MB extra with identical textures.

I'm sure there are other buffers (G-buffers, stencil, whatever) for stuff that I don't know anything about, most of which also scale with resolution, and my 3x multiplication is probably pessimistic (maybe they resolve the MSAA buffer to a non-MSAA one at the end of a frame?), but you get the idea. And some expert will probably point out all my mistakes...
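
If it helps, here is the same back-of-the-envelope math as a tiny script, so the assumptions (bytes per pixel, MSAA level, number of buffered frames) can be tweaked; the defaults mirror the rough figures above and are only a crude model of what a real engine actually allocates.

def framebuffer_mb(width, height, bytes_per_pixel=4, msaa=4, color_buffers=3, depth_bytes=4):
    # Crude framebuffer footprint in MiB; ignores textures, G-buffers,
    # shadow maps and every other resolution-dependent target.
    samples = width * height * msaa
    total = samples * bytes_per_pixel * color_buffers    # triple-buffered color
    total += samples * depth_bytes                        # one Z/stencil buffer
    return total / 2**20

mb_1080p = framebuffer_mb(1920, 1080)
mb_4k = framebuffer_mb(3840, 2160)
print(f"framebuffer total at 1080p: ~{mb_1080p:.0f} MB, at 4K: ~{mb_4k:.0f} MB, extra: ~{mb_4k - mb_1080p:.0f} MB")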
 