Is 10GB needed for Gaming? *Spawn*

No no, Vega is already capable of using 2x 8GB stacks; it's just that the 8GB stacks aren't there yet, or are too costly for the gaming parts at this moment. But it is a different story for professional GPUs.
In fact, the MI25 that AMD presented should already sport 2x 8GB stacks (and that card will ship before the gaming parts, if I have understood it correctly).
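
For reference, the stack arithmetic behind that is just counting dies (standard 8Gb HBM2 dies assumed), as in this quick sketch:

```python
# HBM2 stack capacity: 8Gb (1GB) dies, stacked 4-high or 8-high.
DIE_GB = 1

for stack_height in (4, 8):
    per_stack = stack_height * DIE_GB
    total = 2 * per_stack  # Vega 10 / MI25 style: two stacks
    print(f"2 x {stack_height}-Hi -> {per_stack}GB per stack, {total}GB total")
# 2 x 4-Hi ->  8GB total
# 2 x 8-Hi -> 16GB total
```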

Do we know if 8-hi stacks are in production now? Is it sk-hynix that's making them?

I suppose if the pro part is coming first and it's coming in 1H17, then somebody must be making this stuff.

Here's a rather depressing take on Vega:
http://techbuyersguru.com/ces-2017-amds-ryzen-and-vega-revealed?page=1

"Additinally, Scott made clear that this is very much a next-gen product, but that many of the cutting-edge features of Vega cannot be utilized natively by DX12, let alone DX11."
:(

Is Vega another 7970 that will take years before it gets competitive?
Really? Haven't they learned anything?

In a way, that's depressing. But at least that's kind of nice from a consumer gaming perspective. AMD would be more likely to discount these chips, and then you could pick them up knowing that they would age well. Meanwhile, on the Nvidia side, you're grabbing a card that won't ever hit fire-sale prices and it won't age well at all if Kepler or Maxwell are any indication.
 
Do we know if 8-hi stacks are in production now? Is it sk-hynix that's making them?

I suppose if the pro part is coming first and it's coming in 1H17, then somebody must be making this stuff.



In a way, that's depressing. But at least that's kind of nice from a consumer gaming perspective. AMD would be more likely to discount these chips, and then you could pick them up knowing that they would age well. Meanwhile, on the Nvidia side, you're grabbing a card that won't ever hit fire-sale prices and it won't age well at all if Kepler or Maxwell are any indication.

Yes they are, but still costly (I don't know if we can really speak about yields in this case, as it's just that production is limited).

@gongo... if you need to alt+tab out of a game because 8GB is not enough, that would mean the game can only run on the Titan X with its GDDR5X... too bad for 1080 and future 1080 Ti gamers, lol. I can imagine such a game would justify the Titan X pricing? (A game with extremely bad memory optimization, maybe.)

I don't think Nvidia is stupid enough to push developers to use, let's say, 9GB of VRAM just to make a graphically average game that can only run on the Titan X (or that would call for a lot of marketing).

Seriously, when the Titan X was released, everyone was somewhat laughing at the 12GB memory pool on a "gaming GPU"... it is only justifiable if you use this GPU for compute and CUDA ray-tracing solutions, not for gaming. So if the line is now that 8GB is poor versus 12GB... please.

Maybe Nvidia will release a 1080 Ti with 12GB... and games will suddenly use 10GB of memory?
 
Seriously, when the Titan X was released, everyone was somewhat laughing at the 12GB memory pool
If you are doing 4K, you won't be laughing, some games already hit 10GB at this resolution.
it won't age well at all if Kepler or Maxwell are any indication
Maxwell didn't age well? how so?


Don't run ultra?
It doesn't necessarily mean that. AMD has preferred performance over quality for quite some time now; that's why they lagged so far behind NV in introducing new visual features on PC. Raja's statement is just an expression of this philosophy.
 
If you are doing 4K, you won't be laughing, some games already hit 10GB at this resolution.

Maxwell didn't age well? how so?



It doesn't necessarily mean that. AMD has preferred performance over quality for quite some time now; that's why they lagged so far behind NV in introducing new visual features on PC. Raja's statement is just an expression of this philosophy.

  • DOOM
    Game insists on 5GB or more for the Nightmare texture and shadow settings
Honestly? ... Lazy developers? None of these games come close to the textures we use in CG, and you can multiply the resolution we use by a factor of 10. And we don't compress them; we use them as they are...

We know that developers cache everything in VRAM; that has been the case for two years now... store it all... the real usage of VRAM is quite different (most of the data ends up trashed and unused). And that's without mentioning the memory-leak story of Rise of the Tomb Raider...

More seriously, is there a single game in this list that wasn't backed by Nvidia after the Fury X release? All I read is NV recommends, NV reports, and GameWorks games.

Can I use the title you chose when opening this "famous" thread, the "is 4GB enough for today" title?

Fury cards seem to age poorly in any memory-intensive game nowadays. The observation is based on two things:

1- Massive fps drops on the Fury cards compared to the competition when maximum visual settings are enabled.
2- 390 cards having close (equal or better) fps to Fury cards, due to having 8GB of RAM.
 
Honestly? ... Lazy developers?
I don't argue they are lazy, but that is irrelevant in this matter. People buy advanced GPUs to play games with; when you have games that exceed 8GB @4K, you don't bug out and say they are lazy, don't play them, or don't buy them! You make hardware that is capable of properly playing them.
More seriously, is there a single game in this list that wasn't backed by Nvidia after the Fury X release? All I read is NV recommends, NV reports, and GameWorks games.
So what? They are games nonetheless! Should we discard them from the discussion just because they have GameWorks? The memory-intensive stuff isn't coming from the GameWorks library, but from texture, shadow, and reflection resolution. There are also AMD-backed games in that list; Deus Ex: Mankind Divided is one example among others.
All I read is NV recommends, NV reports
Actually, I've put in as many independent publications that confirm NV's findings as I could; some have actually exceeded NV's findings. They are there for reading, not glancing over. The fact is, games are pushing beyond 8GB @4K; you simply can't wave these findings goodbye like they are nothing just because you think developers are lazy.
 
I don't argue they are lazy, but that is irrelevant in this matter. People buy advanced GPUs to play games with; when you have games that exceed 8GB @4K, you don't bug out and say they are lazy, don't play them, or don't buy them! You make hardware that is capable of properly playing them.

So what? They are games nonetheless! Should we discard them from the discussion just because they have GameWorks? The memory-intensive stuff isn't coming from the GameWorks library, but from texture, shadow, and reflection resolution. There are also AMD-backed games in that list; Deus Ex: Mankind Divided is one example among others.

Actually, I've put in as many independent publications that confirm NV's findings as I could; some have actually exceeded NV's findings. They are there for reading, not glancing over. The fact is, games are pushing beyond 8GB @4K; you simply can't wave these findings goodbye like they are nothing just because you think developers are lazy.

So the 1080 is completely irrelevant (in fact, 1080 users are completely screwed), and only the Titan X with its 12GB is fine? I hope the 1080 Ti will also get 12GB of GDDR5X then... not half? Who knows?

Will you think the same when AMD releases Vega 10 gaming parts with 16GB of HBM2 (if they do)? Is 16GB too much?

In fact, I'm really starting to wonder whether Nvidia will launch (maybe in a few days) a GP100/GP110 gaming GPU with 16GB of HBM2 instead of a GP102 1080 Ti (4-Hi stacks only need 4GB of stacked DRAM per stack, so those are already available in mass production).
 
Sounds like Raja is starting to 'pre-educate' viewers on Vega's limitations..??

8GB?
Don't run ultra?
Alt+tab from a game lags because of less VRAM???

He seemed sensible. 16GB of HBM2 is unnecessary, and HBC should do wonders for multitasking. I just wish he'd slipped some marketing speech in there, saying "your best experience will probably be higher fps with slightly-below-ultra settings, instead of adding effects where you can't see any difference, like competitive gamers do. Either way, I'm confident Vega will be great at any setting you throw at her"...
 
AMD has no preference of performance over quality and no features are lagging because of this mythical philosophy.
Well, it certainly seems that way. NV constantly pushes the visual front: they introduced various forms of AO (HBAO+, VXAO), they have the capability to activate AO for a dozen games (through drivers) that didn't previously support it, they have various forms of AA (TXAA, MFAA, FXAA), and they have FireWorks, WaterWorks, HairWorks, ShadowWorks (HFTS, PCSS, PCSS+). They still push for PhysX in games to this day. They have Adaptive V-Sync, Fast Sync, adaptive half refresh rate, etc. They were first with DSR and G-Sync as well. And these are just off the top of my head. AMD has answers to some of them, obviously, but still not all. Hence why they lagged behind and focused on performance-enhancing features like Mantle, for example.

So the 1080 is completely irrelevant (in fact, 1080 users are completely screwed), and only the Titan X with its 12GB is fine? I hope the 1080 Ti will also get 12GB of GDDR5X then... not half? Who knows?
Yeah, the GTX 1080 cannot play these games at 4K. If the 1080 Ti doesn't have more than 8GB, it is screwed as well!
 
Maxwell didn't age well? how so?

At this time, we can only confidently say that Kepler hasn't aged well (there are several write-ups about it - I'm sure you've seen one or two).

Maxwell is either "too soon to call" or anecdotal. We probably need at least 6-12 more months before we can confidently make a determination.

But anecdotally, I recently happened to look at a recent Techpowerup review and was surprised at how the 980 Ti appears to look "worse" relative to something like a Fury X than it did back in mid-2015. Rather than the 980 Ti being 2% better at 4K and 14% better at 1080p in the original Fury X review, in a recent review, the Fury X appears on top by 9% at 4K and the 980 Ti's "lead" closes to 3% at 1080p. Comparing the 980 and 390X, there's a similar effect.

Obviously, a differing distribution of game choice can affect that and this is just looking at one site's reviews (admittedly for that easily digestible percentage-based summary /shame). But considering how drastic Kepler's problems have been, I don't believe it's unthinkable that something similar could happen to Maxwell (perhaps less drastically).

@gongo... if you need to alt+tab out of a game because 8GB is not enough, that would mean the game can only run on the Titan X with its GDDR5X... too bad for 1080 and future 1080 Ti gamers, lol. I can imagine such a game would justify the Titan X pricing? (A game with extremely bad memory optimization, maybe.)

I don't think Nvidia is stupid enough to push developers to use, let's say, 9GB of VRAM just to make a graphically average game that can only run on the Titan X (or that would call for a lot of marketing).

Seriously, when the Titan X was released, everyone was somewhat laughing at the 12GB memory pool on a "gaming GPU"... it is only justifiable if you use this GPU for compute and CUDA ray-tracing solutions, not for gaming. So if the line is now that 8GB is poor versus 12GB... please.

Maybe Nvidia will release a 1080 Ti with 12GB... and games will suddenly use 10GB of memory?

When I talk about VRAM concerns, it's purely a marketing issue. I agree that 8GB will be sufficient in most games for the near future from a "functional" performance perspective.

But simply because GP102 has a 384-bit bus and GDDR5X is only available in 8Gb capacities (as far as I know; Micron's catalog appears blank, but it used to list only 8Gb options), a fully enabled GP102 must have 12GB of GDDR5X (potentially 24GB if they double up). The 1080 Ti could release with some memory controllers disabled to get down to something like 10GB, but if it has all of the memory guts that GP102 has to offer, then 12GB is effectively guaranteed.
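
Just to show my work on that bus-width math, here's the arithmetic as a quick sketch (the two-controllers-disabled case is my own illustrative assumption, not a leak):

```python
# GDDR5X devices sit on 32-bit channels, so the bus width fixes the chip count.
BUS_WIDTH_BITS = 384      # fully enabled GP102 memory bus
CHANNEL_BITS = 32         # per GDDR5X device
DEVICE_GB = 1             # 8Gb = 1GB per device (the only capacity shipping)

devices = BUS_WIDTH_BITS // CHANNEL_BITS
print(f"Full GP102: {devices} devices -> {devices * DEVICE_GB}GB "
      f"({devices * DEVICE_GB * 2}GB in clamshell mode)")

# Hypothetical cut-down 1080 Ti with two 32-bit controllers disabled:
cut = (BUS_WIDTH_BITS - 2 * CHANNEL_BITS) // CHANNEL_BITS
print(f"Cut-down part: {cut} devices -> {cut * DEVICE_GB}GB")
```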
 
Obviously, a differing distribution of game choice can affect that
Yeah, back then most AMD-backed games were yet to be released; the recent review you refer to now has Hitman, Deus Ex MD, and Total Warhammer (the built-in test), which all lean heavily towards AMD. Maxwell is holding up very well so far in all recent titles (Gears 4, Forza Horizon 3, Watch_Dogs 2, Dishonored 2, Battlefield 1, Titanfall 2, etc.).
 
Isn't Raja's point that these games are filling the VRAM but not actually using it? That the actual accessed VRAM is about half?

Lol, yeah, that's what the video is mostly about. Not sure why my post got pulled into this thread, but oh well, lol.

What's the ideal GPU to use for identical performance but VRAM comparison.... RX480?

I don't think that's the right perspective to validate Raja's assertions. I know this thread collected a lot of info on the idea of "total" VRAM occupied, but Raja is saying that VRAM actually utilized is a much smaller portion of total usage. Are there existing tools to measure not only total "occupation" of VRAM, but actual "utilization"?

I think you're trying to assert that if a given game occupies, say, 6GB of the 8GB on a 480 but only actually utilizes 3GB of the 6GB, then if that same game were run on a 4GB 480, the game should intelligently occupy the 4GB available so that the "important" 3GB is contained within that 4GB. Right? Then you should be able to see no traditional performance (e.g. FPS, etc) degradation between the 8GB and 4GB versions. I'd say that if today's game already did that smart asset allocation in memory, then there would be no need for Vega to implement any fancy tech to manage memory as the games would already be doing it. I might be misinterpreting your post and I'd love to hear your thoughts.

Yeah, back then most AMD-backed games were yet to be released; the recent review you refer to now has Hitman, Deus Ex MD, and Total Warhammer (the built-in test), which all lean heavily towards AMD. Maxwell is holding up very well so far in all recent titles (Gears 4, Forza Horizon 3, Watch_Dogs 2, Dishonored 2, Battlefield 1, Titanfall 2, etc.).

Yeah, that's the big problem. But simultaneously, that's what ends up causing the "performance degradation" that we talk about (i.e. it's not like Kepler somehow started getting lower FPS in the same exact games that were out from its release, it was newer games), so attempting to measure this stuff ends up being murky. Is it fair to include a game that obviously favors AMD? On one side, it's a new game like any other (and you can assume that roughly as many Nvidia-facoring games will show up as AMD-favoring games), but on the other side, it can cause unrealistic fluctuations.

I went full aspie (:smile:) and transcribed the per-game 1080p FPS figures from that recent TechPowerUp review in order to throw out games that might cause some unfairness. After throwing out the three obvious ones that you suggested, you still have the 980 Ti overtaking the Fury X by 4.7% (compared to about 3% unadjusted). That's still not nearing the 14% spread when the cards were released. For kicks, I removed all of the other games where the Fury X "wins" (DOOM, BF1, Arkham Knight, COD:BO3, F1 2016) and the 980 Ti's lead grows to about 12.2%. That's effectively at the 14% spread from mid-2015, but it omits a lot of marquee AAA titles (Battlefield, COD, DOOM, etc) while stuff like the infamous Fallout 4 remains. But it's gonna be muddy no matter what.

Another potentially fairer way to look at it is to compare Maxwell against Pascal. Based on that recent review that I shared, the 1080 has a 4k/1080p advantage over the 980 Ti of 39.1%/35.3%. Looking back at the 1080's original review, that has increased slightly from the original 37.0/31.6% advantage (4K/1080p, respectively). Again, that admittedly looks at a slightly different pool of games (but if it didn't, then the results wouldn't've changed). That slight difference is definitely within the margin of error for this kind of benchmarking, but it's in the direction that we'd "expect" it to be (especially over barely 6 months).

Ultimately, while it's too soon to be sure, every metric that I can think of shows Maxwell at least beginning to slow down relative to its peers. We'll know more later in the year or early next (when those much more thorough than myself dig into it).
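
For anyone who wants to replay that kind of comparison, this is roughly the arithmetic involved, shown with made-up FPS numbers rather than the real review data (and it's a simple average of per-game ratios, not necessarily TechPowerUp's exact method):

```python
# Hypothetical per-game average FPS for two cards (NOT real review numbers).
fps_980ti = {"Game A": 92.0, "Game B": 61.5, "Game C": 77.0, "Game D": 45.0}
fps_furyx = {"Game A": 84.0, "Game B": 66.0, "Game C": 71.0, "Game D": 48.5}

def relative_lead(a, b):
    """Average of per-game ratios of card a over card b, as a percentage lead."""
    ratios = [a[game] / b[game] for game in a]
    return (sum(ratios) / len(ratios) - 1.0) * 100.0

print(f"980 Ti lead over Fury X: {relative_lead(fps_980ti, fps_furyx):+.1f}%")

# Dropping games that clearly favor one vendor shifts the headline number,
# which is exactly why these single-percentage summaries get murky.
keep = [g for g in fps_980ti if g != "Game D"]
print(f"Without Game D: "
      f"{relative_lead({g: fps_980ti[g] for g in keep}, {g: fps_furyx[g] for g in keep}):+.1f}%")
```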
 
I don't think that's the right perspective to validate Raja's assertions. I know this thread collected a lot of info on the idea of "total" VRAM occupied, but Raja is saying that VRAM actually utilized is a much smaller portion of total usage. Are there existing tools to measure not only total "occupation" of VRAM, but actual "utilization"?

I think you're trying to assert that if a given game occupies, say, 6GB of the 8GB on a 480 but only actually utilizes 3GB of the 6GB, then if that same game were run on a 4GB 480, the game should intelligently occupy the 4GB available so that the "important" 3GB is contained within that 4GB. Right? Then you should be able to see no traditional performance (e.g. FPS, etc) degradation between the 8GB and 4GB versions. I'd say that if today's game already did that smart asset allocation in memory, then there would be no need for Vega to implement any fancy tech to manage memory as the games would already be doing it. I might be misinterpreting your post and I'd love to hear your thoughts.
Yes that's exactly what I'm saying and why I suggested the 480. In theory, the performance should be identical or within a few percent variance.

It's possible, and I believe it has been mentioned elsewhere on this forum, that the consoles' 8GB of unified memory may be the cause of these modern titles consuming so much VRAM, as the devs are basically throwing everything they can at the memory allocation irrespective of actual usage.

The 1080p titles consuming >4GB would be good tests for the 480, and I suppose the 1060 as well, since Nvidia created the ridiculous 3GB version. I think the 480 would be more ideal, though.
 
10 GB allocated VRAM doesn't mean that the game is actually actively using all of that. If a game engine detects large amount of VRAM, it most likely will increase texture/mesh streaming buffer sizes to that extra memory (otherwise it would be wasted). Less HDD traffic and slightly less potential texture popping issues. Unfortunately increasing cache size (assuming LRU) beyond active working set has only log(N) improvement. Even if some game scales up to 16 GB, doesn't mean that it brings any noticeable gains over 8 GB.
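
A quick toy model of that log-like scaling (my own sketch, not anything out of a real engine): an LRU streaming cache fed with a skewed, Zipf-like access pattern. The hit rate climbs fast until the cache covers the hot working set, and each further doubling of capacity buys very little.

```python
import random
from collections import OrderedDict

def lru_hit_rate(cache_pages, accesses):
    """Hit rate of a plain LRU cache over a stream of page accesses."""
    cache, hits = OrderedDict(), 0
    for page in accesses:
        if page in cache:
            hits += 1
            cache.move_to_end(page)
        else:
            cache[page] = True
            if len(cache) > cache_pages:
                cache.popitem(last=False)  # evict least recently used page
    return hits / len(accesses)

random.seed(0)
TOTAL_PAGES = 4096  # "game content" much larger than any plausible cache
weights = [1.0 / (i + 1) for i in range(TOTAL_PAGES)]  # hot working set + long tail
accesses = random.choices(range(TOTAL_PAGES), weights=weights, k=200_000)

for size in (256, 512, 1024, 2048, 4096):
    print(f"cache {size:4d} pages -> hit rate {lru_hit_rate(size, accesses):.3f}")
```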

Instead of looking at the amount of committed VRAM, we should use tools that show actual amount of accessed memory (per frame or per area). AMD has teased on-demand paging in games on Vega. If this happens, then we will see VRAM usage much closer to the actual working set (accessed memory) instead of the allocated amount of memory. In all games, not just in those games that implement custom fine grained texture and mesh streaming.

Those "way beyond awesome quality" brute force shadow settings are just silly. With a bit more sophisticated tech, you get both better quality and significantly better performance. DICE mentioned in a Frostbite presentation that they are moving to GPU-driven rendering in the future, meaning that they can also do precise shadow map culling in the future (based on actually visible surfaces). I would expect techniques such as virtual shadow mapping (pages 55-56): http://advances.realtimerendering.c...siggraph2015_combined_final_footer_220dpi.pdf and this: https://developer.nvidia.com/hybrid-frustum-traced-shadows-0 to become popular in the future. Brute force n^2 scaling of shadow resolution is too big waste for processing and memory.
 
Instead of looking at the amount of committed VRAM, we should use tools that show actual amount of accessed memory (per frame or per area). AMD has teased on-demand paging in games on Vega. If this happens, then we will see VRAM usage much closer to the actual working set (accessed memory) instead of the allocated amount of memory. In all games, not just in those games that implement custom fine grained texture and mesh streaming.

Yes, yes, yes and YES! :)
 
Instead of looking at the amount of committed VRAM, we should use tools that show actual amount of accessed memory (per frame or per area). AMD has teased on-demand paging in games on Vega. If this happens, then we will see VRAM usage much closer to the actual working set (accessed memory) instead of the allocated amount of memory. In all games, not just in those games that implement custom fine grained texture and mesh streaming.
I agree, but generally the "interesting" stuff starts to happen only when VRAM is actually overcommitted. The tools reviewers use to test "memory usage" today don't actually tell you that; they give min(totalVram, allocatedAssets). You can't work out the actual working set from that, yet people treat this value as exactly that. You can't even work out whether you're overcommitted or not.
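
A trivial way to see why that reported number is ambiguous (illustrative values only):

```python
def reported_usage(total_vram_gb, allocated_gb):
    # What typical monitoring tools effectively show: allocations, capped at card size.
    return min(total_vram_gb, allocated_gb)

# Slightly and massively overcommitted read exactly the same on an 8GB card:
print(reported_usage(8, 8.5))    # 8
print(reported_usage(8, 20.0))   # 8
# And a game that allocates 7.5GB but only touches ~3GB per frame still reads as 7.5:
print(reported_usage(8, 7.5))    # 7.5
```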
 