Sounds like Raja is starting to 'pre-educate' viewers about Vega's limitations..??
8GB?
Don't run ultra?
Alt+tab from a game lags because of too little VRAM???
No no, Vega is already capable of using two 8GB stacks; it's just that the 8-Hi stacks aren't there yet, or are too costly for the gaming parts at the moment. But it's a different story for professional GPUs.
In fact, the MI25 that AMD presented should already sport two 8GB stacks (and that card will ship before the gaming parts, if I've understood correctly).
Here's a rather depressing take on Vega:
http://techbuyersguru.com/ces-2017-amds-ryzen-and-vega-revealed?page=1
"Additionally, Scott made clear that this is very much a next-gen product, but that many of the cutting-edge features of Vega cannot be utilized natively by DX12, let alone DX11."
Is Vega another 7970 that will take years before it gets competitive?
Really? Haven't they learned anything?
Do we know if 8-Hi stacks are in production now? Is it SK Hynix that's making them?
I suppose if the pro part is coming first and it's coming in 1H17, then somebody must be making this stuff.
In a way, that's depressing. But at least it's kind of nice from a consumer gaming perspective. AMD would be more likely to discount these chips, and then you could pick them up knowing they would age well. Meanwhile, on the Nvidia side, you're grabbing a card that won't ever hit fire-sale prices, and it won't age well at all if Kepler or Maxwell are any indication.
"Seriously, when the Titan X was released, everyone was somewhat laughing at its 12GB memory pool."
If you are doing 4K, you won't be laughing; some games already hit 10GB at this resolution.
"It won't age well at all if Kepler or Maxwell are any indication."
Maxwell didn't age well? How so?
"Don't run ultra?"
It doesn't necessarily mean that. AMD has preferred performance over quality for quite some time now; that's why they lagged so far behind NV in introducing new visual features on PC. Raja's statement is just an expression of that philosophy.
Fury cards seem to age poorly in any memory-intensive game nowadays; the observation is based on two things:
1- Massive fps drops on the Fury cards compared to the competition when maximum visual settings are enabled.
2- 390 cards getting close (equal or better) fps to Fury cards, thanks to their 8GB of RAM.
"Honestly? ... Lazy developers?"
I don't argue that they are lazy, but that is irrelevant in this matter. People buy advanced GPUs to play games with. When you have games that exceed 8GB @4K, you don't bug out, call the developers lazy, and refuse to play or buy those games! You make the hardware that is capable of properly playing them.
"More seriously, is there a game on this list that hasn't been backed by Nvidia since the Fury X release? All I read is 'NV recommends', 'NV reports', and GameWorks games."
So what? They are games nonetheless! Should we discard them from the discussion just because they have GameWorks? The memory-intensive stuff doesn't come from the GameWorks library, but from texture, shadow, and reflection resolution. There are also AMD-backed games in that list; Deus Ex: Mankind Divided is one example among others.
"All I read is 'NV recommends', 'NV reports'..."
Actually, I've included as many independent publications that confirm NV's findings as I could; some have even exceeded NV's findings. They are there for reading, not glancing over. The fact is, games are pushing beyond 8GB @4K; you simply can't wave these findings goodbye like they are nothing just because you think developers are lazy.
"AMD has no preference of performance over quality, and no features are lagging because of this mythical philosophy."
Well, it certainly seems this way. NV constantly pushes the visual front: they introduced various forms of AO (HBAO+, VXAO) and can enable AO through drivers for a dozen games that didn't previously support it; they have various forms of AA (TXAA, MFAA, FXAA); they have FaceWorks, WaveWorks, HairWorks, and ShadowWorks (HFTS, PCSS, PCSS+). They still push for PhysX in games to this day. They have Adaptive V-Sync, Fast Sync, adaptive half refresh rate, etc. They were first with DSR and G-Sync as well. And these are just off the top of my head. AMD has answers to some of them, obviously, but still not all. Hence why they lagged behind and focused on performance-enhancing features like Mantle, for example.
"So the 1080 is completely irrelevant (in fact, 1080 users are completely screwed), and only the Titan X with its 12GB is fine? I hope the 1080 Ti will also get 12GB of GDDR5X then... not half? Who knows?"
Yeah, the GTX 1080 can't play these games at 4K. If the 1080 Ti doesn't have more than 8GB, it is screwed as well!
@gongo... if you need to alt+tab out of a game because 8GB is not enough, that means the game can only run on the Titan X. So bad for 1080 and future 1080 Ti gamers, lol. I can imagine such a game would justify the Titan X pricing? (A game with extremely bad memory optimization, maybe.)
I don't think Nvidia is stupid enough to push developers to use, say, 9GB of VRAM just to make a graphically average game that can only run on the Titan X (or that would call for a lot of marketing).
Seriously, when the Titan X was released, everyone was somewhat laughing at that 12GB memory pool on a "gaming GPU". It can only be justified if you use that GPU for compute and ray-tracing CUDA work, not for gaming. So if the argument is now that 8GB is poor versus 12GB... please.
Maybe Nvidia will release a 1080 Ti with 12GB... and games will suddenly use 10GB of memory?
"Obviously, a differing distribution of game choice can affect that."
Yeah, back then most AMD-backed games were yet to be released; the recent review you refer to now has Hitman, Deus Ex MD, and Total War: Warhammer (the built-in test), which all lean heavily towards AMD. Maxwell is holding up very well so far in all recent titles (Gears 4, Forza Horizon 3, Watch_Dogs 2, Dishonored 2, Battlefield 1, Titanfall 2, etc.).
Isn't Raja's point that these games are filling the VRAM but not actually using it? That the actual accessed VRAM is about half?
What's the ideal GPU to use for a VRAM comparison at otherwise identical performance... the RX 480?
"I don't think that's the right perspective from which to validate Raja's assertions. I know this thread has collected a lot of info on the idea of 'total' VRAM occupied, but Raja is saying that the VRAM actually utilized is a much smaller portion of total usage. Are there existing tools to measure not only the total 'occupation' of VRAM, but the actual 'utilization'?"
Yes, that's exactly what I'm saying, and why I suggested the 480. In theory, the performance should be identical, or within a few percent variance.
I think you're trying to assert that if a given game occupies, say, 6GB of the 8GB on a 480 but only actually utilizes 3GB of that 6GB, then if the same game were run on a 4GB 480, the game should intelligently occupy the 4GB available so that the "important" 3GB is contained within that 4GB. Right? Then you should see no traditional performance degradation (e.g. FPS) between the 8GB and 4GB versions. I'd say that if today's games already did that smart asset allocation in memory, there would be no need for Vega to implement any fancy memory-management tech, as the games would already be doing it. I might be misinterpreting your post, and I'd love to hear your thoughts.
Instead of looking at the amount of committed VRAM, we should use tools that show actual amount of accessed memory (per frame or per area). AMD has teased on-demand paging in games on Vega. If this happens, then we will see VRAM usage much closer to the actual working set (accessed memory) instead of the allocated amount of memory. In all games, not just in those games that implement custom fine grained texture and mesh streaming.
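To make the working-set idea concrete, here's a toy sketch in plain Python. It's purely illustrative: the page counts, VRAM budget, and LRU eviction policy are my assumptions for the example, not anything AMD has described about Vega's actual paging hardware.

```python
from collections import OrderedDict

class LRUPager:
    """Toy on-demand pager (hypothetical): a page is brought into VRAM
    only when accessed, evicting the least-recently-used page once the
    VRAM budget is full."""
    def __init__(self, vram_pages):
        self.vram_pages = vram_pages
        self.resident = OrderedDict()  # page id -> True, kept in LRU order
        self.faults = 0                # page-ins from system memory

    def access(self, page):
        if page in self.resident:
            self.resident.move_to_end(page)  # mark as most recently used
        else:
            self.faults += 1
            if len(self.resident) >= self.vram_pages:
                self.resident.popitem(last=False)  # evict the LRU page
            self.resident[page] = True

# A "game" with 100 pages of assets allocated, but each frame only
# touches pages 0-39 (the working set).
pager = LRUPager(vram_pages=50)
for frame in range(10):
    for page in range(40):
        pager.access(page)

# Residency settles at the 40-page working set, not the 100 allocated:
# 40 resident pages, and only 40 cold-start faults across all 10 frames.
print(len(pager.resident), pager.faults)  # -> 40 40
```

The point of the sketch: with on-demand paging, the allocation size (100 pages) never matters; only the 40 pages actually touched ever occupy VRAM, which is exactly the "usage tracks the working set" behavior described above.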
I agree, but generally "interesting" stuff starts to happen only when VRAM is actually over committed. Tools that are used to test "memory usage" by reviewers today don't actually tell you that. They give min(totalVram, allocatedAssets). You can't work out an actual working set from that. Yet people are treating this value just as such. You can't even work out if you're over committed or not.Instead of looking at the amount of committed VRAM, we should use tools that show actual amount of accessed memory (per frame or per area). AMD has teased on-demand paging in games on Vega. If this happens, then we will see VRAM usage much closer to the actual working set (accessed memory) instead of the allocated amount of memory. In all games, not just in those games that implement custom fine grained texture and mesh streaming.
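The min(totalVram, allocatedAssets) point can be shown with a toy model (all numbers are hypothetical, chosen only to illustrate the argument):

```python
def reported_usage(total_vram_gb, allocated_gb):
    """What typical monitoring tools report: the game's allocation,
    clamped to the card's VRAM size."""
    return min(total_vram_gb, allocated_gb)

# Hypothetical game: 10 GB of assets allocated, but only ~4 GB
# actually touched per frame (the working set).
allocated_gb = 10.0
working_set_gb = 4.0

for card_gb in (6.0, 8.0, 12.0):
    usage = reported_usage(card_gb, allocated_gb)
    fits = working_set_gb <= card_gb  # the question that actually matters
    # On the 6 GB and 8 GB cards the counter reads "full", even though
    # the 4 GB working set fits comfortably on all three cards.
    print(f"{card_gb:>4} GB card: tool reports {usage} GB used, "
          f"working set fits: {fits}")
```

The counter reads 6, 8, and 10 GB respectively, so the 6 GB and 8 GB cards look maxed out while the 12 GB card looks like it "needs" 10 GB; none of those numbers reveals that the real working set is 4 GB, or whether the card is over-committed.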