Will GPUs with 4GB VRAM age poorly?

Dishonored 2 seems to run badly on the Fury X as a result of memory shortages; its performance is equal to that of other 4GB GPUs.

[Image: Dishonored 2 1080p benchmark chart]


https://www.techpowerup.com/reviews/Performance_Analysis/Dishonored_2/4.html

Also:
https://www.overclock3d.net/reviews/software/dishonored_2_pc_performance_review/6
https://www.computerbase.de/2016-11/dishonored-2-benchmark/3/#diagramm-dishonored-2-1920-1080
http://www.pcgameshardware.de/Disho...s/Benchmark-Test-Systemanforderungen-1212965/
 
If the Xbox Scorpio proves a large success, a large gaming userbase will have 6-7GB of memory available to the Scorpio's GPU (12GB total, with 3GB reserved for the OS; game CPU code generally doesn't scale much, so most games probably use around 2-3GB on the CPU side, leaving 6-7GB available to the GPU). The larger the Scorpio userbase becomes, the more developers will likely take advantage of it with high-end visual features. And if you have a card with 6 TFLOPS of floating-point performance but only 3-4GB of VRAM (like the GTX 1060 3GB, GTX 980 4GB, or RX 480 4GB), you simply won't be able to take advantage of those higher settings.
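A minimal sketch of that budget arithmetic (the 3GB OS reservation and the 2-3GB CPU-side range are the assumptions from the paragraph above, not confirmed specs):

```python
# Back-of-the-envelope Scorpio memory budget; every figure here is
# the assumption from the paragraph above, not a confirmed spec.
total_gb = 12          # rumoured unified memory pool
os_reserved_gb = 3     # assumed OS reservation
cpu_side_gb = (2, 3)   # assumed range used by game CPU code

game_pool = total_gb - os_reserved_gb
gpu_budget = (game_pool - cpu_side_gb[1], game_pool - cpu_side_gb[0])
print(f"Estimated GPU memory budget: {gpu_budget[0]}-{gpu_budget[1]} GB")
# -> Estimated GPU memory budget: 6-7 GB
```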

Microsoft employee Spitfiresix on IGN thinks there is a good chance they can hit the $399 price point next Christmas.
If Scorpio is $399 at launch, I don't see why it couldn't have many millions of users after a few years.

Of course, Microsoft's own first-party games may take advantage of that large amount of RAM available to the GPU right away. The next Forza 7 on PC & Scorpio may have some absolutely jaw-dropping visual effects requiring 6-7GB of VRAM, since that is likely what will be available to the Scorpio's GPU, and they will take full advantage of the hardware to push visuals, as it's a system seller.
 
That much RAM being used depends on whether the GPU can actually make use of it. After all, we all remember those old GeForce super-low-end cards with insane amounts of memory...
 
Texture settings in games usually don't make any difference to performance unless you're running out of VRAM.

As for Dishonored 2, the 970 isn't a 4GB card.
 
Dishonored 2 seems to sit at about 3.5 GB of VRAM usage on my 970 regardless of detail settings, according to HWiNFO. The "bus traffic" measurement increases if settings are kicked up higher, and then the frame rate starts to hesitate more.
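If anyone wants to cross-check HWiNFO's numbers on an Nvidia card, nvidia-smi can poll dedicated memory usage from the command line; a minimal sketch, assuming the standard nvidia-smi tool that ships with the driver is on your PATH:

```python
import subprocess
import time

# Poll dedicated VRAM usage once per second via nvidia-smi
# (Nvidia GPUs only; nvidia-smi ships with the driver).
while True:
    used_mib = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"],
        text=True,
    ).strip()
    print(f"VRAM used: {used_mib} MiB")
    time.sleep(1)
```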

The game is in need of some tweaking on Arkane's side though. It has some pretty annoying performance problems at the moment. And TXAA is making me feel blind because of its blur. I wonder if a sharpening filter would be an idea. In the distance you can often see TXAA causing texture/geometry flickering as it alternates pixels or however it works. Also, if you disable AA entirely, it's like Doom with really extreme aliasing. The FXAA options are, as usual, not so pretty unless you just stand still.
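For what it's worth, the usual DIY counter to TAA-style blur is an unsharp mask; here's a minimal offline sketch with Pillow and NumPy you could run on a saved screenshot (the filename is just a placeholder, and the radius/amount values are illustrative guesses, not the parameters of any actual in-game or Reshade filter):

```python
import numpy as np
from PIL import Image, ImageFilter

def unsharp_mask(path, radius=2, amount=0.6):
    """Sharpen an image by adding back high-frequency detail lost to
    blur: sharpened = original + amount * (original - blurred)."""
    img = Image.open(path).convert("RGB")
    blurred = img.filter(ImageFilter.GaussianBlur(radius))
    orig = np.asarray(img, dtype=np.float32)
    blur = np.asarray(blurred, dtype=np.float32)
    sharpened = np.clip(orig + amount * (orig - blur), 0, 255)
    return Image.fromarray(sharpened.astype(np.uint8))

unsharp_mask("screenshot.png").save("screenshot_sharpened.png")
```

Reshade's Adaptive Sharpen does something smarter (it adapts the sharpening strength per pixel to limit haloing), but the basic principle is the same.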

CPU usage is typical of idTech 5, with my i5 usually near 100% utilization. Must be transcoding megatextures. If I stare at the ground it will drop to 50%.

There is no specific anisotropic filtering option, but I wonder if it's still limited to 4x.
 
Dishonored 2 PC Game Analysis
Dishonored 2 has some serious issues on PC. Frame rates are far too low across the board and while you can get some decent average frame rates, the minimums are much lower than we would like and really disrupt the experience overall. As you can see from our graphs, it is also very clear that this game heavily favours Nvidia graphics cards. Even the R9 Fury X failed to hit a 60 frames per second average at 1080p, which is crazy.

[Image: Dishonored 2 1080p benchmark graph]

http://www.kitguru.net/components/graphic-cards/matthew-wilson/dishonored-2-pc-game-analysis/
 
Yeah with Dishonored 2 you'd be surprised how similarly the Ultra and Lowest presets run. And that's not a good thing. ;) I wish it ran better, but it's not like it's unplayable.

I haven't had any crashes yet.

I really like how much they kicked the texture resolution up this time. The previous game had amazing artwork, but the texture resolution was so low that the artwork was ruined. I saw someone recommend Reshade's Adaptive Sharpen to reverse the TXAA blur, and it does work very well (at the cost of some frame rate, unfortunately).



[Screenshot with Adaptive Sharpen applied]
 
Some great prices on RX 480 4GB versions. I've seen the custom ones going for $160 USD at Newegg.
 
Some great prices on RX 480 4GB versions. I've seen the custom ones going for $160 USD at Newegg.
I'm not sure 4GB is a wise choice anymore. Though for $160 it's almost low-end budget pricing, and man, that is sweet hardware for so little money.
 
I'm not sure 4GB is a wise choice anymore. Though for $160 it's almost low-end budget pricing, and man, that is sweet hardware for so little money.

Especially for people who only own a 1080p or even a 1440p monitor, the 4GB version will not make much of a difference over the 8GB one. The 480 is not a high-end GPU anyway, and at this price, that's a sweet deal. (Who knows, maybe it is even possible to flash them to the 8GB version if you're lucky.)
 
Especially for people who only own a 1080p or even a 1440p monitor, the 4GB version will not make much of a difference over the 8GB one. The 480 is not a high-end GPU anyway, and at this price, that's a sweet deal. (Who knows, maybe it is even possible to flash them to the 8GB version if you're lucky.)
It's only possible even on a theoretical level if you can still find 4GB cards using the original reference PCB. All the AIB custom ones are real 4GB cards, and most of the reference ones should be real 4GB by now too.
 
Especially for people who only own a 1080p or even a 1440p monitor, the 4GB version will not make much of a difference over the 8GB one. The 480 is not a high-end GPU anyway, and at this price, that's a sweet deal. (Who knows, maybe it is even possible to flash them to the 8GB version if you're lucky.)
Anything above 1080p, a 4GB card would be a big no-no for me. An RX 480 4GB is just perfect for gaming on my HTPC, though.
 
Anything above 1080p, a 4GB card would be a big no-no for me. An RX 480 4GB is just perfect for gaming on my HTPC, though.
The GeForce GTX 980 (non-Ti) also had 4 GB of memory. I know several people with a 1440p display + GTX 980. We also have them at work. I also know people with a GTX 970 + 1440p display (and that card had "only" 3.5 GB of usable memory). I don't agree that 4 GB GPU + 1440p is a bad pairing. Current games run just fine.

Lots of people are still using the GeForce GTX 680, usually paired with a 1080p display. All new games must be designed to work perfectly at 1080p with a 2 GB GPU. Popular Nvidia flagship GPUs had just 2 GB of memory a few years ago. Devs just can't ignore those. The GTX 680 is roughly as powerful as the PS4 Pro.
 
Well, to me the fact that games can run fine at 1080p on 2GB of VRAM is irrelevant. Something like the 480 is powerful enough to run high-end games at full quality, which is going to push well past 2GB. I'm currently running a 970 at 2560x1080 ultrawide and I'm pushing 3.5GB quite frequently, at 75% of the pixels of 1440p.

For the average PC gamer? Sure, 4GB is fine at 1440p, as a lot of them don't even go into the video settings. For me personally? I'd rather not be pushing that limit, and I'd rather look further ahead when making a new purchase.
 
Yeah, there are more than a few games now that will use more than 4GB. I really don't think buying a 4GB card is a good idea anymore. But sure, they can still run most games alright, with some reductions.

I don't have any recent 2GB cards, so no experience with what that's like now. My 6970 is thoroughly ancient history. I sold off my notebook with a 2GB 860M, but that didn't really have much performance potential.
 
The GeForce GTX 980 (non-Ti) also had 4 GB of memory. I know several people with a 1440p display + GTX 980. We also have them at work. I also know people with a GTX 970 + 1440p display (and that card had "only" 3.5 GB of usable memory). I don't agree that 4 GB GPU + 1440p is a bad pairing. Current games run just fine.

Lots of people are still using the GeForce GTX 680, usually paired with a 1080p display. All new games must be designed to work perfectly at 1080p with a 2 GB GPU. Popular Nvidia flagship GPUs had just 2 GB of memory a few years ago. Devs just can't ignore those. The GTX 680 is roughly as powerful as the PS4 Pro.

At the time of the GTX 480 / HD 5870 (2009-2010), 512MB cards were the norm and almost all games were designed around that limit. There was only one game that ever needed more than 1 GB of VRAM on PC: GTA IV, which needed 1.5GB at 1080p, and at that time 1080p wasn't as widespread as it is today. There was also COD: Modern Warfare 2, which required close to 1GB of VRAM at 1080p, as well as Arma 2, but those were nothing special.

By the time the GTX 580 / 6970 came out, we saw 2GB+ cards become the standard at the high end (Nvidia later released 3GB versions of the GTX 580), but games couldn't really require that much RAM unless you wanted to play at ultra-widescreen resolutions or with heavy mods. So outside of GTA IV, all was well and dandy.

That changed in the 2011 era with Rage, which required 1.5GB of VRAM at the highest texture level to prevent streaming issues, and later with the advent of Max Payne 3 and GTA V, which pushed the same memory usage as Rage. But by that time we had the GTX 680 / 7970, and 3GB/4GB cards were a reality (4GB GTX 680s). However, 1.5GB cards were more than enough, and you only needed more if you planned to use resolutions above 1080p, at least in those three games (GTA V, Max Payne 3, Rage).

In 2013, things got out of hand with the release of the likes of COD: Ghosts. We then entered the territory where video games use crazy amounts of VRAM irrespective of the user's resolution, and they use it just to store textures. At that time we saw the necessity of 3GB and 4GB cards. BF4 recommended 2GB+ cards for large maps, Arma 3 did the same, and then we had Thief do the same thing as well! Then Watch_Dogs, Wolfenstein, Titanfall, Daylight, Dead Rising 3, Ryse, COD: Advanced Warfare, Dying Light, Dragon Age: Inquisition, Far Cry 4, The Evil Within, Just Cause 3, COD: Black Ops 3, and Evolve all did the same, sometimes even pushing past the 3GB limit into 4GB! Then we pushed further into the territory of 4GB+ games like Shadow of Mordor, Batman: Arkham Knight, Assassin's Creed Unity, Rise of the Tomb Raider, Mirror's Edge Catalyst, Rainbow Six Siege, Doom, GTA V, and many of the AAA games released in 2016: Dishonored 2, Gears of War 4, Forza Horizon 3, Deus Ex: Mankind Divided, COD: Infinite Warfare. Some of these games already push 7GB+ VRAM utilization!

In my personal experience, having 16GB of system RAM can sometimes negate the need for more than 3GB or 4GB of video RAM at 1080p, but that depends on the game (I tried that with Shadow of Mordor, Batman: Arkham Knight, and Assassin's Creed Unity), and sometimes even on the level within those games. But for those who seek maximum visual quality in the current roster of PC games, 4GB is sadly not enough.
 
Well, to me the fact that games can run fine at 1080p on 2GB of VRAM is irrelevant. Something like the 480 is powerful enough to run high-end games at full quality, which is going to push well past 2GB. I'm currently running a 970 at 2560x1080 ultrawide and I'm pushing 3.5GB quite frequently, at 75% of the pixels of 1440p.

For the average PC gamer? Sure, 4GB is fine at 1440p, as a lot of them don't even go into the video settings. For me personally? I'd rather not be pushing that limit, and I'd rather look further ahead when making a new purchase.
Future pixels won't cost the same as today's. If you compare the GeForce GTX 680 and the GeForce GTX 1080:
- pixel fillrate went up by 3x
- texture fillrate went up by 2x
- FLOPS went up by 2.7x; if you adjust for the fact that Kepler needs co-issue, it went up 4.1x
- memory bandwidth went up by only 60%
- memory pool went up by 4x
And let's not even start with Fury...
Now of course a game can hog the entire 4GB. It can just as easily hog 8GB or even 16GB. After all, many games will treat local GPU memory basically as a texture cache. Even if a game is consuming the entire memory pool, that doesn't mean there will actually be any benefit from it. Virtual texturing is also no longer something strapped onto an API that was never meant to be used like that; today we have control over async copy engines and proper tiled textures.
The ALU-to-memory-access ratio will shift even more towards the ALUs in the future. It has to: bandwidth just isn't following the sharp rise in computational power.
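A quick back-of-the-envelope check of those ratios from the published reference specs (boost clocks and unit counts; treat the results as approximate):

```python
# Rough GTX 680 -> GTX 1080 scaling check from published reference
# specs (boost clocks, unit counts); real boards vary, so treat the
# ratios as approximate.
specs = {
    "GTX 680":  dict(rops=32, tmus=128, shaders=1536, clock_mhz=1058,
                     bandwidth_gbs=192, vram_gb=2),
    "GTX 1080": dict(rops=64, tmus=160, shaders=2560, clock_mhz=1733,
                     bandwidth_gbs=320, vram_gb=8),
}

def rates(s):
    ghz = s["clock_mhz"] / 1000
    return {
        "pixel fillrate":   s["rops"] * ghz,        # GPixels/s
        "texture fillrate": s["tmus"] * ghz,        # GTexels/s
        "FLOPS":            2 * s["shaders"] * ghz, # GFLOPS (FMA = 2 ops/clock)
        "memory bandwidth": s["bandwidth_gbs"],     # GB/s
        "memory pool":      s["vram_gb"],           # GB
    }

old, new = rates(specs["GTX 680"]), rates(specs["GTX 1080"])
for metric in old:
    print(f"{metric}: {new[metric] / old[metric]:.1f}x")
# pixel fillrate: 3.3x, texture fillrate: 2.0x, FLOPS: 2.7x,
# memory bandwidth: 1.7x, memory pool: 4.0x
```

The raw-spec FLOPS ratio lands at ~2.7x; the 4.1x figure in the list comes from discounting Kepler's co-issue requirement, which a spec-sheet calculation like this doesn't capture.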
 
How does that help if the engine doesn't do it? I guess he's suggesting taking the matter into your own hands: buying more VRAM gives guaranteed improvements, instead of praying that (to equal effect) _all_ engines are smart with memory.

On a side note, I never understood why there is no huge market of middleware for all kinds of engine subsystems. The stuff is so complex now that no single company can produce optimal algorithms for all of them, let alone innovate on all fronts.
 