GPU prices crashed today?

Buy an RTX 3080, then sell my RTX 3070 after it comes back from RMA?

Or be patient and just wait for the RTX 3070 to come back from RMA...

And my Game Pass Ultimate expires in a few days...
 
The resell value of your RTX 3070 might not be what you think it is depending on when you get it back.

The GPU price fall and volatility is actually a really mixed situation, and I've challenged people who view it as purely a good thing for gaming buyers.

For those who plan on selling existing Ampere/RDNA2 GPUs (or older) to fund future upgrades, by Q4 this year you are almost certainly going to be selling against the real mining glut once Ethereum goes PoS. The real sell-off hasn't begun yet; there's a lot of misinformation being spread and hopefuls still clinging on.
 
They're still high but getting better.

The RTX 3060 Ti is £400 now, which is the card I've been waiting for, as I feel it's the only one that will give me a decent enough upgrade over my GTX 1070.

Although I do worry about its 8GB frame buffer.
 
Nvidia's cards are powerful at RT, but on the verge of obsolescence due to their niggardly quantities of VRAM.

AMD are fast and actually have some VRAM, until you use RT, at which point you might as well just close your eyes and remember the times you saw real light rays irl at real life FPS.
 
Nvidia's cards are powerful at RT, but on the verge of obsolescence due to their niggardly quantities of VRAM...

I only game at 1080p so VRAM should last that little bit longer and if games start being more efficient with streaming it should be OK.
 
RT is still obscenely heavy, even on an RTX 3070. I need to run Cyberpunk 2077 at 4K + DLSS Performance with an ultrawide aspect ratio to get 50-60 fps.
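
For context on why DLSS Performance helps so much: it renders internally at a fraction of the output resolution and upscales. A rough sketch of the arithmetic (the per-axis scale factors are the commonly quoted ones and are my assumption here, not something pulled from the game):

    # Rough estimate of DLSS internal render resolution per mode.
    # The per-axis scale factors below are assumed/commonly quoted values.
    DLSS_SCALE = {
        "Quality": 2 / 3,         # ~0.667x per axis
        "Balanced": 0.58,         # ~0.58x per axis
        "Performance": 0.5,       # 0.5x per axis
        "Ultra Performance": 1 / 3,
    }

    def internal_resolution(out_w, out_h, mode):
        s = DLSS_SCALE[mode]
        return round(out_w * s), round(out_h * s)

    # 4K output with DLSS Performance renders internally at roughly 1920x1080,
    # which is why it is so much lighter than native 4K with RT enabled.
    print(internal_resolution(3840, 2160, "Performance"))  # (1920, 1080)
    print(internal_resolution(3840, 2160, "Quality"))      # (2560, 1440)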
 
I only game at 1080p so VRAM should last that little bit longer and if games start being more efficient with streaming it should be OK.
I was very surprised when I bought my wife a 3070 recently and, at 1080p in Watch Dogs Legion with DLSS Quality, it was using over 7GB of VRAM.
 
I was very surprised when I bought my wife a 3070 recently and, at 1080p in Watch Dogs Legion with DLSS Quality, it was using over 7GB of VRAM.
Yea, there will always be games that will just try to keep as much data in VRAM as possible for a buffer.

I wish more games and programs did that, to be honest. If you have 32GB of RAM and Windows is only using 3GB, it seems like a waste; just fill it with the most commonly used programs to reduce lag.

Same with games: if there's spare VRAM, then store part of the next stage in memory, ready to go.
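
Something like a simple budgeted prefetch cache is what I have in mind; a minimal sketch (purely illustrative, with made-up asset names, budget and loader, not any real engine's API):

    # Minimal sketch of "use spare memory as a prefetch cache" (illustrative
    # only; the budget, asset names and loader are hypothetical).
    from collections import OrderedDict

    class PrefetchCache:
        def __init__(self, budget_bytes):
            self.budget = budget_bytes
            self.used = 0
            self.assets = OrderedDict()  # name -> (data, size), oldest first

        def prefetch(self, name, loader):
            """Load an asset ahead of time if it fits in the spare budget."""
            if name in self.assets:
                self.assets.move_to_end(name)  # recently touched, keep it
                return
            data = loader(name)
            size = len(data)
            # Evict the least recently used assets until the new one fits.
            while self.used + size > self.budget and self.assets:
                _, (_, old_size) = self.assets.popitem(last=False)
                self.used -= old_size
            if self.used + size <= self.budget:
                self.assets[name] = (data, size)
                self.used += size

        def get(self, name, loader):
            """Return the asset instantly if prefetched, else load it now."""
            if name in self.assets:
                self.assets.move_to_end(name)
                return self.assets[name][0]
            return loader(name)

    # Hypothetical usage: while the player is in level 3, quietly warm level 4.
    def load_from_disk(name):
        return b"\0" * (64 * 1024 * 1024)  # stand-in for a 64 MB texture pack

    cache = PrefetchCache(budget_bytes=2 * 1024**3)  # pretend 2 GB is spare
    cache.prefetch("level4_textures", load_from_disk)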
 
Yea, there will always be games that will just try to keep as much data in VRAM as possible for a buffer.
In this case it wasn't me monitoring the GPU VRAM to see that, but rather the game's video settings telling me it would use that much VRAM.
 
I'm sure it's because the game preloads; it works fine on lower-VRAM GPUs from what I remember of the performance reviews.

Those performance reviews aren't always very definitive. I feel people have oversimplified VRAM-related testing and might not realize how complex actually determining the requirement can be. It's not as simple as it was, say, 10+ years ago (or maybe even longer now), when it was universally just a matter of looking for stutters.

A lot of modern engines now rely heavily on being very dynamic about what data is streamed in and which LODs are used. This complicates VRAM testing: unless you have some way to directly profile, or can compare two setups side by side, you can't actually know for sure whether the engine is compensating for VRAM limitations by using lower LODs, streaming in textures later, or just plain lower-resolution textures.

Some games will also let you set graphics settings in the menu but not actually use those settings, enforcing certain limits in the background. It's a psychological thing: developers know users don't like to feel they aren't able to "max" settings.

Another issue is that the way VRAM ends up being managed and used means testing for thoroughness is tricky, and testing for consistency may not reflect actual play. Two common factors are that the memory budget can vary greatly depending on where you are in the game and what is happening, and, on the management side, how long you have been playing.

I recall, for example, right at the onset of the last console generation, one of the very early games, Shadow of Mordor, recommending 6GB for max texture settings even at 1080p. User reports on this ended up very mixed, with some people reporting that 3GB was fine while others said it was not. Why such a discrepancy? Well, depending on how you were playing, where you were playing, and what your tolerance threshold was, you'd have a different answer.

So say with an open-world game, if only a small subset of areas stutter briefly due to VRAM during fast movement/pans, especially during environment transitions, is there enough VRAM? What if it's otherwise fine for 90%+ of the rest of the gameplay scenarios? 95%? Is having to wait a few more seconds for textures to stream in a sign of not enough VRAM? There's not a simple pass/fail answer here.

I'm not saying that is the case here, but to actually deep dive and tackle this issue would be much more involved than just running a single scene (or even a couple) on fresh loads and comparing fps numbers these days.

Then there's also another factor here in terms of background applications and how the newer WDDM in Win10+ manages memory compared to the Win7-and-older era.
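
On the profiling point, even something crude like logging VRAM use across a whole play session (rather than one fresh-load scene) tells you more than a single fps number. A rough sketch, assuming an NVIDIA card with nvidia-smi on the PATH, and keeping in mind this measures allocation, not what the game strictly needs:

    # Crude VRAM logger: samples nvidia-smi once per interval so you can
    # compare the memory profile of two play sessions side by side.
    # Note: reports allocated VRAM, which games often pad out as a buffer.
    import csv, subprocess, time

    def sample_vram():
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=memory.used,memory.total",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout.strip().splitlines()[0]  # first GPU only (assumption)
        used_mib, total_mib = (int(x) for x in out.split(","))
        return used_mib, total_mib

    def log_session(path, seconds=600, interval=1.0):
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["t_sec", "vram_used_mib", "vram_total_mib"])
            start = time.time()
            while time.time() - start < seconds:
                used, total = sample_vram()
                writer.writerow([round(time.time() - start, 1), used, total])
                time.sleep(interval)

    # e.g. log_session("watchdogs_1080p_dlss_quality.csv", seconds=1800)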
 
I only game at 1080p so VRAM should last that little bit longer and if games start being more efficient with streaming it should be OK.

Well, PRT+ with DX12U might help, but only so long as anyone uses it. I also game at about the same res, and 1080p is going gangbusters on 8GB so far, but I've been caught out by the lower-VRAM Nvidia options before with my previous "GTX 680 triple fan overclock" (partly my fault, because I kept the card so long).

Those performance reviews aren't always very definitive. I feel people have oversimplified VRAM-related testing...

I tend to look at frame rates vs resolution these days, and try to infer ... something ... about the future from that. Doom at 4K with RT absolutely dies on its arse with less than 10GB (even a shitty 16GB card destroys the fastest 8GB card), but that might be because of Vulkan, with driver-managed, vendor-specific optimisations mitigating things until they stop caring.

I keep my components a long time (my current GPU launched 5 years ago), but it's clear that 4GB and 6GB cards are getting hammered at higher resolutions even without RT, and watching the relationship between architecture, fps and memory size, it's clear that once driver vendors and developers stop protecting 8GB cards they are going to get wrecked hard by RT.

As far as I'm aware, without specific work by the developers, the memory requirements of RT won't automatically scale with resolution the way texture resolution and frame buffer sizes do. (Excuse me, it was late.)
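
That's the rough arithmetic I had in mind: the buffers that do scale with resolution shrink when you drop render resolution, while the RT acceleration structures are sized by scene complexity, not pixels. A very rough sketch (the bytes-per-pixel figure and the fixed RT number are made-up illustrative values, not measurements):

    # Illustrative only: resolution-dependent buffers vs a scene-dependent
    # RT acceleration structure. All sizes are assumptions, not measurements.
    def pixel_buffers_mib(width, height, bytes_per_pixel=32):
        # a handful of render targets / history buffers per pixel (assumed)
        return width * height * bytes_per_pixel / 2**20

    RT_STRUCTURES_MIB = 1500  # hypothetical BVH + scratch memory, per scene

    for name, (w, h) in {"1080p": (1920, 1080),
                         "1440p": (2560, 1440),
                         "4K": (3840, 2160)}.items():
        print(f"{name}: pixel buffers ~{pixel_buffers_mib(w, h):.0f} MiB, "
              f"RT structures ~{RT_STRUCTURES_MIB} MiB")
    # The pixel-buffer part roughly quadruples from 1080p to 4K, but the RT
    # part stays the same, so lowering resolution doesn't shrink it.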
 
I don't see the current 8GB cards aging as poorly as 2GB cards did over the last console generation. Relatively speaking, 8GB is 1/2 of console memory and should age the way 4GB cards did last gen, whereas 2GB was at 1/4. Also, judging by various factors, 8GB cards will likely be sold for longer, and we aren't really going to see VRAM increases per price point next gen (certainly not the way we saw from 2012->2014->2016); it might even get worse.

There's also an interesting question that could be explored: the relevance of "higher" texture settings at lower resolutions.

That being said, I do feel at least some 8GB cards in the current lineup are very problematic given how much you are paying for them. A 3060 Ti with 8GB (or the 6600/XT) is one thing, but the 3070 Ti with 8GB is another. I'm still not entirely comfortable going with a 3060 Ti (and I'm able to get them at MSRP), but I do have some specific usage cases (including content creation) that I know will push the VRAM. I'm in the presumably minority camp that would've preferred a 192-bit bus 3060 Ti with 12GB of VRAM, with the associated perf hit, even at the same price.
 
Relatively speaking 8GB is at 1/2 console memory

It's not like the consoles are able to use the full 16GB as VRAM. A portion is used for the OS (3-4GB for last gen), then we might have some reserved for the always-recording feature/quick resume (and the PS equivalent), and probably something for game code. We won't (ever) have exact numbers, but it's probably around 10 or so GB at most for VRAM/graphics memory in games truly maxing it.
That VRAM portion has to share its bandwidth with the CPU too, whereas a desktop/laptop GPU has all of its bandwidth to itself. You potentially need less if you can swap data in and out much faster.

I suspect that a 3060 Ti's 8GB will hold up quite well, especially considering the use of DLSS, which could reduce the VRAM footprint due to the lower render resolution.
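
Putting those numbers together, a rough back-of-the-envelope (every figure here is a guess along the lines of the above, not an official spec):

    # Back-of-the-envelope console memory split (all values are guesses
    # consistent with the post above, not official figures).
    total_gb = 16.0
    os_reserved_gb = 3.5            # assumed OS reservation
    capture_quick_resume_gb = 1.0   # assumed recording / quick-resume reserve
    game_code_cpu_data_gb = 1.5     # assumed CPU-side game data

    graphics_budget_gb = (total_gb - os_reserved_gb
                          - capture_quick_resume_gb - game_code_cpu_data_gb)
    print(f"Rough graphics budget: ~{graphics_budget_gb:.1f} GB")  # ~10.0 GB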
 
It's not like the consoles are able to use the full 16GB as VRAM...

But as you say, last gen consoles had the same situation, which is why the analogy is apt: 8GB relative to this gen is roughly the same as 4GB relative to last gen.

As for whether or not it holds up, I think this is subjective and also comes down to expectations, which is why this question in general is hard to answer even without forecasting. How well did 4GB of VRAM hold up last gen? I think even with the power of hindsight you'll see mixed opinions on whether 4GB was enough or whether you needed 6GB+.

The expectation part is important, and I can see this also being very mixed depending on the person. For me personally, at the price points we are talking about, in some cases my expectation isn't just to match the console experience; it needs to be console+ in every way with no drawbacks, especially if we are moving to something like the 3070 Ti in terms of price. Low enough, say with the 3060 Ti/3060 or 6600/XT line, you can argue it doesn't have to be clearly console+, maybe.

For example, regardless of the reasons for it, we know there are already games in which 8GB of VRAM provides a different experience at 1440p compared to higher amounts. Whether or not that is ultimately "acceptable" is, granted, debatable.

With last gen, for example, I consider the 2016-gen GPUs (Pascal, and Polaris in its segment) as clearly "good enough" for the entire generation given their price segments. The 2014 GPUs I think were only arguably so.
 