Nvidia Ampere Discussion [2020-05-14]

I always set game graphics options for "0.1% lows". I find the worst case in a game during early gameplay and tweak from there.

A stutter every 5 to 10s is not playable in my opinion. Variable refresh rate can soften the blow, but how well is up for debate.
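
For anyone curious how those "lows" are actually computed, here's a minimal sketch using one common definition (average FPS over the slowest X% of frames); tools like CapFrameX and PresentMon differ in the details, and the sample frame times below are made up:

```python
import statistics

def percentile_low_fps(frametimes_ms, fraction):
    """Average FPS across the slowest `fraction` of frames."""
    worst = sorted(frametimes_ms, reverse=True)   # slowest frames first
    n = max(1, int(len(worst) * fraction))        # 0.001 -> the 0.1% lows
    return 1000.0 / statistics.mean(worst[:n])

frametimes = [16.7, 16.9, 16.5, 45.2, 16.8, 16.6, 17.0, 90.3, 16.7, 16.9]
print(f"avg FPS:  {1000.0 / statistics.mean(frametimes):.1f}")
print(f"1% low:   {percentile_low_fps(frametimes, 0.01):.1f}")
print(f"0.1% low: {percentile_low_fps(frametimes, 0.001):.1f}")
```

A couple of big hitches barely move the average but crater the 0.1% figure, which is exactly why the two numbers tell different stories.
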
How do you account for all the background processes and random stuff happening in Windows? Do you disconnect your computer from the internet and kill everything when you game? Because those are often a big reason behind 0.1% lows.
 
How do you account for all the background processes and random stuff happening in Windows? Do you disconnect your computer from the internet and kill everything when you game? Because those are often a big reason behind 0.1% lows.
Identifying those patterns and filtering them out in the written analysis (not the graphs) is one job of the reviewer.
 
Another thing I'd like to mention is that both Xbox and PS5 are designed so that they can manage their VRAM allocation far more efficiently, with granular streaming of textures off the SSD into their VRAM allocation. This will also be available on PC with DirectStorage. Most games waste huge amounts of VRAM storing mip levels they don't need, etc. Ampere and RDNA2 will both be DirectStorage ready, and that should alleviate a lot of VRAM pressure. It might take a couple of years for new games to start leveraging it, but I do think next gen's 8GB will be managed differently than last gen's 8GB.
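
As a rough back-of-the-envelope on why resident mip chains are so expensive, here's a sketch assuming an uncompressed 4K RGBA8 texture (real games use block compression, but the ratios are similar):

```python
def mip_chain_bytes(width, height, bytes_per_texel=4, from_level=0):
    """Bytes for the mip levels from `from_level` down to 1x1."""
    total, level = 0, 0
    while width >= 1 and height >= 1:
        if level >= from_level:
            total += width * height * bytes_per_texel
        width, height, level = width // 2, height // 2, level + 1
    return total

full = mip_chain_bytes(4096, 4096)         # whole chain resident in VRAM
streamed = mip_chain_bytes(4096, 4096, 2)  # mip0/mip1 left on SSD until needed
print(f"full chain: {full / 2**20:.1f} MiB")
print(f"mip2+ only: {streamed / 2**20:.2f} MiB (~{100 * streamed / full:.0f}% of the chain)")
```

Mip0 alone is ~75% of a chain's footprint, so streaming the top levels on demand is where almost all of the savings live.
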
 
Another thing I'd like to mention is that both Xbox and PS5 are designed so that they can manage their VRAM allocation far more efficiently, with granular streaming of textures off the SSD into their VRAM allocation. This will also be available on PC with DirectStorage. Most games waste huge amounts of VRAM storing mip levels they don't need, etc. Ampere and RDNA2 will both be DirectStorage ready, and that should alleviate a lot of VRAM pressure. It might take a couple of years for new games to start leveraging it, but I do think next gen's 8GB will be managed differently than last gen's 8GB.

But in terms of practical impact, it should be noted that it's mostly edge cases/outliers this gen that can be pushed into issues with 8GB. I'm sure there will also be outliers/edge cases next gen that may have problems even with 16GB+, regardless of what tools are available.
 
What you really want to see is a frametime graph instead of a bunch of bar graphs that show you a number without context. At the very least, I want reviewers to look at frametime graphs and then show me when their results had spikes.

https://www.capframex.com/blog/post/The challenge of displaying performance metrics as FPS
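
Something like this is all it takes to make the difference visible; a sketch with synthetic frame times and matplotlib, where the spike cadence and the 30 fps reference line are made-up illustrative choices:

```python
import random
import matplotlib.pyplot as plt

random.seed(1)
frametimes = [16.7 + random.uniform(-1, 1) for _ in range(2000)]
for i in range(0, 2000, 400):                 # inject periodic stutter spikes
    frametimes[i] = 70.0

plt.plot(frametimes, linewidth=0.6)
plt.axhline(33.3, color="red", linestyle="--", label="30 fps budget")
plt.xlabel("frame #")
plt.ylabel("frame time (ms)")
plt.legend()
plt.title("Frametime graph: the spikes an average-FPS bar hides")
plt.show()
```

The averaged bar for this run would look identical to a perfectly smooth one; the graph makes the periodic hitching obvious at a glance.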

Digital Foundry does frame time graphs as well, and while they're not as easy to read as the PCGH charts, they do hint at a similar situation in some games. Interestingly, though, the 3070 has massive frame time spikes in Metro Exodus which aren't present on the 2070, so it's possible something else is at play here. Perhaps a driver issue.
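
Spotting spikes like those in a raw capture is easy to automate. A sketch, where the 3x-median threshold is an arbitrary illustrative choice:

```python
import statistics

def find_spikes(frametimes_ms, factor=3.0):
    """Return (frame index, ms) for frames far above the typical frame time."""
    median = statistics.median(frametimes_ms)
    return [(i, t) for i, t in enumerate(frametimes_ms) if t > factor * median]

trace = [16.7] * 50 + [140.0] + [16.7] * 50    # one big hitch in a smooth run
for frame, ms in find_spikes(trace):
    print(f"frame {frame}: {ms:.1f} ms (median {statistics.median(trace):.1f} ms)")
```
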
 
But in terms of practical impact, it should be noted that it's mostly edge cases/outliers this gen that can be pushed into issues with 8GB. I'm sure there will also be outliers/edge cases next gen that may have problems even with 16GB+, regardless of what tools are available.

Maybe. I think if you're only loading mip0 when you actually need it instead of keeping it in VRAM, you're probably going to save a ton of memory. On top of that, if you're tiling, you're only storing the part of mip0 that you actually need. PC ultra-type settings tend to bloat VRAM because you end up with a pile of mip levels you never even sample.
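
A quick sketch of how much the tiling part alone buys. The 64 KiB tile size matches D3D12/Vulkan sparse textures, but the 15% visibility figure is a made-up assumption:

```python
TILE_BYTES = 64 * 1024                       # standard D3D12/Vulkan tile size
mip0_bytes = 4096 * 4096 * 4                 # uncompressed RGBA8 4K texture
tiles_total = mip0_bytes // TILE_BYTES
visible_fraction = 0.15                      # assume 15% of tiles are sampled
resident = int(tiles_total * visible_fraction) * TILE_BYTES
print(f"mip0 full:     {mip0_bytes / 2**20:.1f} MiB in {tiles_total} tiles")
print(f"mip0 resident: {resident / 2**20:.1f} MiB keeping only visible tiles")
```
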
 
0.1% low performance, which is missing from pretty much every review, shows that the 3070 isn't a 2080 Ti equivalent in some key ways.

It's not consistently reported. My go-to sites are AnandTech, TechSpot and TechPowerUp, and of those three only TechSpot reports 1% numbers.
 
Another thing I'd like to mention is that both Xbox and PS5 are designed so that they can manage their VRAM allocation far more efficiently, with granular streaming of textures off the SSD into their VRAM allocation. This will also be available on PC with DirectStorage. Most games waste huge amounts of VRAM storing mip levels they don't need, etc. Ampere and RDNA2 will both be DirectStorage ready, and that should alleviate a lot of VRAM pressure. It might take a couple of years for new games to start leveraging it, but I do think next gen's 8GB will be managed differently than last gen's 8GB.
Yes, 16GB may turn out to be the most memory we'll ever "need" on a graphics card.

I've been linking to Bang4BuckPC Gamer's YouTube videos precisely because we can see that many games, on a 3090, are not "allocating-oh-I-meant-using" 24GB of VRAM, even at 8K resolutions and/or with texture packs. Somewhere between 10 and 24GB is the sweet spot for all games for the rest of time. That's my guess.
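
That allocated-vs-used distinction is baked into the telemetry itself: the standard counters that overlays read report what has been allocated, not what the game actually samples each frame. A sketch with the pynvml package (assumes an NVIDIA GPU and `pip install pynvml`):

```python
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)          # first GPU
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"allocated: {mem.used / 2**30:.1f} GiB of {mem.total / 2**30:.1f} GiB")
# Note: this, like most overlays, reports allocations. A game showing
# 20 GiB here may still only touch a fraction of it per frame.
pynvml.nvmlShutdown()
```
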

It's not consistently reported. My go-to sites are AnandTech, TechSpot and TechPowerUp, and of those three only TechSpot reports 1% numbers.
TechSpot, with the 3000 series, has at least made a point of showing that Doom Eternal at Nightmare settings causes performance problems, contrasting that with Nightmare settings combined with Ultra textures, which gives a large performance boost on cards with less VRAM.

Despite the corner cases, I think 3070 is a very safe 1440p gaming purchase. I'll happily recommend it to my 1440p gamer friends. Navi 22 looks like it's months away, so 3070 is a slam dunk. They wouldn't spend 3080 money, anyway, so you could say it's not a difficult bit of advice for me to give!
 
Maybe. I think if you're only loading mip0 when you actually need it instead of keeping it in VRAM, you're probably going to save a ton of memory. On top of that, if you're tiling, you're only storing the part of mip0 that you actually need. PC ultra-type settings tend to bloat VRAM because you end up with a pile of mip levels you never even sample.

It's often the case with outliers/edge cases, though, that there are "questions" about the software decisions, and I don't see that changing in practice. So ultimately, if you're a stickler for covering as many cases as possible, it's likely going to have to be via brute VRAM amount.
 
Ampere isn’t a gaming architecture. It doesn’t scale well at 1440p ...

Apparently no one is making gaming architectures anymore.
RDNA1 and 2 seem pretty gaming-focused (i.e. they don't seem particularly better at anything other than gaming).
Ampere just seems focused on compute throughput instead.
 
Are you able to attach a profiler to most games? I haven't tried. I would have thought serious profiling tools would require an executable compiled with debug code. Or is it as simple as turning on performance counters in drivers, or using developer drivers?

It's even simpler than that, but with restrictions. If you have a Turing or Ampere GPU you can just download Nsight and profile any DX12 or Vulkan application. So it's easy, but limited to those configs.

Worked fine for me on a 1650 Ti and TimeSpy.

[attachment: nsight.png]
 
It's even simpler than that, but with restrictions. If you have a Turing or Ampere GPU you can just download Nsight and profile any DX12 or Vulkan application. So it's easy, but limited to those configs.

Worked fine for me on a 1650 Ti and TimeSpy.

[attachment: nsight.png]

Wow. I just assumed that wouldn't work.
 
"Max settings" (geometry LOD is not improved with "Extra Details" setting) with best ray tracing reflections and DLSS set to quality, less than 50fps outdoors:


Just over 10GB of VRAM.

It's a decent looking game but the graphics in no way justify that level of performance. Not even close.
 
It's a decent looking game but the graphics in no way justify that level of performance. Not even close.

I was thinking the same thing. You can sometimes blame expansive open worlds for poor performance, but that starting sequence in a confined tunnel didn't show visuals that justify struggling to hit 60 fps on a 3090.
 