Nvidia Ampere Discussion [2020-05-14]

You may not agree that the additional power is being used in an efficient way, but it doesn't change the amount of additional power that is present. In some of the examples shown, the 980 is running at double the framerate of the current gen consoles, even with improved graphical settings. 60fps vs 30fps is considered a significant improvement by some. And while that video limits the resolution to 1080p, it's likely the 980 could be matching or exceeding the consoles' performance at 1440p or more.

Ultimately, the value either of us places on the improvements reaped by the additional power is irrelevant. My point was that a 4GB GPU is clearly exceeding (seemingly with ease) the performance of an 8GB console. So that's probably quite relevant to the amount of VRAM Ampere is shipping with today.
Double the framerate in the titles using less modern rendering. In Doom, COD:MW, Control, RDR 2 and HZD you get a very similar experience to the base PS4. This is also on a GPU with nearly 3.5x the compute of the PS4 GPU at its 1.5GHz overclock. The VRAM situation is hard to foresee. Current base consoles had an abundance of VRAM relative to their compute and intended resolution. Next gen consoles have a much lower relative amount. How effective their SSD streaming proves to be will heavily influence whether 8GB becomes an issue, I think.
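
(A rough sanity check on that compute ratio, assuming the 980's usual 2048 shaders at 2 FLOPs per clock; nominal figures, not measured:)

gtx980_tflops = 2048 * 2 * 1.5e9 / 1e12   # 2048 shaders x 2 FLOPs/clock x 1.5 GHz = ~6.1 TF
ps4_tflops = 1.84                          # PS4 GPU's quoted FP32 throughput
print(gtx980_tflops / ps4_tflops)          # ~3.3x, in line with the "nearly 3.5x" above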
 

Doom Eternal: 980GTX - fixed 1080p, High settings, 91fps average / PS4 - Dynamic 1080p dropping to near 720p, Medium settings, 60fps kinda fixed

COD:MW: 980GTX - fixed 1080p, High settings, 89fps average / PS4 - Dynamic 1080p dropping to 960x1080, 'Normal' settings, 60fps kinda fixed

Control: 980GTX - fixed 1080p, High settings, 45fps average / PS4 - 900p, sub-'Normal' settings, very shaky 30fps with drops to 10(!) fps.

All console information taken from DF face offs. I didn't bother checking the rest of the games as the trend is abundantly clear from the first 3 you listed above.
 

Doom and COD would have framerates not much lower if unlocked on PS4. From the available info the dynamic games spend most of the time at 1080p and 60fps. Control was patched and PS4 performance improved a lot.

https://www.nvidia.com/en-us/geforce/guides/control-graphics-and-performance-guide/
https://www.computerbase.de/2020-03/doom-eternal-benchmark-test/

None of those games offer much visual scaling above consoles outside of the RTX implementations. That applies to performance costs as well. It's hard for me to see how anyone could objectively consider the 980 experience to be more than mildly improved in these examples. More so when you consider its 6.2 TFLOPS against 1.84.
 
Doom and COD would have framerates not much lower if unlocked on PS4.

I assume you have evidence to back this up?

From the available info the dynamic games spend most of the time at 1080p and 60fps

As above, what evidence do you have of this, and how does it compare in the most taxing scenes to a locked 1080p at whatever framerate the 980 is pushing?

Control was patched and PS4 performance improved a lot.

Was the resolution increased from 900p? The settings increased from sub-normal? Does the framerate now match the 980's average of 45?

None of those games offer much visual scaling above consoles outside of the RTX implementations.

You're confusing the available performance with your personal interpretation of how well it's used.

That applies to performance costs as well. It's hard for me to see how anyone could objectively consider the 980 experience to be more than mildly improved in these examples. More so when you consider its 6.2 TFLOPS against 1.84.

This isn't about how much you value the improvements. It's about how much those improvements cost in terms of overall GPU performance.

Incidentally, while the 980 may have significantly more shader performance than the PS4, the memory bandwidth comparison stands at 224GB/s vs 176GB/s, so I think it's a little disingenuous to make out that the 980 is far less efficient because it doesn't achieve 4x the PS4's performance. Based on the above it looks to be achieving near double the performance in the latest games with only half the VRAM.... For only 27% more bandwidth I'd say that's rather good.
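
(For reference, the ratios in play here, simple arithmetic on the figures quoted above:)

bandwidth_ratio = 224 / 176    # GTX 980 vs PS4 memory bandwidth: ~1.27, i.e. ~27% more
compute_ratio = 6.2 / 1.84     # quoted FP32 throughput: ~3.4x
print(bandwidth_ratio, compute_ratio)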
 
I assume you have evidence to back this up?



As above, what evidence do you have of this, and how does it compare in the most taxing scenes to a locked 1080p at whatever framerate the 980 is pushing?

Doom Eternal fps stats
COD fps stats

Both have headroom well above an avg 60 if uncapped. We also don't know if that GTX 980 benchmark tested the heaviest scenes. Based on the little bit of footage provided it points to a clear no.


Was the resolution increased from 900p? The settings increased from sub-normal? Does the framerate now match the 980's average of 45?

My mistake, I thought Control was 1080p. Where are you seeing it uses such low settings?

You're confusing the available performance with your personal interpretation of how well it's used.


This isn't about how much you value the improvements. It's about how much those improvements cost in terms of overall GPU performance.

Incidentally, while the 980 may have significantly more shader performance than the PS4, the memory bandwidth comparison stands at 224GB/s vs 176GB/s, so I think it's a little disingenuous to make out that the 980 is far less efficient because it doesn't achieve 4x the PS4's performance. Based on the above it looks to be achieving near double the performance in the latest games with only half the VRAM.... For only 27% more bandwidth I'd say that's rather good.
FWIW the 980 scales almost linearly with core overclocking. It was not particularly bandwidth limited. The PS4 will also never have 176GB/s of bandwidth available to its GPU.
 
Or buy a 3090? You can't have your cake and eat it too. You want 16GB of VRAM for 8GB money.
I think a 20GB 3080 will do the job, at around $900.

Or a 16GB 3070 at around $600. I can't decide whether that will be "adequate" at 4K. It's meant to be a 1440p gaming card. So console gamers will be playing at a "higher resolution" on their 4K tellies?...

Some people simply prefer gaming on a PC
Yeah I'm one of those.

I plan to try a PS5 hooked up to my 77" OLED as an experiment in sofa gaming though. Never owned a gaming console (or handheld). I hate console controllers, so I expect the experiment to be painful. But, watching 4K playback of games (courtesy of YouTube) on the telly is "transformative".

The console experiment will be in competition with a 48" OLED on my desk though (hoping to buy soon). So...

So I do think it's a valid concern to understand whether a GPU bought today for say £450 is going to be able to provide a minimum of a console-level experience for the time that I expect to own it. The real life example of that for me is the RTX 3070. I expect that in non-RT games it should offer a moderately better experience than consoles for the next couple of years at least.
16GB 3070 should be solid, but I wonder about 4K on that.

while offering a much better experience in RT
AAA games ported from console might get better reflection quality or shadow resolution/draw-distance, I suppose.

I don't buy in to the RDNA 2 will be crap at ray tracing theory though.

and DLSS enabled games.
We may see that MLSS (machine learning super-sampling) in console games becomes a thing. Otherwise I suppose it's a question of how much money NVidia gives to companies porting from console.

Its advantage will likely start to tail off within 4 years but I'd likely be replacing it at that point anyway. If however I found that its 8GB meant I had to reduce some settings compared with the console baseline within that timeframe then I'd be pretty disappointed.
If you're spending $500 on a 3070, then the whole PC will be ~$1100 or more. $100 extra on 16GB card instead of an 8GB card seems like good insurance to me.

I'd personally rather pay an even higher premium for a clearly better experience.
I agree. I'm also biased because I like to fit and forget.

The weird thing with this coming generation is that PC gamers are only about 2% at 3840x2160:

https://store.steampowered.com/hwsurvey/Steam-Hardware-Software-Survey-Welcome-to-Steam?platform=pc
 

I don't find that odd at all. There are diminishing returns with 4K, especially with computer screens, unless someone has a 42 inch+ one on their desk. Plus the big thing of this generation IMO is HDR. 4K without HDR is not nearly as impressive. And 4K PC screens with HDR are rare and very expensive. As for me I'm going back to PC master race from being mostly on a PS4 this generation. The reason being that I'm pretty much invested in VR (my PS4 library is 90% VR), which is poised to evolve much quicker in that medium. As for non-VR games I'm going to be streaming to the TV, as I much prefer to be on my couch than at my desk when playing. Especially after spending all day at the desk working from home in this environment.
 
Or a 16GB 3070 at around $600. I can't decide whether that will be "adequate" at 4K. It's meant to be a 1440p gaming card. So console gamers will be playing at a "higher resolution" on their 4K tellies?...

Since a 3070 is more powerful it will be more of a 4K GPU than the consoles, especially at 16GB. I'm not factoring in DLSS or RT.

A 3060 will be a match and then some in normal rendering.
 

If the 3070 is ~2080 Ti performance it won’t be enough for 4K max settings in recent games.
 
A console+ experience throughout the lifespan of the PS5/XSX while also factoring in value just isn't realistic at this point for a PC graphics card. Consoles don't need margins (and may sell at an accounting loss if not a material loss) and they have greater economies of scale. Trying to match that at the same "tech" level (as they are releasing at the same time, so no benefit of industry forward progress) in the PC space just isn't realistic. The Covid economy, in terms of demand and supply effects, will likely exacerbate the issue currently. If we look at both the last gen and the generation prior to that, it wasn't until the post-release cycle of the consoles that we really started having console+ capabilities at good value.

There's also going to be a resolution jump this generation (the shift to 4K) that will compound the issue on the PC side. Due to the diminishing visual returns I'd expect console games to be pretty liberal in terms of resolution scaling. The problem is that the same mindset and approach won't be as universal on the PC side unless something new develops. So in practice at the moment the situation will be 4K all the way or nothing. While DLSS (or similar alternatives) does offer a potential solution to address this issue, it's still going to face the challenge of requiring developer intervention, which in reality simply won't be as ubiquitous as it would be when developing for the fixed console platforms.

There's also a general industry slowdown in terms of progress. We simply aren't going to be seeing >2x improvements every 18 months (roughly) for the same cost like we did over 10 years ago. This likely means PC components won't have the same "pace" to outgrow the consoles from a cost perspective compared to the past.

We can see this with the memory slowdown. Even with the next release we aren't likely to see (unless something drastic happens) a halving of the cost per bit of memory. This plateauing of cost reductions is also why we're seeing GPUs struggle now to increase VRAM at the pace of the more distant past.

But it is still worth remembering how past GPU generations stacked up against console generations. Remember Maxwell was 1+ years after (the 980 Ti at 18 months), and Pascal was almost 3 years after the PS4/Xbox One. I'd say it wasn't really until Pascal's generation that we had basically "overkill" on the PC side versus consoles. With that in mind it's highly unrealistic to expect current GPUs to reach that this soon.

Which coincidentally means that it will likely do fine even with 8GB.

I think it depends, which is also why I think there's an issue when discussing this stuff if we use open-ended subjective qualifiers like "is it enough?" instead of a more concrete objective measure.

The problem is that the single largest setting that impacts VRAM is going to be texture quality and related settings, which while heavy on VRAM requirements are also very light on overall processing demands. With the baseline moving up to 4K, if the "max" texture settings are then developed higher with those resolutions in mind, they could balloon.

Which then leads to a sticking point in what someone considers "enough." Texture settings seem to be very sticky for a significant number of people; they want them maxed, even if that means 4K (or 8K) class textures intended for 4K+ resolutions while they are playing at 1080p. At the same time some might not care.

Which is why it's an issue when we start using terms like "good enough." As someone's GTX 960 4GB VRAM example above shows, it's interesting that even without future forecasting we can't get universal agreement on what was good enough for this generation, due to highly varied subjective criteria.
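
(To put rough numbers on the texture point above, a minimal sketch assuming BC7-style compression at 1 byte per texel and a full mip chain adding about a third; the sizes are illustrative only:)

def texture_mb(width, height, bytes_per_texel=1.0, mip_overhead=4/3):
    # approximate VRAM footprint of one texture map, mip chain included
    return width * height * bytes_per_texel * mip_overhead / (1024 ** 2)

print(texture_mb(2048, 2048))   # ~5.3 MB per 2K map
print(texture_mb(4096, 4096))   # ~21.3 MB per 4K map
# A few hundred unique 4K materials (albedo/normal/roughness each) adds up to
# multiple GB of VRAM while costing almost nothing in GPU compute.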
 
If the 3070 is ~2080 Ti performance it won’t be enough for 4K max settings in recent games.
I would tend to agree, but we don't have a good idea why 2080Ti is "slow"...

I suspect some games are "lost causes", e.g. Gears 5 from:

https://www.techspot.com/review/2105-geforce-rtx-3090/

so right now there may be a pain point in terms of a subset of graphics engines which generally don't port well from console to PC. Apart from streaming textures continuously from storage, we might expect AAA console games to be more like straight DX12 or Vulkan ports to PC as time goes by.

Also, PCMR likes to boast about texture packs, which typically demand lots of VRAM. This is something I know pretty much nothing about, the only game I've ever texture-packed is Deus Ex (Revision, currently playing)... Will texture packs die off for "new console games" henceforth?
 
The problem is that the single largest setting that impacts VRAM is going to be texture quality and related settings
Will it though, with next gen moving to ultra-fast SSD streaming and things like RTX I/O / DirectStorage? The buffers needed for 4K+RT can only be shrunk so far; the rest can vary wildly depending on the implementation and what the engine is doing with data.
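
(A toy illustration of that streaming argument, nothing to do with the actual DirectStorage/RTX IO APIs: if the engine only keeps resident the mip levels it currently needs and streams the rest from SSD on demand, the resident set per texture shrinks dramatically:)

def resident_mb(top_mip_mb, highest_mip_needed, mip_count=8):
    # each mip level is a quarter the size of the one above it
    return sum(top_mip_mb / (4 ** m) for m in range(highest_mip_needed, mip_count))

print(resident_mb(16.0, 0))   # whole chain resident: ~21.3 MB for a 4K BC7 map
print(resident_mb(16.0, 2))   # distant object, only mip 2 and below needed: ~1.3 MB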
 
There's also a general industry slowdown in terms of progress. We simply aren't going to be seeing >2x improvements every 18 months (roughly) for the same cost like we did over 10 years ago. This likely means PC components won't have the same "pace" to outgrow the consoles from a cost perspective compared to the past.

On the GPU side of things, it's looking like at launch AMD's own GPUs will be more than twice as fast as the consoles (around 25TF vs 10 for the PS5), with more memory to play with as well at 16GB just for VRAM. I'm talking AMD and normal rendering there. That's a rather huge difference.
Taking NV into account, they are at least in the same area as AMD regarding raw performance, probably faster at the highest end of products.
Then we can take ray tracing into account, the biggest and most important graphical difference compared to previous generations; just about every next-gen game seems to be using it. Here the PC is already ahead too, with the consoles said to be on Turing level (a 2060 at that) in that regard.
DLSS should not be forgotten IMO, as it really does a lot for performance if you can enable 4K gaming with 1080p/1440p requirements.
That's now, at the very beginning, before the consoles even launch.

Zen 3 is here and it has a rather large IPC improvement, so PCs are already a generation ahead there; Intel will answer and AMD will try to compete again. Don't forget much higher clocks too, which are still important, especially in higher refresh rate gaming.

Regarding storage and IO, consoles might have an advantage, probably the PS5 there. But then, if we believe that MS/NV didn't outright lie (corruption and other conspiracy theories aside, that's not the topic for that), they claim 14GB/s, at least very competitive with whatever the consoles have. The new optimized DMC game loads about as fast on PC; not bad.
I think Optane is more of a latency monster than about raw speeds. Also don't forget PCs usually have something called system RAM, which is still faster than any SSD in about every way.

With the Xbox basically being a PC, as MS sees it, software implementations also seem much better than, say, ten years ago. VA and DX12/Vulkan made their entrance and I assume will improve even more. Seeing how well old PC hardware from 2012/2013 has held up, much better than during the 6th and 7th generations, I don't see this declining.
A 3060-equipped PC, with matching CPU and other components, will tag along just fine. Hence the reason Alex is planning to use a 3060 as his PS5-comparison setup.

So yeah, the gap is already rather big for this point in the time frame, and things will only evolve over time, especially with AMD and NV competing so much, Intel joining the party and MS seeing the PC as more important than before. Things are moving fast; by 2021/2022 we'll see even faster RT and improved DLSS solutions, besides increased normal render capabilities.
AMD jumped from the 10TF 5700XT to 25TF in one generation. NV went from the 16TF 2080 Ti to the 36TF 3090. GDDR6 to GDDR6X and close to 900GB/s of bandwidth. Zen 3 has huge IPC improvements. RT improved what, 200%? Stagnating hardware, nah ;)
Prices will go down obviously; fresh new hardware has never come cheap in the PC space. We have at least seen NV going the better route with Ampere. That, and consoles have gotten more expensive too, even for the games themselves.

Will texture packs die off for "new console games" henceforth?

Modding on PC will probably never die off. Some games are transformed a lot by it: GTA, RTX Quake, Skyrim, even ray tracing for Vice City has been done. Texture packs fall into that same category for the most part; they often go hand in hand with modding/improving graphics for games. It's one of the reasons people play on PC, aside from the many other reasons.
 
Doom Eternal fps stats
COD fps stats

Both have headroom well above an avg 60 if uncapped. We also don't know if that GTX 980 benchmark tested the heaviest scenes. Based on the little bit of footage provided it points to a clear no.

They're really interesting stats. I didn't realise that kind of thing was available for consoles. I will draw your attention though to those 1% and 0.1% lows, which give a reasonable indication of how much headroom there is.

980GTX
Doom 1%: 65 fps
Doom 0.1%: 44 fps

PS4 (excluding more demanding final scene whatever that is)
Doom 1%: 49 fps
Doom 0.1%: not given, but in the 83.33ms+ range, which equates to no more than 12fps I believe.
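
(Quick frametime-to-framerate conversion for those figures:)

print(1000 / 83.33)   # ~12 fps - the PS4's worst 0.1% of frames
print(1000 / 44)      # ~22.7 ms per frame for the 980's 44fps 0.1% low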

Also bear in mind that we're comparing fixed 1080p with dynamic resolution here, so the performance deficit on the PS4 is likely even larger since it'll be running at a lower resolution than the 980GTX. And then there's the fact that the 980 is running at higher settings too.

Doing the same comparison with COD:MW is even less favorable to the PS4 (71 vs 46 fps for 1% lows, again at higher resolution and settings).

My mistake, I thought Control was 1080p. Where are you seeing it uses such low settings?

I took everything from Digital Foundry face offs and performance analysis vids/articles.

I think a 20GB 3080 will do the job, at around $900.

Or a 16GB 3070 at around $600. I can't decide whether that will be "adequate" at 4K. It's meant to be a 1440p gaming card. So console gamers will be playing at a "higher resolution" on their 4K tellies?...

Presumably at lower settings though, much like this generation, i.e. you can play many games on an Xbox One X at 4K which even a GTX 1080 can't manage at that resolution on the PC side when everything is maxed out. But the One X is usually running at Xbox One settings with occasionally some minor enhancements, so reducing settings accordingly will allow you to reach 4K on the GTX 1080 and lower.

I expect many games next gen will compromise graphical settings on console vs the PC's maximums in order to achieve the check box of 4K.

Yeah I'm one of those.

I plan to try a PS5 hooked up to my 77" OLED as an experiment in sofa gaming though. Never owned a gaming console (or handheld). I hate console controllers, so I expect the experiment to be painful. But, watching 4K playback of games (courtesy of YouTube) on the telly is "transformative".

The console experiment will be in competition with a 48" OLED on my desk though (hoping to buy soon). So...

Couch gaming is great for lots of genres when I can get access to the TV! Most of the time I'm limited to the monitor though, even though it's in the lounge with the TV, as the wife and kids are using the TV.

I assume you're not setup to run the PC to the main TV as a secondary output? It's a nice option to have for sure.

We may see that MLSS (machine learning super-sampling) in console games becomes a thing. Otherwise I suppose it's a question of how much money NVidia gives to companies porting from console.

This will be interesting to follow. If Nvidia somehow manages to make DLSS game agnostic and applicable through the control panel (not requiring specific developer support) on the PC then it would be a major game changer. They obviously realise that and so must be working on it, but I've no idea whether there is some kind of fundamental restriction. The consoles' main hurdle for MLSS is performance IMO. DLSS isn't cheap - Nvidia just makes it look that way with the Tensor cores. The XSX apparently has only half the ML capability of an RTX 2060 so any decent implementation of it in the console space may be impractical. Of course there are other upscaling options (checkerboard, the XSS hardware upscaler etc...) which I expect to see heavily used this generation.
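
(For a sense of why the upscaling approach is so attractive, the raw pixel counts, ignoring the cost of the upscale pass itself:)

native_4k = 3840 * 2160
for w, h in [(1920, 1080), (2560, 1440)]:
    # fraction of 4K's pixels actually shaded at each internal resolution
    print(w, h, round(w * h / native_4k, 2))   # 1080p ~0.25, 1440p ~0.44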

If you're spending $500 on a 3070, then the whole PC will be ~$1100 or more. $100 extra on 16GB card instead of an 8GB card seems like good insurance to me.

I agree in principle, but I don't spend that $1100 all at the same time, so the difference between $500 and $600 seems relatively bigger at the time of purchase. My concern would be in spending the extra, but then not needing that extra RAM before I upgrade to say a 5070 in 4 years that would likely come with 16-32GB as standard. That said, given the choice I'll probably get a 16GB version because I hate having to turn any setting down!
 