Nvidia Ampere Discussion [2020-05-14]

And why is that?
Multiple reasons. Since its launch a month ago, the 3080 has not been listed as available in Germany by any of the large etailers, so I cannot see the availability situation being remedied anytime soon. Four weeks should, under normal circumstances, be enough for supply shipments to arrive in Europe (by ship, and in greater numbers than the first, expensively air-freighted batch).

Then, with Radeon 6000, I firmly believe it will have more than 10 GBytes of graphics memory, and apart from actual performance, this will make the 10 GByte look even more meagre than it already does, given that the 4-year-old 1080 Ti had the same amount. Then there are the gaming consoles, which will have more memory starting from November. Games developed for them will thus use that memory. A 16 GByte 3070 and a 20 GByte 3080 will surface as well.

Last but not least, there's Cyberpunk 2077. If, in another month's time, there still are no 3080s on shelves in decent quantities (so that prices return to SEP levels), I will be into the game - raytraced or not - and I will probably not start from scratch should I buy another card. If I finish the game before 3080s are available, I might rethink whether I even need that much graphics power at that time.

And there's Folding@Home. I've been running a 3080 for more than just one or two work units, and I haven't seen one that gave me more than 5.5M PPD est.; that's a laughable increase over the 2080 Ti given the asking price and the massive FP32 throughput. I'm beginning to believe F@H might be limited by the SFUs.
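As a back-of-the-envelope check (peak FP32 specs quoted from memory, and the 2080 Ti PPD figure is my own rough assumption, so take it with salt):

```python
# Does F@H PPD scale with peak FP32 throughput? All numbers are estimates.
tflops_3080 = 29.8     # RTX 3080 peak FP32, TFLOPS (spec sheet, from memory)
tflops_2080ti = 13.4   # RTX 2080 Ti peak FP32, TFLOPS (spec sheet, from memory)
ppd_3080 = 5.5e6       # best estimate I've seen on my own card
ppd_2080ti = 4.5e6     # assumed typical 2080 Ti figure

print(f"FP32 ratio: {tflops_3080 / tflops_2080ti:.2f}x")  # ~2.22x
print(f"PPD ratio:  {ppd_3080 / ppd_2080ti:.2f}x")        # ~1.22x
```

If PPD tracked raw FP32, you'd expect well north of 10M PPD; landing around 5.5M is exactly why I suspect a non-FP32 bottleneck like the SFUs.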

Depending on final availability (some rumors say it'll be well into 2021), the time to a refresh or to next-gen might get really short.

But hey, that's just me. Your mileage may vary wildly.
 
Not really. Clock ranges were essentially the same after the fix.

Tell me, did you have the same issue with a 2080 Ti when you raised the clock curve? With Turing you had a 10% gap between the maximum (over)clock curve and what Nvidia used. This time Nvidia has a 0% gap between the maximum (over)clock curve and the standard clock curve.

Nvidia wanted a small gap this time between the safe zone and what's maximally possible, and that's where my theory kicks in.
You now have two choices:
1. Nvidia didn't test enough cards to find this issue.
2. Nvidia tested enough cards and their internal tests verified the clock curve, but in real production the chip range from good to bad is worse than Nvidia expected.

This is a hardware verification issue and not a "software" issue; the software only works around it. The main problem is that Nvidia underestimated how bad the yields and chip quality at Samsung are.
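To make the theory concrete, here's a toy model (every number below is invented purely for illustration, not a real spec):

```python
# Toy model of the "clock curve gap" theory above; all numbers are invented.

def stable_limit_mhz(chip_quality: float, best_case_mhz: float = 2000.0) -> float:
    """Maximum stable clock for a chip; quality runs 0.0 (worst) to 1.0 (best)."""
    return best_case_mhz - (1.0 - chip_quality) * 100.0  # worst chips lose 100 MHz

def card_crashes(stock_boost_mhz: float, chip_quality: float, gap_pct: float) -> bool:
    """Crash if the shipped curve, minus its safety gap, exceeds the chip's limit."""
    effective_mhz = stock_boost_mhz * (1.0 - gap_pct / 100.0)
    return effective_mhz > stable_limit_mhz(chip_quality)

# A below-average chip under a Turing-like 10% gap vs an Ampere-like 0% gap:
for gap in (10.0, 0.0):
    print(f"gap {gap:4.1f}%: crashes -> {card_crashes(2000.0, 0.2, gap)}")
# 10% gap absorbs the chip-to-chip spread; 0% gap lets bad chips slip through.
```

With a 10% gap, even a poor chip sits below its stable limit; with a 0% gap, whether a card crashes depends entirely on whether Nvidia's validation sample covered the real production spread.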

They lowered the clock range at the upper limit. You can see a diagram here:
https://www.hardwareluxx.de/index.p...force-rtx-3080-mit-neuem-treiber-im-test.html


[Image: ASUS RTX 3080 voltage curve]



I think it's been well documented by now that the issue was caused by sudden and short-lived spikes in clock speed to ~2050MHz.
And there were no clock speed spikes. There were power spikes but no clock speed spikes.

You should read Igor'sLab's test of what really happened; he also said that Nvidia has an issue with chip quality, which is why cards with good capacitors are crashing too.

https://www.igorslab.de/wundertreib...-gleich-noch-die-netzteile-verschont-analyse/
 
I don't know about the 2021 timeframe for real availability, but I do agree this seems like an overly rushed launch to beat AMD's November deadline.
The first mass-production silicon (that I could find) is date-coded the last week of June. They put the wheels in motion in July, countdown in August, launch in September.

Likely some sort of fab excursion occurred, so we are only seeing the first couple of months of wafers. I would be surprised if they didn't increase wafer starts after the excursion, and we should start seeing real shipments before the end of the year, late November/December.
 
Multiple reasons. Since its launch a month ago, the 3080 has not been listed as available in Germany by any of the large etailers, so I cannot see the availability situation being remedied anytime soon. Four weeks should, under normal circumstances, be enough for supply shipments to arrive in Europe (by ship, and in greater numbers than the first, expensively air-freighted batch).
Lack of availability won't do anything to a product's appeal. It could even keep it at a record high for longer.

Then, with Radeon 6000, I firmly believe it will have more than 10 GBytes of graphics memory, and apart from actual performance, this will make the 10 GByte look even more meagre than it already does, given that the 4-year-old 1080 Ti had the same amount.
It would only lessen the 3080's appeal if it is actually faster with its >10GB of memory, especially in the modes where such a memory size matters most - meaning 4K+RT.
I do agree though that there are a lot of weird people who buy GPUs solely based on how much VRAM they have, and this may impact the 3080's appeal somewhat.

Then there are the gaming consoles, which will have more memory starting from November. Games developed for them will thus use that memory.
They'll have more total memory than a PC with 3080 in it? Somehow I doubt that, unless there are users who put their 3080s into a PC with 4GB RAM.

A 16 GByte 3070 and a 20 GByte 3080 will surface as well.
So... the 3080's appeal will be lower because a 20GB 3080 will appear?
The 3070 is an interesting one though, as I fully expect that many people will go with a 3070/8GB instead of a 3080/10GB once the former hits the market. So these may help with lowering demand for the 3080, sure.

Last but not least, there's Cyberpunk 2077. If, in another month's time, there still are no 3080s on shelves in decent quantities (so that prices return to SEP levels), I will be into the game - raytraced or not - and I will probably not start from scratch should I buy another card. If I finish the game before 3080s are available, I might rethink whether I even need that much graphics power at that time.
There is no CP2077 anywhere right now, and it doesn't seem to have any effect on the 3080's appeal. There will be other demanding games with RT after CP2077 too. People don't buy GPUs to play one game.

And there's Folding@Home. I've been running a 3080 for more than just one or two work units, and I haven't seen one that gave me more than 5.5M PPD est.; that's a laughable increase over the 2080 Ti given the asking price and the massive FP32 throughput. I'm beginning to believe F@H might be limited by the SFUs.
Yeah, this is a big one. 3080 is done for.

Depending on final availability (some rumors say it'll be well into 2021), the time to a refresh or to next-gen might get really short.
This is exactly what I was hearing at the moment of Turing's launch. How short was that time between Turing and Ampere again? Did the "Super" refresh do anything which made Turing a lot better than it was originally? Why should we expect anything different with Ampere?

But hey, that's just me. Your mileage may vary wildly.
Of course.
 
They'll have more total memory than a PC with 3080 in it? Somehow I doubt that, unless there are users who put their 3080s into a PC with 4GB RAM.

He also just assumed consoles will be dedicating 16GB to VRAM alone. It's not even close. A 16GB 3070 will have more VRAM to start with.
While 8GB seems on the low side (depending on resolution etc.), I think 10GB will be fine, even though 16, or even better 20, will future-proof it more.
The XSS is going to play every next-gen game too, and that has less RAM than either.
 
I'm still not convinced 8GB isn't enough to match next gen consoles throughout this generation. As a comparison point, is there any game that a GTX 980 4GB can't run at equal settings to the base PS4 or XBO at similar or greater performance?

8GB seems small because we already had it in mainstream GPUs 4 years ago, so from a PC perspective I've no doubt it won't be enough to max out many games over the next 5 years. But I'm less convinced it'll be a problem for matching console settings and performance. As noted above, a decent amount of system RAM should be able to make up for a lack of VRAM relative to the consoles, since it can be used to very quickly swap new pages into VRAM that on consoles would have had to be stored in VRAM from the get-go. The SSDs may change that dynamic this generation though, so it may be more important for PCs to have a closer volume of VRAM to the consoles than it was last gen.
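A sketch of that swap dynamic (purely illustrative, not a real engine API; a real streamer works at tile/mip granularity, but the caching idea is the same):

```python
from collections import OrderedDict

# Illustrative only: a fixed VRAM budget backed by system RAM, evicting the
# least-recently-used texture when the pool overflows.
class VramPool:
    def __init__(self, budget_mb: int):
        self.budget_mb = budget_mb
        self.resident = OrderedDict()  # texture id -> size in MB

    def used_mb(self) -> int:
        return sum(self.resident.values())

    def request(self, tex_id: str, size_mb: int) -> str:
        if tex_id in self.resident:            # already resident: no transfer
            self.resident.move_to_end(tex_id)
            return "hit"
        while self.used_mb() + size_mb > self.budget_mb:
            self.resident.popitem(last=False)  # evict LRU back to system RAM
        self.resident[tex_id] = size_mb        # upload over PCIe
        return "streamed"

pool = VramPool(budget_mb=8192)            # hypothetical 8GB card
print(pool.request("rock_albedo", 64))     # streamed
print(pool.request("rock_albedo", 64))     # hit
```

The console has to hold much of that working set in its single memory pool up front; the PC can get away with a smaller VRAM pool as long as PCIe transfers keep up with how fast the visible set changes.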
 
I'm still not convinced 8GB isn't enough to match next gen consoles throughout this generation. As a comparison point, is there any game that a GTX 980 4GB can't run at equal settings to the base PS4 or XBO at similar or greater performance?
He has a point. PC ports of titles which only worked on the consoles thanks to dynamic resolution scaling already exceed 4GB of VRAM on a regular basis. How about the troubles Horizon Zero Dawn has had on PC so far?

It's not that the games would need that much if they had been designed with a smaller VRAM budget in mind (virtual texturing is a thing, after all, so there's no real excuse), but as soon as the primary platform no longer makes such optimizations necessary, they simply won't be present in the port either.

So in the end, there is a hard argument for the consumer to never go below the spec of a current-gen console, assuming the worst-case scenario of how the console's resources could be utilized. Add another +50% or even +100% VRAM on top to enable native 4K gaming for titles which only ran in native FHD on a console. And for the same reason, add another +50% or +100% raw performance to account for that scenario.
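For scale, the pixel arithmetic behind that headroom estimate (noting that render cost doesn't scale perfectly linearly with pixel count):

```python
# Pixel-count ratio between native FHD and native 4K.
fhd_pixels = 1920 * 1080
uhd_pixels = 3840 * 2160
print(f"4K / FHD: {uhd_pixels / fhd_pixels:.1f}x")  # 4.0x
# Against a 4x pixel count, +50%..+100% extra raw performance over console
# spec is arguably still a conservative allowance.
```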

For many, the expectation is that you can get twice the performance of a console for the price of a console. Or at least on par, if you buy a laptop rather than a desktop. Either way, only half of that budget is even allocated to the GPU...

The drive to get a GPU in that class is there, but Ampere (and AMD's upcoming series, for that matter) can't satisfy the demand yet, as they are unable to hit the perf/cost ratio they would need to keep up with heavily discounted / cross-financed consoles. There isn't even a GPU yet which could keep up with the consoles at the price of an entire console for the GPU alone. Let alone one that fits the "twice the power, half the price" rule of thumb.
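Putting rough numbers on that (all prices hypothetical, just to show the gap):

```python
# Rough budget sketch for the perf/cost argument; prices are hypothetical.
console_price = 500.0   # assumed next-gen console launch price
gpu_share = 0.5         # roughly half of a build's budget goes to the GPU

gpu_budget = console_price * gpu_share
print(f"GPU budget in a console-priced build: ${gpu_budget:.0f}")  # $250
# A $250 GPU would need ~2x console performance to satisfy the expectation
# above; nothing in the current lineups is anywhere near that perf/cost.
```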

Launching Ampere after the next-gen consoles would have made it questionable whether to stick with PC gaming at all. For this year, and maybe even next, there are no GPUs in sight with which you could build a PC system with sufficient headroom over a similarly priced console.
 
8GB isn't enough to play Doom Eternal on maximum settings. "Console equivalent" is not an argument: just buy a console.

On maximum: consoles are actually not running maximum either. Low/mid settings are in there even, plus dynamic resolution. Also, I think it's a valid argument to want to match a console; not everyone wants to play on a console or be restricted by its closed-box environment, unable to alter settings etc.

For many, the expectation is that you can get twice the performance of a console for the price of a console. Or at least on par, if you buy a laptop rather than a desktop. Either way, only half of that budget is even allocated to the GPU...

Consoles will always offer better value at launch at like-for-like specs, no doubt. On the other hand, for the more hardcore gamers at least, I don't think a 10TF GPU, 16GB of RAM and a 3.6GHz Zen 2 CPU is a build such a gamer would go for in late 2020 anyway. Such a gamer would probably want 20TF, 16GB of VRAM, 32GB of system RAM and a 4.5GHz Zen 3 in that day and age, coupled with a 7GB/s NVMe drive or something, possibly Optane. And pay the premium over a console.

Launching Ampere after the next-gen consoles would have made it questionable whether to stick with PC gaming at all. For this year, and maybe even next, there are no GPUs in sight with which you could build a PC system with sufficient headroom over a similarly priced console.

A 3070 or an equivalent Navi 2 GPU is going to be much, much more powerful than the GPUs found in the consoles. Yes, they cost more, but not that much more than the entire console. You pay more, but you get more. Is that worth it? For many it seems to be.
An RTX 3060 would probably be a rough match and then some (depending on RAM), paired with an entry-level Zen 2 eight-core and an NVMe drive. You won't be missing out on much.
 
Maximum streaming buffer size, you mean. Which, as far as I can tell, gives you the exact same IQ as the setting next to maximum if you're running off fast enough storage.
You're forgetting it's a last-gen game with static lighting and normal mapping instead of geometry.
 
8GB isn't enough to play Doom Eternal on maximum settings. "Console equivalent" is not an argument: just buy a console.

Some people simply prefer gaming on a PC regardless of how the graphics compare with consoles (controls, customizability, flexibility, backwards compatibility etc.), while others, like me, appreciate all that provided it's delivered with a baseline level of graphics quality, that baseline being the console version. So I do think it's a valid concern to understand whether a GPU bought today for, say, £450 is going to be able to provide a minimum of a console-level experience for the time that I expect to own it. The real-life example of that for me is the RTX 3070. I expect that in non-RT games it should offer a moderately better experience than the consoles for the next couple of years at least, while offering a much better experience in RT and DLSS-enabled games. Its advantage will likely start to tail off within 4 years, but I'd likely be replacing it at that point anyway. If however I found that its 8GB meant I had to reduce some settings compared with the console baseline within that timeframe, then I'd be pretty disappointed.

He has a point. PC ports of titles which only worked on the consoles thanks to dynamic resolution scaling already exceed 4GB of VRAM on a regular basis. How about the troubles Horizon Zero Dawn has had on PC so far?

As far as I'm able to tell though, the troubles with 4GB start at >console settings. Take this video for example:


HZD is running at an average 49fps at 1080p on high settings (the console runs medium). In all the other games shown it seems to be easily ahead of PS4 performance despite featuring only half the memory.

There isn't even a GPU yet which could keep up with the consoles at the price of an entire console for the GPU alone.

A 2070S should be playing in the same ballpark, at least as the PS5 anyway. Possibly even better in RT performance. But I take your point. Naturally the console is still a far better value proposition, as by the time you've added in everything else to even match the console's specs you're paying at least 2.5x more. Paying that much more for equivalent graphics, even if you do have all the added flexibility etc. that I mentioned above, would be a bitter pill to swallow. I'd personally rather pay an even higher premium for a clearly better experience.
 
As far as I'm able to tell though, the troubles with 4GB start at >console settings. Take this video for example:


HZD is running at an average 49fps at 1080p on high settings (the console runs medium). In all the other games shown it seems to be easily ahead of PS4 performance despite featuring only half the memory.

In half of those games, which also tend to be the more modern renderers, the 980 arguably doesn't offer much of an improved experience over a base PS4, and an inferior experience to a PS4 Pro. I hadn't realized Maxwell was performing quite so poorly. This was released a year after the PS4 for 40% more money. It's also heavily overclocked by the user.
 
In half of those games, which also tend to be the more modern renderers, the 980 arguably doesn't offer much of an improved experience over a base PS4, and an inferior experience to a PS4 Pro. I hadn't realized Maxwell was performing quite so poorly. This was released a year after the PS4 for 40% more money. It's also heavily overclocked by the user.

Define "much of an improved experience over a base PS4". I'm seeing higher graphics settings across the board (which are often very taxing on the hardware) plus significantly higher frame rates. Overclocking will get you a few percentage points of performance, nothing that will change the overall dynamic.
 
Define "much of an improved experience over a base PS4". I'm seeing higher graphics settings across the board (which are often very taxing on the hardware) plus significantly higher frame rates. Overclocking will get you a few percentage points of performance, nothing that will change the overall dynamic.
Playing at the same resolution with marginally improved visual settings at the same 30/60 fps limit with a few of the drops cleaned up is not much of an improvement IMO. Overclocking on Maxwell was about a 15-20% performance improvement at the 1.5GHz his card hit.
 
Its advantage will likely start to tail off within 4 years, but I'd likely be replacing it at that point anyway.

Hm, maybe. I do think though that an RTX 3070 (in pure power) will last an entire generation. You might have to alter some settings. Also, many switch at the mid-gen refresh anyway, so about four years in one can change the GPU instead.

In half of those games, which also tend to be the more modern renderers, the 980 arguably doesn't offer much of an improved experience over a base PS4, and an inferior experience to a PS4 Pro. I hadn't realized Maxwell was performing quite so poorly. This was released a year after the PS4 for 40% more money. It's also heavily overclocked by the user.

A 7970 GHz Edition 6GB will do.
 
Playing at the same resolution with marginally improved visual settings at the same 30/60 fps limit with a few of the drops cleaned up is not much of an improvement IMO.

You may not agree that the additional power is being used in an efficient way, but that doesn't change the amount of additional power that is present. In some of the examples shown, the 980 is running at double the framerate of the current-gen consoles, even with improved graphical settings. 60fps vs 30fps is considered a significant improvement by some. And while that video limits the resolution to 1080p, it's likely the 980 could match or exceed the consoles' performance at 1440p or more.

Ultimately, the value either of us places on the improvements reaped by the additional power is irrelevant. My point was that a 4GB GPU is clearly exceeding (seemingly with ease) the performance of an 8GB console. So that's probably quite relevant to the amount of VRAM Ampere is shipping with today.
 