Nvidia Ampere Discussion [2020-05-14]

How is that comparable? If anything, you could compare this to the "PCIe gate" at the RX 400 series launch. The difference, of course, being that PCIe slot power draw slightly above spec caused no issues whatsoever. And yet I still remember the tech media grilling AMD to no end.

That was different. AMD claimed a false power-consumption figure. The advertised "average" boost clock is still the same.
 
If anything, you could compare this to the "PCIe gate" at the RX 400 series launch. The difference, of course, being that PCIe slot power draw slightly above spec caused no issues whatsoever. And yet I still remember the tech media grilling AMD to no end.
Yes, this is the closest thing I can recall from an AMD GPU launch. They went above spec on an electrical limit and they shouldn't have, regardless of there being no real-life repercussions. It was solved through a driver update soon enough, but by then the noise generated around the issue had already flooded the cards' launch window.
 
That was different. AMD claimed a false power-consumption figure. The advertised "average" boost clock is still the same.

What the heck are you even talking about? What false power consumption? I'm talking about the power draw over the PCIe slot slightly exceeding the spec (75W) on the RX 480.
 
Well at least it's still performing very well.
It's performing "very well" only if Navi 21 can do no better than reach performance parity with the RTX 3070 Ti (a cut-down, double-memory GA102, or GA103?). If Navi 21 matches or exceeds the 3080, then that looks like a fail.

We shall see...

Remember, AMD hasn't just switched to an entirely new node; Navi 21 is a tweak of an existing chip on a tweaked node.
 
What the heck are you even talking about? What false power consumption? I'm talking about the power draw over the PCIe slot slightly exceeding the spec (75W) on the RX 480.

The reference RX 480 was advertised with a 150W TDP and had a single 6-pin connector. It drew 165W under load, with a problematic distribution between the slot and the connector.
 
The reference RX 480 was advertised with a 150W TDP and had a single 6-pin connector. It drew 165W under load, with a problematic distribution between the slot and the connector.

Something being advertised as 150W TDP while consuming 165W was not the issue here; it was the load distribution, as you correctly pointed out, and that is what I was talking about.
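
To put rough numbers on why the distribution was the problem (a sketch; the 50/50 split between slot and connector is my assumption, and the real card actually leaned slightly harder on the slot):

```python
# Rough model of the RX 480 power-distribution issue (the even split
# is an illustrative assumption, not a measurement).

PCIE_SLOT_LIMIT_W = 75.0   # PCIe CEM spec limit for slot power
SIX_PIN_LIMIT_W   = 75.0   # nominal limit of a 6-pin connector

def rail_loads(total_draw_w: float, slot_share: float):
    """Split total board power between the slot and the 6-pin."""
    slot = total_draw_w * slot_share
    six_pin = total_draw_w * (1.0 - slot_share)
    return slot, six_pin

for total in (150.0, 165.0):
    slot, six_pin = rail_loads(total, slot_share=0.5)  # assumed even split
    print(f"{total:.0f}W total -> slot {slot:.1f}W "
          f"({'over' if slot > PCIE_SLOT_LIMIT_W else 'within'} spec), "
          f"6-pin {six_pin:.1f}W")
```

Even at the advertised 150W an even split sits exactly at the slot limit, so the measured 165W had nowhere to go but over spec; as far as I remember, the driver fix then shifted more of the load onto the 6-pin.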
 
That hunt is still on, though, partly because there is still up to a 10-15% spread in achievable boost clocks, independent of the TDP limit.

Capacitors are not a significant reason for the performance spread. der8auer took a board with supposedly bad capacitors, swapped them, and gained only about 40MHz of clock, which is an insignificant improvement. The performance differences come from other factors, such as chip quality and the different power limits on various boards.
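
For scale (simple arithmetic; the ~1900MHz typical boost clock is my assumption):

```python
# How much does der8auer's 40MHz capacitor-swap gain matter next to
# the quoted 10-15% boost-clock spread? (1900MHz typical boost assumed.)

typical_boost_mhz = 1900.0   # assumed typical Ampere boost clock
cap_swap_gain_mhz = 40.0     # der8auer's measured improvement

gain_pct = cap_swap_gain_mhz / typical_boost_mhz * 100
print(f"Capacitor swap: +{gain_pct:.1f}%")               # ~2.1%

for spread_pct in (10, 15):
    spread_mhz = typical_boost_mhz * spread_pct / 100
    print(f"{spread_pct}% spread: ~{spread_mhz:.0f}MHz")  # 190-285MHz
```

Roughly a 2% gain against a 190-285MHz spread, so the capacitor swap clearly isn't where the variance comes from.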

 
It's performing "very well" only if Navi 21 can do no better than reach performance parity with the RTX 3070 Ti (a cut-down, double-memory GA102, or GA103?). If Navi 21 matches or exceeds the 3080, then that looks like a fail.

We shall see...

Remember, AMD hasn't just switched to an entirely new node; Navi 21 is a tweak of an existing chip on a tweaked node.
Is what now?
 
Would you kindly keep AMD and Navi mentions to a minimum in this Nvidia Ampere thread? Maybe continue in the speculative GPU performance thread?
 
Capacitors are not a significant reason for the performance spread. der8auer took a board with supposedly bad capacitors, swapped them, and gained only about 40MHz of clock, which is an insignificant improvement. The performance differences come from other factors, such as chip quality and the different power limits on various boards.

Yup, it's not the capacitors, as Nvidia mentioned.

Thing is, cocky know-it-all YouTubers like JayzTwoCents should be embarrassed for spreading bad information and repeating things like parrots. Sorry, I can't stand the guy.

 
There's something else I'm a bit worried about: EMC. We do have reports of glitches being induced in sound cards, and just going by the numbers, this is likely to be critical.

EDIT: It's not going to fail EMC testing under EU regulation that easily, as the case will filter out nearly all emissions. Compatibility with other parts inside the same system is still at risk, though.

One thing that I suspect is playing into this somewhat: this is Nvidia's first PCIe 4.0 GPU, and an awful lot of people are using PCIe riser cables to mount the GPU parallel to the motherboard. While that works, and may even work at PCIe 4.0 link speeds depending on the riser, it's going to spray a ton of noise everywhere inside the case, and I wouldn't be surprised if adjacent sound cards or circuitry on the motherboard are not too pleased about it.
 
You are giving me deja vu of Charlie claiming that Fermi was a compute-focused architecture quickly adapted for graphics, which turned out to be a huge pile of shit when it was revealed to be a geometry monster that shockingly outperformed the HD 5870 in tessellation.

But compute-based rasterized geometry is already maxed out. A PS5 can do UE5's version at 1440p in just a handful of ms, and mesh shaders shouldn't be much different. Both RDNA2 and Ampere, assuming you've got the card-to-resolution ratio right, are already at about 1 triangle per pixel, and with modern mesh filtering you don't need any more than that.

What's needed, and what's going to be scalable, is raytracing performance, memory bandwidth, etc. Sure, you can get a 3090 to run Doom Eternal at 8K *now*, a super heavily optimized last-gen game with a 60fps target. But as soon as next gen starts getting targeted, goodbye 8K. It certainly shouldn't hold up once Eternal's promised "next gen" upgrade arrives.

I just don't see it as a balanced, scalable arch. The die sizes alone don't fit; those are huge. If I were Nvidia, I'd want to sell those dies to professionals at twice the price: you want a new 4K+ video rendering card, fork it over.
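
Quick back-of-the-envelope on the 1-triangle-per-pixel point (my own arithmetic; the 60fps target is an assumption):

```python
# At ~1 visible triangle per pixel, the geometry budget per frame is
# simply the pixel count; beyond that, extra triangles are sub-pixel
# and mesh LOD/filtering should cull them anyway.

resolutions = {
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
    "8K":    (7680, 4320),
}

for name, (w, h) in resolutions.items():
    pixels = w * h
    tris_per_sec = pixels * 60  # 1 triangle/pixel at an assumed 60fps
    print(f"{name}: {pixels / 1e6:.1f}M pixels -> "
          f"~{tris_per_sec / 1e9:.2f}B visible triangles/s at 60fps")
```

Even 8K60 at one visible triangle per pixel is only about 2 billion triangles per second, which is why raw geometry throughput scales worse from here than raytracing and bandwidth do.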
 
But compute-based rasterized geometry is already maxed out. A PS5 can do UE5's version at 1440p in just a handful of ms, and mesh shaders shouldn't be much different. Both RDNA2 and Ampere, assuming you've got the card-to-resolution ratio right, are already at about 1 triangle per pixel, and with modern mesh filtering you don't need any more than that.

What's needed, and what's going to be scalable, is raytracing performance, memory bandwidth, etc. Sure, you can get a 3090 to run Doom Eternal at 8K *now*, a super heavily optimized last-gen game with a 60fps target. But as soon as next gen starts getting targeted, goodbye 8K. It certainly shouldn't hold up once Eternal's promised "next gen" upgrade arrives.

I just don't see it as a balanced, scalable arch. The die sizes alone don't fit; those are huge. If I were Nvidia, I'd want to sell those dies to professionals at twice the price: you want a new 4K+ video rendering card, fork it over.

Isn't UE5 a really good example of how things are going to be compute-heavy, meaning Ampere's doubling of FP32 ALUs could be a very good choice?
 
@Scott_Arm

The thing with Ampere is that they didn't really double the FP32 units. You always need INT operations as well. The minimum that was shown was around 20% INT usage, so I think you can only ever use 60-80% of the FP32 throughput.

They did double FP32. Some workloads use INT32, and then you'll lose some FP32 performance, but even in those cases you have vastly more FP32 capability than with Turing.
 
Theoretically they did, but practically it's not achievable; you always have to deal with a 20-40% decrease in FP32 performance in most programs. This was really a big marketing trick.
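
To make the arithmetic behind the 60-80% figure concrete (a simplified model of the Ampere SM, assuming 64 FP32-only lanes plus 64 lanes that toggle between FP32 and INT32, and ignoring scheduling details):

```python
# Simplified effective-FP32 model for an Ampere SM: 64 FP32-only lanes
# plus 64 shared FP32/INT32 lanes (an assumption matching Nvidia's
# public block diagrams; real scheduling is messier).

PEAK_LANES = 128  # 64 FP32-only + 64 shared FP32/INT32

def effective_fp32_fraction(int_fraction: float) -> float:
    """FP32 throughput relative to the 128-lane peak, for a given
    INT32 share of the instruction mix (valid while INT <= 50%)."""
    assert 0.0 <= int_fraction <= 0.5, "shared datapath saturates at 50% INT"
    # In steady state the shared lanes absorb all the INT work, so the
    # FP32 share of issued ops is simply 1 - int_fraction.
    return 1.0 - int_fraction

for int_mix in (0.2, 0.3, 0.4):
    print(f"{int_mix:.0%} INT -> "
          f"{effective_fp32_fraction(int_mix):.0%} of peak FP32")
```

Which lines up with both posts above: a 20-40% INT mix leaves 60-80% of the doubled peak, yet even the worst case is still far more FP32 than Turing's single FP32 datapath.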
 