NVidia Ada Speculation, Rumours and Discussion

RDNA2's advantages over RDNA1 are hardly due to "tons of on-die cache".
Agreed, that's vastly oversimplifying it, but it's certainly one major design philosophy change.
Thinking iso-process, if you don't like RDNA1 -> RDNA2 as an example, then consider Kepler -> Maxwell (GK107 -> GM107, or GK104 -> GM206), which are all at least in the same ballpark die-size-wise.

In particular for the lower end cards, bringing them into the DLSS ecosystem alone would be worth the change.
 
I mean, I get what you're saying, but I doubt that Ampere -> Lovelace will be anywhere close in architectural change even to RDNA1 -> RDNA2, let alone Kepler -> Maxwell.
IMO the worst that can happen here is if the low end remains on Turing - meaning no modern video outputs or video decoders, and possibly no raytracing support.
Something like GA107 or even GA106 at the low end, alongside Lovelace dies in the higher segments, will very likely be just as feature-complete as upcoming Lovelace, and it's not clear that this lower end of die sizes even benefits that much from additional cache or an advanced process (AMD's 6500 XT fiasco is still fresh in our memory).
 
These power consumption numbers are really concerning. I need a ~100W replacement for my 1650 Super, but it looks like that may never happen.

It’s also not clear why power consumption needs to go so high. Is it necessary to be competitive with RDNA 3?

The market has shown that most consumers in the desktop space pay primarily based on performance and other functional features. This effectively means any performance left on the table is money left on the table. People do bring up efficiency, but if pressed it usually turns out they want that efficiency to come with savings as well, when in reality efficiency is effectively a premium feature.

Personally, I've already had to revise down my expectations from a year ago of getting 16GB of VRAM in that low-200W space. Now I'm hoping for 12GB, with a somewhat reasonable manual lowering of the power limit, on a cut-down AD104 model.
 

The only times efficiency has been a real issue have been in comparisons to more efficient competition - NV30, Fermi, Hawaii, Vega. IHVs have only pushed power deep into the low-efficiency part of the curve when it was necessary to remain competitive. If these Lovelace power consumption rumors are accurate, I suspect it means RDNA 3 is a much more efficient architecture. Otherwise Nvidia could have aimed for, say, only 1.5x performance instead of 2x and kept power consumption in a reasonable range.
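To illustrate the "low efficiency part of the curve" point, here's a toy Python sketch using the textbook P ≈ C·V²·f dynamic-power approximation. The clock and voltage figures are invented purely to show the shape of the trade-off, not actual Ada or RDNA 3 numbers:

```python
# Toy sketch of the voltage/frequency argument above. All numbers are invented
# to show the shape of the trade-off, not real Ada or RDNA 3 figures. Dynamic
# power scales roughly as C * V^2 * f, and higher clocks need higher voltage,
# so the last slice of performance lives on the steep part of the curve.

def dynamic_power(freq_ghz: float, volts: float, cap: float = 100.0) -> float:
    """P ~ C * V^2 * f, with an arbitrary capacitance-like constant."""
    return cap * volts ** 2 * freq_ghz

# Two hypothetical operating points on the same chip's V/F curve.
points = [
    ("efficiency-oriented", 2.2, 0.85),      # (label, GHz, volts)
    ("performance-at-any-cost", 2.8, 1.10),
]

base_label, base_f, base_v = points[0]
base_p = dynamic_power(base_f, base_v)

for label, f, v in points:
    p = dynamic_power(f, v)
    print(f"{label:>24}: {f / base_f:.2f}x clocks at {p / base_p:.2f}x power")
```

With these made-up points, roughly 27% more clock costs a bit over 2x the power, which is the kind of trade-off you only make if you have to.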
 
Imagine the uproar two years ago, just before Ampere appeared, if the 3090 Ti's 450W TDP had been rumoured versus the 2080 Ti's 250W, and it had also been rumoured to deliver much less than double the performance.

I don't think we're in that situation now - NVidia's gone back to TSMC, so it should be fine. A100 and H100 aren't "broken".
 

I don’t think AD102 is broken either. I was optimistic that 5nm would be an efficiency play - say, 1.5x performance at <1x power. Rumors are pointing to a performance uplift at any cost.

Question is why are they aiming so high for performance unless AMD is pushing them. Maybe they’re worried about Arc.

:LOL:
 
Question is why are they aiming so high for performance unless AMD is pushing them.
AMD brought Infinity Cache to market while NVidia was still designing its own version?

I think it's reasonable to assume the rumoured substantial jump in Ada's cache size is something NVidia has been working on for a while, so NVidia will have had years of internal data to prove its value. Therefore I think it's reasonable to presume NVidia is taking RDNA 3 way more seriously than it took RDNA 2.
 
The only times efficiency has been a real issue have been in comparisons to more efficient competition - NV30, Fermi, Hawaii, Vega. IHVs have only pushed power deep into the low-efficiency part of the curve when it was necessary to remain competitive. If these Lovelace power consumption rumors are accurate, I suspect it means RDNA 3 is a much more efficient architecture. Otherwise Nvidia could have aimed for, say, only 1.5x performance instead of 2x and kept power consumption in a reasonable range.
We're probably just going to have to agree to disagree on this because this is not the general sentiment I see.

For the market in general which do you think most people would pay more for?

1) 2x perf at 1.5x the power. 33% efficiency improvement.

2) 1.5x perf at 1x power. 50% efficiency improvement.
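Spelled out, those percentages are just perf-per-watt ratios; a quick sketch with the relative multipliers from the two options above (nothing measured):

```python
# Perf/W arithmetic for the two hypothetical options above. The numbers are the
# relative multipliers from the post, not measurements of any real product.

options = {
    "1) 2.0x perf at 1.5x power": (2.0, 1.5),
    "2) 1.5x perf at 1.0x power": (1.5, 1.0),
}

for label, (perf, power) in options.items():
    gain = perf / power - 1.0  # perf/W improvement over the previous generation
    print(f"{label}: {100 * gain:.0f}% better perf/W")
# -> 33% better perf/W for option 1, 50% for option 2
```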

When people say they would rather have 1.5x performance at lower power consumption, they also mean they want pricing more commensurate with 1.5x performance. They also don't want to pay as much for that 2x perf at high power on basically the same chip. But if it's the same chip, why would the vendor, from their perspective, not choose the option they can sell for more?

It's an imperfect analogy, but on the AMD side we did have a situation in the market that somewhat illustrated this issue with the Fury, Fury Nano and Fury X. From AMD's perspective they launched the Nano at the same MSRP as the Fury X, given they were the same silicon. But in their round of price cuts they ended up having to drop the Nano to the Fury price tier, as the two were closer together in terms of performance. The niche value the Nano provided, contrary to some vocal claims that it would be worth something, turned out not to be as important to the public at large (well, on the AMD side, only to some extent) as just overall performance.
 

You could ask the same question for any hardware generation though. Why didn't Nvidia push power consumption higher on Pascal? The reason is that they didn't need it in order to win. Ergo the only reason they would do it for Ada (or any generation) is that they can't win without it. Not because they're trying to justify higher prices.


Of course efficiency isn't a substitute for performance. However, Nano and Fury pricing was dictated primarily by the (performance) competition at the time.
 
So we've got 600W, 900W and now 800W max for AD102. I guess the next leak will be 700W and then we'll have all the power levels covered. Then they will say "I told you so" :poop:

Then in real-world usage we're probably looking at numbers below all of that anyway. Typical "leakers" - just like what happened with the consoles: everything between 9 and 20TF, 16 to 32GB of RAM, 8 to 16 cores, etc. Good time to be a leaker, I guess :p
 