NVidia Ada Speculation, Rumours and Discussion

Would NVIDIA push GDDR6X to the sidelines already?
Samsung didn't really say when the actual "launch" will happen, though, nor with which GPU company (and for what it's worth, they talked about launches in the plural).
Why would Nvidia care about G6X? Their MCs support either (obviously), so if Samsung is able to supply fast G6 in addition to Micron's G6X, I don't see what would stop Nvidia from using both.
 
No, not really prevent it, but GDDR6X is a much more expensive option overall to my understanding (the signaling requirements of PAM4 and whatnot), which makes it undesirable at least in the midrange and low end. At the high end, for the same reasons, it would be strange to use GDDR6X on the slower models and "old" GDDR6 in the very highest end.
And they might care about GDDR6X due to the amount of money and time they poured into developing and implementing it.
 
You think this ultra-fast Samsung G6 will be cheaper than the G6X, which has been in use for two years now?
 
Per memory chip, probably not, but PAM4 to my understanding is far more complicated and expensive to implement compared to NRZ.
(Also, there are higher-clocked bins of GDDR6X which haven't been used for 2 years; if you don't count those, you should in the same way consider GDDR6 as having been used for 4 years.)
 
24 Gbps G6 certainly hasn't been used for 4 years while 21 Gbps G6X was. I don't think this new G6 will be in any way less expensive than similarly specced G6X. Why would it be? Samsung loves making money as much as any other company on the planet, and I'd imagine this G6 will in fact be in higher demand than G6X, which is used essentially exclusively in Nvidia cards.
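For scale, here's a quick back-of-the-envelope sketch in Python of what those two bins mean in raw bandwidth (the 384-bit bus is just an illustrative assumption, matching GA102-class boards):

[code]
# Back-of-the-envelope GDDR bandwidth: (bus width in bits / 8) bytes
# per transfer, times the per-pin data rate in Gbps, gives GB/s.
def gddr_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

# Assuming a 384-bit bus as on GA102-class boards:
print(gddr_bandwidth_gb_s(384, 21.0))  # 21 Gbps G6X -> 1008.0 GB/s
print(gddr_bandwidth_gb_s(384, 24.0))  # 24 Gbps G6  -> 1152.0 GB/s
[/code]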
 
21 Gbps G6X was used where 2 years ago? Not in any reference design, that's for sure; the first (and so far only) one is the RTX 3090 Ti, which launched earlier this year. The RTX 3090 uses the 19.5 Gbps bin and the rest the 19 Gbps bin.
G6 doesn't need to be cheaper than G6X to be the cheaper option, because G6X is more expensive to implement on the card with its PAM4 signaling compared to G6 with NRZ signaling.
 
AFAIR all G6X chips used on Ampere are 21 Gbps parts. They are downclocked on the 3080 and 3090 because of thermal issues with the board design, and because of the apparent lack of much gain when pushing them to 21 anyway.

I'm not sure how a more robust signaling scheme is more expensive to implement. If you mean in the MCs, then sure, but that's done already. If you mean physically, then I'd wager that running G6 at 24 Gbps will in fact be harder and thus more expensive, precisely because G6X is using PAM4, which likely makes the physical implementation cheaper.
 
PAM4 on paper needs half the symbol rate to deliver the same bandwidth as NRZ. On the flip side, it's less tolerant of noise and crosstalk. If the net result is only 15% lower power consumption, it doesn't seem worth it at all. It will be interesting to see a head-to-head comparison of G6 and G6X at the same bandwidth.
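To make that trade-off concrete, here's a minimal Python sketch of the textbook bits-per-symbol relationship (nothing measured, just the math): PAM4 encodes 2 bits per symbol across 4 levels, so it halves the symbol rate for a given data rate, but it also splits the fixed voltage swing into 3 stacked eyes, which is where the noise and crosstalk sensitivity comes from.

[code]
import math

# NRZ: 2 levels, 1 bit/symbol. PAM4: 4 levels, 2 bits/symbol.
def symbol_rate_gbaud(data_rate_gbps: float, levels: int) -> float:
    return data_rate_gbps / math.log2(levels)

print(symbol_rate_gbaud(21.0, 2))   # NRZ at 21 Gbps  -> 21.0 GBaud
print(symbol_rate_gbaud(21.0, 4))   # PAM4 at 21 Gbps -> 10.5 GBaud

# For a fixed voltage swing, an N-level signal has N-1 stacked eyes,
# so the per-eye vertical margin shrinks accordingly:
print(1 / (2 - 1))  # NRZ:  1.0 (full swing per eye)
print(1 / (4 - 1))  # PAM4: ~0.33 (a third of the swing per eye)
[/code]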
 
I don't follow the sentiment here at all. What's the point of comparing two things that do completely different jobs? A hair dryer consumes more energy than either a refrigerator or a GPU. So does a car. It doesn't make sense to me to use any of these as a benchmark for judging the excessiveness of another, because they solve entirely different problems. A GPU could be consuming 1 mW or 1 MW; that doesn't make it more or less effective at its job in relation to a refrigerator.

Instead we can compare it against other computing devices. And you know how that story goes -- we've enjoyed a freaking *exponential* increase in efficiency over the past several decades.

Yeah, what sucks is that that exponential efficiency growth is slowing down, and it's getting expensive to sustain. We're just spoiled. I doubt there is any other field in the history of human civilization that has seen sustained exponential efficiency growth. And that growth hasn't happened by magic -- it has happened through the efforts of many, many smart human beings. Let's give them their due credit.

With console power draw as well as heat output at an all-time high, and consoles being the 'baseline' for low-end hardware, we're likely not seeing any changes anytime soon.
 
900W then?
No. Being tested now are 450W (the FE design) and an AIB reference design with a 550W PL. The die qualification board's max TGP is 800W (not intended for production, only for qualification). Then you will have the AIBs' custom boards coming a bit later.
If I had to guess, I think this 160 fps figure is for >600W.
 
I was being sarcastic.
But if it's Control with all RT active, then >=2.2x doesn't seem that high tbh, considering that we're still expecting ~2x in rasterization, and that's without OC.
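For reference, the arithmetic behind that reading (a sketch only; the 160 fps figure is the rumour above, not a confirmed number):

[code]
# If 160 fps represents a >=2.2x uplift, the implied current-gen
# baseline in the same scene would be:
target_fps = 160
claimed_uplift = 2.2
print(target_fps / claimed_uplift)  # ~72.7 fps
[/code]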
 