NVidia Ada Speculation, Rumours and Discussion

The point being driven home is that talking about 8K is foolish in an era with no 8K displays, when even 4K is still punishing.

You don't have to defend NVIDIA at every turn. The moment I saw it was you quoting me, I knew exactly what to expect.
You can run 8K via DSR on any 4K display.

You don't have to bash Nvidia at every turn.
 
What’s a ‘smart’ power adapter?
The adapter has an active circuit that detects and reports the power capacity to the graphics card based on the number of 8-pin connectors plugged in. This enables the RTX 4090 to increase the power headroom for overclocking when 4x 8-pin connectors are used versus 3x 8-pin connectors.
I think "2 points" implies the 4090 would have multiple power limits depending on the power cable connected.
e.g. a single 12VHPWR cable can boost up to >3GHz, while 3x 8-pin (via adapter) boosts up to 2.8GHz. :unsure:
Guessed one right. :LOL:
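Purely to illustrate how a scheme like that could work, here's a minimal sketch, assuming the adapter simply reports how many of its 8-pin inputs are populated and the card derives a budget from that; the mapping and function name are made up for illustration, only the 150 W per 8-pin and 75 W slot figures are the usual PCIe ratings:
```python
# Hypothetical sketch of a "smart" adapter scheme: the adapter reports how many
# 8-pin inputs are populated and the card derives a power budget from that.
# The mapping is an assumption for illustration, not NVIDIA's actual table.

def board_power_budget(populated_8pin_inputs: int, pcie_slot_watts: int = 75) -> int:
    """Assumed power budget (W) for a given number of reported 8-pin inputs."""
    per_8pin_watts = 150  # PCIe rating per 8-pin connector
    return populated_8pin_inputs * per_8pin_watts + pcie_slot_watts

for n in (3, 4):
    print(f"{n}x 8-pin reported -> ~{board_power_budget(n)} W of theoretical headroom")
# 3x 8-pin -> 525 W, 4x 8-pin -> 675 W, which would explain different boost limits
```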
 
The power limit is 450W, so it is well within the combined limit of 3x 8-pin plus the PCIe slot (which should be 525W).
The problem is that "dumb" adapters don't actually draw power equally from each connector.
12VHPWR Connector To 3 x 8-Pin Adapter In 450W Test Load:

1 x 8-Pin Connector = 25.34A or 282.4W (88% Increase Over 150W Rating)
1 x 8-Pin Connector = 7.9A or 94.8W (Within 150W Power Rating)
1 x 8-Pin Connector = 6.41A or 76.92W (Within 150W Power Rating)
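Sanity-checking those numbers with a quick sketch (the amp/watt pairs are the ones reported above; the 150 W rating is the standard 8-pin spec):
```python
# Quick check of the numbers above: how unevenly the "dumb" 3x 8-pin adapter
# split the ~450 W test load, versus the 150 W per-connector rating.
RATING_W = 150.0
connectors = [(25.34, 282.4), (7.9, 94.8), (6.41, 76.92)]  # (amps, watts) as reported

for i, (amps, watts) in enumerate(connectors, start=1):
    if watts > RATING_W:
        status = f"{(watts / RATING_W - 1) * 100:.0f}% over the 150 W rating"
    else:
        status = "within the 150 W rating"
    print(f"8-pin #{i}: {amps:5.2f} A / {watts:6.2f} W ({status})")

print(f"total: {sum(w for _, w in connectors):.1f} W")  # ~454 W of the ~450 W load
# An even split would put each connector at only ~150 W instead.
```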
 

Edit: Already mentioned above.

Based on the performance gain, that looks like DLSS 2 only. Great news regardless, though, given this game's struggles with image quality/AA. And its performance level doesn't warrant DLSS 3 anyway, especially not on those crazy GPUs.
 
That was done with the reference adapter shipping with the 3090 Ti. EVGA did a "proper" adapter, but it's made specifically for their top-of-the-line card and does 5x 8-pin to 2x 16-pin (because it's card-specific, they only needed enough 8-pin connectors to satisfy its needs rather than what 12VHPWR could carry).
Read nVidia's FAQ. They found this problem while testing their card and reported it to the PCI-SIG. So all these adapters bundled with the cards should work without problems.
 
So what's with the ATX 3.0 power spec and the extra 4 sideband pins, which tell the GPU what power is available from the PSU? I'm reading that without those, with any dumb adapter that takes multiple 8-pins, the GPU will default to the lowest power spec, which is 150W.
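For what it's worth, here's the sideband encoding as I understand it, as a minimal sketch; the 150/300/450/600 W tiers come from the ATX 3.0 / PCIe CEM 5.0 coverage, but the exact pin combinations for the middle tiers are from memory, so take it as approximate:
```python
# Rough sketch of the 12VHPWR SENSE0/SENSE1 sideband encoding as I understand it
# (tiers per ATX 3.0 / PCIe CEM 5.0 coverage; the middle-row pin assignments are
# from memory, so treat this as approximate).
SENSE_TABLE = {
    ("gnd", "gnd"):   600,  # full-fat 600 W cable
    ("gnd", "open"):  450,
    ("open", "gnd"):  300,
    ("open", "open"): 150,  # no sense wiring at all -> lowest tier
}

def cable_power_limit(sense0: str, sense1: str) -> int:
    """Sustained cable power (W) the card is allowed to assume."""
    return SENSE_TABLE[(sense0, sense1)]

# A dumb adapter that leaves both sense pins floating:
print(cable_power_limit("open", "open"))  # 150
```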
 
That's interesting. In the 65nm generation we had large-die high-end cards for 200-300 Euros/Dollars: the GTX 280 at around 300 and the GTX 260 as the cheaper harvested part around the 200 mark. According to this chart and the mentioned correlation, that should not have been possible. Yet it was possible. And why? Because of competition and the fight for market share. That's the thing we're missing: roughly equal performance from at least two GPU vendors, with no blatant weaknesses in certain areas that the opposing marketing team can capitalize on.
Oops, sorry I had missed this. I don't understand your math when you say it "should not have been possible". Note that the Y-axis is cost per 100M *gates*, not cost per mm^2.

Let's compare a 65nm top-bin (e.g., GTX280) with a 10nm top bin (e.g., RTX3090). The transistor/gate count has gone up 1.4B->28B = 20x while the cost-per-gate has only gone down ~2x according to the chart. So that's a 10x increase in Si cost at a time when the cost-per-xtor was still *decreasing*, just not at the same exponential pace as density! Again, the specific numbers probably aren't accurate but the trends are real.
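Spelling that arithmetic out, with the same rough chart-derived numbers:
```python
# Back-of-envelope from the chart-derived figures above (rough numbers,
# only the trend matters, as noted).
gtx280_gates = 1.4e9             # GT200, ~1.4B transistors
rtx3090_gates = 28e9             # GA102, ~28B transistors
cost_per_gate_improvement = 2.0  # ~2x cheaper per gate, per the chart

gate_ratio = rtx3090_gates / gtx280_gates              # ~20x
si_cost_ratio = gate_ratio / cost_per_gate_improvement
print(f"~{gate_ratio:.0f}x the gates at ~half the cost per gate "
      f"-> ~{si_cost_ratio:.0f}x the silicon cost")
```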

And to reiterate the earlier discussion, Si cost is just one element of the pie chart, but it's an albatross that's growing exponentially heavier. Competitive pressure may force a reduction in gross margins, but there isn't much room to maneuver. And just to be pedantic, the GTX 280 launched at $650 (which is $900 in today's dollars) and I believe was later dropped to $400 ($550 today).

I hope that's enough information (and discussion) to convince people to put their pitchforks away. Main takeaways are:
(1) Si valley corporations aren't our friend, but they also aren't vampires that are trying to come up with convoluted schemes to trick us out of our precious gaming dollars.
(2) There's only so much that competition can do if the underlying cost structures are problematic. An absurd competitive downward spiral can end up with something like the airline industry -- perpetually "bankrupt" on paper, with the only profits coming from increasingly absurd loyalty programs and nickel-and-diming every single thing.
(3) We are not entitled to a magical 40% increase in perf/$ per generation in raw rasterization.

And no, this does not mean the end of innovations in computer graphics. It's just that the innovations need to move higher up the stack. We'll see more carefully-curated hardware specialization (RT, tensor cores), better reconstruction algorithms to work smarter instead of harder, better uarch/VLSI (just look at what AMD has been doing the past few generations) and possibly a complete upheaval of the way we do rendering (Instant NeRF). Exciting times ahead!

Alright, I'm taking a break from this forum for some time. I'll see you folks in a few months.
 
On another note, Nvidia seemingly cuts the L2 cache slices in the RTX 4090 down from 8192 kByte to 6144 kByte each, if they are still attached to the memory controllers, that is.
Sorry if this was mentioned here already.
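Quick arithmetic on what that would mean for total L2, assuming one slice per 32-bit memory controller on AD102's 384-bit bus (the slice count is my assumption, not part of the leak):
```python
# Back-of-envelope for the rumoured L2 cut, assuming one L2 slice per 32-bit
# memory controller on AD102's 384-bit bus (the 12-slice count is my assumption).
slices = 384 // 32        # 12 memory controllers / L2 slices
full_slice_kb = 8192      # full AD102 slice
cut_slice_kb = 6144       # rumoured RTX 4090 slice

print(f"full AD102: {slices * full_slice_kb / 1024:.0f} MB L2")  # 96 MB
print(f"RTX 4090:   {slices * cut_slice_kb / 1024:.0f} MB L2")   # 72 MB
```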

Oops, sorry I had missed this. I don't understand your math when you say it "should not have been possible". Note that the Y-axis is cost per 100M *gates*, not cost per mm^2.

Let's compare a 65nm top-bin (e.g., GTX280) with a 10nm top bin (e.g., RTX3090). The transistor/gate count has gone up 1.4B->28B = 20x while the cost-per-gate has only gone down ~2x according to the chart. So that's a 10x increase in Si cost at a time when the cost-per-xtor was still *decreasing*, just not at the same exponential pace as density! Again, the specific numbers probably aren't accurate but the trends are real.


Alright, I'm taking a break from this forum for some time. I'll see you folks in a few months.

That's also a 10X increase in price (200 to 2000 Euro for the top-bin 3090 Ti). And that does not even take 14 years of inflation into account. I just don't buy all the excuses for companies raising prices.
Have fun and find time to relax in your break though!
 
(1) Si valley corporations aren't our friend, but they also aren't vampires that are trying to come up with convoluted schemes to trick us out of our precious gaming dollars.
There's nothing convoluted about it. It's the brazenness of it that's the most insulting. They are really trying to treat us like we're dumb.

And I dunno, reading some posts trying to defend them for it, maybe they're right. Maybe they really do expect that people will either not be able to see it, or will be so brand loyal that they'll still try and argue it's all good!
(2) There's only so much that competition can do if the underlying cost structures are problematic. An absurd competitive downward spiral can end up with something like the airline industry -- perpetually "bankrupt" on paper, with the only profits coming from increasingly absurd loyalty programs and nickel-and-diming every single thing.
Nvidia has just come off record profits.

And this is just an absurd slippery slope argument in general. Nvidia is doing really well in the big picture. Nobody is arguing they need to start slimming margins down to pennies to make people happy. Just don't try and take us for a ride like they're clearly doing here. It's not an unreasonable ask like y'all keep trying to suggest.
(3) We are not entitled to a magical 40% increase in perf/$ per generation in raw rasterization.
Technically, we're not entitled to anything by this logic. They could charge $5000 for the 4080 and by your own logic, I'd get to call you 'entitled' if you had anything negative to say about it. This logic essentially boils ALL consumer complaints of any kind into an accusation of 'entitlement'. It's such a lame attempt to try and shut down discussion.

At some point, if significant increases in performance per dollar aren't being delivered with new generations of products and over time, then people are going to complain, and rightfully so. From a consumer perspective, this is really THE most important thing, and what has driven the PC gaming GPU market its entire existence.

These companies absolutely can still offer this. If they don't, it's not because they just can't or it would put the business under threat; it's because they think they can get away with not doing so in order to boost revenue/margins as much as they possibly can. A company with a dominant edge in a given market acting greedy like this is NOT some loony conspiracy.
 
I do not think this is the topic for hardcore ranting. You might be correct, but that also applies to other large tech companies, even consoles. Maybe take this rant to a dedicated topic elsewhere, since this one is about Ada/Lovelace speculation, not how greedy and evil large tech companies are.
 