> NVIDIA: 8K/60fps
> Also NVIDIA: 59fps in Cyberpunk at 1440p

Different games have different performance levels. Shocking news.
> Different games have different performance levels. Shocking news.

The point being driven home is that talking about 8K in an era of no 8K displays, where 4K is still punishing, is foolish.
> The point being driven home is that talking about 8K in an era of no 8K displays, where 4K is still punishing, is foolish.

You can run 8K via DSR on any 4K display.
You don't have to defend NVIDIA at every turn. The moment I saw it was you quoting me, I knew exactly what to expect.
What’s a ‘smart’ power adapter?
The adapter has an active circuit that detects and reports the power capacity to the graphics card based on the number of 8-pin connectors plugged in. This enables the RTX 4090 to increase the power headroom for overclocking when 4x 8-pin connectors are used versus 3x 8-pin connectors.
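For illustration only, here's a rough Python model of what that answer seems to describe: the adapter's active circuit looks at how many 8-pin inputs are populated and advertises a matching capacity step to the card, which then uses that as the ceiling for its power target. The capacity steps, function names and the 600W figure are my assumptions, not anything from NVIDIA.

```python
# Illustrative sketch only -- capacity steps, names and limits are assumptions.
CAPACITY_STEPS_W = (150, 300, 450, 600)  # capacity steps the cable side could signal

def advertised_capacity_w(populated_8pins: int) -> int:
    """Highest standard step covered by the populated 8-pin inputs (150 W each)."""
    available = populated_8pins * 150
    steps = [c for c in CAPACITY_STEPS_W if c <= available]
    return max(steps, default=0)

def max_power_target_w(advertised_w: int, board_max_w: int = 600) -> int:
    """The card clamps its maximum (overclocking) power target to what the cable advertises."""
    return min(advertised_w, board_max_w)

for n in (3, 4):
    adv = advertised_capacity_w(n)
    print(f"{n}x 8-pin -> adapter advertises {adv} W, power target up to {max_power_target_w(adv)} W")
# 3x 8-pin -> 450 W, 4x 8-pin -> 600 W: hence the extra overclocking headroom.
```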
> I think "2 points" implies the 4090 would have multiple power limits depending on the power cable connected, i.e. a single 12VHPWR can boost up to >3GHz, while 3x 8-pins (via adapter) boost up to 2.8GHz.

Guessed one right.
The problem is that "dumb adapters" don't actually draw power equally from each.Powerlimit is 450W. So it is well within the limit of 3x 8PIN and PCIe (should be 525W).
12VHPWR Connector To 3 x 8-Pin Adapter In 450W Test Load:
1 x 8-Pin Connector = 25.34A or 282.4W (88% Increase Over 150W Rating)
1 x 8-Pin Connector = 7.9A or 94.8W (Within 150W Power Rating)
1 x 8-Pin Connector = 6.41A or 76.92W (Within 150W Power Rating)
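Sanity-checking those figures (my own arithmetic, not from the article): the three readings do sum to roughly the 450W test load, and the first connector alone comes out about 88% above its 150W rating.

```python
# Quick check of the adapter load figures quoted above (my arithmetic only).
readings_w = {"8-pin #1": 282.4, "8-pin #2": 94.8, "8-pin #3": 76.92}
rating_w = 150.0

print(f"Total draw: {sum(readings_w.values()):.1f} W (test load was ~450 W)")
for name, watts in readings_w.items():
    over_pct = (watts / rating_w - 1) * 100
    status = "OVER rating" if watts > rating_w else "within rating"
    print(f"{name}: {watts:.2f} W -> {over_pct:+.0f}% vs 150 W ({status})")
# 8-pin #1 lands ~88% over its rating, matching the percentage quoted above.
```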
> Every card comes with an adapter. So I suspect that these adapters are not "dumb".

That was done with the reference adapter shipping with the 3090 Ti. EVGA did a "proper adapter", but it's made specifically for their top-of-the-line card and does 5x 8-pin to 2x 16-pin (because it's card-specific, they only needed enough 8-pins to satisfy that card's needs rather than what 12VHPWRs could do).
Forza Horizon 5 is Finally Getting DLSS Support, but DLSS 3 isn’t Quite a Lock
It seems the beautiful Forza Horizon 5 is finally getting DLSS support. But is it DLSS 3? We don't know about that yet.
wccftech.com
Edit: Already mentioned above.
> That was done with the reference adapter shipping with the 3090 Ti. EVGA did a "proper adapter", but it's made specifically for their top-of-the-line card and does 5x 8-pin to 2x 16-pin (because it's card-specific, they only needed enough 8-pins to satisfy that card's needs rather than what 12VHPWRs could do).

Read nVidia's FAQ. They found this problem while testing their card and reported it to the PCI-SIG, so all the adapters bundled with the cards should work without problems.
Things like RacerX warrant DLSS3 since I can't imagine that running natively on even an RTX 4090.
True, although hopefully it will be playable on a 4090 with DLSS2 only; otherwise, even with DLSS3 it probably won't be playable on lower-end 4xxx cards.
> That's interesting. In the 65nm generation we had large die-size high-end cards for 200 Euros/Dollars: the GTX 280 at 300 and the GTX 260 as cheaper harvest parts around the 200 mark. According to this chart and the mentioned correlation, that should not have been possible. Yet it was possible. And why? Because of competition and the fight for market share. That's the thing we're missing: roughly equal performance from at least two GPU vendors with no blatant weaknesses in certain areas that the opposing marketing team can capitalize on.
Oops, sorry I had missed this. I don't understand your math when you say it "should not have been possible". Note that the Y-axis is cost per 100M *gates*, not cost per mm^2.
Let's compare a 65nm top-bin (e.g., GTX280) with a 10nm top bin (e.g., RTX3090). The transistor/gate count has gone up 1.4B->28B = 20x while the cost-per-gate has only gone down ~2x according to the chart. So that's a 10x increase in Si cost at a time when the cost-per-xtor was still *decreasing*, just not at the same exponential pace as density! Again, the specific numbers probably aren't accurate but the trends are real.
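Spelling that arithmetic out with the same rough figures (the chart values are approximate, so treat this as back-of-the-envelope):

```python
# Back-of-the-envelope restatement of the comparison above (approximate figures).
gates_gtx280  = 1.4e9     # ~1.4B transistors, 65nm top bin
gates_rtx3090 = 28e9      # ~28B transistors, current top bin
cost_per_gate_ratio = 0.5 # cost per gate only roughly halved per the chart

gate_ratio = gates_rtx3090 / gates_gtx280          # ~20x the gates
si_cost_ratio = gate_ratio * cost_per_gate_ratio   # ~10x the silicon cost
print(f"~{gate_ratio:.0f}x the gates at ~{cost_per_gate_ratio}x the cost per gate "
      f"-> ~{si_cost_ratio:.0f}x the silicon cost")
```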
Alright, I'm taking a break from this forum for some time. I'll see you folks in a few months.
> (1) Si valley corporations aren't our friend, but they also aren't vampires that are trying to come up with convoluted schemes to trick us out of our precious gaming dollars.

There's nothing convoluted about it. It's the brazenness of it that's the most insulting. They are really trying to treat us like we're dumb.

> (2) There's only so much that competition can do if the underlying cost structures are problematic. An absurd competitive downward spiral can end up with something like the airline industry -- perpetually "bankrupt" on paper, with the only profits coming from increasingly absurd loyalty programs and nickel-and-diming every single thing.

Nvidia has just come off record profits.

> (3) We are not entitled to a magical 40% increase in perf/$ per generation in raw rasterization.

Technically, we're not entitled to anything by this logic. They could charge $5000 for the 4080 and by your own logic, I'd get to call you 'entitled' if you had anything negative to say about it. This logic essentially boils ALL consumer complaints of any kind into an accusation of 'entitlement'. It's such a lame attempt to try and shut down discussion.