NVidia Ada Speculation, Rumours and Discussion

I think you’re saying that Ampere pulls ahead with triangle-based geometry because of its triangle-intersection advantage, and not primarily because it's faster at traversal.
We can see that Ampere versus Turing shows a strong dependence upon triangle intersection rate.

Scene 3, which mixes triangles and procedural geometry, appears to have the most complex BVH. It's 61% faster on Ampere, which suggests that triangle acceleration is the win.

Scene 4, which admittedly has nearly no triangles and presumably has the smallest BVH of the five test scenes, shows a 35% gain. It may be fully procedural?

Scene 5 shows an 87% gain for Ampere in something that looks to be entirely triangle based (though the Cornell box may be procedural).

So two scenes that are dominated by triangle-based geometry are scaling strongly with Ampere's triangle acceleration.

Scene 3, with seemingly the most complex BVH, is the same speed on the 6900XT and 2080Ti. There's obviously some procedural geometry to "slow down the 2080Ti", but the scene is dominated by rough materials (which create a lot of ray divergence) along with caustics that suck up rays like a sponge. The depth of field in this scene adds to ray divergence, too.

There are a lot of reasons to expect ray divergence to be a problem in this scene, but it doesn't seem to be the dominant factor.
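
To make the divergence point concrete, here's a toy CUDA sketch (purely illustrative; the bounce counts are a made-up proxy for how rough materials, caustics and depth of field scatter rays to different path depths). Because a warp executes in lockstep, it keeps stepping until its slowest ray finishes:

```cuda
// Toy illustration of ray divergence, not a real ray tracer: each thread
// stands in for a ray, and its bounce count (a made-up proxy for rough
// materials / caustics / DoF scattering rays to different path depths)
// decides how long its loop runs. A warp executes in lockstep, so it keeps
// issuing the loop body until its *slowest* ray is done.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void bounceLoop(const int* bounces, float* radiance, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float acc = 0.0f;
    // Divergent trip count: lane 0 may do 1 iteration, lane 31 may do 16,
    // and the warp pays for 16 iterations on every lane regardless.
    for (int b = 0; b < bounces[i]; ++b)
        acc += 0.5f / (b + 1);          // stand-in for per-bounce shading

    radiance[i] = acc;
}

int main()
{
    const int n = 1 << 20;
    int*   bounces;
    float* radiance;
    cudaMallocManaged(&bounces,  n * sizeof(int));
    cudaMallocManaged(&radiance, n * sizeof(float));

    // Worst case: every warp gets the full 1..16 spread of path depths.
    for (int i = 0; i < n; ++i) bounces[i] = 1 + (i % 16);

    bounceLoop<<<n / 256, 256>>>(bounces, radiance, n);
    cudaDeviceSynchronize();
    printf("radiance[0] = %f\n", radiance[0]);

    cudaFree(bounces);
    cudaFree(radiance);
    return 0;
}
```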

There’s probably truth to that, but the other way to interpret the test results is that RDNA 2 is faster at intersecting procedural geometry, which negates Ampere’s traversal advantage. We would need more controlled experiments to be sure.
I agree we need better experiments.

It looks like RDNA 2 is faster at intersecting procedural geometry, but why? It's shader code, isn't it? Shouldn't that be "FLOPS-bound"? Maybe it isn't FLOPS-bound at all; maybe it comes down to scheduling and latency-hiding.
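
For what it's worth, the per-primitive math is tiny. Here's a minimal CUDA rendering of a ray-sphere hit test (the DXR version would live in an HLSL intersection shader, and the struct layouts here are hypothetical):

```cuda
// A minimal ray-sphere hit test, i.e. the kind of math a DXR procedural
// intersection shader runs in plain shader code. This is a CUDA rendering
// of it, not the DXR API, and the struct layouts are made up. The point:
// the quadratic solve is ~20 FLOPs, so in isolation it looks FLOPS-bound,
// but each test first has to *load* its primitive, and the GPU has to keep
// enough rays in flight to hide that latency.
#include <cstdio>
#include <cuda_runtime.h>

struct Ray    { float ox, oy, oz, dx, dy, dz; };
struct Sphere { float cx, cy, cz, r; };   // hypothetical leaf record

__device__ bool intersectSphere(const Ray& ray, const Sphere& s, float& tHit)
{
    // Solve |o + t*d - c|^2 = r^2 for t.
    float Lx = ray.ox - s.cx, Ly = ray.oy - s.cy, Lz = ray.oz - s.cz;
    float a = ray.dx * ray.dx + ray.dy * ray.dy + ray.dz * ray.dz;
    float b = 2.0f * (Lx * ray.dx + Ly * ray.dy + Lz * ray.dz);
    float c = Lx * Lx + Ly * Ly + Lz * Lz - s.r * s.r;

    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f) return false;        // ray misses the sphere

    tHit = (-b - sqrtf(disc)) / (2.0f * a);   // nearest root
    return tHit > 0.0f;
}

__global__ void testKernel(Ray r, Sphere s, float* out)
{
    float t;
    out[0] = intersectSphere(r, s, t) ? t : -1.0f;
}

int main()
{
    float* out;
    cudaMallocManaged(&out, sizeof(float));
    Ray    r{0, 0, -5, 0, 0, 1};          // shoot +z from z = -5
    Sphere s{0, 0, 0, 1};                 // unit sphere at the origin
    testKernel<<<1, 1>>>(r, s, out);
    cudaDeviceSynchronize();
    printf("tHit = %f (expect 4.0)\n", out[0]);
    cudaFree(out);
    return 0;
}
```

In isolation that's pure ALU work, which is consistent with scheduling and latency-hiding, not raw FLOPS, deciding the winner.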
 
So, given the state of DXR, what can Nvidia do (since we're in an Nvidia topic, but I'm no fanboy) to accelerate RT even further, beyond brute-forcing it with more RT cores or higher frequencies?

Bigger caches, better BVH compression. More concurrency within each RT core.
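
On the BVH-compression point, here's a sketch of one common flavor from the literature (quantized child boxes; explicitly not any vendor's actual node format):

```cuda
// Sketch of one common flavor of BVH compression (a generic technique from
// the literature, not any vendor's actual node format): keep the parent box
// in fp32 and quantize each child's box to 8-bit offsets within it.
#include <cstdint>
#include <cstdio>

struct BVHNode4Fp32 {                 // uncompressed 4-wide node
    float   childMin[4][3];           // 48 bytes
    float   childMax[4][3];           // 48 bytes
    int32_t childIndex[4];            // 16 bytes -> 112 total
};

struct BVHNode4Quant {                // children quantized to uint8
    float   origin[3];                // parent box lower corner
    float   scale[3];                 // (parentMax - parentMin) / 255
    uint8_t childMinQ[4][3];          // child boxes as 8-bit grid coords
    uint8_t childMaxQ[4][3];
    int32_t childIndex[4];            // -> 64 bytes total
};

// Decompression is one multiply-add per coordinate: cheap ALU work traded
// for memory footprint.
__host__ __device__ inline float dequant(float origin, float scale, uint8_t q)
{
    return origin + scale * (float)q;
}

int main()
{
    printf("fp32 node:      %zu bytes\n", sizeof(BVHNode4Fp32));
    printf("quantized node: %zu bytes\n", sizeof(BVHNode4Quant));
    return 0;
}
```

Halving the node size means roughly twice as many nodes fit in the same cache, and decompression costs one multiply-add per coordinate, which is usually a good trade for traversal.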
 
With TSMC N5/N5P and a likely 450W target, I do wonder whether a 4090 will be 2x over a 3090/3090 Super, or even more than that. I mean, Samsung 8nm to TSMC N5 is a huge jump.
 
450W good freakin' lord.

We're two iterations away from a viable hotplate for cooking dinner -- like, searing a steak, not just reheating cold leftover veggies.

It’s inevitable. Transistor size is dropping faster than transistor power. We must be quickly approaching a power wall, though, unless PC form factors evolve to support more efficient cooling systems.
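
A back-of-envelope version of "transistor size is dropping faster than transistor power", with made-up but representative node-to-node factors:

```latex
% Made-up but representative node-to-node factors: transistor area scales
% by ~0.5x, power per transistor by only ~0.7x. At a fixed die size:
P_{\text{density}} \propto \frac{P_{\text{transistor}}}{A_{\text{transistor}}}
\quad\Rightarrow\quad \frac{0.7}{0.5} = 1.4\times \text{ per node}
```

So at a fixed die size, power density climbs with every shrink unless voltage and clocks give some of it back; that's the post-Dennard squeeze behind these 450W parts.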
 
Most custom 3080/3090/6800XT/6900XT cards have been at 450W for almost a year now. It's non-news, really.
 
We've had double and even triple SLI/CF systems consuming a lot more than 450W for many years. No idea why people are so surprised by the 450W figure now. It's not like there will be only 450W products from now on.
 
My point wasn't about uber-overclocked cards hitting 450W max, and it wasn't about a bunch of cards chained together to aggregate wattage either.

It's that a single die, at "typical" power draw, finally getting into the 450W category is pretty steep. We could reasonably extrapolate that the OC cards of that era are going to be in the 600W range. Which goes back to my statement: you're about two full iterations away from a "typical" wattage in the mid-to-high 600s, with OC cards nearing 800W or more. That's literally a hotplate sufficient for searing steak.
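
For what it's worth, here's the arithmetic behind that extrapolation, assuming roughly 20% "typical" power growth per generation and ~25% OC headroom (both guesses):

```latex
% Assumed: ~20% "typical" power growth per generation, ~25% OC headroom.
450\,\mathrm{W} \times 1.2^{2} \approx 648\,\mathrm{W} \quad\text{(typical, two iterations out)}
648\,\mathrm{W} \times 1.25 \approx 810\,\mathrm{W} \quad\text{(OC cards on top of that)}
```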
 
I wouldn't be surprised if, after the RTX 40 or 50 series, they focus more on power efficiency. I wonder, if Intel's foundry plan is successful, whether we could see Intel 14A GPUs in, say, 2027? Might Nvidia skip a year, if possible, to bring an RTX 60 in 2027 rather than 2026 on such a process to get power consumption down?
 
That "single die at typical power draw" won't go against single die competition.
The OC means nothing. Power is decided by what is the expected performance a product must hit to be competitive.
If you're not okay with that then don't buy products which will consume that much power. Simple.
And if you're expecting some competitors to do better then you're in for a disappointment. Competition is essentially who is pushing the power up trying to beat Nvidia at the moment.
 
Maybe it's about time game developers started optimizing their games again; then we wouldn't need to brute-force everything with expensive, wasteful upgrades on a two-year cadence.
 
I wouldn't be surprised if, after the RTX 40 or 50 series, they focus more on power efficiency
Oops, nope.
It's all about them watts and clocks babay.
:claps:
Maybe it's about time game developers started optimizing their games again
They do.
On consoles.
And if you're expecting some competitor to do better, you're in for a disappointment.
?
N31 is way less watts for a lot more perf.
At the moment the competition is essentially whoever is pushing power up trying to beat Nvidia.
?
N21 is less watts cuz wattage is evil.
 
That "single die at typical power draw" won't go against single die competition.
The OC means nothing. Power is decided by what is the expected performance a product must hit to be competitive.
If you're not okay with that then don't buy products which will consume that much power. Simple.
And if you're expecting some competitors to do better then you're in for a disappointment. Competition is essentially who is pushing the power up trying to beat Nvidia at the moment.
I think you doth protest too much.
 