> Well, it looks like Navi 31 only needs to be 3x faster with ray tracing to be competitive with 4090...
They pushed into chiplets; I think the tech they are going for is cheaper silicon. If they don't take advantage of it, or can't, that's a little disappointing for me.

> Well, it looks like Navi 31 only needs to be 3x faster with ray tracing to be competitive with 4090...
I doubt that Nvidia expects N31 to be competitive with 4090.

> Well, it looks like Navi 31 only needs to be 3x faster with ray tracing to be competitive with 4090...
I've not looked at anything yet, so is that including DLSS3?

> Well, it seems we will have some answers soon
Before Nov 3?

> I've not looked at anything yet, so is that including DLSS3?
No, native; with DLSS they have no chance whatsoever, as I expect most demanding games to include DLSS3, FSR2.x & XeSS.

> Well, it looks like Navi 31 only needs to be 3x faster with ray tracing to be competitive with 4090...
We will have to see how the other chips in the product line stack up. I am still skeptical about how the 4080 runt performs.

> Before Nov 3?
Probably no launch, but some details.

> If they don't take advantage of it, or can't, that's a little disappointing for me.
Not this time; market conditions don't allow the ultra-halo part.

> I doubt that Nvidia expects N31 to be competitive with 4090.
Of course they do; their competitive analysis is nowhere near the naivete of Intel's.

> But it may end up being pretty competitive with 4080s - which aren't exactly cheap this time around.
Those are the N32 grazing fields.

> Not this time; market conditions don't allow the ultra-halo part.
He is talking about RT performance, not pure raster.

> He is talking about RT performance, not pure raster.
I wonder what they've done to improve RT performance aside from that. Hopefully they've added BVH traversal in HW? Although I expect pretty good gains in RT for RDNA3 thanks to the huge compute improvement.

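For anyone wondering what "BVH traversal in HW" would actually offload: here's a toy sketch of the traversal loop. On RDNA2 the box/primitive intersection tests are hardware-accelerated, but this outer loop (stack management, node scheduling) reportedly runs as shader code, and moving it into fixed-function logic is what's being hoped for. Spheres stand in for triangles to keep it short; none of this is any vendor's actual implementation.

```python
# Toy, self-contained BVH traversal. On RDNA2, the intersection tests below
# map to hardware instructions while this loop runs as shader code;
# "traversal in HW" means moving the loop itself into fixed-function logic.
# Simplified illustration only; assumes nonzero ray direction components.
from dataclasses import dataclass, field

@dataclass
class Sphere:                  # stand-in primitive (real BVHs hold triangles)
    center: tuple
    radius: float

@dataclass
class Node:
    bounds_min: tuple
    bounds_max: tuple
    children: list = field(default_factory=list)  # non-empty for inner nodes
    prims: list = field(default_factory=list)     # non-empty for leaves

def hit_box(origin, inv_dir, lo, hi):
    """Ray/AABB slab test -- a fixed-function unit on RDNA2/3."""
    tmin, tmax = 0.0, float("inf")
    for o, inv, l, h in zip(origin, inv_dir, lo, hi):
        t1, t2 = (l - o) * inv, (h - o) * inv
        tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
    return tmin <= tmax

def hit_sphere(origin, direction, s):
    """Ray/primitive test (unit-ish direction) -- also hardware-assisted."""
    oc = [o - c for o, c in zip(origin, s.center)]
    b = sum(o * d for o, d in zip(oc, direction))
    disc = b * b - (sum(o * o for o in oc) - s.radius ** 2)
    return -b - disc ** 0.5 if disc >= 0 else None

def traverse(root, origin, direction):
    """The shader-side loop that hardware traversal would replace."""
    inv_dir = tuple(1.0 / d for d in direction)
    best, stack = None, [root]        # explicit traversal stack
    while stack:
        node = stack.pop()
        if not hit_box(origin, inv_dir, node.bounds_min, node.bounds_max):
            continue                  # prune this subtree
        for p in node.prims:          # leaf: test primitives
            t = hit_sphere(origin, direction, p)
            if t is not None and t > 0 and (best is None or t < best):
                best = t
        stack.extend(node.children)   # inner: schedule children
    return best

leaf = Node((0, 0, 4), (2, 2, 6), prims=[Sphere((1.0, 1.0, 5.0), 1.0)])
root = Node((0, 0, 0), (2, 2, 6), children=[leaf])
print(traverse(root, (1.0, 1.0, 0.0), (0.001, 0.001, 1.0)))  # -> 4.0
```
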
> AMD could have a winner if they are able to keep power requirements low.
It's been a very long time since GPUs didn't benefit massively from hand-tweaked under-volting and -clocking. Well over 10 years.

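One footnote on the hand-tuning point: the voltage/frequency curve itself generally needs vendor tools, but the power limit has been scriptable for years. A minimal sketch using the nvidia-ml-py (pynvml) bindings; the 250 W target is purely illustrative, and setting the limit requires admin rights:

```python
# Minimal power-capping sketch via NVML (pip install nvidia-ml-py).
# 250 W is an illustrative target, clamped into the card's allowed range;
# the set call normally requires root/admin.
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetPowerManagementLimitConstraints,
    nvmlDeviceSetPowerManagementLimit,
)

nvmlInit()
try:
    handle = nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
    lo, hi = nvmlDeviceGetPowerManagementLimitConstraints(handle)  # milliwatts
    target = max(lo, min(hi, 250_000))      # clamp 250 W to the valid range
    nvmlDeviceSetPowerManagementLimit(handle, target)
    print(f"Power limit set to {target / 1000:.0f} W "
          f"(card allows {lo / 1000:.0f}-{hi / 1000:.0f} W)")
finally:
    nvmlShutdown()
```
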
> I wonder why Nvidia set such high power targets
Because some applications use that power in a way that's meaningful for "professionals" and "content creators", and NVidia can build a cooler that works nicely.

> AMD could have a winner if they are able to keep power requirements low. I wonder why Nvidia set such high power targets
I guess the better question is "why wouldn't they"?
[Attachment 7182: graph of performance scaling vs. power target]

> AMD could have a winner if they are able to keep power requirements low. I wonder why Nvidia set such high power targets
Performance wins benchmarks, and from the graph above, the difference between the 100% and 60% power targets is ~11%, or roughly the difference between the original 3080 10GB and the original 3090, to say nothing of the 3080 12GB and the 3080Ti wedged in between. You'd basically be giving up an entire product tier's worth of performance for free.
People who are concerned about power efficiency can always reduce the power target if they want, but the default settings are the ones used for benchmark comparisons and reviews.
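
To put rough numbers on the other side of that tradeoff: taking the graph's ~11% figure at face value, the perf-per-watt math works out as below. Quick back-of-the-envelope; the 450 W stock figure is my assumption for illustration, not something from the graph.

```python
# Perf-per-watt implied by the graph's numbers.
# stock_power_w is an assumed baseline, purely for illustration.
stock_power_w = 450
power_scale = 0.60   # 60% power target
perf_scale = 0.89    # ~11% performance loss, per the graph above

print(f"{stock_power_w * power_scale:.0f} W draw, "
      f"{perf_scale:.0%} of stock performance, "
      f"{perf_scale / power_scale:.2f}x stock perf-per-watt")
# -> 270 W draw, 89% of stock performance, 1.48x stock perf-per-watt
```

So the 60% target buys roughly 1.5x the efficiency at the cost of one product tier of performance, which is why it's an attractive knob for users but a losing default for benchmark charts.
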
What I would love to see is for AMD or NVidia to add a driver feature that writes a new power target into NVRAM in the vBIOS. Rather than making people fight with tools that get wiped out by an OS reinstall, etc., it would let everyone set a permanent but easily changeable power target with zero extra fuss.
It would even let both AMD and NVidia market their firebreathing top-end SKUs to people who don't *yet* have the PSUs to effectively run them.
"Only have a 650W power supply? Never fear! You can still buy a 7950xtx and with two clicks you can restrict it to only consuming 250W like your old RX6800 with no risk of damaging your PSU, and the instant you upgrade your PSU, two more clicks and you can unlock the rest of your performance!"