Copium when games like Modern Warfare 2 exist. One person makes a 533mm^2 chip obsolete.
Yes, that's why I reckon 7900XTX is a $700 card. But AMD thinks it can sell this for $999.
Also, notice I said 3080 with (Ti) in brackets.
As far as future games and ray tracing are concerned, I think AMD's only escape route is UE5...
Not happy to hear AMD seemingly talk about proprietary RT code for RDNA 3.
Your CPU would not be fast enough for a 4090, it seems, and maybe not for a 4080 16GB either... I can't tell what resolution you use for games.
On the other hand, I hope that ray tracing in more games will make talk about CPU-limited graphics cards redundant. RTX 4090 should have no trouble demonstrating real value against RX 7900XTX.
I can't help thinking that RDNA 4 needs to arrive by autumn 2023, or else AMD can join Intel in the irrelevant graphics card tech business.
Add 10% more performance for the 4090, because TPU used the regular 5800X CPU, which makes the 4090 CPU-limited even at 4K. TPU also doesn't test with max ray tracing settings... I will do proper ray tracing comparisons soon.
If it's accurate to what we see in real benchmarks, then it's a pretty big mixed bag. Of course, it's still $600 less in price.
Software RT in UE5 currently requires a separate exe/Project File/Game export from the editor
Some simple calculated performance figures.
This is EXCELLENT performance for the price. Wow! AMD is killing it.
While I really love RT, I realize that for the vast majority of gamers, AMD is the far better choice compared to Nvidia. Most people don't care about RT.
Most games will be based on UE5 in the future, and as we know from the UE5 thread, it does have a very competent Software RT solution which still looks great in most instances. So theoretically, if devs were to use that, the 7900XTX would still perform very close to the 4090 even in a next generation game because no Ray accelerators / RT cores are in use then.
You are right about that. Still, even if you enable HW-RT for your shipped project, the fallback mode for cards that are not capable of HW-RT is still present automatically. And given there is a CVar that disables HW-RT, I imagine you could easily create an ingame setting that disables HW-RT in a game configured with HW-RT enabled.
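For what it's worth, here is a minimal sketch of what such an ingame toggle could look like in UE5 C++, assuming the CVar in question is r.Lumen.HardwareRayTracing (the exact name can differ between engine versions and renderer setups):

```cpp
#include "HAL/IConsoleManager.h"

// Hypothetical settings-menu hook: flip Lumen between hardware RT and the
// software fallback at runtime through the console-variable system. Assumes
// the relevant CVar is r.Lumen.HardwareRayTracing (0 = software fallback,
// 1 = hardware RT); treat the name as an assumption, not gospel.
void SetHardwareRayTracingEnabled(bool bEnabled)
{
    if (IConsoleVariable* CVar = IConsoleManager::Get().FindConsoleVariable(
            TEXT("r.Lumen.HardwareRayTracing")))
    {
        // ECVF_SetByGameSetting marks the write as a user-facing game setting.
        CVar->Set(bEnabled ? 1 : 0, ECVF_SetByGameSetting);
    }
}
```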
I am not sure UE5 games will realistically ship with both software AND hardware RT rather than just one or the other. I imagine that means games will ship with just one of the two.
The numbers the user made were in error, as two of the games were non-raytraced numbers.
It'll skullfuck it alright, that's by design.
Something seems off about RDNA3. I have the feeling they wanted to do at least one dual-GCD card to compete at the top but couldn't get it to work properly.
Man, hauling ~5.3TB/s off-die over 2.5D is an utterly nuts idea in retrospect (which is why they're only doing it once, really).
Now can someone normalize RT perf per clock per WGP/TPC for science's sake.
Please.
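In the meantime, a trivial sketch of that normalization: divide RT fps by (unit count x clock). The unit counts and boost clocks are the public specs (48 WGPs at ~2.5GHz for the 7900XTX, 128 SMs at 2.52GHz for the 4090); the fps values are just the estimated Cyberpunk figures from later in this thread, standing in as placeholders until real benches land.

```cpp
#include <cstdio>

// Back-of-the-envelope: RT fps per execution unit per GHz, to compare
// per-unit, per-clock RT throughput across architectures.
struct Gpu {
    const char* name;
    int units;        // WGPs (AMD) or SMs (Nvidia)
    double clockGHz;  // claimed boost clock
    double rtFps;     // placeholder RT fps, not a real benchmark
};

int main() {
    const Gpu gpus[] = {
        {"7900XTX (48 WGP)", 48, 2.50, 17.0},
        {"4090 (128 SM)",   128, 2.52, 40.0},
    };
    for (const Gpu& g : gpus) {
        std::printf("%s: %.4f fps per unit-GHz\n",
                    g.name, g.rtFps / (g.units * g.clockGHz));
    }
    return 0;
}
```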
Of course it does, it's 5.3TB/s aggregate over 2.5D.
I just want to see something iso CU/SM count.
Here are the updated numbers from the comparison. In traditional rasterization it's a beast, and at least based on these numbers it would be above a 3090Ti in most ray tracing applications. Obviously we need to know how it really does in non-marketing benchmarks. But depending on how the 4080 16GB and whatever the 4080 12GB becomes turn out, it could be a decent alternative.
Taking a look at the 40x0 series is interesting:
The 4090: 16384 CUDA / 2.52GHz clocks / 384-bit memory / 1018GB/s bandwidth
The 4080 16GB: 9728 CUDA / 2.51GHz clocks / 256-bit memory / 742GB/s bandwidth
The 4080 12GB: 7680 CUDA / 2.61GHz clocks / 192-bit memory / 557GB/s bandwidth
The pricing is $1600, $1200, and $900. The two AMD cards have a good chance of outperforming the 4080 in rasterization and losing a bit in ray tracing while coming in at $200/$300 less. And the 4080 12GB, which we all know is going to come back since it's already made and will just be renamed, has only half the CUDA cores of the 4090 and a little more than half the bandwidth.
I think it will be really interesting to see comparisons.
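Those bandwidth figures imply particular per-pin data rates, since bandwidth (GB/s) = data rate (Gbps) x bus width (bits) / 8. A quick sanity-check sketch; note the Gbps values below are back-solved from the numbers quoted above, not official specs (Nvidia's final 4090 spec is 21Gbps, i.e. 1008GB/s):

```cpp
#include <cstdio>

// Recompute the listed bandwidths from assumed per-pin data rates.
// The data rates are inferred from the figures above, not confirmed specs.
int main() {
    const struct { const char* name; double gbps; int busBits; } cards[] = {
        {"4090",      21.2, 384},  // 21.2 x 384 / 8 = 1017.6 ~ 1018 GB/s
        {"4080 16GB", 23.2, 256},  // 23.2 x 256 / 8 =  742.4 ~  742 GB/s
        {"4080 12GB", 23.2, 192},  // 23.2 x 192 / 8 =  556.8 ~  557 GB/s
    };
    for (const auto& c : cards)
        std::printf("%s: %.1f GB/s\n", c.name, c.gbps * c.busBits / 8.0);
    return 0;
}
```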
Lumen could improve even further on the quality of its software RT by implementing planar reflections for flat and highly specular surfaces ...
It does seem that way. All of the presenters seemed a bit deflated, even Lisa. And the constant prompting for applause didn't help.
355W for 61TF also seems a bit high. Maybe the chiplet interconnect needs a lot of power.
In a pure RT workload? Probably not pretty. We haven’t seen those benches yet though.
"AI engines" are just new instructionsMore importantly: AI engines and FSR3 just straight out. Lolwut.
best of all there's no acceleration structure to worry about either.

What about the huge perf dips in Cyberpunk 2077 when you simply stand in front of a mirror with a single quarter-res planar reflection?
Planar reflections also have the benefit of being higher quality (no noise and potentially use the highest LoD models) as well.

Gosh, noise comes from a physically correct BRDF with stochastic sampling. RT reflections obviously don't produce any noise for absolutely flat surfaces, but they do for rough ones. Planar reflections don't support stochastic sampling due to rasterization limitations and thus can't be physically correct at all.
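To put the noise point in textbook terms (standard Monte Carlo rendering math, nothing Lumen-specific), the reflected radiance integral and its N-sample estimator are:

```latex
L_o(\omega_o) = \int_{\Omega} f_r(\omega_i,\omega_o)\, L_i(\omega_i)\, \cos\theta_i \, d\omega_i
\;\approx\; \frac{1}{N} \sum_{k=1}^{N} \frac{f_r(\omega_k,\omega_o)\, L_i(\omega_k)\, \cos\theta_k}{p(\omega_k)}
```

For a perfect mirror, f_r is a delta lobe, so a single sample in the mirror direction evaluates the integral exactly and the estimator has zero variance: no noise. For a rough BRDF the estimator has nonzero variance at small N, which is exactly the noise that needs denoising, and a rasterized planar reflection can only ever supply the single mirror-direction term.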
Metro Exodus is fast enough even on RDNA2... it was designed for RDNA2 consoles, after all.
Anybody got a clue what the GCD/MCD connection is? At 5.7TB/s peak that's already 45W at 1pJ/bit.

I assume it is TSMC-based tech... but I have no idea based on those speeds. The M1 Ultra interconnect is 2.5TB/s, and I haven't seen any good details or deep-dives into their special sauce.
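For reference, the 45W figure above is just bandwidth (bits/s) times energy per bit; a sketch reproducing it, where the 1pJ/bit is the poster's assumption, not a confirmed number for AMD's fanout links:

```cpp
#include <cstdio>

// Interconnect power = bandwidth (bits/s) x energy per bit (J/bit).
int main() {
    const double bytesPerSec  = 5.7e12;  // 5.7 TB/s peak
    const double joulesPerBit = 1e-12;   // 1 pJ/bit (assumed)
    const double watts = bytesPerSec * 8.0 * joulesPerBit;
    std::printf("%.1f W\n", watts);      // prints 45.6 W
    return 0;
}
```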
Maybe AMD tested it at those low settings

Pretty sure the slide is named "4K ultra" something something. lol
Do the 6900XT LC and 6950XT have the same performance?

Good, so here are more comparisons:
Cyberpunk 2077, native 4K:
4090: 40fps
3090Ti: 23fps
6900XT LC: 11fps
7900XTX: 17fps (50% faster than 6900XT LC)
The 4090 is 2.3X faster than 7900XTX, the 3090Ti is 35% faster.
Hitman 3 native 4K:
4090: 43fps
3090Ti: 23fps
6900XT LC: 16fps
7900XTX: 26fps (85% faster than 6900XT LC)
The 4090 is 65% faster than the 7900XTX.
Dying Light 2 native 4K:
4090: 44fps
3090Ti: 24fps
6900XT LC: 11fps
7900XTX: 20fps (56% faster than 6900XT LC)
The 4090 is 2.2X faster than the 7900XTX, the 3090Ti is 20% faster.
Geforce RTX 4090 at the limit: Can it manage ray tracing in Ultra HD without upscaling? Plus 5K DLSS tests
Nvidia's new Geforce RTX 4090 is predestined for high resolutions like Ultra HD and beyond - does it handle ray tracing smoothly without upscaling? (www.pcgameshardware.de)
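For clarity, the relative-performance lines in these comparisons are plain fps quotients; a trivial sketch using the Cyberpunk figures above:

```cpp
#include <cstdio>

// Speedup = fps_a / fps_b, using the estimated native-4K RT numbers
// from this post (Cyberpunk 2077: 4090 = 40fps, 3090Ti = 23fps,
// 7900XTX = 17fps).
int main() {
    const double xtx = 17.0, gf4090 = 40.0, gf3090ti = 23.0;
    std::printf("4090 vs 7900XTX:   %.2fx\n", gf4090 / xtx);   // ~2.35x (rounded to 2.3X above)
    std::printf("3090Ti vs 7900XTX: +%.0f%%\n",
                (gf3090ti / xtx - 1.0) * 100.0);               // ~35%
    return 0;
}
```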