PS5 has a chance to mitigate some of the BW disadvantages with their cache scrubbers.
Yes, except for their memory bandwidth needs.
So it should be linear to the GPU core clock and CUs then, no?
This question was asked and tested it would appear!
https://www.eurogamer.net/articles/...3-nvidia-geforce-rtx-2080-super-review?page=2
The biggest factors for pure RT performance would appear to be
cores + memory bandwidth
Yes, except for their memory bandwidth needs.
You need bandwidth to access the structure; RT BVH structures are as large as 1.5 GB IIRC, so they aren't being held in cache. The only thing remaining is to have hardware traverse the BVH structure for intersection. If you look at the Quake 2 RTX benchmarks you'll see a benchmark that is specific to RT performance.
You need bandwidth to access the structure, RT BVH structures are as large as 1.5 GB IIRC. So they aren't being held in cache.
The biggest factor for Nvidia RT performance is cores + memory bandwidth, but that's not what we are talking about here; we are talking about the AMD implementation, and we have no such information on that.
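The traversal being described — hardware walking a BVH node by node looking for an intersection — can be sketched like this. A toy illustration with hypothetical structures; real BVHs pack nodes into flat GPU buffers, but the memory-access pattern is the point: every node a ray visits is another fetch, which is why a multi-gigabyte BVH makes RT bandwidth-hungry.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Node:
    lo: Tuple[float, float, float]       # AABB min corner
    hi: Tuple[float, float, float]       # AABB max corner
    left: Optional["Node"] = None
    right: Optional["Node"] = None       # leaves have no children

def hits(node, origin, direction):
    """Slab test: does the ray hit this node's AABB?"""
    tmin, tmax = 0.0, float("inf")
    for axis in range(3):
        if direction[axis] == 0.0:
            # Ray parallel to this slab: hit only if origin lies inside it.
            if not (node.lo[axis] <= origin[axis] <= node.hi[axis]):
                return False
            continue
        inv = 1.0 / direction[axis]
        t1 = (node.lo[axis] - origin[axis]) * inv
        t2 = (node.hi[axis] - origin[axis]) * inv
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

def traverse(root, origin, direction):
    """Walk the BVH with an explicit stack; count node fetches."""
    stack, fetches = [root], 0
    while stack:
        node = stack.pop()
        fetches += 1                     # every visited node is one memory read
        if hits(node, origin, direction) and node.left is not None:
            stack.append(node.left)
            stack.append(node.right)
    return fetches

# Two leaf boxes under one root; a ray aimed at the left box.
leaf_a = Node((0, 0, 0), (1, 1, 1))
leaf_b = Node((2, 0, 0), (3, 1, 1))
root = Node((0, 0, 0), (3, 1, 1), leaf_a, leaf_b)
print(traverse(root, (0.5, 0.5, -1.0), (0.0, 0.0, 1.0)))  # 3 nodes fetched
```

Scale that up to millions of rays per frame bouncing through a tree too big for cache and the bandwidth argument follows.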
The tsunami of 'other forum' users on here during new console launches is always so fun. /s
Sorry. That's why I always try to keep my mouth shut and, if that fails, only post in "non-technical" marked threads.
A 400 MHz advantage on fewer CUs means nothing. The best case scenario for the PS5 is that it's a 10.3 TFLOPS machine (which it is not) vs a stable 12.16 TFLOPS machine. There are literally no benefits to the PS5's GPU vs the Xbox's; these are objective facts. And that's not to mention RT, which the Xbox blows the PS5 out of the water with, meaning more workload for the PS5 GPU to do GI. This is a moot conversation, and I'm not even going to discuss the lower bandwidth for the RAM.
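For reference, the TFLOPS figures being thrown around come from simple arithmetic: CUs × 64 shaders per CU × 2 FP32 ops per clock × clock speed. A quick sketch using the widely reported console specs — 36 CUs at a 2.23 GHz boost cap for PS5, 52 CUs at a fixed 1.825 GHz for Series X; treat those numbers as assumptions, not part of this thread:

```python
def tflops(cus, ghz, shaders_per_cu=64, ops_per_clock=2):
    """Peak FP32 TFLOPS = CUs * shaders/CU * ops/clock * clock in GHz."""
    return cus * shaders_per_cu * ops_per_clock * ghz / 1000

# Widely reported specs; the PS5 clock is a variable boost cap,
# which is why its peak figure comes with a "which it is not" caveat.
print(f"PS5:      {tflops(36, 2.23):.2f} TFLOPS")   # ~10.28
print(f"Series X: {tflops(52, 1.825):.2f} TFLOPS")  # ~12.15
```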
So it begins. The great sauce battle of our times.
In practice/testing they're both 2000-2100 MHz cards when gaming. Even my 2070S hits that out of the box.
But if you'll recall, Nvidia also separates its RT performance from its compute performance. The 2080 Ti clocks much lower than the 2080 Super, but has a significant performance advantage over it in ray tracing.
2080 Regular:
1515 MHz base / 1710 MHz boost
2944 CUDA cores
2080 Super:
1650 MHz base / 1815 MHz boost
3072 CUDA cores
2080 Ti:
1350 MHz base / 1545 MHz boost
4352 CUDA cores
I get that, but what I'm saying is that the peak performance of the RT hardware is not linear in just the number of CUs, but in the number of CUs and the core clock, just like TFLOPS. Which honestly makes a lot more sense if you think about it: it means AMD won't be left in a situation where they have too much or too little ray-tracing performance on their desktop cards, if it scales linearly with both CUs and core clock.
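The cores-times-clock scaling argument can be checked against the spec list above. A rough sketch — core counts and the higher (boost) figure of each clock pair come from that list, and "throughput" here is just peak FP32 arithmetic, a proxy rather than a measured RT benchmark:

```python
# Rough proxy: relative FP32 throughput = cores * 2 ops/clock * clock.
# Numbers from the spec list above; an illustration, not a benchmark.
cards = {
    "2080":       (2944, 1710),  # CUDA cores, boost MHz
    "2080 Super": (3072, 1815),
    "2080 Ti":    (4352, 1545),
}

gflops = {name: cores * 2 * mhz / 1000 for name, (cores, mhz) in cards.items()}
for name, gf in gflops.items():
    print(f"{name}: {gf:.0f} GFLOPS")
```

The Ti comes out well ahead despite having the lowest clock of the three: the core count dominates, which matches its ray-tracing lead over the higher-clocked Super.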
You're gonna have to pay quite a lot more for a PC of equivalent performance, let alone better.
So for people who are completely obsessed with flops, is a console really the answer? PC + Game Pass will demolish the Xbox and play most of the same games. Something like a 20-25% difference is practically nothing; it all comes down to how much developers optimize the game for a specific platform.
Games are going to be optimized for the Xbox first anyway.
Is this still the case? Studios will have more flexibility in this regard, and don't some studios currently optimize PC first? (CD Projekt)
You're gonna have to pay quite a lot more for a PC of equivalent performance, let alone better.
I think nearly 3x more.
Is this still the case? Studios will have more flexibility in this regard, and don't some studios currently optimize PC first? (CD Projekt)
I'd question the base assumption to begin with. Why would developers optimize for potentially two consoles in a family whose predecessor was behind the competition in install base by a margin of over 2:1?