Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

Status
Not open for further replies.
It would appear this question was asked and tested!

https://www.eurogamer.net/articles/...3-nvidia-geforce-rtx-2080-super-review?page=2

The biggest factors for pure RT performance would appear to be cores + memory bandwidth.

The biggest factor for Nvidia RT performance is cores + memory bandwidth, but that's not what we are talking about here; we are talking about the AMD implementation, and we have no such information on that.

Yes, except for their memory bandwidth needs.

I suspected as much, the difference shouldn't be anywhere near 44%.
 
The biggest factor for Nvidia RT performance is cores + memory bandwidth, but that's not what we are talking about here; we are talking about the AMD implementation, and we have no such information on that.
You need bandwidth to access the structure; RT BVH structures can be as large as 1.5 GB IIRC, so they aren't being held in cache.
The only thing remaining is to have hardware traverse the BVH structure for intersection.
If you look at the Quake 2 RTX benchmarks you'll see a benchmark that is specific to RT performance.
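For readers unfamiliar with why BVH traversal stresses memory rather than ALUs, here is a minimal, hypothetical sketch (Python for clarity; real implementations are fixed-function hardware or GPU code). Every step of the loop is a dependent read of a node that was just fetched, so the access pattern is pointer chasing that caches handle poorly when the structure runs to hundreds of MB:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Node:
    lo: Tuple[float, float, float]            # AABB min corner
    hi: Tuple[float, float, float]            # AABB max corner
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    tri_ids: Optional[List[int]] = None       # leaf payload: candidate triangles

def ray_hits_box(origin, inv_dir, lo, hi) -> bool:
    """Slab test: does the ray (origin, precomputed 1/direction) hit the AABB?"""
    tmin, tmax = 0.0, float("inf")
    for o, d, l, h in zip(origin, inv_dir, lo, hi):
        t1, t2 = (l - o) * d, (h - o) * d
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

def traverse(root: Node, origin, inv_dir) -> List[int]:
    """Collect candidate triangle ids. Each pop is a dependent node fetch,
    which is why traversal is bandwidth/latency-bound, not compute-bound."""
    hits, stack = [], [root]
    while stack:
        node = stack.pop()
        if not ray_hits_box(origin, inv_dir, node.lo, node.hi):
            continue
        if node.tri_ids is not None:           # leaf: gather candidates
            hits.extend(node.tri_ids)
        else:                                  # inner node: chase two pointers
            stack += [node.left, node.right]
    return hits
```

The hardware's job is exactly this loop (plus triangle tests); the memory traffic it generates is what the bandwidth discussion above is about.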
 
You need bandwidth to access the structure; RT BVH structures can be as large as 1.5 GB IIRC, so they aren't being held in cache.
The only thing remaining is to have hardware traverse the BVH structure for intersection.
If you look at the Quake 2 RTX benchmarks you'll see a benchmark that is specific to RT performance.

I get that, but what I'm saying is that the peak performance of the RT hardware is not linear in just the number of CUs, but in the number of CUs and the core clock, just like TFLOPS. Which honestly makes a lot more sense if you think about it: if it scales linearly with both CUs and core clock, AMD won't be left in a situation where they have too much or too little ray tracing performance on their desktop cards.
 
A 400 MHz advantage on fewer CUs means nothing. The best-case scenario for the PS5 is that it's a 10.3 TFLOPS machine, which it is not, versus a stable 12.16 TFLOPS machine. There are literally no benefits to the PS5's GPU versus the Xbox's; these are objective facts. And that's not to mention RT, where the Xbox blows the PS5 out of the water, meaning more workload for the PS5 GPU to do GI. This is a moot conversation, and I'm not even going to discuss the lower RAM bandwidth.
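The TFLOPS figures quoted here follow directly from CU count times clock, which is exactly the scaling the previous post describes. A quick sanity check, assuming the widely reported console clocks (2.23 GHz variable peak for PS5, 1.825 GHz fixed for XSX) and RDNA's 64 shaders per CU at 2 FP32 ops per clock:

```python
# FP32 TFLOPS = CUs * 64 shaders/CU * 2 ops/clock * clock (GHz) / 1000
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0

ps5 = tflops(36, 2.23)     # ~10.28 TFLOPS at peak variable clock
xsx = tflops(52, 1.825)    # ~12.15 TFLOPS at fixed clock
```

So the ~10.3 vs ~12.16 figures in the post are just these two products, rounded.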

Having high(er) clocks gives you some advantages, but in most situations probably not. A 2080 Ti is MUCH more performant in everything you throw at it than a 2080, which is clocked quite a bit higher.
Maybe having both: many CUs and a high clock.
1800+ MHz for a console is already a high clock; I didn't think that would happen a year ago. But here we are at 2.3 GHz, things gonna break the sound barrier :p
 
Please pardon this public service announcement.

Reply bans do exist.
A reply ban has been issued for 2 weeks to TeamGhobad for this thread.
They can be issued to others, if required.
Make sure your posts are honest and conducive to positive, open discussions.
 
So it begins. The great sauce battle of our times.

No, like he said, they can probably compress textures by a factor of two; and read another comment in the Twitter thread: 5.5 is superior to 4.8. Even if the PS5 had no texture compression advantage at all, it can load more than the Xbox, and you have other data taking up tons of space: geometry (there will be an explosion of geometry assets next gen), animation, and audio.
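A quick sketch of the arithmetic behind this claim, using the publicly quoted figures (5.5 GB/s raw for the PS5 SSD; 2.4 GB/s raw and 4.8 GB/s compressed via BCPack for the XSX):

```python
PS5_RAW = 5.5          # GB/s, raw, as quoted by Sony
XSX_RAW = 2.4          # GB/s, raw, as quoted by Microsoft
XSX_COMPRESSED = 4.8   # GB/s, Microsoft's quoted compressed figure

# "compress textures by a factor of two": 2.4 * 2 == 4.8
assert abs(XSX_RAW * 2 - XSX_COMPRESSED) < 1e-9

# The post's point: even ignoring compression entirely, the PS5's raw
# rate already exceeds the XSX's *compressed* rate.
assert PS5_RAW > XSX_COMPRESSED
```

Compression on top of the PS5's raw rate (Sony quotes 8-9 GB/s typical with Kraken) only widens the gap; how much that matters in practice is a separate question.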

In the end, the PS5 SSD is faster. They could not use this solution because Binomial's Crunch was not a very common solution in 2016, and they needed the SSD finished by end of 2016 so that all games from 2017 onward could be tailored around the SSD technology, like HZD 2 for example. This is not magic; it takes time to tailor a game engine around the SSD.

Fabian Giesen spoke about a very tight schedule to deliver the hardware encoder, probably in 2016, because, listen, Mark Cerny said again that the research and development for the SSD was done in 2015 and 2016*, and he said that when he visited developers, Kraken was a new product but was beginning to be very popular.



* All the patents were filed in 2015 and 2016 because the technology needed to be ready for use by teams starting new projects from 2017.
 
But if you'll recall, Nvidia also separates its RT performance from its compute performance.

The 2080 Ti clocks much lower than the 2080 Super, but has a significant performance advantage over it in ray tracing.

2080 Regular:
1515 MHz base / 1710 MHz boost
2944 CUDA cores

2080 Super:
1650 MHz base / 1815 MHz boost
3072 CUDA cores

2080 Ti:
1350 MHz base / 1545 MHz boost
4352 CUDA cores
In practice/testing they're both 2000-2100 MHz cards when gaming. Even my 2070S hits that out of the box.
 
I get that, but what I'm saying is that the peak performance of the RT hardware is not linear in just the number of CUs, but in the number of CUs and the core clock, just like TFLOPS. Which honestly makes a lot more sense if you think about it: if it scales linearly with both CUs and core clock, AMD won't be left in a situation where they have too much or too little ray tracing performance on their desktop cards.

The EG article showed that clocks _do_ matter on Nvidia hardware. He made a comparison of a 2080 Super vs a 2080 Super OC.
The RT cores cannot evade clock speed differences; the whole card has to run at the same speed.
A 100 MHz overclock provided approximately a 10% improvement, which now makes the clock speed difference 400 MHz at the base level.
But 100 MHz is a bigger deal at their clocks, because they are nowhere close to 1825 and 2230 respectively.

Anyway, the difference in CUDA cores is... 41%

2080 Super OC (1750-1950) vs 2080 Ti (1350-1550) in that benchmark is a pretty apt stand-in for the performance difference we should see here (though we are comparing a 20% clock difference on consoles to a 30% clock difference there).

If we compare a 2080 Super vs a 2080 Ti, we get a 36% differential in RT performance with a 22% difference in clocks and over a 44% CUDA core difference.


We are likely looking at at least a 20% differential, up to 25% if the ray tracing hardware outpaces bandwidth such that memory bandwidth is the bottleneck, and up to a 36% differential if we don't somehow hit a wall earlier.
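The percentages above can be rechecked from the card specs quoted earlier in the thread; as a rough proxy, assume throughput scales with cores (or CUs) times clock. Note the consoles use AMD's RT implementation, so this proxy may not hold for RT specifically:

```python
def pct_more(a: float, b: float) -> float:
    """How much larger a is than b, in percent."""
    return (a / b - 1) * 100

# CUDA core differential, 2080 Ti (4352) vs 2080 Super (3072)
cores_diff = pct_more(4352, 3072)                # ~41.7%

# Console proxy: CUs * clock (GHz), assuming 52 @ 1.825 vs 36 @ 2.23
console_diff = pct_more(52 * 1.825, 36 * 2.23)   # ~18.2% for XSX over PS5
```

The CUs-times-clock proxy lands just under the 20% floor given above; observed RT gaps can be larger if bandwidth or fixed-function throughput dominates, as the 2080 Super vs 2080 Ti numbers suggest.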
 
So, for people who are completely obsessed with flops, is a console really the answer? PC + Game Pass will demolish the Xbox and play most of the same games. Something like a 20-25% difference is practically nothing; it all comes down to how much developers optimize the game for a specific platform.
 
So, for people who are completely obsessed with flops, is a console really the answer? PC + Game Pass will demolish the Xbox and play most of the same games. Something like a 20-25% difference is practically nothing; it all comes down to how much developers optimize the game for a specific platform.
You're gonna have to pay quite a lot more for a PC of equivalent performance, let alone better.
 
So, for people who are completely obsessed with flops, is a console really the answer? PC + Game Pass will demolish the Xbox and play most of the same games. Something like a 20-25% difference is practically nothing; it all comes down to how much developers optimize the game for a specific platform.

I'm a PC gamer, but I can see why people would get an XSX instead: cheaper to buy, a compact box, and ease of use. The SSD tech is probably faster than PC too, at least in the beginning.
25% might seem small, but some buy a 2080 Ti over a 2080 Super, an even smaller difference in percentages.
There are 1001 reasons why people buy one thing over another.
And even if that high-end PC demolishes the Xbox, games are going to be optimized for the Xbox first anyway.
 
You're gonna have to pay quite a lot more for a PC of equivalent performance, let alone better.

Sure, but then why argue that flops are the deciding factor in what to buy? I just think it's a pretty weak argument to base the decision on flops when other factors could be equally or even more important.

Something like Game Pass is a total game changer. Sony exclusives can be a game changer. Even the silly SSD in the PS5 can be a game changer if Rockstar/FIFA/Madden happens to take advantage of it: a few more pixels, or less pop-in, better textures, better/more varied models. Even something like FIFA could benefit from a faster SSD when zooming into players etc., as it would be very feasible to swap in super high-res assets. So which is better then: more pixels, or less pop-in?
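A hypothetical back-of-the-envelope check for the zoom scenario (all numbers illustrative, not from any real game): the question is just whether the asset loads before the camera finishes zooming.

```python
def can_swap(asset_mb: float, bandwidth_gb_s: float, window_s: float) -> bool:
    """True if an asset of asset_mb MB loads within window_s seconds
    at bandwidth_gb_s GB/s (using 1 GB/s = 1000 MB/s)."""
    return asset_mb / (bandwidth_gb_s * 1000) <= window_s

# A hypothetical 200 MB hi-res player model and a 0.5 s camera zoom:
can_swap(200, 5.5, 0.5)    # PS5-class raw SSD: ~0.036 s, easily in time
can_swap(200, 0.1, 0.5)    # HDD-class ~100 MB/s: 2 s, far too slow
```

On that rough math, any current-gen SSD makes the swap trivial, while a last-gen HDD misses the window by 4x, which is the kind of design change the post is pointing at.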


For me the solution is a PS5 this year, then 1-2 years later upgrading my PC to specs better than the consoles. My old PC needs upgrading anyway, but the consoles seem so awesome I'll delay the PC upgrade a bit. I still have an old 4-core CPU with an overpowered 1080 Ti paired with it. PCs seem to last such a long time nowadays that I have no problem investing a pretty big amount of money in a PC and keeping the same machine for 5-8 years, perhaps upgrading the GPU but otherwise keeping the machine the same. My current PC was bought in 2013; I expect my next PC to last even longer.
 
Is this still the case? Studios will have more flexibility in this regard, and don't some studios currently optimize PC first? (CD Projekt)
I'd question the base assumption to begin with. Why would developers optimize first for two consoles in a family whose predecessor was behind the competition in install base by a margin of over 2:1?
 
I'd question the base assumption to begin with. Why would developers optimize first for two consoles in a family whose predecessor was behind the competition in install base by a margin of over 2:1?

I meant for MS first-party games (whether to buy them on XSX or PC), not 3rd-party games.

Oh, and we don't know yet what the install bases will be next gen. That was not my point.
 