Next Generation Hardware Speculation with a Technical Spin [post E3 2019, pre GDC 2020] [XBSX, PS5]

TFLOPs is actually not a terrible way to ballpark overall performance. It's probably the best we have. Obviously there are a lot of other bottlenecks a dev can run into, and comparisons between different architectures don't work out, but if you're looking for a rough idea, it's not incredibly far off.
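For reference, the headline TFLOPS number is just peak FP32 throughput: shader lanes × 2 FLOPs per clock (one fused multiply-add) × clock speed. A minimal sketch using the GCN convention of 64 lanes per CU, checked against the known current-gen machines:

```python
def peak_fp32_tflops(compute_units: int, clock_ghz: float, lanes_per_cu: int = 64) -> float:
    """Peak FP32 TFLOPS: CUs * lanes * 2 FLOPs per clock (FMA) * clock in GHz."""
    return compute_units * lanes_per_cu * 2 * clock_ghz / 1000.0

print(peak_fp32_tflops(18, 0.800))   # PS4        -> ~1.84
print(peak_fp32_tflops(36, 0.911))   # PS4 Pro    -> ~4.20
print(peak_fp32_tflops(40, 1.172))   # Xbox One X -> ~6.00
```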
 
And bits, of course.
That mostly predates PS1/2. It was bits and ...hertz primarily when comparisons started. Even that was stupid as far back as the early '80s, when CPUs like the 68000 had aspects that were 16, 24, and 32 bits. The MHz comparison broke completely when AMD could outperform Intel's x86s despite Intel's being clocked way higher. MIPS sometimes reared its ugly head, especially when Acorn nerds tried to boast how fast their ARM-powered computers were. 64 bits died out with the N64, which Sony managed to just ignore, I think. Polygons were the initial measure of 3D consoles and provided PS2 with some really big numbers, not at all indicative of on-screen workloads. I think it was the PS3 gen where GFLOPS took off, but we also had the one and only 'bandwidth wars', with MS trying a 'total internal bandwidth' manoeuvre which just caused much crying. This gen was all about the TFs, because the systems were so similar that TFs provided the only basic difference. The rest of the systems seemed fairly balanced around those TFs, so I think they largely represented relative power up to the XB1X.

Of course, if the next-gen machines are identical architectures, then TFs will be a good comparison of the two machines, as will clock speeds.
 
TFLOPs is actually not a terrible way to ballpark overall performance. It's probably the best we have. Obviously there are a lot of other bottlenecks a dev can run into, and comparisons between different architectures don't work out, but if you're looking for a rough idea, it's not incredibly far off.
Yes, but when comparing two reasonably balanced systems that set out to achieve the same thing, any parameter is equally useful, and you could just as well use Bandwidth, for instance. (In the past, you could substitute the vast majority of GPU benchmark runs with a fill rate table.)
It's enough that the systems differ only in trying to achieve equivalent results via different methods for the underlying assumption to fall apart. (Compare, for instance, doing parts of lighting via ray tracing versus other approximations. You can end up with similar results at vastly different cost. How would you compare them? Frames per second? FLOPS? Netflix's perceptual evaluation model? Toss in differences in storage, latency due to the software stack, et cetera, and it quickly becomes clear that the usefulness of FLOPS is confined to forum wars.)
The PS5 and XBSX may be similar enough in terms of everything that you can pick any of the variables that go into the balancing act to get a ballpark estimate of capabilities, but remember that as soon as that ballpark estimate has a variability of similar magnitude to the difference in the one parameter you chose, FLOPS, the predictive value approaches zero.

So I agree that FLOPS probably gives a not-terrible ballpark estimate. I'd also contend, however, that it is largely useless.
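A toy illustration of that last point, with all numbers invented purely for the example: give machine B a ~15% paper FLOPS advantage, then add ±20% per-title variability from all the other bottlenecks, and B still ends up slower in roughly a fifth of the trials.

```python
import random

random.seed(0)

flops_a, flops_b = 10.0, 11.5   # hypothetical peak TFLOPS, ~15% apart
spread = 0.20                   # +/-20% per-title variability from other bottlenecks

trials = 100_000
b_wins = 0
for _ in range(trials):
    # effective throughput = paper FLOPS scaled by a random bottleneck factor
    eff_a = flops_a * random.uniform(1 - spread, 1 + spread)
    eff_b = flops_b * random.uniform(1 - spread, 1 + spread)
    b_wins += eff_b > eff_a

print(f"B is faster in {100 * b_wins / trials:.0f}% of trials")  # ~79%, not 100%
```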
 
That mostly predates PS1/2. It was bits and ...hertz primarily when comparisons started. Even that was stupid as far back as the early '80s, when CPUs like the 68000 had aspects that were 16, 24, and 32 bits. The MHz comparison broke completely when AMD could outperform Intel's x86s despite Intel's being clocked way higher. MIPS sometimes reared its ugly head, especially when Acorn nerds tried to boast how fast their ARM-powered computers were. 64 bits died out with the N64, which Sony managed to just ignore, I think. Polygons were the initial measure of 3D consoles and provided PS2 with some really big numbers, not at all indicative of on-screen workloads. I think it was the PS3 gen where GFLOPS took off, but we also had the one and only 'bandwidth wars', with MS trying a 'total internal bandwidth' manoeuvre which just caused much crying. This gen was all about the TFs, because the systems were so similar that TFs provided the only basic difference. The rest of the systems seemed fairly balanced around those TFs, so I think they largely represented relative power up to the XB1X.

Of course, if the next-gen machines are identical architectures, then TFs will be a good comparison of the two machines, as will clock speeds.

People forget memory bandwidth; it is the reason for the bigger-than-TFLOPS difference in multiple titles between PS4 Pro and Xbox One X, where the Xbox One X runs at twice the resolution.

Similarly, in some titles the PS4 Pro framerate (1440p or checkerboard rendering) is inferior to the PS4's (1080p), because the PS4 has more bandwidth per pixel at 1080p than the PS4 Pro has at 1440p.
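A quick back-of-the-envelope version of that argument, using the official bandwidth figures (this treats bandwidth per output pixel as a crude proxy and ignores caches, compression and everything else):

```python
def gb_per_megapixel(bandwidth_gbps: float, width: int, height: int) -> float:
    """Memory bandwidth available per million output pixels."""
    return bandwidth_gbps / (width * height / 1e6)

print(gb_per_megapixel(176.0, 1920, 1080))   # PS4 at 1080p     -> ~85 GB/s per Mpixel
print(gb_per_megapixel(218.0, 2560, 1440))   # PS4 Pro at 1440p -> ~59 GB/s per Mpixel
```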
 
People forget memory bandwidth; it is the reason for the bigger-than-TFLOPS difference in multiple titles between PS4 Pro and Xbox One X, where the Xbox One X runs at twice the resolution.

Similarly, in some titles the PS4 Pro framerate (1440p or checkerboard rendering) is inferior to the PS4's (1080p), because the PS4 has more bandwidth per pixel at 1080p than the PS4 Pro has at 1440p.
This is another interesting point. I did some calculations, and BW per TF for the PS4/Pro/XSX systems versus their PC GPU counterparts was always ~25% higher.

Something to keep in mind, as we can also get a ballpark based on total system BW.
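For what it's worth, the bandwidth-per-TFLOP side of that is easy to recompute from the official console figures (which PC 'counterpart' GPU you compare against is a judgment call, so that half is left to the reader):

```python
console_specs = {
    # name: (peak FP32 TFLOPS, memory bandwidth in GB/s)
    "PS4":        (1.84, 176.0),
    "PS4 Pro":    (4.20, 218.0),
    "Xbox One X": (6.00, 326.0),
}

for name, (tflops, bandwidth) in console_specs.items():
    print(f"{name}: {bandwidth / tflops:.0f} GB/s per TFLOP")
```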
 
People forget memory bandwidth; it is the reason for the bigger-than-TFLOPS difference in multiple titles between PS4 Pro and Xbox One X, where the Xbox One X runs at twice the resolution.

Similarly, in some titles the PS4 Pro framerate (1440p or checkerboard rendering) is inferior to the PS4's (1080p), because the PS4 has more bandwidth per pixel at 1080p than the PS4 Pro has at 1440p.

It's probably worth keeping in mind that performance scaling can be non-linear for a given pixel:texel density, and that Scorpio also featured a larger GPU L2 cache, i.e. less cache thrashing or need to hit main memory if the title isn't sampling from higher-than-base-resolution textures for a given screen tile.

Certainly, the ROP delta compression may not have been enough for certain scenes.
 
but remember that as soon as that ballpark estimate has a variability of similar magnitude to the difference in the one parameter you chose, FLOPS, the predictive value approaches zero.
Yeah, if we assume both are using different RT methods, then direct TF comparisons in RT workloads will become meaningless.
 
https://community.amd.com/community/gaming/blog/2019/06/09/amd-powers-microsoft-project-scarlett

Last year AMD already announced the "next-generation Radeon™ RDNA gaming architecture". Some people argue that it doesn't mean "next generation of RDNA", but since AMD's words are straightforward, we shouldn't be so surprised to see the RDNA 2 announcement.

Besides, AMD already confirmed early on that Xbox is using their RT solution, but they never mentioned PS5. This is why there is so much speculation about a different RT solution in PS5.
https://www.techquila.co.in/amd-raytracing-navi-radeon-rx-gpu/
Mithun Chandrashekhar said:
AMD as a company…strongly believes in the value and capability of raytracing. RDNA 2, the next-gen, will support raytracing. Both the next-gen Xbox and PlayStation will support hardware raytracing with Radeon natively. We will be sure to have the content that gamers can actually use to run on those GPUs

We believe in our raytracing, and we will have it when the time is right.

Don't want to read too much into this quote, but I thought that this was proof not only that the PS5 RT solution is not a separate chip, but also that AMD considers it natively integrated.
 
https://www.techquila.co.in/amd-raytracing-navi-radeon-rx-gpu/


Don't want to read too much into this quote, but I thought that this was proof not only that the PS5 RT solution is not a separate chip, but also that AMD considers it natively integrated.
The quote is likely fake; how could a quote like that come from AMD to a couple of small sites but be missed by everyone else?
It would also go against AMD's strict policy of not saying anything about customer chips which the customer hasn't confirmed themselves first. And that policy can lead to things like confirming something on XB and not on PS, simply because MS had earlier confirmed some things and Sony hadn't.
 
Yeah, if we assume both are using different RT methods, then direct TF comparisons in RT workloads will become meaningless.
Yes. And the principle is rather wide-ranging.
If there is a meaningful difference in capabilities between the consoles, developers will try to achieve "the same" result but with different methods, meaning that they strive for similar performance with slightly different visuals. So the comparison moves from the numerical domain (FLOPS, fps) to the perceptual one. Thus comparing very different platforms using a single figure of merit is pointless, because they will not be doing the same jobs. So you can only really compare extremely similar systems that way, but then they are likely to be so close as to be effectively (visually) equivalent, since if they weren't, the choice of method would change. (And the further we push up the visual fidelity ladder, the larger the computational differences needed not to fall into the "negligible" bracket.)

It can still be interesting to see how the manufacturers choose to balance their systems, obviously.
 
The quote is likely fake; how could a quote like that come from AMD to a couple of small sites but be missed by everyone else?
It would also go against AMD's strict policy of not saying anything about customer chips which the customer hasn't confirmed themselves first. And that policy can lead to things like confirming something on XB and not on PS, simply because MS had earlier confirmed some things and Sony hadn't.

Apparently, this video has him saying it.

 
Makes sense; Sony isn't going to have their own exotic ray tracing hardware solution after all.

Especially if you view those comments in the context of the discussion.

Statement: "We believe strongly in raytracing"
Supporting statement: "Both next-gen Xbox and next-gen PlayStation support raytracing natively."

PlayStation rolling their own custom solution for raytracing doesn't illustrate AMD's own support for raytracing, so it wouldn't make sense to mention it if that were the case. There could be customizations, sure, but the base tech is almost certainly AMD's.
 
Especially if you view those comments in the context of the discussion.

Statement: "We believe strongly in raytracing"
Supporting statement: "Both next-gen Xbox and next-gen PlayStation support raytracing natively."

PlayStation rolling their own custom solution for raytracing doesn't illustrate AMD's own support for raytracing, so it wouldn't make sense to mention it if that were the case. There could be customizations, sure, but the base tech is almost certainly AMD's.

It wouldn't be that great either if Sony had their own RT tech, since almost no games would use it on PS5, just the exclusives, which aren't many.
 
It wouldn't be that great either if Sony had their own RT tech, since almost no games would use it on PS5, just the exclusives, which aren't many.
I think it is a stretch to assume custom Sony RT would automatically mean that only Sony's first party would make use of it. Is there no tech shared between Nvidia and AMD that is implemented differently for each of them?
 