That mostly predates PS1/2. It was bits and ...hertz primarily when comparisons started. Even that was stupid as early as the early '80s, when CPUs like the 68000 had aspects that were 16, 24, and 32 bits. The MHz comparison broke completely when AMD could outperform Intel's x86s despite Intel's being clocked way higher. MIPS sometimes reared its ugly head, especially when Acorn nerds tried to boast how fast their ARM-powered computers were. 64 bits died out with the N64, where Sony managed to just ignore it, I think. Polygons were the initial measure of 3D consoles and provided PS2 with some really big numbers, not at all indicative of on-screen workloads. I think it was the PS3 gen where Gflops took off, but we also had the one and only 'bandwidth wars', with MS trying a 'total internal bandwidth' manoeuvre which just drew much outcry. This gen was all about the TFs, because the systems were so similar that it provided the only basic difference. The rest of the systems seemed fairly balanced for those TFs, so I think they largely represented relative power up to the XB1X.

And bits, of course.
TFLOPs is actually not a terrible way to ballpark overall performance. It's probably the best we have. Obviously there are a lot of other bottlenecks a dev can run into, and comparisons between different architectures don't work out, but if you're looking for a rough idea, it's not incredibly far off.

Yes, but when comparing two reasonably balanced systems that set out to achieve the same thing, any parameter is equally useful, and you could just as well use bandwidth, for instance. (In the past, you could substitute the vast majority of GPU benchmark runs with a fill rate table.)
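For anyone wanting the arithmetic behind the headline number: peak FP32 TFLOPs is just CUs × shaders per CU × 2 ops (FMA) × clock. A minimal sketch, using the commonly quoted PS4 and Xbox One X figures as inputs:

```python
# Peak FP32 rate = CUs * shaders-per-CU * 2 ops (FMA) * clock.
# The inputs below are the commonly quoted PS4 / Xbox One X specs.

def peak_tflops(cus: int, clock_ghz: float, shaders_per_cu: int = 64) -> float:
    """Peak single-precision TFLOPs for a GCN/RDNA-style GPU."""
    return cus * shaders_per_cu * 2 * clock_ghz / 1000.0

print(f"PS4:        {peak_tflops(18, 0.800):.2f} TF")  # ~1.84 TF
print(f"Xbox One X: {peak_tflops(40, 1.172):.2f} TF")  # ~6.00 TF
```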
Of course, if the next-gen machines are identical architectures then TFs will be a good comparison of the two machines, as will clock speeds.
People forget memory bandwidth; it is the reason for the bigger-than-TFLOPs difference in multiple titles between PS4 Pro and Xbox One X, where Xbox One X runs at twice the resolution. Same in some titles where the PS4 Pro framerate (1440p or checkerboard rendering) is inferior to the PS4's (1080p), because the PS4 has better bandwidth relative to 1080p than the PS4 Pro has relative to 1440p.

This is another interesting point. I made some calculations, and BW per TF for the PS4/Pro/XSX systems vs their PC GPU counterparts was always ~25% higher.
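To put rough numbers on the bandwidth point, a quick sketch using the commonly quoted console specs (the ~25% figure vs PC GPU counterparts is the poster's own calculation and isn't reproduced here):

```python
# Bandwidth per TFLOP and per target pixel, using commonly quoted specs.
# Note the PS4 at 1080p has more bandwidth per target pixel than the
# PS4 Pro at 1440p, which is the imbalance being described above.

specs = {
    # name: (peak TFLOPs, memory bandwidth GB/s, typical target resolution)
    "PS4":        (1.84, 176, 1920 * 1080),
    "PS4 Pro":    (4.20, 218, 2560 * 1440),
    "Xbox One X": (6.00, 326, 3840 * 2160),
}

for name, (tflops, bw_gbs, pixels) in specs.items():
    per_tf = bw_gbs / tflops                   # GB/s per TFLOP
    per_pixel = bw_gbs * 1e9 / pixels / 1e3    # KB/s per target pixel
    print(f"{name:11s} {per_tf:5.1f} GB/s per TF, {per_pixel:5.1f} KB/s per pixel")
```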
But remember that as soon as that ballpark estimate has a variability of similar magnitude to the difference in the one parameter you choose, FLOPS, the predictive value approaches zero.

Yeah, if we assume both are using different RT methods, then direct TF comparisons in RT workloads will become meaningless.
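As a toy illustration of that variability point (every number here is made up for the example, not a real console figure):

```python
# Toy Monte Carlo: when per-title efficiency swings are comparable to the
# paper TFLOPs gap, the paper gap stops guaranteeing a win in any given title.
import random

random.seed(1)
tf_a, tf_b = 10.0, 12.0               # hypothetical peak TFLOPs for two machines
trials, wins_for_b = 10_000, 0
for _ in range(trials):
    eff_a = random.uniform(0.7, 1.3)  # per-title efficiency, +/-30%
    eff_b = random.uniform(0.7, 1.3)
    if tf_b * eff_b > tf_a * eff_a:
        wins_for_b += 1
print(f"The 12 TF machine comes out ahead in {wins_for_b / trials:.0%} of simulated titles")
```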
What is the best guess for the number of ROPs and TMUs for Xbox Series X with 56 active CUs?

It's 64.
I thought the texture units were in the CU and therefore tied to CU count?

64 ROPs, 224 TMUs. There are 2 SEs in the Arden data, and 64 ROPs; 56 × 4 for the TMUs.
4 per active CU, so 224?
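The arithmetic behind those counts, as a small sketch (the 56 active CUs and the 2-SE/64-ROP split are the thread's assumptions, not confirmed specs, and the clock below is purely hypothetical):

```python
# RDNA-style layout: 4 texture units per CU, while ROPs hang off the shader
# engines rather than the CUs. 56 active CUs and 2 SEs with 64 ROPs total
# are the thread's assumptions here.

active_cus = 56
tmus = active_cus * 4    # 4 TMUs per CU -> 224
rops = 2 * 32            # 2 shader engines, assumed 32 ROPs each -> 64

clock_ghz = 1.8          # hypothetical clock, only to show the resulting fill rates
print(f"TMUs: {tmus}, ROPs: {rops}")
print(f"Texel rate: {tmus * clock_ghz:.0f} Gtexels/s, pixel rate: {rops * clock_ghz:.0f} Gpixels/s")
```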
https://www.techquila.co.in/amd-raytracing-navi-radeon-rx-gpu/
https://community.amd.com/community/gaming/blog/2019/06/09/amd-powers-microsoft-project-scarlett
Last year AMD already announced the "next-generation Radeon™ RDNA gaming architecture". Some people argue that doesn't mean "the next generation of RDNA", but since AMD's words are straightforward, we don't need to be so surprised to see the RDNA2 announcement.
Besides, AMD admitted early on that Xbox is using their RT solution, but they never mentioned PS5. That's why there is so much speculation about a different RT solution in PS5.
Mithun Chandrashekhar said: "AMD as a company…strongly believes in the value and capability of raytracing. RDNA 2, the next-gen, will support raytracing. Both the next-gen Xbox and PlayStation will support hardware raytracing with Radeon natively. We will be sure to have the content that gamers can actually use to run on those GPUs."
We believe in our raytracing, and we will have it when the time is right.
The quote is likely fake; how could a quote like that come from AMD to a couple of small sites but be missed by everyone else?
https://www.techquila.co.in/amd-raytracing-navi-radeon-rx-gpu/
Don't want to read too much into this quote, but I thought that this was not only proof that the PS5 RT solution is not a separate chip, but also that AMD considers it natively integrated.
Yeah, if we assume both are using different RT methods, then direct TF comparisons in RT workloads will become meaningless.

Yes. And the principle is rather wide-ranging.
Agreed the quote is suspect: it's also against AMD's strict policy not to say anything about customer chips which the customer hasn't announced themselves first. And that policy can lead to things like confirming something for the Xbox but not for the PlayStation, simply because MS had already confirmed some things and Sony hadn't.
Makes sense; Sony isn't going to have their own exotic ray tracing hardware solution after all.
Especially if you view those comments in the context of the discussion.
Statement:"We believe strongly in raytracing"
Supporting Statement:"Both next-gen Xbox and next-gen PlayStation support raytracing natively."
PlayStation rolling their own custom solution for raytracing doesn't illustrate AMD's own support for raytracing, so it wouldn't make sense to mention it if that were the case. There could be customizations, sure, but the base tech is almost certainly AMD's.
Wouldn't be that great either if Sony had their own RT tech, since almost no games would have it on PS5, just the exclusives, which aren't many.

I think it is a stretch to assume custom Sony RT would automatically mean that only Sony's first party would make use of it. Is there no tech shared between Nvidia and AMD that is implemented differently for each of them?