And at this point (probably up until late Q2 2020, I'd say) the PS5 could also get an improved cooling solution, together with an improved VRM, to push the clocks up. No one is suggesting Sony was using a 40 CU chip up until late 2019 and will now adopt a 60 CU model.

What we have right now is the following info:
- Devs telling journalists and insiders that the PS5 and XBSX are close in performance
- GitHub leaks: Oberon with 36 active CUs (tested @ 2GHz back in June 2019) and XBSX with 56 active CUs
- Aquarius-something SoC measurements: ~315 mm^2 PS5, ~380 mm^2 XBSX
- Phil Spencer claiming the XBSX is 2x the XBoneX

How can all of these be true?

In the sense that you're asking, they cannot all be true. But the reality is that we're missing data, context in particular. It's a bit like how sarcasm is harder to read in text than it is to hear directly from the person. You can only determine truth from information that is usable; a fair amount of this information isn't usable, and even true information can be misleading. Could you imagine an improperly coded test, where bugs in the test itself produce bad results? It's not like AMD came out and said, "Yup, this is a bug-free leak, boys, go at it." There's so much unknown that I don't place a lot of stock in any of it.
There are some things we know to be universally true, though, and for which we have multiple methodologies to produce a number: measuring the die size of a chip, for instance, and knowing how dense it is, i.e. how many transistors can be packed per mm^2. These are things we can tangibly work with, stable ground from which to probe the areas of uncertainty. I'm sure you know a great many more things than I do, and I think by leveraging that background of yours, you could bucket the information that's useful and funnel the possibilities.
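To make that concrete, here's a rough sketch of the kind of sanity check die size gives you. The density figure is an assumption borrowed from Navi 10 (roughly 10.3 billion transistors in ~251 mm^2 on TSMC 7nm, about 41 MTr/mm^2); a console SoC with different I/O and SRAM ratios will land elsewhere, so treat the output as a ballpark, not a spec.

```python
# Rough transistor-budget estimate from the leaked die areas.
# Density is assumed from Navi 10 (~10.3e9 transistors / ~251 mm^2 on TSMC N7);
# a real SoC's density differs because I/O, analog and SRAM don't scale alike.
NAVI10_DENSITY_MTR_PER_MM2 = 10_300 / 251  # ~41 million transistors per mm^2

def transistor_budget_millions(die_mm2: float,
                               density: float = NAVI10_DENSITY_MTR_PER_MM2) -> float:
    """Estimated transistor count in millions for a given die area."""
    return die_mm2 * density

for name, area in [("PS5 (leaked ~315 mm^2)", 315.0),
                   ("XBSX (leaked ~380 mm^2)", 380.0)]:
    print(f"{name}: ~{transistor_budget_millions(area) / 1000:.1f}B transistors")
```

That kind of envelope is what I mean by stable ground: it bounds what can plausibly fit on each die, even while the CU counts and clocks stay uncertain.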
Going back to the possibilities you listed:
1 - All the journalists and insiders think that 9.2 TFLOPs is close to 12 TFLOPs (which IMHO is a downright stupid assumption; 12 / 9.2 is a ~30% gap, and 30% was never "close" in hardware performance. It's more than RX 5700 XT vs. RTX 2080 Super, and no one considers those close. Besides, the "close" or "similar specs" wording probably came from the devs themselves, and they would definitely not call a 30% gap "close").
2 - All the journalists and insiders who claimed close performance were given false information (hardly, given the track record of some of them).
3 - The PS5 is using an off-chip ray-tracing accelerator that levels things between the two consoles.
4 - The PS5 SoC is planned to use more than 36 CUs out of the 40 present in the silicon (the PS4 Pro also has 40 total CUs), together with very aggressive clocks. I'd be wary of this option, if it weren't for the 1-year-old "proof" already showing a whopping 2GHz.
5 - The XBSX is actually closer to 9.2 TFLOPs than to 12 TFLOPs, maybe due to low GPU clocks around 1.4GHz.
6 - ?

The only usable piece of information here is point 4, and that's only because of your latter statement about looking at some shoddy 1-year-old proof. I mean, I could argue that MS is not using an AMD solution either. Error-proofing is a typical developer's sigh moment, right?
Assume you have a flag you use to test ray tracing: what should the results say if you choose no? What if you choose yes?
Does No = blank? (This is the reason people believe Sony is using custom hardware.)
Yes = you run a ray-tracing benchmark.
Okay, so if "yes":
What if the hardware doesn't support ray tracing?
Do you write out a result of 0 from a divide-by-zero catch statement?
Or do they just print MRay/Sec with no value in front of it? (This is the assumed case for Xbox: a metric with no value.)
Do you leave the field blank? (People assume this is the case for Sony.)
Or is the test not run at all, which would beg the question of why bother with the flag anyway?
Or was there never a ray-tracing benchmark run at all, and it was nothing more than checking some instructions and whether they're available yet or not?
Everyone is so convinced that the ray-tracing test worked in that shoddy 1-year-old benchmark, but no one has been able to reasonably tell me how the XSX scored on it, because there are no values. Yet they seem 100% confident about the CUs, the clock speeds, and it being an AMD-based RT solution. (Not saying it isn't; there's just reason to leave room for doubt.)
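To make that ambiguity concrete, here's a purely hypothetical sketch of a reporting function where each branch produces one of the outcomes listed above. Every name and behavior is invented for illustration; none of it comes from the actual leaked test suite.

```python
# Purely hypothetical sketch of the branches being argued about above.
# All names and behaviors are invented; nothing here is from the real leak.

def report_rt(rt_flag: bool, rays: float, seconds: float) -> str:
    """Return the line a benchmark log might print for a ray-tracing test."""
    if not rt_flag:
        # Test never runs, so the field stays blank
        # (the reading people apply to the Sony entries).
        return ""
    try:
        mray_per_sec = rays / seconds / 1e6
    except ZeroDivisionError:
        # Unsupported hardware trips a catch that writes out 0.
        return "0 MRay/Sec"
    if rays == 0:
        # The metric prints with no value in front of it
        # (the reading people apply to the Xbox entries).
        return " MRay/Sec"
    return f"{mray_per_sec:.2f} MRay/Sec"

# One hypothetical run per branch:
print(repr(report_rt(False, 5e8, 1.0)))  # ''               -> blank field
print(repr(report_rt(True, 5e8, 0.0)))   # '0 MRay/Sec'     -> div-by-zero catch
print(repr(report_rt(True, 0.0, 1.0)))   # ' MRay/Sec'      -> unit, no value
print(repr(report_rt(True, 5e8, 1.0)))   # '500.00 MRay/Sec'
```

The point isn't that any one of these is what actually happened; it's that four different harness designs produce four different-looking log lines from the same hardware, which is exactly why a blank field or a bare unit proves very little.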
Far too many ifs. It's easier to wait for better information and discuss speculation in the context of what we do know, instead of what we don't. It's fun to speculate, though, just not worth coming back to this thread every hour.
I mean, shit... everyone is calling it regression testing, but do people know what that means? A simple Google turns up this:
Regression Test Selection
- Instead of re-executing the entire test suite, it is better to select a part of the test suite to be run.
- Test cases selected can be categorized as 1) reusable test cases and 2) obsolete test cases.
- Reusable test cases can be used in succeeding regression cycles.
- Obsolete test cases can't be used in succeeding cycles.
Industry data has shown that a good number of the defects reported by customers were due to last-minute bug fixes creating side effects; hence, selecting test cases for regression testing is an art, and not that easy. Effective regression tests can be done by selecting the following test cases (there's a sketch of what that selection looks like after the list):
- Test cases which have frequent defects
- Functionalities which are more visible to the users
- Test cases which verify core features of the product
- Test cases for functionalities that have undergone many recent changes
- All integration test cases
- All complex test cases
- Boundary value test cases
- A sample of successful test cases
- A sample of failed test cases
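Since the term keeps getting thrown around, here's a minimal sketch of what regression test selection looks like in practice, under the categories above. All the tags and test names are invented for illustration; real tooling (pytest markers, CI test filters, and so on) applies the same idea.

```python
# Minimal sketch of regression test selection: tag every case, then re-run
# only the categories worth the time. All names here are made up.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    tags: set[str] = field(default_factory=set)

SUITE = [
    TestCase("test_core_feature_clock_readback", {"core", "recent_change"}),
    TestCase("test_cu_enumeration", {"core"}),
    TestCase("test_rt_instruction_probe", {"recent_change", "boundary"}),
    TestCase("test_legacy_mode", {"obsolete"}),
]

# Categories worth re-running, per the list above; obsolete cases are
# excluded because they can't be used in succeeding cycles.
SELECT = {"core", "recent_change", "boundary", "integration", "complex",
          "frequent_defects", "user_visible"}

regression_cycle = [t for t in SUITE
                    if "obsolete" not in t.tags and t.tags & SELECT]

for t in regression_cycle:
    print("re-run:", t.name)
```

Whether the GitHub tests were selected anything like this is anyone's guess; the point is simply that "regression testing" implies a deliberate selection of cases, not a full benchmark of the product.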