There haven't been any substantial sources saying the PS5 was supposed to launch last year; it doesn't fit with any timeline.
Wasn't it Jason Schreier who said Sony was planning a 2019 release at some point, but delayed it a year to guarantee a stronger launch lineup?
If so, I'd say he's a very substantial source.
I mean, if you want to use Infinity Fabric to connect a separate die to the system you can, but even then it's a huge latency penalty. At that point you'd probably move the CPU off-die too to balance it out. Why even make a ~300mm^2 die for the APU?
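To put rough numbers on that latency point, here's a toy calculation; every figure below is an assumption picked for illustration, not a leak:

```python
# Back-of-envelope sketch: how much an off-die round trip could add to a
# BVH traversal that has to cross an Infinity-Fabric-style link on every
# node visit. All constants are assumed for illustration.

ON_DIE_HOP_NS  = 20    # assumed on-die cache/interconnect latency per node
OFF_DIE_HOP_NS = 100   # assumed extra die-to-die round-trip latency
NODES_PER_RAY  = 30    # assumed BVH nodes visited per ray (scene dependent)

on_die_ns  = NODES_PER_RAY * ON_DIE_HOP_NS
off_die_ns = NODES_PER_RAY * (ON_DIE_HOP_NS + OFF_DIE_HOP_NS)

print(f"on-die traversal:  {on_die_ns} ns/ray")
print(f"off-die traversal: {off_die_ns} ns/ray "
      f"({off_die_ns / on_die_ns:.1f}x slower if none of it is hidden)")
```

With these toy numbers the off-die walk is 6x slower per ray, which is the kind of penalty you'd need to hide with deep queues or local ALUs on the RT die.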
Maybe because the SoC is strictly AMD IP made at TSMC, whereas the other chip is Sony/PowerVR/whoever's IP, maybe even built at another fab or on another node.
I disagree that it's bonkers. I don't consider it highly plausible, but we have a number of pointers, including Sony working on real-time RT solutions and hiring RT talent.
There's also the fact that the GitHub leak shows RT tests for the SeX SoC, but none for the alleged PS5 SoC.
Add to that the fact that AMD said Microsoft would be using their RDNA RT solution but didn't state the same for Sony, plus that Aquarius-something TSMC insider who's been spot-on leaking die areas (did so for the SeX SoC as well) saying the PS5 has a separate chip for RT, and you have an actually good case for an off-chip RT solution on the PS5.
He said:
“There is ray-tracing acceleration in the GPU hardware,” he says, “which I believe is the statement that people were looking for.”
Fair enough.
Does it cease to be GPU hardware if the RT is on a separate chip?
Would you say the Xbox 360 Xenos' daughter die, which held the 10MB of eDRAM and the ROPs and was fabbed by NEC while the "main GPU" die came from TSMC and GlobalFoundries, wasn't part of the GPU hardware?
I mean, all "dedicated hardware" is going to offload it... that's the point of dedicated hardware, is it not?
No, you can have dedicated hardware that only partially offloads raytracing. That's the case with Turing's implementation: fixed-function "RT units" that take up very little die area but still leave a heavy load on the shader processors / SM units.
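A minimal sketch of that split, with made-up names and a toy scene (not any vendor's actual API): the fixed-function unit only handles BVH traversal and intersection, while ray generation and hit shading stay on the programmable cores, which is why the SMs stay busy even with "dedicated RT hardware".

```python
# Toy model of a Turing-style hybrid pipeline. Names and scene are
# invented for illustration.

def rt_unit_traverse(bvh, ray_origin, ray_dir):
    # Stands in for the fixed-function part: BVH walk + triangle tests.
    # Here just a linear scan over (name, hit_distance) pairs; the ray
    # arguments are unused in this toy version.
    hits = [(name, t) for name, t in bvh if t > 0]
    return min(hits, key=lambda h: h[1]) if hits else None

def shade(hit):
    # Stands in for the shader (SM) part: this work is NOT offloaded.
    name, t = hit
    return f"shaded {name} at t={t}"

toy_bvh = [("floor", 5.0), ("wall", -1.0), ("sphere", 2.5)]

for pixel in range(3):                     # "ray generation" on the SMs
    hit = rt_unit_traverse(toy_bvh, (0, 0, 0), (0, 0, 1))
    print(shade(hit) if hit else "miss")   # "hit shading" on the SMs too
```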
If Sony developed their own separate chip, I'd expect it to come with its own set of non-flexible but heavily optimised ALUs, at the very least because an RT chip that does everything with the help of the main GPU's ALUs could turn the inter-chip bandwidth into a bottleneck (rough numbers in the sketch below).
Perhaps it would also have its own pool of eDRAM, since RT is very demanding on bandwidth.
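To show why that link matters, a toy bandwidth estimate for shipping ray queries and results between chips; every figure is an assumption chosen for illustration:

```python
# Toy estimate of inter-chip traffic if the RT chip receives ray queries
# from the main GPU and returns hit records for shading. All constants
# are assumptions, not leaked specs.

RAYS_PER_PIXEL   = 1
WIDTH, HEIGHT    = 3840, 2160        # 4K output
FPS              = 60
BYTES_PER_QUERY  = 32                # assumed: origin + direction + flags
BYTES_PER_RESULT = 32                # assumed: hit distance, prim ID, etc.

rays_per_sec = WIDTH * HEIGHT * RAYS_PER_PIXEL * FPS
link_bytes   = rays_per_sec * (BYTES_PER_QUERY + BYTES_PER_RESULT)

print(f"{rays_per_sec / 1e9:.2f} Grays/s -> "
      f"{link_bytes / 1e9:.1f} GB/s over the link")
# ~31.9 GB/s with these toy numbers, before the RT chip fetches any BVH
# or geometry data, which is why local ALUs (and maybe eDRAM) would help.
```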