Next Generation Hardware Speculation with a Technical Spin [2018]

Status
Not open for further replies.
Has any meaningful benefit ever been demonstrated for close CPU<>GPU communication? Does the APU's direct bus make a difference?

Doesn’t the PS4 already rely on that for async compute? It can snoop the CPU caches. Seems that would be complex, or rather inefficient, to do over a traditional motherboard.
 
TweakTown reports on AMD's 7nm plans, with some fresh [first concrete?] PS5 insider info:
https://www.tweaktown.com/news/6303...avi-gpu-architecture-7nm-portfolio/index.html

"Navi on 7nm should be unveiled somewhere in 2H 2019, where I'm sure we'll see the bigger picture of Navi closer to Computex 2019 and even beyond. It will surely have to be as close as humanly possible to the unveiling of the PS5... because as soon as we know details about Navi, we'll be able to piece together how good the PS5 will be. Expect an 8C/16T processor in the form of a Ryzen 7 2700-esque offering in the new PS5. Separate GPU and CPU? I've been told to expect a discrete GPU, but that would mean the PS5 has some beast-like hardware... even on PC standards as we'll only be getting 7nm GPUs with Navi."

Initial batch done with an interposer, with redesigns down the line moving to an APU? Or forever stuck with separate chips on the mobo?

*cough* https://forum.beyond3d.com/posts/2041814/ :mrgreen:
 
Depends on where lithography goes. Sony may be looking at a very limited set of cost-saving die-shrink options (same as everyone else), and to ensure decent yields of large parts may decide to keep the dies split. If combining them down the line becomes a sensible option, they'll do so. Otherwise, they won't. ;) And the persistently high price of PS3 and PS4 suggests they are happy to keep the entry-level price higher over the full life of the platform.

Edit: Actually, that's not really what you were asking. :oops: Are they going to gamble on an interposer with a view towards an APU, or do two dies forever (MCM as the long-term option)? I'd think the second. Given how cost scaling isn't happening these days, with the latest nodes costing as much as or more than the larger nodes, the expectation would be that an economical single-chip solution won't be realistic within 5 years of launch. If you're going to go max performance, and so need two larger dies, find the cheapest way to do that. More silicon up front would be with a view to a more powerful launch unit and a longer life, IMHO.

I wonder if Sony/AMD will use Infinity Fabric technology as the tie (interposer) between the CPU and GPU, instead of a traditional PCI bussing system. Seems plausible (see below); maybe memory and other communication controllers as well.

https://en.wikichip.org/wiki/amd/infinity_fabric
A key feature of the coherent data fabric is that it's not limited to a single die and can extend over multiple dies in an MCP as well as multiple sockets over PCIe links (possibly even across independent systems, although that's speculation).
 
Doesn’t the PS4 already rely on that for async compute? It can snoop the CPU caches. Seems that would be complex, or rather inefficient, to do over a traditional motherboard.
It features on PS4, but is it any use, and are devs/games benefiting? Or is it a nice idea in theory with no real practical application in actuality? With GPUs able to create their own jobs, how much CPU <> GPU communication is actually needed?
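One way to frame "how much communication is needed" is round-trip cost: fine-grained hand-offs are cheap over an on-die coherent fabric and expensive over an external bus. A minimal sketch, where both latencies are purely illustrative assumptions, not measured figures:

```python
# Back-of-envelope CPU<->GPU synchronisation cost.
# Both latencies below are illustrative assumptions, not measured values.
NS_ON_DIE = 100    # assumed round trip over a coherent on-die (APU) fabric
NS_PCIE = 1000     # assumed round trip to a discrete GPU over PCIe

def sync_overhead_us(round_trips, ns_per_trip):
    """Total CPU<->GPU synchronisation overhead per frame, in microseconds."""
    return round_trips * ns_per_trip / 1000.0

# A frame with 50 fine-grained CPU<->GPU hand-offs:
print(sync_overhead_us(50, NS_ON_DIE))  # 5.0 us
print(sync_overhead_us(50, NS_PCIE))    # 50.0 us
```

Either way the totals stay small against a 16.6 ms frame, which fits the suggestion that, with GPUs generating their own work, tight coupling may matter less than the theory implies.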
 
If they went discrete GPU, what would the RAM target be? I can't imagine it being 4GB of VRAM, so probably 6GB or 8GB? Then 8 or 12GB of system RAM?
 
If they went discrete GPU, what would the RAM target be? I can't imagine it being 4GB of VRAM, so probably 6GB or 8GB? Then 8 or 12GB of system RAM?

I don't think there will be a split memory configuration. I believe both the GPU and CPU will access the same pool of GDDR6.
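For scale, the peak bandwidth of such a unified GDDR6 pool follows from bus width × per-pin rate. The bus widths below are assumptions, though 14 Gb/s per pin was GDDR6's launch speed:

```python
def gddr_bandwidth_gbs(bus_width_bits, gbps_per_pin):
    """Peak bandwidth in GB/s: data pins * per-pin rate, bits -> bytes."""
    return bus_width_bits * gbps_per_pin / 8

# Assumed bus widths for a console-class part:
print(gddr_bandwidth_gbs(256, 14))  # 448.0 GB/s on a 256-bit bus
print(gddr_bandwidth_gbs(384, 14))  # 672.0 GB/s on a 384-bit bus
```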
 
Unified memory was top of the request list from devs when Sony did the PS4 "what do you want?" tour. Is that really going to be the case for PS5/XB2?

No one will ever need more than 8GB of VRAM will they?

Said with full knowledge of the many times in computing people have said no one will ever need more than X of something. :)
 
Even if the CPU is 8C/16T, I expect it to be clocked lower and with a smaller cache than its PC counterparts, so I don't think it will be a beast, but it'll be very capable nonetheless.

I think they'll dedicate at most 1/3 of the power and silicon budget to the CPU.
 
Yah, I guess in my head I've gotten used to APUs in the console space, but the 360 had a unified pool with a discrete GPU.

Yeah. 22.4 GB/s of bandwidth from the GPU to main memory, with the GPU feeding the CPU through a 21.6 GB/s front-side bus. So all memory traffic travels through the GPU's memory controller.

Be interesting if we see a return to something similar to that.
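The commonly cited 360 main-memory figure (22.4 GB/s) follows directly from its 128-bit GDDR3 interface at 700 MHz, double data rate. A quick sanity check:

```python
def ddr_bandwidth_gbs(clock_mhz, bus_width_bits):
    """Peak bandwidth in GB/s for a double-data-rate memory bus."""
    transfers_per_s = clock_mhz * 1e6 * 2        # DDR: two transfers per clock
    return transfers_per_s * (bus_width_bits / 8) / 1e9

print(ddr_bandwidth_gbs(700, 128))  # Xbox 360 GDDR3 -> 22.4 GB/s
```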

Regards,
SB
 
Yeah. 22.4 GB/s of bandwidth from the GPU to main memory, with the GPU feeding the CPU through a 21.6 GB/s front-side bus. So all memory traffic travels through the GPU's memory controller.

Be interesting if we see a return to something similar to that.
Zen CCXs are already connected to on-die and remote memory controllers with Infinity Fabric. If 7nm economics dictate smaller dies and decoupled CPU-GPU, I don't see an issue with using one remote CCX. Unless we're assuming an SoC would have provided a faster interconnect. However a smaller GPU die may limit memory interface width :(

The bespoke CPU die could also include the southbridge and a cheap RAM interface for the OS, like the PS4 Pro. Or even a larger pool for gaming workloads. I'm not convinced a unified memory pool is necessary if neither is constrained.
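As a concrete, purely hypothetical version of that split-pool idea (echoing the PS4 Pro's extra 1GB DDR3 for background tasks, but scaled up; all sizes and speeds below are made-up assumptions):

```python
# Hypothetical split-pool memory configuration (illustrative numbers only).
pools = {
    "GDDR6 (games)":  {"size_gb": 16, "bandwidth_gbs": 448.0},
    "DDR4 (OS/apps)": {"size_gb": 4,  "bandwidth_gbs": 25.6},  # single-channel DDR4-3200
}

total_gb = sum(p["size_gb"] for p in pools.values())
print(total_gb)  # 20
```

The cheap pool only has to be fast enough for the OS and suspended apps; game workloads never touch it.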
 
Separate dies sound like a good idea only if a single chip would be too large, which isn't very plausible for a $400-$500 console. Unless yields become a big enough problem on 7nm that the overhead of two chips is worth the gain in yield?
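Whether two smaller dies beat one big one can be sketched with a simple Poisson yield model. The defect density and die areas below are illustrative assumptions, not real 7nm data:

```python
import math

def die_yield(area_cm2, d0_per_cm2):
    """Poisson yield model: probability a die has zero defects."""
    return math.exp(-area_cm2 * d0_per_cm2)

D0 = 0.2  # assumed defect density (defects/cm^2) for an immature process

y_big = die_yield(3.50, D0)    # one hypothetical 350 mm^2 SoC
y_small = die_yield(1.75, D0)  # one of two hypothetical 175 mm^2 dies

# Silicon cost per good console scales with total area / yield:
cost_one_die = 3.50 / y_big
cost_two_dies = 2 * 1.75 / y_small
print(f"yields: {y_big:.2f} vs {y_small:.2f}")
print(f"relative silicon cost: {cost_one_die:.2f} vs {cost_two_dies:.2f}")
```

Under these assumptions the split saves roughly 30% on good silicon, and that saving then has to beat the extra packaging and interconnect cost of two chips.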
 