Next Generation Hardware Speculation with a Technical Spin [pre E3 2019]

Status
Not open for further replies.
I expect bog-standard Zen 2 chiplets in the next-gen consoles. Probably a six-core variant with subpar frequency compared to its desktop counterparts. While six-core variants can still command a pretty good ASP, lower-frequency ones ... not so much. This way MS and Sony tap into the lower-value tail of AMD's CPU sales.

The GPU is pure speculation at this point, but I'm guessing something with computational prowess equivalent to a 40 CU Vega running at 1.8 GHz, ~18-19 TFLOPS with 16-bit FP, using a 256-bit 14 Gbps GDDR6 memory subsystem. Radeon VII's 7nm die is 331mm² with 60 of 64 CUs active; a similar system with 40 CUs (of 44) would be ~230mm².
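The arithmetic behind those figures can be sketched quickly. A rough check, assuming GCN-style CUs (64 stream processors each, 2 FLOPs per SP per clock) with double-rate FP16, and naive linear die-area scaling; these are my speculative numbers, not specs:

```python
# Back-of-the-envelope check of the numbers above, assuming GCN-style CUs:
# 64 stream processors per CU, 2 FLOPs per SP per clock, double-rate FP16.
def tflops(cus, clock_ghz, flops_per_sp_per_clock=2):
    return cus * 64 * flops_per_sp_per_clock * clock_ghz / 1000.0

fp32 = tflops(40, 1.8)      # ~9.2 TFLOPS FP32
fp16 = 2 * fp32             # ~18.4 TFLOPS FP16, matching the ~18-19 figure

# Naive die-area scaling from Radeon VII (331 mm^2, 64 physical CUs).
# Assumes area scales linearly with CU count, ignoring fixed-function
# and I/O blocks -- a rough sketch only.
area_mm2 = 331 * 44 / 64    # ~228 mm^2 for a 44-CU die (40 active)

print(f"{fp32:.1f} TFLOPS FP32, {fp16:.1f} TFLOPS FP16, ~{area_mm2:.0f} mm^2")
```

The area number lands close to the ~230mm² ballpark above, which is all this kind of scaling argument can really support.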

IMO, the most interesting bit is the storage system. A lot has changed in the past 4 to 5 months. DRAM is now at $5/GB, but flash has completely collapsed, with a wafer spot price of <$5 for a 512 Gbit die; that's less than 8 cents per GB. There is no doubt GDDR6 is more expensive than DDR4, but let's use the $5/GB as a lower bound. Putting 32GB in a next-gen console will cost $80 over a 16GB unit. That $80 will buy you a TB of flash storage. I could imagine one or both vendors opting for 16GB DRAM and >1TB of flash storage, using a very high-throughput storage controller.
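For what it's worth, the cost comparison works out like this (using the spot prices assumed above as lower bounds, not actual contract prices):

```python
# Cost comparison using the spot prices quoted above. These are assumed
# lower bounds, not vendor contract prices.
dram_per_gb = 5.00                 # $5/GB DRAM (lower bound for GDDR6)
flash_per_gb = 5.00 / 64           # $5 per 512 Gbit (64 GB) die ~= $0.078/GB

extra_dram_cost = (32 - 16) * dram_per_gb              # 16 GB -> 32 GB: $80
flash_tb_for_same_money = extra_dram_cost / flash_per_gb / 1024  # in TB

print(f"Extra 16 GB of DRAM: ${extra_dram_cost:.0f}; "
      f"the same money buys ~{flash_tb_for_same_money:.1f} TB of flash")
```

So at these prices the trade is almost exactly 16 GB of DRAM for 1 TB of flash, which is the crux of the 16GB + fast-storage argument.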

Cheers
 
The GPU sure is a big question mark at the moment.

But if the latest benchmarks are indeed true, Navi could be a real surprise!

The benchmarks below are of a supposed Navi GPU pitted against an RX 580.


[Image: compute benchmarks, AMD Navi Radeon RX vs. AMD Radeon RX 580]


The Navi GPU has 20 Compute Engines at an unspecified clock speed. The RX 580 has 36 compute units, which I believe run at the standard 1257 MHz base clock with a boost clock of 1340 MHz.

Although the Navi gains could be due to increased clock speeds, what we are seeing here is a 20 CU GPU beating a 36 CU GPU.

This means that the 20 CU GPU is outputting more than the 6.175 TFLOPS of the RX 580.

With 20 CUs, accepting the limit of 64 SPs per CU, if I did the math correctly, that would mean a clock speed of 2412 MHz. A not very likely speed!

To reduce the clock speed to something like 1.8 GHz we would have to accept that the new CUs are more efficient, or have more stream processors in each CU.

But regardless, if 20 CUs can output something equivalent or superior to a 6 TFLOPS GPU, 40 CUs can output the equivalent of a 12 TFLOPS GPU.
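Redoing that math as a quick sketch (again assuming GCN-style CUs: 64 SPs per CU, 2 FLOPs per SP per clock):

```python
# What clock would a 20-CU GCN-style part need to match the RX 580's
# 6.175 TFLOPS? (64 SPs per CU, 2 FLOPs per SP per clock.)
def clock_mhz_for_tflops(tflops, cus, sps_per_cu=64, flops_per_clock=2):
    return tflops * 1e12 / (cus * sps_per_cu * flops_per_clock) / 1e6

navi_clock = clock_mhz_for_tflops(6.175, 20)   # ~2412 MHz, as argued above
rx580_check = clock_mhz_for_tflops(6.175, 36)  # ~1340 MHz, the 580's boost clock
print(f"20 CUs need ~{navi_clock:.0f} MHz; 36 CUs need ~{rx580_check:.0f} MHz")
```

The 36-CU case reproducing the RX 580's 1340 MHz boost clock is a sanity check that the formula matches the official 6.175 TFLOPS figure.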

With the expected savings in power consumption in Navi, due not only to the node reduction but also to changes in architecture, I have high hopes for Navi on consoles.

As I stated before, recent patches on Linux reveal preparation for a new System Management Unit to be found on future ASICs, responsible for power management tasks, and we also know for a fact that some engineers from the Ryzen team were diverted to Navi to implement changes in power consumption. All this is accompanied by statements like the one from Chiphell, which claims Navi power consumption is set to be surprising.
 
I'm neither agreeing nor disagreeing with your point. Zen 2 is designed for 7nm by default, and we know next gen is 7nm. Otherwise you've got to pay the cost of shrinking Zen 1 to 7nm. That's a strong argument.

A lot of studios have tools that they use. A lot of tools can be coded in C#, Lua, whatever it may be. If you're using Ryzen processors because you have a deal with AMD, and Threadripper is cheaper than running a massive Core i7 setup, why not do compiler optimization so that those tools run better? You're getting better performance for cheaper, and that can apply across all the studios that develop tools for Sony or for themselves. But those tools are being run on Zen processors, not necessarily on the console. You still get massive gains, because any time we can get faster iteration we get lower development costs.


In case there's any ambiguity about exactly what SN Systems does:

We create development tools for PlayStation® platforms including PlayStation® 4 Pro, PlayStation® VR and PlayStation® Vita.

From debuggers and performance analyzers to toolchains and target communication, our products are designed to make developers' work easier and more efficient.

Part of Sony Interactive Entertainment, we have a deep understanding of game developers' needs, and use our technical expertise to create a range of advanced programming tools.
 
Does Zen 2 tech already exist without a chiplet design? If Sony uses Zen 2, as the previous posts about LLVM commits strongly suggest, wouldn't it cost much less to reuse existing chiplets already designed and manufactured for the PC market?
 
Does Zen 2 tech already exist without a chiplet design? If Sony uses Zen 2, as the previous posts about LLVM commits strongly suggest, wouldn't it cost much less to reuse existing chiplets already designed and manufactured for the PC market?
That's the beauty of the chiplet. You can mass produce one design, sell lower bin parts to console manufacturers, and still provide custom IO/GPU dies.
 
Does Zen 2 tech already exist without a chiplet design? If Sony uses Zen 2, as the previous posts about LLVM commits strongly suggest, wouldn't it cost much less to reuse existing chiplets already designed and manufactured for the PC market?
If it does, it hasn't been demonstrated or shown anywhere.

That's the beauty of the chiplet. You can mass produce one design, sell lower bin parts to console manufacturers, and still provide custom IO/GPU dies.
But let's assume you want to leverage HSA features: would the on-package buses offer enough bandwidth, and low enough latency, compared to a monolithic APU?
 
So you don't have to rework your patches when other people change things, and you have people skilled at working with LLVM. Maintaining a fork is a high maintenance burden and something you do not want to do.
That's a pretty good reason, but this also means Microsoft, who have slowly been moving away from their Windows C compiler toward open-source LLVM/Clang, will be picking these up as well.

I frankly don't think it matters, but it appears mutually beneficial for both platforms. In my brief history with Sony hardware, though, they're really quiet and protective about all their documentation and everything. It's just hard for me to believe they would let this portion slide.
 
I frankly don't think it matters, but it appears mutually beneficial for both platforms. In my brief history with Sony hardware, though, they're really quiet and protective about all their documentation and everything. It's just hard for me to believe they would let this portion slide.

According to the Internet*, Sony has the best dev tools (including debuggers). They probably want to continue to have that advantage.

* I am too lazy to find sources right now....
 
Perhaps with the sea change at Sony, they are more open and inclusive these days as they feel that serves them better?
 
Perhaps with the sea change at Sony, they are more open and inclusive these days as they feel that serves them better?
I'm just confused about how we know it's Sony's work, designed for PS5, being put back into the main branch before PS5 has been announced.

Lots of other news sites have reported on LLVM 9 supporting Zen 2 as a recent update, and that's just for people who do application work. Only rumour sites seem to make a connection to Sony. Should we review all the other LLVM variants to see who has been making the commits all this time?

How do we know that other people aren’t working on this and these two guys are just the lead members who are responsible for code review and commits?
 
I have no idea. I'm not saying it is; I'm just countering the notion that Sony still keep their cards close to their chest. 2012 was a long time ago and Sony have changed in several ways, so that doubt of yours doesn't seem to hold, but that does not disprove the entirety of your argument.
 
I have no idea. I'm not saying it is; I'm just countering the notion that Sony still keep their cards close to their chest. 2012 was a long time ago and Sony have changed in several ways, so that doubt of yours doesn't seem to hold, but that does not disprove the entirety of your argument.
They still seem pretty locked up if the lack of information available on anything is any indication :)

Just to make sure I’m following this story right:
A) Sony has among the best compiler teams in the world. Not worth debating, though I'm sure some GCC people would debate that. It's a moot point; the important part is that they are heavily connected to LLVM.
B) Sony's lead developers work with LLVM, use it internally for their tools, and have been working with it for a long time. I suppose we should see the PS3 processor in LLVM then? I'm trying to establish precedent.
C) Sony creates the optimizations for Zen 2 and commits them back to the main branch.
- This ignores all the other frameworks and languages that leverage LLVM and would need it to support Zen 2; shouldn't the needs of those other communities matter here?

So by committing the code back to the main branch, will MS benefit from these optimizations for Scarlett? They have supported LLVM/Clang for DX12 shader compilation since 2017.

Anyway, I'm not even sure what was committed, so perhaps this is out of line.

So because of the above three points, PS5 is guaranteed Zen 2? Did I miss anything?
 
C) Sony creates the optimizations for Zen 2 and commits them back to the main branch.

Did I miss anything?

Those commits were done by AMD.

C) A Sony developer code-reviews the optimizations for Zen 2.
 