Deleted member 91098
[...]
It wouldn't surprise me if the blade boards have high-speed interconnects so that two or maybe all four boards can be stacked.
[...]
Would make sense; Microsoft is a member of all the next-generation interconnect consortia after all (Gen-Z, CCIX, CXL and OpenCAPI), and with the exception of CXL, AMD is part of those consortia as well.
I think a monolithic die may be the choice for the PS5, even though it will also be used for game streaming.
Yeah, I'm thinking the same, and I'm starting to believe they really wanted to release in 2019, since a user mentioned the Bink changelog: http://www.radgametools.com/bnkhist.htm
Back in 2012 they added two secret platforms, which was ~16 months before the PS4 and Xbox One launched. Around the same time last year they "Added a new secret NDA-ed platform.", which could be the PS5 and would fit a ~November 2019 launch. They recently mentioned adding support for Stadia, which could be this NDA-ed platform. On the other hand, they never mentioned adding PS4 or Xbox One support in the past; I could only find "Added PS4 and Xbox One docs." with Ctrl+F.
They also never mentioned "Added Nintendo Switch support", but at the same time they added a secret platform in 2016, which could fit the Switch launch in 2017. The first thing they mention about the Switch is "Updated to Nintendo Switch SDK 3.0 [...]", so it would be weird if Stadia were the NDA-ed platform and then got mentioned again with "Bink now has a Stadia target" while consoles never get mentioned.
Also, add to that the rumors about Navi problems, the rumors about AMD RTG dedicating 2/3 of its engineers to Sony, the AMD slide where Navi was targeted at 2017 (but will only launch later this year), and the rumor about AMD's 70% yield for a tiny ~74mm² chiplet on 7nm, which could translate to ~30% yield for a die of 300mm² or more, like a monolithic PS5 APU would probably be.
Unlike AMD, who can sell their datacenter GPUs for multiple thousands of dollars and, thanks to Nvidia's pricing, sell the worse bins as high-priced gaming and creator cards, Sony would have a huge problem with such a bad yield.
Maybe they just said "forget it, we'll pay the 20 million (?) dollars per mask set and go for 7nm EUV at the end of 2020 to increase the yield"; the PS4 is still selling really well anyway.
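As a rough sanity check on that yield extrapolation, here's a minimal sketch using the simple Poisson defect-density model. The model choice and the 300mm² APU size are my assumptions; the 70% / ~74mm² figures are from the rumor:

```python
import math

# Poisson yield model: Y = exp(-D * A), with D = defect density, A = die area.
# Solve for D from the rumored 70% yield on the ~74 mm2 7nm chiplet,
# then extrapolate to a hypothetical ~300 mm2 monolithic console APU.
chiplet_yield = 0.70   # rumored yield of the ~74 mm2 chiplet
chiplet_area = 74.0    # mm2
apu_area = 300.0       # mm2, assumed size of a monolithic PS5 APU

defect_density = -math.log(chiplet_yield) / chiplet_area   # ~0.0048 defects/mm2
apu_yield = math.exp(-defect_density * apu_area)

print(f"Projected yield at {apu_area:.0f} mm2: {apu_yield:.0%}")   # ~24%
```

That lands in the same ballpark as the ~30% from the rumor (other yield models like Murphy's give slightly different numbers), so the extrapolation at least isn't crazy.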
The biggest problem is in the PC space; servers and consoles not so much.
Servers probably already have to handle multi-GPU-type setups.
DirectX 12 already has explicit multi-GPU (multi-adapter) functionality.
It may not be invisible, but it may not be as big a deal in a static box. Patch Unity, Unreal and other engines to handle it in a basic way, which may leave some performance on the table but be a nice fallback.
For MS, the loss in performance may be worth it for the overall benefits.
Just waiting to be told that I'm 100% wrong.
Yeah, you could be right (I have no clue and am waiting to be told that's wrong as well, heh). Another question would be how they deliver the memory bandwidth to the chiplets. Would the chiplets have their own memory (HBM?) or would they connect to an IO die? In the latter case they would need an interconnect faster than current Infinity Fabric (100 GB/s, or 200 GB/s with two links). Gen-Z claims it can deliver such bandwidth.
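Just to put numbers on that worry: feeding a chiplet GPU through an IO die would take several current Infinity Fabric links. A trivial sketch; the ~450 GB/s GPU target is my assumption (roughly GDDR6-on-256-bit class), the link figure is the one quoted above:

```python
import math

gpu_bandwidth = 450        # GB/s, assumed target for a ~10 TFLOPs GPU
if_link_bandwidth = 100    # GB/s per Infinity Fabric link, as quoted above

links_needed = math.ceil(gpu_bandwidth / if_link_bandwidth)
print(f"{links_needed} IF links needed")   # 5 links, vs. the 1-2 links per die used today
```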
Are the GPUs actually very different, or is it just disabled?
I'm wondering how much actual die area saving there is if you don't have double precision?
Edit :
It seems that AI and ML tend to prioritize lower precision, not higher, which is one of the use cases Phil gave. So not having FP64 may not be a big deal for the intended workloads.
As far as I know they are different and don't have the silicon for it. For example, Vega 10 (Vega 64, Instinct MI25 etc.) has not a single card that can deliver good double precision performance; all Vega 10 cards only have a 1:16 double precision rate. But they can also decide to limit the double precision rate even though the silicon is there, as can be seen with Vega 20 (Radeon VII).
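To put rough numbers on those ratios, here's a quick sketch; the shader counts and clocks are approximate public spec-sheet values, so treat the outputs as illustrative arithmetic, not exact figures:

```python
# FP64 throughput from the FP32 rate and the DP ratio:
# FP32 TFLOPs = shaders * 2 FLOPs/clock * clock (GHz) / 1000
def fp64_tflops(shaders, clock_ghz, dp_ratio):
    fp32 = shaders * 2 * clock_ghz / 1000.0
    return fp32 * dp_ratio

print(fp64_tflops(4096, 1.55, 1/16))  # Vega 64:    ~0.79 TFLOPs (1:16 in silicon)
print(fp64_tflops(3840, 1.75, 1/4))   # Radeon VII: ~3.4 TFLOPs (Vega 20 capped to 1:4)
print(fp64_tflops(3840, 1.75, 1/2))   # MI50:       ~6.7 TFLOPs (full 1:2 rate of Vega 20)
```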
[...]
To be honest, I don't get how this rumor is being discussed so much. This person is a "third party small developer from EU", but for some reason he knows the roadmaps/plans of big-shot studios like Take-Two (GTA 6), Guerrilla Games (Horizon), EA/DICE (Battlefield), Ubisoft (Assassin's Creed) and Sony's remaster plans.
Then there is the unrealistic price for PSVR2, which would technically compete with StarVR, a headset that sells for 3200 dollars. How would a "third party small developer from EU" acquire such information? It just doesn't compute for me.
Deep down I'm cheering for this rumor though, since PSVR2 would be just like StarVR, and eye tracking would allow for foveated rendering to bring down the hardware requirements. In a console environment, where developers can take full advantage of everything and know exactly what hardware the users have, this would just be too cool. Especially at the unrealistic price point of 250 dollars, this would win Sony the entire consumer VR market, and I'm sure many businesses would choose the PS5 as well. Heck, Sony could probably sell a much more expensive business edition of the PS5 that includes support contracts etc.
Right? Why would these systems need 2 to 3 times as much memory? It doesn't sound necessary to me. I think ~16GB with very high bandwidth would make more sense... These leaks state some obvious things and then some things which sound reasonable until you dig a little deeper.
Unless they use HBM, or the consoles don't reach 10+ TFLOPs, bandwidth for the GPU and CPU becomes the problem. If the 50 GB/s of bandwidth per TFLOP "rule" from current consoles still stands, you would need 500 GB/s for a 10 TFLOPs console. Maybe even more, since Zen is much faster than Jaguar.
8 or 16 GB of currently available GDDR6 can deliver ~384-512 GB/s on a 256-bit bus, which would be enough for 10 TFLOPs. 12 or 24 GB on a 384-bit bus would be 672-768 GB/s, which would be enough for 13-15 TFLOPs. So if the "rule" still stands and they want to go over 10 TFLOPs, they need more bandwidth.
16 GB of HBM2, on the other hand, could be anything between 205 GB/s and 1640 GB/s, but I doubt we'll see HBM2 in this generation of consoles.
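For anyone who wants to check those numbers or try other configurations, the arithmetic is just bus width times per-pin data rate, plus the 50 GB/s per TFLOP rule from above:

```python
def peak_bandwidth_gbs(bus_bits, gbps_per_pin):
    """Peak memory bandwidth in GB/s: bus width in bits / 8 * per-pin rate in Gbps."""
    return bus_bits / 8 * gbps_per_pin

print(peak_bandwidth_gbs(256, 12), peak_bandwidth_gbs(256, 16))  # 384.0 512.0 (GDDR6, 256-bit)
print(peak_bandwidth_gbs(384, 14), peak_bandwidth_gbs(384, 16))  # 672.0 768.0 (GDDR6, 384-bit)

# Rough TFLOPs each bandwidth could feed under the 50 GB/s per TFLOP rule:
for bw in (448, 512, 672, 768):
    print(f"{bw} GB/s -> ~{bw / 50:.1f} TFLOPs")
```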
Phil Spencer said the hardware would be put to use for "machine learning and other non-entertainment uses".
AFAIK machine learning has little use for DP, and the other purposes could well exclude scientific workloads that need DP.
Just think of all the professional uses that Vega 10 got, such as ML on Alibaba's servers, video rendering, offline 3D rendering, etc.
Just because they'll be renting the servers for other uses, it doesn't mean they need their servers to be capable of all uses.
Yeah, you're right, I assumed all or nothing and ignored that there could be something in between. :O