PlayStation 5 [PS5] [Release November 12, 2020]

Power requirements wouldn't be a surprise. The specifics of cooling while keeping noise to a minimum might require some investigation and implementation work, though.

That was my second point. It could be that Sony wants to optimize cooling on a per-SSD basis. One could interpret this as optimization == business as usual, or throw FUD around and speculate that Sony is in trouble with overheating SSDs.
 
Well, the M.2 SSD slot has its own ventilation shaft and holes to get the hot air out, so there should always be an active airflow around the M.2 SSD that has no impact on the airflow for the rest of the components.
But if you look at the M.2 SSD of the Xbox, which gets hot really quickly and is also cooled by airflow, you can assume that even faster NAND chips (or just more of them) plus the controller chip (which also has to be faster than Sony's internal solution) could get hot really quickly. Maybe Sony underestimated how much heat such a solution would generate if the SSD were really under load all the time (not very realistic).
The internal SSD doesn't have that problem, because Sony's solution was to "just" use more NAND chips (less heat, since the frequencies don't need to be as high), and their controller chip is also actively cooled by the big cooling solution.

Maybe we need to wait a bit longer for an M.2 SSD that delivers data fast enough and doesn't get that hot.
 
Sony is hiring engineers to work on deep learning image processing
New job posting, 2021/4~; seems to be for PSVR2

https://cmc-co.jp/projects/34779.html
Development work on image processing for a certain game console.
-You will be responsible for the development and implementation of a 3D object detection system that uses deep learning to process images from a camera.
-You will be responsible for developing and implementing a range of algorithms to maximise the performance of the H/W. You will read and understand academic papers (in English), use described algorithms and sample code, implement (e.g. port to GPU, speed up) and evaluate.
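A rough sketch of what such a pipeline might look like, in Python with OpenCV and NumPy: a placeholder stands in for the deep-learning detector, and stereo block matching attaches a depth to each detection. The detector, the camera parameters (focal_px, baseline_m) and all names here are illustrative assumptions, not anything from the listing.

import numpy as np
import cv2

def detect_objects(image_bgr):
    # Placeholder for the deep-learning detector the listing describes;
    # a real system would run a trained model here.
    h, w = image_bgr.shape[:2]
    return [(w // 4, h // 4, w // 2, h // 2, "object")]  # one dummy box (x, y, w, h, label)

def depth_from_stereo(left_gray, right_gray, focal_px=700.0, baseline_m=0.12):
    # Dense depth (in metres) from a rectified 8-bit grayscale stereo pair.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan           # mark invalid matches
    return focal_px * baseline_m / disparity     # Z = f * B / d

def localise_3d(left_bgr, left_gray, right_gray):
    depth = depth_from_stereo(left_gray, right_gray)
    results = []
    for (x, y, w, h, label) in detect_objects(left_bgr):
        z = np.nanmedian(depth[y:y + h, x:x + w])   # robust depth inside the box
        results.append((label, (x + w / 2.0, y + h / 2.0, float(z))))
    return results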
 
But the PS5 doesn't have a hardware ML implementation, right?
So is this a custom software solution, I suppose?
What is a "hardware ML implementation"? Surely you can't mean you'd need to have matrix-crunchers to count as "hardware ML"?
RDNA2 supports INT4 (8:1 FP32) and INT8 (4:1 FP32) precisions, which are useful for ML.
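To illustrate why small integer formats are enough for inference weights, here is a minimal symmetric INT8 weight-quantisation sketch in plain NumPy; it is just the numerics, nothing console-specific, and all names are made up for the example.

import numpy as np

def quantize_int8(w):
    # Map [-max|w|, +max|w|] onto [-127, 127] with a single per-tensor scale.
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_weight_matmul(x_fp32, q_w, scale_w):
    # Integer weights, float activations: dequantise the weights on the fly.
    return x_fp32 @ (q_w.astype(np.float32) * scale_w)

w = np.random.randn(256, 256).astype(np.float32)
x = np.random.randn(8, 256).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(x @ w - int8_weight_matmul(x, q, s)).max()
print(f"max abs error after INT8 weight quantisation: {err:.4f}")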
 
But the PS5 doesn't have a hardware ML implementation, right?
So is this a custom software solution, I suppose?

They support it, but not at the faster rates, if I recall the MS Hot Chips presentation on customizations correctly. Could be mistaken, though.
 
What is a "hardware ML implementation"? Surely you can't mean you'd need to have matrix-crunchers to count as "hardware ML"?
RDNA2 supports INT4 (8:1 FP32) and INT8 (4:1 FP32) precisions, which are useful for ML.
The PS5 does not support the mixed-precision INT8/INT4 dot-product instructions, so it will probably be FP16.
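For context, this is roughly what such a mixed-precision dot-product instruction (e.g. AMD's V_DOT4_I32_I8, or dp4a on NVIDIA hardware) computes: four signed 8-bit products accumulated into a 32-bit integer. The NumPy sketch below is purely illustrative and makes no claim about either console's ISA.

import numpy as np

def dot4_i32_i8(a4, b4, acc=0):
    # Four INT8 x INT8 multiplies summed into an INT32 accumulator,
    # i.e. one "instruction" doing several ops' worth of work.
    a = np.asarray(a4, dtype=np.int8).astype(np.int32)
    b = np.asarray(b4, dtype=np.int8).astype(np.int32)
    return acc + int(a @ b)

print(dot4_i32_i8([127, -128, 5, 1], [2, 3, -4, 100]))   # -> -50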
 

This kind of assumes that Sony has not done similar customizations, which they might well have.

Anyway, the closest thing I could find on the customizations was the Digital Foundry article and a slide from Hot Chips, so apologies for pulling in non-Sony / non-PS5 material.

https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs
"We knew that many inference algorithms need only 8-bit and 4-bit integer positions for weights and the math operations involving those weights comprise the bulk of the performance overhead for those algorithms," says Andrew Goossen. "So we added special hardware support for this specific scenario. The result is that Series X offers 49 TOPS for 8-bit integer operations and 97 TOPS for 4-bit integer operations. Note that the weights are integers, so those are TOPS and not TFLOPs. The net result is that Series X offers unparalleled intelligence for machine learning."

[Image: Xbox Series X slide from Hot Chips]
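As a sanity check, those quoted figures line up with the 4x / 8x INT rates mentioned earlier in the thread if you start from the commonly cited ~12.15 TFLOPS FP32 figure for Series X (an assumption here, not something stated in the article):

fp32_tflops = 52 * 64 * 2 * 1.825 / 1000    # CUs * lanes * FMA ops * GHz ≈ 12.15 TFLOPS
print(f"INT8: {fp32_tflops * 4:.1f} TOPS")  # ~48.6, quoted as 49 TOPS
print(f"INT4: {fp32_tflops * 8:.1f} TOPS")  # ~97.2, quoted as 97 TOPS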
 
Not sure how MS can claim it's "special hardware support" or a customization when AMD includes it in every RDNA2 product?
Except for the PS5, apparently.
 
This could be something MS wanted added, and so into RDNA 2 it went. That still doesn't mean it wasn't special hardware that MS wanted in; it has just been included in the base architecture now.

I have no idea if Sony has this or not.
 
True enough, but no matter whose idea it was, IMO it is rather questionable for MS to claim they "added special hardware support" when it's a base architecture feature, which MS definitely knew at the time.
 
Why is there an assumption that the PS5's RDNA2 GPU lacks the same 4xINT8 / 8xINT4 throughput capabilities that have been present in all RDNA GPUs since Navi 14, which released over a year before both consoles?
Microsoft claimed they had custom optimizations for ML loads ("special hardware support for this specific scenario"), not that they invented 4xINT8 / 8xINT4 mixed dot products for RDNA ALUs that had already shipped in released GPUs a year before. We're probably looking at hardware support for custom ML instructions.


Furthermore, are the faster INT8 and INT4 throughputs at all useful for inference-based image upscaling? AFAIK INT4 and INT8 are useful for weight values in inference, but how often are those going to appear when calculating pixel color values?
Pixel shader calculations can go down to FP16 precision in some cases, but in most cases they use FP32, and I doubt the framebuffer holds pixel values at less than 24-bit. If they downgraded the pixel color values to 8-bit, it would probably look really bad.
Furthermore, I don't know of any document or official statement about DLSS 1/2 that suggests it's using INT8 throughput. For all I know it's using FP16 Tensor FLOPs with FP32 accumulate (the highest possible precision on the Tensor cores), which still has enormous throughput on Turing and Ampere alike.
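To make the FP16-with-FP32-accumulate point concrete, here is a toy NumPy comparison of the same FP16 products summed with an FP16 accumulator versus an FP32 accumulator. It is only meant to show why accumulator precision matters; nothing here is DLSS-specific.

import numpy as np

a = np.random.randn(4096).astype(np.float16)
b = np.random.randn(4096).astype(np.float16)

acc_fp32 = np.float32(0.0)
acc_fp16 = np.float16(0.0)
for x, y in zip(a, b):
    p = x * y                                          # FP16 product
    acc_fp32 = np.float32(acc_fp32 + np.float32(p))    # accumulate in FP32
    acc_fp16 = np.float16(acc_fp16 + p)                # accumulate in FP16

ref = float(np.dot(a.astype(np.float64), b.astype(np.float64)))
print(f"error with FP32 accumulate: {abs(float(acc_fp32) - ref):.4f}")
print(f"error with FP16 accumulate: {abs(float(acc_fp16) - ref):.4f}")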




Sony is hiring engineers to work on deep learning image processing
New job posting, 2021/4~; seems to be for PSVR2

https://cmc-co.jp/projects/34779.html
This seems to be for object identification when using a 3D stereo camera. I can't tell if it's for PSVR2; it could be for some AR games like The Playroom on PS4.
 
It probably wasn't a base architecture feature when MS specified it. The XSX is basically older than any PC-side RDNA2 product; it was in mass production months earlier and most likely feature-locked sooner, too.

As Cerny stated in Road to PS5, if their customisations make sense, AMD might fold them into their GPUs. That's up to AMD.
 
What is a "hardware ML implementation"? Surely you can't mean you'd need to have matrix-crunchers to count as "hardware ML"?
RDNA2 supports INT4 (8:1 FP32) and INT8 (4:1 FP32) precisions, which are useful for ML.
Don't ask me difficult questions :LOL:
 