PlayStation 5 [PS5] [Release November 12, 2020]

Don't forget to TASTE the Mountain Dew on your breath and to SMELL the horrendous BO releasing from your unwashed body because you haven't stopped playing in the past few days to at least bathe.
 
His comments were made a month ago; even if you never saw the tweets, the ideas have been discussed incessantly on forums for weeks now. Never mind that it's extremely early to be talking about the potential of either console.

I'm more interested in hearing how developers are going to tap into the potential of the hardware. What's the SSD and audio tech going to make possible? If we can truly start designing levels around the gameplay requirements rather than memory restrictions, I'll gladly transition to next generation and hardly look back. Reducing load times and cutscenes really has me looking forward to the PS5 and next Xbox.

I also like the different approaches to the hardware; it will be interesting to see how the design decisions play out in terms of performance, with performance not being exclusive to graphics.
 
Any post in this PlayStation thread that contains the word Xbox risks being deleted on sight, and anyone repeating that mistake risks a brief holiday from the forum.

If you need to compare and contrast, use the next-gen comparison thread or some other more appropriate place.
 
Is there any way to calculate how much data needs to come from the SSD to supply the RAM with enough assets to run a 4K game at full tilt?
Like, do we know how much data a typical 4K60 game will display per second on the screen? I know all games and in-game scenarios differ, but do we know an average or a range?
And then can we calculate that out through the RAM and back onto the SSD?
We know the SSD can provide 5.5 GB/s of raw data, so I'm trying to figure out whether that's all that will be needed or whether they will have to compress it.
I don't see how they could need more than 5.5 GB/s fed from the SSD. I mean, a game might total 100 GB, which would mean the SSD could send the entire game to RAM in about 18 seconds.
Now I know I'm obviously off track somewhere here, but hopefully you follow my question.
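To make my back-of-envelope concrete, here's a rough sketch of the arithmetic I'm doing, in Python; the 100 GB install size and the RAM budget are just illustrative assumptions on my part, not confirmed figures:

Code:
# Back-of-envelope: PS5 SSD feeding RAM. All figures are illustrative
# assumptions, not measured numbers.
SSD_RAW_RATE_GBPS = 5.5   # Cerny's stated raw SSD throughput, GB/s
GAME_SIZE_GB = 100.0      # assumed total install size
RAM_BUDGET_GB = 13.5      # rough guess at RAM available to a game

print(f"Whole game -> RAM: {GAME_SIZE_GB / SSD_RAW_RATE_GBPS:.1f} s")       # ~18.2 s
print(f"Refill the game's RAM: {RAM_BUDGET_GB / SSD_RAW_RATE_GBPS:.1f} s")  # ~2.5 s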
 
Is there any way to calculate how much data needs to come from the SSD to supply the RAM with enough assets to run a 4K game at full tilt?
Like, do we know how much data a typical 4K60 game will display per second on the screen? I know all games and in-game scenarios differ, but do we know an average or a range?

It's so highly game dependent that without having profiled a very large number of games we couldn't know. And the games that will really push "4K games at full tilt" have barely even begun to appear, and can't rely on even having an SSD. All hardware makers have to extrapolate to some degree to target their hardware.

I think Sony are taking the approach that if they make something this fast the baseline, that will actually shape the future and, in doing so, upwardly shift exactly the kind of figures you're looking for.
 
Is there any way to calculate how much data needs to come from the SSD to supply the RAM with enough assets to run a 4K game at full tilt?
In the case of tiled assets, I made it about 60 MB/s, although that ignores geometry. So basically, there's no such thing as the right amount of data; it depends on the game and the engine. Indeed, once devs get access to this streaming tech, they'll develop techniques to leverage it. Ergo, the amount of bandwidth needed to stream 4K games at full tilt will be whatever the system provides, and then more, because there's never an excess. You may as well ask "how much processing power do you need?" The answer is "infinite." ;)
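For what it's worth, here's one purely illustrative way a figure in that ballpark can fall out; the texel size and the per-frame turnover rate are assumptions for the sake of example, not profiled numbers:

Code:
# Illustrative virtual-texturing estimate of tile streaming bandwidth.
# Turnover rate and texel size are assumptions, not measurements.
WIDTH, HEIGHT, FPS = 3840, 2160, 60
BYTES_PER_TEXEL = 4        # uncompressed RGBA; block compression would cut this
TURNOVER_PER_FRAME = 0.03  # assume ~3% of on-screen texels newly streamed each frame

full_screen_mb = WIDTH * HEIGHT * BYTES_PER_TEXEL / 1e6   # ~33 MB for a full screen
stream_rate = full_screen_mb * TURNOVER_PER_FRAME * FPS   # ~60 MB/s
print(f"~{stream_rate:.0f} MB/s of texture tiles")

Frame-to-frame coherence is what keeps the turnover low; a hard camera cut would momentarily demand the full ~33 MB in a single frame.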
 
In the case of tiled assets, I made it about 60 MB/s, although that ignores geometry. So basically, there's no such thing as the right amount of data; it depends on the game and the engine. Indeed, once devs get access to this streaming tech, they'll develop techniques to leverage it. Ergo, the amount of bandwidth needed to stream 4K games at full tilt will be whatever the system provides, and then more, because there's never an excess. You may as well ask "how much processing power do you need?" The answer is "infinite." ;)

I remember that in the ReRAM thread you were actually skeptical about the potential benefit of low-latency, high-bandwidth storage to games. But you were right about ReRAM being a long shot. The no-cache design is also better when it comes to the immediacy of everything, and allows booting in a second, as per Cerny.

I can't wait to see the potential of the SSD being tapped. As for ReRAM, I'm moving the goalposts and hoping they will have it for the PS6, using NVMe 6.
 
I remember that in the ReRAM thread you were actually skeptical about the potential benefit of low-latency, high-bandwidth storage to games.
Huh? I've been a huge advocate of SSD from the very beginning, saying it would be better to invest in that than other aspects of the console.

eg. 1st September 2017:
Just making it bigger because you expect it to get bigger is poor engineering. How much memory will actually be needed, based on time to populate it and how much devs can spend to fill it? That there will give you the RAM requirement. I'd far prefer less RAM and an SSD, and I'm sure devs would too. Far more flexible.

And more generally, on system balance, from 2012 (when SSD prices were too high to be plausible):
Including any 'extras' increases cost. But if the choice is 4 GB of fast RAM, big GPU, and slow HDD, or 2 GB of fast RAM, moderate GPU, fast SSD, and slow HDD, and the SSD model gives better real-world performance on less powerful processors because it renders the techniques of the day more effectively, then it's a better investment.

Balanced systems aren't just about the most powerful single components or biggest numbers.
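To sketch that populate-time argument with purely illustrative numbers (the RAM budgets and the HDD rate here are my assumptions, not anyone's spec):

Code:
# Sketch: the RAM requirement follows from how fast storage can repopulate it.
# Storage rates and RAM budgets are illustrative assumptions.
def populate_time_s(ram_gb, storage_rate_gb_s):
    """Seconds to completely refill a RAM budget from storage."""
    return ram_gb / storage_rate_gb_s

print(populate_time_s(8.0, 0.1))   # ~100 MB/s HDD refilling 8 GB: ~80 s
print(populate_time_s(13.5, 5.5))  # 5.5 GB/s SSD refilling 13.5 GB: ~2.5 s

When storage can refill RAM in a couple of seconds, RAM only has to hold what's imminently needed, which is why less RAM plus an SSD can beat more RAM plus a slow HDD.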

My argument against ReRAM was about it being the technology used to provide the low-latency storage solution, as opposed to NAND.
 
Is this still true for low-latency, high-bandwidth persistent storage using NAND?

Artificial Intelligence (AI) and Machine Learning (ML) applications are being developed for the enterprise and consumer markets at an exponential rate, but few developers are aware that persistent memory can play a critical role in optimising access to large data sets.

AI and ML technologies create highly demanding IO (input/output) and computational workloads for GPU-accelerated Extract, Transform, Load (ETL). The key challenge developers must overcome is to reduce the overall time to discovery and insight within data-intensive applications. Varying IO and computational performance is driven by bandwidth and latency. Therefore, the high-performance data analytics needed by AI and ML applications can be addressed by persistent memory solutions that offer the highest bandwidth and lowest latency.

Can we expect the SSD to be leveraged for better AI next-gen?
 
Is this still true for low-latency, high-bandwidth persistent storage using NAND?

Can we expect the SSD to be leveraged for better AI next-gen?
Not in the sense the article is written. The article is referring to training and insight development, not execution of models, which comes after the insight and development stage.
 
If so, what are the benefits?

The patent only says that it provides a higher degree of freedom for component layout. I don't know how this impacts thermal conductivity and overall cooling.
 
Many modern PC mainboards have heatpipes and sinks wrapping around them and going through them, as a good example.

Really? I see heatpipes going VRM => rad, but never through the PCB. My guess is I took the word "through" in this patent literally?
 