PlayStation 5 [PS5] [Release November 12, 2020]

Yeah, because Vita games were entirely different SKUs. What Nintendo did with the Switch was something only they, with their catalog and unique place in the industry, could have done. They deserve tons of credit for the foresight and prep that must have been required of their teams, third parties, and development workflow in general. Even though they had stopped aiming for power parity with the other machines long before that, it must have been quite an adjustment period.

Going back to our PS5 cooling discussion, I think the hype around the cooling solution isn't just because the Pro's was a cheap solution, but more importantly because of how Cerny has hyped it up, and especially because of the extreme clocks they are promising will stay near constant most of the time. Even the Xbox Series X, which is clocked much lower in relative terms, had to fundamentally rethink console design to work around the cooling and heat dissipation issues it was facing, even with the extra leeway of a higher-CU chip.
 
The only possibility I can see for a PS5 portable is a separate platform with an independent catalog, but with the same OS and as binary compatible with the PS5 as possible to make porting extremely easy, with no mandate tying the platforms together. Anything above 10W is DOA or niche. A PS4 portable isn't possible at 7nm. Something like:

2c/4t Zen 2
9 CU RDNA2 (10 with 1 disabled)
8GB LPDDR5 (2 chips, 128-bit @ 5500)
256GB SSD
Power cap at 10W

Calling it the PS5P with the same logo: copy-paste the P with the same kerning, after spending millions on typographic research.
 
As far as I understand, from a traditional perspective when using an HDD, a game like Wipeout would essentially have the entire track in memory at any one time, including all textures (or potentially loading them in/out from the HDD) and trackside detail.

HDD: applied to a PS5, that'd be ~16GB (give or take a couple of GB depending on the OS) of track and associated data. There may be instances where textures are loaded in/out of RAM at a rate proportional to the speed of the HDD and the movement of the vehicle.

SSD: a simple implementation could have 10GB assigned to static, unchanging track data, with 6GB assigned as variable-usage RAM. This 6GB could then have significant amounts of trackside detail loaded in/out depending on the distance from the viewpoint/vehicle. The 5.5GB/s of the SSD could then be used to swap the trackside detail for every second of gameplay (likely flushed in/out at the periphery of the player). In a best-case scenario, this *could* take a track from having 16GB of data assigned to it to having something like 370GB assigned to it (10GB + 6GB*60, for a 60-second lap). It seems a little preposterous, but it's certainly a possible use of the SSD. The bottleneck would then become storage space and download speeds.
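To sanity-check that arithmetic, here's a tiny sketch (the 10GB/6GB split, the 5.5GB/s figure, and the 60-second lap are all my assumptions from above, not real PS5 budgets):

```cpp
#include <cstdio>

int main() {
    const double total_ram_gb      = 16.0;  // usable RAM, give or take OS reserve
    const double static_pool_gb    = 10.0;  // track data that never changes
    const double streaming_pool_gb = total_ram_gb - static_pool_gb;  // the 6GB variable pool
    const double ssd_gbps          = 5.5;   // quoted raw PS5 SSD throughput
    const double lap_seconds       = 60.0;

    // Refilling the whole 6GB pool once per second actually needs 6GB/s,
    // slightly above the 5.5GB/s raw figure, so treat this as an upper bound.
    const double effective_gb = static_pool_gb + streaming_pool_gb * lap_seconds;
    std::printf("Effective track data: %.0f GB (vs %.0f GB resident on HDD)\n",
                effective_gb, total_ram_gb);
    std::printf("Stream rate needed: %.1f GB/s (SSD peak: %.1f GB/s)\n",
                streaming_pool_gb, ssd_gbps);
    return 0;
}
```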

Anyone more techy-minded who can explain to me why this wouldn't be the case?

Maybe we'll see more technology for procedurally generated textures in order to avoid taking up too much SSD space.
 
Anyone more techy-minded who can explain to me why this wouldn't be the case?
Might be easier to say that both setups would likely be streaming textures as they need them, but with the traditional platter, you need to hold a larger slice of the track in memory in case something happens and the renderer suddenly has nothing to draw.

Because it needs to hold a larger slice of the track in memory, and the capacity of memory is the same (assume 16GB), each texture must be smaller. If we assume the platter speed is 1/50th the speed of the SSD solution, it may need to hold 50x more track data at any time than the SSD, just to ensure the game can run. But you're still bound by the finite capacity of the VRAM.

So 4K textures are about 8MB per texture, and 8K textures are 32MB.
The slow HDD platter could only afford to use ~160KB textures, since it needs to hold textures for a bigger portion of the track because streaming them in is much slower and less immediate than on the SSD. Meanwhile, the SSD solution can hold significantly less track data and support much larger 8MB textures in memory.
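A toy version of that scaling argument (the 50x speed ratio and the 8MB-per-4K-texture figure are assumptions carried over from above):

```cpp
#include <cstdio>

int main() {
    const double ssd_texture_mb = 8.0;   // ~4K texture, per the post above
    const double speed_ratio    = 50.0;  // HDD at ~1/50th the SSD's speed

    // If the drive is 50x slower, you hold ~50x more of the track resident,
    // so the per-texture budget shrinks by the same factor.
    const double hdd_texture_kb = ssd_texture_mb * 1024.0 / speed_ratio;
    std::printf("HDD per-texture budget: ~%.0f KB\n", hdd_texture_kb);  // ~164KB
    return 0;
}
```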
 
Sounds like you're agreeing, but considering only textures? I would have thought you'd be able to include any data: sounds, textures, polygons, etc.

The detail of the track would then be proportional to the speed of the streaming device (assuming the GPU can render that much detail).
 
Sounds like you're agreeing, but considering only textures? I would have thought you'd be able to include any data: sounds, textures, polygons, etc.

The detail of the track would then be proportional to the speed of the streaming device (assuming the GPU can render that much detail).
Yeah, I tried to use the words 'track loaded' to sort of encapsulate everything.
But there are some things, like audio, that you need to keep around based on radius even when they're not visible.
Polygons and models, I don't think the data is very large on those, relatively speaking; I could be wrong though.

We had a developer allude to how memory management is the most important thing here for extracting performance out of the console. SSDs will help by reducing the amount of memory they need to hold in reserve, just because they can rely on calling data in on shorter notice. It lets them push things to the edge, but there are going to be upper bounds on how much you can hold on screen and how much can be happening, because you'll eventually run out of VRAM capacity.
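Just to illustrate the reserve idea with made-up numbers (the demand rate and lead times below are pure assumptions, not anything a dev has quoted):

```cpp
#include <cstdio>

int main() {
    const double demand_gbps = 1.0;  // assumed worst-case new-asset demand
    const double hdd_lead_s  = 5.0;  // assumed lead time a seek-bound HDD needs
    const double ssd_lead_s  = 0.1;  // assumed lead time a fast SSD needs

    // Reserve ~ (demand rate) x (how far ahead you must ask the drive).
    std::printf("HDD-era reserve: ~%.1f GB of RAM\n", demand_gbps * hdd_lead_s);
    std::printf("SSD-era reserve: ~%.1f GB of RAM\n", demand_gbps * ssd_lead_s);
    return 0;
}
```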
 
Wipeout seems a perfect candidate for streaming off the SSD, but doesn't that wear the SSD out fast? Imagine transferring at 5GB/s almost constantly, with someone playing hours-long sessions, day after day.
 
Wipeout seems a perfect candidate for streaming off the SSD, but doesn't that wear the SSD out fast? Imagine transferring at 5GB/s almost constantly, with someone playing hours-long sessions, day after day.
Those are peak values. Just like TF, don't expect to hold the peak values for more than a blip.
 
Those are peak values. Just like TF, don't expect to hold the peak values for more than a blip.
With something like a 16K block size and the queues kept a healthy length, it can definitely maintain that peak as long as it needs to. But yeah, it won't really do this during gameplay; it will go up and down based on what happens in the game. I can certainly see one of the Spider-Man speedrun challenges keeping this constantly near the max.

TF is limited by memory access, cache misses, branching, whether the algorithm is vectorizable, how many of those instructions are MACs, etc. There are a million factors for ALU occupancy which I don't really understand, but storage is quite straightforward.
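For the block-size point, a rough sanity check: at 16K blocks, sustaining 5.5GB/s is just a matter of keeping enough requests in flight. The 100µs per-request latency below is a made-up illustrative figure:

```cpp
#include <cstdio>

int main() {
    const double target_bps  = 5.5e9;          // 5.5GB/s target
    const double block_bytes = 16.0 * 1024.0;  // 16K blocks, as above
    const double latency_s   = 100e-6;         // assumed per-request latency

    const double iops = target_bps / block_bytes;
    // Little's law: requests in flight = arrival rate x latency.
    const double queue_depth = iops * latency_s;
    std::printf("Reads/sec needed: %.0f\n", iops);                      // ~336k
    std::printf("Queue depth to hide latency: ~%.0f\n", queue_depth);   // ~34
    return 0;
}
```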
 
SSDs wear on writes. Reads should be of no concern when thinking about SSD durability.
Yeah, with the game directories being mapped as memory, it should mainly be reads only. The only time you're going to write during gameplay is to page things out of VRAM to make space.
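For illustration, this is roughly what read-only mapping looks like with plain POSIX mmap (a generic sketch, not Sony's actual API; track.pak is a made-up file name):

```cpp
#include <cstdio>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main() {
    // Hypothetical game archive; console SDKs expose their own variants.
    int fd = open("track.pak", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    // PROT_READ + MAP_PRIVATE: pages fault in from storage on demand
    // and can only ever generate reads, never write-back.
    void* data = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    std::printf("Mapped %lld bytes read-only\n", (long long)st.st_size);

    munmap(data, st.st_size);
    close(fd);
    return 0;
}
```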
 
With something like a 16K block size and the queues kept a healthy length, it can definitely maintain that peak as long as it needs to.
Game code is unlikely to hit it. Variation from one frame to the next is minimal. Once something is loaded into memory, you're not going to toss it if you have to use it for the next frame.
 
We could apply the same rules to any game: you'd have core assets that always remain in a pool of ~10GB RAM (character models and textures, physics, the rendering engine, etc.), and then essentially everything else is somewhat variable within a set radius of the player.

It means detail would drastically increase.

These are the scenarios where constant flushing of data would make the best use of the SSD's speed. Devs would then presumably need to consider which assets are static vs. changeable and what the radius should be.
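A minimal sketch of that core-pool-plus-radius idea (the 200-unit radius and the asset layout are arbitrary assumptions):

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Core assets stay resident; everything else streams in/out of the
// variable pool as the player moves. Numbers here are illustrative only.
struct Asset {
    float x, z;      // world position
    bool  resident;  // currently in the variable pool
};

void updateStreaming(std::vector<Asset>& assets, float px, float pz,
                     float radius = 200.0f) {
    for (auto& a : assets) {
        float dx = a.x - px, dz = a.z - pz;
        bool inRange = std::sqrt(dx * dx + dz * dz) <= radius;
        if (inRange && !a.resident) {
            a.resident = true;   // issue SSD read for this asset
        } else if (!inRange && a.resident) {
            a.resident = false;  // evict to free the variable pool
        }
    }
}

int main() {
    std::vector<Asset> assets = {{50, 0, false}, {500, 0, false}};
    updateStreaming(assets, 0.0f, 0.0f);
    std::printf("near: %d, far: %d\n", assets[0].resident, assets[1].resident);
    return 0;
}
```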

Those are peak values. Just like TF, don't expect to hold the peak values for more than a blip.

Is that right? I would have thought you'd be able to maintain those speeds, but I don't really have the technical knowledge to comment further.
 
A game like Wipeout is perfect for disk streaming because you can only go in two directions on the track. Predicting what you need to load is incredibly easy: if you buffer data for 1 or 2 seconds ahead of you on the track, you can probably take good advantage of disk I/O. Racing games should look MUCH nicer this gen. Hopefully the track detail will catch up to the cars; the cars honestly already look very good.
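Sketching that lookahead: since the racer can only move along the track, prefetching is just "whatever I'll reach in the next couple of seconds at my current speed" (segment sizes and the 2-second window are assumptions):

```cpp
#include <cstdio>
#include <vector>

// Track lookahead prefetch: the set of segments needed next is trivially
// predictable from position and speed. Figures are illustrative only.
struct Segment { float startMeters; float lengthMeters; };

// Returns indices of segments the player will reach within horizonSec.
std::vector<int> segmentsToPrefetch(const std::vector<Segment>& track,
                                    float playerMeters, float speedMps,
                                    float horizonSec = 2.0f) {
    float reach = playerMeters + speedMps * horizonSec;
    std::vector<int> out;
    for (int i = 0; i < (int)track.size(); ++i) {
        float s = track[i].startMeters;
        if (s > playerMeters && s <= reach) out.push_back(i);
    }
    return out;
}

int main() {
    std::vector<Segment> track = {{0, 100}, {100, 100}, {200, 100}, {300, 100}};
    // At 150m, travelling 90 m/s: prefetch segments starting within 180m.
    for (int i : segmentsToPrefetch(track, 150.0f, 90.0f))
        std::printf("prefetch segment %d\n", i);
    return 0;
}
```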
 