PlayStation 5 [PS5] [Release November 12, 2020]

Maybe some of it, if the developer is that short on resources. The system's 3D audio wavefront would consume a large fraction of Tempest's throughput. 8 Zen 2 cores at 3.5 GHz are just shy of 0.9 TF, and the GPU is just short of 10.3 TF.
If a developer could use all of Tempest, that's 11% of CPU peak and 1% of the GPU, but I don't think the system-reserved features would allow that. With likely only a single-digit percentage of CPU capability and less than a percent of the GPU on offer, would developers really be so tightly resource-constrained that they'd look to Tempest?
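To make the percentages above concrete, here's the back-of-envelope arithmetic, assuming (my assumption, not a stated spec) that Zen 2 sustains 32 FP32 FLOPs per cycle per core (2x 256-bit FMA), and reading the "11% of CPU / 1% of GPU" wording as implying Tempest is roughly a 0.1 TF unit:

```python
# Back-of-envelope peak-throughput figures behind the percentages above.
# Assumption (mine): Zen 2 does 32 FP32 FLOPs/cycle/core (2x 256-bit FMA).

CPU_CORES = 8
CPU_CLOCK_GHZ = 3.5
FLOPS_PER_CYCLE = 32

cpu_peak_tf = CPU_CORES * CPU_CLOCK_GHZ * FLOPS_PER_CYCLE / 1000
print(f"CPU peak: {cpu_peak_tf:.3f} TF")  # 0.896 TF, "just shy of 0.9"

GPU_PEAK_TF = 10.28  # "just short of 10.3 TF"

# "11% of CPU peak and 1% of the GPU" both point at roughly a 0.1 TF unit:
tempest_tf = 0.10
print(f"vs CPU: {tempest_tf / cpu_peak_tf:.0%}")  # 11%
print(f"vs GPU: {tempest_tf / GPU_PEAK_TF:.1%}")  # 1.0%
```

Both ratios land on the figures quoted in the post, which is why ~100 GFLOPs is a reasonable reading of Tempest's size.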

I didn't think audio processing required that many CPU resources. I recall it using barely a few percent of the CPU back in the days of generic AC97-level hardware. I'm not sure exactly where I saw that, but I recall it being somewhere around the ESS Technology era, before Creative dominated the market. Obviously CPU performance has increased since then, so it shouldn't require more. Or is this ratcheted up several-fold for more elaborate effects?
 
It guarantees quality of service. The 3D audio is a constant stream of high-priority data, updated every 5 ms or so for hundreds of sound sources. It needs to complete on time, every time. Accelerating it all in one hardware unit is a clean and efficient way to contain the problem; otherwise they'd need to handle unexpected edge cases in resource contention.
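The "every 5 ms, hundreds of sources" framing implies a hard real-time budget. A rough sketch of the arithmetic, assuming (my assumptions) 48 kHz output and a round 300 sources:

```python
# Rough real-time budget for the audio workload described above.
# Assumptions (mine): 48 kHz output; every source is processed once per block.

SAMPLE_RATE_HZ = 48_000
BLOCK_MS = 5
NUM_SOURCES = 300  # "hundreds of sound sources": pick a round number

samples_per_block = SAMPLE_RATE_HZ * BLOCK_MS // 1000
print(samples_per_block)  # 240 samples per source per 5 ms block

per_block = samples_per_block * NUM_SOURCES
print(per_block)  # 72000 source-samples that must finish before each deadline

blocks_per_second = 1000 // BLOCK_MS
print(per_block * blocks_per_second)  # 14400000 source-samples per second
```

Missing even one 5 ms deadline is an audible glitch, which is exactly the edge-case-in-contention problem a dedicated unit with guaranteed cycles sidesteps.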

The amount of processing power isn't really the main problem. If they needed more, they would have used a bigger unit. They believe what they have now exceeds their needs; there's enough "buffer" in case things boil over.

With that headroom, they're also comfortable letting developers use Tempest for other purposes.
 
So, regardless of bill of materials: Sony Plans Limited PlayStation 5 Output in First Year

This screams to me "we're going to charge $500 or so at launch," then hope to quickly bring costs down a bit and charge $450 or so in... X amount of time, whatever X is. How to do that second part? A 6nm die tapeout, better GDDR6 supply, etc. Whatever. The point is it seems possible they'll charge superfans more at launch, then go after the more mainstream market. It probably also fits with trying to time the big virus recession.
 
I think it's more that they're behind MS in terms of hardware. Rumors put MS as having functional Xbox Series X hardware before the holidays, with Phil having one at home early this year. I'm sure that with everyone working remotely and the shutdowns rippling through China and other parts of the world, it's hard to wrap a system up and have it ready in large quantities for a worldwide fall launch. Also, if the rumors of the dev kits having heat issues and Sony spending more on cooling to fix it are true, that could also have affected their timeline.

I also think that with the worldwide depression we're entering, it won't really matter whether Sony or Microsoft can make a lot of $500 consoles; I don't think as many people will be buying them. That's also why, if Lockhart exists, Microsoft may have stumbled their way into a huge advantage: a machine capable of next-gen games and graphics at a reduced price can really help them out.
 
Anticipating the notice "Entire lot of price prediction and social analysis moved to a dedicated thread": the lockdown is the perfect storm for selling consoles.
At this point, they could even launch in the middle of the summer at any price, and people stuck at home, bored, will be hoping to buy anything to alleviate the lockdown.
There'll be a crisis in some sectors and governments will have to intervene, while other sectors will continue to work and earn from home, with no cinema, no holidays, nothing but their precious home entertainment.
By this fall, governments will be parachuting PS5s and XSXs into people's houses as a survival measure.
 
They made that mistake with Cell. They found out gaming's needs were too different, and that it would be more cost-effective to use other solutions in the relevant industries.

It seems this "GPU coprocessor" will face the same challenges.

Yes, but I was thinking more of buying the audio software development company and doing the HRTF mapping, etc. That might have some use outside PlayStation.
I also doubt the hardware can be repurposed in any meaningful way.
 
So, anyone want to hazard an educated guess at how much faster the PS5 is than the 5700 XT, taking into account RDNA 2's efficiency gain per watt? Which Nvidia card would it roughly be on par with?
 
In raw TF numbers they're about on par; the 5700 XT is actually around 10 TF if we assume max theoretical performance (boost clocks). The PS5 GPU and the 5700 XT are rather similar in bandwidth (448 GB/s), number of CUs (36), and, to some extent, clocks; most 5700s clock rather high, and the PS5 GPU is clocked higher at its max. Both are 7nm as well.
But that's raw power; we don't know yet how much RDNA 2 improves in efficiency, so don't expect any magical numbers like 50%.
It will mostly be the features that differ, like ray tracing, audio hardware, primitive shading, etc., that add to it, and thereby perhaps performance.
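The "about on par" claim checks out from the public shader counts and clocks. A quick sketch, treating boost clocks as best-case and using the standard 2 FLOPs per shader per cycle (FMA):

```python
# Theoretical FP32 throughput for the two GPUs being compared.
# Boost clocks are treated as best-case ("max theoretical performance").

def peak_tf(shaders: int, clock_ghz: float) -> float:
    # 2 FLOPs per shader per cycle (fused multiply-add)
    return shaders * 2 * clock_ghz / 1000

rx_5700_xt = peak_tf(2560, 1.905)  # 2560 shaders, 1.905 GHz boost
ps5_gpu = peak_tf(2304, 2.23)      # 36 CUs x 64 lanes, 2.23 GHz cap

print(f"5700 XT: {rx_5700_xt:.2f} TF")  # 9.75 TF, "around 10TF"
print(f"PS5:     {ps5_gpu:.2f} TF")     # 10.28 TF
```

So the PS5 gets there with fewer shaders at a higher clock, which is why the comparison hinges mostly on RDNA 2's per-clock and feature improvements rather than raw TF.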

As for comparing to NV, well, it's kind of hard to compare. We shouldn't forget DLSS 2.0/tensor cores and the other Turing features like VRS. We know too little to compare, but I'd guess a vanilla 2070 would be close (remember that the stated TFs aren't the theoretical max performance numbers), or a 2060 if DLSS is used. Those are the 2018 GPUs we're talking about, then.

Soon the next NV GPUs will launch, DLSS will improve over time, and the RTX 3000 series will have considerable architecture/efficiency improvements too, together with the next stage of ray tracing hardware.
I'd guess that full-fat RDNA 2 GPUs will still be trailing NV's latest offerings, but closer than before at least. GPUs close to 20 TF aren't unlikely within the coming year or two.

The situation isn't that different from the 2013 console GPUs: with their 7850/70 variants they were mid-tier, or maybe upper-mid-tier, at the time. On the GPU side, we're looking at the same thing this time around.
The CPU is another story, as it isn't gimped this time, even though the consoles are on Zen 2 instead of Zen 3 and clocked lower. I think that, from 2021 onwards, 12-core Zen 3 CPUs won't be strange for a higher-end PC.
Around 16 GB of HBM2 or GDDR6 VRAM will become the norm for higher-end GPUs in time, with anything between 16 and 32 GB of system RAM.

It's the SSD where the consoles, and with them the PS5, are keeping up with the PC this time. We'll be seeing 7 GB/s solutions on PC by then; it's DirectStorage/the Velocity Engine that will have to prove themselves in supporting those drives in the Windows 10 ecosystem.

Of course, that's my try at an educated guess; it might differ from others' perspectives. We know too little to conclude anything accurate.
 
My question revolves around what could be done with the remaining fraction of the Tempest CU's throughput, once the system-controlled 3D audio wavefront takes its cut. In the case of audio, many of the instances that have been discussed had low consumption.
However, even if a developer did have an audio pipeline that needed a small amount of throughput, does Tempest's form of compute justify the effort to leverage it, particularly if they already have the CPU path?
If audio does get a boost in throughput, it would more quickly reach a ceiling with Tempest than on the CPU. Being forced to straddle Tempest and the CPU if they overwhelm it is a larger headache and source of complexity than staying in the CPU or GPU pool where there's orders of magnitude more capacity.
The CPU would be the most flexible in terms of programmability, and there would be a higher probability that resources would be available to develop and maintain code on the CPU than on a more niche not-GPU DMA-based compute unit.
Audio loads on the GPU would have more total throughput to work with, which may be enough to justify the more constrained programming model.
Tempest has none of the throughput advantages, similar batching requirements, little existing infrastructure or coder pool, and an additional DMA-management consideration for the programmer.

Is there a value-add or hook to using Tempest that can provide a benefit besides its small to rounding-error throughput?
If the question came down to whether a game could reduce some effect by 2%, or find 1% of slack time somewhere in the CPU/GPU, or develop for a tiny compute unit with limited throughput and incompatible model, how often is the last option a winner?
 
Maybe if the advantage of the Tempest architecture is its low latency and guaranteed operation time, it could be useful for VR, and not just the 3D audio aspect. Tracking processing has critical latency requirements; in fact, I can't think of anything in gaming where low latency and a guaranteed deadline are more important.
 
Tracking processing has critical latency requirements; in fact, I can't think of anything in gaming where low latency and a guaranteed deadline are more important.

Trophy Completion and TimeStamping ...
 
In this link, a few quotes are put on screen that allegedly come from someone working for one of Sony's suppliers:
The source suggested that the PS5's SSD controller, a 12-channel part (confirmed?), would actually easily exceed 5.5 GB/s. The source therefore thought Sony might be using relatively cheaper flash chips. How much cheaper that would make the SSD I don't know, and of course the R&D plus die area of the bespoke controller might just offset that, but who knows.
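The logic of "12 channels, so cheaper flash still exceeds 5.5 GB/s" comes down to aggregate channel bandwidth. A sketch, with a hypothetical per-channel rate for illustration (the actual flash interface speed isn't public):

```python
# Why a 12-channel controller can exceed 5.5 GB/s even with slower flash.
# The per-channel rate is a hypothetical, ONFI-class figure, not a spec.

CHANNELS = 12
PER_CHANNEL_MB_S = 533  # hypothetical per-channel transfer rate

aggregate_gb_s = CHANNELS * PER_CHANNEL_MB_S / 1000
print(f"{aggregate_gb_s:.2f} GB/s")  # 6.40 GB/s raw, above the 5.5 GB/s target
```

With that kind of headroom, the controller can tolerate slower (cheaper) flash on each channel and still hit the quoted 5.5 GB/s.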
added:
The haptic feedback and such must cost significantly more than the rumble in the PS4 controller. The retail price might be kind of high, so they might take that into account when pricing the system itself.

This has a lot of interesting information
 
Well, speculation and information, yeah. This came out with the wave of dev-centered comments about how the PS5 is darn quick, just after the Cerny reveal. I won't vouch for anything else on the channel, but it was interesting to me at least.
 

True, it could be GamersNexus-worthy material. I'd put more trust in Digital Foundry, who have actual contact with the people behind these consoles themselves.
 
Designing a game around an ultra-fast SSD like the one in the PS5 is unprecedented; devs are in uncharted territory, so it will be quite interesting to see what they come up with in terms of pushing super-detailed models, textures, density, LOD management, streaming, and god knows what else. With so many devs getting excited about it, I'm sure there's more merit here than just reduced loading times; maybe Sony could do a new Rubber Duck demo to demonstrate its capabilities.
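One way to see what "designing around the SSD" means in practice is to put the quoted raw rate into per-frame terms, i.e. how much fresh data a game could pull in during a single frame:

```python
# Per-frame streaming budget at the PS5's quoted 5.5 GB/s raw read rate.
# Real-world figures depend on compression and access patterns; this is
# just the raw-rate upper bound.

SSD_GB_S = 5.5

for fps in (30, 60):
    mb_per_frame = SSD_GB_S * 1000 / fps
    print(f"{fps} fps: ~{mb_per_frame:.0f} MB of fresh data per frame")
# 30 fps: ~183 MB per frame; 60 fps: ~92 MB per frame
```

Tens to hundreds of megabytes of new assets per frame is what makes just-in-time streaming of models and textures (rather than preloading whole levels into RAM) plausible.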
 
I think a lot of people would call the PS4 slow based on the GPU specs alone, particularly in the PC realm. But how powerful the machine will end up being as the sum of its parts is very hard to predict right now.

EDIT: what he said ;)
 