PlayStation 5 [PS5] [Release November 12, 2020]

I’m still deciding if I should get a PS5 on top of upgrading my PC. It’s just a lot of decisions to be made :)
My PC still has a GTX 780 and an i5 of that era; I will probably upgrade to a 3080 and Zen 3, which will be the biggest jump in the history of my PC.

I wish we had advance knowledge of whether there will be a PS5 Pro...
 
My PC still has a GTX 780 and an i5 of that era; I will probably upgrade to a 3080 and Zen 3, which will be the biggest jump in the history of my PC.

I wish we had advance knowledge of whether there will be a PS5 Pro...
Hehe. Yea. New mobo, CPU, memory, and if you want to keep up with the PS5 you need to find a PCIe 4.0 SSD. I'm also interested in getting a 3xxx-series card, upgrading from a 1070, but the price point scares me; the 2080 Ti is well over $1000. I can’t imagine the 3080 being cheaper.

Someone wrote about newer, faster SSDs coming soon, but my worry is that they'll be PCIe 5.0.

It's a waiting game right now to figure out what my next steps are going to be.
 
Any game worth its salt is going to spread its workload across at least four cores, so I don't see a ton of value in comparing to single-core boost speeds.

I don't think that was the case last gen.
Most of the shit came flying from the gameplay code (single-threaded Lua stuff and similar).
 
Hehe. Yea. New mobo, CPU, memory, and if you want to keep up with the PS5 you need to find a PCIe 4.0 SSD. I'm also interested in getting a 3xxx-series card, upgrading from a 1070, but the price point scares me; the 2080 Ti is well over $1000. I can’t imagine the 3080 being cheaper.

Someone wrote about newer, faster SSDs coming soon, but my worry is that they'll be PCIe 5.0.

It's a waiting game right now to figure out what my next steps are going to be.
Not buying any NVMe drive until we know which ones will work with the PS5!

It was simpler when I got the PS4; I used the 960 GB SSD from my laptop. That won't work this time around.
 
I guess we’ll find out how open Mark Cerny was when we see what “a few percent” ends up meaning.

Where did Mark Cerny use the expression "few percent"?

From what I heard in the video and read in DF's article, he said reducing the GPU clock by a couple percent would result in 10% lower power consumption.
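
Back-of-the-envelope, that checks out if you assume the usual P ≈ f·V² dynamic-power relation and that the small downclock also unlocks a few percent lower voltage. The voltage figure in this C sketch is my own guess, not something from the talk:

#include <stdio.h>

int main(void) {
    double f = 0.98;  /* clock lowered by a couple percent */
    double v = 0.96;  /* assumed ~4% voltage drop the downclock permits */
    printf("relative power: %.2f\n", f * v * v);  /* ~0.90, i.e. ~10% saved */
    return 0;
}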
 
It's a waiting game right now to figure out what my next steps are going to be.

The best option is to wait a bit into 2021; prices will be much more reasonable for parts that outclass the consoles.

Most of the shit came flying from the gameplay code (single-threaded Lua stuff and similar).

Depends on whether SMT could be disabled during gameplay (and the higher clock used where needed); with today's power-saving tech that wouldn't sound too crazy.
 
Where did Mark Cerny use the expression "few percent"?

From what I heard in the video and read in DF's article, he said reducing the GPU clock by a couple percent would result in 10% lower power consumption.

Dumb nitpick, but anyway, the whole point of my post was that we'll see how open he was about the specs once we see what the clock impacts really are.
 
I wish we had advance knowledge of whether there will be a PS5 Pro...
Isn't this kind of baked into the cake? MS will be coming out with their next one in the series in four or so years, so you'll need something at that point.
You can basically see how doubling up the CUs like with the PS4 Pro, sorting out the RT situation, and making a run at 8K-ish 60 fps would likely just seem a reasonable product.
 
Dumb nitpick, but anyway,
It's not a dumb nitpick because had Cerny said "a few" it'd be open to the possibility of needing 10% lower clocks to reach 10% lower power. It's not 10%, it's 2% clocks in exchange for 10% lower power.
It's not like we're lacking people grasping at straws here to suggest the GPU will rarely be working at above 2GHz, even after Sony stating as such.

And to be clear, I'm not talking about you.

the whole point of my post was that we'll see how open he was about the specs once we see what the clock impacts really are.
We'll probably never know this, though.
 
It's not a dumb nitpick because had Cerny said "a few" it'd be open to the possibility of needing 10% lower clocks to reach 10% lower power. It's not 10%, it's 2% clocks in exchange for 10% lower power.
It's not like we're lacking people grasping at straws here to suggest the GPU will rarely be working at above 2GHz, even after Sony stating as such.

And to be clear, I'm not talking about you.


We'll probably never know this, though.

We might if the PS5 somehow gets jailbroken.
 
Assuming it didn't have an issue with the 2.4 GHz waves spewed by the microwave, I suspect it will simply shut itself off.

And refuse to turn on if it's already / still hot.

(that's also the behavior of the PS4 and PS4 Pro)
I was considering a conventional or convection oven rather than a microwave oven. A lack of military-grade EMP protection is forgivable.
Does the PS4 enforce a shut-off past a specific ambient temperature, or does it depend on whether the APU ramps up to an unsafe temperature?
That can make a difference in the PS5's case. The cooler will have a max ambient temperature in order to ensure there's a sufficient gradient across it, and there's a limit to how over-engineered a heatsink can be to raise that (unless we're thinking chiller/Peltier). Assumptions baked into the power modeling and hot-spot protection don't hold if that limit isn't kept.
If the PS5 is allowed to start above that limit, or ambient rises after startup, throttling in a fail-safe mode would be a reasonable solution. To go back to when I was discussing how SSDs vary in handling abrupt power loss: depending on how Sony's custom storage hierarchy works, there's varying tolerance for an abrupt thermal shutdown.

Could it be the case, for example, that one CPU core has dedicated lanes to the memory controller/DDR4 for the OS?

And is there any difficulty in keeping one CPU core in SMT mode at a lower clockspeed, while the other seven cores run without SMT at a higher clockspeed?
A core interfaces with the internal network of the CCX that connects with the other cores and the L3. The CCX has an entry point into the on-die infinity fabric, which links to various other clients in what is probably some kind of crossbar or mesh setup. A core doesn't physically have the ability to directly link to a memory controller, and limiting the OS to a single channel can lead to difficulties.

Even if the bulk of the system reserve is in one address range, there are system services and events that can deliver data to the game or synchronize with other parts of the SOC, and those relatively rare but critical events are in the purview of the OS. Having those pile up in one place by forbidding the OS from spreading its signals or shared buffers across multiple channels can lead to parts of the system stalling more than they should.

No, I definitely don't agree with you. If you think Mark just sat in his chair and made the presentation for the first deep dive into the PS5 (which has been watched 10+ million times in 3 days) without input from the marketing dept, then you are kidding yourself.

You don't do a deep dive for developers and talk about why TFs don't tell the whole story, or what CUs are and why it's better to have fewer of them at higher clocks.

Notice he didn't guarantee anything; he didn't say what "close" is or what "most of the time" means. This is a textbook example of wiggle room. I don't doubt variable frequency is better than fixed for them, not only for performance but also for marketing, but if by his own admission they struggled to lock at 2.0 GHz, then I have a bit more conservative expectations of 2.23 GHz, especially since no clear guarantees, numbers, or percentages were given.
Some of the terms he was using remind me of the financial calls from the various tech companies. So perhaps not just public relations, but legal and investor relations. Whatever Sony tells the general public is also part of the information available to investors, and in this case the information may help gauge where Sony's flagship product is fitting into its market.

Boost mode is going to show its greatest performance benefits early on when the hardware is less well utilised. That's also when it's most important to not look significantly weaker than the Xbox Series X.
I wonder if this could subtly encourage developers to adopt frame-pacing measures and more closely optimize when the GPU and CPU can idle. With made-up numbers: in a scenario where a game has a 16 ms frame budget and can readily find 1-2 ms where the GPU can idle, the CPU is guaranteed max clock. Or if the pause menu runs uncapped at 200 FPS, the developer can expect the overall power ceiling to clamp down on it, and there may be a period of reduced clocks for a while after exiting the menu.
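
To make those made-up numbers concrete, here's a minimal C sketch of that kind of deliberate idling; the 60 FPS budget and the 1 ms threshold are just the invented figures from above:

#include <stdio.h>
#include <time.h>

/* Returns a monotonic timestamp in milliseconds. */
static double now_ms(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
}

int main(void) {
    const double budget_ms = 1000.0 / 60.0;  /* ~16.6 ms frame budget */
    for (int frame = 0; frame < 5; ++frame) {
        double start = now_ms();
        /* ... the frame's actual CPU work would go here ... */
        double spent = now_ms() - start;
        double idle  = budget_ms - spent;
        if (idle > 1.0) {  /* bank 1+ ms of guaranteed idle for the power budget */
            struct timespec nap = { 0, (long)(idle * 1e6) };
            nanosleep(&nap, NULL);  /* sleep rather than busy-wait to the vsync */
        }
        printf("frame %d: %.2f ms work, %.2f ms idle\n", frame, spent, idle);
    }
    return 0;
}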

While this is better known as an Intel penalty, changing vector widths incurs either performance or power penalties. AMD doesn't specifically list a penalty in the same way, but flip-flopping on vector width at intervals that trigger power gating transitions can make iffy code more power-intensive than it should be.
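
A contrived C illustration of the flip-flopping pattern (x86 AVX intrinsics, nothing PS5-specific, and both function names are mine):

#include <immintrin.h>
#include <stddef.h>

/* Bad pattern: one wide 256-bit op, then a long scalar stretch, repeated.
   The upper SIMD lanes can power-gate during each scalar stretch and must
   wake again for the next wide op, paying a transition every iteration. */
void flip_flop(float *a, float *b, size_t n) {
    const __m256 two = _mm256_set1_ps(2.0f);
    for (size_t i = 0; i + 8 <= n; i += 8) {
        _mm256_storeu_ps(a + i, _mm256_mul_ps(_mm256_loadu_ps(a + i), two));
        for (int j = 0; j < 100000; ++j)  /* long scalar-only stretch */
            b[i] += 1.0f;
    }
}

/* Friendlier pattern: batch all the wide work into one phase and the scalar
   work into another, so the width transition happens once per phase. */
void batched(float *a, float *b, size_t n) {
    const __m256 two = _mm256_set1_ps(2.0f);
    for (size_t i = 0; i + 8 <= n; i += 8)
        _mm256_storeu_ps(a + i, _mm256_mul_ps(_mm256_loadu_ps(a + i), two));
    for (size_t i = 0; i + 8 <= n; i += 8)
        for (int j = 0; j < 100000; ++j)
            b[i] += 1.0f;
}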

Without knowing how rapidly the monitoring and power emphasis can be changed, maybe trading power priorities during a frame is possible. The GPU can have high priority during the ramp-up phases of various shader stages or around barriers, and the CPU can get more in periods where the GPU is more clearly limited by bandwidth or one of those "fast" SSD asset loads, when the GPU is running low on commands, or with system operations that the game or GPU may potentially stall on. That might be too complex a task for a generic mode toggle, however.

I'm not sure that pushing pre-built command buffers around should be that compute intensive. Why?
Cerny made note of a lot of dedicated processors that help keep the order-of-magnitude greater rate of system events the fast SSD can create off the CPU, but that doesn't mean that what the PS5 CPU sees is a PS4 level of system requests. On top of giving more games the option of 60 FPS or the higher rates of VR, the system services and tools available for controlling the IO subsystem are more complex and more latency-sensitive. If games are meant to rely on these faster resources at vastly lower latencies than was assumed for a system that is frequently hard-disk bound, then navigating the system layer or setting up the resources to be used by the GPU and other clients can be a sporadic but high-performance task.

Any game worth its salt is going to spread its workload across at least four cores, so I don't see a ton of value in comparing to single-core boost speeds.
Zen 2 has a much less abrupt clocking curve than Zen did, and it can matter what the other cores are doing before it is known what the boost algorithm will select for a clock speed across all of them.
System locks or service requests that hold up other cores or other clients could benefit from an upclock so that the blockage is resolved quickly.
 
I've always thought that the 4x multiplier seems conservative.

Given IPC, clocks, architectural differences, additional instruction sets, and SMT at least. Factor in I/O and audio being almost entirely offloaded too, and I'd be thinking closer to 5-6x.
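
Rough math behind that, where the ~2x IPC gain and ~1.25x SMT uplift are my ballpark assumptions for Jaguar-to-Zen 2, not official figures:

#include <stdio.h>

int main(void) {
    double clocks = 3.5 / 1.6;  /* PS5 Zen 2 peak vs PS4 Jaguar clock */
    double ipc    = 2.0;        /* assumed per-clock gain, Jaguar -> Zen 2 */
    double smt    = 1.25;       /* assumed throughput uplift from SMT */
    printf("clocks x IPC = %.1fx\n", clocks * ipc);        /* ~4.4x */
    printf("plus SMT     = %.1fx\n", clocks * ipc * smt);  /* ~5.5x */
    return 0;
}

The I/O and audio offload would come on top of that, which is how you get to 5-6x.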
 
It's not a dumb nitpick because had Cerny said "a few" it'd be open to the possibility of needing 10% lower clocks to reach 10% lower power. It's not 10%, it's 2% clocks in exchange for 10% lower power.
It's not like we're lacking people grasping at straws here to suggest the GPU will rarely be working at above 2GHz, even after Sony stating as such.

And to be clear, I'm not talking about you.


We'll probably never know this, though.
We might know one day.
https://www.resetera.com/threads/nx...a-new-generation-is-born.176121/post-30071443

We know that the information is being discussed.
I will say, what Alex is alluding to in this thread is based on discussions with multiple people doing actual work on this box so keep that in mind...
 
They would still limit the allowed ambient temperature to 35C or 40C; anything more is a waste of material for a consumer product. There are sensors on parts, or close to parts, with limits extrapolated from those sensors to first spin up the fan to stay within the margins, or to trigger an overheat condition if the limits are reached. It's not really about caring for the ambient temp, just the operating temp of parts without a sensor (power capacitors, MOSFETs, ABS plastic melting at 105C) and those with internal sensors.
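
As a toy C sketch of that kind of logic, keyed off a single internal sensor the way consoles reportedly use the APU temp; every threshold and the linear curve here are invented:

#include <stdio.h>

/* Toy fan controller keyed off one internal sensor, with limits chosen to
   protect the sensor-less parts by margin. All numbers invented. */
static int fan_percent(double apu_c) {
    if (apu_c < 50.0) return 30;  /* idle floor */
    if (apu_c < 80.0)             /* linear ramp from 30% to 100% */
        return 30 + (int)((apu_c - 50.0) / 30.0 * 70.0);
    return 100;
}

int main(void) {
    const double overheat_c = 90.0;  /* extrapolated shutdown limit */
    double samples[] = { 45.0, 62.0, 85.0, 92.0 };
    for (int i = 0; i < 4; ++i) {
        if (samples[i] >= overheat_c)
            printf("%.0fC: overheat condition, shut down\n", samples[i]);
        else
            printf("%.0fC: fan at %d%%\n", samples[i], fan_percent(samples[i]));
    }
    return 0;
}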

I suppose the deterministic consumption model would have to use operation counters for every section of the chip, with a cumulative extrapolation from some database of how much energy each operation consumes. That way, any given sequence of operations through the whole system ends up with exactly the same sum over a slice of time to trigger a downclock, regardless of small variations in silicon or the real voltages. It could allow short-term peaks too, if it permits a deficit and the rails have enough capacitance.

If they cap this at, say, 175W, they have a much more consistent cooling solution than "it depends, let's guess the worst that will happen and add 10%". Everything is designed for 175W at whatever altitude and up to 35C ambient. There is no excuse for anything but a perfect cooling design.
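
A minimal C sketch of that deterministic model, reusing the hypothetical 175W cap; the block names, per-op energies, and the 1 ms slice are all fabricated:

#include <stdio.h>
#include <stdint.h>

enum { BLK_ALU, BLK_TEX, BLK_ROP, NUM_BLOCKS };

/* Per-op energy table (picojoules): fabricated stand-ins for the
   characterization database such a model would be built from. */
static const double energy_pj[NUM_BLOCKS] = { 15.0, 30.0, 25.0 };

int main(void) {
    /* Activity counters accumulated over one 1 ms slice (also made up). */
    uint64_t counters[NUM_BLOCKS] = { 11000000000ULL, 500000000ULL, 400000000ULL };
    const double slice_s  = 0.001;            /* evaluation window */
    const double budget_j = 175.0 * slice_s;  /* the 175W cap over that window */

    double total_j = 0.0;
    for (int b = 0; b < NUM_BLOCKS; ++b)
        total_j += counters[b] * energy_pj[b] * 1e-12;

    /* Identical counter totals always produce the identical sum, regardless
       of silicon lottery or real voltages, so every unit downclocks at the
       same point. */
    if (total_j > budget_j)
        printf("%.3f J > %.3f J budget: step clocks down\n", total_j, budget_j);
    else
        printf("%.3f J within %.3f J budget\n", total_j, budget_j);
    return 0;
}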
 
It's not a dumb nitpick because had Cerny said "a few" it'd be open to the possibility of needing 10% lower clocks to reach 10% lower power. It's not 10%, it's 2% clocks in exchange for 10% lower power.
It's not like we're lacking people grasping at straws here to suggest the GPU will rarely be working at above 2GHz, even after Sony stating as such.
And to be clear, I'm not talking about you.
We'll probably never know this, though.
We'll have a good idea: if it's not running at 2.23 most of the time, it will consistently render 30% fewer pixels per second in third-party games.

And we'll know it's at 2.23 all the time if third-party games are showing about 15% fewer pixels per second.

Other improvements and bottlenecks will rear their ugly heads, though; the memory bottleneck might be dominant?
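
For reference, the compute-ratio arithmetic behind those kinds of percentages, using the announced 52 CU @ 1.825 GHz and 36 CU @ 2.23 GHz. The 2.0 GHz case is just an illustrative sustained clock, and pixel throughput won't track TFs exactly, so the 30% figure above folds in more than this raw ratio:

#include <stdio.h>

int main(void) {
    double xsx    = 52 * 1.825;  /* relative compute, Xbox Series X */
    double ps5_hi = 36 * 2.23;   /* PS5 at its top clock */
    double ps5_lo = 36 * 2.00;   /* PS5 at a hypothetical sustained clock */
    printf("PS5 @ 2.23 GHz: %.0f%% of XSX\n", 100 * ps5_hi / xsx);  /* ~85% */
    printf("PS5 @ 2.00 GHz: %.0f%% of XSX\n", 100 * ps5_lo / xsx);  /* ~76% */
    return 0;
}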
 
My son, who plays GTA5 a lot, considers the idea that this game would load in 1 second sufficient to make the PS5 an insta-buy, and he offered to pay half of it 😂

Come on, I remember you posting (I think I do at least) that you looked forward to your kid getting older so he could hold the controller and play with you. HE IS NOT 18 YET, IS HE? :D
 
I was considering a conventional or convection oven rather than a microwave oven. A lack of military-grade EMP protection is forgivable.
Does the PS4 enforce a shut-off past a specific ambient temperature, or does it depend on whether the APU ramps up to an unsafe temperature?
That can make a difference in the PS5's case. The cooler will have a max ambient temperature in order to ensure there's a sufficient gradient across it, and there's a limit to how over-engineered a heatsink can be to raise that (unless we're thinking chiller/Peltier). Assumptions baked into the power modeling and hot-spot protection don't hold if that limit isn't kept.
If the PS5 is allowed to start above that limit, or ambient rises after startup, throttling in a fail-safe mode would be a reasonable solution. To go back to when I was discussing how SSDs vary in handling abrupt power loss: depending on how Sony's custom storage hierarchy works, there's varying tolerance for an abrupt thermal shutdown.

Dunno whether the PS5 will use an ambient temp sensor or not. On the PS4, afaik, they use the APU temp for fan speed and overheat detection.

Someone also changed the fan curve based on APU temp: https://github.com/toxxic407/PS4-fan-control-payloads
 
We might if the PS5 somehow gets jailbroken.

Also maybe by monitoring it by proxy: if there are uncapped, non-vsynced PS5 games, we can monitor temperature, power consumption, and framerate.

Rotate the camera to the sky or the ground and see what changes. We won't know the exact clocks, but maybe we can see whether it throttles and when.
 