Assuming it didn't have an issue with the 2.4 GHz waves spewed by the microwave, I suspect it will simply shut itself off.
And refuse to turn on if it's already / still hot.
(That's also the behavior of the PS4 and PS4 Pro.)
I was considering a conventional or convection oven rather than a microwave oven. A lack of military-grade EMP protection is forgivable.
Does the PS4 enforce a shut-off past a specific ambient temperature, or does it depend on whether the APU ramps up to an unsafe temperature?
That can make a difference in the PS5's case. The cooler will have a max ambient temperature in order to ensure there's a sufficient gradient across it, and there's a limit to how over-engineered a heatsink can be to raise that ceiling (unless we're thinking chiller/Peltier). Assumptions baked into the power modeling and hot-spot protection don't hold if that ambient limit isn't kept.
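To put rough numbers on the gradient argument, here's a minimal sketch of the steady-state thermal math; the power draw, thermal resistance, and junction limit below are made-up values, not PS5 specs. Every extra degree of ambient is a degree of headroom gone, which is what a max-ambient spec is protecting.

```c
#include <stdio.h>

/* Rough steady-state model: junction temperature rises above ambient by
 * (power dissipated) x (junction-to-ambient thermal resistance).
 * All numbers below are illustrative assumptions, not PS5 specs. */
static double junction_temp_c(double ambient_c, double power_w, double r_theta_ja)
{
    return ambient_c + power_w * r_theta_ja;
}

int main(void)
{
    const double power_w    = 180.0;  /* assumed SoC package power, W   */
    const double r_theta_ja = 0.25;   /* assumed cooler resistance, C/W */
    const double t_max_c    = 95.0;   /* assumed junction limit, C      */

    for (double ambient = 20.0; ambient <= 60.0; ambient += 10.0) {
        double tj = junction_temp_c(ambient, power_w, r_theta_ja);
        printf("ambient %2.0f C -> junction %5.1f C%s\n",
               ambient, tj,
               tj > t_max_c ? "  (over limit: throttle or shut down)" : "");
    }
    return 0;
}
```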
If the PS5 is allowed to start in a too-hot environment, or ambient rises after startup, throttling into a fail-safe mode would be a reasonable response. To go back to my earlier point about how SSDs vary in how they handle abrupt power loss: depending on how Sony's custom storage hierarchy works, there's varying tolerance for an abrupt thermal shutdown.
Could it be the case, for example, that one CPU core has dedicated lanes to the memory controller/DDR4 for the OS?
And is there any difficulty in keeping one CPU core in SMT mode at a lower clockspeed, while the other seven cores run without SMT at a higher clockspeed?
A core interfaces with the internal network of the CCX that connects with the other cores and the L3. The CCX has an entry point into the on-die infinity fabric, which links to various other clients in what is probably some kind of crossbar or mesh setup. A core doesn't physically have the ability to directly link to a memory controller, and limiting the OS to a single channel can lead to difficulties. Even if the bulk of the system reserve is in one address range, there are system services and events that can deliver data to the game or synchronize with other parts of the SOC, and those relatively rare but critical events are in the purview of the OS. Having those pile up in one place by forbidding the OS from spreading its signals or shared buffers across multiple channels can lead to parts of the system stalling more than they should.
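To make the single-channel point concrete, here's a toy model of fine-grained channel interleaving; the channel count and interleave stride are assumptions for illustration, not PS5 details. A contiguous buffer naturally stripes across every channel, which is where its bandwidth comes from, so confining the OS's buffers to one channel means either a coarser mapping or lopsided traffic.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy model of fine-grained channel interleaving: consecutive 256-byte
 * blocks of the physical address space rotate across the memory channels.
 * Channel count and stride are illustrative assumptions, not PS5 details. */
#define NUM_CHANNELS      4u
#define INTERLEAVE_BYTES  256u

static unsigned channel_for_addr(uint64_t phys_addr)
{
    return (unsigned)((phys_addr / INTERLEAVE_BYTES) % NUM_CHANNELS);
}

int main(void)
{
    /* A single 4 KB buffer ends up striped across every channel, which is
     * what gives it full bandwidth; pinning it to one channel would require
     * a coarser, lopsided mapping. */
    for (uint64_t addr = 0; addr < 4096; addr += INTERLEAVE_BYTES)
        printf("offset %4llu -> channel %u\n",
               (unsigned long long)addr, channel_for_addr(addr));
    return 0;
}
```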
No, I definitely don't agree with you. If you think Mark just sat in his chair and made the presentation for the first PS5 deep dive (which has been watched 10+ million times in 3 days) without input from the marketing department, then you are kidding yourself.
You don't do a deep dive for developers and spend it explaining why TFs don't tell the whole story, or what CUs are and why it's better to have fewer of them at higher clocks.
Notice he didn't guarantee anything; he didn't say what "close" is or what "most of the time" means. This is a textbook example of wiggle room. I don't doubt variable frequency is better than fixed for them, not only for performance but also for marketing, but if by his own admission they struggled to lock at 2.0 GHz, then I have somewhat more conservative expectations of 2.23 GHz, especially since no clear guarantee, numbers, or percentages were given.
Some of the terms he was using remind me of the financial calls from the various tech companies. So perhaps not just public relations, but legal and investor relations. Whatever Sony tells the general public is also part of the information available to investors, and in this case the information may help gauge where Sony's flagship product is fitting into its market.
Boost mode is going to show its greatest performance benefits early on when the hardware is less well utilised. That's also when it's most important to not look significantly weaker than the Xbox Series X.
I wonder if this could subtly encourage developers to adopt frame-pacing measures and optimize more closely for when the GPU and CPU can idle. With made-up numbers: if a game has a 16 ms frame budget and can readily find 1-2 ms per frame where the GPU can idle, then the CPU is guaranteed max clock. Or, if the pause menu runs uncapped at 200 FPS, the developer can expect the overall power ceiling to clamp down on it, and there may be a window after exiting the menu where the game's power budget is closer to zero and clocks are reduced.
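On the pause-menu case, a minimal frame-limiter sketch (POSIX timing calls, an arbitrary 30 FPS cap, and a hypothetical render_menu callback): capping a cheap scene leaves the CPU and GPU idle for most of each frame instead of letting them spin against the power ceiling.

```c
#include <time.h>

/* Minimal frame limiter sketch (POSIX). The 30 FPS cap and the menu
 * callback are arbitrary example values, not anything platform-specific. */
static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

static void sleep_seconds(double s)
{
    if (s <= 0.0)
        return;
    struct timespec ts = { (time_t)s, (long)((s - (time_t)s) * 1e9) };
    nanosleep(&ts, NULL);
}

static void run_menu_frame(void (*render_menu)(void))
{
    const double frame_cap = 1.0 / 30.0;   /* cap the menu at 30 FPS */
    double start = now_seconds();

    render_menu();                          /* cheap CPU/GPU work */

    /* Whatever is left of the ~33 ms budget is spent sleeping, so the
     * CPU and GPU sit idle instead of burning power budget on the menu. */
    sleep_seconds(frame_cap - (now_seconds() - start));
}

static void dummy_menu(void) { /* placeholder for menu draw submission */ }

int main(void)
{
    for (int i = 0; i < 3; ++i)
        run_menu_frame(dummy_menu);
    return 0;
}
```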
While this is better known as an Intel penalty, changing vector widths incurs either performance or power penalties. AMD doesn't specifically list a penalty in the same way, but flip-flopping on vector width at intervals that trigger power gating transitions can make iffy code more power-intensive than it should be.
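A hedged sketch of what avoiding the flip-flop might look like in code; the idea that batching wide-vector work keeps the 256-bit datapath from being repeatedly woken and parked is my read of the power-gating behaviour, not documented vendor guidance.

```c
#include <immintrin.h>
#include <stddef.h>
#include <stdio.h>

/* Scalar pass over the data (e.g. bookkeeping between vector batches). */
static void scalar_pass(float *data, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        data[i] += 1.0f;
}

/* One contiguous AVX pass: scale everything by 2 in 8-wide chunks. */
static void avx_pass(float *data, size_t n)
{
    const __m256 two = _mm256_set1_ps(2.0f);
    size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 v = _mm256_loadu_ps(data + i);
        _mm256_storeu_ps(data + i, _mm256_mul_ps(v, two));
    }
    for (; i < n; ++i)          /* scalar tail */
        data[i] *= 2.0f;
}

/* Preferred structure: one long scalar phase, then one long vector phase,
 * rather than ping-ponging widths every few iterations. */
static void process(float *data, size_t n)
{
    scalar_pass(data, n);
    avx_pass(data, n);
}

int main(void)   /* build with -mavx */
{
    float data[20];
    for (size_t i = 0; i < 20; ++i)
        data[i] = (float)i;
    process(data, 20);
    printf("%f %f\n", data[0], data[19]);
    return 0;
}
```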
Without knowing how rapidly the monitoring and power emphasis can be changed, trading power priorities within a frame might be possible. The GPU could have high priority during the ramp-up phases of various shader stages or around barriers, and the CPU could get more in periods where the GPU is more clearly limited by bandwidth or by one of those "fast" SSD asset loads, when the GPU is running low on commands, or during system operations that the game or GPU may stall on. That might be too complex a task for a generic mode toggle, however.
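Purely as a thought experiment, here's what such per-phase hinting could look like; ps5_power_hint() and the enum are invented names, nothing of the sort has been confirmed for Sony's SDK, and the stub only logs so the sketch compiles on its own.

```c
#include <stdio.h>

/* Hypothetical API sketch: the names below are invented for illustration. */
typedef enum {
    POWER_FAVOR_GPU,   /* heavy shader ramp-up, just after barriers       */
    POWER_FAVOR_CPU,   /* GPU waiting on bandwidth / SSD / command supply */
    POWER_BALANCED
} power_phase_hint;

static void ps5_power_hint(power_phase_hint hint)   /* hypothetical stub */
{
    printf("power hint: %d\n", (int)hint);
}

static void render_frame(void)
{
    ps5_power_hint(POWER_FAVOR_CPU);   /* sim, IO setup, command building */
    /* ... CPU-side frame work ... */

    ps5_power_hint(POWER_FAVOR_GPU);   /* submit the heavy draw/dispatch work */
    /* ... GPU submission ... */

    ps5_power_hint(POWER_BALANCED);    /* end-of-frame housekeeping */
}

int main(void)
{
    render_frame();
    return 0;
}
```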
I'm not sure that pushing pre-built command buffers around should be that compute intensive. Why?
Cerny made note of a lot of dedicated processors that help keep the order-of-magnitude greater rate of system events the fast SSD can create off the CPU, but that doesn't mean what the PS5 CPU sees is a PS4-level of system requests. On top of giving more games the option of 60 FPS or the higher rates of VR, the system services and tools available for controlling the IO subsystem are more complex and more latency-sensitive. If games are meant to rely on these faster resources at vastly lower latencies than was assumed for a system that is frequently hard-disk bound, then navigating the system layer or setting up the resources to be used by the GPU and other clients can be a sporadic but high-performance task.
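As a toy illustration of the kind of request routing that implies (the structures, priorities, and asset names are assumptions for illustration, not Sony's IO API): latency-critical reads that something will stall on get pushed ahead of bulk background streaming.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy sketch of priority-aware IO submission; not Sony's actual API. */
typedef struct {
    const char *asset;
    uint64_t    offset;
    uint32_t    size;
    int         urgent;   /* 1 = needed this frame, 0 = background streaming */
} io_request;

#define MAX_REQUESTS 64

static io_request queue[MAX_REQUESTS];
static int queue_len;

static void submit(io_request r)
{
    /* Urgent requests are inserted ahead of all non-urgent ones. */
    int pos = queue_len;
    if (r.urgent) {
        pos = 0;
        while (pos < queue_len && queue[pos].urgent)
            ++pos;
        memmove(&queue[pos + 1], &queue[pos],
                (size_t)(queue_len - pos) * sizeof(io_request));
    }
    queue[pos] = r;
    ++queue_len;
}

int main(void)
{
    submit((io_request){ "distant_terrain", 0,       1 << 20,   0 });
    submit((io_request){ "player_texture",  4096,    64 << 10,  1 });
    submit((io_request){ "ambient_audio",   1 << 16, 256 << 10, 0 });

    for (int i = 0; i < queue_len; ++i)
        printf("%d: %-16s urgent=%d size=%u\n",
               i, queue[i].asset, queue[i].urgent, queue[i].size);
    return 0;
}
```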
Any game worth its salt is going to spread its workload across at least four cores, so I don't see a ton of value in comparing to single-core boost speeds.
Zen 2 has a much less abrupt clocking curve than Zen did, and what the other cores are doing matters before it's known what clock speed the boost algorithm will select across all of them.
System locks or service requests that hold up other cores or other clients could benefit from an upclock so that the blockage is resolved quickly.