Current Generation Hardware Speculation with a Technical Spin [post launch 2021] [XBSX, PS5]

Status
Not open for further replies.
The actual culprit behind 360 was apparently crappy solder, not inadequate cooling.

IIRC, it was both crappy solder (manufacturers still not accustomed to lead free solder) and inadequate cooling.

The cooling system itself was adequate to cool the chips, but the solder wasn't good enough to handle the thermal-cycle stress of very high temperatures.

Microsoft revised it with a better heatsink (and a newer X-clamp), so the max temperature dropped and the mounting pressure improved. That reduced RRoD but didn't fix it. IIRC it was the Jasper or Falcon revision... can't remember.

Then came an even cooler X360 that was basically immune to RRoD. It also shipped with a lower-wattage power supply.
 
The actual culprit behind 360 was apparently crappy solder, not inadequate cooling.

Early 360s had both crappy solder and a completely inadequate GPU cooler! Two red lights were the all too common warning of overheat. MS were well aware of both issues, and from very early on.

The crappy solder (really the move to lead free solder which presented a new set of design challenges) was the main failure point causing RRoD, but this problem was exacerbated significantly by the GPU running very hot (and increasing in temperature very quickly), and so putting high mechanical stress on the solder. Both CPU and GPU used the same solder and same heatsink attachment mechanism, but it was (almost) always the GPU that shit the bed first because of the stupid temps it was reaching.

Even disregarding RRoD, two-red-light overheats and stupid amounts of noise coming from the 2 x 6cm fans quickly hitting max speed (and the 12x DVD drive) warranted action. I remember removing DVDs from my housemate's launch 360 and they were hot. Not warm, but *hot*. It was the same for all the early 360s I used. The DVD drive almost became a secondary heatsink for the naked GPU cooler it was only mm above. The upgraded cooler partially addressed the issues of noise and sweltering DVD drives.

Interestingly, I recall reading that MS hadn't properly placed thermal sensors to monitor all the hotspots on the GPU, so that was another problem with thermal management - though with fans at max I'm not sure what else you could do without beefing up cooling. This was finally fully rectified for the 45nm SoC in the slim, IIRC.

MS also suggested that adding the HDD was a factor - as it reduced ventilation on the GPU side of the unit. If true, this would have further overwhelmed the rather pathetic cooler.

The whole early 360 saga was ... interesting. But the launch GPU cooler was absolutely inadequate, hence the extra dollars spent on the upgraded heatsink with its funky outgrowth into a high-airflow area.
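The thermal-cycle fatigue argument above can be put into rough numbers with the Coffin-Manson relation, a standard model for solder-joint fatigue where cycles-to-failure fall off with a power of the temperature swing. A minimal sketch, with an illustrative exponent and temperature swings (nothing here is a measured 360 value):

```python
# Coffin-Manson-style estimate of solder-joint fatigue life:
# N_f = C * delta_T ** -n. The constant C cancels when comparing two
# temperature swings; n = 2 is an illustrative exponent for solder alloys.

def relative_fatigue_life(delta_t_a, delta_t_b, n=2.0):
    """Cycles-to-failure of swing A relative to swing B."""
    return (delta_t_b / delta_t_a) ** n

# A GPU swinging 60 C per on/off cycle vs a revised unit swinging 40 C:
print(relative_fatigue_life(60, 40))  # ~0.44: the hotter part fails ~2.2x sooner
```

The point being that even a modest reduction in peak temperature pays off disproportionately in joint lifetime, which is consistent with the revised heatsink reducing (but not eliminating) RRoD.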



IIRC, it was both crappy solder (manufacturers still not accustomed to lead free solder) and inadequate cooling.

The cooling system itself was adequate to cool the chips, but the solder wasn't good enough to handle the thermal-cycle stress of very high temperatures.

Microsoft revised it with a better heatsink (and a newer X-clamp), so the max temperature dropped and the mounting pressure improved. That reduced RRoD but didn't fix it. IIRC it was the Jasper or Falcon revision... can't remember.

Then came an even cooler X360 that was basically immune to RRoD. It also shipped with a lower-wattage power supply.

I think it was Zephyr that added the heatsink, possibly with a half node shrink to the GPU. I had a Zephyr briefly - it was definitely quieter in terms of fans and disk now only came out of the drive warm instead of hot. Still returned it due to the thunderous vibration of the always-spinning DVD drive!

By Falcon (CPU shrink) and with disk installs the 360 was actually nice to use!
 
Well, they could both be right (perhaps there's variation between individual consoles).
Does Sony run a standard voltage for all PS5 APUs, and would a “golden sample” even run cooler given the same voltage?

I had a read of https://www.eurogamer.net/articles/...the-new-playstation-5-cfi-1100-series-console
'the fan has been replaced by another that has more blades, potentially capable of pushing more air out at the same speeds'
Is this true? I thought it was the opposite (I know this holds for wind power generation, where fewer = better, i.e. more power from fewer blades).
If you can believe internet comments, one on either that article or a YouTube video said that the PS5 OG sources three different fans (Delta, Nidec, and something else), and the different fan found on the new revision is just one of those three.

Edit: The comment was from the DF article. No idea how to link a comment, just search for delta:
seifsoudani 5 days ago

I'm quite surprised that a specialized outlet like Eurogamer / DF does not know by now that the fan shown in Austin's video is NOT new at all. It's one of the 3 variants of fan present in the PS5 lottery: the Delta, the other ones being the Nidec and the one we saw in Sony's official teardown (I got that one in my Japanese PS5 and I can confirm it's even better than the one mistakenly thought to be new). The Nidec fan (a.k.a. the UFO-sounding one with only 17 blades) is the only one that is annoying and louder. The Delta has been dominant (no pun intended with covid) in the market even before the CFI-1100 was released. French website Les Numériques famously reported on these 3 variants (including the one with the gap) since the launch of the PS5.
 
Is it really a fan lottery or are the fans tied to particular SOC characteristics?
 
Even disregarding RRoD, two red light overheats and stupid amounts of noise coming from the 2 x 6cm fans quickly hitting max speed
I have a 360 2x6 fan in my PC. They are surprisingly quiet if you run them from a standard PC case fan header. I also have a fan from an original Xbox, but that one rattles a bit on cold boot until I tap the front of my case. I should replace that one.... But I'm sort of attached to it.
 
From my understanding (being no expert, so correct me if wrong; this is based somewhat on wind turbines), from what I can gather:
more blades = quieter, but more blades = more drag, thus more power required to spin at the same speed. I couldn't find out whether more blades = better cooling capacity; some places said yes, some said the opposite.
 
From my understanding (being no expert, so correct me if wrong; this is based somewhat on wind turbines), from what I can gather:
more blades = quieter, but more blades = more drag, thus more power required to spin at the same speed. I couldn't find out whether more blades = better cooling capacity; some places said yes, some said the opposite.
It's probably not that simple. I would imagine that the angle and curvature of the blades would have at least as much effect on airflow as the number of blades. Also, it would probably be different based on fan size, because at the same RPMs a larger fan's blades would be spinning at a faster speed at the outside edge than a smaller one. Would it make a measurable difference with small changes to the length of the fan blades (like a few mm)? I don't know. I'm certainly not qualified to calculate fan efficiency either.
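The tip-speed point is easy to quantify: tip speed grows linearly with both diameter and RPM. A quick sketch with illustrative fan sizes:

```python
import math

def tip_speed_m_s(diameter_mm, rpm):
    """Blade tip speed in m/s for a fan of the given diameter and RPM."""
    circumference_m = math.pi * diameter_mm / 1000.0
    return circumference_m * rpm / 60.0

# Same 2000 RPM, two fan sizes: tip speed doubles with diameter.
print(tip_speed_m_s(60, 2000))   # ~6.3 m/s (a 6 cm 360-style fan)
print(tip_speed_m_s(120, 2000))  # ~12.6 m/s (a 12 cm case fan)
```

So a few mm of blade length does change tip speed measurably, though whether that dominates over blade angle and curvature is exactly the open question above.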
 
Does Sony run a standard voltage for all PS5 APUs, and would a “golden sample” even run cooler given the same voltage?

If you can believe internet comments, one on either that article or a YouTube video said that the PS5 OG sources three different fans (Delta, Nidec, and something else), and the different fan found on the new revision is just one of those three.

Edit: The comment was from the DF article. No idea how to link a comment, just search for delta:
This French guy received the new PS5. He has a new fan model from NMB (previously the best brand of the 3 models): 12047CS-12N-WB-01. Supposedly it consumes less, at 2A instead of 2.4A, and is as quiet as the other NMB (already very quiet).

https://www.jeuxvideo.com/forums/42-3017788-67410020-1-0-1-0-ps5-chassis-b-nouveau-ventilateur.htm

Based on the expelled air being hotter, it seems to me Sony actually improved the cooling solution overall.
 
It's probably not that simple. I would imagine that the angle and curvature of the blades would have at least as much effect on airflow as the number of blades. Also, it would probably be different based on fan size, because at the same RPMs a larger fan's blades would be spinning at a faster speed at the outside edge than a smaller one. Would it make a measurable difference with small changes to the length of the fan blades (like a few mm)? I don't know. I'm certainly not qualified to calculate fan efficiency either.
Yes, it definitely makes a difference; IIRC double the diameter = cube the power.
So if you ever buy a fan, get the largest blades possible; it will be a lot quieter for pushing the same amount of air.
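The idealized fan affinity laws make the "bigger is quieter" claim concrete: airflow scales with D³·RPM and power with D⁵·RPM³, so at equal airflow a larger fan can spin much slower and draws far less power (a rough proxy for noise). A sketch under those idealized assumptions (real fans deviate):

```python
# Idealized fan affinity laws: airflow Q ~ D^3 * rpm, power P ~ D^5 * rpm^3.
# The proportionality constants cancel in the ratios below.

def rpm_for_same_flow(rpm_small, d_small, d_large):
    """RPM a larger fan needs to match the smaller fan's airflow."""
    return rpm_small * (d_small / d_large) ** 3

def power_ratio_same_flow(d_small, d_large):
    """Power of the large fan relative to the small one at equal airflow."""
    return (d_small / d_large) ** 4

print(rpm_for_same_flow(4000, 60, 120))  # 500.0 -- 8x slower for the same flow
print(power_ratio_same_flow(60, 120))    # 0.0625 -- ~1/16th the power
```

Under this model, doubling the diameter at constant airflow cuts power to a sixteenth, which is the same "bigger blades, much quieter" intuition, even if the exact exponent differs in practice.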
 
Hello there !

I don't know if this is the right place to put this but anyway... shameless plug: a year ago I published a long ramble about this new generation of consoles. In the past few days I've finally taken the time to translate it as best I could in order to reach more people, so here is the English version: https://link.medium.com/rQVadvW9kjb

This is quite long and you'll have to bear with my broken English, but comments, suggestions and corrections are welcome.
 
Well, at least we can now say that the CPU in the PS5 is not all that impressive.
AMD 4700S CPU Reviewed: Defective PS5 Chips Find New Life | Tom's Hardware (tomshardware.com)
GDDR6 doesn't seem to be that good for CPUs after all. And don't look at the GPU tests; the build with PCIe 2.0 x4 is just not good enough to measure anything there. Only the CPU tests can show us something. But keep in mind that here the CPU has all the GDDR6 memory to itself, so in a game the CPU should perform a little worse.

Also, I guess the CPU in the Xbox won't be much better. Yes, the Xbox has a slightly higher clock rate (and the PS5 CPU has a dynamic clock rate), and maybe Sony saved a bit too much on those CPU cores (missing parts in the die shots), but except for the clock rates it should be more or less the same. So the console CPUs are more or less equal to a Ryzen 7 1800X. Not the best CPU, but also not a bad CPU.
 
Well, at least we can now say that the CPU in the PS5 is not all that impressive.
AMD 4700S CPU Reviewed: Defective PS5 Chips Find New Life | Tom's Hardware (tomshardware.com)
GDDR6 doesn't seem to be that good for CPUs after all. And don't look at the GPU tests; the build with PCIe 2.0 x4 is just not good enough to measure anything there. Only the CPU tests can show us something. But keep in mind that here the CPU has all the GDDR6 memory to itself, so in a game the CPU should perform a little worse.

Also, I guess the CPU in the Xbox won't be much better. Yes, the Xbox has a slightly higher clock rate (and the PS5 CPU has a dynamic clock rate), and maybe Sony saved a bit too much on those CPU cores (missing parts in the die shots), but except for the clock rates it should be more or less the same. So the console CPUs are more or less equal to a Ryzen 7 1800X. Not the best CPU, but also not a bad CPU.

Ouch, double the latency for memory accesses compared to DDR4; that's going to hurt CPU-intensive tasks that access main memory, basically rendering it slower than an equivalently clocked mobile Zen 2 CPU. And yeah, I expect the situation to be the same for the XBS consoles.

And that doesn't even take into account memory contention with the GPU, which could potentially introduce more latency into CPU memory requests.

Regards,
SB
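The latency point can be illustrated with a back-of-envelope model of a fully latency-bound (pointer-chasing) workload. The latency figures below are illustrative round numbers, not measured values for either console:

```python
# Back-of-envelope: a pointer-chasing workload where every load depends on
# the previous one and misses cache, so nothing can be overlapped.

def chase_time_ms(accesses, latency_ns):
    """Total time for a chain of dependent, cache-missing loads."""
    return accesses * latency_ns / 1e6

ACCESSES = 1_000_000
print(chase_time_ms(ACCESSES, 70))   # 70.0 ms -- DDR4-ish latency
print(chase_time_ms(ACCESSES, 140))  # 140.0 ms -- GDDR6-ish latency: 2x slower
```

In the worst (fully dependent) case, doubling memory latency doubles runtime; real workloads sit somewhere between this and the bandwidth-bound case, depending on how well the out-of-order core and caches hide the misses.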
 
Well, at least we can now say that the CPU in the PS5 is not all that impressive.
AMD 4700S CPU Reviewed: Defective PS5 Chips Find New Life | Tom's Hardware (tomshardware.com)
GDDR6 doesn't seem to be that good for CPUs after all. And don't look at the GPU tests; the build with PCIe 2.0 x4 is just not good enough to measure anything there. Only the CPU tests can show us something. But keep in mind that here the CPU has all the GDDR6 memory to itself, so in a game the CPU should perform a little worse.

Also, I guess the CPU in the Xbox won't be much better. Yes, the Xbox has a slightly higher clock rate (and the PS5 CPU has a dynamic clock rate), and maybe Sony saved a bit too much on those CPU cores (missing parts in the die shots), but except for the clock rates it should be more or less the same. So the console CPUs are more or less equal to a Ryzen 7 1800X. Not the best CPU, but also not a bad CPU.

The high CPU memory latency combined with the relatively small CPU L3 is probably not helping. Could latency be lower for XSX from the "non-GPU-optimal" memory (the 6 GB area)? I have no idea. Interesting to see that the CPU can consume nearly double the memory bandwidth of the DDR4-limited CPU parts.

It was interesting that in AVX stress tests this 4700S CPU drops below 2.5 GHz, and that's despite hitting above 100°C. Toasty. That fits with Cerny's comment during Road to PS5 that with locked clocks they were having trouble maintaining 3 GHz.

Perhaps this strengthens the idea that PS5's AVX units are smaller basically because they are more compact, using a high-density library, and so are more clock-limited by high thermal density. We know from MS that their (physically larger) AVX units and their temperature / fan noise balance are what limited them to 3.8 GHz on the CPU in 8-thread mode, though MS do guarantee locked clocks.
 
The high CPU memory latency combined with the relatively small CPU L3 is probably not helping. Could latency be lower for XSX from the "non-GPU-optimal" memory (the 6 GB area)? I have no idea. Interesting to see that the CPU can consume nearly double the memory bandwidth of the DDR4-limited CPU parts.

It was interesting that in AVX stress tests this 4700S CPU drops below 2.5 GHz, and that's despite hitting above 100°C. Toasty. That fits with Cerny's comment during Road to PS5 that with locked clocks they were having trouble maintaining 3 GHz.

Perhaps this strengthens the idea that PS5's AVX units are smaller basically because they are more compact, using a high-density library, and so are more clock-limited by high thermal density. We know from MS that their (physically larger) AVX units and their temperature / fan noise balance are what limited them to 3.8 GHz on the CPU in 8-thread mode, though MS do guarantee locked clocks.
Although the Xbox has the faster-clocked memory, I guess it still has comparable latencies, so it shouldn't be better from a latency perspective.

More cache could really help in this situation, but that costs die area, and you should get better results from spending more area on the GPU than on more cache for the CPU. We are far away from Jaguar territory there.

What's really interesting is that the 4750G outperforms this chip (boosting up to 4.4 GHz), even with a lower TDP. I guess this is all down to the GDDR6 memory.
 
Ouch, double the latency for memory accesses compared to DDR4; that's going to hurt CPU-intensive tasks that access main memory, basically rendering it slower than an equivalently clocked mobile Zen 2 CPU. And yeah, I expect the situation to be the same for the XBS consoles.

And that doesn't even take into account memory contention with the GPU, which could potentially introduce more latency into CPU memory requests.

Regards,
SB

I need to find it but AMD had a patent to help with memory contention on APU basically to prioritize CPU request. This was post PS4 Pro and XBox X release-
 
Although the Xbox has the faster-clocked memory, I guess it still has comparable latencies, so it shouldn't be better from a latency perspective.

More cache could really help in this situation, but that costs die area, and you should get better results from spending more area on the GPU than on more cache for the CPU. We are far away from Jaguar territory there.

What's really interesting is that the 4750G outperforms this chip (boosting up to 4.4 GHz), even with a lower TDP. I guess this is all down to the GDDR6 memory.

Isn't the memory clocked the same on XSX and PS5?

Yeah, the power consumption is very interesting. I think it might hint at a very different power management system from regular Ryzen chips, in addition to the GDDR6 factor that you raised.

Basically, I think the 4700S could be using the same power management system as PS5, where activity counters determine frequency based on a model designed to cover the worst-yielding chip. This would lead to almost every chip drawing more power than it would if it were using its own sensors and, e.g., applying things like vdroop protection only when needed. Perhaps what Sony required for PS5 meant there was no fallback to "normal" Zen 2 power-saving features. Tom's did say that applying better thermal paste made no difference to the throttling, so perhaps it's not actually based on thermals at all.

I think it's also a bit cheeky for AMD to say this chip has a 3.6 GHz "base" clock when it can drop well under 3 GHz and even down to around 2.5 GHz under certain circumstances. That too hints at something very different from regular Ryzen power / frequency management, IMO.
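The activity-counter idea described above can be sketched as a toy model: a fixed power model calibrated for the worst-yielding silicon, with frequency capped so that modeled (not measured) power stays under a budget. All numbers, frequency steps, and the power model itself are illustrative assumptions, not PS5 internals:

```python
# Toy activity-counter DVFS: power is estimated from a fixed worst-case
# model rather than per-chip sensors, so every chip behaves identically.

POWER_BUDGET_W = 50.0
FREQ_STEPS_GHZ = [3.5, 3.3, 3.0, 2.8, 2.5]

def estimated_power(freq_ghz, activity):
    """Modeled power for worst-case silicon: scales with activity and ~f^2."""
    return 4.0 * activity * freq_ghz ** 2

def pick_frequency(activity):
    """Highest frequency step whose modeled power fits the budget."""
    for f in FREQ_STEPS_GHZ:
        if estimated_power(f, activity) <= POWER_BUDGET_W:
            return f
    return FREQ_STEPS_GHZ[-1]

print(pick_frequency(0.5))  # 3.5 -- light load stays at the top step
print(pick_frequency(1.2))  # 3.0 -- heavy AVX-style activity forces a drop
```

Because the model assumes the worst chip, a good chip running this scheme draws more power (or clocks lower) than its own sensors would require, which is the behavior speculated about above.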
 
The high CPU memory latency combined with the relatively small CPU L3 is probably not helping. Could latency be lower for XSX from the "non-GPU-optimal" memory (the 6 GB area)? I have no idea. Interesting to see that the CPU can consume nearly double the memory bandwidth of the DDR4-limited CPU parts.
I think in this instance that if the GPU is not in-use that almost all the memory space is effectively non-GPU optimal. AMD's controllers in APUs were able to adjust stride patterns based on what was dedicated to the GPU in the past, so I don't think it would be stuck using GPU access optimizations if the GPU isn't getting any allocation.

Perhaps this strengthens the idea that PS5's AVX units are smaller basically because they are more compact, using a high-density library, and so are more clock-limited by high thermal density. We know from MS that their (physically larger) AVX units and their temperature / fan noise balance are what limited them to 3.8 GHz on the CPU in 8-thread mode, though MS do guarantee locked clocks.
I think it would be interesting if there were ever a disclosure about what was changed. The last time a high-density library was discussed as such, it was a whole-core change rather than just one functional block. I suppose the flexibility could only have improved since Excavator, although I'm not sure there's a high-density library for the node as much as one of the other track count selections. Not sure which one AMD defaulted to.
Whether that was enough to increase power density that much, or if this variant's more aggressive turbo algorithm could have made it even worse (overshoot in voltage and clock, forcing more dramatic downclock) is another question.
There can be other ways to reduce the impact of AVX without compromising features, like duty-cycling or forcing higher issue latency--but those would not be detectable without profiling.

Basically, I think the 4700S could be using the same power management system as PS5, where activity counters determine frequency based on a model designed to cover the worst yielding chip.
If it's a salvage die, I think relying on the PS5's method would defeat the purpose.
The PS5's method is likely using AMD's existing DVFS, since activity monitoring predates the console, but the console would also be rejecting dies that might be functional but prove less efficient than needed.
A reject of that type isn't saved unless a more forgiving set of parameters is in place, and AMD is much less concerned about consistency in its own offering than Sony is. Putting a die too leaky for the PS5 on the more aggressive AMD parameters would likely make that inefficiency more obvious.

That too hints at something very different than regular Ryzen power / frequency management IMO.
Some of the salvage SKUs in random markets have sometimes been pretty forgiving as far as how far things can drop.
 
Well, at least we can now say that the CPU in the PS5 is not all that impressive.
AMD 4700S CPU Reviewed: Defective PS5 Chips Find New Life | Tom's Hardware (tomshardware.com)
GDDR6 doesn't seem to be that good for CPUs after all. And don't look at the GPU tests; the build with PCIe 2.0 x4 is just not good enough to measure anything there. Only the CPU tests can show us something. But keep in mind that here the CPU has all the GDDR6 memory to itself, so in a game the CPU should perform a little worse.

Also, I guess the CPU in the Xbox won't be much better. Yes, the Xbox has a slightly higher clock rate (and the PS5 CPU has a dynamic clock rate), and maybe Sony saved a bit too much on those CPU cores (missing parts in the die shots), but except for the clock rates it should be more or less the same. So the console CPUs are more or less equal to a Ryzen 7 1800X. Not the best CPU, but also not a bad CPU.

While it's underwhelming compared to even mid-range Zen 2 CPUs, it's still a substantial step up over the tablet CPU the 2013 consoles had. Ideally you'd have DDR4/5 RAM for CPU tasks alongside GDDR for VRAM, but that wouldn't be very cost-effective for a $400/$500 box, I guess.
I'd be more concerned about the memory bandwidth, since it's shared between everything (CPU, GPU, audio chip, etc.).
 
I think in this instance that if the GPU is not in-use that almost all the memory space is effectively non-GPU optimal. AMD's controllers in APUs were able to adjust stride patterns based on what was dedicated to the GPU in the past, so I don't think it would be stuck using GPU access optimizations if the GPU isn't getting any allocation.

I suppose in that case the reason for a fixed split in the XSX (which MS have said is 10 GB for the GPU), could simply be because of the arrangement of 1 and 2 GB chips across different parts of the bus?

I think it would be interesting if there were ever a disclosure about what was changed. The last time a high-density library was discussed as such, it was a whole-core change rather than just one functional block. I suppose the flexibility could only have improved since Excavator, although I'm not sure there's a high-density library for the node as much as one of the other track count selections. Not sure which one AMD defaulted to.
Whether that was enough to increase power density that much, or if this variant's more aggressive turbo algorithm could have made it even worse (overshoot in voltage and clock, forcing more dramatic downclock) is another question.
There can be other ways to reduce the impact of AVX without compromising features, like duty-cycling or forcing higher issue latency--but those would not be detectable without profiling.

Thanks. So if you could cap the frequency on the 4700S, do you think that could show whether overaggressive boost was causing some of the frequency drops? It does seem to bounce around a lot under the AVX tests.

I'm struggling to find any useful search hits on "track count selections" (lots of stuff on synths!). Are there any other terms I could be using to narrow down more info on this stuff?

If it's a salvage die, I think relying on the PS5's method would defeat the purpose.
The PS5's method is likely using AMD's existing DVFS, since activity monitoring predates the console, but the console would also be rejecting dies that might be functional but prove less efficient than needed.
A reject of that type isn't saved unless a more forgiving set of parameters is in place, and AMD is much less concerned about consistency in its own offering than Sony is. Putting a die too leaky for the PS5 on the more aggressive AMD parameters would likely make that inefficiency more obvious.

Okay, thanks again. Now you mention it, I think I read you commenting on AMD DVFS around the time of Road to PS5.

I've been looking around for some Zen 2 power management stuff, and Google led me to this Abstract. Perhaps other people can make more of it than me.

https://arxiv.org/pdf/2108.00808.pdf

I think it does suggest that what you're saying is most likely the case for the PS5's implementation of power management. And looking again at the Tom's article, yeah, it does say outright that chip-specific sensors are involved in throttling above 100°C.

So my idea can go in the bin. Scrap that!
 
I suppose in that case the reason for a fixed split in the XSX (which MS have said is 10 GB for the GPU), could simply be because of the arrangement of 1 and 2 GB chips across different parts of the bus?
The 10 GB is GPU optimized, but is not exclusive to the GPU. The 10 GB would be the range of addresses that can be equally strided over all the channels, with the remaining space being an extra 1GB on only some of the chips, which Microsoft seems to have decided to handle a little differently due to the channel disparity.
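The split described above can be sketched numerically from the publicly discussed chip layout (six 2 GB and four 1 GB GDDR6 chips at 14 Gbps on a 320-bit bus); the striding itself is a simplification of what the memory controller actually does:

```python
# XSX-style asymmetric memory: the first 1 GB of every chip can be striped
# across all ten 32-bit channels; the extra 1 GB on the six bigger chips
# only spans those six channels.

CHIPS_GB = [2, 2, 2, 2, 2, 2, 1, 1, 1, 1]  # ten GDDR6 chips
PER_CHANNEL_BW = 56.0                       # GB/s per 32-bit channel at 14 Gbps

fast_gb = len(CHIPS_GB) * min(CHIPS_GB)     # space striped over all channels
slow_gb = sum(CHIPS_GB) - fast_gb           # space only on the 2 GB chips

fast_channels = len(CHIPS_GB)
slow_channels = sum(1 for c in CHIPS_GB if c > min(CHIPS_GB))

print(fast_gb, fast_channels * PER_CHANNEL_BW)  # 10 GB at 560 GB/s
print(slow_gb, slow_channels * PER_CHANNEL_BW)  # 6 GB at 336 GB/s
```

The 10 GB / 560 GB/s and 6 GB / 336 GB/s figures fall straight out of the chip arrangement, which supports the idea that the split is a consequence of mixing 1 GB and 2 GB chips rather than a GPU-exclusive partition.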

Thanks. So if you could cap the frequency on the 4700S, do you think that could show whether overaggressive boost was causing some of the frequency drops? It does seem to bounce around a lot under the AVX tests.
Possibly, although the tested unit has a very poor cooler, which makes things even less comparable. The cooler and thermal interface are areas where the PS5 has an embarrassment of riches compared to this salvage product.
A catch-all voltage for salvage parts may also make it worse.

I'm struggling to find any useful search hits on "track count selections" (lots of stuff on synths!). Are there any other terms I could be using to narrow down more info on this stuff?
Wikichip has a discussion about the variations on the TSMC 7nm process, in terms of standard cell libraries.
The number of fins and number of opportunities for routing metal through a cell can be adjusted to emphasize area or performance.
https://en.wikichip.org/wiki/7_nm_lithography_process
However, from elsewhere, it was indicated Zen 2 already went for high density:
https://fuse.wikichip.org/news/3320/7nm-boosted-zen-2-capabilities-but-doubled-the-challenges/
There's also a caveat that AMD doesn't need to always default to standard cells, but a whole FPU is a bigger exception than certain things like custom registers and a few key structures.

One thing that did come up in the review is that there's an apparent drop in FPU ports:

This is one possibility that I mentioned when the die shot came out:
https://forum.beyond3d.com/posts/2193153/
My question as to why they would go through this much trouble for what appears to be limited gains in area remains.

Whether it's truly a full halving of ports isn't clear to me.

A few operations like logical ones are tied to ports, and those weren't halved. Perhaps there was a reduction in register file/bypass ports and a reduction in functionality, while leaving stubs of the original 4 with a few basic functions?

edit: misread the heading and thought it was only FPU testing, the logical ops are likely integer domain

Another question is what else changed with division, since that's only on one port in Zen2, so a port diet alone wouldn't account for the drop there.

Okay, thanks again. Now you mention it, I think I read you commenting on AMD DVFS around the time of Road to PS5.

I've been looking around for some Zen 2 power management stuff, and Google led me to this Abstract. Perhaps other people can make more of it than me.

https://arxiv.org/pdf/2108.00808.pdf
I've only skimmed, and some items are at a higher level than the implementation, but it does mention Zen2 using a model with over a thousand monitors.
 