I believe one of the problems with the 360 GPU in its original incarnation was that, in addition to the solder tech and the initially insufficient cooling, the temperature probes in the GPU weren't located in the hottest parts. The system under-reported temperatures, making the cooling less responsive and allowing the console to keep running when it should have been protecting itself.
In one of the firmware updates back in 2006-2007, Microsoft added something that kept the heatsink fan running for a minute or two after you turned off the Xbox 360. I know this from being present with a couple of friends who had launch units that did not do that initially. Between them they had three different units, including the first black Xbox 360 Elite model.
That trick didn't work in the long run; all of those units died. Another friend had four dead Xbox 360s in the closet... he kept buying new ones instead of using the recall program.
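To make the two behaviours concrete, here's a minimal sketch of how a mis-placed probe and a post-shutdown run-on timer might interact. Every name, threshold, and offset here is a made-up illustration, not anything from actual Xbox 360 firmware:

```python
# Hypothetical fan-control sketch; the offset, thresholds, and run-on time
# are illustrative assumptions, not real Xbox 360 firmware values.

SENSOR_OFFSET_C = 15.0     # assumed gap between the probe site and the true hotspot
FAN_RUN_ON_SECONDS = 90    # the "minute or two" of run-on added by the update

def fan_duty(probe_temp_c: float) -> float:
    """Map the reported temperature to a fan duty cycle between 0.3 and 1.0."""
    # If the probe sits away from the hotspot, the die is really running near
    # probe_temp_c + SENSOR_OFFSET_C, so this curve ramps the fan too late,
    # and any over-temperature protection triggers too late as well.
    return min(1.0, max(0.3, (probe_temp_c - 40.0) / 40.0))

def run_on_after_shutdown(probe_temp_c: float) -> int:
    """Seconds to keep the fan spinning after the console powers off."""
    # Dumping residual heat instead of letting it soak back into the BGA
    # solder joints is presumably what the 2006-2007 update was after.
    return FAN_RUN_ON_SECONDS if probe_temp_c > 50.0 else 0
```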
A far better analysis would have been to test a launch unit with quality thermal paste re-applied and no other modifications, as that's the only way to know.
That said, even in early 2006 (perhaps late 2005) there were overly enthusiastic fanatics who did custom water-cooling jobs on launch units. Although that's a different approach, I do wonder how efficient and effective it was at preventing the BGA solder from cracking.
Remember, a properly seated heatsink and thermal paste are what allow heat to conduct away and be dispersed. With proper heat dispersal it shouldn't matter if you ran the most stressful games as a 24/7 week-long marathon stress test; if the cooling is effective, the console should be fine, which is probably how it seemed OK to the QA assessment of the time.
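As a rough illustration of why the paste matters (all resistance and power figures below are placeholder numbers, not actual Xenos measurements): die temperature is roughly ambient plus power times the sum of thermal resistances in the heat path, and degraded paste inflates one of those terms.

```latex
% Illustrative numbers only -- not actual Xenos measurements.
T_j = T_a + P\,(R_{\text{die}\to\text{paste}} + R_{\text{paste}\to\text{sink}} + R_{\text{sink}\to\text{air}})
% e.g. with P = 70\,\mathrm{W} and T_a = 30\,^{\circ}\mathrm{C}:
%   good paste:  T_j = 30 + 70(0.20 + 0.30 + 0.50) = 100\,^{\circ}\mathrm{C}
%   dried paste: T_j = 30 + 70(0.20 + 1.00 + 0.50) = 149\,^{\circ}\mathrm{C}
```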
My favourite workaround in early 360s isn't actually the emergency extra heatsink; it's the step they tried before that. When your first workaround is to physically glue the GPU package to the motherboard to reduce package twisting (a contributor to cracked solder), you know you've got problems.
Although if heat were dispersed properly, it shouldn't have needed the glue.
RRoD persisted into 55nm, though at greatly reduced rates. Technically slims can't RRoD, as the error warning was removed, but by all accounts they solved all their problems with the SoC, which was designed to handle everything they could throw at it.
Except that 55nm Xbox 360 Xenos GPU fabrication did not exist.
Chip fabrication went 90nm, 80nm, 65nm, then 45nm (or 40nm) for the unified CPU+GPU assembly of the last Xbox 360s.
In the PC market, 55nm was a half-node stopgap. After the Radeon HD 2900 ran so hot and drew so much wattage, ATI needed to stay competitive and profitable on yields, which are a huge factor in wafer production tied to the process node, especially when you're selling $300-$500 graphics cards and also have to maintain special workstation versions for professional clients.
ATI mostly skipped 65nm production for high-end GPUs. Nvidia didn't, as they transitioned to 55nm in April 2008... although if they had planned for it, there's no plausible reason why they couldn't have skipped straight to 55nm; but their PC decisions are a different market.
Xbox 360 slim models shouldn't crack any solder joints because they produce far less heat altogether, despite the CPU and GPU+eDRAM being part of a single package... that's just the benefit of the later years' die shrink.
Indeed, as I said, it was a bad choice in the first place; DDR2 was available when the TBE was designed, and it was actually the standard.
No, we are not saying they should have sold the system at 800€. The thing is, XDR RAM was quite boutique; I don't think using 512 MB of DDR2 instead of 256 MB of XDR would have had any significant impact on the system price.
I'm just saying that, with the Cell as the building block it was, there were multiple bad trade-offs made. One may disagree, but that's altogether different from thinking that the impact on the BOM of removing 256 MB of XDR and adding 512 MB of DDR2 would have been +200€.
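A back-of-the-envelope version of that point, with loudly hypothetical per-chip prices (I have no real 2006 quotes; the shape of the arithmetic is what matters):

```python
# Hypothetical per-chip prices; real 2006 quotes would differ, but even a
# boutique premium on XDR doesn't get the swap anywhere near +200€.
XDR_CHIP_PRICE = 12.0    # assumed price of one 512 Mbit XDR chip
DDR2_CHIP_PRICE = 4.0    # assumed price of one 512 Mbit DDR2 chip

cost_256mb_xdr = 4 * XDR_CHIP_PRICE     # 4 x 512 Mbit = 256 MB
cost_512mb_ddr2 = 8 * DDR2_CHIP_PRICE   # 8 x 512 Mbit = 512 MB

print(cost_512mb_ddr2 - cost_256mb_xdr)  # -16.0: a small BOM delta, not +200€
```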
DDR2 also came in different speeds, with weaker latency compared to XDR RAM, and it wasn't cheap initially; it didn't drop in price until DDR3's initial prices and market adoption started obsoleting it.
512MB of XDR RAM and 512MB of GDDR3 in a retail PS3 would have made far more logical sense... however, there are practical limits to selling such a console at a reasonable price when customers take everything for granted.
The PS3, like the PS2, is a great example of how not to design a console. A poop-load of power doesn't mean much when developers have to pat their heads, rub their tummies, and jump while shouting "YAHTZEE!" over and over to get decent performance. It only sorta worked for the PS2 because the PlayStation brand was the most successful one at the start of the sixth gen; everyone wanted one, and developers knew they had an audience to cater to. Obviously it didn't work so well again for the PS3.
Let's see... the PS2 had a reputation for being complex to program for in 1999 and 2000. The PS2 engineering-sample prototype board was revealed in early 1999, implying that design decisions were finalized in late 1998. Then there's the initial process node, which required a dedicated heatsink and fan on each of the CPU and the graphics chipset; even back then, Sony waited for a die shrink because of yields, thermals, and wattage. All of which gives more credible reason to believe that 90nm Cell was NOT the target Sony's engineering team desired, whatever the thermal, wattage, and yield goals were.
I sorta think Ken Kutaragi was a hack. While obviously a gifted engineer, he just as obviously forgot to consider the development side of the equation for both the EE and Cell. You know his pride was probably shot when he and his team realized that a PS2-like setup, that is, a massive CPU combined with vector engines plus a "2D" GPU (whether it was a graphics-centric Cell or not), wasn't going to work out well for the PS3 in an era of fully equipped 3D GPUs with T&L and pixel and vertex shaders. It confounds me how Sony dismissed developer workflow while MS fully embraced it and designed the absurdly more architecturally elegant Xbox 360. It becomes even more apparent when you remember that the 360 released a year earlier and featured a more advanced, more forward-thinking GPU that forced developers on PS3 to spend countless hours moving graphics jobs to the Cell SPEs in order to get multiplatform parity.
It's good to see Sony learned their lesson when designing the PS4.
"Hack" is quite a stretch do devalue and diminish and insult that one person who basically helped revolutionize game consoles into full 3d polygon graphics, sophisticated sound chips (starting with the Sony sound chip in SuperFamicom/SNES which is still highly relevant in retrospectives)
He did have a thick accent in English, though, and perhaps that's why his words were used against him so heavily as tearing down his contributions became such a trend from 2006 until now. That, and the insult of mocking Kaz for quoting 1994's "Riiiiidge Racer" (from that game's title menu) when revealing the PSP version, by people who were perhaps just trolling; that meme became so negative and destructive.
Maybe if Ken Kutaragi's English accent weren't so thick and his use of the language were more fluent, he wouldn't have been turned into a caricature by the misinformation brigade.
Also, it's best to remember that the PlayStation 2 wasn't really expected to be such a record-breaking games console.
Regardless of "hype misinformation", that DVD drive gave many people, and customers in general, a DVD player that also played PS1 and PS2 games... that's three hard-to-ignore features.
Comparing it to the original Xbox on being first is ridiculous, given when Microsoft finally announced and revealed their plans.
Sega's losses and leaky handling of its plans, leading up to dropping the ball with the Dreamcast, only helped rush the PS2's development... by relation.
As the PlayStation 2 set sales records, calling for increased production at the fabs, it reached a critical mass of units; add the different strategies, and all of a sudden the PS2 became known as easy to develop for, because most of the technical details were documented.
The PS3 may be seen in a negative light if you're stuck in 2006, but those technical hurdles were documented too. Sure, there's reason to still believe 2006-era PS3 F.U.D. in 2011... oh wait, it's 2016...
Finally, each console was designed for a different era... the PS3 had some major disadvantages compared to the PS2, not just lukewarm sales. The PS4 and Xbone were planned around different considerations, which we all know now... it isn't fair to keep judging past decisions as bad when even Microsoft was constrained by physical limitations, not anticipating that reaching a 2005 launch was going to cost them billions in losses to write off...
PS3-Xbox 360 multi-platform parity was still reached... and it's no different from the current PS4-Xbone generation's issues.