I'm not arguing that performance would have been the same; I'm saying that, being the standard RAM, it was significantly cheaper.
The price of RAM has had notorious, unpredictable hiccups, but DDR2 was still cheaper than more boutique memory. As a side note, if newer and older memory are made on the same process, there's no reason for the slower, suckier memory to be that much cheaper; DDR2 kept existing for legacy devices, volume went down => more reasons to keep the price high. The same reasoning applies to any RAM type; I suspect it will be a while before MSFT gets the DDR3 (a pretty fast type) in the XB1 at a significantly lower price than DDR4 on the same process, especially as the latter gets more common.
All I wanted to add was that consoles have certain planned engineering performance targets... it's really best to focus on the fact that Cell BE plus XDR-RAM was an effective design that DDR2 was never going to match; it would just have created problems.
DDR2 wasn't really "cheap"... we don't have hard numbers on pre-planned contracts, where they cannot anticipate price drops... and I would imagine, or assume, that once that paper is signed, it's for that price, regardless of whether AMD/Intel had saturated the market with DDR2 and caused a price drop like when DDR3 arrived.
Like I mentioned, IBM had the PowerXCell 8i use DDR2 for a completely different market in 2008...
Let's say Sony had purposely planned to launch the PS3 in 2008... chances are their engineering design teams would still not have used DDR2, because it would still not have been as efficient...
Then contemporary die shrinks across the board, plus engineering ambition, would probably have targeted far more cutting-edge tech goals.
A 45nm Cell BE with all 8 SPUs @ 4.2GHz, with p-states and double the cache, would make the Cell BE a cool-running part with comparably efficient performance per watt.
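Just as a back-of-envelope on what that clock bump alone would buy, here's the usual peak-FLOPS arithmetic for the SPE array (the 8 FLOPs per SPE per cycle figure is the commonly quoted one for Cell; the 4.2GHz 8-SPU part is purely the hypothetical above, not a real SKU):

```python
# Rough peak single-precision throughput of the SPE array alone (PPE excluded).
# Assumes the commonly quoted 8 SP FLOPs per SPE per cycle (4-wide SIMD multiply-add);
# the 4.2GHz / 8-SPU configuration is hypothetical.
FLOPS_PER_SPE_PER_CYCLE = 8

def spe_peak_gflops(num_spes: int, clock_ghz: float) -> float:
    return num_spes * FLOPS_PER_SPE_PER_CYCLE * clock_ghz

print(spe_peak_gflops(7, 3.2))  # retail PS3 (7 usable SPEs): ~179.2 GFLOPS
print(spe_peak_gflops(8, 4.2))  # hypothetical 45nm part:     ~268.8 GFLOPS
```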
XDR-RAM at 1GB, or preferably 2GB if costs plus die shrinks make denser RAM viable, increasing capacity and allowing far more breathing room for the Cell BE to address.
GDDR3 at 1GB, or preferably 2GB, for RSX reasons... but also to help solidify throughput.
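For context on the throughput point, this is the standard bus-width times data-rate arithmetic with the commonly quoted launch-PS3 numbers (the bus widths and data rates here are taken from public spec figures, so treat them as assumptions rather than measurements):

```python
# Peak bandwidth (GB/s) = bus width in bytes * effective data rate in GT/s.
def peak_gb_per_s(bus_bits: int, gtransfers_per_s: float) -> float:
    return bus_bits / 8 * gtransfers_per_s

print(peak_gb_per_s(64, 3.2))   # XDR on Cell:  ~25.6 GB/s
print(peak_gb_per_s(128, 1.4))  # GDDR3 on RSX: ~22.4 GB/s (announced 700MHz spec)
```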
GDDR4 might be nice, if niche... however, my reasoning is that in 2008, 2GB of graphics RAM on PC cards was seen as too much, because games and reviewers didn't have the knowledge or the tools that expose RAM usage the way game developers do (look at the Killzone 2/Uncharted 2 dev videos)... so the unfortunate analysis from PC hardware/software reviews concluded that since they didn't see it, it wasn't necessary...
Game consoles handle RAM and streaming differently, though...
Looking at the PC, I would think that more main RAM is wiser; for 1GB of memory, 768MB of main RAM and 256MB of VRAM would sound more in line. Won't happen anyway unless scientists make a really major breakthrough lol
I based my opinion on an "if" scenario where Sony instead green-lights a 512MB+512MB configuration, which honestly isn't that significant (although it is)... you still have to fill up that RAM... the best solution is a 2007 tech-plan launch, or an optimal 2008 scenario, which is a bit bleak.
Die shrinks help overall performance per watt and clocks... stuff like a SATA II controller becomes possible (mainly useful to us in hindsight for SSD usage), and perhaps a PS4-like full or partial install of games... so that the disc drive isn't constantly spinning...
The Blu-ray drive could have been 4x or even 6x.
Then cap everything with a Sony-fabbed, full-blown custom GT200 GPU at 55nm... if it were possible to fab at 45nm, better still... and bam, into a Microsoft-dominated console market comes an Ultra PS3 boasting 4.5 times the GPU performance, capability and throughput... with games that almost look like PS4 games.
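Sanity-checking that "4.5 times" figure against the paper peaks that usually get quoted (the RSX ballpark and the GT200 flops-per-clock counting are assumptions here, marketing-style peaks rather than measured game performance):

```python
# Marketing-style peak shader throughput; real game performance scales differently.
def gt200_peak_gflops(shader_cores: int, shader_clock_ghz: float, flops_per_clock: int = 3) -> float:
    # 3 flops/clock = MAD (2) plus the co-issued MUL that NVIDIA counted for GT200
    return shader_cores * shader_clock_ghz * flops_per_clock

gt200b_peak = gt200_peak_gflops(240, 1.476)  # GTX 285-class 55nm part: ~1063 GFLOPS
rsx_ballpark = 230.0                         # commonly circulated RSX programmable-shader figure

print(gt200b_peak / rsx_ballpark)            # ~4.6x, same ballpark as "4.5 times"
```

So the claim at least lines up with the paper peaks, even if real-world scaling would be messier.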
I'm off a bit, because I still feel that waiting for 2007 with 65nm + 55nm was the best possible theoretical way to jack up image quality, while 2008 is the cut-off... too late.
I think power consumption for Xenos was far less than for the Xenon. I could swear I read at launch that the power draw of the GPU was something like 25 or 30 W but I can't find that now, so perhaps I imagined it. The heatsink for the GPU had much less area and much, much less airflow though so indirectly that backs up the idea that the GPU produced rather less heat than the CPU and accounted for rather less than 50% of the silicon power draw.
I think it's reasonable to suggest that the slim SoC pumped out more heat than the Falcon GPU, perhaps more even than the launch GPUs...?
Whatever, the heat put out by Xenon shouldn't have been a problem and PC cards handled several times the power with much greater reliability. My Falcon RRoDed with a GPU solder issue despite the fans almost never going more than a notch or two above minimum - so the chips definitely thought they were running cool. Felt cool inside when I was doing clamp experiments too. Damn thing, a little past the three year warranty. I still bought another though. I couldn't bear the thought of not being able to play Halo 4 when it arrived.
PS3 launch units had a monster heatsink system...five or six copper heat pipes? Heat spreaders on GPU and CPU?
The Xbox 360 could have had that... it might have dramatically improved heat dispersal, resulting in cooler peak usage.
Power consumption... there were launch wall-socket draw tests done back then... not sure where they are, but both had to be drawing over 300W, hence their PSUs, so there's no way in hell that the GPU was consuming such a low figure. Based on comparable information I would estimate 120W TDP or thereabouts for Xenos.
Xenon might be lower than 120W... probably 95W for launch units... that would help explain the Xbox 360 Slim claiming 65% less power draw.
You guys are good at deluding yourselves.
Just look at heatsinks for PC cards of the era comparable to the 360. Way bigger than the ridiculous extruded dinky thing MS put in there.
Xenos was 240 million transistors, right? And the 271-million-transistor G71 (far fewer than the 302M G70) was likely consuming upwards of 135W as a PC card part... it still needed that hair-dryer heatsink system to draw heat away and out of the case.
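To make that comparison explicit, this is the naive watts-per-million-transistors scaling the argument implies, using the post's own figures as given (it ignores clock, voltage and process differences, so it's only a rough sanity check, not a measurement):

```python
# Naive scaling: assume power roughly tracks transistor count on a broadly similar process.
# All inputs are the figures quoted in the post above, not measured data.
g71_transistors_m = 271    # G71 transistor count used in the post (millions)
g71_watts = 135            # "upwards of 135W", also from the post
xenos_transistors_m = 240  # Xenos parent die as quoted (millions)

watts_per_mtransistor = g71_watts / g71_transistors_m
print(xenos_transistors_m * watts_per_mtransistor)  # ~120W by this crude scaling
```

Interestingly, that crude scaling lands right around the ~120W estimate floated a few posts up.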