The PS3 and G80 launched on virtually the same day.
Only because the PS3 was delayed by BRD issues. If the PS3 had been designed from the start for a late-2006 release, Sony could have chosen a different GPU, perhaps a G80 derivative. Assuming, of course, that when they were designing the PS3 in 2005 they could predict what GPUs would be out, when, and what they'd be capable of.
Or putting it another way, the XB360 does as well as it does because it has a great GPU with advanced features that ATi was developing anyway. What if ATi's US tech had been another year or two out, and MS's choice had been the same as Sony's? Would an XB360 with an RSX and eDRAM be as competitive as an XB360 with Xenos? And what custom part could Sony have got for spending a few hundred million on a custom GPU instead of Cell? They couldn't fabricate a larger chip and keep it affordable, and nVidia couldn't have matched ATi's unified shaders because they weren't up to speed with US (nVidia PR was claiming ATi's US wasn't any good). Just throwing money at that problem wouldn't have worked. Well, if Sony had commissioned a GPU with unified shaders, maybe they could have got something, but its performance would have been an unknown and could have gone horribly wrong. And they'd have had to make that choice around 2003, maybe three years into having already started next-gen development and chosen to invest in Cell.
It's all very well evaluating choices with hindsight, but you need to view the engineers' choices from where they stood to get context. The future of graphics wasn't certain back in ~2000 when Sony started thinking about their next console. They chose programmable performance, and the ability of GPUs to provide that wasn't known. They chose a flexible architecture that they hoped would have benefits elsewhere too. Certainly Cell was a good choice to power BluRay. So in 2001 they committed to the Cell project. That was decided upon back then. How were they then supposed to launch with a G80 in 2006? During development they evaluated a number of GPU options and settled upon RSX, based on the most powerful architecture available at the time (only trumped by ATi bringing out a new technological advance). In hindsight, Sony could have gone with more GPU... well, actually they couldn't, because they were at the maximum chip size with RSX; a larger GPU would have meant terrible yields. So they could have gone with a more advanced technology than the outdated 7xxx series... only they couldn't, because they had the most advanced they could get. They could have commissioned a custom GPU using magical new technologies with unknown returns. Maybe, to address the issue of programmable performance, they could have asked nVidia to include some versatile, custom vector processing units, building on the ideas of VU0 and VU1 in the Emotion Engine...
Cell didn't work out as well as hoped, but you can't really begrudge the design choice. It was a very flexible processor that proved very useful and effective in some cases, and muddled through in the worst cases. RSX appears to have been something of a rush job, but Sony didn't go cheap there, and they at least had something that worked instead of going with some crazy idea from Toshiba (whatever that might have been).
What is interesting is how those early decisions affected Sony's choices later, and it points to the idea of waiting and using off-the-shelf components. Why try to predict what computing needs will be five years in advance and design something for that, when instead you can know exactly what will be needed one year in advance and just buy a suitable set of processors to chuck in your box? Hell, we're seeing that now with the threat of streaming games. Just waiting, and then either deciding to throw in some CPU and GPU, or going with a streaming platform, gives flexibility. Maybe Sony have seen OnLive, decided it's the future, and their next console will just be a stop-gap measure to ride out a few years until everyone can get OnLive?