Why would Sony want to go with Nvidia? I seriously doubt they're happy with them at all. Fermi is different enough from RSX that your low-level PS3 3D code will most likely not carry over, so you might as well ditch them and go with ATI, which provided the better GPU last time.
I'm all for a souped-up Cell (would love to see that, actually), but please, get rid of NV for the PS4.
You really think ATI actually delivered better GPUs at the time of the Xbox 360, in 2005 and 2006?
Xenos (aka C1, ATI's R500) was an ATI-Microsoft development in which they pushed to ship a unified shader pipeline in 2005. So when was ATI/AMD actually able to put a unified shader pipeline into the retail graphics card market? More than TWO YEARS later, and it was a very poor performer considering they already had the engineering experience.
I might be going against the grain here...but I say screw the developers!!
Seriously, guys, when you think about it: what would the PS3 have been if developers had designed it? More than likely it would have been a rebadged PC à la the Xbox 1, with all the RAM they could afford to throw into it.
The PS2 and PS3 have been able to do things many people thought they never could. On paper, developers and the entire community saw those machines as inferior because they were different; they operated differently. For some of the games that came out of the PS2, many developers back in the day would have said "Impossible, it doesn't have enough RAM" or "the Graphics Synthesizer can't handle that stuff", and yet it did, because developers HAD to think differently.
The PS3 is another prime example of how hardware FORCED developers to look at programming differently, and look what has come of it. GOW, Uncharted 2, KZ2: these games are so impressive because the developers were forced to learn new ways to get power out of the machine, and that allowed them to go further than they would have if they had designed the machine themselves with the same budget and features.
My fear is that developers are going to want things easy (we all know it's human nature), and because of it the machine is not going to be as good as it could be. Yes, developers will get some great results out of it right away, but since the learning curve is almost nonexistent and the tech is so familiar to them, they won't bother looking for new ways to accomplish things, and the progression will be minimal.
If developers had had their hand in the design of the PS3 originally, would we have gotten GOW with almost no jaggies? Would UC2 have looked as good as it does, or would the same hardware simply have raised the quality of lesser titles and put more games on an even playing field? UC2 wouldn't look as good, but Haze would have been better. GOW wouldn't have the stunning visuals, but "insert random movie-based game here" would be better.
IMHO developers are more concerned with the here and now, while the hardware manufacturers have to look at what will be needed in the future. Sony's philosophy seems to have been "What can we do to get the most power at this price?", while the developers' philosophy is "What do I want more of right now to get more out of my game?"
The problems and controversies concerning what ended up in the PS3 have a lot to do with Microsoft selling over 4 million copies of Halo 2 in 2004. Anticipating that the headlines and word-of-mouth hype meant a potential next-gen Halo sequel could sell even more, they went ahead and set about forcing a new generation in 2005. Things like that could not really stay secret, even with Sony still working on prototype hardware, because even back then SEGA, Nintendo and the others were always working on next-gen plans.
Bill Gates was interviewed by Time magazine, IIRC, and when asked what Microsoft would do if Sony released the PS3, he stated, "we would have Halo to counter their launch day".
This forced Sony's hand on the PS3 announcement, and on the tough choice of settling for 90nm-process chips instead of 65nm, which was at least two more years away; the thought of letting Microsoft, or any console competitor, have two whole years in retail with a new-generation console sounded very dangerous.
Blu-ray was always planned, and the format war could have had something to do with it, though not much really when you consider that the PS3 was always going to have a Blu-ray drive as standard. Could it have been pressure from shareholders (who seem to care only about statistics and spreadsheets)?
If Sony had known that the PS2 was going to stay very profitable AND that Microsoft was going to have manufacturing issues resulting in bricked consoles, they could have taken the risk of waiting and come out two years later on a more satisfactorily yielding 65nm process. But there was also the Nintendo factor...
Most important of all, though, your argument is extremely valid when you consider that those same developers who complained about the hardware were given more RAM and supposedly more power in the Xbox 1, yet they did not bother to distinguish their games according to the hardware they were running on. And now they have all this so-called power and ease of development, yet they are not really pushing the hardware at all beyond what the Xbox 1 was capable of.
They did screw the developers. The problem is that because the Wii and Xbox 360 did so well, relatively speaking, the developers, unlike last generation, had the opportunity to say "screw you right back".
No console is an island anymore, not even the Wii. Consoles can't be too powerful, or that performance is wasted; they can't be too weak, or they risk not receiving as much multi-platform software as they ought to; and they can't be too exotic either, since any feature not replicated on other consoles is a feature that risks being under-utilised.
As much as we like to argue about consoles and their hardware here, the fact remains that so long as a console is reasonably cheap, has reasonable performance and is reasonably easy to port to, it will be the user experience, not the physical hardware and software layers, that determines how well it does.
No sir, Sony DID NOT screw the developers. They built their hardware the way they wanted to build it, and they provide tools for it, as they have with PhyreEngine, Edge, etc.
What you don't say is that the Xbox 1 IS BASED on Microsoft's trademark DirectX API, and that Microsoft merely MIGRATED the same development tools, with additional features, to the DirectX-based Xbox 360. That is the REAL reason Microsoft can boast about ease of development: if you are a game dev team and you learned the DirectX dev tools, there is nothing dramatically different to learn when you step up to the X360 other than flipping more switches to turn on more features.
There is a great article about Xenos and its features, but it is not programmed with low-level, hardware-specific tools; its custom features sit there idle, though they make for great forum pissing-contest comparisons.
Is game development easy even on an "easy to develop for" platform? Of course not. The PS2 started out very difficult, but because of Sony's support it became easy to develop for. The PS3 raises the bar again, but now the game devs (the ones who still complain) are in the cradle of Microsoft's proprietary DirectX tools, and they really do not want to switch to a platform like the PS3 that uses a different API. So when you see night-and-day differences between a low-level, custom-programmed PS3 sports game and an "easy to develop for" port, you can figure out the reasons a dev might complain.
Look at the recent rant by Activision's CEO: he wants a dedicated PC-like console so people can play his company's CoD Modern WarSequels and so they can charge for the service. Notice how he did not say it should be a Linux box; all their games, all of the CoD sequels, are programmed against the DirectX API.
To be fair, RSX wasn't at all exotic in its hardware or software programming model. If RSX had wound up being a fair competitor to Xenos, I wonder how much grief Cell and the split memory model alone would have given developers.
To the extent that RSX is the problem, it seems more an issue of NVidia being a half-generation behind ATI at the time PS3 came out than that Sony deliberately created a weird system to make it hard for programmers.
OK: Nvidia made the NV40 at 130nm, then revised the design into the NV47, aka G70, at 110nm, at which point it was made public that Sony was working with Nvidia to fab an NV47/G70-derivative GPU for console use on a 90nm process in Sony/Toshiba fabs in 2006. Meanwhile, Nvidia was only able to release its own 90nm graphics cards in 2006, with the G71, and then the G80 in late 2006.
ATI released their Shader Model 4.0-compliant GPU with a unified shader pipeline, on an 80nm process, in 2007; before that, other than Xenos, they had no unified shader GPU in retail.
Notice how ATI's Radeon HD 2900 XT was no real competition for the older G80, even though the G80 was still on 90nm.
On a side note, Sony's engineers and decision makers were most likely well aware of the G80 and what it could offer, but if you noticed that the G70 design had to go from 110nm down to 90nm, it becomes clear that they also knew the G80 was out of the question for a console, due to power consumption and thermal output. When you really start to think about it, it becomes clear that a G80 derivative would have had to be on a 65nm or a 55nm process (like the G92/G92b) to keep thermals and power consumption in check.
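A back-of-the-envelope way to see the thermal argument, using nothing more than the textbook CMOS dynamic power approximation (these are not figures from Sony or Nvidia, just the standard relation):

$$P_{\text{dyn}} \approx \alpha \, C \, V_{dd}^{2} \, f$$

where $\alpha$ is switching activity, $C$ is the switched capacitance, $V_{dd}$ is the supply voltage and $f$ is the clock. A die shrink cuts $C$ per transistor and typically allows a lower $V_{dd}$, and since power goes with the square of the voltage, the same design can shed a large fraction of its power draw at the same clock. That is why a G80-class chip only starts to look console-plausible after one or two shrinks.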
Sony's people were pressed by time and competition. Nintendo's choice was safe and sound, something they could do easily. Microsoft, though, was racing out of the next-gen gate, thinking more about establishing a brand and a user base on the strength of a potential Halo sequel than about thermals, power consumption and a real next-gen leap. Microsoft could have waited and marketed their console as a DirectX 10 platform to go hand in hand with Vista, but there was the danger that Sony could also get its hands on a GPU compatible with those trademark features, so they did what they had to do to differentiate themselves. It has cost them over 2 billion dollars, but Microsoft has been able to write off the money spent thanks to its dominant income in the OS market.
That's what they did with the PS3. It didn't work, and they went from being a dominant #1 to a lackluster #3 in the console space.
A more conventional architecture would have allowed developers to hit the ground running. Parity in game quality between the 360 and the PS3 wasn't reached until 2009, four years into the HD console generation!
Yeah, multiple cores force you to think in a different way. To tap the potential of either HD console you need to exploit parallelism.
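A minimal sketch of what that shift looks like, using plain C++ threads rather than any actual console SDK (the worker count, the chunking and the process_chunk function are all illustrative assumptions, not anyone's real engine code):

```cpp
#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical per-chunk kernel: stands in for the kind of
// self-contained job you would hand off to a core or an SPU.
void process_chunk(float* data, std::size_t count) {
    for (std::size_t i = 0; i < count; ++i)
        data[i] *= 2.0f; // placeholder workload
}

int main() {
    std::vector<float> data(1 << 20, 1.0f);
    const unsigned workers = 4; // illustrative: one job per available core

    // Instead of walking the whole buffer on one core, split it into
    // independent chunks and process them in parallel.
    std::vector<std::thread> pool;
    const std::size_t chunk = data.size() / workers;
    for (unsigned w = 0; w < workers; ++w)
        pool.emplace_back(process_chunk, data.data() + w * chunk, chunk);
    for (auto& t : pool)
        t.join();
}
```

The point is less the threading API and more the mindset: the data and the work have to be carved into independent pieces up front, which is exactly the kind of restructuring Cell forced on everyone.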
Cheers
So far this gen there are more PS3 games being programmed from a parallelism perspective than on the competition. The "parity in game quality" has more to do with the fact that a developer raised on Microsoft's DirectX-based XNA development tools quickly realizes, on coming over to the PS3, that it is not a Microsoft platform, regardless of it having a "PC-like" GPU in RSX. They have to use OpenGL/ES-based API tools, and to really get performance they have to go low level with libGCM, but they are not going to do that. Instead they have been waiting for Sony to keep working on its tools, when it is still up to the developer to make the effort. If you learned anything from last gen (no disrespect to devs, but it's true), it's that the more things change, the more they stay the same.
Many multiplatform releases are also on the PC. If the PS3 had been more powerful than the 360 in the right way, i.e. if it had a more powerful GPU (say a G80 with a 256-bit memory interface and more memory) instead of the Cell, you can bet you'd see the difference in multiplatform titles, such as better textures and higher rendering resolutions, because the PC versions already have these features. It wouldn't be wasted.
Like I said earlier, think about it and re-read everything about the G80 and its arrival. The G80 would have been a thermal nightmare in a 2006 or 2007 PS3, mainly because of the 90nm process. It would have made a lot more sense to wait for 65nm (still too much power consumption and heat), and it would have been perfect as the G92b at 55nm, but that's about three years later, a November 2008 launch. By then there would have been no problems with Blu-ray drive diodes and no need to disable SPUs in Cell, and maybe the console would have had twice the RAM for system and graphics. The problem is that the competition would have been selling consoles since November 2005, and Sony would have had to deal with a large group of multiplatform devs treating the PS3 the way devs treated PS2-to-Xbox-1 ports, so you basically get diminishing returns.
If the PS3 had shipped with a better GPU, it couldn't have shipped with the Cell processor. Price matters, and the PS3 was already costing Sony a huge amount of money; the system had an incredible price tag at launch. Throwing in even more expensive hardware could have been a death blow. The PS3 was launched as it was for a reason. You could make the same arguments about the Xbox: a better CPU, more eDRAM, more RAM. In the end it just becomes a console priced like a computer, which is probably something people don't want.
Indeed, I personally would have preferred that all three had waited a couple of years. It's like they (minus Nintendo) really disregarded how much heat these consoles were going to put out (Microsoft) and how small the graphics leap was going to be (again, Microsoft).
I think at that time only Kutaragi could make the call, and he was most likely overwhelmed by the sheer workload. Cell and Blu-ray already took up major attention and resources. People were b*tching about the long turnaround times in the early days when negotiating for exclusivity.
If he had taken a more open approach, things would have worked out better for Sony. I am sure other talent at Sony was familiar enough with the more common GPU platforms to iron out the details for him.
My theory that Sony should have waited to release the console in 2008 makes sense to me: at 45nm for the CPU you get higher yields at a satisfactory frequency, and with a 55nm GPU and fewer problems with the Blu-ray drive they really could have ramped up production. The console would still have been expensive; there is really no way out of that, and I don't know by how much. Obviously Sony, when making its decisions, could not know what was going to happen, but they could anticipate that Microsoft would release a new platform in 2010, and worst of all the financial meltdown would have badly hurt that kind of alternative thinking. There are just too many variables. But the fact remains that the G80, at over twice the performance of the G70, was physically and realistically impossible without a major die shrink; consider that even the current PS3 Slim uses a copper heatpipe in its still heavy-duty heatsink.
Most of the PS3's price was due to Blu-ray, though, not the CPU or GPU, which were in the same league cost-wise as the 360's. In simplistic terms, at launch the PS3 was a $400 machine with $200 of Blu-ray on top, while the 360 was a $400 machine with no Blu-ray. Arguing over a few bucks of CPU, GPU or RAM pales in comparison.
Apart from the CPU, GPU and Blu-ray drive, you also had a DVD drive, an Emotion Engine + Graphics Synthesiser AND 32MB of RDRAM (for PS2 backward compatibility), as well as built-in WiFi, USB ports and HDMI. You really had a lot of EXTRA components and features adding up, geared to the early adopter. Basically Sony anticipated that their die-hard consumers would snap up 2 million PS3s in short order, just like those previously unknown die-hard Sony fans who sprang forth to buy PS2s in Japan at roughly US$400, and in the US at $300, in the same year.
The biggest difference this time, though, is that Sony went for a "let's copy Microsoft" worldwide launch, and that played a major role in things backfiring on Sony.
The PS2 launched in March 2000 in Japan with a small number of games; by the time the console reached its official US launch in October (leaving aside people outside Japan importing PS2s), there was a lower price and more games. Maybe, just maybe, if Sony had launched only in Japan in 2006 and left the US and European launches for late 2007, there would have been slightly fewer problems with the number of games at launch. Even if the price had been the same, there would have been more games and more focus on the US and European markets, instead of Sony getting overwhelmed the way they were.
Next time Sony launches a new-gen PlayStation, they are going to have to go back to at least a nine-month wait between the Japanese launch and the launches elsewhere.
Oh, and yes, I believe the rumors are really about a possible next-gen PSP. It would be a major mistake for Sony to release a new console in 2011, 2012 or even 2013; they have to aim for 2014 or 2015 and focus on a chip process that can deliver the required yields, because there is no doubt such consoles will have more transistors, and those transistors are going to consume more power and produce more heat.
Say, for example, a Fermi-based GPU were used: it would have to be on a 32nm process, or more likely well below that, 30nm or less.