I don't see why you think the CPU transistor budget would affect the GPU transistor budget in such a significant way. It's not as simple as saying "let's put in fewer SPUs so we have more silicon for the GPU".
Next gen, the transistor budget will matter far more than it does now. This gen we have a budget of around 500 million transistors; next gen it'll be somewhere between 4 and 8 billion. Any inefficiency or flaw in the design will make the system's handicap that much more obvious.
Also, there's a lot you can do with an array of uber-fast general-purpose stream processors (including weird and wonderful systems and algorithms we haven't even thought of yet) to make your game look better and to create a richer world "simulation", which I think will have a much greater impact on the quality of games going into the next gen.
Here's the problem: it's good to design a system to handle new algorithms that may arise in the future, but not at the expense of the algorithms that are already available. Besides, research into GPGPU is moving ahead, so you'd better be sure there's something out there that will make 32 SPUs worth it before you go ahead with it.
We can always find more work for the hardware to do. The problem is getting the hardware to do that work faster and smarter. So as for the question of whether 8 SPUs is enough? No chance. What about 32? 64? 512?
As long as there's money to burn, there's always work that can be done to improve the quality of a product, regardless of how powerful (or not) the hardware may be.
If you're going to view this from an unlimited-resources sort of design perspective, you're not going to see my argument. You can always give Cell more jobs to do, but are they useful jobs? Jobs the GPU can't do? Because Cell is more expensive than a GPU in terms of transistor budget.
That's not the question you should be asking. The question you should be asking is: are there going to be new solutions to problems that will exist in the future (solutions that add value to the quality of the product, whether through the visuals or through the complexity of the world simulation) that you can handle efficiently on the CPU without having to push all your data across the bus to the GPU? More often than not, the answer depends entirely on the rest of the hardware configuration as a whole, not just on whether you have a "stronger" CPU or GPU.
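Just to put a rough number on the "pushing data across the bus" cost, here's a back-of-the-envelope sketch in C++. The bandwidth, payload and frame-time figures are purely illustrative assumptions, not any real console's specs:

```cpp
// Back-of-the-envelope sketch of why round-tripping data to the GPU hurts.
// All numbers are illustrative assumptions, not specs of any real console.
#include <cstdio>

int main() {
    const double bus_gb_per_s = 20.0;  // assumed CPU<->GPU bus bandwidth
    const double payload_mb   = 64.0;  // assumed per-frame data pushed across
    const double frame_ms     = 16.7;  // 60 fps frame budget

    // Time spent just moving the data, before any processing happens.
    double transfer_ms = (payload_mb / 1024.0) / bus_gb_per_s * 1000.0;
    printf("transfer: %.2f ms of a %.1f ms frame (%.0f%%)\n",
           transfer_ms, frame_ms, transfer_ms / frame_ms * 100.0);
    return 0;
}
```

With those made-up numbers, just shipping the data eats around 3 ms (roughly a fifth) of the frame before any work gets done, which is why keeping a job on the CPU side can win even when the GPU is faster at the raw math.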
I think this is the wrong way to go about it. This is a recipe for ending up with an over-engineered product.
I would make sure the system runs the solutions that are available today, and runs them up to standard. As for new solutions that may arise in the future: if one works on the configuration, then use it; if not, it will have to wait for another generation of consoles.
Besides, those new solutions tend to be ways of doing jobs more efficiently on the current hardware configuration, so whatever your configuration is, there will be new solutions anyway. But it's poor hardware design to force devs to think up new solutions just so your product can catch up to a competitor that released their hardware a year earlier.
I'd agree with you, if it wasn't for the fact that I don't...
There were, and still are, plenty of problems Cell was designed to handle efficiently, and it's an interesting point to note that these problems generally make up the wider load of games-related processing. I don't see how a chip that is well optimized for things like vertex processing, skinning, culling, collision, physics, and encoding/decoding audio/video streams is not a solution to a set of problems that have existed in games from the beginning. It's also worth noting that Cell is much more future-proof, in that it's far more adaptable to new types of workloads than having to somehow wangle all your data into a shader-friendly format and fit it into your GPU registers.
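To make the "stream-friendly workload" point concrete, here's a minimal sketch of the shape those jobs tend to take: a structure-of-arrays vertex transform in one tight, branch-free loop. It's plain portable C++, not actual SPU code, and all the names and layouts are made up for illustration:

```cpp
// Minimal sketch of a stream-friendly vertex job: structure-of-arrays data,
// one tight loop, no branching -- the shape of work SPU-style cores eat up.
// Plain portable C++ for illustration; names and sizes are made up.
#include <cstddef>

struct VertexBatchSoA {
    float* x; float* y; float* z;   // positions, one array per component
    std::size_t count;
};

// Apply a row-major 3x4 affine transform to every vertex in the batch.
void transform_batch(VertexBatchSoA& b, const float m[12]) {
    for (std::size_t i = 0; i < b.count; ++i) {
        float x = b.x[i], y = b.y[i], z = b.z[i];
        b.x[i] = m[0]*x + m[1]*y + m[2]*z  + m[3];
        b.y[i] = m[4]*x + m[5]*y + m[6]*z  + m[7];
        b.z[i] = m[8]*x + m[9]*y + m[10]*z + m[11];
    }
}
```

Skinning, culling, collision broad-phase and stream decoding all reduce to loops of roughly this shape, which is why a wide array of stream processors covers so much of a game frame.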
There is optimized, and then there is overkill. The GPU or cheap dedicated hardware already covers some of the jobs you mentioned, and the Xbox 360 managed the rest on a smaller CPU. Games like GTA4, which run a large-scale world simulation all the time, simply failed to perform any better on PS3. Because at the end of the day, the PlayStation runs games, not physics. Even if Cell can run a physics simulation faster than its rival, it's overkill when it still runs the game just as choppily as its rival. Future-proofing is nice, but future-proofing is the same as looking for problems, which is what I said Cell is doing, so in a way you seem to agree with me in the end.
Overall though, my point remains that a new Cell wouldn't require any new "learning" of how to map processes onto the hardware, as most of the practical and theoretical work in this area has been done this gen. It would only be a case of scaling existing tasks to take advantage of more raw power (e.g. spreading your physics, skinning and vert processing across more cores, as in the sketch below) and of using the hardware to implement new kinds of processes that can further enrich the quality of your product (in the ways I mentioned earlier).
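As a toy illustration of what "scaling across more cores" looks like, here's a sketch that splits one data-parallel job across N workers. The same code serves 8 or 32 cores by changing one parameter; std::thread stands in for a real SPU job scheduler, and the skinning math is a placeholder:

```cpp
// Toy sketch of scaling one data-parallel job across more cores: identical
// code serves 8 or 32 workers, only num_workers changes. std::thread is a
// stand-in for an SPU job scheduler; the "skinning" math is a placeholder.
#include <cstddef>
#include <thread>
#include <vector>

void skin_range(float* verts, std::size_t begin, std::size_t end) {
    for (std::size_t i = begin; i < end; ++i)
        verts[i] *= 0.5f;   // placeholder for real skinning math
}

void skin_parallel(float* verts, std::size_t count, unsigned num_workers) {
    std::vector<std::thread> pool;
    std::size_t chunk = (count + num_workers - 1) / num_workers;
    for (unsigned w = 0; w < num_workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end   = begin + chunk < count ? begin + chunk : count;
        if (begin < end)
            pool.emplace_back(skin_range, verts, begin, end);
    }
    for (auto& t : pool) t.join();   // wait for every worker to finish
}
```

The point is that once a task is already expressed this way, moving from 8 SPUs to 32 is a tuning exercise, not a research problem.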
I still don't see any jobs a future GPU can't handle to a satisfactory level, and you keep saying "new forms of processes", which implies "learning" new algorithms and finding out what works and what doesn't. Given how good the GPU's performance-to-cost ratio is compared to other solutions (which is what paved the way for GPGPU research in the first place), wouldn't a better GPU enrich games more than a 32-SPU Cell?