Ah the joy of semantics... don't ever change B3D!
Couple points:
1- MS released their BS TFLOP number first; Sony then released their 2x bigger number. It was a clear statement that "our system is twice as powerful," especially taken with their whole spiel about how 'the flop is the defining measurement of power this generation.'
Reflecting back, the TFLOPs number was actually something discussed in Sony's vision (and patents) for the Broadband Engine, and it was subsequently carried forth by a number of vocal individuals (viral marketers?). I had stated at the time that MS had just cashed in on the BE hype brewing in cyberspace with a pre-emptive strike. You can see from that post how I pretty accurately predicted that we would be entangled in a web of near-useless numbers for well over a year. Anyhow, MS and Sony did a lot of jockeying for position and mindshare pre-launch, and the designs also took some of this into consideration.
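To put rough numbers on the flop fight: the 1 vs. 2 TFLOPS figures were whole-system marketing claims (GPU included), but the CPU-side peaks can be sketched from the publicly quoted specs. The sketch below counts only each CPU's 4-wide single-precision FMA paths (marketing counted more generously), so treat it as back-of-envelope, not gospel:

```python
# Back-of-envelope theoretical peak GFLOPS for the two CPUs.
# Counts only 4-wide SIMD fused multiply-add (2 flops/lane/cycle);
# these are theoretical peaks, not measured throughput.

GHZ = 3.2  # both CPUs are clocked at 3.2 GHz

# Cell: 8 SPEs, each a 4-wide SIMD unit, plus the PPE's VMX unit.
spe_gflops = 8 * 4 * 2 * GHZ          # 204.8
ppe_gflops = 1 * 4 * 2 * GHZ          # 25.6
cell_peak = spe_gflops + ppe_gflops   # ~230.4

# Xenon: 3 symmetric cores, each with a 4-wide VMX unit.
xenon_peak = 3 * 4 * 2 * GHZ          # ~76.8

print(round(cell_peak, 1), round(xenon_peak, 1),
      round(cell_peak / xenon_peak, 1))
```

On this narrow accounting Cell's CPU peak is roughly 3x Xenon's, which is exactly why the peak-flops framing was so attractive to one side of the marketing war.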
I don't know how often you run into a job where the *code* takes up a huge chunk of a 256k block since you're not going to put an entire *application* on either one all at once.
Where were you when we were hearing all the doom and gloom about 1MB of cache?
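To make the 256k point concrete, here's a sketch of one way a single SPE's local store might get carved up. Every segment size below is made up purely for illustration, but it shows why the *code* slice tends to be small next to the streaming buffers:

```python
# Illustrative budget for one SPE's 256 KB local store, which must hold
# code, stack, and DMA buffers together. All segment sizes here are
# hypothetical, chosen only to show the typical proportions.
LOCAL_STORE = 256 * 1024  # bytes

budget = {
    "code (one job's kernel, not the whole app)": 32 * 1024,
    "stack":                                       8 * 1024,
    "input buffer A (double-buffered DMA)":       64 * 1024,
    "input buffer B (double-buffered DMA)":       64 * 1024,
    "output buffer":                              64 * 1024,
}

used = sum(budget.values())
print(f"used {used} of {LOCAL_STORE} bytes, {LOCAL_STORE - used} free")
```

The point being: you stream jobs through the local store rather than resident-loading an application, so the code footprint per job is usually the least of your worries.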
With code optimization? No, Cell will emerge victorious every time.
Every time?
but I want to be unequivocal in what I'm saying here: the Cell is superior to the XeCPU - they are not 'balanced' chips.
The tradeoff? Cell is harder to program for; but let the myth of "general purpose" die the death it needs to. (did you read that thread yet?)
If you have a strict time budget for developing a product, could you not envision scenarios where being more difficult to program for makes the chip less superior? The chips have a context, game development, that is pretty important to whether they are better or not at given tasks and overall. And developers do NOT agree on this topic (just look at a respected developer talking about moving large amounts of performance-insensitive code over to a managed code framework for ease of access).
In a utopian world where you can build your code from the ground up and have as much time as you need to test new approaches to software problems to get the best performance out of your processor, Cell would win pretty easily. It is, after all, 50% larger and has some peak performance metrics double those of Xenon, and when you create ways to leverage the design's architectural strengths (very, very fast local store; robust internal bus; more processors) it could be even more. But most developers are not starting at step 1; they have a host of staff they need to educate (and some, babysit), and they are under difficult time and expense guidelines. They cannot stop and take an extra 6 months to discover a new technique for AI because their learned methods are not friendly with the SPEs.
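As a toy example of the kind of restructuring the SPEs ask for: a naive loop that waits on each transfer versus a double-buffered one that hides transfers behind compute. The chunk counts and timings below are arbitrary illustrative units, not measurements of either chip:

```python
# Toy latency model: why SPE-style streaming rewards restructured code.
# N chunks, each needing t_dma to transfer and t_compute to process.
# Naive code waits for every DMA; double-buffered code overlaps the
# next chunk's transfer with the current chunk's compute.
N, t_dma, t_compute = 100, 2.0, 3.0  # arbitrary units

naive = N * (t_dma + t_compute)

# Double-buffered: pay one initial DMA, then each subsequent chunk
# costs max(t_dma, t_compute) because the two proceed in parallel.
overlapped = t_dma + N * max(t_dma, t_compute)

print(naive, overlapped)  # 500.0 vs 302.0
```

The payoff is real but only arrives after the algorithm has been rebuilt around streaming, which is precisely the "extra 6 months" cost the paragraph above is describing.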
This is the problem with Cell from a market standpoint. If MS had released in 2006 and Sony had released in 2005 and had already accelerated their investment in development, things could be substantially different (see: PS2). One of the problems developers are facing is the increasing complexity of their games (as well as increased budgets and dev cycles), but in the same breath they are also facing more complex hardware designs that require additional work and management to extract performance. Both consoles face this problem, yet Cell is going against the flow with regard to where the market has been, as well as against its competition (PC, 360), who got to market quicker. With much of the advanced middleware coming to the market from the PC, intentionally departing from that model carried a high risk/reward ratio.
In theory it is easy to ignore these elements of the market, but what is "superior" is what gives returns in the market where the competition is occurring. The ceiling is high for Cell, higher than for Xenon, but for a game with many varied systems running, all of which take optimization to extract Xenon-level performance, the question is: how long will it take before the median game reaches and then exceeds Xenon in performance?
IMO one of the reasons the SPE design was chosen was because it would make porting the code to other systems (360, Wii, PC) nearly impossible. At the time, with Sony's absolute market dominance, this seemed like an excellent move to a) leave significant performance headroom for a long lifecycle at an affordable price/performance point and b) keep the competition at bay. But the market has not quite shaped up this way, partly due to the increasing complexity of games and designs and the time and fiscal requirements involved. Getting performance now, using the tools and staff and methods that have proven effective, and not re-inventing the wheel (to get comparable performance) except where necessary, is an important aspect of a design's strength and "superiority". It is easy to look at the flops or the advantages of an architecture, but it needs to be put in the context of a market.
Right now whether you like Cell better or not really depends on whether you are 1st party or 3rd party, what your design and performance limitations are, and what your resources are. Whether this will change in the future (probably so, with movement toward Cell) and by how much is really up in the air and will be dictated strongly by the market. If 2007 sees a significant disparity in console marketshare growth (based on the software lineups and retail prices I think it will), I think we will see developers putting a fair amount of effort into trying to solve performance issues on Xenon. I think Xenon has a fair amount of headroom for performance-bound situations, where the time saved from not optimizing huge amounts of code and instead targeting key performance bottlenecks could have positive results. But I think it all comes down to what you are doing and the resources you have to accomplish your task.
IMO what is superior is the one that lets you meet your goals within your constraints and resources. In that framework I cannot give the same unequivocal affirmation, because it seems the answer really is, "It depends." But in theory, yes, you are correct.