Will Microsoft trump CELL by using Proximity Communication in XB720 CPU?

Yeah, and ease of development helps you with the crucial software bit, especially in the early days (if you've been following the PS3 vs 360 stuff, or followed any of the PS1 vs Saturn stuff). Getting a strong library out on time, and being the leading platform with the more impressive (stable) versions of multiplatform games, counts for a lot at the dawn of a generation.

Plus, the more of your code you can throw at multiple machines (such as Xbox1/360 and PC), the more appealing the combined weight of the platforms becomes. I'm sure this isn't a point lost on the likes of Capcom, who've taken their PS3-exclusive DMC4 to "MS platforms".

The point is, if you can do it, why not? Clearly, it does have *some* bearing, even if it's not the most important thing to consider. Chasing CPU power buys you potentially very little for your money, and it's not something Nintendo, with their often hugely successful systems, have ever really bothered doing.

Agreed.. But this still doesn't take away from the point I made, that neither coding complexity nor processing power is a major influential factor in platform success..
 
But that was a pretty significant move. From using external IP in the form of discrete chips, to owning all the core IP of their console.
And I'm clearly not arguing that. I just corrected the implication Fox5 made.

And APIs are used to isolate lower level implementation details from higher level systems. Implementation details like architecture.
Right. I know that. How is that splitting console and PC apart?

Right, look at the end price for PS2 compared to the cheapest PCs. Consoles start high-ish in price and then quickly cost-reduce like all other mass-market CE devices. With the Xbox and PS brands battling for market share, cost reduction is absolutely imperative.

The cost-reduction enabler is integration (and economies of scale production).
My point is that console prices are going up (Wii > GC, X360 > Xbox, PS3 > PS2) while PC prices are trending down. And that is true. Your point is that other companies will never be as good at minimizing the cost, and I think that's true too. How important that is, though, is up for debate.

In the console market you don't win by maintaining a high price on your hardware. You win by lowering cost, which enables price reductions, which result in massive market share advantages, which in turn result in revenues from game license royalties. In the process, try not to hemorrhage too much cash.
Winning by market share is secondary. Profiting is primary. Winning market share is tied in with making a profit, though, so the situation is not straightforward. The H&E division is under pressure to make a profit, that much is well known. It's that way because of the hardware. I think MS would love to get out of the hardware part of it and stick to the software part, where they can basically always turn a huge profit. That's how MS makes its profit in most every other part of the company.

I don't understand how flippant antagonism is supposed to help the situation.

I especially don't understand it because you skipped my 3rd and 5th points. Does your ignoring them mean they've magically stopped existing?


We won't be seeing a preview of Xbox3 tech in PCs within 2 years, other than at the very high level like shaders and newer versions of DDR memory.
What, in your opinion, could the next two years of CPU technology lack that the next 4 years of CPU technology could produce?
 
Where does it say that?

All they're trumpeting is that they're more power-efficient than Cell or Xeon.

Also, a FLOPS rating on its own is a meaningless value. We need to know the IPC, the supported instruction set, and the real cycle cost of each op.
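For illustration, a peak rating is just one multiplication: peak FLOPS = execution units × FLOPs per cycle per unit × clock. Using the commonly quoted Cell figures (approximate, and peak only): one SPE with a 4-wide fused multiply-add pipe rates 4 × 2 × 3.2 GHz = 25.6 GFLOPS, so 8 SPEs give 204.8 GFLOPS. Nothing in that multiplication says how much survives a real instruction mix, dependency stalls and memory waits - which is exactly why IPC and per-op cycle costs matter.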


They're also promoting the flexibility of the design.

Unlike most other processors, it can be reprogrammed and reconfigured on the fly to change the kind of processing it does, such as signal processing or data processing. Raytheon calls this a polymorphic computing architecture (PCA).

By functioning as a single processor, it reduces the number of processors needed for a system like a satellite or aircraft. More important, it's designed as an array of chips, which allows for teraflop throughput at a fraction of the wattage needed by today's processors.


I remember reading about people claiming CELL would be in demand by the military. MONARCH seems much more up the military's alley.


It uses eDRAM, which makes it more difficult to clock high, but as eDRAM options improve with things like Floating Body and Magnetic DRAM, a lot more potential opens up.



Making something like CELL is not some "near impossible" goal for engineers. People need to come back down to earth regarding the hype.
 
You'd have to be stupid to believe single-threaded code could reach the performance of multi-threaded code on multi-core CPUs.. And considering it's blatantly obvious that execution parallelism is where microprocessor tech is headed and will continue to go, such an argument is pointless and irrelevant..

Pointless and irrelevant, is it? So you're telling me that single-threaded performance is completely irrelevant in a modern CPU and there is no problem or task that can't be made to take full advantage of multiple cores and threads?

Also, there is no requirement for developers to utilise integer-heavy, branchy code for games development.. Granted, it's much more efficient for things like AI when working on PC, for example (OOOE general-purpose CPUs are good at dealing with it), but for the non-OOOE CPUs present in next-gen consoles, a change in one's programming practice in certain situations could drastically reduce any performance loss incurred by the CPU's inadequacies in this area..
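To make that concrete, here's a minimal generic C++ sketch (my own illustration, not from any console codebase) of the kind of practice change meant here - swapping a data-dependent branch for a select, which long-pipeline in-order cores without beefy branch predictors much prefer:

    #include <cstdio>

    // Branchy version: on a long-pipeline in-order core, every mispredicted
    // branch here costs a pipeline flush.
    static int clamp_branchy(int x, int lo, int hi) {
        if (x < lo) return lo;
        if (x > hi) return hi;
        return x;
    }

    // Branch-free version: compilers typically turn these ternaries into
    // conditional selects, which cost the same cycles whatever the data.
    static int clamp_branchless(int x, int lo, int hi) {
        int t = (x < lo) ? lo : x;
        return (t > hi) ? hi : t;
    }

    int main() {
        std::printf("%d %d\n", clamp_branchy(42, 0, 10), clamp_branchless(42, 0, 10));
        return 0;
    }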

Why are there still so many people who believe that comparison between CPU architectures is valid without programming optimally for their strengths?

I wonder why some people continue to insist that there is no problem/task that can't be "optimised" for Cell's rather single-minded design philosophy, to the point where it outperforms all other architectures in every single scenario. Tell me, if FLOPS is all you need for high performance and it's simply the code that's written incorrectly, then why haven't CPUs focused on raw FLOPS performance since day one, with the code simply growing up around that? If every problem can be solved with lots of FLOPS, and you can pack a lot more of them into any given die space than a more balanced architecture, then why go balanced at all?

Come to think of it, why aren't we running everything on GPUs if everything aside from FLOPS performance in a modern CPU is irrelevant?

Intel & AMD aren't focused on games performance in the PC space, so this argument is a little odd..
They are much more concerned with all-round performance across a vast multitude of operations than with gaming only..

Gaming performance is a big focus for desktop CPUs. A win in this area affects perception of the CPU as a whole. Just look at the A64: aside from in games, it wasn't really much faster than the P4 until the last one or two iterations, but it was widely accepted as the much faster chip.

What percentage of PCs sold each year do you believe are bought by gamers?

Answer: Not nearly half as many as you seem to think..

And the percentage of high-end FX/EE-type CPUs? You know, the ones marketed at gamers?
 
They're also promoting the flexibility of the design.




I remember reading about people claiming CELL would be in demand by the military. MONARCH seems much more up the military's alley.


It uses eDRAM, which makes it more difficult to clock high, but as eDRAM options improve with things like Floating Body and Magnetic DRAM, a lot more potential opens up.



Making something like CELL is not some "near impossible" goal for engineers. People need to come back down to earth regarding the hype.

I still fail to see how you can derive any conclusions from a brief news article... :(

At least for Cell, we have concrete data and supporting research from academia. It is also being sold to the military, supercomputing centers and national labs for real. According to its original patent, Cell is designed to work in a "cluster" over a high-speed network too ... if money is not an issue.

In many situations, the military will pick solutions based on total system capability and software (not just a CPU). The shortcoming for some of these applications seems to be double precision (not sure how MONARCH performs in this respect).



As for the Proximity Communication suggestion in your OP, the lab guys have to clamp down the test chips (test board) in a vice to demonstrate it (just showing "something" flowing). I'm not into these things :) and am not sure if they can sustain vibrations and drops yet. You may be hyping Proximity Communication and MONARCH more than people hype Cell.


EDIT: I'm also curious whether Cell will find use in areas beyond gaming, scientific computing and the military. In consumer electronics, Sony and Toshiba have been sitting on it for months and years now. The only consolation I have is seeing Toshiba engineers posting on the cbe-oss-dev mailing list (which could mean they are working on something... but I'm not sure if anything will be released eventually).
 
I remember reading about people claiming CELL would be in demand by the military. MONARCH seems much more up the military's alley.
Because of its lower power consumption? As for Cell being in demand for military applications, we have already heard of its considered use there. More importantly, military applications of the chip have sod all to do with consoles! Unless MS is willing to invest a military-style budget in XB3000. That's going to be one BIG chip that'll cost the earth. You think that's better in a console than a 100-watt CPU that costs 1% of the military price?

The real-world consideration is what commercial solutions are viable in a mass-produced CE device, which surely has to look heavily at what happens with Cell. If much of its theoretical potential doesn't contribute to a better platform, MS can confidently go with a large conventional CPU. If Cell is used effectively and it makes a difference, proving high maths performance is beneficial to games, they'll want to source a similar high-FP-throughput CPU or other vector processor.
 
You'd think with a ;) people would recognize sarcasm. I guess I was wrong.

Sarcasm? I don't think so. Were you exaggerating to make a point? Ya, but the point is still off base. Programming ease of use was actually a big goal for MS in their design.

Likely for a good reason. OOOE is not something you just tack on at the last moment.

And again you make stuff up. Tacked on at the last minute? According to whom? This seems like a design goal from early on in the project.

It had to have been clear from the beginning that OOOE was not possible, not to mention IBM's somewhat checkered history in this regard.

Obviously it was not clear to MS, as they were reportedly very disappointed when IBM informed them they were unable to deliver.
 
Sarcasm? I don't think so. Were you exaggerating to make a point? Ya, but the point is still off base. Programming ease of use was actually a big goal for MS in their design.

I meant it would've been really hard for them to do good OOOE without a massive increase in cost. $800? Probably not, but if they tried to still have 3 cores then yes, $800-1000 is not out of the question.

And again you make stuff up. Tacked on at the last minute? According to whom? This seems like a design goal from early on in the project.

If you read it carefully, I said it is not something you tack on at the last moment.

Obviously it was not clear to MS, as they were reportedly very dissapointed when IBM informed them they were unable to deliver.

Then MS was really dumb. It is obvious that IBM could never deliver 3 OOOE cores running at 3.2 GHz. They couldn't even get 1 OOOE core to 2.5 GHz on the same process. Even Intel and AMD couldn't get dual-core 3.2 GHz CPUs at the 90nm node. Unless they expected a much worse chip, like 2 cores going at 2 GHz, what MS wanted was a pipe dream.

EDIT: For clarification, excluding the Pentium D, neither AMD nor Intel could get a dual-core 3.2 GHz CPU at the 90nm node.
 
If you read it carefully, I said it is not something you tack on at the last moment.

"OOOE is not something you just tack on at the last moment."
Obviously, insinuating that this was what MS attempted to do.


Then MS was really dumb.

So you're smarter than the engineers at MS and IBM who thought this would be possible?

With that, I'll leave the conversation to the interesting topic at hand. I just thought it was relevant to point out that MS did indeed make OOOE one of their goals, so going forward, they will probably try to do this again with the next Xbox.
 
MS did indeed make OOOE one of their goals, so going forward, they will probably try to do this again with the next Xbox.

Perhaps, but FWIU this feature does limit the performance of the chip. With Cell gaining performance and acceptance over the next few years, I think MS will have to consider performance parity with Cell2 a goal as well.

Then again, as Joshua L. predicted many moons ago, perhaps they'll just focus on a very GPU-centric design and offset their deficient CPU in this manner.

Whatever they decide it will be interesting to see how they plan to counter Cell2.
 
"OOOE is not something you just tack on at the last moment."
Obviously, insinuating that this was what MS attempted to do.

How is that insinuating what MS was attempting to do? I was pointing out that OOOE is hard, and thus something you can't just tack on, meaning they couldn't have been surprised at the last moment that OOOE wasn't going to be there. I think you are way too sensitive about this stuff.

So you're smarter than the engineers at MS and IBM who thought this would be possible?

Please don't make an appeal to authority here. We've never heard MS or IBM engineers say it was possible, only that they couldn't do it. And going by existing products, logic would dictate that a 3-core OOOE 3.2 GHz CPU at 90nm was totally impossible within console cost and power consumption limits.

With that, I'll leave the conversation to the interesting topic at hand. I just thought it was relevant to point out that MS did indeed make OOOE one of their goals, so going forward, they will probably try to do this again with the next Xbox.

Maybe, but as debated throughout this thread, this is quite a challenge. If they're stuck with a quad- or sextuple-core OOOE CPU, and Cell2 has something like 32 SPEs, then they would lose by an order of magnitude in performance. Even worse, Cell2 could have 1 or 2 OOOE cores as the PPE, thus getting the best of both worlds. That's why I suggest that MS should simply license a Cell variant for the Xbox3. Any other strategy would be very expensive or very underperforming.
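Rough peak numbers, just to size the gap being claimed (my own back-of-envelope, peak figures only): 32 SPEs at today's 25.6 GFLOPS each would be about 819 GFLOPS, while four OOOE cores each retiring a 4-wide multiply-add per cycle at similar clocks land around 100 GFLOPS - close to an order of magnitude apart on paper.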
 
What, in your opinion, could the next two years of CPU technology lack that the next 4 years of CPU technology could produce?

All I am saying is, Xbox3 / Xbox720 technology will be, for the most part, different from what we see in 2008, 2009 and 2010 PCs.

The Xbox systems seem to be moving further & further away from being a modified PC like the original Xbox.

Even the upcoming R600, which is somewhat based on Xenos technology, completely lacks eDRAM.


P.S. I don't believe we will EVER see an Xbox using any kind of CELL processor.
 
The Xbox systems seem to be moving further & further away from being a modified PC like the original Xbox.
That's a huge reach from one generation! There's not enough data to extrapolate a direction for the platform. There's just as much probability of convergence back to PC hardware as there is of further divergence. I'd even say that with the push of XNA, and the coupling of software between XB360 and PC, a more PC-friendly development platform would be desired by MS, rather than making things even harder for developers between PC and XB3000, or gimping the hardware through dependence on middleware. By that point, PS4 devs ought to be cruising along on Cell, porting existing algorithms and being able to just throw more at it (assuming Cell maintains its current format, which it'd be crazy to give up on).
 
Then MS was really dumb. It is obvious that IBM could never deliver 3 OOOE cores running at 3.2 GHz. They couldn't even get 1 OOOE core to 2.5 GHz on the same process. Even Intel and AMD couldn't get dual-core 3.2 GHz CPUs at the 90nm node. Unless they expected a much worse chip, like 2 cores going at 2 GHz, what MS wanted was a pipe dream.

EDIT: For clarification, excluding the Pentium D, neither AMD nor Intel could get a dual-core 3.2 GHz CPU at the 90nm node.

There is a range of implementations for out of order execution. One of the parts of the equation left out is the width of each processor. Both Cell and Xenon have cores that top off at an issue width of two.

The x86 chips at the time had a width of 3, and the IBM OoO core had a width of 5.

Internally, the number of execution units in a single A64 core is close to the total number of units in the three-core Xenon chip. In any given cycle, a K8 core can internally issue 9 instructions peak, several times more than any single Xenon core.

An OoO core that did not go to such extreme lengths to extract ILP would significantly cut back on the amount of related hardware.

Less easily measured but still significant is how aggressively speculative a design is. x86 chips as we know them are very aggressive, which also results in a large amount of work thrown away.

OoO doesn't automatically make a chip as bulky as an AX2.
 
Pointless and irrelevant, is it? So you're telling me that single-threaded performance is completely irrelevant in a modern CPU and there is no problem or task that can't be made to take full advantage of multiple cores and threads?

No, I'm telling you that, given a multi-threaded architecture, nobody in their right mind would develop software systems based on a single-threaded model in the hope of gauging any kind of performance metric for the system as a whole.. How can someone say "Cell sucks because it stinks at single-threaded games" when the game itself leaves 7 whole cores sitting idle at run-time..?
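As a toy illustration of the gap (portable modern C++ for readability, my own sketch - real PS3 code would push this through SPU jobs and DMA rather than OS threads):

    #include <cstddef>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Spread a reduction over N workers instead of leaving all but one core idle.
    double parallel_sum(const std::vector<double>& data, unsigned workers) {
        std::vector<double> partial(workers, 0.0);
        std::vector<std::thread> pool;
        const std::size_t chunk = data.size() / workers;
        for (unsigned w = 0; w < workers; ++w) {
            const std::size_t begin = w * chunk;
            const std::size_t end = (w + 1 == workers) ? data.size() : begin + chunk;
            // Each worker reduces its own slice into its own slot: no shared
            // writes, no locks - roughly how SPE work queues get carved up too.
            pool.emplace_back([&data, &partial, begin, end, w] {
                partial[w] = std::accumulate(data.begin() + begin, data.begin() + end, 0.0);
            });
        }
        for (auto& t : pool) t.join();
        return std::accumulate(partial.begin(), partial.end(), 0.0);
    }

The single-threaded benchmark everyone keeps quoting is just parallel_sum(data, 1) - it says nothing about what the other cores could have been doing.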

I wonder why some people continue to insist that there is no problem/task that can't be "optimised" for Cell's rather single-minded design philosophy, to the point where it outperforms all other architectures in every single scenario.

I wonder that too because I've never believed such a strangely obtuse idea myself..

Tell me, if FLOPS is all you need for high performance and it's simply the code that's written incorrectly, then why haven't CPUs focused on raw FLOPS performance since day one, with the code simply growing up around that? If every problem can be solved with lots of FLOPS, and you can pack a lot more of them into any given die space than a more balanced architecture, then why go balanced at all?

Who said anything about FLOPS being all you need?

Come to think of it, why aren't we running everything on GPUs if everything aside from FLOPS performance in a modern CPU is irrelevant?
Such a comment leads me to believe you're of the mindset that, just because of Cell's highly touted FLOPS performance, it has no practical use other than uber-fast number crunching.. The truth is Cell IS great at vector math, but that's not all it can do.. & the truth is that the more of your code you can efficiently vectorise and push through the chip, the faster it [your code] will go..
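To sketch what "efficiently vectorise" can mean in practice (a generic data-layout example of my own, nothing Cell-specific - SPU code would do this with SIMD intrinsics over 16-byte vectors):

    #include <cstddef>

    // Array-of-structures: x, y, z interleaved in memory, which fights SIMD
    // loads that want four of the same component at once.
    struct Vec3 { float x, y, z; };

    void scale_aos(Vec3* v, std::size_t n, float s) {
        for (std::size_t i = 0; i < n; ++i) {
            v[i].x *= s; v[i].y *= s; v[i].z *= s;
        }
    }

    // Structure-of-arrays: each component is contiguous, so the compiler (or
    // hand-written SIMD) can process four floats per instruction.
    struct Vec3SoA { float* x; float* y; float* z; };

    void scale_soa(Vec3SoA v, std::size_t n, float s) {
        for (std::size_t i = 0; i < n; ++i) v.x[i] *= s;
        for (std::size_t i = 0; i < n; ++i) v.y[i] *= s;
        for (std::size_t i = 0; i < n; ++i) v.z[i] *= s;
    }

Same work, but the second layout is the one a vector unit can actually chew through at full rate.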



Gaming performance is a big focus for desktop CPUs. A win in this area affects perception of the CPU as a whole. Just look at the A64: aside from in games, it wasn't really much faster than the P4 until the last one or two iterations, but it was widely accepted as the much faster chip.

How about looking at the PC market as a whole instead..? How about comparing the market share of a company like Intel with that of a company like AMD..?

Gaming PCs are nothing but a fraction of the total number of PCs sold each year, and I'm pretty darn sure a company the size of Intel couldn't survive off that demographic alone..
 
I think there is another angle to consider in all of this. 360 managed to do so well in this round partly because they were first to market, and partly because it was so easy to get good results dev'ing on the 360. The PS3 has a much higher learning curve and as such, it takes significantly longer to get impressive games running on it.

The above situation could totally reverse itself next gen, though. I think it's fair to assume that PS4 will use the next flavor of Cell, probably a few of them bolted together. By the time PS4 is released in a few years, the Cell learning curve will be old news. We'll all have tons of code and libraries that run on SPUs, and we'll all be familiar with the typical techniques needed to make Cell shine. All the bitching about Cell being tricky to use (some of my own included) will for the most part be obsolete. It's *possible* that the CPU migration from PS3 to PS4 could be relatively painless. Worded another way, I'm guessing it will be easier to get PS4 games out of the gate compared to what it's taking on PS3.

Microsoft, though, is facing two risks. First, if they decide to go with some new, esoteric uber-CPU configuration, they risk having the same problem PS3 is having now, in that it's taking forever to get the impressive games out. Sony was able to risk it since they had the strong PlayStation brand as backup, plus PS3 serves a dual purpose as both a game machine and an HD movie revenue source. Can Microsoft really risk doing this? Second, if they go with a more traditional, easy-to-use CPU setup, they risk being totally outgunned when they try to compete with a mature codebase that's tuned to run on 20+ SPUs.

I personally find it hard to believe that AMD or Intel would have a multi-core CPU setup ready for Microsoft to fab themselves at a reasonable price when the time comes. Although I suppose anything can happen, especially if AMD or Intel feel threatened in a few years by being totally shut out of the console market, which keeps getting bigger with each generation. Or maybe Microsoft has some other trick up their sleeve? Still, aside from pride, I'm not sure why simply adopting Cell2 would be a bad idea. As I understand it, they would not be paying Sony any royalties to use it, correct? If that's the case, then from a purely strategic point of view, they must be at least considering Cell.
 
Next-generation consoles won't enter the teraflop space because of the CPUs. If you look right now, GPUs are miles ahead of CPUs in FLOPS power. I kind of doubt we will see 1-teraflop CPUs in the next consoles. Instead of working on raw computing power, I believe we will instead see CPUs that are more efficient.
 
There is a range of implementations for out of order execution. One of the parts of the equation left out is the width of each processor. Both Cell and Xenon have cores that top off at an issue width of two.

The x86 chips at the time had a width of 3, and the IBM OoO core had a width of 5.

Internally, the number of execution units in a single A64 core is close to the total number of units in the three-core Xenon chip. In any given cycle, a K8 core can internally issue 9 instructions peak, several times more than any single Xenon core.

An OoO core that did not go to such extreme lengths to extract ILP would significantly cut back on the amount of related hardware.

Less easily measured but still significant is how aggressively speculative a design is. x86 chips as we know them are very aggressive, which also results in a large amount of work thrown away.

OoO doesn't automatically make a chip as bulky as an AX2.

That's true to an extent. You can definitely make 3 OOOE cores fit on one die if you must. Whether the chip can reach 3.2 GHz, have decent FP power and decent integer power, have reasonable power consumption, and can be fabbed by a non-dedicated fab is another question. The latter was the killer, and IBM was clearly in no position to make such a chip. At some point, probably early in the development cycle, it became clear that 3 in-order cores were much more practical than trying to force 3 OoOE cores to the same level of performance.
 
Cell in XB3000 would do wonders for cross-platform development and make the idea of competing consoles pretty much obsolete.
Next-generation consoles won't enter the teraflop space because of the CPUs. If you look right now, GPUs are miles ahead of CPUs in FLOPS power. I kind of doubt we will see 1-teraflop CPUs in the next consoles.
Easy peasy! PS3 is at 200+ GFLOPS now. Put four Cells in PS4 clocked at 4 GHz, and you've broken 1 TFLOPS peak. You're looking at more than a 4x power increase between console generations (unless you're Nintendo :p) - as much as 10x.
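Back-of-envelope, using the commonly quoted ~204.8 GFLOPS peak for 8 SPEs at 3.2 GHz: 204.8 × (4.0 / 3.2) = 256 GFLOPS per chip at 4 GHz, times four chips = 1024 GFLOPS. Peak figures only, of course.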
 