Will Microsoft trump CELL by using Proximity Communication in XB720 CPU?

Raytheon has already made a monster CPU for DARPA that smokes CELL.
Umm, no it doesn't. It says:

Current estimates by Raytheon put the MONARCH chip at somewhere between three and six GigaFLOPS per watt, depending on the application, with an average of five GigaFLOPS. The company claims the Cell processor in the Sony Playstation 3 runs at an estimated 2.2 GigaFLOPS per watt and the Intel Xeon runs at around 0.5 GigaFLOPS per watt, making MONARCH twice as power efficient as Cell and 10 times more efficient than Xeon.

Thus the ones that will be doing the smoking are the least efficient ones, i.e. the headline should have been: Xeon smokes all others!

It's not more powerful; it says more power-efficient.
And since it's a company that will overvalue itself and underrepresent the others, the only thing that can be concluded is:
Raytheon <= 5 GigaFLOPS per watt
Cell >= 2.2 GigaFLOPS per watt

Xeon has run away, I'm afraid.

About the only thing you can conclude is that Xeon sucks WRT power efficiency (also speed, but that's another story).
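
A quick sanity check of the quoted ratios, using nothing but the figures from the article above:

#include <stdio.h>

int main(void) {
    /* GFLOPS per watt, as quoted in the article */
    const double monarch = 5.0;   /* Raytheon's claimed average */
    const double cell    = 2.2;   /* their estimate for Cell    */
    const double xeon    = 0.5;   /* their estimate for Xeon    */

    printf("MONARCH vs Cell: %.1fx\n", monarch / cell);   /* ~2.3x */
    printf("MONARCH vs Xeon: %.1fx\n", monarch / xeon);   /* 10.0x */
    return 0;
}

So "twice as efficient as Cell" is really ~2.3x, and that's taking their own best-case number at face value.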

I wonder, is my fridge more power-efficient than Cell?
FRIDGE more powerful than Cell!
 
GFLOPS are irrelevant anyway. With the introduction of intervals, and their absolute accuracy, GFLOPS will suddenly stop working and you won't be able to calculate anything on normal processors.
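
For context on the cost claim: interval arithmetic carries a [lo, hi] pair per value, so every interval operation expands into several ordinary float ops. A minimal sketch (it ignores directed rounding, which a real implementation needs to be truly rigorous):

#include <stdio.h>

/* A closed interval [lo, hi]. A real implementation would also
   switch the FPU rounding mode (down for lo, up for hi). */
typedef struct { double lo, hi; } interval;

interval iadd(interval a, interval b) {
    return (interval){ a.lo + b.lo, a.hi + b.hi };
}

interval imul(interval a, interval b) {
    /* One interval multiply costs four float multiplies
       plus the min/max scan below. */
    double p[4] = { a.lo * b.lo, a.lo * b.hi,
                    a.hi * b.lo, a.hi * b.hi };
    interval r = { p[0], p[0] };
    for (int i = 1; i < 4; i++) {
        if (p[i] < r.lo) r.lo = p[i];
        if (p[i] > r.hi) r.hi = p[i];
    }
    return r;
}

int main(void) {
    interval x = { 1.0, 2.0 }, y = { -3.0, 0.5 };
    interval s = iadd(x, y), m = imul(x, y);
    printf("x+y = [%g, %g]\n", s.lo, s.hi);   /* [-2, 2.5] */
    printf("x*y = [%g, %g]\n", m.lo, m.hi);   /* [-6, 1]   */
    return 0;
}

The point: each interval op costs several hardware FLOPS, so raw GFLOPS figures say even less about useful throughput.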
 
This is not to suggest Carmack is completely wrong, since he does have a point. Unfortunately for MS, future versions of the Cell processor will get you into the TeraFLOP range whereas OOOE CPUs won't come even remotely close. I can't imagine what kind of PR nightmare MS would have trying to spin the fact that Xbox 3 has an order of magnitude less CPU power than the PS4.

And in single threaded code? Or integer heavy branchy code? How much does Cell wipe the floor with your average AX2 then?

Not everything is about higher FLOPS. In fact, as PC game benchmarks show, FLOPS don't make all that much difference (in PC games). If they did, Intel and AMD would have been far more powerful in this area by now in an attempt to win the games-performance market.
 
You've also gone from a triple core to a dual core, and about a 33% decrease in clockspeed. It's unclear if performance went up at all.

Two bigger, more complex cores with higher IPC, so that doesn't really say anything. Single-threaded it's a no-brainer, and multithreaded, especially accounting for the lack of 1:1 scalability from cores to performance, suggests that the X2 would come out ahead.

Of course the devs are the ones in a position to either confirm or deny that. Perhaps someone will chime in....
 
They will go there, of course, but that doesn't really tell us about what new fundamental technologies they might employ in its design, which I think was almighty's question.

I remember someone posted a PDF or an HTML document about the Cell optical network (not the Cell architecture patent). I think it was written by a Sony staffer. Does anyone know where I can find it again? I tried searching for it to no avail.
 
There is a pdf with some info here.
http://www.it.lth.se/courses/DARK/novelarchitectures/monarch-overview.pdf


The Monarch chip itself:
800 MHz clock
12 GFLOPS
32 MB DRAM (on chip)
320 KB SRAM
512 ops/clock
36 Watts

The 6-processor version:
75 GFLOPS
192 MB DRAM (on chip!)


Obviously it is not sane to compare this beast directly to Cell.

Raytheon predictably slanted their PR to appear in the best possible light. If you want the same GFLOPS/Watt rating out of Cell, simply lower drive voltages and clock speeds a bit. And that's with the 90nm version.
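
Rough illustration of why voltage/clock derating works: dynamic power goes roughly as V^2 * f while peak FLOPS go as f, so GFLOPS/W improves as ~1/V^2. A toy calculation (the 15% derating is a number I made up for illustration):

#include <stdio.h>

int main(void) {
    /* Baseline: Cell at the quoted 2.2 GFLOPS/W (90nm figure). */
    double baseline = 2.2;

    /* Hypothetical derating: drop voltage and clock by 15% each.
       Dynamic power ~ C * V^2 * f; peak FLOPS ~ f.              */
    double v = 0.85, f = 0.85;
    double power_scale = v * v * f;       /* ~0.61 */
    double eff_scale   = f / power_scale; /* ~1.38 */

    printf("derated Cell: %.2f GFLOPS/W (%.2fx better)\n",
           baseline * eff_scale, eff_scale);   /* ~3.05 */
    return 0;
}

That alone pushes the quoted 2.2 into MONARCH's claimed 3-6 range.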

Re: PS4 CPU
Depends a bit on what process node you assume will be used: at 45nm you have roughly 4 times the transistor budget, at 32nm roughly 8 (relative to the PS3's 90nm). I'm not sure Sony would be willing to aim quite as high (relatively) in terms of hardware for the next generation; then again, quite a bit of the cost this time around was presumably in the BR drive. A conservative guess would be that they would allow themselves 4 times the transistor budget for the PS4 CPU, as this would assure that they could hit a targeted launch window at decent cost even if lithography hadn't progressed at the hoped-for pace. They are likely to go with an evolved Broadband Engine, with the benefit of half a decade of programmer experience and tool development. The underlying structure that allows scalable performance is already in place.
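
Those budget multipliers are just ideal area scaling, density ~ 1/(feature size)^2:

#include <stdio.h>

int main(void) {
    /* Idealized density scaling from Cell's 90nm node:
       transistor budget ~ (90 / node)^2 at equal die area.
       Real shrinks don't scale quite this cleanly. */
    const double base = 90.0;
    const double nodes[] = { 65.0, 45.0, 32.0 };

    for (int i = 0; i < 3; i++) {
        double s = (base / nodes[i]) * (base / nodes[i]);
        printf("%2.0fnm: ~%.1fx the transistors\n", nodes[i], s);
    }
    return 0;   /* 65nm: ~1.9x, 45nm: ~4.0x, 32nm: ~7.9x */
}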
 
Not this year.
Personalities like you really ought to be banned. Moderators have stated time and again that PR is not to be dragged onto the forums. But when was the last time you posted anything without wearing your PR bunny hat?
 
The Monarch chip itself:
800 MHz clock
12 GFLOPS
32 MB DRAM (on chip)
320 KB SRAM
512 ops/clock
36 Watts

The 6-processor version:
75 GFLOPS
192 MB DRAM (on chip!)
How big is it? From that we can derive a minimum price. Presumably the thing costs a zillion dollars (32 MB eDRAM + 320 KB SRAM), or has severely gimped logic, as the transistor budget is blown on storage!
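
Back-of-the-envelope on where the budget goes (assuming the usual 1T1C eDRAM and 6T SRAM cells; the cell types are my assumption, not anything Raytheon has published):

#include <stdio.h>

int main(void) {
    /* Assumed transistors per bit: 1 for 1T1C eDRAM, 6 for SRAM. */
    const double edram_bits = 32.0 * 1024 * 1024 * 8;   /* 32 MB  */
    const double sram_bits  = 320.0 * 1024 * 8;         /* 320 KB */

    printf("eDRAM: ~%.0fM transistors\n", edram_bits * 1.0 / 1e6);
    printf("SRAM : ~%.0fM transistors\n", sram_bits  * 6.0 / 1e6);
    return 0;   /* ~268M and ~16M */
}

That eDRAM alone is on the order of an entire contemporary CPU's transistor count, which rather supports the "budget blown on storage" suspicion.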
 
And in single threaded code? Or integer heavy branchy code? How much does Cell wipe the floor with your average AX2 then?

You'd have to be stupid to believe single-threaded code could reach the performance of multithreaded code on multi-core CPUs. And considering it's blatantly obvious that execution parallelism is where microprocessor tech is headed and will continue to go, such an argument is pointless and irrelevant.

Also, there is no requirement for developers to utilise integer-heavy, branchy code for games development. Granted, it's much more efficient for things like AI when working on PC, for example (OOOE general-purpose CPUs are good at dealing with it), but for the non-OOOE CPUs present in next-gen consoles, a change in one's programming practice in certain situations could drastically reduce any performance loss incurred by the CPU's inadequacies in this area. One example of such a change is sketched below.
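
A generic sketch of that kind of change (not console-specific code, and it assumes two's complement): replacing an unpredictable branch with a branchless select, which simple in-order cores handle far better than mispredicted branches.

#include <stdio.h>

/* Branchy version: an unpredictable branch stalls a simple
   in-order pipeline on every misprediction. */
int clamp_branchy(int x, int lo, int hi) {
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}

/* Branchless version: comparisons become all-ones/all-zeros
   masks, so there is nothing to mispredict. */
int clamp_branchless(int x, int lo, int hi) {
    int lt = -(x < lo);   /* all ones if x < lo, else 0 */
    int gt = -(x > hi);   /* all ones if x > hi, else 0 */
    return (lo & lt) | (hi & gt) | (x & ~lt & ~gt);
}

int main(void) {
    printf("%d %d %d\n",
           clamp_branchless(-5, 0, 10),    /* 0  */
           clamp_branchless( 7, 0, 10),    /* 7  */
           clamp_branchless(99, 0, 10));   /* 10 */
    return 0;
}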

Why are there still so many people who believe that a comparison between CPU architectures is valid without programming optimally for their strengths?

Not everything is about higher FLOPS. In fact, as PC game benchmarks show, FLOPS don't make all that much difference (in PC games). If they did, Intel and AMD would have been far more powerful in this area by now in an attempt to win the games-performance market.
Intel & AMD aren't focused on games performance in the PC space, so this argument is a little odd.
They are much more concerned with all-round performance across a vast multitude of operations than with gaming only.

What percentage of PCs sold each year do you believe are bought by gamers?

Answer: Not nearly half as many as you seem to think.
 
Raytheon has already made a monster CPU for DARPA that smokes CELL.
I don't find it very surprising that a military/defense company makes chips that are better than the chips in consumer electronics; after all, they sell them to customers who don't whine about an "overpriced" console ;P
 
How would a CPU like a quad-core A64 with on-die "stream computing" elements derived from this GPGPU lark fare in a next-gen console?

That would seem to tick a fair few boxes ("high flops", ease of porting to/from PC, high performance for legacy/unoptimised code, available from MS's current chums, developer friendly whatnot etc).

It's questionable how much CPU performance actually helps you sell consoles anyway (e.g. MD > SNES, Saturn [supposedly] > PS1, Xbox > PS2, PS3 > 360). Might as well make it easy to make games for and focus on graphics hardware ...
 
How would a CPU like a quad-core A64 with on-die "stream computing" elements derived from this GPGPU lark fare in a next-gen console?

That would seem to tick a fair few boxes ("high flops", ease of porting to/from PC, high performance for legacy/unoptimised code, available from MS's current chums, developer friendly whatnot etc).

It's questionable how much CPU performance actually helps you sell consoles anyway (e.g. MD > SNES, Saturn [supposedly] > PS1, Xbox > PS2, PS3 > 360). Might as well make it easy to make games for and focus on graphics hardware ...

It's also questionable how much ease of development helps you sell them either.

The fact remains that none of the factors you've described holds any bearing on the financial success of a console.

The factors that hold the most sway are software and price.
 
How would a CPU like a quad-core A64 with on-die "stream computing" elements derived from this GPGPU lark fare in a next-gen console?

That would seem to tick a fair few boxes ("high flops", ease of porting to/from PC, high performance for legacy/unoptimised code, available from MS's current chums, developer friendly whatnot etc).

It's questionable how much CPU performance actually helps you sell consoles anyway (e.g. MD > SNES, Saturn [supposedly] > PS1, Xbox > PS2, PS3 > 360). Might as well make it easy to make games for and focus on graphics hardware ...

I can't envision how a GPU+CPU would look in the next decade, but right now:
"Working on the Cell was soo much easier!"

Not exactly a sterling endorsement. As of right now, GPGPU should never even remotely be considered a CPU replacement.
 
the "fight" between Xbox720 CPU and PS4 CPU should be titantic, cataclysmic. we will finnally enter the age of Teraflop computing in consoles. I hope both Microsoft and STI make programmers lives easier at the same time.
 
And the Xbox 360 would cost $800. ;) It's manufacturing necessity, not programming wishes, that rules future CPU architectures.

Spreading more made up facts?

Dean Takahashi’s Xbox 360 Uncloaked said:
An OOOE processor was initially promised by IBM and planned for but later scrapped when IBM could not execute
http://www.xbox365.com/news.cgi?id=GGNdPdrGLu06140956

OOOE was indeed a major design goal for MS. IBM failed at the execution. I'm sure they never intended to sell it at $800...
 
It's also questionable how much ease of development helps you sell them either.

The fact remains that none of the factors you've described holds any bearing on the financial success of a console.

The factors that hold the most sway are software and price.

Yeah, and ease of development helps you with the crucial software bit, especially in the early days (if you've been following the PS3 <> 360 stuff, or followed any of the PS1 <> Saturn stuff). Getting a strong library out on time, and being the leading platform with the more impressive (stable) versions of multiplatform games, weighs in at the dawn of a generation.

Plus, the more of your code you can throw at multiple machines (such as Xbox 1/360 and PC), the more appealing the combined weight of the platforms becomes. I'm sure this isn't a point lost on the likes of Capcom, who've taken their PS3-exclusive DMC4 to "MS platforms".

The point is, if you can do it, why not? Clearly it does have *some* bearing, even if it's not the most important thing to consider. Chasing CPU power buys you potentially very little for your money, and it's not something that Nintendo, with their often hugely successful systems, have ever really bothered doing.
 
Take a look at what's going to ship in PC CPUs in the next 2 years. That's the kind of technology MS will be using in the X3 (if the X3 ever exists).

Personally, I think MS will try to get Compaq, Dell, etc. to make the hardware and MS will just specify the minimum hardware requirements. The X3 will be a PC with certain guaranteed features and performance.

I disagree. Whatever goes into Xbox 3, it'll be somewhat more custom than what's in Xbox 360. We won't be seeing a preview of Xbox 3 tech in PCs within 2 years, other than at a very high level, like shaders and newer versions of DDR memory.
 
I can't envision how a GPU+CPU would look in the next decade, but right now:
"Working on the Cell was soo much easier!"

Not exactly a sterling endorsement. As of right now, GPGPU should never even remotely be considered a CPU replacement.

lol!

I was simply thinking that with a few years' development, and probably some DX-style APIs behind it, mixing "traditional" PC CPU cores with some of that ... kind of ... stuff ... might make a workable compromise.
 
Spreading more made up facts?

You'd think with a ;) people would recognize sarcasm. I guess I was wrong.

http://www.xbox365.com/news.cgi?id=GGNdPdrGLu06140956

OOOE was indeed a major design goal for MS. IBM failed at the execution. I'm sure they never intended to sell it at $800...

Likely for a good reason. OOOE is not something you just tack on at the last moment. It had to have been clear from the beginning that OOOE was not possible, not to mention IBM's somewhat checkered history in this regard. For instance, the PPC 970 was something of a disaster at 90nm, only capable of reaching 2.5GHz with liquid cooling. I can't imagine them getting three of them to work at any reasonable speed within the heat envelope of a console.
 