Why would a Blu-ray drive cost $66 when you can get standalone Blu-ray players for $99?
Why not? Retail prices have little to do with manufacturing costs, especially when you consider said $99 players are often loss-leaders.
http://www.digitimes.com/news/a20100325PD203.html
Most of Sony's billions did not go into the Cell, since that was a joint effort.
Found this at emsnow.com
Carl B said: Let's just in general save ourselves the hassle of analyzing the cost predictions of iSuppli and others.

That's fine with me; my point was more to continue the discussion Alstrong started about what could be built now within a reasonable power envelope.
I think we have more or less blacklisted said docs around here anyway.
Aaron, please take it down a notch - I know it's only type, but there's no need to 'yell,' so to speak. Some people do think that the Cell architecture has redeeming qualities and has made a net positive contribution to computing.
I think it speaks to the benefits of the architecture that, on a FLOPS-per-watt basis, clusters built around it are still the 'greenest' in the Top500. And I say "still" as an achievement in its own right, since, as we know, the architecture is one that has essentially been frozen in time.
To say nothing again of the fact that the architecture has remained static for like four/five years now. I feel you are attacking the architecture via the context of its industry position, but that does the actual architecting and design direction a disservice IMO. There are plenty of cases where the Cell has found itself as a superior compute alternative to x86.
I'm not saying that this is in the world of consoles per se, or that the project was a 'wise' undertaking, but I mean how much hatred for this architecture is reasonable to have?
"The die area would have been better spent if they removed all the SPUs and replaced them with a comparable area of PPUs."

While I partially agree with you on CELL being, for many reasons, a dead end, you really don't want to replace the SPUs with PPU cores. They are easier to program for, but they are insanely slow. I am sure there are in-order cores around that are much better than a PPU. An OOOE core or two would make me happy, though.
I don't think replacing SPUs with PPUs is necessarily better (it doesn't solve the memory wall or the bandwidth issues). And since Cell has been deployed in Roadrunner, I don't think one can say it doesn't scale, either. You need to specify the applications.
Cell has contributed effectively nothing. It's largely a rehash of already-known dead ends.
I'm not attacking Cell via its industry position. It has been a bad, dead-end design from day one. It isn't scalable, it takes significant programming effort to even use, and it has been completely rejected by the industry. I was saying this on day one, along with many other people with computer-architecture knowledge who looked at it. It takes a programming model that wasn't accepted in the 1970s/80s and foists it upon the world. The die area would have been better spent if they removed all the SPUs and replaced them with a comparable area of PPUs.
That model caused them to be late, and caused the software to take a lot of time and resources to develop, the majority of which is also dead-end work because there are better ways to do things that aren't so bound to such a broken architecture. Cell is part of the defining reason why Sony went from first to last in consoles.
So far as IBM itself is concerned, I think Cell has been a huge saving grace: it got them into the many-core mindset much earlier than they might have been otherwise, made them active in OpenCL as a result (which supports Cell, it should be mentioned), and they are supposedly carrying a good bit of their experience there into their HPC architectures going forward. If they had gotten what they had originally pushed for instead of the SPE model, it would have been just another non-event, without even this argument to be had as to its merits.
I disagree that it isn't scalable - on the contrary, SPE-offloaded tasks scale almost linearly. As for rejection, I suppose it is all relative. It is not omnipresent, no, and you and others were saying as much back in the day. But it also made its way into the first petaflop computer on Earth, and by the same token I doubt you would have believed that would come to pass had someone posited it back then, along with the decent traction for off-the-shelf clustering in academia.
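To put a finer point on the scaling claim: the offload model amounts to "give each SPE its own disjoint slice of the data," so adding SPEs divides the work with essentially no shared state to contend on. A minimal sketch of the pattern in plain C, with pthreads standing in for SPE contexts (the worker count, slice_t, and scale_slice are made-up names for illustration):

    /* Each worker owns a disjoint slice, so N workers give ~N-fold
     * speedup on this kind of regular, data-parallel job. */
    #include <pthread.h>
    #include <stddef.h>

    #define N_WORKERS 8                 /* e.g. one per SPE */

    typedef struct { float *data; size_t begin, end; } slice_t;

    static void *scale_slice(void *arg)        /* the "SPE program" */
    {
        slice_t *s = arg;
        for (size_t i = s->begin; i < s->end; ++i)
            s->data[i] *= 2.0f;                /* stand-in for real work */
        return NULL;
    }

    void scale_all(float *data, size_t n)
    {
        pthread_t tid[N_WORKERS];
        slice_t   sl[N_WORKERS];
        size_t per = n / N_WORKERS;

        for (int w = 0; w < N_WORKERS; ++w) {
            sl[w] = (slice_t){ data, w * per,
                               (w == N_WORKERS - 1) ? n : (w + 1) * per };
            pthread_create(&tid[w], NULL, scale_slice, &sl[w]);
        }
        for (int w = 0; w < N_WORKERS; ++w)
            pthread_join(tid[w], NULL);        /* wait for all slices */
    }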
I think Blu-ray had more to do with the above than Cell, personally. Cell was late(ish), large, and hot - but Blu-ray was way behind schedule.
Roadrunner, the supercomputer with 12,960 IBM PowerXCell 8i and 6,480 AMD Opteron dual-core processors?
What is the reason each Opteron has a Cell attached to it - is it an Opteron cluster with Cell accelerators, or some such? Just wondering.
IBM has already been in this space with their Blue Gene products for quite a while.
Making it into supers really isn't that big of a deal: be cheap, be somewhat efficient, be cheap, have an interface to IB, be really, really cheap. Did I mention be cheap?
I know which one is more credible.
That's less than 5 years away, and with motion controls just coming out, this generation will be extended.

Some points you missed:
1. 2015.
2. GPUs of 2010 are in the 2 TFLOPS range; do we expect a 25x FLOP increase (nearly 1.9x every single year) in less than 5 years?
3. GPU architecture is very far away from running game code.
4. CPUs are still lagging behind, in the 200 GFLOPS range.
5. Yes, OOOe is irrelevant for significant amounts of FLOPs (especially graphics, which, combining programmable FLOPs with the non-programmable ones found in TMUs and such, is already in the tens of TFLOPs), but ...
6. ... the game loop, as non-FLOP-intensive as it may be, can hold you back a lot.
7. And parallelizing it isn't so easy/efficient.
8. Creating a parallel game loop on GPU-like "cores" would be a nightmare. Or even on SPEs. (See the sketch below.)
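To make point 8 concrete, here is the dependency chain a typical frame fights with; the stage functions are hypothetical placeholders, but the serial structure is the point:

    /* Sketch: each stage consumes the previous stage's output within
     * the same frame, so the stage-to-stage chain is inherently serial.
     * You can fork data-parallel jobs inside a stage, or pipeline
     * across frames, but the chain itself wants one fast core. */
    struct world;

    void read_input(struct world *w);
    void step_ai(struct world *w);                 /* needs input applied   */
    void step_physics(struct world *w);            /* needs AI intents      */
    void build_render_list(const struct world *w); /* needs final positions */

    void game_loop(struct world *w)
    {
        for (;;) {
            read_input(w);
            step_ai(w);           /* depends on read_input()    */
            step_physics(w);      /* depends on step_ai()       */
            build_render_list(w); /* depends on step_physics()  */
        }
    }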
"The market is growing decidedly in the mobile and 'arcade'-like sectors. Likewise, a slew of software is done by smaller studios. Not to mention that development expectations are increasing while turnaround time has stabilized. There is a premium on turning games around on time and on budget, not on getting the most out of esoteric hardware."

The current console CPUs are fast enough to run whichever downloadable game is thrown at them. When making their console, MS listened to Cliff B, not Jonathan Blow (Braid), and it'll remain that way. The big money makers are always the big-name titles.
"A 2nd or 3rd generation Llano-style CPU (a handful of very fast OOOe CPU cores--meeting the needs of the serial game loop, 'deadline non-efficient code,' indie devs, etc.) with the vector unit built on-die (it could use extensions, on the same die like old-style on-die FPUs, giving you high peak FLOP performance as well as a setup and/or post-processing monster for the GPU, plus physics and such if the libraries ever catch up), and then a normal GPU. Down the road, 5-7 years after this style of system, we could see single-chip solutions with fast OOOe core(s) on a sea of GPU-styled cores."

I don't know about x86, since Intel makes it so expensive, but that's the idea here: have 2-4 small GP cores in a sea of GPGPU cores.
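The shape of that split, as a sketch (the submit_wide_job/wait_all job API below is hypothetical, purely to illustrate): the branchy serial gameplay logic stays on the fat OOOe cores, while the regular FLOP-dense loops fan out over the simple cores.

    #include <stddef.h>

    /* Hypothetical job API: fan a kernel out over the "sea" of
     * simple cores, then wait for completion. */
    typedef void (*kernel_fn)(float *data, size_t begin, size_t end);
    void submit_wide_job(kernel_fn k, float *data, size_t n);
    void wait_all(void);

    /* Trivially data-parallel FLOP work: perfect for simple cores. */
    static void integrate_particles(float *pos, size_t b, size_t e)
    {
        for (size_t i = b; i < e; ++i)
            pos[i] += 0.016f;
    }

    void frame(float *pos, size_t n_particles)
    {
        /* ...serial, branchy gameplay logic runs here on the fast cores... */
        submit_wide_job(integrate_particles, pos, n_particles);
        wait_all();
        /* ...more serial work, render submission, etc... */
    }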
"Anyhow, after reading all the PS3 owners shrug over losing Linux because it was piss slow at basic tasks (like web browsing), I am not sure how arguing for even simpler, more basic cores than the PPE in the Cell processor squares with the claim that serial performance isn't important ... my 1.4GHz Core Solo netbook runs FF faster."

That's because of a lack of optimization and only 256 MB of memory. Your netbook probably has at least 1GB.
"While ND and DICE may not need a faster main processor, I think we have heard many developers note that speeding up their core loop and having some 'forgiveness' for some bad code when crunch hits could really make a difference in a lot of games. Just because the industry is moving in one direction doesn't mean the move should be made overnight--probably the biggest problem with CELL. Sony tried to use their market share to force the industry in a particular direction. But they didn't anticipate the strength of the competition or the importance of tool chains, they were late (and half-baked), and the industry didn't buy into their vision. Right direction, but wrong road, it seems."

No CPU in 2003-4, when the next-gen decisions were made, would have been better than the Cell for a console. I agree with the rest about the tool chain etc.; Sony was really unprepared for this.
That said, I do think we may see some compromises where there will remain a small number of very efficient, fast, serially-oriented CPUs (like x86 OOOe processors) alongside a consolidation of the "FLOP" resources into many very simple cores a la GPUs. Performance per mm^2 is very high for GPU cores, so if simple peak FLOPs are what you are going for, that is the direction you would want to go.
"But that was the problem with Cell. It was a totally different programming model with significant restrictions. If they had made it anything but a control/data-store memory model, it would have been much easier to adapt tools and programs to it, but instead you effectively have to jump through hoops to get anything outside of a matmul to run on it."

The SPE ISA is general purpose. It's in no way restricted to "matmul" duty. The memory model is the difference here, but it's also the trait that makes SPEs small and scalable enough that you can pack a whole bunch of them on one chip and not lose manufacturability or clock headroom.
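For anyone who hasn't written SPU code, here is roughly what the "hoops" look like on the SPU side, using the Cell SDK's spu_mfcio.h DMA intrinsics (written from memory, so treat the exact signatures as an assumption): nothing is computable until it has been explicitly DMAed into the 256KB local store, and results have to be DMAed back out by hand.

    #include <spu_mfcio.h>

    #define CHUNK 4096   /* 16KB of floats: the largest single DMA transfer */
    static float ls_buf[CHUNK] __attribute__((aligned(128)));

    void process_chunk(unsigned long long ea)  /* ea = main-memory address */
    {
        const unsigned tag = 0;

        /* 1. Pull the data from main memory into local store. */
        mfc_get(ls_buf, ea, sizeof(ls_buf), tag, 0, 0);
        mfc_write_tag_mask(1 << tag);
        mfc_read_tag_status_all();             /* block until the DMA lands */

        /* 2. Only now can the SPU compute on it. */
        for (int i = 0; i < CHUNK; ++i)
            ls_buf[i] *= 2.0f;

        /* 3. Push the result back out to main memory. */
        mfc_put(ls_buf, ea, sizeof(ls_buf), tag, 0, 0);
        mfc_write_tag_mask(1 << tag);
        mfc_read_tag_status_all();
    }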