Cell can do a lot, but it's a choice between using Cell's SPEs for graphics rendering or having them do physics. Personally, I would choose physics.
Why? Easier?
Physics makes for a more believable interactive world. Graphics will only take you so far, and physical interactivity and animation can bridge a lot of the believability gap.
Oh. I thought it was more powerful.
Well, you get better effects from it.
The Cell isn't very powerful graphically compared to a GPU. But it's excellent at physics.
Oh. I thought it was more powerful.
Sony originally planned to use two Cells in the PS3, one acting as a CPU and one as a GPU. Those plans quickly got shredded after every developer asked about it complained about how ridiculously hard it would be to code for, and that performance-wise it would not be worth it at all (graphically).
The Cell isn't very powerful graphically compared to a GPU. But it's excellent at physics.
Depends on what you want Cell/the GPU to do. In Warhawk, a Cell SPU calculates volumetric clouds; you can't really do stuff like that on a GPU. They're not programmable enough for that, or at least not enough to do it efficiently, or we'd have seen it in games by now. So your claim is sort of true sometimes, and not at other times.
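To make the volumetric-cloud point concrete, here's a minimal sketch (my own illustration, not Warhawk's actual code) of the kind of per-ray loop involved; noise3 is a hypothetical stand-in for a real 3D noise source. It's the data-dependent iteration and early exit that a general-purpose core like an SPU handles easily and that the shader models of the time did not.

```c
/* A minimal sketch, not Warhawk's actual code: ray-marching a procedural
 * cloud density field. noise3 is a hypothetical stand-in for a real 3D
 * noise source such as Perlin or simplex noise. */
#include <math.h>

static float noise3(float x, float y, float z)
{
    /* Cheap hash-based pseudo-noise in [0,1), for illustration only. */
    float n = sinf(x * 12.9898f + y * 78.233f + z * 37.719f) * 43758.5453f;
    return n - floorf(n);
}

/* March along a ray, accumulating cloud opacity until it saturates. */
float cloud_opacity(const float origin[3], const float dir[3],
                    int max_steps, float step_len)
{
    float opacity = 0.0f;
    for (int i = 0; i < max_steps; ++i) {
        float t = step_len * (float)i;
        float d = noise3(origin[0] + dir[0] * t,
                         origin[1] + dir[1] * t,
                         origin[2] + dir[2] * t);
        if (d > 0.5f)                /* sample lies inside the cloud */
            opacity += (d - 0.5f) * step_len;
        if (opacity >= 1.0f)         /* data-dependent early exit */
            return 1.0f;
    }
    return opacity;
}
```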
Total urban myth.
Kutaragi reveals that there was once the idea of using two Cell chips in the PS3, with one used as the CPU and the other used for graphics. However, this idea was killed when it was realized that Cell isn't appropriate for the functionality required for shaders, software tools that are used to draw images to the screen. The decision to go with a separate GPU was made in order to create the most versatile architecture possible.
Oh. I thought it was more powerful.
Measuring what? If you're talking pixel throughput, rasterization, texture reads, raster tests like alpha/depth/stencil, or rapid framebuffer read/write operations like alpha blending, then hell no. Not within a million miles. If you're talking raw vertex processing, then yeah, for the most part. Exceptions still exist in this case, though.
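Alpha blending is a good example of why that gap is so big: it's a read-modify-write on every destination pixel. Here's a minimal software sketch of what the GPU's raster output units do in fixed-function hardware; the names and pixel format are illustrative assumptions.

```c
/* Sketch of the per-pixel read-modify-write that alpha blending implies:
 * dst = src*a + dst*(1-a) on packed 8-bit RGBA. A GPU does this in
 * fixed-function raster hardware with dedicated framebuffer bandwidth. */
#include <stdint.h>

void blend_span(uint32_t *dst, const uint32_t *src, int count)
{
    for (int i = 0; i < count; ++i) {
        uint32_t s = src[i];
        uint32_t d = dst[i];                    /* framebuffer READ  */
        uint32_t a = (s >> 24) & 0xFF;          /* source alpha      */
        uint32_t r = ((s >> 16 & 0xFF) * a + (d >> 16 & 0xFF) * (255 - a)) / 255;
        uint32_t g = ((s >>  8 & 0xFF) * a + (d >>  8 & 0xFF) * (255 - a)) / 255;
        uint32_t b = ((s       & 0xFF) * a + (d       & 0xFF) * (255 - a)) / 255;
        dst[i] = (a << 24) | (r << 16) | (g << 8) | b;   /* WRITE back */
        /* (carrying the source alpha into the result is a simplification) */
    }
}
```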
You sure about that?
Well, for the context in which I'm saying it, yes I'm sure.
Obviously Sony considered it - it's in their patent after all - but the way Kutaragi admits to it has an 'in passing' sort of feel. Not only that, but the quote I was directly refuting indicated Sony went to devs and they gave their input on it. Yet not one dev has ever spoken of such a plan actually being set into motion at Sony.
And not for lack of this topic coming up a million times before, mind you...
Ok, I think I see what you mean. You're saying the urban myth is that they approached developers with it (perhaps a prototype or sim) and got hands-on feedback? Even though we're working with a second-hand translation, well, a summary of a translation, it still seems like it was considered as more than just in passing. Perhaps they even talked about the concept with some of their internal teams (SCEI-based, probably) and got feedback on that. Not hands-on, but discussion about the concept. Of course, that's just an assumption of mine. I don't think I've ever read any interview, with an internal or 3rd-party dev, mentioning that possibility. But then again, I don't explicitly remember it being brought up either. I just get the feeling this is something Ken seriously considered, especially since, IIRC, the original patent is actually in his name (or was at least created by him).
Total urban myth.
Anyway, yes, Cell is better than any modern GPU at raytracing/casting - which may be what launched MistaPi down this topic road. But the vast majority of games are obviously not going to be raytraced; enter the GPU.
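For a sense of why raytracing favors Cell-style cores: the inner loop is scalar math plus branching against arbitrary scene data, as in this minimal ray-sphere intersection sketch (my own illustration; assumes a normalized ray direction). That's a natural fit for a general-purpose core, not for the rasterization-oriented GPU pipelines of this generation.

```c
/* A minimal ray-sphere intersection, the inner loop of a raycaster.
 * Names are illustrative; dir is assumed to be normalized. */
#include <math.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Returns the distance along the ray to the nearest hit, or -1 on a miss. */
float ray_sphere(vec3 origin, vec3 dir, vec3 center, float radius)
{
    vec3 oc = { origin.x - center.x, origin.y - center.y, origin.z - center.z };
    float b = dot3(oc, dir);
    float c = dot3(oc, oc) - radius * radius;
    float disc = b * b - c;
    if (disc < 0.0f)
        return -1.0f;                     /* ray misses the sphere */
    float t = -b - sqrtf(disc);
    return (t > 0.0f) ? t : -1.0f;        /* nearest hit in front of origin */
}
```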
Well, I guess Carmack bought into that urban myth too...
I think it is also in that Xbox book. What are your sources?
And if you watched his QuakeCon address, while Carmack is definitely against the idea of CPU-as-GPU, he did seem to think there were some merits to the idea - he was far more positive than I expected him to be.
Oh, I don't have the Xbox book, but I thought there was mention of this. There was also a Toshiba GPU config? I guess the only debate here is how far each project got conceptually toward reality. Anyway, it's over.
If there's something solid on this in that Xbox book, by all means bring it out. It's not like I'm not interested in this.
But no one knew that inside Sony, something was going terribly wrong. Sony had created a new game system, dubbed GS Cube, with 16 Emotion Engine chips. It proved to be a technological dead end. In parallel, IBM fellow Jim Kahle had proposed Cell, a radically different computing architecture. Instead of a microprocessor and a graphics chip, the system for the PlayStation 3 was originally supposed to have two Cell microprocessors. One would handle the system while the second one would handle graphics. The game developers couldn’t make heads or tails of this non-traditional architecture. Sony scrapped that plan.

Then it commissioned both Sony’s and Toshiba’s chip designers to create their own graphics chip. The graphics chip was going to be a screaming monster that relied totally on one kind of processing, dubbed fill rate, to handle the graphics. That was what Sony and Toshiba’s engineers knew how to create, based on their work on the PlayStation 2. But in the meantime, both ATI and Nvidia had pioneered the use of shaders, which were subprograms that added the nuance and texture to the surface of an object. This technique simplified the process of creating art for games. To create a new effect, the developer had to simply create a new shader. The Sony and Toshiba team were far behind on shader technology. Game developers once again objected to the solution that they were proposing.

Sony had to cancel the graphics chip altogether. The console just wasn’t going to launch in 2005.
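As a footnote to the shader point in that excerpt: a shader really is just a small subprogram run per pixel or per vertex, and a new effect means writing a new subprogram rather than reworking the pipeline. A toy software equivalent of a Lambert diffuse shader, in plain C purely for illustration (real shaders of the era were written in HLSL/Cg/GLSL):

```c
/* A toy software version of a per-pixel "shader": a small function that
 * turns inputs (normal, light, material color) into an output color.
 * Swapping in a different function gives a new surface effect. */

typedef struct { float x, y, z; } vec3;

static float clamp01(float v) { return v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v); }

/* Lambert diffuse: brightness follows the angle between the surface
 * normal and the light direction (both assumed normalized). */
vec3 lambert_shader(vec3 normal, vec3 light_dir, vec3 albedo)
{
    float ndotl = clamp01(normal.x * light_dir.x +
                          normal.y * light_dir.y +
                          normal.z * light_dir.z);
    vec3 color = { albedo.x * ndotl, albedo.y * ndotl, albedo.z * ndotl };
    return color;
}
```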