So you admit to being one of the sucky programmers incapable of handling the greatness. Do you also use C++?
Lol Alex
I find it hard to believe that, given a better GPU, no developer could come up with any additional computing to do. There's a whole lot on the physics/animation side that either isn't happening this gen or only barely happens. It seems like whenever anyone declares this or that chunk of computational ordnance useless, someone finds a way to use it.
Right, but we're not talking about having parts A+B vs. just A. We're talking about using the area and power resources that went into the SPEs to build a better CPU and/or a better GPU. And going forward this gets a lot murkier: once power becomes the limit, even having extra hardware is not a pure win. If you power it up to run a problem that could have been executed more efficiently elsewhere, you'll slow down the whole chip and end up behind... every watt the SPEs burn on a task another unit handles at better perf/W is thermal headroom the rest of the chip can't spend on clocks.
I get the feeling that people who don't work on it (or never have) presume that everything possible on the Cell architecture has already been accomplished, because it happens to exist as a 1 PPE + 6 SPE chip in the PS3.
I've written code on both PS3 and Cell blades... as well as lots of GPUs, AVX/SSE, Larrabee, etc. So I think I'm qualified to speak to the comparison?
GPU compute is limited to specific workloads.
But that's the biggest problem with Cell too: it only does well at workloads that GPUs are even better at. Sure, it can run stuff like LZMA (and so can GPUs), but it does so worse than a standard CPU core would.
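For anyone wondering why LZMA-class codecs fall in the "CPU wins" bucket: the inner loop is a binary range decoder whose state feeds directly into the next bit decision, so there's nothing to vectorize. Here's a simplified sketch in C (the constants follow LZMA's actual 11-bit probabilities and 5-bit adaptation shift, but this is schematic, not the real decoder):

```c
/* Schematic LZMA-style binary range decoder inner loop. Each bit
 * decision reads and rewrites the decoder state (range/code) and
 * an adaptive probability, so decoding bit N is a hard dependency
 * for bit N+1 -- there is nothing here for SPEs or GPU lanes to
 * parallelize. Simplified for illustration. */
#include <stdint.h>

typedef struct {
    uint32_t range, code;      /* arithmetic-coding state */
    const uint8_t *in;         /* compressed input stream */
} RangeDec;

/* prob is an adaptive 11-bit probability (0..2048). */
static int decode_bit(RangeDec *rc, uint16_t *prob)
{
    uint32_t bound = (rc->range >> 11) * *prob;
    int bit;
    if (rc->code < bound) {
        rc->range = bound;
        *prob += (2048 - *prob) >> 5;   /* adapt toward 0 */
        bit = 0;
    } else {
        rc->code  -= bound;
        rc->range -= bound;
        *prob -= *prob >> 5;            /* adapt toward 1 */
        bit = 1;
    }
    if (rc->range < (1u << 24)) {       /* renormalize */
        rc->range <<= 8;
        rc->code = (rc->code << 8) | *rc->in++;
    }
    return bit;
}
```

Every iteration is a serial dependency chain plus a data-dependent branch, which is exactly what a big out-of-order core with a branch predictor eats for breakfast and what SPEs/GPU lanes stall on.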
Maybe, if you have a tightly coupled APU with shared TLBs, etc.
Having a unified address space is barely useful since it's not even cached... not even for read-only data (as it is on GPUs). You don't want to be pointer-chasing on SPUs (or GPUs) anyway, so the advantages vs. the GPU model are pretty moot.
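To make the pointer-chasing point concrete, here's roughly what a linked-list walk looks like on an SPU (a minimal sketch, assuming the IBM Cell SDK's spu_mfcio.h MFC intrinsics; the Node layout is made up for illustration). Every hop is a blocking DMA into local store, and there's no cache to soak up the latency:

```c
/* Sketch of a linked-list walk on an SPU, using the Cell SDK's
 * mfc_get + tag-status intrinsics. Each hop is a full DMA from
 * main memory into local store before the next address is even
 * known, so the loop is serialized on memory latency. */
#include <spu_mfcio.h>
#include <stdint.h>

typedef struct {
    uint64_t next_ea;   /* effective address of the next node */
    int32_t  payload;
    int32_t  pad;       /* pad to 16 bytes for DMA alignment  */
} Node __attribute__((aligned(16)));

int sum_list(uint64_t head_ea)
{
    Node node;                      /* local-store staging slot */
    const unsigned tag = 0;
    int sum = 0;

    while (head_ea != 0) {
        /* Pull one node from main memory into local store... */
        mfc_get(&node, head_ea, sizeof(node), tag, 0, 0);
        /* ...and stall until it lands: we can't issue the next
         * DMA because the next address is inside this one. */
        mfc_write_tag_mask(1 << tag);
        mfc_read_tag_status_all();

        sum += node.payload;
        head_ea = node.next_ea;     /* fully serialized on DMA */
    }
    return sum;
}
```

Contrast that with the streaming case, where you'd double-buffer big contiguous chunks through the MFC to hide the latency entirely. That's exactly why both SPUs and GPUs want flat, streamable data rather than pointer graphs.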
I really want to give Cell the benefit of the doubt, but as I mentioned, in retrospect the memory-hierarchy choices they made are just too crippling. Thus I agree with Aaron: either a CPU or a GPU is more efficient at the vast majority of interesting workloads.
And of course you can make the argument that it was interesting "when it came out", but I doubt Sony would have made a bet on something they knew had no future. Furthermore, it came out around the same time as G80, which is really when GPUs began to put the nails in the coffin.
The real-time ray tracing link is kind of funny, because ray tracing is one of the things I was involved in doing on Cell and GPUs in the past. Cell's performance was briefly interesting compared to the 7xxx-series GPUs and their crap branching, but once G80 came out (again, around the same time as the PS3, IIRC), Cell was clearly outmatched. And for reference, G80 is awful at ray tracing compared to modern GPUs (massive strides were made once ray tracing on GPUs switched to software warp scheduling), so I think it's pretty fair to conclude that Cell would look really bad in that comparison today.
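For those who haven't fought SIMD divergence: here's a toy model in plain C (the 32-lane width and the cycle costs are made-up, illustrative numbers) of why incoherent rays murdered the 7xxx-era parts. Lanes in a SIMD group execute in lockstep, so the group pays for every branch path any lane takes:

```c
/* Toy model of SIMD branch divergence during ray traversal.
 * WIDTH lanes execute in lockstep; if even one lane takes the
 * expensive path, the whole group runs it with the other lanes
 * masked off. WIDTH and the costs are hypothetical. */
#include <stdio.h>

#define WIDTH 32                        /* lanes per lockstep group */
enum { HIT_COST = 40, MISS_COST = 5 };  /* hypothetical cycle costs */

int main(void)
{
    int lane_hits[WIDTH];
    /* Say one incoherent ray out of 32 hits a costly subtree. */
    for (int i = 0; i < WIDTH; i++)
        lane_hits[i] = (i == 0);

    int any_hit = 0, any_miss = 0;
    for (int i = 0; i < WIDTH; i++) {
        if (lane_hits[i]) any_hit = 1;
        else              any_miss = 1;
    }

    /* Lockstep: divergence means the group pays for BOTH paths,
     * i.e. the sum of the taken paths, not the max any one lane
     * actually needed. */
    int cycles = 0;
    if (any_hit)  cycles += HIT_COST;
    if (any_miss) cycles += MISS_COST;

    printf("group cost: %d cycles (vs %d if all lanes agreed)\n",
           cycles, MISS_COST);
    return 0;
}
```

The 7xxx parts had branch granularity in the hundreds of pixels, so this penalty was brutal; G80's 32-thread warps shrank it, and moving traversal scheduling into software shrank it further, which is why the gap over Cell has only widened since.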