Cell is not much faster at folding; for particles I would say otherwise.
I thought Cell was quite a bit slower than R580 at folding? And a lot more so in comparison to the R6xx generation?
Cell development has existed for a couple of years now. Add another couple of years of development before GPUs are really offering effective all-purpose processing, and the reality is Cell will probably be in a much stronger development position than GPUs.
GPUs aren't going to offer a seamless, easy development system. Every new multicore architecture doing parallel processing is going to face the same issues. A Cell-based PS4 would in effect present the same programming difficulties as the Wii did this gen: none. Unless there's a radical shift, which is unlikely, code can be copied over exactly from PS3 to PS4. This maintains BC and ease of development, while more cores etc. provide excellent scaling of the already-developed techniques (assuming developers have got to grips with scheduling systems and work-distribution models). Compare that to designing your engines from scratch for whole new hardware, and the advantage is obvious.
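The scheduling/work-distribution point above can be sketched in a few lines. This is a hypothetical Python illustration, nothing to do with actual PS3/PS4 toolchains: job code written once against a pool abstraction keeps working unchanged as the core count grows.

```python
# Minimal sketch (hypothetical, not real console code) of a work-distribution
# model: the per-job code is written once and scales transparently with cores.
from concurrent.futures import ThreadPoolExecutor

def simulate_chunk(chunk):
    # Stand-in for an already-developed job (physics, audio, etc.).
    return [x * 2 for x in chunk]

def run_frame(data, num_cores):
    # Split the frame's workload into roughly one chunk per core.
    size = max(1, len(data) // num_cores)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=num_cores) as pool:
        results = pool.map(simulate_chunk, chunks)
    return [x for chunk in results for x in chunk]

# The same job code runs unchanged whether the machine has 6 or 32 cores.
print(run_frame(list(range(8)), num_cores=6))   # [0, 2, 4, 6, 8, 10, 12, 14]
print(run_frame(list(range(8)), num_cores=32))  # [0, 2, 4, 6, 8, 10, 12, 14]
```

The point of the sketch is only that the job function never changes; the pool size is the one knob a next-gen machine would turn up.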
> I thought Cell was quite a bit slower than R580 at folding? And a lot more so in comparison to the R6xx generation?

I know that for heavily parallel workloads GPUs are better.
My half a cent:
First off, I believe the approach Sony took with the PS3 will pay off in the long run. The console had a rough start, but it's doing fairly well at the moment. Once some of the heavy hitters for the system are out, and once it's had one or more price cuts under its belt, forgeddaboutit. It'll do well. With that assumption in place, I don't believe Sony will be so turned off by the PS3's big up-front investment as to turn 180 degrees and attempt to pull a Wii for their next console.
The two most exotic/expensive components in the system were the BD drive and the Cell, correct? By the time PS4 comes around, BD drives will be dirt cheap, and Cell will hopefully also be more entrenched and have benefited from economies of scale (assuming it actually starts being used in TVs and other devices). I think this would afford Sony the ability to load up the PS4 with nice goodies and still have it retail for less than $450.
A setup like this, maybe; just pure uneducated speculation and wishful thinking :smile::
multiple Cells (would leverage the existing experience of PS3 devs)
custom NVIDIA part
4GB system RAM
1-2GB VRAM
???MB eDRAM
BD drive
standard or laptop SATA HDD
10Gb LAN
external SATA port too, perhaps?
Cheers.
> Cell is not much faster at folding; for particles I would say otherwise.
> I thought Cell was quite a bit slower than R580 at folding? And a lot more so in comparison to the R6xx generation?

Folding@Home is not a good indicator, since it's not a straight benchmark, not to mention that it's hard to connect it to realtime game performance. From the Folding@Home FAQ:
> What types of calculations is the PS3 client capable of running?
> The PS3 right now runs what are called implicit solvation calculations, including some simple ones (sigmoidal dependent dielectric) and some more sophisticated ones (AGBNP, a type of Generalized Born method from Prof. Ron Levy's group at Rutgers). In this respect, the PS3 client is much like our GPU client. However, the PS3 client is more flexible, in that it can also run explicit solvent calculations as well, although not at the same speed increase relative to PCs. We are working to increase the speed of explicit solvent on the PS3 and would then run these calculations on the PS3 as well. In a nutshell, the PS3 takes the middle ground between GPUs (extreme speed, but limited types of WUs) and CPUs (less speed, but more flexibility in types of WUs).
> How should the FLOPS per client be interpreted?
> We stress that one should not divide "current TFLOPS" by "active clients" to estimate the performance of that hardware running without interruption. Note that if donors suspend the FAH client (e.g. to play a game, watch a movie, etc.) they enlarge the time between getting the WU and delivering the result. This in turn reduces the FLOPS value, as more time was needed to deliver the result.
> It seems that the PS3 is more than 10X as powerful as an average PC. Why doesn't it get 10X PPD as well?
> We balance the points based on both speed and the flexibility of the client. The GPU client is still the fastest, but it is the least flexible and can only run a very, very limited set of WUs. Thus, its points are not linearly proportional to the speed increase. The PS3 takes the middle ground between GPUs (extreme speed, but limited types of WUs) and CPUs (less speed, but more flexibility in types of WUs). We have picked the PS3 as the natural benchmark machine for PS3 calculations and set its points per day to 900 to reflect this middle ground between speed (faster than CPU, but slower than GPU) and flexibility (more flexible than GPU, less than CPU).
http://folding.stanford.edu/English/FAQ-PS3
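The FAQ's point about suspended clients diluting the reported FLOPS is just arithmetic: the same work unit divided by a longer wall-clock time gives a lower figure. A small sketch with made-up numbers (the function name and WU size are illustrative, not anything from the FAH client):

```python
# Back-of-envelope sketch of the FAQ's point: suspending the client stretches
# the wall-clock time per work unit, lowering the reported FLOPS even though
# the hardware's peak hasn't changed. All numbers are made up.
def reported_tflops(flop_per_wu, compute_hours, suspended_hours):
    wall_seconds = (compute_hours + suspended_hours) * 3600
    return flop_per_wu / wall_seconds / 1e12

FLOP_PER_WU = 8.64e16  # hypothetical size of one work unit

# Running uninterrupted for 24 h:
print(reported_tflops(FLOP_PER_WU, 24, 0))   # ~1.0 TFLOPS
# Same machine, but suspended for 24 h to play games:
print(reported_tflops(FLOP_PER_WU, 24, 24))  # ~0.5 TFLOPS
```

So a console that is only folding half the time reports half the FLOPS, which is why "current TFLOPS / active clients" underestimates the hardware.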
> How about particles? Collisions? Fluid simulations, cloth simulations?

Sure, they'll run.. But you can definitely churn through a lot more per frame using Cell than most GPUs, and in real-world scenarios it makes sense to use Cell for what it's good at and not waste GPU resources while leaving your beefy floating-point monster of a CPU sitting idle..
Those are heavily parallel workloads.
And it looks like a lot more workloads could find their place on the GPU, the same way they found their place on the SPUs.
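As a toy illustration of why particle-style workloads fit both SPUs and GPU shader cores (hypothetical Python, not SPU or shader code): each element's update is independent, so the same small kernel can be fanned out across any number of lanes or cores with no cross-particle communication.

```python
# Why particles parallelize well: one kernel, applied uniformly, with each
# particle's update independent of every other particle's. Illustrative only.
def update_particles(positions, velocities, dt):
    # On real hardware each lane/core would process its own slice of the
    # arrays in parallel; here we just apply the kernel element-wise.
    return [(px + vx * dt, py + vy * dt)
            for (px, py), (vx, vy) in zip(positions, velocities)]

pos = [(0.0, 0.0), (1.0, 1.0)]
vel = [(1.0, 0.0), (0.0, -1.0)]
print(update_particles(pos, vel, 0.5))  # [(0.5, 0.0), (1.0, 0.5)]
```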
> But if you think that GPGPU is (or will be) useless, there is a better place to discuss the issue, as quite some members will disagree.

I never, ever mentioned GPGPU at all.. Where in the world did you get the idea that I said anything of the sort?
> In fact the GPU cores/multiprocessors could act as SPUs and offload a lot of calculation from the CPU.

But you lose a lot of flexibility running on the GPU; many tasks just wouldn't run efficiently enough to get comparable performance, and with things like collision, if you already have a Cell sitting alongside your GPU, then why in the world wouldn't you use it instead?
Sure.. OK, these cores may be more specialized (they can run good graphics) than SPUs, but that's nothing shocking, as we're speaking of a system for supporting video games, and that is effectively more relevant than folding or HD decoding...
In practice a console game is a LOT more likely to be GPU-limited than CPU-limited, so it would make sense going forward for the focus to rest on improving the CPU's capacity to help the GPU with graphics-related processing tasks (calculating radiosity, for example), as well as doing everything it already does (or just making the GPU and CPU both faster and more flexible), a lot faster and more abundantly than trying to do the opposite..
...
Whereas keeping all those little SPUs busy with enough useful work per frame is a much harder and more complicated ordeal, for example..
> But you lose a lot of flexibility running on the GPU; many tasks just wouldn't run efficiently enough to get comparable performance, and with things like collision, if you already have a Cell sitting alongside your GPU, then why in the world wouldn't you use it instead?

I think you've got your wires crossed. This is about future consoles, not the current ones. PS3 is a good architecture. Going forward, would it be worth having a big CPU/Cell in a console, or will a 'GPU' be better at most of the processing-heavy workloads, with the CPU only needing to be something fairly simplistic - in essence the CPU being the PPE and the GPU providing lots of vector units as SPEs?
What I don't get is this strange mentality you seem to have that for some reason developers would want to offload general game-processing tasks to the GPU, when you have a big fat multi-core CPU that can process the same data a lot more efficiently?
> Sure, they'll run.. But you can definitely churn through a lot more per frame using Cell than most GPUs, and in real-world scenarios it makes sense to use Cell for what it's good at and not waste GPU resources while leaving your beefy floating-point monster of a CPU sitting idle..

Why would a console manufacturer spend more money (and silicon budget) than needed if they intended to use the GPU as a general-purpose accelerator?
> I never, ever mentioned GPGPU at all.. Where in the world did you get the idea that I said anything of the sort?

Point taken, but you seem to imply that using the GPU for things outside of graphics is not a good idea. Doing calculations on the GPU, even game-related ones, is GPGPU.
> But you lose a lot of flexibility running on the GPU; many tasks just wouldn't run efficiently enough to get comparable performance, and with things like collision, if you already have a Cell sitting alongside your GPU, then why in the world wouldn't you use it instead?

Look at the link above; GPUs are catching up.
> Sure..

Once again, why have a huge-ass CPU? And there's no proof the big-ass CPU is better for the tasks where SPUs/GPUs are really likely to shine.
> What I don't get is this strange mentality you seem to have that for some reason developers would want to offload general game-processing tasks to the GPU, when you have a big fat multi-core CPU that can process the same data a lot more efficiently?
> In practice a console game is a LOT more likely to be GPU-limited than CPU-limited, so it would make sense going forward for the focus to rest on improving the CPU's capacity to help the GPU with graphics-related processing tasks (calculating radiosity, for example), as well as doing everything it already does (or just making the GPU and CPU both faster and more flexible), a lot faster and more abundantly than trying to do the opposite..

Your argument is broken:

> I don't envisage technology like GPGPU (as wonderful as it may be) making much of an impact in the console space, as console GPUs aren't at all in danger of being starved of work in the vast majority of cases, so there's little incentive to "offload more work onto it"..
> Whereas keeping all those little SPUs busy with enough useful work per frame is a much harder and more complicated ordeal, for example..
> I think you've got your wires crossed. This is about future consoles, not the current ones. PS3 is a good architecture. Going forward, would it be worth having a big CPU/Cell in a console, or will a 'GPU' be better at most of the processing-heavy workloads, with the CPU only needing to be something fairly simplistic - in essence the CPU being the PPE and the GPU providing lots of vector units as SPEs?

Really well put together, Shifty.
liolio's faith is in the development of GPUs as versatile processing engines, more so than GPUs are capable of now. Looking at the situation today, Cell is a no-brainer, as GPUs just can't compete with its flexibility and performance, but it isn't going to stay that way. As for GPUs being tied up with graphics work: if we lose the preconception-forming naming conventions and call them not GPUs but VPUs (vector processing units), then the choice is to spend your transistor budget either on CPU+GPU or on CPU+VPU. If you could have 600 M transistors, would you get more bang for your buck from 300 M on CPU and 300 M on GPU, or from 50 M on CPU and 550 M on VPUs, across multiple dies if necessary? This is similar to the idea of doing graphics solely on Cell or Larrabee. If you have 600 M transistors of Cell/Larrabee, will you get a better overall result with those transistors churning through game code and graphics rendering, versus the same budget split over discrete chips dedicated to specific roles?
It's an interesting time, with three (at least!) different approaches to the 'lots of processing power' target, and a choice of how much workload to place on CPUs versus GPUs, also giving developers the option of flexibility or performance.
> Going forward, would it be worth having a big CPU/Cell in a console, or will a 'GPU' be better at most of the processing-heavy workloads, with the CPU only needing to be something fairly simplistic - in essence the CPU being the PPE and the GPU providing lots of vector units as SPEs?

That's the technical side of console design; Cell is also special for Sony because its IP cost is supposed to be cheaper than that of an IP entirely developed by a third party. In other words, SCE has to recover the development cost of Cell by reusing its IP in a game console, unless it ends up embedded in every TV. In the earlier phase of a console's life cycle transistor cost matters most, but after die shrinks, IP cost becomes the more apparent avenue for cost cutting.