davis.anthony
Veteran
This is an example of what Cell could do and what it was doing: https://www.slideshare.net/DICEStudio/spubased-deferred-shading-in-battlefield-3-for-playstation-3
Hard disagree. It's fair to say Cell wasn't good for the gaming code and paradigms of the time. If you were to take modern GPGPU-focussed concepts, or even just data-oriented game design, and map them onto Cell, you might have a very different story. When it was able to stretch its legs, Cell got unparalleled throughput. The main problem was devs couldn't, and shouldn't have had to, rearchitect everything for Cell on top of developing for other platforms. Sony perhaps imagined a PS2-like scenario where the majority of code was focussed on its hardware and then ported, but that didn't happen, and realistically couldn't given how the cost of game development was increasing exponentially.
Sadly we'll never know. There isn't going to be a future retro-demo scene pushing PS3s the way we've seen the C64, Spectrum, etc. pushed; there's no access to Cell hardware or interest in exploring it. I'm not going to argue it's a loss to the world and a travesty that other tech displaced it, but I'm not going to accept that it's a poor design for a gaming CPU. Modern game development is data focussed, working around the data-access and memory limits caused by RAM buses that haven't kept up with the increase in maths power, and gaining massive improvements (10-100x) over object-oriented development concepts. That's entirely what Cell was about 15 years ago, when STI got together and said, "you know what the future limiting factor is going to be? Data throughput."
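For the uninitiated, here's a hand-wavy sketch of what "data focussed" means in practice. The types and names are mine, purely illustrative; the 10-100x figures come from cache behaviour, not cleverer maths:

```c
#include <stddef.h>

/* Object-style (AoS) layout: updating positions drags every cold field
   through the cache alongside the hot ones. */
typedef struct {
    float pos[3], vel[3];
    char  name[64];          /* cold data interleaved with hot data */
    int   health, flags;
} EntityAoS;

/* Data-oriented (SoA) layout: the update loop streams exactly the bytes
   it needs, contiguously. */
typedef struct {
    float *pos_x, *pos_y, *pos_z;
    float *vel_x, *vel_y, *vel_z;
    size_t count;
} EntitiesSoA;

void update_positions(EntitiesSoA *e, float dt)
{
    /* Contiguous, prefetch-friendly, trivially parallelisable. */
    for (size_t i = 0; i < e->count; i++) {
        e->pos_x[i] += e->vel_x[i] * dt;
        e->pos_y[i] += e->vel_y[i] * dt;
        e->pos_z[i] += e->vel_z[i] * dt;
    }
}
```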
I really wonder what Cell + a GeForce 8-series GPU could have done. That $500 budget could have gone to Cell + a GeForce 8 and 512 MB of DDR alongside the 256 MB of RDRAM. Could have been an amazing console if they hadn't gone with that weird GPU and forced Blu-ray into the console.
Yea but it was Sony. Baby steps lol.

Or simply something else for the CPU than the weak Cell. Whatever AMD/Intel or even IBM had on offer, really.
> Your initial argument was that the CELL wasn't very powerful. Clearly it was extremely powerful. We have seen it in action and there is proof.

If the Cell wasn't good for gaming tasks at the time but would be today, then it still wasn't a good CPU for gaming when the PS3 was hot. Though I doubt its design would reign today; evidently it does not. As a user who posted after you has shared, it wasn't all that great of a CPU back then, for gaming or most things really, nor is its architecture 15 years later.
We are not seeing anything of it in today's architectures and consoles, and for good reason. It was a terrible console hardware-wise.
Your initial argument was that the CELL wasn't very powerful. Clearly it was extremely powerful. We have seen it in action and there is proof.
"CELL was not good for gaming" and "CELL was not ideal for the gaming development paradigms of the time" are two different things.
How its architecture fares 15 years later is irrelevant. Whatever architecture we have today is, like everything else, the result of 15 years of steady evolution and improvement.
The CELL was designed well for what was about to come, but came too early.
The earlier release of the 360 and the slow adoption of the PS3 meant that developers found a home in the 360's development ecosystem. If the PS3 had launched earlier and its market share had exploded like the PS2's, we would have seen developers taking advantage of CELL in a lot more ways, and whether it was ideal or not would have been irrelevant. It makes more sense to focus main support on the huge market leader even if it is not ideal. Just like the PS2. We were having the same arguments back when the PS2 was released: it was difficult to make games on, libraries and documentation were unfinished, it was too exotic and developers had to be super smart about how to use the architecture, some games looked worse than on the DC, analysts were predicting the elimination of many games companies because it was too costly to make games on, yada yada.
Same story at the beginning, but in the end it rocked the industry, simply because it came early and sold like hotcakes until the end. The PS2 was probably a bigger nightmare overall, but that was irrelevant. When it owned 70% of the market, comparisons with a product with significantly less market share made zero financial sense. They do make a hell of a lot more sense, though, if your market share is below 50% and developers are making multiplatform games.
We aren't having this discussion today about the PS2, and we saw its architecture pushed by developers. Even though the XBOX was significantly more powerful and straightforward, the PS2 was the base platform. It didn't get Saturn'ed.
The PS3 and the 360, on the other hand, were much more similar in terms of performance and architecture, yet we are having discussions about how badly the PS3 was designed and how bad CELL was, even though we have games that prove its capabilities, because the PS3 almost got Saturn'ed. This discussion might not have taken place if the PS3 had repeated the PS2's success. Developers would have jumped in, put aside the 360's more efficient, more traditional approach, and kept discovering ways to use CELL's potential.
Anyway, we don't see anything like that being tried again in general purpose computing, and probably with good reasons.
> I think if the SPE were able to address main memory directly (read-only is probably fine, but there's also a cache-coherence problem to consider), and if the CPU were more powerful (e.g. if we traded two or more SPEs for another PPE or a better PPE), CELL would probably be much better. Unfortunately, since CELL was not designed like that (as mentioned in a previous post, CELL was designed to cover graphics workloads), it's probably too late to change.

CELL wasn't designed for graphics workloads. It was designed for 'stream processing', whatever workload that is, including things like video encoding/decoding. Kutaragi may have had a notion it'd be great for pure visuals, but the architecture was not designed specifically around graphics rendering.
> My problem with CELL is, basically, that it's a CPU + DSP set with an underpowered CPU. The DSP part (those SPEs) was quite powerful for the time, in DSP terms, but also limited for more general-purpose workloads. Each SPE only has 256 KB of memory, and from my understanding an SPE can't address main memory directly (it has to go through DMA, which has a latency problem). That means SPEs are best suited to 'streaming' work, i.e. workloads with highly localized memory access, e.g. audio or video encoding/decoding. Unfortunately, the CPU (PPE) is not powerful enough to feed those SPEs. This creates a lot of balancing problems, making CELL a pretty fragile architecture (that is, you need to optimize carefully for it to run very well, but if some of the workloads change, you have to go through the whole optimization process again).

IIRC a memory manager was developed to run on one SPE and serve the requirements of other SPE workloads. As for 'streamable workloads', modern software is structuring everything around streaming because that's how you get the best utilisation from the caches and keep the many ALUs and cores occupied, so as many jobs as possible are now designed around parallel-processable, streamable data. The best economy from any processor, ARM, x64, GPGPU, comes from lining up all your data and running through it as quickly as possible, even doing the maths multiple times rather than using conditionals and branches to do only the 'necessary' maths.
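For what it's worth, the standard way devs hid that DMA latency was double buffering: fetch chunk N+1 while computing on chunk N. A minimal sketch, assuming the Cell SDK's spu_mfcio.h interface; process_chunk is a hypothetical compute kernel:

```c
#include <spu_mfcio.h>
#include <stdint.h>

#define CHUNK 16384  /* 16 KB per buffer, well inside the 256 KB local store */

/* Two local-store buffers, 128-byte aligned for efficient DMA. */
static volatile uint8_t buf[2][CHUNK] __attribute__((aligned(128)));

extern void process_chunk(volatile uint8_t *data, unsigned size); /* hypothetical */

void stream(uint64_t ea, unsigned nchunks)
{
    unsigned cur = 0;
    /* Kick off the first transfer, tag = buffer index. */
    mfc_get(buf[cur], ea, CHUNK, cur, 0, 0);

    for (unsigned i = 0; i < nchunks; i++) {
        unsigned next = cur ^ 1;
        /* Start fetching chunk i+1 while chunk i is still arriving/being used. */
        if (i + 1 < nchunks)
            mfc_get(buf[next], ea + (uint64_t)(i + 1) * CHUNK, CHUNK, next, 0, 0);

        /* Wait only for the current buffer's tag, then compute on it. */
        mfc_write_tag_mask(1 << cur);
        mfc_read_tag_status_all();
        process_chunk(buf[cur], CHUNK);

        cur = next;
    }
}
```

Get the chunk size and compute/transfer balance right and the DMA latency disappears behind the maths; get it wrong and the SPE stalls, which is exactly the fragility being described above.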
> Anyway, we don't see anything like that being tried again in general purpose computing, and probably with good reasons.

Yes, but probably not because the architectural concepts are a dead end. An independent IHV can't add 'stream processors' onto an existing CPU because no-one will use them unless they are standard across all platforms. What we see instead is processors being adapted to fit the workloads Cell was designed for, notably adjusting GPU shaders to stream process. You'd have to take a trip to a parallel dimension without the same economic constraints, one where all silicon swapped to 'stream processors' 15 years ago, to understand the architecture's real potential and limits compared to what we have as standard, with 15 years of evolution across ARM, Intel, AMD and STI 'Cell'-type processors to compare against i9 and Ryzen/Threadripper.
We aren't discussing which is the more capable system; that again is subjective and a different subject, since both consoles had their own set of pros and cons. We aren't discussing whether power defines who the market leader is going to be either. Again, that's a different subject. So, what's that proof it was 'extremely powerful'? People with actual hands-on experience, from home users to developers, clearly have a different view. As a CPU it was very underpowered, coupled to specialized processors (DSPs) that proved to bring not much to gaming performance aside from some use cases. On top of that it was hard to code for.
The 360 launched earlier yet was the more capable system, as is evident in multiplatform games. We can't judge on AAA exclusives, since those were never released on, or optimized for, both consoles.
All you are proving is that hardware isn't always tied to a console's success; look at the PS2 and Switch... both the weakest but seemingly the most successful.
It's like some are completely ignoring what others who have had experience with it, here (and elsewhere), have to say. The PS4 was and still is the much better machine hardware-wise.
It's good that today's consoles (starting with the PS4) bear nothing from the PS3 and the era before it. Yeah, it was interesting to talk about these exotic (yet weak) designs, though we could've had so much more graphically if they hadn't gone with crazy custom designs.
> So, what's that proof it was 'extremely powerful'?

The raw data from paper specs, performance tests and benchmarks, coupled with an understanding of the data and processing workflow of a game (or indeed any CPU-optimised workload) and how it can be parallelised and streamed. Firstly, your CPU has a peak amount of maths it can do, set by its ALUs. Then it gets bottlenecked trying to keep those ALUs busy. If you can keep them busy, the CPU with the most ALUs wins. The case you are wanting, a game 10x better on PS3 than XB360, you will never find, so you have to understand what's actually happening with the software, data, and processing to see where Cell's strengths lie and why it was attempted in the first place.
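As an aside, "keeping the ALUs busy" often literally means doing more maths than strictly necessary. A toy branch-free sketch (names invented for illustration); on deep in-order pipelines like the PPE and SPEs, blending two results is usually cheaper than risking a mispredicted branch per element:

```c
#include <stddef.h>

/* Clamp every velocity without a data-dependent branch: the comparison
   yields a 0/1 mask that blends the two candidate results, so the loop
   body has no jumps and pipelines/vectorizes cleanly. */
void clamp_velocities(float *v, size_t n, float vmax)
{
    for (size_t i = 0; i < n; i++) {
        float over = (float)(v[i] > vmax);          /* 1.0f if over the limit, else 0.0f */
        v[i] = over * vmax + (1.0f - over) * v[i];  /* blend instead of branch */
    }
}
```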
> It's good that today's consoles (starting with the PS4) bear nothing from the PS3 and the era before it. Yeah, it was interesting to talk about these exotic (yet weak) designs, though we could've had so much more graphically if they hadn't gone with crazy custom designs.

We could have had better results if devs had been able to develop Cell-ideal games too, and the software had actually been mapped well to it.
> In 2017, Unity Technologies, known best for the Unity Real-Time Development Platform used by many game developers, hired DOD proponents Mike Acton and Andreas Fredriksson from Insomniac Games...
Mike Acton and Andreas Fredriksson took the work needed for Cell at Insomniac and saw it applied to all processors. You want to map your work to stream processing, and once you start doing that, it makes sense to design the hardware to be optimised for those workloads given your limited silicon budget. Which is exactly what Cell was.
The History of DOD

The movement toward data orientation in game development occurred after the PlayStation (PS) 3 game console was released in the mid-2000s. The game console used the Cell hardware architecture, which contains a PowerPC core and eight synergistic processing elements, of which game developers traditionally used six. This forced game developers to make the move from a single-threaded view of software development to a more parallel way of thinking about game execution to push the boundaries of performance for games on the platform. At the same time, large-scale (AAA) game development was growing in complexity, with an emphasis on more content and realistic graphics.
Within that environment, data parallelism and throughput were very important. As practices coalesced around techniques used in game development, the term DOD was created and first mentioned in an article in Game Developer magazine in 2009.6
In 2017, Unity Technologies, known best for the Unity Real-Time Development Platform used by many game developers, hired DOD proponents Mike Acton and Andreas Fredriksson from Insomniac Games to “democratize data-oriented programming” and to realize a philosophical vision with the tagline of “performance by default.”7 The result has been the introduction of DOTS, which is a canonical use of DOD techniques.
To date, many blogs and talks have discussed DOD since the original article, but very little has been studied in academia regarding the process. Richard Fabian, a practitioner from industry, published a book on DOD in 2018, although it existed for several years in draft form online.8
In 2019, a master’s thesis was published by Per-Morten Straume at the Norwegian University of Science and Technology that investigated DOD.9 Straume interviewed several game-industry proponents of DOD in the thesis and concluded that, while they differed in their characterizations of DOD, its core characteristics were a focus on solving specific problems rather than generic ones, consideration of all kinds of data, making decisions based on data, and an emphasis on performance in a wide sense.
Both Fabian and Straume discuss DOD elements without fully tying those elements together to form a design practice that can be used to create software. The overarching theme that ties all of the elements together is that software exists to input, transform, and output data.
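Read that way, a "system" is just a function from input data to output data. A toy sketch of the framing (all names invented for illustration):

```c
#include <stddef.h>

/* The input -> transform -> output framing: no hidden object state,
   just packed input arrays producing packed output arrays. */
void integrate_positions(const float *pos, const float *vel, float dt,
                         float *out_pos, size_t count)
{
    for (size_t i = 0; i < count; i++)
        out_pos[i] = pos[i] + vel[i] * dt;  /* transform input data into output data */
}
```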
The only real architectural complaint here is that the SPE memory was too small. That's a difficult one to evaluate. Is it inherently too small to do the work needed, or was it too small for the work devs were trying to get it to do back then, in the way they were trying to get it done?
Edit: The saddest part: Mike Acton is a B3D member who actually teamed up with Beyond3D to host the Cell development centre for discussion and best practice. You can check out his content at https://cellperformance.beyond3d.com/articles/index.html
His posts: https://forum.beyond3d.com/members/mike-acton.6747/#recent-content