Cell's design was extremely powerful for its time. It may have been difficult to extract that power from Cell, but that was a learning process; the hardware is capable of what the hardware is capable of. I think if people had known how to maximize the PS3 from day 1, it would have been a pretty solid case vs the Xbox 360. It's just one of those things attributable to the fact that developers were still largely of a single-threaded mindset back then. The fact that we have to revisit those types of software designs again is a testament that we got back there anyway; the only difference is that we do it through cores instead of SPEs.
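To make that "cores instead of SPEs" point concrete, here's a rough sketch (my own illustration, not anything from Sony's or IBM's SDKs; names like scale_kernel are made up) of how the SPE pattern of splitting data into chunks and running the same small kernel on each chunk maps onto ordinary CPU threads today:

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// The kind of small, self-contained kernel you would have shipped to an SPE.
static void scale_kernel(float* data, std::size_t count, float factor) {
    for (std::size_t i = 0; i < count; ++i)
        data[i] *= factor;
}

// Fan the work out across hardware threads instead of SPEs.
void scale_parallel(std::vector<float>& data, float factor) {
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (data.size() + workers - 1) / workers;

    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        const std::size_t begin = static_cast<std::size_t>(w) * chunk;
        if (begin >= data.size()) break;
        const std::size_t count = std::min(chunk, data.size() - begin);
        pool.emplace_back(scale_kernel, data.data() + begin, count, factor);
    }
    for (auto& t : pool) t.join();
}
```

Same shape of program, just without the explicit DMA into local store that the SPEs forced on you.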
The reality is that, to really maximize hardware, developers need to write software specifically designed and tailored to it. That will mean moving away from object-oriented code, and from code written for human logic. But in this day and age, performance seems to matter less than iteration speed. The more you want to scale up, the more likely you are to end up adopting some form of coding in which everything is done in parallel, all at the same time. We can't just slowly iterate through each object and apply logic to it. Single-threaded game code is really a simplification to keep things manageable for our minds, but as you can see with DoD and where Unity is headed, there are ways to program in parallel that take significant advantage of the hardware.
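A minimal sketch of the DoD idea, assuming nothing about any particular engine (the Entity/Positions names are mine, purely illustrative): instead of iterating objects one by one, store each hot field contiguously, struct-of-arrays style, so the update becomes one tight loop that streams through memory and splits trivially across cores by index range.

```cpp
#include <cstddef>
#include <vector>

// OO-style: one heterogeneous object per entity. The update strides over
// cold fields it never reads, wasting cache bandwidth.
struct Entity {
    float x, y;    // position
    float vx, vy;  // velocity
    // ... dozens of other fields the movement update never touches
};

void integrate_oo(std::vector<Entity>& es, float dt) {
    for (auto& e : es) { e.x += e.vx * dt; e.y += e.vy * dt; }
}

// Data-oriented: each hot field lives in its own contiguous array.
struct Positions  { std::vector<float> x, y; };
struct Velocities { std::vector<float> vx, vy; };

// The movement "game logic" becomes one vectorisable loop that a job
// system can hand out across cores as index ranges.
void integrate_dod(Positions& p, const Velocities& v, float dt) {
    for (std::size_t i = 0; i < p.x.size(); ++i) {
        p.x[i] += v.vx[i] * dt;
        p.y[i] += v.vy[i] * dt;
    }
}
```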
There are issues with Cell, but the parallel programming paradigm wasn't one of them; that was a developer problem, and only now, nearly 20 years later, are we starting to see a push back in that direction. It is sad to see UE5 unable to move in this direction just yet, but honestly, so many games are still stuck in that single-threaded loop.
Yea I guess the OO paradigm is what allows for the blueprints in UE. Easily accessible but at the cost of being very slow.

Honestly, Tim Sweeney has nobody to blame but himself for Unreal Engine's current slow gameplay framework. He swears by the OO paradigm: he thought garbage collection was a good idea and that transactional memory was the future of parallelism. The effort they invested in implementing a garbage collector could arguably have been better spent elsewhere in the engine. They did start working on a new data-oriented game logic framework, Mass; hopefully they've seen the light and it gets seen through to completion ...
Yea I guess the OO paradigm is what allows for the blueprints in UE. Easily accessible but at the cost of being very slow.
Sadly, we'll never know. There isn't going to be a future retro-demo scene pushing PS3s the way we've seen the C64, Spectrum, etc. pushed. There's no access to Cell hardware, and no interest in exploring it.
All I see is a visionary who isn't very good at explaining himself, meeting people who can't adapt to his ideas. That specific example your video ends on, about trees, is the one I remembered and mentioned here:
Those different algorithms have different levels of hardware utilisation. The point of the talk was to make devs aware of other choices they are ignoring by going with the status quo, highlighting that consideration of algorithms needs to look at the weak link in the processing, which is largely data transactions, not maths complexity, and so change the data to allow different algorithms that run faster. If different data can't run faster than a tree, use a tree; but why are you choosing a tree, what are the alternatives, and how do they perform by comparison? Was the tree chosen simply because that's standard practice and no-one questions it, as it makes sense to the developer?

I think we have to disagree here. What it basically boils down to is different data -> different algorithms, which is not very novel.
Those different algorithms have different levels of hardware utilisation. The point of the talk was to make devs aware of other choices they are ignoring by going with the status quo, highlighting that consideration of algorithms needs to look at the weak link in the processing, which is largely data transactions, not maths complexity, and so change the data to allow different algorithms that run faster.
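As a small, hedged illustration of "change the data, change the algorithm" (my example, not from the talk): a pointer-based tree and a sorted flat array can answer the same membership query in O(log n), but the flat array is contiguous, so the search touches far fewer cache lines. Which one wins depends on the workload and should be measured, which is exactly the question the talk wants asked.

```cpp
#include <algorithm>
#include <set>
#include <vector>

// Tree: one pointer chase (and likely cache miss) per level.
bool in_tree(const std::set<int>& s, int key) {
    return s.count(key) != 0;
}

// Same query over different data: a sorted, contiguous array.
bool in_flat(const std::vector<int>& sorted, int key) {
    return std::binary_search(sorted.begin(), sorted.end(), key);
}
```

And for small sets that fit in cache, a plain linear scan can beat both, which is the kind of alternative that never gets considered when the tree is chosen by default.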
Cell was ready, but the dev tools and documentation were not. I've programmed Cell in a large-scale server environment, and as with a lot of complex architectures (and what isn't these days?), everything hinges on good documentation and good tools. Sony eventually realised this, and it drove them to acquire SN Systems in 2005, but that was too late for games launching in the first couple of years.
Cell didn't work like any other processor design, and there were initially no realtime debugging tools at all. What you had was IBM's Cell emulator, which ran like a dog on every host platform, however expensive, because accurate Cell emulation means emulating the individual PPU and SPEs, the interactions across the internal Cell bus, and the two external memory buses in the PS3. That is why Cell remains a challenge to emulate some sixteen years later.
I spent a lot of time writing code for Cell in a massive server environment, and we had pretty good tools, but we had to rewrite all of them for Cell development from the ground up. It was almost like alien tech arriving: you had to solve the mystery of how it worked. Getting Cell code to run some calculations 20,000x faster than x86 servers was joyous, but man, it took so much damn effort.
Is there any theoretical reason you couldn't write a demo to run on RPCS3 before maybe testing on a PS3 debug kit? Speed shouldn't be an issue with a modern 8-16 core CPU either.
How so?

It may be a little early for such claims...
How so?