Sony talked about Cell and Cell based workstations

The big issue in next gen game development will be content creation, not coding. Even if PS3 is 10 times harder to code for than PS2, it won't make a difference from a budget standpoint. For starters, given the performance of next gen consoles, even games that only run at 10% of the hardware's capacity will look good enough. Furthermore, most game teams nowadays have only one or two engine programmers; the other tasks (game logic, scripting etc.) are abstracted from the hardware and won't become any harder.
Third, middleware is already used extensively this gen and will be pretty much the norm next gen, removing most of the complexity.
Finally, content creation will take so much time and money that engineers will be a small part of the overall budget. Just look at the movie industry: a feature film requires upward of 300 people to produce. Only a handful of them are programmers. Next gen game developer teams will be very similar in resource allocation.
 
Gubbi said:
It is exactly this mechanism I'd like to know more about. Are we talking explicitly setting up DMA with source, destination and range for every access, then waiting for it to complete (polling for completion?). That's a lot of overhead just to read one node in a tree.
Well, if you can occupy the channel for the duration of the tree scan you can simply loop the DMA chain into itself, and have it stall on every iteration. Then you only need to update the source addy, and release the stall on the channel every time you load. :p
(I actually did this on PS2 for calling my VU0 triangle collision micro-routine, except that in my case I could also double buffer transfers and the routine execution).

Ok but that's probably not a very general-purpose solution :oops:
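Fafalada's trick can be pictured with a small simulation in plain C: the chain's last tag loops back on itself and stalls the channel, and each iteration the CPU patches the source address and releases the stall. This is illustrative only — the struct fields and names here are made up, real PS2 DMAC tags and the stall mechanism are hardware-specific, and `memcpy` stands in for the actual transfer.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical descriptor for a self-looping DMA chain. */
typedef struct dma_tag {
    const void     *src;      /* source address, patched each iteration   */
    uint32_t        size;     /* transfer size in bytes (simplified)      */
    int             stalled;  /* channel waits here until released        */
    struct dma_tag *next;     /* last tag loops back to itself            */
} dma_tag;

/* One iteration: the channel is parked on the stalled tag; the CPU
 * updates the source address, releases the stall, the "transfer" runs,
 * and the channel loops around and stalls again. */
static void dma_run_iteration(dma_tag *tag, const void *new_src, void *dest)
{
    tag->src = new_src;               /* update the source addy        */
    tag->stalled = 0;                 /* release the stall             */
    memcpy(dest, tag->src, tag->size);/* memcpy plays the DMAC         */
    tag->stalled = 1;                 /* back on the loop tag, stalled */
}
```

The point of the trick is that the chain is set up once; per tree node the CPU only touches the source address and the stall, instead of building a fresh transfer every time.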

Guden said:
Can APUs actually access external memory without having to have the APU DMAC DMA it into their own RAM?
Not according to any of the patents we've seen so far.
 
I agree with what you said DeanoC, software is the challenge and IMHO one of the reasons why Sony/SCE added IBM to the list of major partners for PlayStation 3, well to the technology they wanted to use in PlayStation 3, and they did not just stay with Toshiba and Rambus.


Only if we retrain all the level designers (who aren't usually trained programmers, let alone low-level programmers), gameplay coders etc. to use 128K only with no random pointer walks. Similar problems will occur on Xenon due to massive cache miss costs. On PS3 it simply won't run, as they won't be able to access outside the 128K memory.

Well, they have more than 128 KB: you can split the program into chunks, or design a sort of paging system that virtualizes data and instruction memory for the APU.

Also, if we are pipelining APUs then we limit each step to 128 KB ( with the ability to DMA more data to/from the shared DRAM ), as each step would be running on a different APU.
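Such a paging system could be sketched as a direct-mapped software pager that copies fixed-size pages of "shared DRAM" into the 128 KB local store on demand. All names and sizes below are hypothetical, `memcpy` stands in for the APU DMAC, and a real implementation would also need write-back, alignment and asynchronous transfers:

```c
#include <stddef.h>
#include <string.h>

/* Illustrative software pager over a 128 KB local store. */
#define LS_SIZE    (128 * 1024)
#define PAGE_SIZE  4096
#define NUM_SLOTS  (LS_SIZE / PAGE_SIZE)

typedef struct {
    unsigned char ls[LS_SIZE];       /* the APU's local store            */
    const unsigned char *backing;    /* "shared DRAM" the data lives in  */
    size_t slot_page[NUM_SLOTS];     /* which page each slot holds       */
    int    slot_valid[NUM_SLOTS];
} ls_pager;

static void pager_init(ls_pager *p, const void *backing)
{
    memset(p->slot_valid, 0, sizeof p->slot_valid);
    p->backing = backing;
}

/* Translate a byte offset in DRAM to a pointer inside local store,
 * "DMA-ing" the containing page in on a miss.  Direct-mapped: a new
 * page simply overwrites whatever its slot held before. */
static void *pager_access(ls_pager *p, size_t offset)
{
    size_t page = offset / PAGE_SIZE;
    size_t slot = page % NUM_SLOTS;
    if (!p->slot_valid[slot] || p->slot_page[slot] != page) {
        memcpy(p->ls + slot * PAGE_SIZE,                 /* DMA stand-in */
               p->backing + page * PAGE_SIZE, PAGE_SIZE);
        p->slot_page[slot]  = page;
        p->slot_valid[slot] = 1;
    }
    return p->ls + slot * PAGE_SIZE + offset % PAGE_SIZE;
}
```

The cost Gubbi worries about shows up here directly: every miss is a full page transfer, so the scheme only pays off when accesses have some locality.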

A few years ago Sony said only 4 developers could make games for PS2; we managed to disprove that prediction. Does anybody want a world where the only developers are EA, Sony/Enix, Nintendo and SEGA making sequels to their franchises?

You are right and I agree with you that even though there are exciting technical challenges what people want is to have tons of great looking games and for that to happen we need to help developers produce content faster.

I think that with PlayStation 3 some efficiency will be lost: I do not think that, as a game platform, PlayStation 3 is seeking pure performance per se, pushing the highest polygon figures.

No matter how we put it, what programmers want is speed and the ability to optimize ( well, at least a good bunch of them like the challenge, as you yourself said ) while the producers/artists want an abstracted environment where they can control content production/creation without having to know in depth about the console's inner workings, besides some tips from the programmer ( "learn how to use CLUT and good quantization programs or I will kill you", that kind of thing ;) ).

I know the battle will be about content, and what the hardware and software are trying to do next-generation is two-fold in regards to performance: they have to set the bar high to allow noticeable improvements in graphics, and they have to set it even higher to allow designers/artists to create the content that will showcase the jump in graphical power between the old generation of consoles and the new one.

CTO : Not yet. But we've settled the basic policy and discussed with some vendors. Contrary to PS2, we will provide a library with dedicated support, like PSP.

This shows a change in focus from Sony itself ( latest patents from IBM's Gschwind seemed to be focused on solving programming aspects of CELL rather than just go on and on about hardware enhancements ).

Any abstraction layer in practical terms means leaving some performance behind and that is why they have to push for such high performance goals in the first place.

Much faster hardware would help designers in level creation and model design, as it would allow them to avoid the lengthy mesh optimization work they have to do in order to fit the hardware's processing limits: do you think the GT4 designers decided to optimize their 4,000-6,000 polygon car models to look like they had 10,000-15,000 polygons ( the cars in GT4 look quite comparable to games like WRC 4, RSC 2, PGR 2, etc... which use many more polygons per car ) just for fun ?

Of course not.

I know that given very high expectations modelling time does increase as you try to model everything in minute detail ( doing Gollum for LOTR: TTT and LOTR: ROTK did not take 1 or 2 days, but much more time and money ), but how can Sony or Microsoft help developers reduce their workload ? Beyond allowing very easy integration of your models into your game engine directly from your 3D Studio MAX or SoftImage 3D rendering package, I do not know what Sony and Microsoft can do to help artists create more and better content.

I think Sony knows that middleware will be heavily utilized in the next generation, and I take the comment I quoted to say "we do not plan for you to code to the metal, we will give you high-level libraries ( edit: please Sony, give them a nice REYES-compatible renderer in the SDK :D I am such a broken record :) ), and do not worry, the performance increase will be high enough to support the middleware and the high-level abstraction and to give you the rendering power you need".

On PlayStation 2 they allowed the developer to really go deep and close to the metal, and in some cases they forced them to do things that developers were not even used to doing anymore, like clipping in software ( as it was inefficient to put in hardware and the developer would still be able to extract great performance by doing it themselves ).

PlayStation 2 seems to me to have been created with the idea "we will not put high-level roadblocks in your way, so you can achieve close to peak performance without depending on us to constantly improve our libraries".

The tune changed with PSP, and with PlayStation 3 I think we can expect a development environment closer to PSP and PSOne than to PlayStation 2.

To make my idea more concise, it seems that their goal has been "imagine the performance of the next generation of consoles... now imagine that performance achieved with high-level libraries and as little low-level management as possible".

That kind of idea takes much more power under the hood than it would take to build hardware that achieved those performance goals only when programmers put in lots of low-level optimization work ( which was still a feasible way of doing things with PlayStation 2, but would not work any longer for future generations of consoles ): in this sense the design approach behind the PlayStation 2 strategy, from hardware and programming perspectives, cannot simply be recycled.

To me it is telling that they had all the plans about EE 2 + GS 2, EE 3 + GS 3, their line of Creative Workstations and the GSCube project.

It tells me of an approach that they tried internally, that they studied and analyzed, and they saw it was just "not enough", that evolving the PlayStation 2 approach was not enough.

They started to realize, probably while working on GSCube, that parallel processing was the key to high performance, but even with their technical know-how they might have needed the help of someone with experience in that field, someone who could have allowed them to achieve even more performance than they could by themselves at the time.

With GSCube the question "how do we get a performance leap ?" was answered, but then inside Sony people asked themselves "how are people going to use this performance leap in meaningful ways ?", and I think the feedback from the use of GSCube in the professional rendering industry helped ( DreamWorks, Squaresoft and others were able to work with the GSCube ).

I think that, at that point, SCE knew that more high-level abstraction work had to be done to help developers, and that in order to reach the performance goal they had in mind they also needed a further boost in available hardware processing power.

Sony/SCE and Toshiba needed the help of someone like IBM, who knows how to design parallel architectures ( I think Sony started hearing nice things about the BlueGene projects and got very interested in them ) and who has important information to help Sony/SCE and Toshiba develop new manufacturing processes to put such architectural concepts into silicon.

When I say "designing parallel architectures" I am not restricting myself to the Hardware/System Architecture level, but thinking broadly, about things such as development environments ( IDEs ), compilers, OSes, programming libraries, debuggers, programming languages and various other tools.

IBM could provide that to Sony/SCE and Toshiba and that is why the joint-venture was started IMHO ( there are other things such as IBM's aim for the CE market, but that is the topic for another discussion ).
 
Hey Deano, if you're so against Microsoft's and Sony's intent to increase console performance (which is the reason behind the concurrent nature), then why don't you just treat the XB2/PS3 as a single-core CPU with a GPU that can run shaders and call it a day?

Because that's what you're really asking for with your demands. Process technology is the bound, and without going parallel as they are, you won't extract more performance at any snapshot, N, in time.

So, seriously, just forget about the parallel parts of the architecture and treat it like a 3.5 or 4GHz PPC with a GPU that can run shaders. And, as for the marketplace and the inevitable repercussions, well, it was your desire and choice I suppose. Because I don't see why the console vendors shouldn't provide the best product they can for my money. No offense, but IMHO the marketplace will be doing us a favor - either you alter your content creation and managing of content (perhaps analogous to how movie studios rent actors, sets, and equipment) as many did with game engine creation, or you die at the feet of those who did; just as has happened in almost every other industry known to man since the birth of the Industrial Revolution.
 
OK my 2c for what it's worth.

A system with a shared L2 cache and an array of identical general-purpose processors is considerably easier to deal with than a system with a lot of small memory pools.

FWIW I think you end up implementing medium-grain parallelism on both of them in the same way, albeit with the extra difficulty of scheduling code uploads on a Cell-like system.

While Cell is architecturally very similar to the PS2's design, Sony and IBM seem to have at least thought through some of the issues that plagued the PS2's architecture: the APUs can actually DMA from main memory and have decent integer support.
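The usual way to make that DMA capability pay off is double buffering: compute on one local-store chunk while the next one streams in. A minimal sketch, with `memcpy` playing the DMAC (on real hardware the fetch and the compute would genuinely overlap, which is the whole point):

```c
#include <stddef.h>
#include <string.h>

#define CHUNK 256   /* floats per chunk, kept small for the example */

/* Sum a stream of n_chunks * CHUNK floats, double-buffered: while the
 * "DMAC" (memcpy here) fills one buffer, we work on the other. */
static float process_stream(const float *src, size_t n_chunks)
{
    float buf[2][CHUNK];     /* the two local-store buffers */
    float sum = 0.0f;
    int cur = 0;

    memcpy(buf[cur], src, sizeof buf[cur]);          /* prime buffer 0 */
    for (size_t c = 0; c < n_chunks; c++) {
        int nxt = cur ^ 1;
        if (c + 1 < n_chunks)                        /* kick next fetch */
            memcpy(buf[nxt], src + (c + 1) * CHUNK, sizeof buf[nxt]);
        for (int i = 0; i < CHUNK; i++)              /* compute on cur  */
            sum += buf[cur][i];
        cur = nxt;                                   /* swap buffers    */
    }
    return sum;
}
```

This is the same structure Fafalada mentions using on PS2 for VU0 work: as long as the transfer for chunk N+1 runs while chunk N is being processed, the DMA latency mostly disappears from the critical path.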

What worries me more about CELL is the fact that if the main CPU is weak (or crippled), to get any sort of reasonable performance out of it you will have to write code in a very specific way.

All those old codebases aren't just going to get magically replaced. Given the Lead time on a launch title, the codebases will pretty much have to evolve, rather than be rewritten. I have enough problems trying to explain to other programmers on my projects why what they are doing will be slow on PS2, PS3 has the potential to be much more complex.

When I first started in this industry technology was everything; I've been saying for a few years now that technology and programming are really just support roles. Games look good because of the artists and they play well because of the designers. Technology is about facilitation; if Sony can't make development on PS3 comparable to development on competing platforms, it will hurt them.
 
ERP said:
All those old codebases aren't just going to get magically replaced. Given the Lead time on a launch title, the codebases will pretty much have to evolve, rather than be rewritten. I have enough problems trying to explain to other programmers on my projects why what they are doing will be slow on PS2, PS3 has the potential to be much more complex.
Exactly, it's not about the 'old timers' (BTW anybody who's been in the business longer than 1 console generation is an old timer, not many people survive >3-5 years...) like ERP and I :) We are used to packing DMA chains and writing game logic in ASM :oops:

It's the guys who have grown up on PC-like architectures that are going to get the shock. Today there's nothing like seeing the newbie who can wrap DirectX around their little finger meet the PS2 for the first time :devilish:

It's not that the new machines will be too hard, just harder, and that will probably mean the release titles aren't that impressive, that there will be a long evolution as we get to grips with them. In some ways it will be nice to see titles gradually getting better over time (like PS2), and if Cell is all it's cracked up to be then PS4 should just be extra Cells, which will hopefully mean no more relearning.

Edit:
Anyway, the original point was that Cell isn't going to be a drop-in replacement for Wintel in the workstation market. It's going to be hard in game development, where it will be worth it (probably), but in the 'normal' market I'm not convinced.
 
I still hope they get together with Apple somehow. A Cell card in my G5 (or G6) for all my rendering and Photoshop jobs would be nice, but I agree with you that the normal office user will not at all need that much power, and for gamers, I think it would be a bad idea for Sony to deliver them the power. (Ok, Longhorn looks like it will eat all that power on its own ;) )

Fredi
 
It's the guys who have grown up on PC-like architectures that are going to get the shock. Today there's nothing like seeing the newbie who can wrap DirectX around their little finger meet the PS2 for the first time

Been there, done that, enjoyed the popcorn... :p
 
archie4oz said:
It's the guys who have grown up on PC-like architectures that are going to get the shock. Today there's nothing like seeing the newbie who can wrap DirectX around their little finger meet the PS2 for the first time

Been there, done that, enjoyed the popcorn... :p

Must have been one of those times you were printing the T-shirts too :p.
 
Guden Oden said:
Deano,

What you're saying basically sounds as if programmers aren't interested in doing their job, they just want to lean back, relax and have the money come pouring into their pockets all by itself.

Certainly it's NOT too much to ask that programmers learn about the hardware they're writing code for! That's why Japanese developers typically kick US coders' asses when it comes to slickness, especially on "weird" hardware such as PS2; because they're willing to get down with it.

US coders seem to be stuck in their lazy-ass rear-wheel-drive-automatic-transmission way of thinking even when out on the race track, unable to understand why they're getting outclassed by manual transmission 4WD drivers. What, you mean it's not possible to win by just steering and stepping on the gas pedal?! You gotta be kidding me! :p

Ok, so I'm generalizing, but when listening to PC dev-people like Sweeney and Carmack, that's certainly the way they come off; they're so obsessed with their own ease and comfort when coding. So programming for a console requires WORK and THINKING. Tough shit pal, if that's too much to ask of you maybe you should run home to mama instead, or switch careers and apply for a job with MS over in Redmond, I'm sure nobody will ask you to do any hard optimization-work over there! :devilish:

;)
This is one of the more ridiculous posts I've read in a while. A game programmer's job is to create a great game. Of course some of that time will involve performance optimizations, but I don't blame many developers for preferring to spend time on game logic instead of graphics. It doesn't mean they're lazy and just interested in money.
 
Mfa said:
As long as a processor can have enough outstanding prefetches/DMA-requests you can do vertical multithreading in software.

So now you're explicitly doing vertical multithreading on multiple cores. So I take it you agree with me that this will be a bitch to program :)


Fafalada said:
Guden said:
Can APUs actually access external memory without having to have the APU DMAC DMA it into their own RAM?
Not according to any of the patents we've seen so far.

If that is the case, then you'll need to lock down the entire quad tree in eDRAM to do collision tests, because any given APU doing collision checks *might* touch any given node. I had hoped that CELL could use some of the eDRAM as demand-loaded cache/RAM.

Cheers
Gubbi
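The demand-loaded cache Gubbi is hoping for can at least be approximated in software: keep the tree in shared memory and pull nodes into a small direct-mapped local cache as the traversal touches them. A hypothetical sketch (`memcpy` stands in for the DMA, and the node layout is invented for the example; a real collision APU would also want to batch and prefetch):

```c
#include <string.h>

/* A toy 1-D kd-tree node living in "eDRAM"/shared memory. */
typedef struct {
    float split;          /* splitting plane for this node       */
    int   left, right;    /* child indices, -1 marks a leaf      */
} knode;

#define CACHE_SLOTS 8

typedef struct {
    const knode *edram;           /* full tree in shared memory   */
    knode  slots[CACHE_SLOTS];    /* local copies                 */
    int    tag[CACHE_SLOTS];      /* which node each slot holds   */
    int    misses;                /* makes the DMA traffic visible */
} node_cache;

static void cache_init(node_cache *c, const knode *tree)
{
    c->edram = tree;
    c->misses = 0;
    for (int i = 0; i < CACHE_SLOTS; i++)
        c->tag[i] = -1;
}

/* Fetch node idx through the cache, "DMA-ing" it in on a miss. */
static const knode *cache_get(node_cache *c, int idx)
{
    int slot = idx % CACHE_SLOTS;
    if (c->tag[slot] != idx) {
        memcpy(&c->slots[slot], &c->edram[idx], sizeof(knode));
        c->tag[slot] = idx;
        c->misses++;
    }
    return &c->slots[slot];
}

/* Walk down to the leaf containing x, touching only cached copies. */
static int find_leaf(node_cache *c, float x)
{
    int idx = 0;
    for (;;) {
        const knode *n = cache_get(c, idx);
        if (n->left < 0)
            return idx;                       /* reached a leaf */
        idx = (x < n->split) ? n->left : n->right;
    }
}
```

The per-node DMA overhead that worries Gubbi only gets paid on misses here; repeated queries through the upper levels of the tree hit the cache, which is exactly why a demand-loaded scheme beats locking the whole tree down when the working set fits.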
 
Guden Oden said:
I think the 100+ team of highly experienced engineers that designed this micro-architecture already thought of these issues - and more besides. Don't you?

In light of Intel's recent decision to trash the future P4 cores...

I think the 100+ team of highly experienced engineers that designed the [Pentium 4 Tejas/Jayhawk] micro-architecture already thought of these issues [with heat, current leakage, excessive power consumption, and pipeline stalling] - and more besides. Don't you?

Just trying to point out that even large teams of experienced and extremely competent engineers occasionally make billion dollar mistakes... ;)
 
Difficulty in programming can be overcome ... to whomever can do it go the spoils, and the rest can stick with middleware.
 
Panajev2001a said:
I agree with what you said DeanoC, software is the challenge and IMHO one of the reasons why Sony/SCE added IBM to the list of major partners for PlayStation 3, well to the technology they wanted to use in PlayStation 3, and they did not just stay with Toshiba and Rambus.

[...]

Just in case nobody has read this yet: hey it took time to write :(.

:).
 
I did read it, Panajev. And I think that CELL/BE will make a wonderful vertex engine; I'm confident that Sony will deliver a true graphics API this time.

I'm more concerned with the core game engine, in particular physics.

Cheers
Gubbi
 