Speculation on PS3 dev kits

n00body

Newcomer
Based on the companies who sony has partnered with during the PS3's development, I wonder what sorts of tools they would have access to for a good dev kit. The way I see it, any of the following could easily be included as powerful additions:

Eclipse (IBM)
FX Composer (NVIDIA)
OpenGL ES (Khronos)
OpenML (Khronos)
OpenVG (Khronos)
OpenMAX (Khronos)
any DCC program (because of COLLADA)

Does anyone have any comments on my list or possible additions (keeping within the range of companies that Sony has confirmed partnerships with)?
 
I highly recommend the "Cell for Dummies guide" and "So You Want to Make a Cell Program...interactive CDROM tour"

;) :p (This was not to imply that game programmers are dummies. I just thought it would be funny if they already had one of these sort of yellow and black books out, since they exist for just about everything else.)
 
[Attached images: kaigai002.jpg through kaigai007.jpg]


Next gen, PS3 devs will be forced to program in '1s' and '0s'! ;)
 
Jaws said:
Next gen, PS3 devs will be forced to program in '1s' and '0s'! ;)

I laugh when people brag about "going to the metal" programming in assembly language, or joke about programming in machine language (The Story of Mel, the Real Programmer). There are processors on the PS2 that you have to program without the benefit of any language whatsoever. To cut costs, Sony made several chips that don't waste transistors on such niceties as "instruction decoders": they don't have instructions! You manipulate them by stuffing data straight into their registers. The best example is the register-only API of the Graphics Synthesizer.

The only problem is that you don't actually access the registers directly. Instead, you interleave your data into an instruction stream for another processor that exists solely to rearrange the bytes on the way to the registers. In fact, you don't even access that processor directly. To get data there, you feed the instruction-data stream, interleaved with a different instruction set, into the DMA processor; and that's if you want to be stupidly slow about it. To be fast, you have to direct the instruction-data mix at the vector unit for intermediate processing, and use the extremely limited integer and logic capabilities of the vector processor to set up the rearranging processor so that the data reaches the registers of the rasterizer in the right format. Of course, to do that you also have to hand-write Very Long Instruction Word (VLIW) assembly for the vector unit, and further interleave into the data-instruction gumbo yet another instruction set for the byte-unpacking processor that sits between the DMA processor and the vector unit... After going through all that, when you are finally ready to actually draw something, you reach the part of the translated documentation that explains that the third time you set the XYZ2 register, the GS will rasterize a triangle.
When you realize that this is the interface you have to use to create every polygon in your game, and that there are no commercial-quality libraries provided to make it any easier: that's when you discover whether or not you are a Real Programmer.
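The bottom of the path corysama describes is the 128-bit "GIF tag" header that routes the data following it into GS registers. A minimal sketch of packing one, in C; the field layout follows the publicly documented GS format, but treat the exact bit offsets and names here as illustrative rather than authoritative:

```c
#include <assert.h>
#include <stdint.h>

/* A GIF tag is a 128-bit header telling the GS's GIF interface how to
 * route the data that follows into GS registers.  Field positions follow
 * the publicly documented GS format; treat them as illustrative. */
typedef struct { uint64_t lo, hi; } GifTag;

enum { GIF_FLG_PACKED = 0 };                     /* PACKED transfer mode */
enum { GS_REG_PRIM = 0x0, GS_REG_XYZ2 = 0x5 };   /* register descriptors */

static GifTag gif_tag(unsigned nloop, int eop, unsigned nreg,
                      const uint8_t *regs)
{
    GifTag t;
    t.lo = (uint64_t)(nloop & 0x7fff)            /* NLOOP: loop count    */
         | ((uint64_t)(eop & 1)      << 15)      /* EOP: end of packet   */
         | ((uint64_t)GIF_FLG_PACKED << 58)      /* FLG: data format     */
         | ((uint64_t)(nreg & 0xf)   << 60);     /* NREG: regs per loop  */
    t.hi = 0;
    for (unsigned i = 0; i < nreg; i++)          /* REGS: 4 bits each    */
        t.hi |= (uint64_t)(regs[i] & 0xf) << (i * 4);
    return t;
}
```

On the real machine, a quadword like this would be built in VU memory (or by the CPU) and sent through the DMA/VIF path described above; here it is just bit-packing.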


Sorry about the rant. It's the steam built up from years of swimming in this crap while listening to "developers complain about the PS2 because they are lazy." It's all manageable once you decipher the extremely terse documentation. It just takes ten times as long to achieve a fraction of the capabilities of the Xbox. My company has shipped games on the PC and every console since the PS1, but it seems like the vast majority of the time and pain has been spent pounding our heads against the PS2. Not that I'm bitter...
 
Interesting post corysama, thanks!

All I can say is that MS and Nintendo have both issued statements declaring their intentions to make development on next-gen systems easier and cheaper. Anything that lets game developers spend more time on the game and on making it FUN, the better :) All the consumer cares about is how good the game is, and since time is money it is important for developers to spend their time making the game good instead of fighting the machine.

With the increase in power I would think that developers could spend more time on the creative aspects and not have to try to squeeze every ounce out of the machine. Oh well, that is utopia... graphics are an easy sell, but great games last forever :)
 
After reading your post, corysama, I had a Back to the Future feeling, as it felt like 2000-2001 all over again.
Do you mean that things haven't changed a bit since the early days of the PS2 from a Sony-developer relationship point of view? Have things gotten easier just because developers now know what to do, while Sony hasn't really helped them properly?

If that is the case, then Sony can only do better this time around...
 
To be honest, I think corysama is exaggerating just a teeny tiny bit.

To suggest that Sony saves transistors by not putting an "instruction decoder" into the GS seems a bit odd. What kind of instructions would you expect for a rasteriser chip? The commands I'd expect would be very simple anyway (draw primitives, set textures, change blend-mode) and would probably map directly to the register set.

You talk like the register settings to draw polygons are some kind of arcane mess. In reality you send a simple header which says what kind of primitive you're drawing, and how many, and then you send the vertices... what were you wanting exactly? A "draw car" function? You don't need massive amounts of bit-manipulation functions on the VU precisely *because* you have the VIF to take nice, ordinary, easy-to-output data, and pack it up for sending to the GS. You don't need to worry much about how it's getting packed because it's all done for you, unless you want to pre-process some of it yourself (which wouldn't be done on the VU anyway).

Besides which, the GIF and VIF aren't "processors". They're interfaces. The GIF just has simple gating to merge a potential 3 streams of data together, and the VIF is pretty much a device for unpacking data and controlling the VU. They're just there as a convenience to control these low-level, language-less processors and give you exactly the kind of thing you're moaning about not having - a programmable instruction stream.

The programming model for the PS2 is one where you send lists of instructions (yes, like you have in a programming language) to co-processors to execute. These lists are sometimes built by the CPU itself, and sometimes built offline in tools.

This is no different to any other architecture, other than the fact that other architectures hide some of this behind a "driver". But you still generally compile lists of vertices, bits of shader code, and kick them off to external units.
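The "lists of instructions sent to co-processors" model described above is the same command-buffer shape every driver hides under an API. A toy, hardware-neutral sketch in C; all the names and opcodes here are invented for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy command buffer: the CPU appends commands, then "kicks" the whole
 * list to a consumer in one go -- the same shape as a DMA chain sent to
 * the VIF/GIF, or a command buffer handed to a GPU driver.  All names
 * here are invented for illustration. */
enum Op { OP_SET_COLOR, OP_DRAW_TRI, OP_END };

typedef struct { enum Op op; uint32_t arg; } Cmd;
typedef struct { Cmd cmds[64]; size_t n; } CmdList;

static void emit(CmdList *l, enum Op op, uint32_t arg)
{
    l->cmds[l->n++] = (Cmd){ op, arg };
}

/* Stand-in for the co-processor: walks the list, returns triangles drawn. */
static uint32_t kick(const CmdList *l)
{
    uint32_t tris = 0;
    for (size_t i = 0; i < l->n; i++) {
        switch (l->cmds[i].op) {
        case OP_SET_COLOR: /* state change, nothing to count */ break;
        case OP_DRAW_TRI:  tris += l->cmds[i].arg; break; /* arg = count */
        case OP_END:       return tris;
        }
    }
    return tris;
}
```

Whether the list is interpreted by a driver, a DMA controller, or a register-unpacking interface is an implementation detail; the programming model is the same.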

The VU is certainly not a VLIW processor by any serious definition of the phrase. It has relatively high level instructions which manage a multi-stage pipeline. If it were truly VLIW you would be setting separate instructions for each stage of that pipeline, each cycle. *That* would be hideous. Other than a few annoying omissions from the instruction set, and a slightly restrictive memory size, it's a fairly easy thing to program. The only bit I consider a serious f**k-up is the data-path from VU0, which prevents it being very useful - but even then Sony have released samples showing how to use it very efficiently for a few tasks. Now, using VU0 properly might be considered a bit of an art-form, but it's not remotely essential for most titles.

There are Sony supported tools to make programming the VUs easier, and there was even a 3rd party making a C compiler for it. But frankly you'd have to be insane to use C on the VU when the asm is perfectly decent.

It's not like the bulk of your game has to be written in hex and entered on punch cards or something. The vast majority will be written in C/C++ in a modern development environment (ok, the "official" SDK uses GCC, but it's relatively easy to plug that into whatever environment you normally use).

Only a few core programmers on any team will have to mess around at the really low-level, and there's a wealth of sample code and helper libraries in the SDK and on the official developer website.

Of course the documentation is translated. Do you expect Japanese engineers to write documentation directly in your own native language? Should it be written from scratch in each language by teams of bilingual engineers? I don't really think that would reduce the small number of translation errors (which get fixed anyway), and it would take a lot longer...

Yes, it's probably the most complex and tricky of the "current" generation of machines. It's certainly not the worst ever (I hated the restrictions of the original PS1 libs more). In fact, when I first got my hands on a devkit, the next six months were probably the most I'd enjoyed programming in 10 to 15 years.

Naturally I still hope Sony do better next time regarding giving us higher-level libraries, but let's not continue to perpetuate the myth of the "impossible" PS2...

I have no problem with people having a bitch about programming the thing - I've sworn blue at my devkit on numerous occasions - but in my experience those who try to make it sound worse than it is are generally those who are trying to make themselves sound better...

IMO "real programmers" aren't those who wrestle with the PS2 and manage to tame it, all the while complaining how much easier other machines are - "real programmers" are the ones who love it for all its flaws and make it sing and dance.
 
megateto said:
After reading your post, corysama, I had a Back to the Future feeling, as it felt like 2000-2001 all over again.
Do you mean that things haven't changed a bit since the early days of the PS2 from a Sony-developer relationship point of view? Have things gotten easier just because developers now know what to do, while Sony hasn't really helped them properly?

If that is the case, then Sony can only do better this time around...

If you ask me (which you didn't, but I won't let that stop me...) things have gotten better. The developer support guys do a decent job of.. er.. support, and there's been a move towards producing more "developer friendly" sample code (i.e. stuff that could be plugged more directly into a game). I imagine that bodes well for future platforms if the trend continues. It certainly shows that at least some people within Sony understand what developers want/need.

The big developer community which now exists, mostly with a lot of PS2 knowledge, provides a good backup method of support too.

Also, developments like the PS2 PA have helped people understand, and get the best out of, the PS2 architecture. In fact I think it's likely that these days PS2 developers understand an awful lot more about the architecture they're programming for than programmers on other platforms do about their own. While some think that's unnecessary (and they have a point), I also think it's ultimately a good thing, and it results in people pushing the platform further than you see elsewhere.

It's definitely moving in the right direction - whether it's moving fast enough, only time will tell.
 
MrWibble said:
You don't need massive amounts of bit-manipulation functions on the VU precisely *because* you have the VIF to take nice, ordinary, easy-to-output data, and pack it up for sending to the GS.

No, no, no... the VIF is no excuse... I like the VUs and all, but the instruction set would benefit from improvement (the VU was good for 1999-2000, not for a 2005-2006 streaming processor like the SPUs/APUs). The code you write for the VUs would benefit from more logic operations and more integer and FP instructions (wait... if I divide, I can only save into the Q register? Ok... :(). Not to mention VU instructions that are not even commented on in the official docs, or other bugs (like how far from the XGKICK to put the [e] bit). Would you like to manipulate the GIF tag in VU data memory easily? It would be nice and easy if you could access all the bits you need using the VU's integer instruction set.

VCL is a BIG help, but then one of the best things even the author of VCL dreamt about (a macro to write GIF tags) still cannot be realized with it...

Still, I like the VUs, and getting around the issues is a bit masochistic, but it can also be fun and rewarding once you've gotten past them.

This is no different to any other architecture, other than the fact that other architectures hide some of this behind a "driver". But you still generally compile lists of vertices, bits of shader code, and kick them off to external units.

If you can efficiently hide it behind a driver... you know... do it. I am glad they have given lots of flexibility, but in some cases they exaggerated. People manage, but if the GS could do full 3D clipping, I would hear no weeping among the development community (unless it came at the cost of ultra-crippled GS performance, but it does not have to be that way).

The VU is certainly not a VLIW processor by any serious definition of the phrase. It has relatively high level instructions which manage a multi-stage pipeline. If it were truly VLIW you would be setting separate instructions for each stage of that pipeline, each cycle.

What? For each stage of the pipeline?!? That is not the requirement for a VLIW processor: at least not in anything I have seriously seen applied. What are these real VLIW processors?

You are taking the idea behind VLIW a bit too far: VLIW moves all the scheduling of instructions back to the compiler (as much as possible), along with a lot of the conditional branch prediction (if-conversion in IA-64), and schedules bundles of instructions, VLIWs, that contain an instruction for each functional unit plus information to help the CPU schedule the instructions within a bundle, and across bundles, with as little logic as possible.

What modern VLIW processor makes you write code for fetch stages, decode stages, issue stages, execution stages and retire stages?

http://www.hotchips.org/archive/hc11/hc11pres_pdf/hc99.t2.s1.IA64tut.pdf
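The point being argued above is that a VLIW machine issues one compiler-scheduled bundle per cycle, with one slot per functional unit - not one instruction per pipeline stage. A toy two-slot model in C makes the bundle idea concrete; this machine is entirely invented for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Toy 2-issue VLIW: each "long instruction" carries one slot for the ALU
 * and one for the load/store unit.  The compiler (here, us) has already
 * scheduled the slots so both can issue in the same cycle.  This machine
 * is entirely invented for illustration. */
enum AluOp { ALU_NOP, ALU_ADD };       /* reg[0] = reg[0] + reg[1] */
enum MemOp { MEM_NOP, MEM_LOAD };      /* reg[1] = mem[addr]       */

typedef struct { enum AluOp alu; enum MemOp mem; uint8_t addr; } Bundle;

static int32_t run(const Bundle *prog, int n, const int32_t *mem)
{
    int32_t reg[2] = {0, 0};
    for (int i = 0; i < n; i++) {           /* one bundle per "cycle"   */
        /* both slots read pre-cycle state, then write back together   */
        int32_t alu_out = reg[0] + reg[1];
        int32_t mem_out = mem[prog[i].addr];
        if (prog[i].alu == ALU_ADD)  reg[0] = alu_out;
        if (prog[i].mem == MEM_LOAD) reg[1] = mem_out;
    }
    return reg[0];
}
```

The compiler's job on such a machine is filling both slots of every bundle; the hardware never re-orders anything. That is what the VU does *not* demand of you.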


It's not like the bulk of your game has to be written in hex and entered on punch cards or something. The vast majority will be written in C/C++ in a modern development environment (ok, the "official" SDK uses GCC, but it's relatively easy to plug that into whatever environment you normally use).

Only a few core programmers on any team will have to mess around at the really low-level, and there's a wealth of sample code and helper libraries in the SDK and on the official developer website.

Those few core programmers HATE that job: lots of them would gladly renounce PS 3.xx if they could have good general-purpose performance that allowed the use of C/C++ without having to re-write a good amount of code into CPU-core-friendly ASM.

GCC for the R5900i is hardly a work of art, but the R5900's performance even with a better compiler would hardly be optimal, as compilers generally ignore (or cannot handle) the SPRAM, and if you rely on GCC you have to live with a nice 8 KB data cache.
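The SPRAM point above is that the EE's fast on-chip scratchpad is never used automatically by the compiler: you stage data through it by hand. A rough sketch of that pattern in C; the 16 KB size matches the EE's scratchpad, but the buffer, function names, and plain memcpy standing in for DMA are all illustrative assumptions:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* The EE's 16 KB scratchpad (SPRAM) is fast local memory the compiler
 * won't use for you: copy a chunk in, work on it, copy results out.
 * Here a static buffer stands in for SPRAM; on real hardware the copies
 * would be DMA transfers.  Names are invented for illustration. */
#define SPRAM_SIZE (16 * 1024)
static float spram[SPRAM_SIZE / sizeof(float)];

static void scale_through_spram(float *data, size_t count, float k)
{
    const size_t chunk = SPRAM_SIZE / sizeof(float);
    for (size_t base = 0; base < count; base += chunk) {
        size_t n = count - base < chunk ? count - base : chunk;
        memcpy(spram, data + base, n * sizeof(float));  /* "DMA in"  */
        for (size_t i = 0; i < n; i++)                  /* hot loop  */
            spram[i] *= k;
        memcpy(data + base, spram, n * sizeof(float));  /* "DMA out" */
    }
}
```

In a real title the next chunk's "DMA in" would overlap the current chunk's compute (double buffering); this single-buffered version just shows why none of it comes for free from the compiler.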
 
MrWibble, thanks for that lovely post, you mirrored most of my thoughts, and saved me a lot of time typing 8)

Pana, from what I recall the issues you mention are documented in the official docs I have - but then I doubt the Linux kit shipped with the most up-to-date versions.

Panajev said:
The code itself that you write for the VU's would benefit of more logic operations and more integer and FP instructions
The only integer instructions I actually missed on the VU were shift, xor and not. Of course, it would also be nice to have 32-bit integer registers instead of 16-bit.
The one really missing FPU instruction: an explicit dot product.

VCL is a BIG help, but then one of the best things even the author of VCL dreamt about (a macro to write GIF tags) still cannot be realized with it...
Not general purpose GIF tags, but I wrote one for specialized purposes, and it's quite handy.

If you can efficiently hide it behind a driver... you know... do it. I am glad they have given lots of flexibility, but in some cases they exaggerated. People manage, but if the GS could do full 3D clipping, I would hear no weeping among the development community
Clipping omission was a transistor saving, not flexibility. And if the GS accepted H-space coordinates and did its own clipping, that would simplify the VU framework of the vast majority of titles by quite a lot - so obviously it'd be nice to have it.

Those few core programmers HATE that job:
Hey now, vectorizing the C version of a Cholesky decomposition and writing it in VU0 microcode can be fun too! :oops:
 
(Just to continue the theme of PS2 developer mutual appreciation, I agree with Faf :) )

Panajev2001a said:
What? For each stage of the pipeline?!? That is not the requirement for a VLIW processor: at least not in anything I have seriously seen applied. What are these real VLIW processors?

You are taking the idea behind VLIW a bit too far: VLIW moves all the scheduling of instructions back to the compiler (as much as possible), along with a lot of the conditional branch prediction (if-conversion in IA-64), and schedules bundles of instructions, VLIWs, that contain an instruction for each functional unit plus information to help the CPU schedule the instructions within a bundle, and across bundles, with as little logic as possible.

Ok, so I may have taken the definition a little far and been guilty of my own little exaggeration there - but my point is still valid. The VU is not a VLIW processor. It is, for the most part, quite easy to program for. The Q register never bothered me - I'd rather have that than some messy pipeline or extra register hazards / stalling.

And on your last point - I was one of those core programmers, and I certainly did not hate the job... if anything, now that I have moved on to deal with higher-level libraries, I miss it.
 
i guess for someone who has 0 lines of code run on the ps2 and whose whole familiarity with the PS2 comes from the programmer's guide, i should stay out of this discussion, but i just feel the tickling urge to give the outsider's view on the console-generation-old question 'is the ps2 a bitch to code for?', which view could or could not bear any relevance to reality(tm):

for a down-to-the-metal platform, the ps2 gives me the impression of a very clean hw interface design - the various interface units provide an almost IMR-graphics-API level of abstraction. which i'd say compares favorably with at least some of the rest of the 3d hw of the ps2 generation ("pass bits 0-23 of the n-th vertex attribute's gradient in reg x, pass the remaining bits 24-27 in reg y, wait 5 cycles, repeat for next vertex"). IMO many of the ps2 hw interfaces were designed to be used directly at the app level, i.e. they were designed user-friendly, hence sony did not feel the urge to release much of a HAL for them. alas, this can (and does) create some discomfort in many devs on their first encounter with the platform: if you're used to getting the bulk of your rasteriser work done by invoking API calls, it can feel a bit weird to have to talk to some hw "IFs" to achieve that, even if you'd essentially use the same (or very similar) semantics you'd use with HAL calls, had the latter existed. and then there is the psychological aspect of people tending to keep their first impressions for quite a long time afterwards.

..so why did i write all the above?.. i guess to give myself some break from my very-high level, highly-sophisticated and totally-abstract graphics engine coding task for the day. sometimes i wish i could just pass down a bunch of variables to some metal and get stuff visualised.
 
MrWibble said:
And on your last point - I was one of those core programmers, and I certainly did not hate the job... if anything, now that I have moved on to deal with higher-level libraries, I miss it.

You loved converting lots of C/C++ code you did not even write yourself, code from some guy working on the physics or the A.I. who keeps writing more while you are still working out what he did, so that you can convert it all into ASM? Maybe you loved solving the problem and getting very optimized ASM code out of it, but... I am sorry, I feel that as console game making matures, you will be part of an ever smaller minority.

I am not insulting your capabilities, and facing the challenge head-on is a good quality of yours: I just think that as games get bigger and more complex, doing that kind of job for large portions of the game code cannot continue.

You would not expect, for example, Microsoft to have written Windows XP in pure x86 ASM. How much code in XP, as a percentage of the whole code-base, do you think is hand-written ASM? Do we break the 5% barrier?
 
Fafalada said:
MrWibble, thanks for that lovely post, you mirrored most of my thoughts, and saved me a lot of time typing 8)

Pana, from what I recall the issues you mention are documented in the official docs I have - but then I doubt the Linux kit shipped with the most up-to-date versions.

I know, but the PlayStation 2 Linux kit is right now what I can use as a basis. Talking with others... cough... cough... it does not appear that the issues I mentioned are the only ones :p.

VCL is a BIG help, but then one of the best things even the author of VCL dreamt about (a macro to write GIF tags) still cannot be realized with it...
Not general purpose GIF tags, but I wrote one for specialized purposes, and it's quite handy.

1. What does it do ?

2. Can I have it ? Can I have it ?

:D

Those few core programmers HATE that job:
Hey now, vectorizing the C version of a Cholesky decomposition and writing it in VU0 microcode can be fun too! :oops:

Ok, maybe not every single one of them hates it, but I have talked to more than a few who do, and I have seen, for some of them, what they could extract out of these machines (ND-level graphics on the PSOne and quite good Hitman... cough... graphics on the PlayStation 2): good people, but they still felt that a compiler should have been doing that job.

Nothing against hand-writing small loops, but converting big amounts of C/C++ code into ASM, because the compiler kind of blows and the CPU can be quite inefficient unless you hand-optimize code around its strengths, is not something to wish for in future consoles.
 