DeanoC blog PS3 memory management

silhouette said:
I have no idea how they teach writing code... However, the program I saw is extremely competitive... One of the instructors is the lead programmer of the original Far Cry ;-)... So, at least they know what they are talking about. However, they were also complaining about not having access to console development kits. Anyway, here is the link for the program. They are part of SMU in Dallas, TX.

The program in Florida was not Full Sail. I think it is another program from another college. I am not sure, but it could be Florida State Univ.
The program in question is brand new, located in downtown Orlando, affiliated with the University of Central Florida, sponsored by EA (Tiburon is close by), and it's a graduate program as opposed to a trade school. So you need a 4-year degree to get in. I don't go there or know anyone that does, but a while back I spoke to some of the professors at a Siggraph event.
http://www.cas.ucf.edu/news/2005-MicrosoftKits.php
 
Shifty Geezer said:
Almost all CompSci is geared towards 'general' computing (mainframe, network, PC) as that's where 99% of the jobs are. Does anywhere teach for console development? I imagine most seasoned devs cut their teeth on the home-brew sector where coding could be on the metal and all aspects considered. Without that same open low-level hacking on limited hardware being generally available, those who learn game development do so through DirectX and OpenGL from PCs. Without anywhere to learn, how can this ever be fixed?

MS, Sony and Nintendo should open their own academies for teaching low-level console programming!

The University of Calgary has a Game Development concentration in their CompSci BSc program. They work on Xbox (1) Development Kits last I heard, and on PCs (depending on the course).
 
DeanoC said:
The real benefit of big disks is mental... it's one less thing to worry about. I have enough things to worry about (you should see how much hair I have these days :devilish:) without figuring out how I'm going to make it fit.

No bean counters asking whether you could fit it all on less-costly media?

I guess it's a Sony-financed project but will third-parties have the same outlook on an expensive new medium?
 
Griffith said:
hey, this is a joke, isn't it?

with your logic, the 360 uses NUMA because it has edram
:D

no, the ps2 stores GS data and CPU data in the same memory; the edram is used for the framebuffer and for small particle effects

PlayStation 2 uses a hybrid-UMA configuration, while PLAYSTATION 3 will use a set-up more similar to some other console's (;)), most likely with logically unified memory.

In PlayStation 2, each unit has a scratchpad buffer from which it can work independently and without touching main memory or an external shared bus: the R5900i CPU has 16 KB of 1-cycle SPRAM, VU0 has 4 KB of Instruction + 4 KB of Data Local Storage, VU1 has 16 KB of Instruction + 16 KB of Data Local Storage and the GS has 4 MB of scratch-pad also called VRAM ;).

This is not pure UMA and in the end it works quite differently from Xbox: you might program the two of them the same way, but I am not sure performance will be so great ;).

With what we know of PLAYSTATION 3 it seems that, for example, the CPU can DMA from XDR and GDDR3 memory pools and yes there is a difference in accessing each of the two pools: one would have longer latency than the other.

Similarly, RSX would be able to read and write data in XDR memory as well as its own GDDR3 memory pool: it would likely appear as another EIB client, connected to the EIB through the FlexIO interface, in the same way a switch-less SMP system with two CBE CPUs could be realised, in which the second chip would be "en-slaved" to the first CBE chip.

The EIB's arbiter would take care of scheduling the transfers (transfers to and from external RAM are handled by the EIB's Resource Allocation Manager: http://www.ibm.com/developerworks/power/library/pa-expert9/index.html).

dW: Can we explore this for a little bit then? Because there is a layer where the software is talking to the Resource Allocation Manager and I took it to mean that the management of the load, once it's actually on the bus, is up to the data arbiter. Is the data arbiter going to be the one scheduling "this goes to the next one," or is that software interface to the Resource Allocation Manager going to be the one that schedules that this element talks to the next element?

Krolak: The Resource Allocation Manager can only schedule access to memory and to I/O. Software is, in broad strokes, scheduling which SPE talks to which SPE and when.

I do not see RSX being able to directly read/write an SPE's LS all by itself, though I might very well be wrong on this.
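
To make the DMA point above concrete, here is a minimal sketch of an SPE-side transfer using the standard spu_mfcio.h intrinsics from the Cell SDK. It is only a sketch: the effective address could in principle sit in either the XDR or the GDDR3 pool, and the code would not know the difference, only the latency would.

Code:
#include <spu_mfcio.h>

/* Minimal sketch, not production code: pull one block from the effective
 * address space (XDR or, in principle, GDDR3) into this SPE's Local Store. */
void fetch_block(void *ls_dst, unsigned long long ea_src, unsigned int size)
{
    const unsigned int tag = 3;            /* any tag id in 0..31 */

    /* Kick off the asynchronous DMA into Local Store. */
    mfc_get(ls_dst, ea_src, size, tag, 0, 0);

    /* Block until every transfer in this tag group has completed. */
    mfc_write_tag_mask(1 << tag);
    mfc_read_tag_status_all();
}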
 
ShootMyMonkey said:
The trade schools (possibly with some exceptions, but I've yet to experience that firsthand) tended to be memorizing how with no why. Show some fundamental examples and what they do, but never get into the motivation and the logic and the theory behind it. University CS programs for me were always broader pure theory and your whole life was defined in terms of hypothetical machine constructs with infinite storage, and it was very rare that you ever got to sit in front of a machine writing any kind of code. I was surrounded by classmates who cried in pain at the minuscule blocks of assembler they had to write in the architecture courses. Numerical analysis? HA! When I took it, I was the only undergrad in the course.
Maybe it varies a lot country to country? Here in Blighty CS was a good mix of software theory and practice. We did discrete maths, software management and OO design as theory, and hands-on with Modula-2 as the introductory language, C, SML and Assembler. We even had circuit boards to wire up at one point!

However, that's still far removed from the low-level needed for game design. I know enough that if placed in a console development environment, I could understand the limits, requirements etc., and eventually pick it up. But I'd be entering with little capacity to make useful hardware-efficient code. That's got to be the case for 99% of graduates, even from well balanced hands-on courses, because console coding is a 2 year course of itself without being diluted with expansive high-level theory and SQL database programming and other redundant topics.
 
Griffith said:
YES IT can

Show me the code then.

You do not initiate any memory transfer on the GS per se, and whichever way you like to think of it, the GS does NOT touch main memory directly: the GIF sits on the EE's main bus and can receive data from main RAM or write to it (when you reverse the direction of the GIF-to-GS bus to take screen-shots, for example).

In the GS's world there are mainly three things: its e-DRAM, a connection to the CRTC and the display, and a connection to this GIF element.

The GS executes display lists generated in software on the EE and uploaded by the EE, and it does not manage texture uploading (you cannot texture from main RAM): it is up to the programmer to set everything up (making sure that all texture buffers are set up correctly and kept fed on a per-frame basis) and to make it all work (that is both the joy and the pain of working with PlayStation 2).

You can think of the GS rendering to frame-buffers, reading/writing the Z-buffer, rendering to off-screen buffers and then reading them back to do screen-wide blends, etc... but none of this ever touches main RAM directly. The GS does not read directly from main RAM or write to it: if you stretch things that far, just about any system under the sun is UMA-based.
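
For what it's worth, here is a rough sketch of that programmer-managed upload flow. The helper names below are hypothetical stand-ins, not real SDK calls; the point is simply that texture upload is a DMA of a GIF packet from main RAM through the GIF into GS e-DRAM, never the GS reaching out to memory by itself.

Code:
/* Conceptual sketch only: both helpers are hypothetical placeholders for
 * your own packet-building and DMA routines. */
extern void *build_image_transfer_packet(unsigned int gs_dest_base,
                                         unsigned int qwords);
extern void dma_send_to_gif(const void *src, unsigned int qwords);

void upload_texture(const void *pixels, unsigned int qwords,
                    unsigned int gs_dest_base)
{
    /* GIF packet built in main RAM: sets up BITBLTBUF/TRXPOS/TRXREG/TRXDIR
       for an IMAGE-mode (host -> local) transfer into GS e-DRAM. */
    void *packet = build_image_transfer_packet(gs_dest_base, qwords);

    /* The EE kicks the DMA; data flows main RAM -> GIF -> GS e-DRAM.
       The GS only ever sees what arrives over the GIF. */
    dma_send_to_gif(packet, 5 /* set-up qwords, illustrative */);
    dma_send_to_gif(pixels, qwords);
}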
 
Maybe he is using the terms Unified Memory Architecture and Uniform Memory Access interchangeably, and hence the confusion? :???:
 
Shifty Geezer said:
Maybe it varies a lot country to country? Here in Blighty CS was a good mix of software theory and practice. We did discrete maths, software management and OO design as theory, and hands-on with Modula-2 as the introductory language, C, SML and Assembler. We even had circuit boards to wire up at one point!

However, that's still far removed from the low-level needed for game design. I know enough that if placed in a console development environment, I could understand the limits, requirements etc., and eventually pick it up. But I'd be entering with little capacity to make useful hardware-efficient code. That's got to be the case for 99% of graduates, even from well balanced hands-on courses, because console coding is a 2 year course of itself without being diluted with expansive high-level theory and SQL database programming and other redundant topics.


I'll agree with you here on the country differences seeming quite apparent. At Cam we're taught primarily SML/discrete maths to get us into the hardcore theory side of CS, and a lot on the architectural side with detail on ARM/x86/JVM (all about the pipeline, cache and memory hierarchy, execution units, why stack machines, etc.) which can be generalised. We also do primarily Java for the software engineering aspects, with C/C++ briefly touched on in a 'comparative languages' course. Then a lot more maths is piled on top for doing graphics and AI etc. that is very much used in the console sector - and we have a numerical analysis course to back up and hammer home some of the subtle aspects of working on fixed-precision computers.

So the course is far from being a 'Java shop' spitting out graduates who only know Java. Perhaps that's just here (though Shifty seems to agree), and it depends upon the university or country? I can definitely say that we're not taught to solely use the chip/language/design strategy du jour, but are told about the fundamental arguments so that when you are trying to solve actual problems you can reason about which is best. From what I've learnt I understand perfectly well the problems of console programming (admittedly I have gained insight from these boards) as opposed to 'general' purpose computing.

As Shifty also points out: you can't expect someone who has never had the chance to touch a 'magic box' to understand it straight off. I have a damn good idea where to start, but in no way feel I could exploit it 100% within a week. This is not a problem with the universities, though; it's a problem with IP rights and not getting the opportunity to play about. A lot of the 'current' generation dev talent appears to be 30-odd years old and had the chance to write games on the systems of yesteryear, to get on the bottom rung and work up. Not many people these days get that opportunity because of the secretive nature of many companies.
 
Shompola said:
Maybe he is using the terms Unified Memory Architecture and Uniform Memory Access interchangeably, and hence the confusion? :???:

I think so too, which is why I made the comment that the XB360 seems to be optimized for resource management (Unified Memory Architecture), while the PS3 is optimized for speed and, actually, also for a larger number of cores (NUMA).
 
Shifty Geezer said:
Maybe it varies a lot country to country? Here in Blighty CS was a good mix of software theory and practice. We did discrete maths, software management and OO design as theory, and hands-on with Modula-2 as the introductory language, C, SML and Assembler. We even had circuit boards to wire up at one point!

However, that's still far removed from the low-level needed for game design. I know enough that if placed in a console development environment, I could understand the limits, requirements etc., and eventually pick it up. But I'd be entering with little capacity to make useful hardware-efficient code. That's got to be the case for 99% of graduates, even from well balanced hands-on courses, because console coding is a 2 year course of itself without being diluted with expansive high-level theory and SQL database programming and other redundant topics.

I did my CS degree in the UK, and it was OK for what it was. Doing it again, I'd take a Math degree, since it's more interesting to me, and I can't really claim any of my CS degree was all that indispensable. Having said that, I'd written several games before I even started my degree.

This is a huge generalisation, but my biggest problem with new graduates is that the good ones are generally arrogant and careless (they're all careless). They just don't have the experience of working on a large software team, and although they know that their mistakes impact 50+ other people, they don't really appreciate what that means. It takes a while for them to realise what they don't know, so that they can actually learn how to do the job better.

The low level stuff is less of an issue to me; I'm not going to let someone check something in without having a senior engineer look at it, and I'm not going to put a junior engineer on designing a system that my game is dependent on.
 
ERP said:
This is a huge generalisation, but my biggest problem with new graduates is that the good ones are generally arrogant and careless (they're all careless). They just don't have the experience of working on a large software team, and although they know that their mistakes impact 50+ other people, they don't really appreciate what that means.

Yes, I'm experiencing the same thing on my team. You have to stress this to the new guys: break the build or the QA cycle and you're wasting hundreds of people's time. I think it takes some experience; they just do not know the impact or the severity of the carelessness.

When you have a young team you start introducing processes that reduce the problems depending on how large the organization is. But then the process bogs down productivity :(

Ah, can't win em all.

Speng.
 
speng said:
Yes, I'm experiencing the same thing on my team. You have to stress this to the new guys: break the build or the QA cycle and you're wasting hundreds of people's time. I think it takes some experience; they just do not know the impact or the severity of the carelessness.
That really annoyed me at uni. Pretty much everyone couldn't care less about design or procedure, and only wanted to code, as though if you weren't entering lines of code you weren't doing anything useful or productive. None of them appreciated the importance of the design phases. QA wasn't even touched. No-one left the course having worked with a testing and QA department, and the process was left to photocopied slides and doodle-in-the-margin notes. Coding might be more fun for the students, but in the professional workplace you can't get away with just hacking out random undocumented code and hoping everyone else is happy with it. Sooner or later, someone is going to need to understand your code, if only because there's a problem on site and you're on holiday or something similar.

We had bug tracing in the exam, but a bigger deal should be made of it. Give them a more complex bit of code, one version clearly documented and the other the usual spaghetti code with data structures and objects made up ad hoc by the programmer, and have the student work through both, with an instant fail if they can't solve the spaghetti-code problems. Of course, you'd be lenient when they don't solve the spaghetti-code problem, but they don't know that when they sit the paper! The stress it causes should awaken them to the real issues they'll leave others facing if they don't mend their ways!
 
Inane_Dork said:
I thought the observation that 300 MB is more than either pool in the PS3 was a pretty interesting one. Apparently, the RSX must be able to handle at least reading from both pools at real-time rates, and possibly both reading and writing to both pools at real-time rates. That's certainly good for development, I would think.

Damn that never entered my mind. Things are really starting to get interesting. Why doesn't anybody care about Inane's quote? Hmmm....
 
speng said:
Yes, I'm experiencing the same thing on my team. You have to stress this to the new guys: break the build or the QA cycle and you're wasting hundreds of people's time. I think it takes some experience; they just do not know the impact or the severity of the carelessness.

When you have a young team you start introducing processes that reduce the problems depending on how large the organization is. But then the process bogs down productivity :(

Ah, can't win em all.

Speng.

Actually, I think continuous integration is a process you can introduce with no bog-down in productivity, although it is most effective with unit tests. Any time anyone checks in code, you automatically build and/or test. If someone breaks something, everyone knows it, and the L0z3r is exposed. :)

Almost all young hotshots screw up, because they are a) used to programming in isolation, b) extremely arrogant, c) not aware that the 'inventions' and 'tricks' for which they think they are an uber-coder compared to their teammates are old hat, and d) don't care that other people have to read their code, whilst they take an extremely critical view of other people's code.

It's only after you learn that you're not the biggest fish in the sea, but a fish that is part of a good 'school of fish', that you become more effective in a team. On the other hand, this usually takes several years.

I once interviewed a guy who came into my office and wrote on the whiteboard in big letters "4 YEARS of programming experience" (UNDERLINED). He was extremely arrogant, but when pressed, admitted *all* of his experience was projects in college. I got a chuckle out of tripping him up with interview questions :)
 
Continuous builds and crash dumps are great for keeping the build quality high (well, higher than it would be without them...)
 
mckmas8808 said:
Can someone translate this sentence for the regulars?

Basically it's a process whereby you have a build machine that continually builds and tests every time someone checks code in; when it fails, it sends out an email to everyone, something along the lines of

BUILD BROKEN --- Culprit is AWanker

We do that, but it isn't really sufficient; we simply don't have adequate tests, and more often than not even a QA cycle misses major issues. That build gets released, and the content creators who are touching the broken stuff either use the old build or sit around on their thumbs until a new build gets pushed.

Games are just really complicated and sometimes it's not even clear what the current correct behavior even is.
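
For the regulars, here's a toy sketch of that build-machine loop. The shell commands below (get_latest, build_game, run_tests, notify_team) are placeholders for whatever your version-control, build and mail tooling actually provides, not real programs.

Code:
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    for (;;) {
        /* Sync to the newest check-in (placeholder command). */
        system("get_latest");

        /* Build and test; a non-zero exit code means someone broke it. */
        if (system("build_game") != 0 || system("run_tests") != 0) {
            /* Mail everyone, naming the last person to check in. */
            system("notify_team 'BUILD BROKEN --- see last check-in'");
        }

        sleep(60);  /* Poll again in a minute. */
    }
}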
 
ERP said:
Basically it's a process whereby you have a build machine that continually builds and tests every time someone checks code in; when it fails, it sends out an email to everyone, something along the lines of

BUILD BROKEN --- Culprit is AWanker

We do that, but it isn't really sufficient; we simply don't have adequate tests, and more often than not even a QA cycle misses major issues. That build gets released, and the content creators who are touching the broken stuff either use the old build or sit around on their thumbs until a new build gets pushed.

Games are just really complicated and sometimes it's not even clear what the current correct behavior even is.

So is he saying that the PS3 devkit is lacking it or that the devkit has it?
 
mckmas8808 said:
So is he saying that the PS3 devkit is lacking it or that the devkit has it?

He's not saying anything at all about the devkit; he's talking about a process that is independent of the dev environment.
 
mckmas8808 said:
Damn that never entered my mind. Things are really starting to get interesting. Why doesn't anybody care about Inane's quote? Hmmm....

RSX can texture from both VRAM and main RAM, but texturing from main RAM is half as fast. Output is always to VRAM as far as I know.
 