DeanoC blog PS3 memory management

seismologist said:
I was just saying the other week how streaming from Blu Ray would allow for more complexity in each scene. I'm glad that me and Deano think alike :p

Say are you guys hiring? :D
Why is this any different from streaming from DVD? Seems like the question isn't how much disc storage you have but how much data you can get into ~450 MB of RAM in 1/30th of a second. Seems like that would determine the graphical fidelity of the game. Seems like Blu-ray would only affect graphical quality if you have a game with a single level, and that single level takes up more than 9 GB of space.
 
Deano? If you don't mind me asking, you allude to Blu Ray in the Blog. Is Heavenly Sword going to be using Blu Ray disks for the game? or are you going to be using DVDs?

Also, If you are using BD, then would the game be any different if you couldn't use BD?

(*not sure if you can even answer those questions, thought I'd ask anyways) :p
 
Well, it will be annoying if the PS3's potential is limited by memory concerns; and everything seems to suggest (from discussions mainly + OS requirements + AA/HDR implementation) it will be. Sony adopted a much smarter memory design last round...
Well, Deano's comments are really interesting. Can't wait for 3rd generation games...
(Should wait for the console to launch first!) :LOL:
 
BlueTsunami said:
Deano? If you don't mind me asking, you allude to Blu Ray in the Blog. Is Heavenly Sword going to be using Blu Ray disks for the game? or are you going to be using DVDs?

Also, If you are using BD, then would the game be any different if you couldn't use BD?

(*not sure if you can even answer those questions, thought I'd ask anyways) :p

I'd be interested too. I'm guessing it's fair to interpret from the blog post that blu-ray allows for better streaming behaviour? Is the game in any way dependent on that better behaviour, or could you use a smaller disc without any compromises?

I've never fully understood what larger capacity brings for streaming, but other devs have mentioned blu-ray in this context also. I'm guessing data replication to improve seektime, less compression = data available for use immediately hot off the disc...??
 
pcostabel said:
You cannot overwrite the front buffer while it is being displayed. If you use tile rendering, you need a backbuffer in main memory.

Actually you could resolve during VBlank.
But if you're tiling or doing anything rational, you're right: you need 2 post-AA buffers in main RAM.
 
Vysez said:
A simple search in the forum would have answered this. ;)
But I'll answer your question, since I'm not as evil as advertised, the PS3 is using a NUMA architecture for its memory management, therefore yes, the GPU or the CPU can access both of the memory pools available.

Yea, I should've done a search. Thanks a lot though Vysez.
 
Titanio said:
I'd be interested too. I'm guessing it's fair to interpret from the blog post that blu-ray allows for better streaming behaviour? Is the game in any way dependent on that better behaviour, or could you use a smaller disc without any compromises?

I've never fully understood what larger capacity brings for streaming, but other devs have mentioned blu-ray in this context also. I'm guessing data replication to improve seektime, less compression = data available for use immediately hot off the disc...??
Well, there's the very real bonus of being able to duplicate stuff to reduce seeks, but you still need as much compression in reality 'cos load times are the killer on all next-gen consoles.

The real benefit of big disks is mental... it's one less thing to worry about. I have enough things to worry about (you should see how much hair I have these days :devilish:) without figuring out how I'm going to make it fit.
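The duplicate-to-reduce-seeks idea mentioned above can be sketched very simply: if an asset is burned to the disc in several places, the loader reads whichever copy is closest to where the drive head already is. All names here are illustrative, not from any real loader:

```cpp
#include <cstdint>
#include <vector>

// Given the drive head's current logical block address and the LBAs
// of the duplicated copies of an asset, pick the copy with the
// smallest seek distance. A bigger disc means more room for copies,
// which means shorter average seeks.
uint64_t nearestCopy(uint64_t headLba, const std::vector<uint64_t>& copyLbas) {
    uint64_t best = copyLbas.front();
    uint64_t bestDist = best > headLba ? best - headLba : headLba - best;
    for (uint64_t lba : copyLbas) {
        uint64_t dist = lba > headLba ? lba - headLba : headLba - lba;
        if (dist < bestDist) { bestDist = dist; best = lba; }
    }
    return best;
}
```
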
 
DeanoC said:
Well, there's the very real bonus of being able to duplicate stuff to reduce seeks, but you still need as much compression in reality 'cos load times are the killer on all next-gen consoles.

Thanks, that makes sense. Would you say what you're trying to do is fundamentally dependent on that improved behaviour? I mean does it fundamentally impact what you can or cannot do in a scene? Or could you imagine it ever being so?

And I imagine it is nice not to have to worry about capacity :)
 
Npl said:
Oh, and the PS3 sports unified memory too, though you probably want to handle the two chunks differently because their speeds differ.

Just like PCs ;)
256 MB for the GPU
256 MB for the CPU
The CPU and GPU can talk to each other.

This is not unified memory ;)
 
Could Deano say something about the possibility of calculating simple physics on vertex shaders on gpu? I know that this is not concern for ps3, but anyway. :D
 
Lysander said:
Could Deano say something about the possibility of calculating simple physics on vertex shaders on gpu? I know that this is not concern for ps3, but anyway. :D
Yes he could...

The main problem with physics on vertex shaders is getting the data out. If it's purely for visual effects (particles usually, but also some soft bodies) then that's probably not a problem, but if you're on an architecture that doesn't have memexport (or whatever it's called in DX10) then you have to pass the data through the pixel shader and triangle setup units. Which is usually a waste (in most cases you just want them to pass the data through untouched...)

Actually a far better idea on non-memexport GPUs is to use the pixel shaders to work on arrays of floating point render targets. Then essentially you can use the parallel shader units to work on each item in the array independently, which can be quite handy if you can design your algorithm with that kind of structure....

Still, if you've got a bunch of 3.2GHz SIMD processors with fast LS, sod all that for a game of soldiers and just poke the algorithm onto one (or more) of them and go to the pub...
 
DeanoC said:
Still, if you've got a bunch of 3.2GHz SIMD processors with fast LS, sod all that for a game of soldiers and just poke the algorithm onto one (or more) of them and go to the pub...
I hear there's a nice Irish pub in London. You wouldn't per chance be considering a visit there in the not too distant future?
 
Griffith said:
as like PC's
Not the same; PCs are not using NUMA-like memory management.

BTW, UMA is generally considered a disadvantage, not an advantage.
In the case of the X360, the eDRAM alleviates this UMA disadvantage, so it works in this case. But for the PS3 it would be a clear problem if it had to deal with a Unified Memory Architecture.
 
drpepper said:
eSa said:
He is right on the money here, so to speak. Most of the kids starting CompSci now hardly get any exposure to memory management. You can get a Masters degree (with very good grades!) in CompSci and not understand even simple indirect addressing at the asm level.
I think the education system is content with just teaching how to get a program up and running rather than how to manage the program well. The problem (and this is from my experience at university here in Ottawa, Canada; I don't know what it's like elsewhere) is that there is little time to teach everything, so memory management is just glossed over. Despite it being, IMHO, the most important aspect of programming. Your program will suck if there are memory leaks everywhere, no matter how well it does its job.

What they should have done here, at Carleton University, is dedicate an entire course to memory management and theory. After first year there were still students who didn't know the difference between "pass by reference" and "pass by copy". Bah! Now I know why I switched programs.

People didn't even know what a heap or stack was...
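Since the thread keeps coming back to it, here is the pass-by-reference vs pass-by-copy distinction in miniature, as a minimal C++ sketch:

```cpp
// Pass by value (copy): the function works on its own copy of the
// argument, so the caller's variable is untouched.
int incrementCopy(int x) {
    x += 1;
    return x;
}

// Pass by reference: the parameter is an alias for the caller's own
// variable, so the change is visible outside the function.
void incrementInPlace(int& x) {
    x += 1;
}
```

The same idea underlies the heap/stack confusion mentioned above: a by-value parameter is a fresh object on the callee's stack frame, while a reference is just another name for storage that already exists elsewhere.
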

This jibes with my experience - programming is not taught with efficiency in mind. At all. While understandable, there are still a lot of areas that profit from coders being efficient. Consoles are hardly the only example - almost all embedded code, computational science/engineering codes, rendering, et cetera.

Besides, the people with server farms care as well - they would rather have 500 units than 750. Hell, even some PC users care, otherwise no one would ever buy anything but the slowest speed bin.

In my opinion, programming is largely the art of managing your memory (communication); CPUs have outstripped their memory hierarchies for a long time now in my part of the computing world, and processor synchronization is also generally non-trivial if efficiency is important.

I wonder if we might see a return to a more general awareness of the underlying hardware in academia, or whether the attitude will always be that it's the job of the hardware guys to see that the software runs efficiently. The administrative/business emphasis of the courses and practices taught doesn't really reflect the enormous variety of computing tasks in our society. Is programming business databases, for instance, really more significant today than cell phones or cars or industrial process control systems or....? To some extent I think the narrowness of the courses reflects an inability to deal with the complexity of today's computing reality.
 
Vysez said:
Not the same; PCs are not using NUMA-like memory management.

BTW, UMA is generally considered a disadvantage, not an advantage.
In the case of the X360, the eDRAM alleviates this UMA disadvantage, so it works in this case. But for the PS3 it would be a clear problem if it had to deal with a Unified Memory Architecture.


case 1:
Think: say the code of the game is 100 MB.

256+256 MB NUMA =>

100 MB used, 156 MB unused in CPU RAM
Max 256 MB for textures + framebuffer, around 230-246 MB depending on the FB

UMA + eDRAM =>

100 MB for the code
eDRAM for the frame buffer
412 MB for textures

case 2:

The CPU has to compute some texture data in use by the GPU.

NUMA:

GPU takes the data
GPU passes the data to Cell via FlexIO
Cell puts the data in CPU memory
Cell processes the data
Cell takes the data from system memory
Cell passes the data via FlexIO to the GPU
GPU puts the data in local memory

total steps: 6

UMA:

CPU processes the data without moving it

total steps: 1


I think that RSX is not a customized solution; that's why it has 256 MB of local memory, no eDRAM, and two outputs, just like PC graphics cards.

For the best solution, read above ;)
On the PC, using unified memory is impossible, but on a console we have the best solution.
PS2, Xbox, GC, and 360 all use UMA, and you can trust that Revo will use UMA too; it is simply the best solution for customized hardware dedicated to gaming.
 
ShootMyMonkey said:
Dear god. At least tell me that they do something like locking down the compile settings so that you're limited in memory or something. It seems there has to be some sort of reasoning behind that that goes deeper than devkit production. I can understand not wanting to give random students support and access to developer forums and newsgroups where loads and loads of internal little secrets move around, but geez.


Florida makes me think it might be Fullsail. I've interviewed a few people who went through the game programming curriculum at Fullsail, and I have to say that I was extremely unimpressed.

When a guy is applying for a game development job and can't tell me the difference between a pre and post increment, I have to worry. Typical guy got as far as answering how to check if two spheres intersect, but when asked why the method works, he had no clue. One guy was actually capable enough to answer solving for ray-plane intersection, but again gave a textbook answer, and when asked about the logic behind the method, his answer was along the lines of "That's just what they tell us. It doesn't really matter why so long as you have the answer or know where to find it."

I have no idea how they teach code writing... However, the program I saw is extremely competitive... One of the instructors is the lead programmer of the original Far Cry ;-) ... So at least they know what they're talking about. However, they were also complaining about not having access to console development kits. Anyway, here is the link for the program. They are part of SMU in Dallas, TX.

The program in Florida was not Fullsail. I think it is another program from another college. I am not sure, but it could be Florida State Univ.
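For completeness, the two interview questions mentioned above have short answers. Two spheres intersect exactly when the distance between their centres is at most the sum of their radii; since both sides are non-negative, squaring them gives the same test without a square root. And pre-increment yields the new value while post-increment yields the old one. A minimal sketch:

```cpp
struct Sphere { float x, y, z, r; };

// Two spheres intersect iff |c1 - c2| <= r1 + r2. Comparing the
// squared distance against the squared radius sum avoids a sqrt,
// which is why the test is usually written this way.
bool spheresIntersect(const Sphere& a, const Sphere& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    float distSq = dx * dx + dy * dy + dz * dz;
    float radiusSum = a.r + b.r;
    return distSq <= radiusSum * radiusSum;
}

// Pre-increment: bump the value first, then yield the NEW value.
int preInc(int& i)  { return ++i; }

// Post-increment: yield the OLD value, then bump it.
int postInc(int& i) { return i++; }
```
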
 
Programmers always complain about lack of memory.
Give them 1GB and they will fill that too and say it ain't enough!

:LOL:
 
I fail to understand your logic.
NUMA means that the GPU can address the main memory pool, just as the CPU can address the VRAM. In other words, you don't "have to" store all your data in any particular place... obviously it's more practical to arrange some things logically, but you're not forced to.
Also, if your scene is composed of 412MB of texture data, I think you have to fire your Lead Programmer, now.
Griffith said:
On the PC, using unified memory is impossible, but on a console we have the best solution.
PS2, Xbox, GC, and 360 all use UMA, and you can trust that Revo will use UMA too; it is simply the best solution for customized hardware dedicated to gaming.
Only the Xbox uses a UMA.
 