So is the Cell SPE "downgrade" confirmed?

Actually, the main issue for me would be the reserved SPE more than the allocated RAM, since the amount of RAM required for the final OS and/or future updates could be decreased if SCE optimizes their code or starts relying on virtual memory (if the HDD is standard).

On the other hand, the SPE might be "wasted" on these extra features no matter what. The only thing the OS alleviates from the game code so far is the messaging/in-game (video?) chat feature.
If SCE were smart, they'd do what Barbarian proposed: have the OS SPE handle part of the sound processing. At least that would be a more useful use of the SPE from a gaming standpoint.

Anyway, last I heard about the OS it wasn't yet finalized, so they may still change things before launch. Whether that would be for better or worse... with SCE, you never know.
 
The reserved OS resources could be put to use in games too, such as front buffer space, IO and controller handling, and EyeToy processing, at which point it makes sense. The idea that 96 MB and 14% of the SPEs is reserved for what might amount, for most users, to the occasional glorified videomail is certainly a concern for people interested in system design and efficiency!
 
What else, other than video encoding, would require this much performance?

It must be something to do with converting video on the fly and serving it to the PSP, or some other sort of video-related service.

If you're playing a game, and your significant other halfway around the world wants to watch a video off the PS3 on their PSP, the system is going to have to decode that video on the fly and send it out over the internet. That's gotta take a considerable amount of CPU power.
 
So far as I've heard from Sony myself, any current restriction we've been told about is a maximum, to give them headroom while they work out what they need for whatever they're planning (they're kind of vague about what they're actually doing with any of it). The basic gist was that the OS footprint might well be reduced - though I guess it's possible that if they want to jam in all kinds of bizarre features and functionality, they'll keep everything.

Back on PS2 the kernel reserved memory was halved (from 2M to 1M) in a fairly early SDK (but not, IIRC, before the first batch of titles), so it's not without precedent.
 
I don't mind too much the idea of one of the SPEs being cordoned off, and however much XDR being dedicated to the OS, as long as the functionality this should enable sees fruition. I'm all for seamless background apps. I'll still trust the games to 'look incredible' and all; I'll just recognize that the ceiling for what otherwise could have been reached has been lowered somewhat. That's an issue for the end of the console's life more than the beginning of it, I feel, though.

I see the XDR pool as being, to an extent, available to be drawn upon for this, with less of an adverse effect on a project's development than if GDDR were siphoned off. But that said, it also raises a question in my mind: to what extent are devs presently filling and utilizing the XDR memory pool?

I recall a number of conversations from some time back that seemed to indicate that Cell itself, in a console environment, probably wouldn't need all 256MB of RAM on a regular basis. There's the fact that RSX can also utilize that pool... but my question is, is there anything devs can say to speak to the 'burden' presently being placed on the XDR pool?
 
I think this was touched on at GDC so hopefully I'm not saying anything I shouldn't. The video memory is used for the framebuffer and consequently quite a bit of bandwidth is soaked up there. Sony suggest storing some graphics resources (textures, vertex buffers) in XDRAM as there's quite a bit of bandwidth between the GPU and XDRAM. Exactly what the best balance is depends on how the resources are being used and what else is going on in the game; figuring out how to get the balance right is going to be one of the challenges of PS3 development.
 
Wasn't the whole idea of SPEs that the code could run on any of them, that the code could "migrate" to an idle SPE or an SPE that is using just a fraction of its resources?
If one SPE is dedicated to the OS, could other code still run on it when the OS isn't using it fully, as I would imagine to be the case when a game is being run?
 
Context switching is very expensive on the SPEs (you've basically got to save out the full 256K of local store plus all 128 128-bit registers). As a result, sharing an SPE between multiple tasks when some of those tasks have realtime constraints (voice or video chat, for example) does not work terribly well. That's probably why Sony have reserved a whole SPE rather than trying to share it with game code.
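
To give a very rough sense of the numbers, here's a back-of-the-envelope sketch. The architected state (256K local store, 128 x 128-bit registers) comes from the post above; the DMA bandwidth used for the timing estimate is an assumption purely for illustration, not an official figure.

```c
/* Rough sketch of why an SPE context switch is costly: all architected
 * state has to be streamed out and back in again via DMA.
 * The 16 GB/s figure below is an assumed effective bandwidth,
 * purely for illustration. */
#include <stdio.h>

int main(void)
{
    const unsigned local_store_bytes = 256 * 1024; /* 256K local store   */
    const unsigned register_bytes    = 128 * 16;   /* 128 x 128-bit regs */
    const unsigned context_bytes     = local_store_bytes + register_bytes;

    const double assumed_dma_bytes_per_sec = 16.0e9; /* assumption */

    /* Save the old context, then restore the new one. */
    double switch_ms = 2.0 * context_bytes / assumed_dma_bytes_per_sec * 1e3;

    printf("state per context: ~%u KB\n", context_bytes / 1024);
    printf("save + restore:    ~%.3f ms (ignoring sync and setup overhead)\n",
           switch_ms);
    return 0;
}
```

Even a few hundredths of a millisecond per switch adds up quickly if a realtime voice/video task has to be swapped in and out many times a frame, which is presumably part of why Sony would rather just dedicate the SPE.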
 
heliosphere said:
I think this was touched on at GDC so hopefully I'm not saying anything I shouldn't. The video memory is used for the framebuffer and consequently quite a bit of bandwidth is soaked up there. Sony suggest storing some graphics resources (textures, vertex buffers) in XDRAM as there's quite a bit of bandwidth between the GPU and XDRAM.

Okay? So can anybody explain whether this is good or bad? If it's good, explain why; if it's bad, explain why too.

Thanks.:D
 
mckmas8808 said:
Okay? So can anybody explain whether this is good or bad? If it's good, explain why; if it's bad, explain why too.
Good:

XDRAM has quite a lot of bandwidth (more than 360 GDDR3) and so by using it you can compensate for the relatively low frame buffer bandwidth on the RSX.

You have some flexibility in moving resources around to memory that is best suited to their usage - things like how frequently the resource is used, whether it needs to be written to by the PPU or SPUs on a regular basis, how frequently it needs to be streamed in and out, etc.

Bad:

Extra complexity to deal with for programmers - rather than one fixed pool of memory to worry about, you have to manage two pools and pick between them according to complex, context-dependent performance characteristics and availability of space (some types of resource have to be placed in one or the other type of memory).

RSX fetches from XDRAM have higher latency than from GDDR3 so performance may be worse in some situations.

Rather than vertex and texture fetches competing with framebuffer access for bandwidth, they compete with PPU memory accesses. This is no different from 360, where everything but the framebuffer shares one common memory pool.
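
For what it's worth, the points above boil down to a placement decision per resource. Here's a minimal sketch of what such a heuristic might look like; the trade-offs are the ones described above, but the rules and example resources are invented for illustration and aren't from any SDK.

```c
/* Toy heuristic for choosing a memory pool for a graphics resource on a
 * split-pool machine (local GDDR3 vs system XDR). The rules below are
 * invented for illustration; real placement depends on profiling. */
#include <stdio.h>

typedef enum { POOL_GDDR3, POOL_XDR } pool_t;

typedef struct {
    const char *name;
    int written_by_cpu_often;  /* updated by PPU/SPUs every frame?        */
    int latency_sensitive;     /* fetched in a latency-critical GPU path? */
} resource_t;

static pool_t place(const resource_t *r)
{
    /* CPU-written data stays close to the CPU. */
    if (r->written_by_cpu_often)
        return POOL_XDR;
    /* Latency-sensitive GPU fetches prefer local video memory. */
    if (r->latency_sensitive)
        return POOL_GDDR3;
    /* Bulk, read-mostly resources can go to XDR to spare
     * framebuffer bandwidth in video memory. */
    return POOL_XDR;
}

int main(void)
{
    const resource_t examples[] = {
        { "large diffuse texture", 0, 0 },
        { "shadow map",            0, 1 },
        { "skinned vertex buffer", 1, 0 },
    };
    for (unsigned i = 0; i < sizeof examples / sizeof examples[0]; ++i)
        printf("%-24s -> %s\n", examples[i].name,
               place(&examples[i]) == POOL_XDR ? "XDR" : "GDDR3");
    return 0;
}
```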
 
heliosphere said:
I think this was touched on at GDC so hopefully I'm not saying anything I shouldn't. The video memory is used for the framebuffer and consequently quite a bit of bandwidth is soaked up there. Sony suggest storing some graphics resources (textures, vertex buffers) in XDRAM as there's quite a bit of bandwidth between the GPU and XDRAM. Exactly what the best balance is depends on how the resources are being used and what else is going on in the game; figuring out how to get the balance right is going to be one of the challenges of PS3 development.

This doesn't really speak to my question though, because at the core of it what I'm wondering is what the normal non-RSX related overhead of running game code would be on the XDR memory pool. I understand that whatever is left could potentially find productive use elsewhere, but I'm more just trying to frame the 'severity' of a 96MB XDR OS overhead.

Like I said, I recall conversations from the past where, in terms of Cell itself, 256MB seemed 'enough.' Now, for game code, is 160MB also 'enough,' or does the situation begin to change... or was it never thus to begin with? Keep in mind of course I realize there's always a potential productive use for 'spare' RAM; I'm just really trying to eliminate graphics/RSX uses from the debate in seeking this answer.

The difference between 'Now we can't implement our awesome rendering idea' vs 'Now we have to reduce the game code footprint' kind of a thing, with the latter being the more negative of the two and ultimately what I'm wondering about in terms of this OS overhead.
 
xbdestroya said:
This doesn't really speak to my question though, because at the core of it what I'm wondering is what the normal non-RSX related overhead of running game code would be on the XDR memory pool. I understand that whatever is left could potentially find productive use elsewhere, but I'm more just trying to frame the 'severity' of a 96MB XDR OS overhead.
It's actually 64MB of XDR and 32MB of video memory.

There's no such thing as 'enough' memory - you can always use more. There's lots of stuff living in main memory typically. There's the code itself and the working memory representing the current state of the game world. That's probably not going to be the largest chunk however. Then you have buffers for I/O (for disk reads, audio, networking, etc.), animation data (can get very large even though it's usually quite heavily compressed), world map data, collision geometry, physics state, working memory for AI (pathfinding etc.), the list goes on and on. Console games always end up having to squeeze things down to fit into the available memory. The more memory you have the less time you have to spend trying to squeeze everything to fit and the more time you can spend working on new features, fixing bugs or optimizing.

The less memory you have available, the more time you're going to spend addressing memory shortages and the less you're going to spend improving the game in more tangible ways. The relatively large chunk being reserved for the OS isn't the end of the world but you're not going to hear any developers say they wouldn't like to get some of it back.
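
To make that concrete, here's a completely made-up budget showing how quickly the kinds of data listed above can eat through main memory once a hypothetical 64MB OS reservation comes off the top of the 256MB XDR pool. The categories are the ones from the post; every number is invented for illustration.

```c
/* Invented figures purely to illustrate how a main-memory budget fills
 * up; nothing here is an official or measured number. */
#include <stdio.h>

struct budget_item { const char *what; unsigned mb; };

int main(void)
{
    const unsigned total_xdr_mb = 256, os_reserved_mb = 64;
    const struct budget_item items[] = {
        { "code + game world state",        24 },
        { "I/O buffers (disk, audio, net)", 16 },
        { "animation data",                 32 },
        { "world map + collision geometry", 40 },
        { "physics + AI working memory",    16 },
        { "streaming scratch space",        24 },
    };

    unsigned used = 0;
    for (unsigned i = 0; i < sizeof items / sizeof items[0]; ++i) {
        printf("%-32s %3u MB\n", items[i].what, items[i].mb);
        used += items[i].mb;
    }
    printf("total: %u MB of the %u MB left after a %u MB OS reservation\n",
           used, total_xdr_mb - os_reserved_mb, os_reserved_mb);
    return 0;
}
```

Whether that last 64MB is painful or merely annoying depends on which side of the line a given game lands, which I take to be heliosphere's point.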
 
Heliosphere, I see what you're saying, but still, you're not really speaking to the question; you're kind of just continuing on your own path here. Obviously developers would want more memory rather than less; the question is simply - in essence - is 96MB egregious, or is it simply... I don't know... annoying.

Anyway, thank you for clarifying the 64MB/32MB memory split, because I was not aware of that.

Also don't think that your points aren't being taken or I'm not agreeing with them, because that's not the case. I'm just trying to get a sense of how much 'damage' this 96MB does to a potential game vs the would-be benefits it might bring in additional background functionality.

Or of course that 96MB footprint could simply be bloated, and obviously I'm not here trying to defend bloat in any incarnation.
 
It's hard to say. I don't know exactly what the background functionality will be, though I'd be concerned if they were planning on doing anything too heavy because that could have implications for available memory bandwidth (which is always in even shorter supply than memory space).

It makes sense for them to set an upper limit and have the option of reducing it later rather than saying they'd take 32MB and then coming along at the last minute and saying they need more space. The figure might still come down so it's a bit early to be panicking yet.

I'd say the amount currently reserved is annoying - it's enough that there will be a fair bit of extra development effort involved in getting everything to fit in 64MB less space than we might have had. It's not catastrophic though; at least for the first generation of games, everyone will just be thankful they're not trying to squeeze the entire game into 32MB any more.
 
Well, I think you and I are on the same page in terms of our views on the memory usage; we're just approaching it from opposite directions perhaps. Hopefully MrWibble or Deano or whoever else will be able to chime in with a rough indication of how it's affected their games personally, but pending that we'll just go with the assumption that "it's annoying."

It's also a fair point you made that any task computationally intensive enough to require an SPE and that much RAM in reserve might prove to be a drain on bandwidth as well. I hadn't thought of that aspect before; I guess we'll see how that develops as an issue.
 
MrWibble said:
Back on PS2 the kernel reserved memory was halved (from 2M to 1M) in a fairly early SDK (but not, IIRC, before the first batch of titles), so it's not without precedent.
True, although even 2MB was only ~5% of PS2 memory; the supposed 100MB would be 20% here.
As for following precedent - if their most recent product on the market defines it, it doesn't seem like something we should be wishing for :devilish:

heliosphere said:
The relatively large chunk being reserved for the OS isn't the end of the world but you're not going to hear any developers say they wouldn't like to get some of it back.
That's always true; however, personally I don't think the reserved memory is ever an issue in itself - you design your application around a fixed amount of memory regardless of what the hardware designers give you.
The issue at hand is, plain and simple, with multiplatform stuff only.
When the nearest competing product has something like 1/10th of your system's total memory, it's completely irrelevant if you lose 1/5th of it to the kernel, but the story is a LOT different when the parallel platforms are evenly matched in terms of available memory...
 
Fafalada said:
The issue at hand is, plain and simple, with multiplatform stuff only.
When the nearest competing product has something like 1/10th of your system's total memory, it's completely irrelevant if you lose 1/5th of it to the kernel, but the story is a LOT different when the parallel platforms are evenly matched in terms of available memory...

A lot different in what way? Is it more positive?
 
mckmas8808 said:
A lot different in what way? Is it more positive?
He's saying that if one system (a competing product) has 64 megs of RAM and the other 512, it's no big deal if the one with 512 gives up 20% of its RAM (by comparison, in multiplatform games).

But when both systems have 512 and one gives up 20%, it is a bigger deal.
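
Putting rough numbers on it (using the figures floated earlier in the thread; the PS3 reservation is still a rumoured maximum, not a confirmed one):

```c
/* Back-of-the-envelope shares based on the numbers mentioned in this
 * thread; the PS3 OS reservation is the rumoured figure, not confirmed. */
#include <stdio.h>

int main(void)
{
    /* PS2: 2MB kernel out of ~38MB total (32MB EE + 4MB GS + 2MB SPU). */
    printf("PS2 kernel share: %4.1f%%\n", 100.0 * 2.0 / 38.0);

    /* PS3: ~96MB OS reservation out of 512MB total (256 XDR + 256 GDDR3). */
    printf("PS3 OS share:     %4.1f%%\n", 100.0 * 96.0 / 512.0);
    return 0;
}
```

That works out to roughly the ~5% vs ~20% Fafalada mentions, and it's the latter figure that shows up when two platforms with similar total memory are compared side by side.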
 
Considering that for a time we thought we were only going to get 256MB total (early Xenon numbers), it's not that bad. Sure, it would be nice to have more, but consoles are always tight; we are not cross-platform, so it's easier for us.

The other issue is that the smaller the XDR pool, the less likely we can spare any of it to help RSX get some extra bandwidth.
 