Question for developers... PS3 and framerate

The 8th SPE is physically disabled, I believe. But even if it were just left enabled, I am under the impression that there is a task/job queue and work is sent out to the SPEs as they become available to take a job.

That's one of several programming models for Cell. It's not the only one.
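
For illustration, here's roughly what that job-queue model looks like as a plain C/pthreads sketch. This is not actual Cell code -- on a real PS3 the workers would be SPU programs pulling job descriptors out of a queue in main memory via DMA and atomic updates -- but the shape is the same: however many workers exist, each one just grabs the next job whenever it's free.

```c
/* Plain C/pthreads analogy for the "grab the next job when you're free"
 * model -- illustrative only, not Cell code.  Job and worker counts are
 * made up. */
#include <pthread.h>
#include <stdio.h>

#define N_JOBS    32
#define N_WORKERS 7             /* pretend we have 7 SPEs; 6 or 8 works too */

static int next_job = 0;        /* index of the next unclaimed job */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void do_job(int job, long worker)
{
    /* stand-in for real work (a physics batch, an audio block, ...) */
    printf("worker %ld took job %d\n", worker, job);
}

static void *worker_main(void *arg)
{
    long id = (long)arg;
    for (;;) {
        int job;
        pthread_mutex_lock(&lock);
        job = next_job < N_JOBS ? next_job++ : -1;
        pthread_mutex_unlock(&lock);
        if (job < 0)
            break;              /* queue empty: this worker is done */
        do_job(job, id);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N_WORKERS];
    long i;
    for (i = 0; i < N_WORKERS; i++)
        pthread_create(&t[i], NULL, worker_main, (void *)i);
    for (i = 0; i < N_WORKERS; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```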

Developers will tailor the tasks with 7 SPEs in mind

Six, unless Sony stopped taking one. (I thought it was for the hypervisor, but DeanA says it's for "security stuff", and he's probably right.)

but there is always the possibility that all 7 SPEs are in use at a given time slice, with more jobs still on the list, where an 8th SPE (or more) would allow that next task to be started. Of course, the speed increase may not even be noteworthy if one of the original 7 SPEs was just about to finish a job anyway.

The thing is, workload ought to be reasonably predictable. So I'd expect that, normally, you'd be able to plan out a workload that the system will reliably do fast enough to avoid framerate drops.

There's potential for marginal differences between systems, because different systems could have different physical layouts. I'd expect that to be a very small difference -- but it could explain why one machine is about 1% slower than another, and if a game engine were really cutting it close, that might be enough to result in a dropped frame now and then.
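
To put made-up numbers on that 1% (purely illustrative -- the frame times here are invented):

```c
/* How much headroom turns a 1% slowdown into a missed frame?  At 30 fps
 * the budget is ~33.33 ms; a workload at 33.1 ms blows it when 1% slower,
 * while one at 30 ms doesn't.  Numbers are hypothetical. */
#include <stdio.h>

int main(void)
{
    const double budget_ms = 1000.0 / 30.0;        /* ~33.33 ms per frame */
    const double frame_ms[] = { 30.0, 33.1 };      /* hypothetical workloads */
    int i;

    for (i = 0; i < 2; i++) {
        double slower = frame_ms[i] * 1.01;        /* the 1% slower machine */
        printf("%.1f ms -> %.2f ms: %s\n", frame_ms[i], slower,
               slower > budget_ms ? "misses 30 fps" : "still fits");
    }
    return 0;
}
```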
 
Thanks for the insights.


Yeah, six for the game. I meant it more as a general consideration (with 7).
 
Don't forget also, seebs, that in modern consoles the GameOS itself uses the hypervisor to, among other things, display messages about received mail, download software in the background if a game doesn't use any online features, and so on. All those things are currently much more likely to affect that 1% (or more) than affinity issues (which, in the first place, I've argued in the other thread aren't likely to exist, and which, in the second place, aren't likely to occur because of the way current games have been programmed).
 

Here's what I end up wondering about:

From some of the IBM research papers, it looks like affinity CAN make a noticeable difference. A few percent, anyway.

Now, imagine that you code without any awareness of affinity issues. And imagine that, as it happens, your code ends up getting a performance boost from a serendipitous affinity arrangement on your dev system.

And then I run it on my PS3, with one of the other layouts, and you no longer get that boost, which you weren't aware of and didn't intentionally design.

Could that cause an occasional dropped frame? I think it's pretty clear it wouldn't be a big deal consistently -- it's not going to drop a game from 30fps to 20, or anything. But could it explain why one reviewer says a game's silky smooth, and another complains about occasional dropped frames, just enough to notice?

I don't know enough about the internals to be sure, but it seems it could happen, if not very often.
 
No, I honestly doubt it. The varying comments on framerates are typically much more related to the different resolutions that have been selected, and, as I said, probably things like background downloads and such... And again, I stand by the comment that DeanA also made in the other thread: I just don't see any current game saturating bandwidth in that area.

In fact, if you do want to consider the hypervisor, I think it's more likely that variations in framerates occur because so many current games depend much more heavily on the PPE (from being 360 ports, for instance), and then any additional hypervisor activity stemming from the PPE is much more likely to cause framerate drops than the hypervisor-owned SPEs.

I think that, in theory, your scenario could happen. It would be interesting (and not that hard) to test whether you can make a difference, simply by running the same test that maxes out the SPEs but assigning different parts to different SPEs.

By the way, this kind of thing might interest you:

http://www.wired.com/techbiz/it/news/2007/10/ps3_supercomputer
 
Yeah, I've thought about it. It's not hard to design a program which should in theory be able to saturate EIB; it won't do much, but who cares? It's a proof of concept. Hmm.

Well, it's on my list. If I ever find myself inexplicably not having enough to do, I might have a go at it. It shouldn't be hard to measure net bandwidth, then try a couple of configurations and see whether performance changes.
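
For the measurement side, all I really have in mind is something like the sketch below. run_dma_storm_on_spes() is a made-up placeholder for however you'd actually load and run the SPU kernels under a given layout (libspe2 or whatever); the harness just times it and reports net bandwidth.

```c
/* Rough PPE-side timing harness -- a sketch, not working code.  The SPE
 * launching is stubbed out; only the timing/bandwidth arithmetic is real. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

/* Placeholder: on a real PS3 this would run the same DMA-heavy SPU program
 * on every SPE under whatever assignment `layout` describes, wait for them
 * all to finish, and return the total bytes transferred. */
static uint64_t run_dma_storm_on_spes(int layout)
{
    (void)layout;
    return 0;
}

int main(void)
{
    int layout;
    for (layout = 0; layout < 4; layout++) {       /* try a few configurations */
        struct timespec t0, t1;
        uint64_t bytes;
        double secs;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        bytes = run_dma_storm_on_spes(layout);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
        printf("layout %d: %.2f GB/s net\n", layout, (double)bytes / secs / 1e9);
    }
    return 0;
}
```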
 
Whoa... so if you're right, there should be "better PS3s" and "worse PS3s"... awful prospect!! :devilish:

Anyway, please report any news to me. In fact, I'm suffering a few frame drops playing COD4 while my buddies aren't at all... and if this is confirmed I'm going to change my PS3 ;)
 
Have you tried bringing one of your buddies' PS3s to your place and putting it in your setup with the same settings?

As I've said in this discussion before, and as everyone who knows anything about gaming on PS3 will tell you (sorry, seebs, but you're far too single-minded to understand these issues at the moment as far as how they relate to actual gaming :p), this is almost 100% certainly not the cause of any difference you might experience. From what I understand, you're basing this on hearsay rather than actual hands-on comparisons, so just because the other guys don't experience framerates the way you do, for various reasons, that doesn't mean there is actually even a difference. I can tell if a CRT is running at 60, 72 or 80 Hz; almost no one else can (at least outside of these forums ;) ).

Should there actually be a difference, and it's not related to resolution settings (you can have significant differences in framerate performance between 576i, 720p, 1080i and 1080p), then it may just as well (or even much more likely) be because your HDD isn't performing as well as it should for streaming. Or, if you experience it in online gaming, lag issues, etc.
 
I think the affinity concerns are overblown. If a core is going to go off local store, then it's already faced with hundreds of cycles of DMA latency. The 1-4 cycles of additional latency that depends on the layout of the cores around the EIB is negligible. You're talking about less than 1% variance due to layout.
 
In most cases, sure. What about the case where the layout of the cores makes the difference between a given set of >8 transfers happening simultaneously and them having to share bandwidth? If you have the EIB saturated, then you could see issues.

I agree that this may not be able to occur outside of unusual test code, and may not affect games.

However, I'm pretty sure you could design a program which would see a noticeable performance difference depending on affinity, because it relies on peak bandwidth. Now, that may be unlikely to come about randomly, but...

If someone's got a program that is using that kind of bandwidth, it could happen, right? Or is there some reason I'm not aware of for which it is actually impossible, as opposed to merely unlikely?
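
The SPU side of such a program is nothing clever -- just a DMA storm, roughly like the sketch below. This is illustrative only (I haven't built or tuned it): the effective address in argp is assumed to point at a big, 128-byte-aligned source buffer of at least 1 MB, and for the affinity question you'd point it at another SPE's memory-mapped local store rather than main memory.

```c
/* Sketch of an SPU-side "DMA storm" kernel, just to keep sustained traffic
 * on the EIB.  Buffer sizes and iteration counts are arbitrary. */
#include <spu_mfcio.h>

#define CHUNK  16384               /* max size of a single MFC DMA transfer */
#define N_ITER 65536               /* arbitrary; just enough to time easily */

static volatile unsigned char buf[2][CHUNK] __attribute__((aligned(128)));

int main(unsigned long long speid, unsigned long long argp,
         unsigned long long envp)
{
    unsigned int i;
    (void)speid; (void)envp;

    for (i = 0; i < N_ITER; i++) {
        unsigned int tag = i & 1;                  /* double-buffer, tags 0/1 */
        /* kick off the next transfer, then wait on the other tag, so there
         * are (nearly) always two DMAs in flight from this SPE */
        mfc_get(buf[tag], argp + (unsigned long long)(i % 64) * CHUNK,
                CHUNK, tag, 0, 0);
        mfc_write_tag_mask(1u << (tag ^ 1));
        mfc_read_tag_status_all();
    }
    mfc_write_tag_mask(3u);                        /* drain both tags */
    mfc_read_tag_status_all();
    return 0;
}
```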
 
Bandwidth is not affected by layout differences around the EIB. Only latency is, and the amount of latency variation due to layout is so small that it's more than drowned out by the other sources of latency when a core goes off its L1 cache or its local store. Any code that is sensitive to a few cycles of latency variation will suffer from bus stalls, DMA state machine transitions, cache misses, etc., much more than layout differences of the cores around the EIB.
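
Putting rough numbers on that (both figures assumed, not measured):

```c
/* If an off-LS access already costs a few hundred cycles, a worst-case
 * 4-cycle layout difference is on the order of 1% of it. */
#include <stdio.h>

int main(void)
{
    const double dma_latency_cycles  = 400.0;  /* "hundreds of cycles", assumed */
    const double layout_extra_cycles = 4.0;    /* worst-case 1-4 cycle layout hit */

    printf("worst-case layout penalty: %.1f%%\n",
           100.0 * layout_extra_cycles / dma_latency_cycles);   /* prints 1.0% */
    return 0;
}
```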
 