PS3 - Lowest common denominator?

Pale

Newcomer
Lair developer (Factor 5, a very close Sony partner) politely points out how the PS3 is the lowest common denominator.

"Q: How do you look back at this point on the differences between the PS 3 and the Xbox 360?

A: You'll have a hard time if you port without having a PS3 game in mind when you created the 360 version. That is where a lot of complaints are coming from.

They [developers] created the 360 engine with a unified memory architecture in mind, with the embedded frame buffer with its advantages and disadvantages, and not thinking too much in early stages about multicore. If you try to get that over to the PS3, you're in for a bad surprise.

The PS3 is all about streamlining about the two different memory pools. They are separate. You don't have to do tiling because you don't have an embedded frame buffer. "

The lack of unified memory (the PS3's RAM is split into two separate 256 MB pools) and the lack of embedded memory (no equivalent of the 360's 10 MB embedded framebuffer) make it difficult for developers to port from the X360 to the PS3.

"All of these advantages of the PS 3 turn into disadvantages if you don't start making your game on the PS 3. Hence the griping."


Unlike the X360, the PS3 has only 1 out of 3 architectural advantages for developers, and that is multicore (not counting Blu-ray, as it's a moot point for performance).

Whereas the X360 has 3 out of 3 advantages for developers (multicore, embedded memory, and unified memory).

Hence if developers started writing their games utilizing only the multicore advantage (without utilizing the 2 extra advantages the X360 has over the PS3), then things would be fine for both the PS3 and the X360.

But if they did utilize the 2 extra architectural advantages of the X360, then obviously the PS3 version would suffer greatly trying to compensate for the lack of those 2 X360-exclusive features.


"If you create first on the PS3, it is pretty easy to port it to the 360. A lot of companies coming on board now will probably start on the PS3 and move to the 360. The lucky thing for us is we didn't have to think about the 360 at all."

It's much easier to port from the PS3 to the X360 because the PS3 has no special hardware that the X360 doesn't already have.

And then finally he goes on to compare the PS3's limitations to last-gen hardware limitations.


"yes we spent the last four or six weeks going through hell getting Lair into memory [when asked about PS3 memory limitation]. But then again, we were doing the same thing on Rebel Strike [a last gen GC title]"

So basically the PS3 is the console that will bring down graphics fidelity in multiplats because it's the lowest common denominator. (just like last gen with Xbox1 and PS2)

Developers now have to think of the PS3's limitations, adjust their engines accordingly to match the PS3's hardware, and then port with ease to the X360.


^This is someone's interpretation of this article; how would you interpret it, and is there any truth to what was said? I'd like to clear up whether this is FUD or not.
 
This is a pretty old article and I'm pretty sure it's been posted here already. He didn't say the PS3 is the lowest common denominator; he's just saying it's more difficult to port from the 360 to the PS3 than from the PS3 to the 360.
 
^This is someone's interpretation of this article; how would you interpret it, and is there any truth to what was said? I'd like to clear up whether this is FUD or not.

Cell certainly is special hardware that the 360 doesn't have, as is the Blu-ray drive, as are the separate memory pools with their separate bandwidth allocations, as is the mandatory hard drive, as is..

They are two separate systems, which each have their own unique characteristics. A first party game, exclusive to either of the platforms, is likely to perform better than a multiplatform title with the same development resources thrown at it. Neither is a perfect substitute for the other, and a game written to max out either system would necessitate some significant rework if it were ported to the other.
 
The thread title is misleading and incorrect. Using his analogy, the PC with its lack of EDRAM and unified memory must also be the Lowest Common Denominator. I view unified memory as a disadvantage due to contention-related issues between the CPU and GPU. As for the 10MB of EDRAM in the 360 - it's nice to have, but the 10MB limitation requires the use of tiling.
 
Old article, old argument.

It isn't so much a lowest common denominator, but going PS3->360 is easier than 360->PS3.
 
Yeah, and the developer was actually pointing towards the fact that there is more to the PS3 than it looks, and that devs have been having problems with it due to taking the wrong approach, not pointing towards the PS3's disadvantages.
 
Unlike the X360, the PS3 has only 1 out of 3 architectural advantages for developers, and that is multicore (not counting Blu-ray, as it's a moot point for performance).

No, in fact, for the type of developers Factor 5 is describing, the multi-core aspect of the PS3 isn't an advantage at all, it's more of a nuisance. It's already a nuisance for them on the 360, because they'd rather just take one core into account, but it's even more of a nuisance on the PS3, because the cores are of different types as well (and there are more! yuck!).

Whereas the X360 has 3 out of 3 advantages for developers (multicore, embedded memory, and unified memory).

No, again, to be consistent with the kind of developer described here, the multicore is actually a disadvantage, but it's less of a disadvantage on the 360 than on the PS3, because at least the three cores are of the same type and don't have this weird FlexIO system and local stores and stuff.

Hence if developers started writing their games utilizing only the multicore advantage (without utilizing the 2 extra advantages the X360 has over the PS3), then things would be fine for both the PS3 and the X360.

Now this is where the interpretation is more seriously flawed. Having listened to Julian on more than one occasion, and probably knowing more about modern game design than your friendly interpreter (or having more of a PS3 bias), I'd say he quite simply means that if you design for the PS3, with multi-core, data-driven applications in mind from day one, you will benefit from the various advantages that the PS3 has: there are more cores, the specialised SPEs can do a lot of things better than the PPE, the memory layout allows for very efficient bandwidth use if you optimise for it, you can assume a HDD in all machines, you have Blu-ray for extra storage, and so on and so forth. The only advantage the 360 has in this scenario is that it currently has more memory available, because its OS currently has a smaller footprint. For certain effects, the embedded memory can also be an advantage, though a lot of that you can achieve on the PS3 thanks to the way you can use the GDDR3 in parallel with the XDR memory. Also, if you design a game for 1080p like Factor 5 has done, there are just too many tiles needed for the embedded memory to work effectively, and the amount of work you can do with the embedded memory at full speed is limited to the special instruction set located on the embedded side of the GPU.
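To put a rough number on that 1080p point (my own back-of-the-envelope arithmetic, not anything from Factor 5): assuming 8 bytes per sample (32-bit colour plus 32-bit depth/stencil), the tile counts come out something like this:

[code]
/* Rough tile-count estimate for Xenos' 10 MB of EDRAM.
 * Assumes 8 bytes per sample (32-bit colour + 32-bit depth/stencil);
 * real formats, alignment and tile shapes shift the numbers a bit. */
#include <math.h>
#include <stdio.h>

static int tiles_needed(int width, int height, int msaa_samples)
{
    double mb = (double)width * height * msaa_samples * 8.0 / (1024.0 * 1024.0);
    return (int)ceil(mb / 10.0);   /* 10 MB of EDRAM per tile */
}

int main(void)
{
    printf("720p,  no AA: %d tile(s)\n", tiles_needed(1280, 720, 1));  /* 1 */
    printf("720p,  4xAA : %d tile(s)\n", tiles_needed(1280, 720, 4));  /* 3 */
    printf("1080p, no AA: %d tile(s)\n", tiles_needed(1920, 1080, 1)); /* 2 */
    printf("1080p, 4xAA : %d tile(s)\n", tiles_needed(1920, 1080, 4)); /* 7 */
    return 0;
}
[/code]

Every extra tile means replaying the geometry that touches it through predicated tiling, which is why a 1080p target and the 10 MB of EDRAM don't get along.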

In general though, this should lead to a game that runs better on PS3 than on 360. But here's the really important point. Using this design, and porting it to the 360, should STILL result in a game that runs better on the 360 than a game that was designed in the traditional way (e.g. assuming single core and unified memory).

So in other words, on the 360 you are less punished for designing and using old-fashioned game engines. An important factor omitted in this discussion is the scaler chip: it allowed early games to run at 720p even before tiling was properly mastered by both developers and Microsoft themselves (through their SDK).

So basically the PS3 is the console that will bring down graphics fidelity in multiplats because it's the lowest common denominator. (just like last gen with Xbox1 and PS2)

Compared to the PC, all consoles are limited in memory. It's not like the ~50 MB of extra memory available on the 360 would have spared them from having to work so hard to get the memory footprint down. This is, however, an issue for multi-platform developers in general, where indeed the larger memory footprint of the OS is a limiting factor (think of id's most recent comments).

Developers now have to think of the PS3's limitations, adjust their engines accordingly to match the PS3's hardware, and then port with ease to the X360.

This, again, is a mistake. Developers have to think of the strengths of the PS3 when designing (not adjusting!) their game engines. Because the PS3 really calls for a modern data-driven application, this will then scale better to all other multi-core systems, including the 360 and (multi-core) PCs (and probably even the PS2 or Dreamcast :p).
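As a very loose sketch of what "data-driven" means here (entirely my own illustration, nothing to do with Factor 5's actual engine): express the work as small batches with explicit, contiguous inputs and outputs, so the same kernel can be handed to an SPU, a Xenon hardware thread or a PC core, and only the dispatcher changes per platform.

[code]
/* Hypothetical example of batch-oriented, data-driven work.
 * All names are made up for illustration. */
#include <stddef.h>

typedef struct {
    const float *in;    /* contiguous input block (DMA-able on PS3)  */
    float       *out;   /* contiguous output block                   */
    size_t       count; /* kept small enough for an SPU local store  */
} Batch;

/* The kernel touches only what the batch hands it; no global state. */
static void animate_batch(const Batch *b)
{
    for (size_t i = 0; i < b->count; ++i)
        b->out[i] = b->in[i] * 0.5f;   /* stand-in for real per-element work */
}

/* A trivial serial dispatcher; a real one would hand batches to SPURS
 * tasks, a thread pool, or whatever the platform provides. */
static void dispatch(Batch *batches, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        animate_batch(&batches[i]);
}

int main(void)
{
    float in[8] = {1, 2, 3, 4, 5, 6, 7, 8}, out[8];
    Batch batches[2] = { { in, out, 4 }, { in + 4, out + 4, 4 } };
    dispatch(batches, 2);
    return 0;
}
[/code]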

This is how I would interpret it. ;)
 
So in other words, on the 360 you are less punished for designing and using old-fashioned game engines. An important factor omitted in this discussion is the scaler chip: it allowed early games to run at 720p even before tiling was properly mastered by both developers and Microsoft themselves (through their SDK).

I can't say if you are less punished on the 360 for simply using old-fashioned game engines, because I don't know the PS3 well enough to make this call, but I don't think that just simply porting an old engine to the 360 is a wise idea. GoW might prove me wrong though.

And I don't think that the embedded RAM is an advantage in the general case. What I've found working on the 360 is that when you make the right choice it literally flies, but I find it pretty hard to "move away" from the architectural choice that works and maybe try new approaches, and this is mostly due to the EDRAM. I feel the hardware pretty much dictates how to drive it and is not as flexible as the PS3 architecture.
Deferred rendering, for example, which is fashionable these days, is almost definitely a no-go on the 360: you don't really want to do it with the EDRAM and predicated tiling around, and those who have tried it have reported horrible tales.
I can't say at the moment if this perceived inflexibility of the architecture is an advantage, because it makes the platform pretty easy to work with in some sense, or a disadvantage, because you can't really try too many different things with a decent hope that they will prove viable.
Sometimes it feels like there's no viable solution for a problem, but if you find the solution, it flies.
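For what it's worth, a quick back-of-the-envelope figure (mine, with a made-up but fairly typical layout) shows part of the problem: even a modest G-buffer overflows the EDRAM at 720p, so you'd be stuck tiling it and replaying geometry for every tile.

[code]
/* Hypothetical 720p G-buffer: four 32-bit render targets (say albedo,
 * normals, depth, material parameters). Real layouts differ; the point
 * is only that it doesn't come close to fitting in 10 MB of EDRAM. */
#include <stdio.h>

int main(void)
{
    const double pixels  = 1280.0 * 720.0;
    const double targets = 4.0;            /* number of 32-bit targets */
    const double bytes   = pixels * targets * 4.0;
    printf("G-buffer footprint: %.1f MB (EDRAM: 10 MB)\n",
           bytes / (1024.0 * 1024.0));     /* ~14.1 MB */
    return 0;
}
[/code]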
 
Hence if developers started writing their games utilizing only the multicore advantage (without utilizing the 2 extra advantages the X360 has over the PS3), then things would be fine for both the PS3 and the X360.
I really don't agree with that. You can perfectly well write multicore code that runs well on the 360 and runs like crap on the PS3, or that is extremely difficult to run on the SPEs due to bad data formatting/organization. The PS3 doesn't have symmetric cores, and this IS an extremely relevant fact, as it changes the way you have to code for it. If people were writing code thinking about the PS3, formatting data to be SPU-processing friendly, etc., we'd get fast code on both platforms; the other way around is not necessarily true.
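To make "SPU-processing friendly" a bit more concrete (a generic illustration, not anyone's actual engine code): the usual move is to go from interleaved structures to flat per-attribute arrays, so each SPU job can DMA a contiguous, nicely aligned block of exactly the fields it needs into its 256 KB local store.

[code]
/* Array-of-structures: positions are interleaved with everything else,
 * so a job that only updates positions still drags the rest across DMA
 * (or pollutes the cache on a conventional CPU). */
typedef struct {
    float pos[4];
    float vel[4];
    int   material_id;
    char  name[32];
} EntityAoS;

/* Structure-of-arrays: each attribute is a flat, contiguous array that
 * can be streamed to an SPU local store (or prefetched on Xenon) in
 * large linear transfers, touching only the data the job needs. */
typedef struct {
    float *pos_x, *pos_y, *pos_z;
    float *vel_x, *vel_y, *vel_z;
    int   *material_id;
} EntitySoA;
[/code]

The same layout tends to help Xenon and PC caches too, which is why code written this way ports well in the PS3-to-360 direction.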
 
How much bandwidth saving does the EDRAM represent, as opposed to storing the framebuffer in main memory like the PS3 does?

Purely theoretically, if you are rendering to an MSAA4x render target, you save 3/4 of the bandwidth, because you only have to resolve a 1X render target to main memory. That's the single main reason why we see EDRAM on the 360. In practice, I think it will depend on the actual scenario and the real figure would be substantially lower.

Note that the framebuffer is still stored in main memory on the 360, and you still have to resolve to it to display your rendering; the EDRAM is purely a scratchpad memory.
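Just to spell that 3/4 figure out (ignoring overdraw, blending, depth traffic and colour compression, so treat it as an upper bound rather than a measurement):

[code]
/* Per-frame colour traffic to main memory, with and without EDRAM,
 * for a 720p 32-bit target with 4x MSAA. */
#include <stdio.h>

int main(void)
{
    const double pixels   = 1280.0 * 720.0;
    const double bpp      = 4.0;                      /* 32-bit colour */
    const double samples  = 4.0;                      /* 4x MSAA       */
    const double no_edram = pixels * bpp * samples;   /* all samples hit main memory */
    const double resolved = pixels * bpp;             /* only the 1X resolve does    */
    printf("saved: %.0f%%\n", 100.0 * (1.0 - resolved / no_edram));  /* 75% */
    return 0;
}
[/code]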
 
Purely theoretically, if you are rendering to an MSAA4x render target, you save 3/4 of the bandwidth, because you only have to resolve a 1X render target to main memory. That's the single main reason why we see EDRAM on the 360. In practice, I think it will depend on the actual scenario and the real figure would be substantially lower.

Note that the framebuffer is still stored in main memory on the 360, and you still have to resolve to it to display your rendering; the EDRAM is purely a scratchpad memory.

In your opinion, would a higher-bandwidth link between the GPU and main memory (say 44 GB/s instead of 22) have solved most of the X360's need for EDRAM?
 

Purely theoretically, if you are rendering to an MSAA4x render target, you save 3/4 of the bandwidth, because you only have to resolve a 1X render target to main memory. That's the single main reason why we see EDRAM on the 360. In practice, I think it will depend on the actual scenario and the real figure would be substantially lower.

Note that the framebuffer is still stored in main memory on the 360, and you still have to resolve to it to display your rendering; the EDRAM is purely a scratchpad memory.

So can we say the Xbox 360 has a better design for 720p with 4xAA, but the PS3 has a better design for 1080p with no AA?
 
I would have thought the 360 would have been quite well suited to deferred rendering..? Do the bandwidth requirements for the various position/normal/material buffers eat it alive?
 