*Game Development Issues*

As this gen keeps going I'm feeling that lead platform doesn't mean as much (compared to last generation) as far as the final game product is concerned. Interestingly, both the PS3 and 360 seem completely hampered by the 512MB of RAM. It's almost like a scenario where the Xbox, PS2, and GC had each shipped with only 8MB of total memory. Raw silicon power becomes moot pretty fast.

Surprisingly true.
The problem with the Xbox 360 is latency to the CPU and core usage.
The problem with the PS3 is that data sitting in GDDR3 has to be copied over to XDR before Cell can work with it.
Otherwise it is limited to 256MB minus the OS reserve.

Both systems use 2006-era graphics chips that would each have had 256MB of graphics RAM.
Yet only 256MB was planned for general processing. 512MB would have been a nice fit.
768MB would have been nearly perfect this generation.

Otherwise everybody has to go to a lower resolution to free up more memory for general processing.
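For a rough sense of what resolution buys back (assuming a single 32-bit colour target plus a 32-bit depth buffer, no MSAA): 1280x720 is 921,600 pixels x 8 bytes = about 7.0MB, while 1152x640 is 737,280 x 8 = about 5.6MB, and the gap grows with every extra render target or level of multisampling.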
 
"Now, however, the impressive-looking game is on track, and Blackman says that, after some hard-learned lessons, LucasArts now has a strategy for future multiplatform titles--develop the PS3 version first. "Our next project will use the PS3 as the baseline, and then apply that to the Xbox 360," he said. He made no mention of the other versions of The Force Unleashed, which are being developed externally."
Like I said before, you'll never know that a game has been downgraded if you never see a superior version of it. That's too bad; gamers are ultimately the ones who lose in this scenario. Hopefully LucasArts will lose enough sales that just dropping the baseline doesn't become the easy way to avoid the "lazy developers" tag.
 
Surprisingly true.
The problem with the Xbox 360 is latency to the CPU and core usage.
The problem with the PS3 is that data sitting in GDDR3 has to be copied over to XDR before Cell can work with it.
Otherwise it is limited to 256MB minus the OS reserve.

Both systems use 2006-era graphics chips that would each have had 256MB of graphics RAM.
Yet only 256MB was planned for general processing. 512MB would have been a nice fit.
768MB would have been nearly perfect this generation.

Otherwise everybody has to go to a lower resolution to free up more memory for general processing.



Why put 768MB of video memory on a 128-bit bus? Given that the buses in these architectures are not traditional, one has to question what would be overkill in an amount-vs.-usability scenario.

There are various aspects within both architectures that would need to be addressed in order to "raise" limitations beyond their "low" memory allocations. I do agree that raising the amount of memory would create some headroom, but it wouldn't be the end-all solution (afaik).
 
Why put 768MB of video memory on a 128-bit bus? Given that the buses in these architectures are not traditional, one has to question what would be overkill in an amount-vs.-usability scenario.

There are various aspects within both architectures that would need to be addressed in order to "raise" limitations beyond their "low" memory allocations. I do agree that raising the amount of memory would create some headroom, but it wouldn't be the end-all solution (afaik).

It would help with games that use insane amounts of memory at once, for instance Battlefield 2 and 2142, Crysis, and F.E.A.R. A game like Crysis needs lots and lots of memory, upwards of a full gigabyte just for itself if you want it to look good and run smoothly, hence why I'm glad to have 4 GB of headroom on my computer. Sure, a 128-bit bus is a bit of a handicap on GPUs that should have 256, but the memory amount is gimped too.
 
Why put 768MB of video memory on a 128-bit bus?

The question is more like how do you... not why!

It's not possible; you'd need to go to a 192-bit bus, which has ramifications for the architecture, for die sizes, for motherboard traces, etc., etc. In other words, it's too costly.
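For a rough sense of the arithmetic (assuming the 32-bit-wide GDDR3 devices of the era, and setting aside which densities were actually shipping): a 128-bit bus means four chips, so 768MB would need 192MB per chip, which isn't a standard density; a 192-bit bus means six chips at 128MB each; a 384-bit bus means twelve chips at 64MB each (roughly the 8800 GTX layout). So the capacity bump drags bus width, pin count and board traces along with it.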
 
I did not mention the bitness of the bussing involved.
And yes it would have been too costly.

But we are rarely satisfied by limits. Even when they are relatively generous to us.

America, land of obesity, where people still gripe about buffet prices when they can't eat more snow crab than they pay for.
 
Alrighty then, I will mention bitness.

On the PS3 it could remain 64+8-bit ECC while moving from 256MB to 512MB.
This could be done by using eight 64MB XDR chips, with an 8-bit bus to each XDR chip.
And the GDDR3 memory bus to RSX would also remain unchanged.
This would represent an increase in board area, without an increase in PCB traces or bitness.
Correspondingly, memory bandwidth would also remain the same as in the current 256MB build.
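To put rough numbers on that (assuming the stock 3.2GHz effective data rate per pin): a 64-bit XDR data bus moves 64/8 x 3.2 = 25.6 GB/s, and it does so whether those 64 bits come from four chips at 16 bits each or eight chips at 8 bits each. Capacity doubles, bandwidth stays put.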

Again,
No increase in bitness, or changes made to the GDDR3 memory on the PS3.
Dave, are these the same changes you seem to think I had mentioned?
 
Alrighty then, I will mention bitness.

I apologize for the curtness. Please understand, very few people enjoy being told what they were thinking.
My earlier post was an attempt at being brief about the convenience of a larger memory size, but a longer post is needed.

The motivation for my mentioning increasing memory space has to do with developer convenience. (John Carmack interviews.)
However that is a greedy request, adding more cost to a system that already offers more bandwidth and storage than we are using.
The same as crying for lower priced all-you-can-eat buffets, when all you are eating is the snow crab, and not trying something new.
On the Xbox 360, if you reduce the native resolution you lower the processing demand placed on Xenon/Xenos and allow more space for logic.
It is honestly that simple. And because it is easy to do this, many developers have led with easy.

On the PS3 if you lower the resolution you free up more GDDR3 memory.
Yet the Cell can handle a lot more Geometry & Shader processes than Xenon. So why focus on reducing processing load by doing smaller GDDR3 resolutions?
Additionally, the PS3 was not designed for Cell's cores to rely on making direct requests to the GDDR3 memory.
So to do this, you might create a small buffer pool in XDR space and stream data from GDDR3 into XDR and on to the Cell cores.
Once that path is set up it is easy to reuse; Cell-side shader calls could be set up to initiate the process automatically.
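To make that staging idea a bit more concrete, here is a minimal sketch of what one such pull might look like on an SPU, assuming the GDDR3/RSX region has been mapped into the effective address space (the mapping step, the addresses, the chunk size and the names here are all illustrative, not anyone's actual scheme):

#include <spu_mfcio.h>
#include <stdint.h>

#define CHUNK 16384                                  /* 16KB per DMA transfer */
static volatile uint8_t ls_buf[CHUNK] __attribute__((aligned(128)));

/* Pull one chunk from a mapped GDDR3 effective address into SPU local store. */
static void fetch_chunk(uint64_t gddr3_ea, unsigned int tag)
{
    mfc_get(ls_buf, gddr3_ea, CHUNK, tag, 0, 0);     /* kick off the async DMA read */
    mfc_write_tag_mask(1u << tag);                   /* select our tag group        */
    mfc_read_tag_status_all();                       /* block until the read lands  */
}

A real version would double-buffer the chunks so the next fetch overlaps the current work, and whether reads out of GDDR3 are fast enough to make the round trip worthwhile is its own question.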

With dedicated/pre-partitioned memory you have the advantages of both higher bandwidth and lower memory latency.
But you have the disadvantage of managing two separate pools, with only one of them (XDR) for Cell to make calls from.
{256MB of GDDR3 is more than generous enough to be equal to the graphical needs of all in-game action on the Xbox360. Its target resolution promised no games below 720p. That was even part of its official FAQ. Yet as mentioned, smaller resolution makes processing easier. So games like Halo 3 (640p) wind up being less than 720p.}

A better and smarter approach than wishing for additional XDR Memory
When developing cross platform games leading on the PS3 it is easier to work around slowness on Xbox360 by making games smaller.
Because when you come over from the PS3 you have more speed, more storage, though DRAM is divided in its physical space.
So why not use the unified space on Xbox360 as a single collecting pool for, PS3 lead cross platform developing?
It would be a lot easier to reduce texture sizes, frame rate, or size, while moving two separate memory pools into one.

The decision to lead with the PlayStation 3 makes a lot of sense when you reverse the picture.
If you start with the PS3 then you don’t have to get creative to overcome having divided memory or programming.
Because all the divisions you make can be combined into a single pool on the Xbox360.
Then IF your game is limited by reduced memory and processing speed you can simply shrink it down on the Xbox360.
Problem solved. It is easier to down sample a few factors (Xbox360), than it is to grow into a separated parallel shape (PS3).
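As a loose illustration of that "divide on PS3, collapse into one pool on 360" idea, here's a tiny sketch of how an engine might hide the split behind named pools. The PLATFORM_PS3 define, the pool sizes and the malloc backing are all stand-ins for whatever the real SDK allocators would be; this is just the shape of the abstraction, not anyone's actual engine code:

#include <cstdlib>
#include <cstddef>

// Logical pools the game code allocates from. On PS3 they map onto the two
// physical memories; on 360 both names simply fall through to one unified heap.
enum Pool { POOL_CPU, POOL_GFX, POOL_COUNT };

struct Region { unsigned char* base; size_t size; size_t used; };

static Region g_regions[POOL_COUNT];
static int    g_remap[POOL_COUNT];

void pools_init()
{
#if defined(PLATFORM_PS3)                       // hypothetical build flag
    g_regions[POOL_CPU] = { (unsigned char*)std::malloc(200u << 20), 200u << 20, 0 }; // stand-in for XDR
    g_regions[POOL_GFX] = { (unsigned char*)std::malloc(240u << 20), 240u << 20, 0 }; // stand-in for GDDR3
    g_remap[POOL_CPU] = POOL_CPU;
    g_remap[POOL_GFX] = POOL_GFX;
#else                                           // 360-style unified memory
    g_regions[POOL_CPU] = { (unsigned char*)std::malloc(440u << 20), 440u << 20, 0 };
    g_remap[POOL_CPU] = POOL_CPU;
    g_remap[POOL_GFX] = POOL_CPU;               // both logical pools collapse into one
#endif
}

// Dead-simple bump allocation; a real engine would handle alignment, frees, etc.
void* pool_alloc(Pool p, size_t bytes)
{
    Region& r = g_regions[g_remap[p]];
    if (r.used + bytes > r.size) return nullptr;
    void* ptr = r.base + r.used;
    r.used += bytes;
    return ptr;
}

The payoff described above sits in the #else branch: any split you were forced to make for the PS3 just folds into one heap on the 360, whereas going the other way means inventing a split that was never designed in.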
 
Yet the Cell can handle a lot more Geometry & Shader processes than Xenon.
You make an excellent point, except for the fact that the GPU is generally what handles geometry and shading. It's because the RSX isn't capable of keeping up with the 360's GPU that developers have used Cell to offload some of the burden.

{256MB of GDDR3 is more than generous enough to be equal to the graphical needs of all in-game action on the Xbox360.
Exactly, except for, you know, all those games that use more than 256MB. Hint: it's probably pretty much all of them.

Its target resolution promised no games below 720p. That was even part of its official FAQ. Yet as mentioned, smaller resolution makes processing easier. So games like Halo 3 (640p) wind up being less than 720p.}
Bungie made a bad choice, IMO. They apparently chose a lighting method that required them to render twice the number of frames and to do that and meet their other targets they chopped the resolution. That doesn't mean a less obtuse developer would have a problem running at 720p, most of them do.

When developing cross platform games leading on the PS3 it is easier to work around slowness on Xbox360 by making games smaller.
Yes, because the 360 is the one with the more limited RAM, exactly. Make sure to cut the texture budget too, otherwise the 360 would really be in trouble!

If you start with the PS3 then you don’t have to get creative to overcome having divided memory or programming.
Because all the divisions you make can be combined into a single pool on the Xbox360.
Then IF your game is limited by reduced memory and processing speed you can simply shrink it down on the Xbox360.
Problem solved. It is easier to down sample a few factors (Xbox360), than it is to grow into a separated parallel shape (PS3).
Interesting perspective. Here's how it's always worked in the past: a system with superior hardware can more easily handle the port from more limited systems because it has, well, superior hardware. This doesn't seem to be the case with the PS3, or at least it hasn't often been the case without a lot of blood, money, sweat, money, tears and some more money again. This is what all the fuss has been about; not that the PS3 has been held back by sub-par 360 led games, but that it couldn't match the 360 running those same games. Fanboys got riled up, Sony wasn't happy, developers were called lazy, etc.

Fortunately for gamers Sony and some developers have a solution to all this: lead on the PS3. Hey presto - no more inferior PS3 ports. It's important that you actually lead on the PS3 rather than employ these same programming methodologies which are needed for PS3 performance (and supposedly work better on the 360 as well) because... um, yeah, what was that reason again?

I know I'm pretty much one lone voice crying in the wilderness against this here but it'd be nice to see developers actually work to exploit each platform to their limit, whatever that happens to be. The least common denominator sucks.
 
I always wondered, could they have done a 384MB XDR / 384MB GDDR3 split on the PS3 given the current 128-bit buses?
 
Interesting perspective. Here's how it's always worked in the past: a system with superior hardware can more easily handle the port from more limited systems because it has, well, superior hardware.
What you just wrote here doesn't make much sense.
What does "superior hardware" mean?

This doesn't seem to be the case with the PS3, or at least it hasn't often been the case without a lot of blood, money, sweat, money, tears and some more money again. This is what all the fuss has been about; not that the PS3 has been held back by sub-par 360 led games, but that it couldn't match the 360 running those same games. Fanboys got riled up, Sony wasn't happy, developers were called lazy, etc.
PS3 (and 360 as well) is mostly held back by poor engineering practices and a lack of proper planning.
In 2005, without ever having touched or seen a 360 or PS3 SDK, other people and I were writing on this very forum what we needed to do in order to push these new CPUs (and we were mostly spot on).
Three years and millions of dollars later, a lot of studios with their senior engineers and managers still haven't got a clue.
They relied on old assumptions (next gen = the same old stuff, just faster) they used in the past and they were... ehm... simply wrong, while they had all the information they needed to make better decisions.
This barely works for the 360 and it doesn't work at all on the PS3. I can't wait to see what will happen in 2-3 years when we will have 32 cores per CPU... :)
Fortunately for gamers Sony and some developers have a solution to all this: lead on the PS3. Hey presto - no more inferior PS3 ports. It's important that you actually lead on the PS3 rather than employ these same programming methodologies which are needed for PS3 performance (and supposedly work better on the 360 as well) because... um, yeah, what was that reason again?
The best way to enforce these programming methodologies is to design and test them around the PS3, but this is not necessary at all. Lead platform doesn't mean that everything has to be done FIRST on PS3; that is another bit of nonsense.
Having PS3 as a lead platform means that every time you design some new subsystem in your engine you have to sit down for more than 5 seconds and carefully think about your data design/flow, dataset size, etc., and make sure that what comes out of your mind fits the PS3/360 architectures. Unfortunately, in order to do that you have to study and learn new things... do you see where the barrier is here?
In 10 years we will maybe have programming languages and tools that will automatically take care of this (though I'm kind of skeptical) and we won't need to spend a single minute thinking about these issues.
I know I'm pretty much one lone voice crying in the wilderness against this here but it'd be nice to see developers actually work to exploit each platform to their limit, whatever that happens to be. The least common denominator sucks.
Multiplatform games will always be about the lowest common denominator, but this lowest common denominator is a moving target and can set the bar quite high, as some AAA multiplatform titles already show (COD4 and Burnout first and foremost).
 
Bungie made a bad choice, IMO. They apparently chose a lighting method that required them to render twice the number of frames and to do that and meet their other targets they chopped the resolution. That doesn't mean a less obtuse developer would have a problem running at 720p, most of them do.

Well, they're not so much rendering the scene twice as they are outputting a given frame twice with two different levels of exposures. Doing so uses up most if not all the eDRAM and they didn't want to do tiling (for who knows what reason. The caveat of tiling is the increased geometry processing cost, but other devs here have considered it a "solved" problem).

The reason they chose the lower rendering resolution is because the lighting algorithms are very taxing on the GPU's computational power; even if there was enough eDRAM for 720p and their double framebuffer, they might very well have chosen the same lower resolution since there are already times when the frame rate can drop during the single player campaign. i.e. if the framerate is mostly stable and we're seeing a few frame rate drops, just imagine how much worse it would be with 25% more rendering work (1280x720 is 25% more pixels than 1152x640).
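For rough numbers (assuming two 32-bit colour targets for the two exposures plus a 32-bit depth buffer, no MSAA): 1152x640 is 737,280 pixels x 12 bytes = about 8.4MB, which fits inside the 10MB of eDRAM, while 1280x720 is 921,600 x 12 = about 10.5MB, which doesn't, so 720p would have forced tiling on top of the extra shading cost.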
 
Well, they're not so much rendering the scene twice as they are outputting a given frame twice with two different levels of exposures. Doing so uses up most if not all the eDRAM and they didn't want to do tiling (for who knows what reason. The caveat of tiling is the increased geometry processing cost, but other devs here have considered it a "solved" problem).

The reason they chose the lower rendering resolution is because the lighting algorithms are very taxing on the GPU's computational power; even if there was enough eDRAM for 720p and their double framebuffer, they might very well have chosen the same lower resolution since there are already times when the frame rate can drop during the single player campaign. i.e. if the framerate is mostly stable and we're seeing a few frame rate drops, just imagine how much worse it would be with 25% more rendering work (1280x720 is 25% more pixels than 1152x640).

Giving up so much for their goddamn HDR was such a bad decision. Of course, millions of screaming fans would, I'm sure, beg to differ; and even then, Halo 3 could've been rendered at 480p EDTV and it would still have been the smash hit it is.
 
To hell with UT3, it's not that great anyway. I want to see the PS3 build of CryEngine 2 instead :D. But on the difficulty with development, I honestly think it's best that the PS3 ends up being the lead platform for multi-platform titles; that way a game is completed on the "harder" platform first and can then be easily moved to the 360 or PC, and we still get a quality PS3 game.
 
PS3 (and 360 as well) is mostly held back by poor engineering practices and a lack of proper planning.
In 2005, without ever having touched or seen a 360 or PS3 SDK, other people and I were writing on this very forum what we needed to do in order to push these new CPUs (and we were mostly spot on).
Three years and millions of dollars later, a lot of studios with their senior engineers and managers still haven't got a clue.
They relied on old assumptions (next gen = the same old stuff, just faster) they used in the past and they were... ehm... simply wrong, while they had all the information they needed to make better decisions.
This barely works for the 360 and it doesn't work at all on the PS3. I can't wait to see what will happen in 2-3 years when we will have 32 cores per CPU... :)

Exactly!

The console hardware (PS3 and 360) is amazingly fast when programmed correctly. Multi-platform devs are having problems not just with the PS3 but with the 360 as well. A good part of the problem is that you can almost just port legacy single-threaded "engineered for the PC" code to the 360 and have it run almost well enough to ship a game. Split your "PC engineered" game into two threads: game on the first core and render on the second core (and other lesser things elsewhere), and you can probably get enough performance to finish the project without doing any new thinking. For those types, the PS3 simply looks like a single-core 360 (instead of a 3-core 360). So naturally complaints arise that the PS3 sucks or that the PS3 is hard to develop for, because those devs are frustrated trying to optimize the code to run on only the PPE (without using the SPUs for anything important).
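For illustration, that "game thread feeds render thread" split really can be as mechanical as the sketch below (written with modern std::thread for brevity rather than any console API; FrameData and the loop contents are placeholders). The point is how little rethinking it demands compared to spreading the same work across SPUs:

#include <thread>
#include <mutex>
#include <condition_variable>

struct FrameData { int frameIndex; };       // placeholder for per-frame render state

static FrameData g_pending;                 // single-slot hand-off between the threads
static bool g_ready = false;
static std::mutex g_mtx;
static std::condition_variable g_cv;
static const int kFrames = 100;

void game_thread()                          // "core 0": simulate and publish frames
{
    for (int i = 0; i < kFrames; ++i) {
        FrameData f; f.frameIndex = i;      // pretend gameplay/physics ran here
        std::unique_lock<std::mutex> lock(g_mtx);
        g_cv.wait(lock, [] { return !g_ready; });   // wait until render took the last one
        g_pending = f;
        g_ready = true;
        g_cv.notify_all();
    }
}

void render_thread()                        // "core 1": consume frames and draw
{
    for (int i = 0; i < kFrames; ++i) {
        std::unique_lock<std::mutex> lock(g_mtx);
        g_cv.wait(lock, [] { return g_ready; });
        FrameData f = g_pending;            // grab the frame, free the slot
        g_ready = false;
        g_cv.notify_all();
        lock.unlock();
        (void)f;                            // submit f to the GPU here
    }
}

int main()
{
    std::thread game(game_thread), render(render_thread);
    game.join();
    render.join();
}

One mutex, one hand-off slot, two cores kept busy, and the rest of the engine barely notices.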

The real problem is just as nAo said: threading + going SIMD, and to a lesser extent dealing with less cache, in-order execution, expensive branches, etc. How many devs do you think even get close to extracting the performance available on the 360? How many are even doing major SoA SIMD code?

Look at the PC side of things. How many cross platform devs still prototype or develop mostly on PCs? Many PC devs are perhaps stuck in MSVC 2005 SP1 and have to support non-SSE2 class PCs. MSVC 2005 SP1 even with /arch:SSE (meaning compile for SSE class machines) still uses the standard x86 float stack most of the time when compiling. MSVC doesn't have GCC's easy assembly templates (where registers are picked in the compiler), and SSE intrinsics compile very poorly with MSVC 2005. The point here being that even SIMD stuff on PC's is a mess and way under-utilized. Not to mention, C/C++ isn't designed to take advantage of the hardware.
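To show what "SoA SIMD" means in practice, here's a small example in plain SSE intrinsics (nothing platform-specific; the struct, the field names and the alignment/count assumptions are mine): particle positions get integrated with the data laid out structure-of-arrays, so each instruction works on the same field from four particles at once.

#include <xmmintrin.h>   // SSE intrinsics

// Structure-of-arrays layout: all x together, all y together, and so on,
// so one SSE register holds the same field from four different particles.
struct ParticlesSoA {
    float* x;  float* y;  float* z;     // positions  (assumed 16-byte aligned)
    float* vx; float* vy; float* vz;    // velocities (assumed 16-byte aligned)
};

void integrate(ParticlesSoA& p, int count, float dt)   // count assumed to be a multiple of 4
{
    const __m128 vdt = _mm_set1_ps(dt);
    for (int i = 0; i < count; i += 4) {
        _mm_store_ps(p.x + i, _mm_add_ps(_mm_load_ps(p.x + i),
                                         _mm_mul_ps(_mm_load_ps(p.vx + i), vdt)));
        _mm_store_ps(p.y + i, _mm_add_ps(_mm_load_ps(p.y + i),
                                         _mm_mul_ps(_mm_load_ps(p.vy + i), vdt)));
        _mm_store_ps(p.z + i, _mm_add_ps(_mm_load_ps(p.z + i),
                                         _mm_mul_ps(_mm_load_ps(p.vz + i), vdt)));
    }
}

The array-of-structs version of the same loop spends its time shuffling x/y/z in and out of registers, which is a big part of why it compiles (and runs) so poorly.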

The hardware trends are obvious: more cores, larger SIMD, in-order, smaller (ie shared) caches, etc. If devs are bitching now about the PS3, I cannot wait to see the reaction if they find Larrabee as the CPU+GPU in the next XBOX... how do you think they are going to manage to provide enough work to feed 64-128 threads each requiring 16-wide SIMD to get anything close to good floating point utilization, and all with similar data locality to get good cache utilization?

The PS3 is an opportunity for devs to redesign and engineer algorithms + data flow which will work extremely well on future hardware. Those who are making good use of the SPUs on the PS3 now are going to be leagues ahead on future hardware; everyone else, well, will really be complaining!
 
Let it roll...

I think a lot of developers last year...There were challenges getting the PlayStation 3 up to par with the Xbox 360. We feel like we've put that behind us.

As we look to that second batch of next-generation games - with the engines more stable and the technology more stable - I think it is a combination of being able to do them not so much in less time but with more manageable costs and also putting more quality into the game.

http://www.gamesindustry.biz/content_page.php?aid=34418
 