*Game Development Issues*

Okay, what you're saying is:
SMP-optimal-model = Cell-optimal-model, and vice versa.

But Non-Cell optimal model != non-SMP-optimal model. So there is still a difference between these 2 models. What most people are saying is that you can get away with the non-optimal situations more on the 360.

I look forward to the day when someone can show the "differences" due to the extra CPU power (visually or otherwise).

The reason that they are fundamentally similar is that performance pretty much directly correlates to the way you access memory.

The reason they still differ is that the "Cell" model isn't the only good way to keep memory accesses controlled, and for some workloads with potentially large datasets it isn't the best model on a machine like the 360.

Also, it's not just about performance anymore; ease of implementation and maintenance figure into the equation as well.

You'd be stunned how much engineering time can be lost when someone writes a function with a non-obvious bug that someone else uses six months after the original author has left the company.

But most of these types of operations can be abstracted from the implementation anyway, so it should be largely irrelevant unless you're designing interfaces that only support one implementation.
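To make that concrete, here's a minimal sketch of the kind of interface I mean (all names hypothetical, C++): gameplay code fills in a job and never learns which backend ran it.

```cpp
// Hypothetical sketch: a job abstraction that hides the execution backend.
// Gameplay code only fills in a Job; whether the scheduler runs the kernel
// on an SMP worker thread or DMAs the data to an SPU is an implementation
// detail behind the interface.
#include <cstddef>

struct Job {
    // Explicit, contiguous input/output ranges: no hidden shared state.
    const void* input;
    std::size_t inputSize;
    void*       output;
    std::size_t outputSize;
    void (*kernel)(const void* in, std::size_t inSize,
                   void* out, std::size_t outSize);
};

class JobQueue {
public:
    virtual ~JobQueue() {}
    // SMP backend: push to a worker thread. Cell backend: DMA to an SPU.
    virtual void submit(const Job& job) = 0;
    virtual void waitAll() = 0;
};
```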
 
All this discussion depends on the assumption that games are even vaguely optimal, though, and for the most part it just isn't true. As engineer count increases, shipping a game becomes more about software engineering and less about technology.

Sure, at the core tech level things still have to be vaguely optimal, assuming you're not just buying it in these days, but my experience with gameplay code (which is the bulk of the codebase) on large teams is that optimisation is way down the list of things people are thinking about, until it becomes a performance issue.

I can believe that!

It (always) takes a special team to deliver the extra oomph.


The reason that they are fundamentally similar is that performance pretty much directly correlates to the way you access memory.

The reason they still differ is that the "Cell" model isn't the only good way to keep memory accesses controlled, and for some workloads with potentially large datasets it isn't the best model on a machine like the 360.

That's right. There is usually more than one way to tackle a problem.

Also, it's not just about performance anymore; ease of implementation and maintenance figure into the equation as well.

You'd be stunned how much engineering time can be lost when someone writes a function with a non-obvious bug that someone else uses six months after the original author has left the company.

I understand. I have maintained someone else's 15-year-old code. It's even tougher if you are under strict time pressure and have non-technical people to service.
 
Except you're going from the assumption that the 'easy' way of working on the 360 is the optimal one, when other developers have mentioned that it ain't so. At the very least, it's not a given truth.

[...] What multiplatform development involving the PS3 does is it makes this more robust architecture necessary for good cross-platform performance, rather than optional for better single-platform performance.
I think the 'budget' part of Mint's statement is his point: any gain from that "more robust architecture" is outweighed by the effort to attain it (that's why Carmack said the 360 matches how games are developed now, and why Valve bitched so much about Cell). So while your neat new system may take less CPU time, it wasn't worth the manpower (while joker's saying it isn't time-consuming to implement Cell best practices on 360, that's separate from the argument that those practices might be limiting on 360 [though less so than vice versa]). In that sense, it's not optimal for the 360 from a holistic POV.

nAo is saying all this is well and good, but PS3 is the reality, so hypotheticals to the contrary are as useful as designing with blinders on. To add to nAo's point, it's almost certain PS4 will be packing even more Cell, so railing against the inevitable would be time better spent optimizing for it.

The argument (not yours) that 360 owners aren't getting the most for their money as a result of time spent 'accommodating' PS3 seems a bit short-sighted. Even imperfect competition is usually better from a consumer's POV.
 
I think the 'budget' part of Mint's statement is his point: any gain from that "more robust architecture" is outweighed by the effort to attain it (that's why Carmack said the 360 matches how games are developed now, and why Valve bitched so much about Cell). So while your neat new system may take less CPU time, it wasn't worth the manpower (while joker's saying it isn't time-consuming to implement Cell best practices on 360, that's separate from the argument that those practices might be limiting on 360 [though less so than vice versa]). In that sense, it's not optimal for the 360 from a holistic POV.

Actually, though, what I'm referring to isn't CPU performance, but dev time. From what I've read and even accounts here, a 'SPE-centered' approach enforces certain behavior that may help prevent race conditions, as it forces you to be far more aware about your thread's data (I'm going by Insomniac's SPE Shader model, which seems like a good baseline). Naturally, it's not going to be a panacea; there's the old adage that you can program FORTRAN in any language.

Maybe Carmack hasn't seen much of an advantage because he's already extremely careful regarding threads: the extra overhead could just be a PITA. (But then, as the legend goes, Von Neumann thought assemblers were a waste of time.)
 
Okay, what you're saying is:
SMP-optimal-model = Cell-optimal-model, and vice versa.

But Non-Cell optimal model != non-SMP-optimal model. So there is still a difference between these 2 models. What most people are saying is that you can get away with the non-optimal situations more on the 360.

I look forward to the day when someone can show the "differences" due to the extra CPU power (visually or otherwise).

Exactly. I really want to understand the statement that Xenon is not a good CPU. Not good in regard to what... the Cell? Because of theoretical performance or actual performance? Or are we now saying that, given the choice, you would choose Cell over Xenon? By no means am I a developer; it's just that since the beginning of this gen all I hear is Cell this and Cell that, without any proof.

I would think that the best way to analyse these systems is as a whole.
I would like to hear what multiplat devs have to say about Xenon being "bad".

Mod: I've added the missing capitals for you. Maybe this got moved before you saw it, but please read and adapt: http://forum.beyond3d.com/showthread.php?t=49336
 
It looks to me that Xenon IS a very good CPU. I don't think there is a single developer who says otherwise.

According to some insightful replies here when I asked in the past, it seems that Cell is probably a bit more powerful, albeit harder to work with.
 
It looks to me that Xenon IS a very good CPU. I don't think there is a single developer who says otherwise.

According to some insightful replies here when I asked in the past, it seems that Cell is probably a bit more powerful, albeit harder to work with.

You know, stuff like this has been said for the past three or so years. It doesn't make much of a difference if the most talented technology coders aren't aboard your ship. Looking at the 360 hardware as a whole, I am starting to believe the first-party developers have migrated too much from the PC to even begin maximizing and coding effectively for the one configuration. The 360 CPU is actually unremarkable; it's the GPU that, if maxed out, should still give a game the edge over PS3. That isn't happening.

Whether or not teams that focus solely on PS3 can really use Cell enough to improve "graphics", or trick the RSX out to a good extent, we are still seeing the most impressive-looking games exclusively on PS3 (Uncharted being number one; those textures and colours are unrivalled).

Yeah, Mass Effect looked great, but did it run great? Far from it. It felt like it was chucked onto the system, even though the PC version was the port (and a much better build).
 
Can't disagree.

Doesn't a similar thing also apply to the PS3, given the difficulty of programming Cell? It will all depend on the tools, of course, and I sometimes wonder whether Cell is able to "branch" all game-related tasks efficiently to the SPUs.

It is still an exotic architecture and probably lacks some flexibility when it comes to what you can throw at the SPUs. Any clarifications from the experts in here? ;)
 
I would like to hear what multiplat devs have to say about Xenon being "bad".

If you code for it in older programming methodologies then it will run kinda slow. However, there are still three cores in there. So, short term, there was still enough power to use non-optimal existing methods and/or code base and make framerate, so long as you at least made some basic use of threading. As with the rest of the 360, I think they struck a very good balance in the CPU design, because you could get away with non-optimal code in the short term, yet performance gets much better over time as you slowly switch to the new techniques it prefers. This was *critical* in the short term when everyone was scrambling to ship product, and it gives the machine good legs long term because code keeps getting faster as we rewrite more of it to suit its liking.
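As a minimal illustration of what "basic use of threading" buys you, here's a fork/join split of one frame's work across three workers. This is portable C++ purely as a sketch; a real 360 title would use persistent Win32 threads pinned to hardware threads rather than spawning std::threads per frame.

```cpp
// Sketch of "basic use of threading": split one frame's entity updates
// across three workers (one per core), then join before the next frame.
// Spawning threads per frame is wasteful; real engines keep workers alive.
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

struct Entity { float x, y, vx, vy; };

static void updateRange(std::vector<Entity>& ents,
                        std::size_t begin, std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i) {
        ents[i].x += ents[i].vx * dt;
        ents[i].y += ents[i].vy * dt;
    }
}

void updateFrame(std::vector<Entity>& ents, float dt) {
    const std::size_t n = ents.size();
    const std::size_t chunk = (n + 2) / 3;          // ceil(n / 3)
    std::vector<std::thread> workers;
    for (std::size_t c = 0; c < 3 && c * chunk < n; ++c) {
        const std::size_t b = c * chunk;
        const std::size_t e = std::min(n, b + chunk);
        workers.emplace_back(updateRange, std::ref(ents), b, e, dt);
    }
    for (std::size_t i = 0; i < workers.size(); ++i)
        workers[i].join();                           // sync point per frame
}
```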
 
patsu said:
It (always) takes a special team to deliver the extra oomph.
The issue is not the quality of individuals as much as the fact that it's nigh impossible to have good codebase quality control as the team size increases.
As ERP said, you get a development process where it's "quantity over quality" as far as code goes, and optimization becomes a patching process, which is not a great way to approach things if you want true efficiency, regardless of the underlying hardware.

Nesh said:
It looks to me that Xenon IS a very good CPU. I don't think there is a single developer who says otherwise
Ehm... it's OK conceptually, but I don't think I know anyone who thinks IBM did a good job with the console PPC cores (Xenon and Cell alike).

Mintmaster said:
I'm still not convinced that project directors are misallocating resources or making bad technical decisions, which is a fundamental but unstated component of your point of view.
It's his point of view because that's one of the realities of the industry. And it's not exclusive, or new to this generation of consoles either.

Just because the practices that make code Cell-friendly also make it run better on Xenon doesn't mean that implementing them will give you a better game
By the same line of reasoning, C++ is no guarantee of better games either, but we still code games that way instead of, say, Java, even though it means we spend more time coding.
Or more likely, just throw more people at the problem.
 
Clearly you were upset by my comment but like I said it's not much of a secret and I like that you're one of the few developers who isn't afraid to show it.
You got it wrong. What really upsets me is a half-trolling comment based on my passions, not on my arguments.
I'm sure more developers have their own preferences; it'd be better for everyone if people were more candid, because then there would be fewer hidden agendas.
Who are you referring to? My preferences are crystal clear.

B3D is interesting but I can't spend my life patrolling the boards. :)
Then avoid inflammatory comments not based on any factual statement.

What you don't seem to want to admit is that this will lead to games where the technical level will drop because there is no competition. Instead of trying to reach a high bar they'll settle for the lower one because nobody will be able to tell the difference.
Multiplatform games are all about the lowest common denominator and always will be. Developing on 360 and PS3 using PS3 as a 'lead platform' won't generally hurt 360; in many cases PS3 coding practices will actually benefit 360.

If it is, then the PS3 should have no trouble keeping up with or surpassing the 360, and I just don't see this happening if the PS3 is chosen as the lead platform solely to help companies make a buck. It's like all the crying about "lazy developers" is going to be rewarded by... you guessed it, lazy developers.
The proof that you are wrong is in the mythological pudding: developers already went down the so-called lazy route; when you don't know what to do, you take the simplest and apparently most logical decision. Too bad they/we were wrong, and this is why the tune is changing. Change is always painful; almost no one changes just for the sake of it. It's evolution, baby: you change or you die!
And for the 1000th time, it's not about personal preferences, it's about shipping the best multiplatform game possible while trying to not die in the process.
If you are looking for games that push your 360 then you have to search elsewhere; try 360-only titles :)
 
If you code for it in older programming methodologies then it will run kinda slow. However, there are still three cores in there (Xbox 360). So, short term, there was still enough power to use non-optimal existing methods and/or code base and make framerate, so long as you at least made some basic use of threading....

About the older programming methods: do you mean a monolithic single-threaded game engine, the way old-school PC game engines were built?

It's actually surprising that early multiplatform games "made by lazy devs" even ran on PS3 and delivered some sort of (playable) game experience. I mean, as we have understood it, an Xbox 360 copy-paste port to PS3 should be a failure from the start. Not to mention the time constraints given to multiplat devs; those must have been miserably long and stressful days.

About the many-core game engines: what tasks do PS3 developers put on the SPU cores nowadays? AI, physics simulation, audio mixing, animation transformation, etc.?

SPU cores have local RAM, and both data and program must fit on the core. An SPU does not see the whole 256MB+256MB memory area in one go, but must manually fetch/stream the active data into its local SPU RAM.

Are Xbox 360 engines (the new way) built to simulate the same very small independent tasks, with a TaskManager thread controlling the workflow? Are tasks implemented to fetch their active data into a "local task RAM" and write it back before fetching a new chunk? Something like the sketch below?
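Here's the pattern I mean (portable C++; memcpy stands in for the DMA and a fixed scratch buffer stands in for local store, and all the names are made up):

```cpp
// The "local task RAM" pattern: pull a chunk of the working set into a
// small scratch buffer, process it there, write the result back, repeat.
// On an SPU the two memcpy calls would be mfc_get / mfc_put DMA transfers.
#include <cstddef>
#include <cstring>

const std::size_t kScratchBytes = 16 * 1024;   // pretend "local task RAM"

void processStreamed(const float* src, float* dst, std::size_t count) {
    float scratch[kScratchBytes / sizeof(float)];
    const std::size_t perChunk = kScratchBytes / sizeof(float);

    for (std::size_t done = 0; done < count; done += perChunk) {
        const std::size_t n =
            (count - done < perChunk) ? count - done : perChunk;
        std::memcpy(scratch, src + done, n * sizeof(float));  // "fetch"
        for (std::size_t i = 0; i < n; ++i)    // work only on local data
            scratch[i] *= 2.0f;
        std::memcpy(dst + done, scratch, n * sizeof(float));  // "write back"
    }
}
```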
 
Do we have any figures supporting this view, or some research? I know multiplatform issues make a lot of noise on the internet, but are sales really impacted by the knowledge that one platform is inferior to another?
Well, you can't really release the same game twice, once with and once without issues. But at the end of the day, this is more of a marketing thing and we're engineers. We try to deliver what marketing wants, and they don't want inferior versions. ;) So the pressure may not come from the market itself, but it's still there.

On a less hearsay basis, I promise you that you will see a lot fewer cross-marketing opportunities from the manufacturer of the inferior version. This may or may not impact sales, but it will impact your own marketing budget/clout.

Including the install and persistent data space, or just system cache?
I have to wave the NDA-flag here. But think about a limited HDD and unlimited numbers of games and you'll get an idea. Sorry.

I believe that, at least on Linux, the PS3 hypervisor reroutes SPE IO calls to the OS on the PPE via soft interrupts. I think standalone performance is pretty similar to PPE IO calls.
Well, inside an SPE, the SPUs can only communicate through channels. Those are very minimalistic and basically lead to the MFC. The MFC is mostly an overglorified DMA processor, so it moves data to and from local store. If you want to access OS services, you can of course use a background thread that waits for commands from the SPEs and then processes them. But what do you use the SPU for then? (Disclaimer: I'm not an expert in writing operating systems on Cell. I may be missing something here.)
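For concreteness, the SPU side of that MFC traffic looks roughly like this with the Cell SDK intrinsics (a minimal sketch based on spu_mfcio.h; tag management, size/alignment rules, and double buffering are all glossed over, so treat the details as a sketch rather than reference code):

```cpp
// SPU-side sketch: DMA a buffer from main memory into local store via the
// MFC, process it there, DMA it back. Compiled with spu-gcc.
#include <spu_mfcio.h>

#define CHUNK 4096
static char ls_buf[CHUNK] __attribute__((aligned(128)));

void stream_chunk(unsigned long long ea_in, unsigned long long ea_out) {
    const unsigned int tag = 1;

    mfc_get(ls_buf, ea_in, CHUNK, tag, 0, 0);  // main memory -> local store
    mfc_write_tag_mask(1 << tag);              // select which tag to wait on
    mfc_read_tag_status_all();                 // block until the get lands

    for (int i = 0; i < CHUNK; ++i)            // work entirely in local store
        ls_buf[i] ^= 0x55;

    mfc_put(ls_buf, ea_out, CHUNK, tag, 0, 0); // local store -> main memory
    mfc_write_tag_mask(1 << tag);
    mfc_read_tag_status_all();                 // ensure write-back completed
}
```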

I'm still not convinced that project directors are misallocating resources or making bad technical decisions, which is a fundamental but unstated component of your point of view.
It's not misallocating, it's just sub-optimal. Basically, people who have to decide methodology for large shops *should* be risk-averse. It's a good thing, I guess, most of the time. People need time to switch methodologies, and some take longer than others. Now if you have a 30-programmer team and your hot-shot engine programmers go fully job-driven, this will cause trouble for the less technically inclined people. And on the other hand, your hot-shot tech dudes don't usually make these decisions. ;)

You can't make something Cell-friendly for free, and you can't get Xenon benefits for free.
I'm not totally sure if I'm on the same page as nAo (sorry for inserting myself into your discussion, BTW), but I don't see it as Cell vs. Xenon. Yes, writing the SPU code takes time and writing the VMX128 code takes time as well. These are the platform specifics. The rest is architecture you would even want on the PC. Interestingly, one of the first instances of "we have to do it like that" in our project came from the PC tech-god, btw.

For any given multiplatform budget, you will spend more on coding and less elsewhere if you want PS3 parity. This, and this alone, is my point.
Point taken. It all depends on where you want to compete. I can make awesome XBLA titles that run on one core. If I want to compete with the big boys, I'll have to get as much out of the machine as possible. But sure, if I compete on gameplay and not on tech, PS3-like coding is overkill, no doubt. Even on the PS3.

Okay, what you're saying is:
SMP-optimal-model = Cell-optimal-model, and vice versa.

But Non-Cell optimal model != non-SMP-optimal model. So there is still a difference between these 2 models.
Of course! There are more models than the two. Hell, computer science is so full of models, you'd think it's fashion week.
 
Are Xbox 360 engines (the new way) built to simulate the same very small independent tasks, with a TaskManager thread controlling the workflow? Are tasks implemented to fetch their active data into a "local task RAM" and write it back before fetching a new chunk?
There was a presentation on this at Gamefest 2008.
I tried to gather information in the related thread (tech section); nobody answered, and as some members hinted, it must be under NDA.
 
Well, inside an SPE, the SPUs can only communicate through channels. Those are very minimalistic and basically lead to the MFC. The MFC is mostly an overglorified DMA processor, so it moves data to and from local store. If you want to access OS services, you can of course use a background thread that waits for commands from the SPEs and then processes them. But what do you use the SPU for then? (Disclaimer: I'm not an expert in writing operating systems on Cell. I may be missing something here.)

Yeah, this doesn't sound completely right. Maybe someone can chime in ... I do have the full 10MB IBM Cell Programmer's Manual lying around here as a PDF but no time right now. I have never seen anyone talk about SPU and SPE like this before anyway, so that's interesting in itself. I think the point is that an SPE is quite fully featured - I heard Insomniac talk about running the game main loop from an SPE and that this may even be the best way to do it.

Actually an answer to this question probably helps more - why are you talking about the SPU's capabilities in isolation of the SPE here?

I'm not totally sure if I'm on the same page as nAo (sorry for inserting myself into your discussion, BTW), but I don't see it as Cell vs. Xenon. Yes, writing the SPU code takes time and writing the VMX128 code takes time as well. These are the platform specifics. The rest is architecture you would even want on the PC. Interestingly, one of the first instances of "we have to do it like that" in our project came from the PC tech-god, btw.

That does sound like you're on the same page as nAo.

Point taken. It all depends on where you want to compete. I can make awesome XBLA titles that run on one core. If I want to compete with the big boys, I'll have to get as much out of the machine as possible. But sure, if I compete on gameplay and not on tech, PS3-like coding is overkill, no doubt. Even on the PS3.

Ah but that I disagree with. Think about something simple like Super Stardust (or in a different way, Geometry Wars), a game with some really excellent gameplay. It's totally reinvigorated by using the Cell to allow for such an insane amount of objects flying around, colliding with each other, chasing you, and so on. There are a lot of new styles of gameplay involving physics (think from MotorStorm, through Pain, to LittleBigPlanet) that can offer something new in terms of gameplay. I still believe that in this respect, things haven't changed that much from the very first days of computing - gameplay and tech are still very strongly linked. However, you can still also do something like Fl0w or Braid of course, that doesn't need a lot of tech and still oozes gameplay, although then again the former does benefit a lot from motion sensing ... which brings me to the Wii and gameplay, and tech. I think you see where I'm going with this. ;)
 
There was a presentation on this at Gamefest 2008.
I tried to gather information in the related thread (tech section); nobody answered, and as some members hinted, it must be under NDA.

I remember that there have been some discussions here that you can lock parts of the Xenon cache to work in a similar way to the SPUs' local store, and something similar with the VMX unit in particular.
 
I remember that there have been some discussions here that you can lock parts of the Xenon cache to work in a similar way to the SPUs' local store, and something similar with the VMX unit in particular.
I also read something like that, but based on the comments it looks pretty much useless, and it's pretty much standard on PPC implementations.

I was in fact thinking about these two presentations:

Microsoft Directions in Parallel Computing and Some Short Term Help:
This talk focuses on the native task scheduler being announced by the Parallel Computing Platform group in Microsoft this spring and offerings that are available in the XDK. The scheduling of tasks within games can improve resource utilization, load balancing, and performance. For games targeting the current generation of PCs and the Xbox 360 console, we discuss an interim solution. Previous talks given on this topic laid the foundation for using tasks to move work required by the engine from an over-utilized hardware core to an underutilized core. A progression of task and scheduler designs is presented that start with simple implementations and move to more complex designs that have low-overhead. The simple implementations are satisfactory for a small number of tasks but impose a prohibitive amount of overhead when the number of tasks grows. Finally, we present the work-stealing algorithm that pulls work from one core to another in the low-overhead scheduler.

And

I also think that this gives us another hint:
Xbox 360 Compiler and PgoLite Update:
The Xbox 360 compiler has changed dramatically in the last year, which changes the rules for how to write efficient Xbox 360 code. Many of the improvements automatically make your code faster, but others require you to change your code in order to reap the benefits. PgoLite has also improved and should be used differently, to get even better results. This talk summarizes the past year's developments, and gives simple rules for how to get maximum benefit from the changes.

The proper thread is here:
http://forum.beyond3d.com/showthread.php?t=49183

But sadly most of this seems to be under NDA.
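To make the work-stealing idea from the first abstract concrete, here is a deliberately simplified sketch (mutex-based for clarity; the low-overhead scheduler the talk describes would use lock-free deques, and every name below is made up):

```cpp
// Simplified work stealing: each worker owns a deque of tasks. The owner
// pushes/pops at the back (newest first, cache-friendly); an idle worker
// steals from the front of a victim's deque (oldest first).
#include <cstddef>
#include <cstdlib>
#include <deque>
#include <functional>
#include <mutex>

struct WorkerQueue {
    std::deque<std::function<void()> > tasks;
    std::mutex lock;

    void push(std::function<void()> t) {
        std::lock_guard<std::mutex> g(lock);
        tasks.push_back(t);
    }
    bool popLocal(std::function<void()>& t) {   // owner end
        std::lock_guard<std::mutex> g(lock);
        if (tasks.empty()) return false;
        t = tasks.back(); tasks.pop_back();
        return true;
    }
    bool steal(std::function<void()>& t) {      // thief end
        std::lock_guard<std::mutex> g(lock);
        if (tasks.empty()) return false;
        t = tasks.front(); tasks.pop_front();
        return true;
    }
};

// Worker loop: drain your own queue first, then try a random victim.
// (Victim may be yourself; harmless here. Termination/sleeping omitted.)
void workerLoop(std::size_t self, WorkerQueue* queues, std::size_t numWorkers) {
    std::function<void()> task;
    for (;;) {
        if (queues[self].popLocal(task) ||
            queues[std::rand() % numWorkers].steal(task))
            task();
    }
}
```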
 
Yeah, this doesn't sound completely right. Maybe someone can chime in ... I do have the full 10MB IBM Cell Programmer's Manual lying around here as a PDF but no time right now. I have never seen anyone talk about SPU and SPE like this before anyway, so that's interesting in itself. I think the point is that an SPE is quite fully featured - I heard Insomniac talk about running the game main loop from an SPE and that this may even be the best way to do it.

Actually an answer to this question probably helps more - why are you talking about the SPU's capabilities in isolation of the SPE here?

Well, it's IBM nomenclature. SPE=SPU+MFC. My reason for breaking it up is to show that you'll need someone on the same *PE as the OS to make calls to the driver. I guess you can run an OS on the SPEs instead of the PPE, as all IO is memory-mapped, IIRC. But this is academic, as nobody does that. :)
 
Well, it's IBM nomenclature. SPE=SPU+MFC. My reason for breaking it up is to show that you'll need someone on the same *PE as the OS to make calls to the driver. I guess you can run an OS on the SPEs instead of the PPE, as all IO is memory-mapped, IIRC. But this is academic, as nobody does that. :)
I remember from early discussions about this that a Cell without a PPU couldn't run/boot an OS.
EDIT
I don't remember the reason, maybe something wasn't supported (IRQs?), but it's been a while (likely to be wrong).
 
Who are you referring to? My preferences are crystal clear.
Which is why I said it the way I did.

Multiplatform games are all about the lowest common denominator and always will be. Developing on 360 and PS3 using PS3 as a 'lead platform' won't generally hurt 360; in many cases PS3 coding practices will actually benefit 360.
Coding practices are agnostic; they don't generally care what platform you implement them on. How well they run on a given platform is a different story, but that's not really the point. Nothing I've seen you or anyone supporting your position say suggests how your prescribed solution (leading on the PS3) will lead to better games, just less controversy.

The proof that you are wrong is in the mythological pudding: developers already went down the so-called lazy route; when you don't know what to do, you take the simplest and apparently most logical decision. Too bad they/we were wrong, and this is why the tune is changing. Change is always painful; almost no one changes just for the sake of it. It's evolution, baby: you change or you die!
The "mythological" pudding says no such thing. Your so-called "lazy" way led to some of the best looking games among both 1st and 3rd parties. Assassin's Creed for example. It took them a while to get the PS3 version running as wll as the 360 one but I'd much rather see that then both systems run what is easily done on the weakest platfom with the current tools. For sure it'd be easier for Ubi to do what you suggest. They'd have saved money and since all anybody ever saw was the lowest common denominator nobody could complain much, but it wouldn't have made a better game.

And for the 1000th time, it's not about personal preferences, it's about shipping the best multiplatform game possible while trying to not die in the process.
What makes it about personal preference is your suggestion that leading on the PS3 is the only or best way to implement this, and that if the games don't push the platforms quite as hard it's still all to the good because it's only multiplatform after all. I don't see it that way, and I don't think it's healthy for the industry to cement the third parties in such a subservient role. That way lies Nintendodom.

I think that especially for multiplatform games, competition is what improves the platform. If the 360 weren't around to show up lower-resolution textures on the PS3, would Sony bother to shrink their system memory usage? If the 360 didn't have a better vertex rate, would they have worked so hard to develop the Edge tools? How about online: if the 360 weren't pushing Sony, would we have free online and Sony spending a mint to improve their services?

If you are looking for games that push your 360 then you have to search elsewhere; try 360-only titles :)
Well, I'd have to get one first. :)
 