*Game Development Issues*

My idea was to have zero LS wasted for code. Pure hardware streaming, and the hardware takes care of suspending/resuming your "thread" etc, and latency can be managed via several hyperthreads, very similar to how GPUs handle Vertex/Pixel shader streaming.

One might be able to stream code "just in time" manually, but that might be very difficult to pull off. It is easier if you partition the code like Insomniac suggested and bring snippets with your data.
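To make that concrete, here's roughly what the manual "bring snippets with your data" variant looks like on an SPU (a toy sketch only: the names are invented, real engines would use proper overlay/linker tooling, and I'm assuming the blob is position-independent code whose size is a multiple of 16):

```cpp
#include <spu_mfcio.h>
#include <stdint.h>

#define SNIPPET_TAG 3

typedef void (*snippet_fn)(void *data);

// Scratch area in local store for incoming code, aligned for DMA.
static uint8_t ls_snippet[16384] __attribute__((aligned(128)));

void run_streamed_snippet(uint64_t ea_code, uint32_t code_size, void *ls_data)
{
    // Pull the code blob in from main memory alongside its data...
    mfc_get(ls_snippet, ea_code, code_size, SNIPPET_TAG, 0, 0);
    // ...block until the transfer completes...
    mfc_write_tag_mask(1 << SNIPPET_TAG);
    mfc_read_tag_status_all();
    // ...then jump straight into the freshly streamed code.
    ((snippet_fn)ls_snippet)(ls_data);
}
```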

I understand and appreciate your thoughts. I suppose I should have qualified what I said. I meant that this could work in specific circumstances, and perhaps in some of the situations you are facing. I do not mean that this should be a general practice in programming for SPUs - that would be a difficult undertaking, to say the least. I think most would agree with you and Insomniac.

Barbarian said:
Well, some issues are inherent to C/C++, such as aliasing problems (exemplified with the "this" pointer), but I think compilers can definitely improve more. Some of the issues are caused by the ABI as well, for example, the inability to return a Matrix class in a set of 4 registers. Touching memory on those PPC cores is a nightmare because of the rampant LHS/cache issues. On SPU it's much much simpler really, there, it's just getting GCC to output decent code, that's all. For now even Sony admits that C with Intrinsics is the way to go - the ICE team was quoted as saying they get 20x improvements compared to any vector abstraction.

Well, debates are never long when it comes to where C and C++ belong. Core engine = C with intrinsics. Higher-level stuff = C++/some scripting language. I wonder why compilers still aren't a bit more proficient with virtual functions... I just read a paper on compiler tech claiming an 18% performance improvement across the board (by optimizing them out, etc.), and it was dated back in 1996.

Also when it comes to something as basic as a vector it's probably going to be difficult for some time to best intrinsics.
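For example, something as basic as a dot product is hard to beat once it's written straight against the intrinsics (toy sketch; SSE shown here, the VMX version is analogous):

```cpp
#include <xmmintrin.h>

// Horizontal dot product of two 4-float vectors, result in lane 0.
inline __m128 dot4(__m128 a, __m128 b)
{
    __m128 m = _mm_mul_ps(a, b);                    // ax*bx, ay*by, az*bz, aw*bw
    __m128 t = _mm_add_ps(m, _mm_movehl_ps(m, m));  // lane0 = m0+m2, lane1 = m1+m3
    return _mm_add_ss(t, _mm_shuffle_ps(t, t, 1));  // lane0 = (m0+m2)+(m1+m3)
}
```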

With respect to the ABI, has any group brought such issues to Sony? Something a little more forceful than reports and bug tickets, if you know what I mean.

PPE cache issues...like stalling on hits? Every effort to offload work from the PPE is a good effort as I'm sure you well know.

I'm curious, have you attempted or had any success with compile-time polymorphism? I know I'll get crucified for mentioning templates, but they can work out if handled correctly.
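Roughly the kind of thing I mean - the dispatch is resolved at compile time via CRTP, so there's no vtable and the call can be inlined (toy sketch, names invented):

```cpp
// Static "virtual" dispatch: the base knows its concrete type at compile time.
template <typename Derived>
struct UpdatableT
{
    void update(float dt)
    {
        // No vtable lookup - this call binds at compile time.
        static_cast<Derived*>(this)->do_update(dt);
    }
};

struct Particle : UpdatableT<Particle>
{
    float pos, vel;
    void do_update(float dt) { pos += vel * dt; }
};

template <typename T>
void update_all(T* items, int count, float dt)
{
    for (int i = 0; i < count; ++i)
        items[i].update(dt);   // inlinable, no indirect branch
}
```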

Barbarian said:
I actually really like the openness that Insomniac show with their technology blogs. I work for a 3rd party developer and we very rarely get a glimpse of what the 1st party studios are doing, let alone get access to their tech.

Insomniac is certainly to be commended for taking that stance in an NDA-stricken world. For that matter, I read somewhere that Naughty Dog is going to be a lot more open with their tech as well. It's not a bad idea to listen to what they have to say either.

Barbarian said:
I do believe A LOT of developers could benefit from Sony delivering a healthy mix of high-level libraries coupled with low-level access when needed. Sony DID try that with the initial OpenGL implementation, but nobody was happy because it ran poorly. Then they switched to GCM, which is a very low-level library, but unfortunately it was (last I checked) incompatible with the OpenGL layer, hence fracturing development efforts and forcing studios to choose which way to go very early on, possibly before they even knew what they needed.

The moaning about PSSG was/is hard to miss. Again I'd like to ask whether a concerted group has brought, or is willing to bring, the issue directly to Sony? It never hurts to have proper direction going forward.
 
?!?!

What is so calculation-intensive in this game, given that you've talked about individual military unit AI on the SPUs?

It's not calculation intensive, it's just not performance optimised, and it's difficult to optimise because it's all data driven and high-level friendly. What's good for a nice external real-time editing tool isn't necessarily good for an in-order processor...

Unlike the engine systems, which were designed and built by people who know how to write high-performing code, most gameplay systems weren't.

We live and learn...
 
In future titles (some already do this) I'm sure you'll see more work being transferred to the SPUs, for this very reason. I'm guessing that for Heavenly Sword, that option came too late ... Or am I completely off the ball here?

It's more a case of work required: you have a big complex piece of code that accesses a large set of data. To get it to work well on SPU, you have to rework lots of data, and that breaks tools and paradigms and workflow... that's a lot harder to optimise than counting a few cycles on a GPU.
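As a made-up before/after of the kind of data rework involved (field names invented): the tool-friendly layout on top, the DMA/SPU-friendly one below:

```cpp
struct GameObject          // "High-level friendly": one object, many fields,
{                          // scattered pointers, easy to edit in a tool.
    float pos[3];
    float health;
    void* behaviour;       // Pointer chasing = death on an in-order core.
};

struct GameObjectsSoA      // SPU-friendly: contiguous, DMA-able arrays
{                          // that can be streamed and processed in bulk.
    float* pos_x;
    float* pos_y;
    float* pos_z;
    float* health;
    int    count;
};
```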
 
It's not calculation intensive, it's just not performance optimised, and it's difficult to optimise because it's all data driven and high-level friendly. What's good for a nice external real-time editing tool isn't necessarily good for an in-order processor...

Unlike the engine systems, which were designed and built by people who know how to write high-performing code, most gameplay systems weren't.

We live and learn...

That sounds like we can expect a more stable frame rate in HS2 :D

I have completed the game 6 times fresh from the start (not to mention collecting all 129 glyphs in hell mode), and I'm still playing it whenever I have time. My favorite is the 4th level, where there are fewer frame drops & less tearing even with more action!

I bought a PS3 solely for the upcoming GOW3; now I want a HS sequel even more than GOW3!!

Hope we can hear some good news soon :D
 
I'm curious, have you attempted or had any success with compile-time polymorphism? I know I'll get crucified for mentioning templates, but they can work out if handled correctly.

Well, templates are my friends. I actually consider templates the most important addition to C++, while I can easily live without built-in virtual functions. Having said that, there are other very important aspects of C++ that are not easy to ignore - i.e. the pillars of OOP: abstraction, inheritance and polymorphism. I can give up inheritance and polymorphism if I have to, but abstraction is a must. Having compilers that choke on abstracting VMX vectors into simple classes, so I can have a cross-platform math library, is beyond embarrassing. This is not easy of course, but as of now both Visual Studio and GCC have trouble with SIMD abstractions.
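The kind of abstraction I mean is nothing exotic - a thin wrapper that ideally compiles down to the raw intrinsics but in practice often gets spilled to memory (toy sketch; SSE shown, VMX analogous):

```cpp
#include <xmmintrin.h>

// Minimal cross-platform vector class over the native SIMD type.
class Vec4
{
    __m128 v;
public:
    explicit Vec4(__m128 raw) : v(raw) {}
    Vec4(float x, float y, float z, float w) : v(_mm_setr_ps(x, y, z, w)) {}

    // These should cost exactly one instruction each after inlining.
    friend Vec4 operator+(Vec4 a, Vec4 b) { return Vec4(_mm_add_ps(a.v, b.v)); }
    friend Vec4 operator*(Vec4 a, Vec4 b) { return Vec4(_mm_mul_ps(a.v, b.v)); }
};
```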
 
Well, templates are my friends. I actually consider templates the most important addition to C++, while I can easily live without built-in virtual functions. Having said that, there are other very important aspects of C++ that are not easy to ignore - i.e. the pillars of OOP: abstraction, inheritance and polymorphism. I can give up inheritance and polymorphism if I have to, but abstraction is a must. Having compilers that choke on abstracting VMX vectors into simple classes, so I can have a cross-platform math library, is beyond embarrassing. This is not easy of course, but as of now both Visual Studio and GCC have trouble with SIMD abstractions.

MSVC/GCC have had x86 SIMD (SSE1 through, pretty soon, SSE5) to play with for a while, so it's apparent there is some difficulty, but it's still sort of disheartening that things aren't better. I do understand it's not easy, though.

Maybe some smooth talking (or fervent prostration) can convince those Blitz++ guys and gals to steal a couple more optimization duties from the compiler. Then again, I'm not sure they can actually affect all that much.

It's a waiting game, I guess, but it wouldn't hurt to ask for what you need. You might get lucky.

I too think templates are invaluable in C++, as generic programming and many design patterns are just out of the question without them. I'm glad I didn't have to run for the hills for bringing them up, for once.
 
Well, one could make a very good argument that the ICE team's involvement with Edge IS proof of SPU usage in Uncharted. After all, they formed the ICE team foremost for their own needs and THEN shared a pared down version of their tools with the dev community.
s/pared down/second generation/
 
This is why higher polycounts near silhouettes (i.e. your intelligent distribution idea) isn't going to achieve that much better quad efficiency than higher polycounts everywhere
You're right, stuff that doesn't have much space for interiors but just a silhouette can be rendered via brute force.

, as the triangles at the edge are the ones that really hurt efficiency in the first place. I'll admit that culling/clipping gets more efficient, but it seems you're focusing on quad efficiency in this post.
I'm also focused (at least in my mind!) on better culling and clipping. For example, on PS3 we have EDGE, which allows us to cull everything that will not generate any fragment to shade in the rasterizer, but I'm not sure it's the best way to handle this kind of thing on PS3. But this is another story/another thread/another lost occasion to keep my mouth shut :)

In light of antialiasing, though, it's still questionable whether that's a good way to do things in general (by that I mean deferring computations to preserve quad-level efficiency). Sure, for N sample PCF you can distribute the shader load across the samples like in KZ2. But most shaders (including VSM) can't do that, and if you start looking at which samples are equal for selective supersampling, you're back to square one wrt efficiency.
We need an efficient way (something that is more than a hack) to determine if we need to shade at pixel level or at subsample level. Or we move to a rendering architecture where we shade stuff at a certain rate and then resample and composite at a different rate.. ;)
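For what it's worth, the quad efficiency we keep talking about is easy to estimate offline: given the pixels a triangle covers, count the distinct 2x2 quads the rasterizer has to launch (toy sketch, not tied to any real hw):

```cpp
#include <set>
#include <utility>
#include <vector>

// pixels: (x, y) coordinates covered by one triangle.
double quad_efficiency(const std::vector<std::pair<int,int>>& pixels)
{
    std::set<std::pair<int,int>> quads;
    for (const auto& p : pixels)
        quads.insert({p.first / 2, p.second / 2}); // which 2x2 quad is this pixel in?
    if (quads.empty()) return 0.0;
    // The GPU shades 4 lanes per quad; only the covered pixels are useful work.
    return double(pixels.size()) / (4.0 * quads.size());
}
```

Sliver triangles along silhouettes score badly here (many quads, few covered pixels), which is exactly the point being made above.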


So blending isn't free? Aside from imperfect separation during tiling and the additional quads, AA isn't free? That's news to me.

EDIT: Oh, you're talking about blending FP10/I16, aren't you. Yeah, that's a shame...
Not only that, can't say more.

Well that's absolutely true, but vertex shading is rarely the bottleneck now, is it...
On PS3 it's easy to be vertex shading limited if you don't do things in certain ways (no rocket science, believe me).
RSX is a GPU that has to work with different kinds of memory and has to be efficient with both (GDDR and XDR); people who don't understand that, and don't understand the design decisions made upon those requirements, will get poor shading performance.
The funny thing is that we now have on PS3 some amazing profiling tools for the GPU that really put PIX to shame in some areas, so if you understand what you're doing it's really easy to find a way to improve your performance (if you haven't hit almost theoretical perf numbers yet..)

Isn't that the real bottleneck most of the time? If he has 10M verts per frame counted the way that you're describing, that likely means ~10M tris/frame, right?
Triangle setup can certainly be a bottleneck (and you're kind of happy when you hit it, because it forces you to rethink the way you're doing things, as it's a kind of bottleneck that can't be easily worked around), and yes, if we assume that each new vertex will kick off a new triangle, then yes, the number of post-transform cache misses = the number of triangles that can potentially be set up.
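That relationship is easy to measure offline, by the way: walk the index buffer through a FIFO model of the post-transform cache and count the misses (toy sketch; the cache size and eviction policy here are guesses, real hw differs):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <deque>

// Counts how many vertices actually have to be (re)shaded for a given
// index buffer, assuming a simple FIFO post-transform cache.
size_t count_cache_misses(const uint32_t* indices, size_t count,
                          size_t cache_size /* guess, e.g. ~24 entries */)
{
    std::deque<uint32_t> fifo;
    size_t misses = 0;
    for (size_t i = 0; i < count; ++i)
    {
        if (std::find(fifo.begin(), fifo.end(), indices[i]) == fifo.end())
        {
            ++misses;                       // vertex must be (re)shaded
            fifo.push_back(indices[i]);
            if (fifo.size() > cache_size)
                fifo.pop_front();           // FIFO eviction
        }
    }
    return misses;  // misses / (count / 3) = shaded vertices per triangle
}
```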
Why can't someone tell us what the culling/clipping rate is? I originally assumed it was the same because many people (including myself) consider culling/clipping to be part of setup.
'Cause we signed all these little documents.. you know :) I can tell you that it's certainly more complicated than many think, and there's no straightforward answer to your question.
Wouldn't 16 texture fetches per pixel be rather ugly for an SPU?
I was thinking about having the SPUs compute the SAT; SAT sampling/filtering would be fine on RSX even with FP32 textures...if the filtering shader is written in the proper way..
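For reference, the SAT itself is just a 2D prefix sum - a scalar sketch of what the SPUs would compute (the real version would of course be vectorized and tiled):

```cpp
#include <vector>

// Builds a summed-area table: sat(x,y) = sum of all texels above-left,
// so any axis-aligned box filter becomes four lookups on the GPU side.
void build_sat(const std::vector<float>& img, int w, int h,
               std::vector<float>& sat)
{
    sat.assign(w * h, 0.0f);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            float up     = (y > 0)          ? sat[(y - 1) * w + x]       : 0.0f;
            float left   = (x > 0)          ? sat[y * w + (x - 1)]       : 0.0f;
            float upleft = (x > 0 && y > 0) ? sat[(y - 1) * w + (x - 1)] : 0.0f;
            // Inclusion-exclusion: add row/column sums, remove the overlap.
            sat[y * w + x] = img[y * w + x] + up + left - upleft;
        }
}
```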
 
If you're rendering 1/4 the pixels and the polygons are big enough not to be vertex limited, is that so bad?

Alpha-tested polys: 1/4 the pixels with 4xAA, rendered 4 times.
All other polys: 1/4 the pixels with 4xAA, rendered once.

You seriously don't see an opportunity to gain speed here?
Theoretically yes, I see it. In practice I'm not sure it would be a win.. just because the hw (on both platforms) is sometimes quirky and doesn't exactly perform/work as publicized.

Yes, drawing alpha tested polys 4 times at a quarter res could be slightly slower than once at full res, but you save so much on all other pixels. If you were completely setup limited, at the very least you could render a substantially larger shadow map without a perf hit.
But that doesn't mean you won't gain any perf benefit going from N/4 to N zixels per clock, or any IQ benefit going from M pixels to 4M pixels in the shadow map.
I hope to be able to check this again at some point in the future, but (sadly) I'm quite sure that on at least one platform out of two it wouldn't be a win :)
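For what it's worth, the back-of-envelope arithmetic behind the proposal (toy cost model in "pixel units", numbers purely illustrative):

```cpp
// Fill cost of the two shadow map strategies being debated above.
struct ShadowCost { float opaque, alpha_tested; };

ShadowCost full_res(float n_opaque, float n_alpha)
{
    return { n_opaque, n_alpha };               // everything rendered once
}

ShadowCost quarter_res_4xaa(float n_opaque, float n_alpha)
{
    // Opaque: 1/4 the pixels, rendered once.
    // Alpha-tested: 1/4 the pixels, but rendered 4 times -> same as full res.
    return { n_opaque / 4.0f, 4.0f * (n_alpha / 4.0f) };
}
// e.g. 1M opaque + 0.2M alpha-tested: 1.2M pixel units at full res
// vs. 0.45M at quarter res + 4xAA - the saving is ~3/4 of the opaque fill.
```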
 
Thanks for the replies, nAo.

You're right, stuff that doesn't have much space for interiors but just a silhouette can be rendered via brute force.
That's not quite what I was saying. If you start with a simple model and keep increasing the polygon count, there's a point where polygons near silhouettes really start having bad quad efficiency, but the polys facing you don't have this problem. Thus intelligent distribution doesn't help much: removing polys from the middle doesn't help efficiency, and adding polys only near the edges doesn't give much less of a hit than adding polys everywhere.

At least in terms of quad efficiency. But yeah, it would definitely help culling efficiency.

We need an efficient way (something that is more than a hack) to determine if we need to shade at pixel level or at subsample level.
Not an easy problem, but I think polygons are a great solution for now.

On PS3 it's easy to be vertex shading limited if you don't do things in certain ways (no rocket science, believe me).
I know there are plenty of forward-looking techniques that can have tough vertex shaders, but is that a reality now? Aside from skinned characters, I figured the vast majority of objects just needed some transforms for position, normal, light, and eye vectors, and sometimes less. I guess the extent of how VS limited you are depends largely on the culling speed (which you can't tell me :cry:). My comments stemmed from knowing that to avoid triangle setup limitations, you'd need a vertex shader much longer than the one I just described for rigid objects.

I was thinking about having the SPUs compute the SAT; SAT sampling/filtering would be fine on RSX even with FP32 textures...if the filtering shader is written in the proper way..
Oh, that makes more sense.

I hope to be able to check this again at some point in the future, but (sadly) I'm quite sure that on at least one platform out of two it wouldn't be a win :)
If you do, I'll definitely be interested in the results. Seems like a great way to render huge shadow maps.
 
Some people forget that the G70 and G71 were, and are, among the top performers per transistor and/or die space. If I remember correctly, G71 came in at 278M transistors. Plus, its power consumption was low compared to its competitors'. These are things that Sony would have held in high regard.

Edit: sorry, I was reading past pages of this thread about RSX; I don't want to derail the informative discussion going on now.
 
This is programming. :rolleyes:

Having read all the responses in this thread (thanks, developers and everyone), I can only conclude that the guy has not made accurate assumptions about PS3 capabilities. He is a MUSICIAN!

However, his (in some respects) down-to-earth opinion, as joker454 already pointed out, does raise some very interesting side questions, especially one: is the PS3 software friendly?

I know, perhaps it's not as hard as the PS2 was for sufficiently sophisticated console programmers, but even now, years later, it does require some knowledge and tricks, which are time eaters...

Developers (and the rest of us, for that matter) aren't getting any younger.

NG hardware is complex enough that you can try brand new, *unheard of* stuff anyway.

Right about everything or not, one thing is clear: Sony can learn from this guy's comments, since IRL we learn a LOT from reasonable criticism (even if the reason is buried underneath), sometimes more than from anything else.
 
With this and your last post, and the middle-ground consensus between the two rivalling POVs on PS3 development, it seems to me MS are very much in the right mind targeting development ease over everything else. Far too small a minority of enthusiasts work in the industry (to earn a living) for one to hope the industry will embrace the 'fun' aspects of your eclectic architecture. Console designs should be developer-centred, from a business perspective. Nintendo went this way (in a bad way!) and MS in a good way. Sony's choice has had a major impact on their system. One wonders what a developer-designed console would look like?

That said, if Sony's long-term strategy is a 'consistent' development platform going forward (Cell + GPU), then they may not be in such a bad place next gen. From that perspective, MS's choices might be very limited. I can't imagine them going the complex-to-code-for route, which may hamper performance in their hardware choices. If PS3 is something of a stop-gap solution - a system that bridges a couple of key technologies - then its market performance now won't be as much of a concern as future platform efforts. They won't need to spend $2 billion creating the next version or the next lot of tools. Though in all honesty, I think we will mostly all agree that many of PS3's problems come from it being designed by a technology enthusiast rather than a shrewd businessman.
Hardware specs are irrelevant if you don't have the software -- excepting that you want to attract software developers to your console, which means making it easy to program on, competitively powerful, and with a cheap medium to produce a game on (remember how CD-based systems pulled away from the N64 because it used cartridges).

Here are some cold facts, or at the least some very accurate assumptions, I guess:

- Blu-ray is not cheap

- PS3 lacks the software in comparison with the history of the PS brand

- PS2 has thousands of games.

- You can find everything on the PS2, and I mean it. For instance, "weird" games which appeal to Japanese gamers, "cheap" games a la the PSN Store or XBLA, cooking games, anime games, sexy games, family fun games, software for kids, independent productions, scary games, etc. etc.

The thing is, software is what matters, for casuals especially, and for hardcore gamers. Lots of games are delayed for the PS3, e.g. Half-Life 2: Orange Box, UT3, etc.

The PC has the best backwards compatibility in history, and the best catalog of games ever because of that, IMO. PlayStation, if it remains BC for decades, should be a close 2nd, if the PS3 can strengthen its list of games - which doesn't look like it will happen easily, breaking the tradition.

See you, Shifty.
 
Hardware specs are irrelevant if you don't have the software.
The hardware choices affected software availability though. If Kutaragi hadn't gone with a crazy Cell idea (perish the thought!) and had gone with something conventional, PS3 would have been cheaper, software would have been easier, and PS3 would be in a much stronger position. It'd also still be able to serve many of Sony's longer-term objectives. Lack of Cell may prevent PS3 from decoding and displaying 42 different MPEG2 streams...but that's not really a loss! The nerd in me really likes PS3, and it's the only current-gen console that appeals. But there are questions around it which future console designs are going to have to address. Once upon a time all hardware was 'weird' and needed super-geeks writing assembler to make the most of it, but back then teams were half a dozen people. As technology has progressed, software abstraction has become essential, and the hardware now needs to be considered as more an engine for the software, as it's the software that developers interface with to produce the content. Disregarding the (development) software side will put you at a serious disadvantage, ever more so as hardware generations pass by. Design of future consoles should probably revolve around the thinking of 'What developer-friendly software systems can we use? Once we've got those, what hardware can we run them on?' rather than 'What hardware will be powerful and scalable to far better economies over the life of the system? When we have the hardware, what tools can we find to provide developers?'

Of course, if PS3 takes off and manages to dominate again, that may postpone that philosophy another generation!
 
The PC has the best backwards compatibility in history, and the best catalog of games ever because of that, IMO. PlayStation, if it remains BC for decades, should be a close 2nd, if the PS3 can strengthen its list of games - which doesn't look like it will happen easily, breaking the tradition.
Best and the worst in a manner of speaking. There are still a lot of warts that need removal and the almighty force of backwards compatibility blocks that.

For a console, though, you could make the argument that backwards compatibility decreases the value of the upcoming library. It certainly happened that way with the Atari 5200. And when the PS3 library is this small and the PS2 library is so huge, full BC has the possibility of making the PS3 look more like a Blu-ray player that plays PS2 games in high-def.
 
Of course, if PS3 takes off and manages to dominate again, that may postpone that philosophy another generation!
I think Sony has had a major scare this past year, so if the console does take off, they will make sure the situation they are in now never happens again.
 
The hardware choices affected software availability though. If Kutaragi hadn't gone with a crazy Cell idea (perish the thought!) and had gone with something conventional, PS3 would have been cheaper, software would have been easier, and PS3 would be in a much stronger position. It'd also still be able to serve many of Sony's longer-term objectives. Lack of Cell may prevent PS3 from decoding and displaying 42 different MPEG2 streams...but that's not really a loss! The nerd in me really likes PS3, and it's the only current-gen console that appeals. But there are questions around it which future console designs are going to have to address. Once upon a time all hardware was 'weird' and needed super-geeks writing assembler to make the most of it, but back then teams were half a dozen people. As technology has progressed, software abstraction has become essential, and the hardware now needs to be considered as more an engine for the software, as it's the software that developers interface with to produce the content. Disregarding the (development) software side will put you at a serious disadvantage, ever more so as hardware generations pass by. Design of future consoles should probably revolve around the thinking of 'What developer-friendly software systems can we use? Once we've got those, what hardware can we run them on?' rather than 'What hardware will be powerful and scalable to far better economies over the life of the system? When we have the hardware, what tools can we find to provide developers?'

Of course, if PS3 takes off and manages to dominate again, that may postpone that philosophy another generation!

I'm sorry, but I have to disagree with this. I've been programming computers for over 25 years, in almost every fashion you can imagine - from making an A1200 rotate a 32-colour screen at 50fps in 68xxx assembly, to coding a legacy Fortran system, to integrating Dynamics NAV (yes, that Microsoft thing) with all sorts of external systems - and the one thing that seriously riles me, and hell does it rile me, is excuses such as this.

I'm laying my position on the line: I'm not a game programmer, but I have programmed them in the past (hey, so did Andrew Braybrook), so I understand game loops etc. (though I'm way behind on new gfx tech, which is why I read this forum to try to keep my brain in gear). I've researched everything from parallel programming distribution to the best way to triangle-strip a cow. I've let banks thieve you all of your money, and even programmed fruit machines in Z80. I am currently a systems architect, specifically integrating Dynamics NAV and AX with external systems and taking care of complex version upgrades. As a systems architect I can at times have to deal with communicating between systems using over 6 different languages, databases, operating systems, accounting methods, etc., with the end client also wanting new functionality within their new systems framework. However, at the end of the day, this new functionality can be done by junior programmers still wet behind the ears.

It's not a problem, and neither should programming an SPU be. I'm with nAo on this: no senior project member at the programming level should ever have to moan about things such as this, but rather get their fingers out of their arses, think like a software engineer, and provide the basic framework. Your point about PS3 needing "super geeks" is wrong; nAo is right, it's not a complex machine. Getting the best out of it in the longer run might require a little bit of ingenuity. For an example of what a quality technical lead should be doing, I refer you to this post by Mike Acton: http://forum.beyond3d.com/showthread.php?t=44542

I don't want to piss on anybody's chips here, but be it a game or a large financial system, the project needs a systems architect, and that systems architect is responsible for putting the lower-level framework in place for the rest of the team to deal with. At the end of the day, it comes down to the quality and dedication of that systems architect in what he/she does, from feasibility, to requirements, to analysis, to design, to the actual framework coding. If the SA fails or does not do their job correctly, the rest is doomed.

Getting back to your points about generic machines, however - and I suppose that is where you are going, the wish of EA for some MSX-type console built into every TV - well, it's all well and good, but do we really want to take console gaming in that direction? Do we really want it to become part of some grand bloatware exercise like Windows? Will we be programming games in Java and C# for .NET in the future because the hardware is supposedly so powerful that it doesn't really matter? I say no! What, just because of (IMO) incompetent software architects? You can come back at me with the "ah well, it's a game" comment, but a game is no different to a lift, a realtime military system, or SAP trying to talk to COBOL stuck together in 1973. Systems are systems, and the sooner people in the gaming industry start to think of games as such, rather than as a simple game loop, the better.

Console hardware should advance in the direction of the Cell (or even crazier ideas); even mainstream processors are now going in that direction. Kutaragi was spot on the ball; it might just help, IMHO, if certain people would also get with it.

Rant over.
 
Best and the worst in a manner of speaking. There are still a lot of warts that need removal and the almighty force of backwards compatibility blocks that.

For a console, though, you could make the argument that backwards compatibility decreases the value of the upcoming library. It certainly happened that way with the Atari 5200. And when the PS3 library is this small and the PS2 library is so huge, full BC has the possibility of making the PS3 look more like a Blu-ray player that plays PS2 games in high-def.

The Sega Master System - Genesis had great backwards compatibility, which was well thought out and didn't hamper the future library. Sega used the old Z80 effectively in the Genesis and had those who wanted SMS compatibility pay $30 for the cartridge converter. Long live the Power Base Converter.

I think Sony has had a major scare this past year, so if the console does take off, they will make sure the situation they are in now never happens again.

They just need to know there are lots of people who like to buy cheap consoles to play video games, and many, many, many fewer people who like to buy high-end, expensive videophile gear that happens to play video games, no matter how good a deal that gear is for the money.
 
Cheap consoles are fine.

But if they want to deliver a big jump, are they going to be able to deliver next-gen consoles for $300 again?

Remember, there's been inflation since 2000-2001, when the last consoles launched at $300.
 
The Sega Master System - Genesis had great backwards compatibility, which was well thought out and didn't hamper the future library. Sega used the old Z80 effectively in the Genesis and had those who wanted SMS compatibility pay $30 for the cartridge converter. Long live the Power Base Converter.
Well, that was a case where the reverse happened. Rather than hampering the success of the Genesis, it was simply a feature that nobody cared about. And in fact, one that many people didn't even know about -- "Sega what system?" The vast majority of Genesis owners thought the Genesis AKA Megadrive was the first console Sega produced.
 