The Game Technology discussion thread *Read first post before posting*

Going in this direction, would it be fair to say 'bad architectural design' instead of 'bad port' when referring to multi-plat games that don't feature something close to parity? ('True' parity in the Burnout Paradise sense doesn't seem necessary -- lots of games that were markedly better on 360 did proportionately well anyway; RE4, SF4, even Ghostbusters fit the pattern in their first month.)

As long as they are close enough, it probably doesn't matter anymore. I have both machines so naturally I go 360 for all my multi plats. But last gen I sold my Gamecube early (didn't care for it much) and ended up playing RE4 on my PS2. I knew that version was weak compared to the cube version but I didn't care, I just wanted to play it. Likewise, if I only had a PS3 I'd still have played Batman, COD4, RE5, etc, even if I knew they were the weaker versions. I'm guessing that most people would do the same. In the end most people don't have both versions side by side so they don't know what they are missing anyways. Hence they will probably be happy as long as the game runs reasonably well.


Going back briefly to Ghostbusters, aren't there workarounds to getting transparency working? I seem to remember nAo disagreeing with you or with Mintmaster a good while back on something like it.

Don't remember...maybe alpha to coverage I guess, but I'm not a huge fan of that. I think for most games it's easier to just design the issue away. That doesn't help Ghostbusters much though, but I think I'd rather use low res buffers rather than see that alpha to coverage pattern, ick.


Also, and this is slightly OT, but is using EDRAM best practice anyway? It seems from your comments and others' previous ones that coding close to the metal on the 360 is inadvisable (because it's more trouble than it's worth, as opposed to the PS3, where it might as well be required).

Edram is always used automatically whether you hit the metal or use XNA, since all rendering must occur in edram. That's partly what makes it sinister, because you can munch up tons of bandwidth without realizing it...until you get the PS3 version working, then reality strikes. That's why it's better to keep both versions humming along simultaneously and catch it early.
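
Just to put rough numbers on "munch up tons of bandwidth" (my own back-of-the-envelope assumptions here - 720p, 32bpp colour, 8 layers of particle overdraw, 60fps - not figures from anyone's actual game):

/* Alpha blending is a read-modify-write, so every blended layer touches the
 * colour buffer twice.  All numbers below are assumptions for illustration. */
#include <stdio.h>

int main(void)
{
    const double pixels   = 1280.0 * 720.0;   /* 720p target       */
    const double bytes_pp = 4.0;              /* 8:8:8:8 colour    */
    const double layers   = 8.0;              /* assumed overdraw  */
    const double fps      = 60.0;

    double per_frame = pixels * bytes_pp * layers * 2.0;  /* read + write */
    printf("blend traffic: %.1f GB/s\n", per_frame * fps / 1e9);  /* ~3.5 */
    return 0;
}

That's roughly 3.5 GB/s of colour traffic - essentially free inside the 360's eDRAM daughter die, but a very visible slice of the PS3's roughly 20 GB/s GDDR3 bus, which also has to feed textures.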


The question is whether Sega did their best to attack this problem (since that seems to be the focus of the discussion, as if the game were only about alpha blending), about which we don't seem to have any idea.

Well without code we can never have any idea. I focused on transparencies for this game since it's the most obvious performance spike inducing feature of the game from what I saw in the demo. Barring that, we could speculate on what the issues are forever and possibly never be right. Plus, I haven't seen the 360 version so I don't have anything to compare it with to help speculate what they had to cut, and why they had to cut it.
 
Well, my original question was does that make it a bad port like people are claiming?
I haven't time to catch up on this whole thread, but I'm wondering if the issue here is actually more one of language? Are people using 'bad port' to mean 'poor design choices for a cross-platform game'? I think so.
 
I haven't time to catch up on this whole thread, but I'm wondering if the issue here is actually more one of language? Are people using 'bad port' to mean 'poor design choices for a cross-platform game'? I think so.

Likely. Certainly Platinum Games thinks so, otherwise they wouldn't have gone on record to say that they didn't handle the PS3 code and left it to Sega. When the game was announced, that wasn't clarified, but as soon as the PS3 port was shown in public, there it was. There was even a disclaimer in Famitsu on who developed which version. Odd, but perhaps they're confident enough in their tech on the 360 that they felt the clarification was necessary. Especially since their former employer - Capcom - has released competent ports thus far, notably DMC4, which is the closest thing to Bayonetta and runs at 60Hz (it's also the only MT Framework game that ran a little better on PS3, though apparently some features were turned off to get it running at 60Hz on all platforms, since the engine was designed for 30Hz games). Going by the recent Lost Planet 2 demo (in the identical content sections), the revisions to their engine are certainly an improvement over the version used in RE5.

EDIT:

NGS2 was mentioned here briefly, but it should be noted that it wasn't a traditional port by any means:

http://www.gamasutra.com/php-bin/news_index.php?story=24388

Tecmo actually rebuilt the game from scratch on a new engine to avoid any potential bottlenecks that come from porting from the 360. In the process, they added stuff like depth of field, increased texture and rendering res, specular maps, etc. Sure, the blood is reduced and it's even less than in Sigma 1, but with the added visual pizzazz, who cares?
 
Likely. Certainly Platinum Games thinks so, otherwise they wouldn't have gone on record to say that they didn't handle the PS3 code and left it to Sega. When the game was announced, that wasn't clarified but as soon as the PS3 port was shown in public, there it was. There was even a disclaimer in Famitsu on who developed which version.

I remember Edge listed Bayonetta among their top ten exclusive 360 games expected for 2009 less than a year ago. Obviously they were badly informed, but nevertheless.
 
I remember Edge listed Bayonetta among their top ten exclusive 360 games expected for 2009 less than a year ago. Obviously they were badly informed, but nevertheless.

Either that or Platinum really intended this to be a 360 game and Sega only agreed to publish it as a multiplatform title.

Their lead programmer on the game isn't exactly a slouch. He worked on the PS2 when the libraries were nonexistent and managed to get great results with his team (DMC1). In other words, I don't think it would've been much of a problem for them to lead on the PS3.
 
Sure, the blood is reduced and it's even less than in Sigma 1, but with the added visual pizzazz, who cares?

Well obviously they've added enough things and changed design enough that the intended audience won't care after 1+ year, but that's not what the thread is about. The point was that they did make changes to work around the bottlenecks (since the discussion had moved to the point of alpha blending and bandwidth at the time). Whether it's liked or not is irrelevant. But anyways...
 
You'll note that it's been said many times: "Lead on PS3". There are two main reasons for this. First and foremost, it forces you to redesign your data to be SPU friendly, which in turn benefits every other platform out there as well, so it's a win-win all around. Second, you lead on PS3 so that you can't use edram. If you leverage edram, and you inevitably will if you lead on 360, then you will be in trouble when it's time to port. The amount of trouble will depend on how far along the project you started the port. If it's very early then you are in luck, because you will see the performance spikes early and you can re-design the game to eliminate them. If it's very late in the project, then you are screwed.
Reading this, it seems you imply that every PS3-lead game can be ported to the Xbox 360 without any trouble :eek: ... is this true? I think in an interview a Criterion games dev said that porting Burnout to the Xbox 360 was simple (sans the problem of the hard drive needed for online play).

This brings me to one of my favorite "pure speculation" questions: what would an Xbox 360 version of Killzone 2 look like? (if Sony allowed GG to port it, of course :D). Is there some game technology involved which would cause trouble on the Xbox 360?
 
This brings me to one of my favorite "pure speculation" questions: what would an Xbox 360 version of Killzone 2 look like? (if Sony allowed GG to port it, of course :D). Is there some game technology involved which would cause trouble on the Xbox 360?


Sorry, this line of speculation isn't meant for this thread (check post 1 for Rules). The KZ2 discussion has been done to death already.
 
The point was that they did make changes to work around the bottlenecks (since the discussion had moved to the point of alpha blending and bandwidth at the time).

Fair enough about relevance, but I'm not entirely convinced that blending was the issue there (or that it's the main culprit in the Bayonetta situation). The cutscenes are completely censored, and the original game was rated Z (AO in the ESRB) in Japan, unlike the new version, so it's intentional, as they stated. Furthermore, as I stated, there is considerably less blood than in its predecessor on the PS3.

Incidentally, the splash replacement effect does induce some noticeable frame rate drops when enemies are dismembered during UT (ultimate techniques).
 
Reading this, it seems you imply that every PS3-lead game can be ported to the Xbox 360 without any trouble :eek: ... is this true? I think in an interview a Criterion games dev said that porting Burnout to the Xbox 360 was simple (sans the problem of the hard drive needed for online play).

This brings me to one of my favorite "pure speculation" questions: what would an Xbox 360 version of Killzone 2 look like? (if Sony allowed GG to port it, of course :D). Is there some game technology involved which would cause trouble on the Xbox 360?

He said SPU friendly, not SPU optimised.
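
For what it's worth, "SPU friendly" is mostly about data layout rather than about writing SPU code: no pointer chasing, plain contiguous data that can be pulled into local store in big aligned chunks. A rough sketch of the difference (struct names made up by me, not from any real engine):

/* SPU-unfriendly: scattered heap nodes reached through pointers - every hop
 * is a separate small DMA to somewhere unpredictable. */
struct ParticleNode {
    struct ParticleNode *next;
    float x, y, z;
    float vx, vy, vz;
};

/* SPU-friendly: one flat, aligned batch the SPU can pull into local store in
 * a couple of big transfers and walk linearly. */
struct ParticleBatch {
    float pos[3][1024];   /* structure-of-arrays, contiguous */
    float vel[3][1024];
} __attribute__((aligned(16)));   /* DMA wants 16-byte alignment */

That kind of layout tends to help the 360 and PC builds too (far better cache behaviour), which is the win-win point made above.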
 
Oops, sorry!!
Can you please delete the line from my post? Is there a way I can edit the post myself?

General reminders are good to have from time to time. ;) There is a "KZ2 tech" thread lying around the forums somewhere though if you're interested. It was a pretty big thread.
 
Why don't we use the Cell to help with alpha blending?
Here is an example of the solution:

frameBuffer = renderingPixel * X + pixelInBuffer * Y

renderingPixel and pixelInBuffer are the pixel being rendered and the data already in the frame buffer, respectively. X and Y are arbitrary blend factors.

What we need is the pixel information (color) of renderingPixel and pixelInBuffer to do the calculation, and I think the Cell can excel at these calculations.

Once we get the final pixel information, the GPU can render it out in one final buffer without the disadvantage of overdraw.
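
A very rough SPU-side sketch of the idea (the tile size, addresses and blend factor below are placeholders of mine, and it assumes the two buffers live somewhere the SPUs can DMA from at a decent rate, e.g. XDR - see the replies further down):

#include <spu_mfcio.h>
#include <stdint.h>

#define TILE_PIXELS 4096    /* 16 KB per buffer at 32bpp = one max-size DMA */

static uint32_t src[TILE_PIXELS] __attribute__((aligned(128)));  /* rendered pixels   */
static uint32_t dst[TILE_PIXELS] __attribute__((aligned(128)));  /* frame buffer data */

void blend_tile(uint64_t src_ea, uint64_t dst_ea, uint32_t alpha /* 0..255 */)
{
    /* pull both tiles into local store */
    mfc_get(src, src_ea, sizeof(src), 0, 0, 0);
    mfc_get(dst, dst_ea, sizeof(dst), 0, 0, 0);
    mfc_write_tag_mask(1 << 0);
    mfc_read_tag_status_all();

    /* frameBuffer = renderingPixel * X + pixelInBuffer * Y, per 8-bit
     * channel, with X = alpha and Y = 1 - alpha in this example */
    for (int i = 0; i < TILE_PIXELS; i++) {
        uint32_t s = src[i], d = dst[i], out = 0;
        for (int c = 0; c < 32; c += 8) {
            uint32_t sc = (s >> c) & 0xff, dc = (d >> c) & 0xff;
            out |= (((sc * alpha) + (dc * (255 - alpha))) / 255) << c;
        }
        dst[i] = out;
    }

    /* push the blended tile back out */
    mfc_put(dst, dst_ea, sizeof(dst), 0, 0, 0);
    mfc_write_tag_mask(1 << 0);
    mfc_read_tag_status_all();
}

In practice you'd use the SIMD SPU intrinsics and double-buffer the DMA transfers rather than this scalar loop, but the structure is the same: get, blend, put.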
 
I haven't time to catch up on this whole thread, but I'm wondering if the issue here is actually more one of language? Are people using 'bad port' to mean 'poor design choices for a cross-platform game'? I think so.

When the shoe is on the other foot (e.g. Burnout 60fps locked on one platform, 60fps and nearly locked on the other) it isn't called a bad port, it is called something else. If you catch up on the earlier part of the thread, I think "poor design choices" = "bad port" is being too generous.

Thankfully the discussion was steered in that direction, because indeed for a cross-platform title, to have parity, you design to the collective lowest common denominators. The problem with this game is that it didn't start off *multiplatform*. It also assumes that every developer should design their game to be "the same" on both platforms.

I am curious what brought us from last gen (where differences were seen... and just accepted at face value) to the point where, this gen, if one platform underwhelms for some reason it is either bad design or shows craptastic skill.

No one was shocked when the PS2 showed off its massive fillrate. Jaws don't hit the floor when Blu-ray is said to hold more data (e.g. better cut scenes; maybe potentially nicer textures, as possibly in RAGE, etc).

But the sky is falling ... if one platform has more fillrate! Bad, bad design!

Anyhow, if anyone is using "bad port" for "bad multiplatform design" they should just STOP discussing. A port is where a title migrates from one platform to another. If people are going to be that vague and wiggly on terms, how do they ever expect to get through to someone who has actually, you know, done the hard work of porting and/or developing a multiplatform title?
 
Why don't we use the Cell to help with alpha blending?
Here is an example of the solution:

frameBuffer = renderingPixel * X + pixelInBuffer * Y

renderingPixel and pixelInBuffer are the pixel being rendered and the data already in the frame buffer, respectively. X and Y are arbitrary blend factors.

What we need is the pixel information (color) of renderingPixel and pixelInBuffer to do the calculation, and I think the Cell can excel at these calculations.

Once we get the final pixel information, the GPU can render it out in one final buffer without the disadvantage of overdraw.

I believe nAo suggested the potential for such even before the consoles shipped.
 
I think in an interview a Criterion games dev said that porting Burnout to the Xbox 360 was simple (sans the problem of the hard drive needed for online play).

Criterion don't port on console. They work to a method very close to what Joker is suggesting - aim for the middle ground and don't do anything overtly crazy that favours one platform over another. Both Xbox 360 and PS3 games are developed side-by-side, simultaneously, using almost identical code.

http://www.eurogamer.net/articles/the-criterion-tech-interview-part-one-interview
http://www.eurogamer.net/articles/digital-foundry-criterion-interview-part-two

However, they did say that the port to PC was pretty easy using this technique.
 
Criterion don't port on console. They work to a method very close to what Joker is suggesting - aim for the middle ground and don't do anything overtly crazy that favours one platform over another. Both Xbox 360 and PS3 games are developed side-by-side, simultaneously, using almost identical code.

http://www.eurogamer.net/articles/the-criterion-tech-interview-part-one-interview
http://www.eurogamer.net/articles/digital-foundry-criterion-interview-part-two

However, they did say that the port to PC was pretty easy using this technique.

Hey, nice read...thanks for the info/clarification!!
 
Why don't we use the Cell to help with alpha blending?
Here is an example of the solution:

frameBuffer = renderingPixel * X + pixelInBuffer * Y

renderingPixel and pixelInBuffer are the pixel being rendered and the data already in the frame buffer, respectively. X and Y are arbitrary blend factors.

What we need is the pixel information (color) of renderingPixel and pixelInBuffer to do the calculation, and I think the Cell can excel at these calculations.

Once we get the final pixel information, the GPU can render it out in one final buffer without the disadvantage of overdraw.

The problem is that CELL sucks at reading and writing to GDDR3, so you would have to use the RSX to move the data back and forth. A lot of interesting ideas were centered around SCE not completely botching the potential of the FlexIO link connecting CELL and RSX...

16 MB/s reads and 4 GB/s writes for transfers started by the CELL BE that act upon the GDDR3 pool? Really?
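
For perspective (rough arithmetic on my part): a 1280x720 8:8:8:8 colour buffer is about 3.5 MB, so at 16 MB/s the CELL would need roughly 220 ms just to read one frame's worth of colour back from GDDR3 - over a dozen 60fps frames - while writing the same 3.5 MB at 4 GB/s takes under a millisecond. Reads are effectively unusable, hence moving the data with RSX.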
 
you would have to use the RSX to move the data back and forth.

Why is that a problem again? (genuine question). Can't you, say, create a framebuffer in XDR that is streamed as a texture to RSX or something?
 
Why is that a problem again? (genuine question). Can't you, say, create a framebuffer in XDR that is streamed as a texture to RSX or something?

Writing from RSX is certainly faster, and reading it back as a texture could work, but it would also cut into the bandwidth the PPU and SPUs can use from the XDR pool. It would help if you could DMA data from GDDR3 directly into the SPUs quickly and painlessly (the SPUs' local stores are memory mapped, but I do not know if you can render from RSX into an SPU's LS without doing a trip to XDR first... and if it is possible, I wonder what the cost would be... sure, you could read e-DRAM data on the VUs back on the PS2, but reversing the GIF bus took quite some cycles and apparently the read speed was not very good).
 