Question for developers... PS3 and framerate

In our case, we didn't try to formalize multithreading until we looked at PS3 first. We also emulated SPURS jobs on other platforms -- lo and behold, we got a pretty darn sizeable gain out of Xenon and PC as well. Not only that, but compared to the approaches Microsoft has been recommending, it was utterly no contest.
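(For anyone wondering what "emulating SPURS jobs" on other platforms looks like in rough outline, here's a minimal sketch of a cross-platform job layer. The names, the queue, and the worker-thread pool are all made up for illustration rather than anyone's actual framework; on PS3 the kernels would run on SPUs rather than threads.)

Code:
// Minimal sketch of a SPURS-style job queue on a generic platform.
// Everything here (Job, JobQueue, the thread pool) is made up for
// illustration; real SPU / Xenon code would look quite different.
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Job {
    std::function<void()> kernel;   // self-contained unit of work
};

class JobQueue {
public:
    void push(Job j) {
        std::lock_guard<std::mutex> lock(mutex_);
        jobs_.push(std::move(j));
    }

    // Worker threads (standing in for SPUs) drain the queue until empty.
    void runWith(unsigned workerCount) {
        std::vector<std::thread> workers;
        for (unsigned i = 0; i < workerCount; ++i) {
            workers.emplace_back([this] {
                for (;;) {
                    Job job;
                    {
                        std::lock_guard<std::mutex> lock(mutex_);
                        if (jobs_.empty()) return;
                        job = std::move(jobs_.front());
                        jobs_.pop();
                    }
                    job.kernel();
                }
            });
        }
        for (auto& w : workers) w.join();
    }

private:
    std::mutex mutex_;
    std::queue<Job> jobs_;
};

int main() {
    JobQueue queue;
    for (int i = 0; i < 8; ++i)
        queue.push({[i] { std::printf("job %d done\n", i); }});
    queue.runWith(4);   // 4 workers stand in for SPUs / Xenon hardware threads
}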

Ok, does that make 2 teams now using this approach? (Your team and the one joker454 mentioned... unless you guys are one and the same). :)

Once all the devs cycle through their first gen games, then we will start to see the real power of these boxes.

2-3M triangles per frame... that is indeed insane. ;)

Oh boy, oh boy... Can't wait for a game without edges. Is everything based on curves in Heavenly Sword 2 ? :LOL:
 
Thank you for the quick response, joker454.

Or...if you have a game for which RSX rendering alone is enough, yet you need piles of cpu power, then PS3 is your platform of choice. I presume Little Big Planet will fall into this category.

Of course the needs of every game are going to be different, but are there any generic examples in which the SPUs can be used for additional processing when it takes more than the RSX can provide, or do you find yourself eating up all of the SPUs on rendering tasks instead?

Missed this one earlier. Yeah, there are definite advantages to going PS3. One that for some odd reason isn't mentioned as much is its standard hard drive. Personally I think that's the PS3's biggest advantage this gen. Many games stream assets nowadays, and with both optical and hard drive sources to stream from, that potentially gives it an edge. You can also pre-install frequently accessed data on the hdd, use it for caching, and so on.
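The basic pattern is about as simple as it sounds: check the HDD cache first, fall back to the optical drive, and warm the cache for next time. Something like this, as a rough sketch with hypothetical paths rather than our actual streaming code:

Code:
// Rough sketch of an HDD-cache-in-front-of-optical lookup.
// Paths and names are hypothetical, not any engine's actual API.
#include <cstdio>
#include <filesystem>
#include <string>
#include <system_error>

namespace fs = std::filesystem;

// Returns the path the streamer should read from, preferring the HDD copy.
fs::path resolveAsset(const std::string& name,
                      const fs::path& hddCache = "/hdd/cache",
                      const fs::path& optical  = "/bdrom/assets") {
    fs::path cached = hddCache / name;
    if (fs::exists(cached))
        return cached;                       // fast path: already on the HDD

    fs::path source = optical / name;
    std::error_code ec;
    fs::create_directories(cached.parent_path(), ec);
    fs::copy_file(source, cached, ec);       // warm the cache for next time
    return ec ? source : cached;             // fall back to optical on failure
}

int main() {
    std::printf("%s\n", resolveAsset("textures/track01.dds").string().c_str());
}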

So, this could theoretically be realized in things such as texture streaming, etc?

Oninotsume
 
Being a newbie, from your words I gather that RSX is the real weak point of the PS3 architecture... far weaker than the Xbox 360's Xenos?

Far weaker? When I look at what games are coming out on these two platforms, I don't see that the PS3 is far weaker. Take Uncharted or Killzone 2, for example. It seems to me that these machines are about equal in graphical power, and that multiplatform titles on the PS3 are getting the shaft because (a) the same results are more difficult to achieve and (b) the smaller userbase does not justify the additional costs.

What interests me more about the PS3 came up in another thread that used to be quite popular on this forum, called "PS3 devs are aiming higher". Based on what I am seeing down the pipe for 2008, that may matter more to the PS3's success overall than whatever SLIGHT technical advantages one machine may have over the other.
 
Visuals over needless frames any day imo.

Uh, well, I would say needless frames would start at 61 fps; anything above 60 fps is pretty useless. But if you mean it's needless to go above 30 fps, I completely disagree.

I'll take nice visuals at 60fps over somewhat better (even twice as complex) visuals at 30fps.




If you had 50 Gpixels/sec, you could still only hit 60fps if the graphical complexity is limited. That limit is simply higher.
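Rough back-of-the-envelope, assuming a 720p target: 50 Gpixels/s divided by 60 fps is about 833 Mpixels per frame, and a 1280x720 frame is ~0.92 Mpixels, so that's on the order of 900 pixel fills per screen pixel per frame at peak. A very high limit, but still a limit once overdraw, particles, and multiple passes start eating into it.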


true.

For 1080p I sort of agree, but not everything is about fillrate. More RAM and math help a lot.

agreed.



They did not have fixed framerates regardless of scene complexity. I assure you that those games could run over 100fps most of the time (when scene load was low or medium) if they did not have v-sync enabled.

Regardless, what I am saying is that the vast majority of Model 2, System 22, Model 3, NAOMI, and NAOMI 2 games were in fact 60 fps -- probably 98% of them. Even if they could run at higher framerates, that's not the point: 60 fps was the standard for most 3D arcade games. I'll bet it wasn't trivial to achieve that, either.

Again, it's developer choice back then to maintain 60fps. Part of the reason is that they didn't have a whole lot to do. 2D games rarely require you to touch each pixel more than once. Clipping is very easy. There's no 3D world to maintain. There wasn't much you could do with twice the render time per frame, so 30fps didn't make much sense. Graphical quality was almost entirely dependent on art.

With 3D games back then, there wasn't much to do beyond, say, layering a couple textures on top of one another. Having more render time in a 30fps game didn't buy you much, because you couldn't do normal mapping or dependent texturing or arbitrary math. RAM was a pretty big constraint on graphical fidelity too.

I was strictly speaking of 3D polygon games, not 2D sprite-driven games.


Depends on how you look at it. I think some of the hardware from 2 generations ago had some glaring deficiencies. PS1, for example, had no filtering, and N64 had very little storage.

Definitely true.


Last gen didn't have to deal with a big resolution increase, either. Finally, if you're using XBox as your comparison point, remember that the span of time between XBox and XB360 was pretty small. (It's a shame Sony couldn't push graphics harder with the extra year they had.)

The beauty of last gen was: (a) not much of a resolution increase over the previous gen, (b) a fairly long span of time, and (c) a very large increase in graphics chip/GPU capability.


You're ignoring a lot of details here. First of all, fillrate has improved 4.3x from XBox to 360 (8.6x w/4xAA), and the resolution increase is a factor of 3. Secondly, the Z-culling is so fast now that hidden pixels are basically free. Finally, we have no performance hit for alpha blending either. All these factors explain why fillrate heavy things like grass and smoke look much, much better this gen.

I'm not saying things don't look better. Some things look much better; other things, not so much. The increase in relative fillrate is tiny because of the resolution increase. While fillrate is not everything, it partly explains the low framerates; bandwidth is the other main issue, though on Xbox 360 that potentially shouldn't be a problem once developers take full advantage of the 10 MB of 256 GB/sec eDRAM.
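To put numbers on the "relative fillrate" point, taking the 4.3x figure above at face value: with a roughly 3x jump in pixels (640x480 to 1280x720), the per-pixel fillrate gain works out to only about 4.3 / 3 ≈ 1.4x per frame at the same framerate (or ~2.9x using the 8.6x with-AA figure).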


Most important is the fact that we have a huge increase in math ability in the pixel shaders (15-30x) on top of it all being floating point. That makes for a substantial difference in graphics quality.

You and a lot of people have forgotten what last gen really looks like.


Pixel shading has improved a great deal if you compare to PS2, and still very significantly compared to Xbox 1. Pixel shaders are very nice, but they are not the be-all and end-all of performance.

I strongly believe the Xbox 360 and PS3 should both have had a 256-bit bus to GDDR3 memory and 16 render back-ends, as well as ~16 MB of eDRAM. That sounds out of the ballpark cost-wise for 2006, but then a 256-bit bus and 16 ROPs weren't even the highest end on the PC side in late 2006 anyway.
 
Uh, well, I would say needless frames would start at 61 fps; anything above 60 fps is pretty useless. But if you mean it's needless to go above 30 fps, I completely disagree.

I'll take nice visuals at 60fps over somewhat better (even twice as complex) visuals at 30fps.

So for a game like GeOW would you prefer they had dropped the visuals and gotten it to 60fps? I certainly wouldn't.
 
Ok, does that make 2 teams now using this approach? (Your team and the one joker454 mentioned... unless you guys are one and the same). :)
Would surely have to be a different one, considering he mentioned he's actually seen something comparing PS3 to 360... no such thing exists in our case. Even if it was just an impromptu "knew-a-guy-in-the-studio-and-dropped-by" kind of thing, joker being in SoCal means he's not geographically suited to do that freely within our studio... nAo is, but he's new to the area, so he wouldn't be likely to know anybody.

We're currently on our first title for PS3, so the PS3 codebase is still lagging behind quite a bit. The only other next-gen title we've had predated the PS3 launch by far, so it was far out of mind. Along the same lines, though, it's rather clear that PS3 not being a target earlier on has hampered our results on all current platforms (albeit PS3 more than everything else combined).
 
Again, it's developer choice back then to maintain 60fps. Part of the reason is that they didn't have a whole lot to do. 2D games rarely require you to touch each pixel more than once. Clipping is very easy. There's no 3D world to maintain. There wasn't much you could do with twice the render time per frame, so 30fps didn't make much sense. Graphical quality was almost entirely dependent on art.

You still had to watch how many sprites you got onscreen, or you got slowdown, flicker, or both. And I've played a ton of 2D games where the background certainly wasn't scrolling by at 60 Hz. 2D programming was apparently not as trivial as you make it out to be when you've got a processor measured in single-digit MHz.

With 3D games back then, there wasn't much to do beyond, say, layering a couple textures on top of one another.

Depending on which console you're talking about, there's also animation, polygon transformation, transparencies, dynamic lights, particles, fog, environment mapping, and filtering, all of which (including texturing) were nontrivial tasks in the olden-time days of 3D. What is considered trivial now was not trivial then. What is considered "much to do" is totally dependent on your machine. While these days doing too many shader effects will kill your framerate, back then trying to transform too many polygons, use too many textures, or animate more than about six characters would kill your framerate. It's really quite an analogous situation: too many rendering tasks hurt your framerate, and it's all dependent on processor power.
Having more render time in a 30fps game didn't buy you much
...yet many home console games ran at or under 30 fps. Apparently, the additional render time bought them something. Did Conker 64 run at 15-20 fps because Rare didn't feel like running the game at 60 fps? The N64 was capable of 30 and 60 fps, as a number of games showed.
Last gen didn't have to deal with a big resolution increase, either.
What math are you using? Most games ran at 320x240 in the N64 generation. Last gen, nearly everything ran at 640x480, a fourfold increase. This generation runs at 1280x720, which is only a threefold increase.
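For the record: 320x240 is 76,800 pixels, 640x480 is 307,200 (a 4.0x jump), and 1280x720 is 921,600 (a 3.0x jump over 480p).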
 
You still had to watch how many sprites you got onscreen, or you got slowdown, flicker, or both. And I've played a ton of 2D games where the background certainly wasn't scrolling by at 60 Hz. 2D programming was apparently not as trivial as you make it out to be when you've got a processor measured in single-digit MHz.
I'm not saying it's trivial, I'm saying that the workload does not have the same variance that you get with 3D scenes. It's far more controlled.

Depending on which console you're talking about, there's also animation, polygon transformation, transparencies, dynamic lights, particles, fog, environment mapping, and filtering, all of which (including texturing) were nontrivial tasks in the olden-time days of 3D.
Half of those aren't related to fillrate, and few affect the variance of framerate. I'm not saying there was nothing at all to do in those days. I'm saying that visually, the marginal benefit of more frame-time was less in those days than today.
...yet many home console games ran at or under 30 fps. Apparently, the additional render time bought them something. Did Conker 64 run at 15-20 fps because Rare didn't feel like running the game at 60 fps? The 64 was capable of 30 and 60, as a number of games showed.
Tell that to Megadrive1988, not me. He's the one claiming that nearly all games from that era were 60fps. I just assumed he was correct, and explained why.

What math are you using? Most games ran at 320x240 in the N64 generation. Last gen, nearly everything ran at 640x480, a fourfold increase. This generation runs at 1280x720, which is only a threefold increase.
Some games ran at higher res in the N64 gen, and a lot of games ran lower than VGA last gen. In any case, the TV resolution is the same, and SDTVs often had trouble clearly resolving VGA. My point is that there was no mandatory resolution increase last gen.
 
The only thing that makes upscaling from slightly-lower-than-720p resolutions allowed at all is that the rule saying "2xMSAA or equivalent" is vague by nature. Scaling up adds a touch of blurriness, which, on top of things like bloom and whatnot, can hide a large percentage of aliasing artifacts... and in the case of Xenon, it can also mean having the room to enable hardware MSAA without tiling.

While there are issues with pixel fillrate and texel fillrate (especially since the number of texture layers per pixel has gone way up), people can't seem to get it out of their heads that all framerate issues are directly a rendering problem. Very often it can be computational, data throughput, or sync issues; it can be a million and one minor slowdowns meeting at just the wrong time (this actually happens more often than you might think). And the problem is that the level of complexity that makes this a huge nightmare is exactly what the consumer believes should be the minimum bar. There has never been, nor will there ever be, a hardware generation where consumer expectations and reality meet in parity... it simply shows more now because the scale is so large.
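To put that in frame-budget terms: at 30 fps you have ~33.3 ms per frame (16.7 ms at 60 fps), so a dozen unrelated systems each drifting half a millisecond over their share in the same frame is already enough to miss vsync, even if no single one of them shows up as "the" bottleneck in a profile.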

I'd also add that Megadrive's generalizations of "it was done before, so it can be done again" are inherently erroneous, but they're the same sentiments almost anyone would have.
 
edit: It's also quite easy to throw a lot of geometry at a unified shading architecture without paying a huge perf cost for it, but I stopped getting excited about vertices/triangles-per-second figures this gen because those numbers are not really representative of what you see on screen. As I've stated many times, we should get better at distributing geometry on screen; we often manage insane polycounts (2-3M triangles per frame or even more) without getting a good return in terms of image quality.

Fair enough, I'd agree with that. But... until someone has figured out a way to adjust polygon distribution in realtime that is suitable for games, we're kind of stuck using the more tried and true methods. Maybe you're ahead of us in that regard, as we don't have any such system in house. Progressive mesh, as described at PS3Devcon, isn't a solution for us either, partly because it doesn't really redistribute polys from where they aren't needed, and partly because it's a nightmare to set up and maintain. So for now, for us, that means using polygons, and lots of them. We have no choice in this, and I don't really see that changing much in the next couple of years, for us anyways. I'd wager that many other studios are in the same predicament, though, as I can still spot polygons in just about every title out there, even the uber ones.
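To be clear, by "tried and true methods" I basically just mean discrete LODs picked by projected screen size -- roughly this kind of thing, with made-up thresholds rather than our actual code:

Code:
// Rough sketch of discrete LOD selection by projected screen size.
// Struct layout and thresholds are made up for illustration only.
#include <cmath>
#include <cstdio>

struct Mesh { int triangleCount; };

struct LodSet {
    Mesh lods[3];                // lods[0] = full detail ... lods[2] = lowest
};

// Pick a LOD from bounding radius, camera distance, vertical fov, screen height.
const Mesh& selectLod(const LodSet& set, float radius, float distance,
                      float verticalFovRadians, int screenHeight) {
    // Approximate on-screen height of the object, in pixels.
    float projected =
        (radius / (distance * std::tan(verticalFovRadians * 0.5f))) * screenHeight;
    if (projected > 200.0f) return set.lods[0];   // close-up / replay camera
    if (projected > 60.0f)  return set.lods[1];
    return set.lods[2];
}

int main() {
    LodSet player{{{40000}, {12000}, {3000}}};
    const Mesh& m = selectLod(player, 1.0f, 25.0f, 1.0f, 720);
    std::printf("drawing %d triangles\n", m.triangleCount);
}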

I guess my point is that while the argument of poor poly distribution is noble and correct, it just doesn't matter much in the current reality of the situation. And hence we are still stuck with 4+ million triangle counts (in replay mode -- 30fps, but still 4xMSAA), because our silhouettes must look perfect for the replay camera, which inevitably gets zoomed in close enough to see if a superstar player's shoes are accurate.

Now, if you have already built a realtime polygon distribution system that works on PS3 then message me. Our parent company has deep pockets, they would be interested in buying and/or licensing it. No joke, just let me know and I'll get you in touch with the big wigs here that make these choices, then you guys can sort out the legal details. Just make the interface easy, since I'll probably be the one implementing it ;)
 
So, what is the exact problem with PS3 development?
Does using a basic PS3 Cell framework still not allow the PS3 to perform as well as the Xbox 360 (which is relatively easy to program for)?
 
You'd need to revisit old threads on the matter. It's more complicated than just what processors are in there. You have to design your game assets differently to target PS3 effectively, and come up with new data-structures for the CPU too. If you don't, you can use the traditional structures and it'll work, but slowly. Yet that's a quicker, cheaper solution than reengineering everything.

The solution presented by developers is to make sure you target PS3 from the beginning, designing everything to work well with it, and port those assets to other systems, rather than the other way round, which is the current status quo. Even Cell-specific algorithms should benefit XB360 too, because its stream processing needs to be set up in a similar way to the SPUs to get the most out of it.
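As a concrete (if simplified) illustration of what "new data-structures" tends to mean in practice: keep the hot data in contiguous structure-of-arrays chunks and process it in fixed-size batches. The sketch below is a generic example with made-up names and sizes, not any particular engine's code:

Code:
// Sketch of the kind of data reorganisation being described: hot fields kept
// in contiguous arrays and processed in fixed-size batches. The same chunk
// that DMAs cleanly into an SPU's local store also streams well through
// Xenon's caches. Names and sizes are illustrative only.
#include <algorithm>
#include <cstddef>
#include <vector>

struct ParticlesSoA {
    std::vector<float> posX, posY, posZ;
    std::vector<float> velX, velY, velZ;
};

// One "job kernel": integrates a contiguous batch. On PS3 this body is what
// would run on an SPU over a DMA'd chunk; elsewhere it runs on a worker thread.
void integrateBatch(ParticlesSoA& p, std::size_t first, std::size_t count, float dt) {
    for (std::size_t i = first; i < first + count; ++i) {
        p.posX[i] += p.velX[i] * dt;
        p.posY[i] += p.velY[i] * dt;
        p.posZ[i] += p.velZ[i] * dt;
    }
}

void integrateAll(ParticlesSoA& p, float dt) {
    const std::size_t batch = 1024;               // sized to fit local store
    for (std::size_t i = 0; i < p.posX.size(); i += batch)
        integrateBatch(p, i, std::min(batch, p.posX.size() - i), dt);
}

int main() {
    ParticlesSoA p;
    p.posX.assign(4096, 0.f); p.posY.assign(4096, 0.f); p.posZ.assign(4096, 0.f);
    p.velX.assign(4096, 1.f); p.velY.assign(4096, 0.f); p.velZ.assign(4096, 0.f);
    integrateAll(p, 1.0f / 60.0f);
}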
 
So for a game like GeOW would you prefer they had dropped the visuals and gotten it to 60fps? I certainly wouldn't.


Generally speaking, yes. I'd have to see how a 60 fps GeOW with reduced visuals looked compared to the released game, but I think motion/movement is more important than graphics detail.

You know, for some FPSes on PC, 60 fps used to be considered a low framerate. Things have changed: graphical detail and resolution are the priorities, and framerate is almost an afterthought. It really sucks that many current-gen games are not even 30 fps. I'm expecting the situation to improve next gen, at least back to where we were in the PS2/GCN/Xbox 1 gen, which will mean many games are still 30 fps, but a healthy amount are 60 fps. I realize we'll never have everything we want; it's all about compromise.
 
Generally speaking, yes. I'd have to see how a 60 fps GeOW with reduced visuals looked compared to the released game, but I think motion/movement is more important than graphics detail.
Obviously I can't speak for everyone, but my guess is most would disagree with you. 30fps even looks more cinematic most of the time. On TV you sometimes see low budget movies or TV shows having scenes recorded at 60i, giving it a cheap HandyCam look. For camcorders, true 24p recording is all the rage because it looks more professional.

Because our standard of quality for motion on a screen is set by TV and Hollywood, I doubt we'll see a shift back to 60fps because most people don't have a problem with 30fps.
 
So for a game like GeOW would you prefer they had dropped the visuals and gotten it to 60fps? I certainly wouldn't.

If the game had launched at 60 fps from the beginning, we wouldn't be having this discussion at all, as no one would know "how much better it would look at 30 fps"; instead, everyone would be enjoying the better gameplay of a more accurate and responsive game.

Graphics are all nice and cool when a game comes out - but it's the gameplay that gets stuck in your memories as graphics get surpassed by newer software.
 
If the game had launched at 60 fps from the beginning, we wouldn't be having this discussion at all, as no one would know "how much better it would look at 30 fps"; instead, everyone would be enjoying the better gameplay of a more accurate and responsive game.

Graphics are all nice and cool when a game comes out - but it's the gameplay that gets stuck in your memories as graphics get surpassed by newer software.

Yes, and Gears would likely have been surpassed graphically by a number of titles if they had made that mistake. Perhaps it would not even have been viewed as favorably in comparison to already-released titles.

The pace of Gears doesn't really demand a high framerate. I really don't see how it would have benefited from reduced visuals and a higher framerate, only suffered.
 
Joker said:
Turns out it's *hugely* political alas
It would likely be the first time in history where politics contributed to the greater good then. Even if the marketing department would disagree with the "greater good" part.
 