In our case, we didn't try to formalize multithreading until we looked at PS3 first. We also emulated SPURS jobs on other platforms -- lo and behold, we got a pretty darn sizeable gain out of Xenon and PC as well. Not only that, but compared to the approaches Microsoft has been recommending, it was utterly no contest.
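For the curious, the shape of the idea is roughly this. SPURS itself is Sony's proprietary SPU kernel, so what follows is just a minimal sketch of the "job = self-contained kernel + explicit input/output buffers" abstraction ported to ordinary threads; every name here is made up and this is not any studio's actual code:

```cpp
// A "job" is a pure kernel plus explicit input/output pointers. That's what
// makes the same job runnable on an SPU (where the data would be DMA'd into
// local store) or on a Xenon/PC core (where it's just memory).
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Job {
    const void* input;
    void*       output;
    std::function<void(const void*, void*)> kernel; // pure function of its buffers
};

class JobQueue {
public:
    explicit JobQueue(unsigned workers) {
        for (unsigned i = 0; i < workers; ++i)
            workers_.emplace_back([this] { run(); });
    }
    ~JobQueue() {
        { std::lock_guard<std::mutex> lock(m_); done_ = true; }
        cv_.notify_all();
        for (auto& t : workers_) t.join();
    }
    void push(Job j) {
        { std::lock_guard<std::mutex> lock(m_); jobs_.push(std::move(j)); }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::unique_lock<std::mutex> lock(m_);
            cv_.wait(lock, [this] { return done_ || !jobs_.empty(); });
            if (jobs_.empty()) return;      // done_ set and nothing left to run
            Job j = std::move(jobs_.front());
            jobs_.pop();
            lock.unlock();                  // run the kernel outside the lock
            j.kernel(j.input, j.output);
        }
    }
    std::queue<Job> jobs_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
    std::vector<std::thread> workers_;
};
```

The payoff is that gameplay/render code only ever sees push(), so the scheduling detail (six SPUs, six Xenon hardware threads, or N PC cores) stays behind one interface.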
2-3M triangles per frame... that is indeed insane.
Or... if you have a game for which RSX rendering alone is enough, yet you need piles of CPU power, then PS3 is your platform of choice. I presume Little Big Planet will fall into this category.
Missed this one earlier. Yeah, there are definite advantages to going PS3. One that for some odd reason isn't mentioned as much is its standard hard drive. Personally I think that's the PS3's biggest advantage this gen. Many games stream assets nowadays, and with both optical and hard drive sources to stream from, this potentially gives it an edge. You can also pre-install frequently accessed data on the HDD, use it for caching, and so on.
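The caching half of that is simple enough to sketch. A toy version, with paths and layout invented purely for illustration (not any console SDK):

```cpp
// Two-tier streaming: serve an asset from an HDD cache if present, otherwise
// read it from the (much slower, much worse at seeking) optical disc and
// cache it for next time.
#include <fstream>
#include <string>
#include <vector>

static std::vector<char> readFile(const std::string& path) {
    std::ifstream f(path, std::ios::binary);
    return std::vector<char>(std::istreambuf_iterator<char>(f), {});
}

std::vector<char> loadAsset(const std::string& name) {
    const std::string hddPath  = "/hdd_cache/" + name;  // fast seek, fast read
    const std::string discPath = "/optical/"   + name;  // slow, especially seeks

    if (std::ifstream(hddPath).good())
        return readFile(hddPath);                       // cache hit

    std::vector<char> data = readFile(discPath);        // cache miss: hit the disc
    std::ofstream(hddPath, std::ios::binary)
        .write(data.data(), static_cast<std::streamsize>(data.size()));
    return data;                                        // cached for next time
}
```

A real streamer would do this asynchronously and prioritize requests, but the structural point is the same: two sources with very different latencies, read from both, promote hot data to the fast one.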
Being a newbie, from your words I gather that the RSX is the real weak point of the PS3 architecture... performing far worse than the Xbox 360's Xenos?
Visuals over needless frames any day imo.
If you had 50 Gpixels/sec, you could still only hit 60fps if the graphical complexity is limited. That limit is simply higher.
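Back-of-the-envelope: 50 Gpixels/s at 60fps is a budget of 50e9 / 60 ≈ 833 Mpixels per frame. A 1280x720 target is ~0.92 Mpixels, so that's roughly 900 pixel writes per screen pixel per frame -- which sounds enormous until multi-pass rendering, particles, and layered alpha start multiplying your per-pixel cost.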
For 1080p I sort of agree, but not everything is about fillrate. More RAM and math help a lot.
They did not have fixed framerates regardless of scene complexity. I assure you that those games could run over 100fps most of the time (when scene load was low or medium) if they did not have v-sync enabled.
Again, it was a developer choice back then to maintain 60fps. Part of the reason is that they didn't have a whole lot to do. 2D games rarely require you to touch each pixel more than once. Clipping is very easy. There's no 3D world to maintain. There wasn't much you could do with twice the render time per frame, so 30fps didn't make much sense. Graphical quality was almost entirely dependent on art.
With 3D games back then, there wasn't much to do beyond, say, layering a couple textures on top of one another. Having more render time in a 30fps game didn't buy you much, because you couldn't do normal mapping or dependent texturing or arbitrary math. RAM was a pretty big constraint on graphical fidelity too.
Depends on how you look at it. I think some of the hardware from 2 generations ago had some glaring deficiencies. PS1, for example, had no texture filtering, and N64 had very little storage.
Last gen didn't have to deal with a big resolution increase, either. Finally, if you're using XBox as your comparison point, remember that the span of time between XBox and XB360 was pretty small. (It's a shame Sony couldn't push graphics harder with the extra year they had.)
You're ignoring a lot of details here. First of all, fillrate has improved 4.3x from XBox to 360 (8.6x w/4xAA), and the resolution increase is a factor of 3. Secondly, the Z-culling is so fast now that hidden pixels are basically free. Finally, we have no performance hit for alpha blending either. All these factors explain why fillrate heavy things like grass and smoke look much, much better this gen.
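To put numbers on it: the per-pixel fillrate budget scales as (fillrate gain)/(resolution gain), so 4.3/3 ≈ 1.4x more raw fill per pixel per frame, or 8.6/3 ≈ 2.9x counting the "free" 4xAA -- and that's before the fast Z-rejection and free alpha blending are factored in.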
Most important is the fact that we have a huge increase in math ability in the pixel shaders (15-30x) on top of it all being floating point. That makes for a substantial difference in graphics quality.
You and a lot of people have forgotten what last gen really looks like.
Uh, well, I would say needless frames would start at 61fps; anything above 60fps is pretty useless. But if you mean it's needless to go above 30fps, I completely disagree.
I'll take nice visuals at 60fps over somewhat better (even twice as complex) visuals at 30fps.
Quote: "Ok, does that make 2 teams now using this approach? (Your team and the one joker454 mentioned... unless your guys are one and the same.)"
Would surely have to be a different one, considering he mentioned he's actually seen something comparing PS3 to 360... no such thing exists in our case. Even if it was just an impromptu "knew-a-guy-in-the-studio-and-dropped-by" kind of thing, joker being in SoCal means he's not geographically suited to do that freely within our studio... nAo is, but he's new to the area, so he wouldn't be likely to know anybody.
Quote: "Having more render time in a 30fps game didn't buy you much"
...yet many home console games ran at or under 30 fps. Apparently, the additional render time bought them something. Did Conker 64 run at 15-20 fps because Rare didn't feel like running the game at 60 fps? The 64 was capable of 30 and 60, as a number of games showed.
Quote: "Last gen didn't have to deal with a big resolution increase, either."
What math are you using? Most games ran at 320x240 in the N64 generation. Last gen, nearly everything ran at 640x480, a fourfold increase. This generation runs at 1280x720, which is only a threefold increase.
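(For the record, the raw pixel counts behind those factors: 320x240 = 76,800; 640x480 = 307,200, exactly 4x; 1280x720 = 921,600, exactly 3x over 480p.)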
Quote: "You still had to watch how many sprites you got onscreen, or you got slowdown, flicker, or both. And I've played a ton of 2D games where the background certainly wasn't scrolling by at 60 Hz. 2D programming was apparently not as trivial as you make it out to be when you've got a processor measured in single-digit MHz."
I'm not saying it's trivial, I'm saying that the workload does not have the same variance that you get with 3D scenes. It's far more controlled.
Quote: "Depending on which console you're talking about, there's also animation, polygon transformation, transparencies, dynamic lights, particles, fog, environment mapping, and filtering, all of which (including texturing) were nontrivial tasks in the olden-time days of 3D."
Half of those aren't related to fillrate, and few affect the variance of framerate. I'm not saying there was nothing at all to do in those days. I'm saying that visually, the marginal benefit of more frame-time was less in those days than today.
Quote: "...yet many home console games ran at or under 30 fps. Apparently, the additional render time bought them something. Did Conker 64 run at 15-20 fps because Rare didn't feel like running the game at 60 fps? The 64 was capable of 30 and 60, as a number of games showed."
Tell that to Megadrive1988, not me. He's the one claiming that nearly all games from that era were 60fps. I just assumed he was correct, and explained why.
Quote: "What math are you using? Most games ran at 320x240 in the N64 generation. Last gen, nearly everything ran at 640x480, a fourfold increase. This generation runs at 1280x720, which is only a threefold increase."
Some games ran at higher res in the N64 gen, and a lot of games ran lower than VGA last gen. In any case, the TV resolution is the same, and SDTVs often had trouble clearly resolving VGA. My point is that there was no mandatory resolution increase last gen.
Quote: "Is there even a mandatory resolution increase in this gen???"
Yes. No game gets to render at 720x480 SDTV res and upscale to HD. All have to be rendered at a much higher resolution.
edit: it's also quite easy to throw a lot of geometry at a unified shading architecture without paying a huge perf cost for it, but I stopped getting excited about vertices/triangles-per-second figures this gen because those numbers are not really representative of what you see on screen. As I've stated many times, we should get better at distributing geometry on screen; we often manage insane polycounts (2-3M triangles per frame or even more) without getting a good return in terms of image quality.
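A back-of-the-envelope number makes the point: a 1280x720 frame is ~920K pixels, so 2.5M triangles in a frame average well under half a pixel each (920K / 2.5M ≈ 0.37). Most of that geometry never produces a visibly distinct pixel, and sub-pixel triangles are close to the worst case for rasterizers that shade in 2x2 quads.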
So for a game like GeOW would you prefer they had dropped the visuals and gotten it to 60fps? I certainly wouldn't.
Quote: "Generally speaking, yes. I'd have to see how a 60fps GeOW with reduced visuals looked compared to the released game, but I think motion/movement is more important than graphics detail."
Obviously I can't speak for everyone, but my guess is most would disagree with you. 30fps even looks more cinematic most of the time. On TV you sometimes see low-budget movies or TV shows having scenes recorded at 60i, giving them a cheap HandyCam look. For camcorders, true 24p recording is all the rage because it looks more professional.
Quote: "So for a game like GeOW would you prefer they had dropped the visuals and gotten it to 60fps? I certainly wouldn't."
If the game had launched at 60fps from the beginning, we wouldn't be having this discussion at all: no one would know "how much better it could look at 30fps", and instead everyone would be enjoying the better gameplay of a more accurate and responsive game.
Graphics are all nice and cool when a game comes out - but it's the gameplay that sticks in your memory as the graphics get surpassed by newer software.
Joker said: "Turns out it's *hugely* political alas"
It would likely be the first time in history where politics contributed to the greater good then. Even if the marketing department would disagree with the "greater good" part.