*spin-off* idTech Related Discussion

If developers can build on top of their previous PS3 code base and experience, more and more developers and middleware providers should be able to exploit the PS3's strengths. It won't happen quickly. I remember nAo or DeanoC describing this evolution as a series of stages back in the Heavenly Sword days.

People were skeptical about Blu-ray at first; now some are exploiting it. With the advances made by first-party titles like KZ2 and Uncharted 2, multi-platform developers will also benefit from tech sharing.


EDIT:
As for the post-mortem, I'd be disappointed if "360 and PS3 feature parity" were the only thing aimed for and highlighted. There is a lot more to learn this gen, including new techniques/practices and the potential to enter new segments/markets.

I respect nAo's and DeanoC's work, but I wouldn't use HS as a model for anything. While Rage is running at 20-30 fps and working towards 60, HS was finalized at 20-30 with lots of tearing. Not exactly a template for anyone to follow. And this was done by some of the best PS3 programmers on an exclusive title.

Josh did some nice research and laid it out well. You can choose to tiptoe around it, but that doesn't negate the fact that two of the most influential company heads made predictions that hold true to this day.

Next generation, reflecting back won't be about "what we learned from the Cell"; it'll be more about "let's not go through this shit again...."
 
I respect nAo's and DeanoC's work, but I wouldn't use HS as a model for anything. While Rage is running at 20-30 fps and working towards 60, HS was finalized at 20-30 with lots of tearing. Not exactly a template for anyone to follow. And this was done by some of the best PS3 programmers on an exclusive title.

That is why I brought up Heavenly Sword. The prediction was made in reference to how aggressively people farm jobs off to the SPUs (answer: not very :)). It turns out that they were right.

Why compare a launch title with a 2nd/3rd-gen title? In this case, a cross-platform developer is attempting 60 fps, compared to a 20-30 fps first/second-party launch title with tearing. So we are improving in general? And yes, id needs to offload even more things to the SPUs, but that is the nature of PS3 programming (part of it, anyway).

Josh did some nice research and laid it out well. You can choose to tiptoe around it, but that doesn't negate the fact that two of the most influential company heads made predictions that hold true to this day.

Next generation, reflecting back won't be about "what we learned from the Cell"; it'll be more about "let's not go through this shit again...."

:) What tiptoeing? I made the same prediction when the PS3 first launched: the multi-platform devs will lag behind the first parties. The only difference is that I believe even cross-platform developers will be forced to exploit the PS3's advantages too, because Cell is so drastically different. You have to spawn things off to the SPUs to get reasonably good performance. As for how aggressively they want to push it, that will depend on their ambition and budget.
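To make that concrete, here's a minimal sketch of the pattern I mean (hypothetical code, with std::thread standing in for SPU contexts, since the real SDK is under NDA): the main core chops the work into self-contained slices and farms them out to the worker cores instead of grinding through one big loop itself.

#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical per-job kernel: each "SPU" gets a contiguous slice of the data.
// On a real PS3 the slice would be DMA'd into the SPU's 256 KB local store,
// processed there, and DMA'd back; std::thread merely stands in for that here.
static void transform_slice(float* data, std::size_t count) {
    for (std::size_t i = 0; i < count; ++i)
        data[i] = data[i] * 0.5f + 1.0f;  // placeholder per-element work
}

// Main-core dispatch: split the array across the available workers
// (e.g. the 5-6 SPUs a game can realistically use) and wait for completion.
void run_jobs(std::vector<float>& data, unsigned workers) {
    std::vector<std::thread> pool;
    const std::size_t slice = data.size() / workers;
    for (unsigned w = 0; w < workers; ++w) {
        const std::size_t begin = w * slice;
        const std::size_t count = (w == workers - 1) ? data.size() - begin : slice;
        pool.emplace_back(transform_slice, data.data() + begin, count);
    }
    for (std::thread& t : pool) t.join();  // main core blocks until all jobs finish
}

The hard part, of course, isn't the dispatch; it's cutting your engine into jobs small and independent enough to fit that mold.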
 
Next generation, reflecting back won't be about "what we learned from the Cell"; it'll be more about "let's not go through this shit again...."

Actually, development for the next generation of consoles could very well involve a lot of reflection on "what we learned from the Cell," if the speculation holds true that the PS4 will use an advanced version of the current Cell processor technology (and even if the Xbox 720 just uses a CPU with a highly multi-core architecture, which is pretty much the direction CPU design is trending in for the foreseeable future). In that case, what "we learned from the Cell" could very well be extremely relevant and of direct importance for development on future platforms: optimizing software for the Cell architecture, assigning tasks to a bundle of SPU cores, and just writing code with very high parallelism in mind in general.

That's why I don't agree with some people's point of "oh, well the Cell is not worth the cost and resources to learn to optimize for and maximize its potential," because this research helps each individual studio, and the developer community's shared code/knowledge base, come to grips with development for the Cell and other highly multi-core CPU architectures on future platforms. It's not all a "wasted, useless drain of money/resources," as some people seem to believe, if you take into consideration the larger timescale and the larger picture, and not just marketability/cost-efficiency for this generation alone.
 
In this case, a cross-platform developer is attempting 60 fps, compared to a 20-30 fps first/second-party launch title with tearing. So we are improving in general? And yes, id needs to offload even more things to the SPUs, but that is the nature of PS3 programming (part of it, anyway).

But they're already getting 60fps on the other systems, so how much is the PS3's theoretical power really going to help? There's plenty of extra power to be tweaked from the 360 too, and if it's easier to do so, then how relevant is it that the PS3 can theoretically do more? Will any developer actually get there sooner than they do on the 360, or will the PS3 always lag behind because it's so difficult to achieve that extra theoretical performance?
 
I think the problem is the same as usual: the main team works on the PC/X360 version, while three to five spare men try to get the PS3 version on par.
 
But they're already getting 60fps on the other systems, so how much is the PS3's theoretical power really going to help? There's plenty of extra power to be tweaked from the 360 too, and if it's easier to do so, then how relevant is it that the PS3 can theoretically do more? Will any developer actually get there sooner than they do on the 360, or will the PS3 always lag behind because it's so difficult to achieve that extra theoretical performance?

We are just arguing about something useless; we're discussing a fraction of the picture of a game that isn't out until 2010.

But I really wonder about that "There's plenty of extra power to be tweaked from the 360" claim. Do you have any examples? I see it mentioned occasionally, but that's about it.
 
We are just arguing about something useless; we're discussing a fraction of the picture of a game that isn't out until 2010.

But I really wonder about that "There's plenty of extra power to be tweaked from the 360" claim. Do you have any examples? I see it mentioned occasionally, but that's about it.
So the 360 is topped out? That's what you imply.
And PR doesn't really seem to focus on this for the 360; we never heard a statement about Fable 2 or whatever title using xx% of the CPU. Basically, 360 owners seem to care less about this, and the PR is thus more oriented around the actual game/gameplay.
 
According to this recent misplaced, out-of-context, poorly clarified quote, they're getting the same performance from Cell as from Xenon, which we know isn't good use of Cell.

I'm taking the CPU comment to mean that the non-render/rasterizer parts of the engine and the game work almost as well on both systems, rather than him talking about the power of each processor (Xenon/Cell) per se, but as you said, the quote was so ambiguous that it's hard to tell. He doesn't say how many SPEs are being used for game logic, physics, AI, etc., so we don't know; they could have 2 or 3 left unutilized at this point.
 
So the 360 is topped out? That's what you imply.
And PR doesn't really seem to focus on this for the 360; we never heard a statement about Fable 2 or whatever title using xx% of the CPU. Basically, 360 owners seem to care less about this, and the PR is thus more oriented around the actual game/gameplay.

What? You think that 360 owners "care less" and that this "dictates" Microsoft's PR? Are you just joking? Surely you can't think that internet forum-goers really dictate Microsoft's PR (especially when their PR was much the same at launch, before the "360 owners" were around in spades).

If you're serious, then I have to strongly disagree.
 
So the 360 is topped out? That's what you imply.
And PR doesn't really seem to focus on this for the 360; we never heard a statement about Fable 2 or whatever title using xx% of the CPU. Basically, 360 owners seem to care less about this, and the PR is thus more oriented around the actual game/gameplay.

Great post, no reason to get all defensive.

I didn't imply anything; the original poster implied something, which is why I asked for examples.
 
There always seems to be an implicit assumption in every discussion where the PS3 is lagging behind the 360: that if Cell is used properly, there's nothing the 360 can do that the PS3 can't. Thus, 'oh well, if Rage is running worse on the PS3 it's because they haven't optimised Cell properly'. Isn't it possible that the work they're doing is something they can't offload onto the Cell processor? Surely there might be things that Xenos and more powerful PC GPUs can do that the RSX, even with help from Cell, can't keep up with?

There's also a problem with comparing first-party games to multiplatform games and saying 'oh well, Uncharted 2 looks better than such-and-such game, so obviously they're not using the PS3 properly': the first-party games are designed around the architecture of the console (especially in the PS3's case), so obviously they're not going to be utilising things that the console isn't very good at. KZ2, UC2 et al. obviously look very good, but there's a certain amount of 'smoke and mirrors' involved (that makes it sound bad, but I don't mean it in a bad way; essentially, they're using great art to cover up any technical shortcomings), so the reason they look so good is that they're playing to the console's strengths and finding workarounds for its weaknesses. If I had to guess - and I'm not a programmer, so this is completely conjectural - I'd guess we're seeing something similar with SC: Conviction and Alan Wake, which have amazing lighting, the likes of which I haven't seen even in top-tier first-party PS3 games; the lighting plays to the 360's strengths while the weaknesses of the system are disguised with smoke-and-mirror artwork.

Of course, like I say, that's completely conjectural, so it's equally possible that there just haven't been any games on the PS3 that needed those kinds of lighting effects.
 
But they're already getting 60fps on the other systems

With flickering textures (filtering problems or mip-map problems).
You see: a shipped game at 20-30 fps vs. a shipped game with flickering textures - which is better?

But it's all BS, really. Carmack was actually talking about how far from a "shipped game" they are on both platforms, that's all.
 
Predicate has some pretty good points there, to which I'd also like to add that Rage is supposedly an open-world environment, so the scope is probably a lot bigger compared to typical FPS/TPS games where you progress linearly through somewhat confined levels.
 
There are no good points to be made here, because we know sweet FA about it: the interview was so basic, out of context, and ambiguous. There is absolutely no detail in it at all, and people are jumping to conclusions.

What I do think has become obvious is that to make a PS3 sing, you have to take a very different approach from the one you would take on both PC and 360, be that a move to deferred rendering/lighting or anything else. Sony really like to overcomplicate things, but they must have chosen this path as a hardware developer for a very good reason, and that may be system flexibility and the ability to crank more out of the system as it gets older. They didn't simply design the graphics architecture of the PS3 because it was an interesting concept; they must have gone with the Cell-RSX combo for some reason or another. Microsoft once described multi-threading on the CPU to do graphics as "fluff". The Sony engineers obviously didn't think of it that way. The truth of the matter is probably somewhere in between both approaches.

Both machines are probably getting close to hitting their theoretical GPU performance limits now (without using different techniques or tricks); Sony even admit to this. The question is how developers move on from that, and how the treatment of gaining extra performance will differ on each machine - probably a great deal.
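To illustrate why the deferred approach mentioned above suits the PS3: once geometry has been rasterised into a G-buffer, lighting becomes a pure screen-space loop whose cost scales with pixels times lights rather than with scene complexity, and each tile of that loop is an independent batch that can be farmed off to the SPUs. A toy software sketch of the idea (hypothetical structures, not anyone's actual engine code):

#include <cmath>
#include <cstddef>
#include <vector>

// One G-buffer texel: surface normal, albedo, and view-space position,
// all written by the geometry pass (not shown here).
struct GBufferPixel { float nx, ny, nz; float albedo; float px, py, pz; };
struct Light { float x, y, z; float intensity; };

// Deferred lighting pass: a pure loop over G-buffer texels. Cost is
// pixels * lights, independent of scene geometry, and every texel is
// an independent work item, ideal for slicing into SPU-sized batches.
std::vector<float> light_pass(const std::vector<GBufferPixel>& g,
                              const std::vector<Light>& lights) {
    std::vector<float> out(g.size(), 0.0f);
    for (std::size_t i = 0; i < g.size(); ++i) {
        for (const Light& L : lights) {
            const float dx = L.x - g[i].px, dy = L.y - g[i].py, dz = L.z - g[i].pz;
            const float dist = std::sqrt(dx * dx + dy * dy + dz * dz) + 1e-6f;
            const float ndotl = (g[i].nx * dx + g[i].ny * dy + g[i].nz * dz) / dist;
            if (ndotl > 0.0f)  // simple diffuse point light with distance falloff
                out[i] += g[i].albedo * ndotl * L.intensity / (dist * dist);
        }
    }
    return out;
}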
 
I know I have no business questioning Carmack, but isn't the idea of using MegaTexture with the PS3 architecture fundamentally flawed? MegaTexture (from my limited understanding) is designed to address how GPUs manage texture memory addressing. It's almost like Carmack's hint to the API designers and IHVs about how to better use the resources at their disposal. But the PS3's whole architecture requires exotic and/or custom memory access and utilization patterns. If JC does get this working, it would almost be equivalent to finding a unified solution to all the PS3's memory management woes... But the fact that he's trying to use the RSX to pull off the MegaTexture algorithm indicates that he is not doing what most other PS3 developers have done, which is to use the Cell's SPUs as an additional (weaker, yet more flexible) GPU... Very confusing, based on my muddled take on things... It would be funny if this were JC's way of petitioning Sony for more low-level hardware details.
 
No, MegaTexture isn't flawed on the PS3. You should try to read more about what it does to better understand what shortcomings it addresses...

It's about how all current GPUs' internal texture management wastes memory, which is something even Sony's engineers cannot address on their own.

Here's the basic outline of the problem:
http://www.bluesnews.com/cgi-bin/finger.pl?id=1&time=20000308010919

Video hardware will always require the entire texture to be loaded into local memory, even if only one texel of it is visible in the rendered image. This wastes serious amounts of memory, and the waste increases with scene complexity. Even the PS3 does not have anything to solve this problem.

Back then, Carmack suggested a hardware solution, which still hasn't happened.

Now he's written a software solution instead, which has two main parts:
- one tricks the hardware by hiding the source textures and manually feeding it the actually required texture information
- the other is a compression scheme that loads the required information from background storage (in this case, an optical disc)


In short, the problem is inherent to the design of the RSX and cannot simply be solved by resource management.
Carmack's problem here is that his workaround taxes the fragment shaders of the RSX a bit too much at this time, whereas Xenos's unified shaders can keep up with the workload without any trickery.
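For the curious, here's a stripped-down sketch of the first part of that software solution (hypothetical structures, nothing from id's actual code): sampling goes through a page table, and only the pages actually touched get pulled into a small resident cache, rather than the whole texture sitting in video memory.

#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

// The huge source texture is split into fixed-size pages; only the pages the
// renderer actually samples are kept resident in memory.
constexpr uint32_t PAGE_SIZE = 128;        // texels per page edge
constexpr std::size_t MAX_RESIDENT = 256;  // cache budget: a few MB, not the whole texture

struct Page { std::vector<uint8_t> texels; };

class VirtualTexture {
public:
    // Translate a texel coordinate into a page id and fetch the page,
    // streaming it in on a cache miss.
    const Page& touch(uint32_t u, uint32_t v) {
        const uint64_t id = (uint64_t(u / PAGE_SIZE) << 32) | (v / PAGE_SIZE);
        auto it = resident_.find(id);
        if (it == resident_.end()) {
            if (resident_.size() >= MAX_RESIDENT)
                resident_.erase(resident_.begin());  // naive eviction; real code would use LRU
            it = resident_.emplace(id, load_page(id)).first;
        }
        return it->second;
    }

private:
    // Stand-in for the transcoder that pulls compressed pages off the disc.
    static Page load_page(uint64_t) {
        return Page{ std::vector<uint8_t>(PAGE_SIZE * PAGE_SIZE * 4) };
    }

    std::unordered_map<uint64_t, Page> resident_;  // the "page table"
};

The second part, the trickery Carmack mentions, is making the fragment shader route every sample through that indirection, and that's exactly the extra per-pixel cost that is said to tax the RSX.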
 
"oh, well the Cell is not worth the cost and resources to learn to optimize for and maximize its potential,"

Or how about:
"oh, well the Cell is not worth the cost and resources to learn to optimize for and maximize its potential, when having to use it to make up for an inferior memory and graphics card design"
 
Thanks for the clarification... But I still have a disconnect here... Most current devs share GPU workloads between the SPUs and the RSX to achieve parity with the Xbox 360's shared memory structure... Is JC going to have to set up two different MegaTexture routines (one running on the RSX, the other on the SPUs) so that the two "GPUs" can effectively share data? Don't other console devs have to perform something akin to "Mega-Vertex" to trick the RSX into accepting geometry that has been processed by the SPUs? Isn't the whole core of PS3 development oriented around using the hardware in unusual ways, especially texture memory addressing (see Ratchet & Clank's exotic texture scheme)?
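On the "Mega-Vertex" point, the usual trick (as I understand it; this is a generic double-buffering sketch, not anything from a real SDK) is that the SPUs write processed vertices into one buffer while the RSX draws from the other, and the two swap each frame, so the two "GPUs" never touch the same memory at once.

#include <array>
#include <atomic>
#include <vector>

struct Vertex { float x, y, z; };

// Two vertex buffers in shared memory: while the GPU (consumer) draws from
// one, the SPUs (producer) fill the other; the roles flip every frame.
std::array<std::vector<Vertex>, 2> buffers;
std::atomic<int> write_index{0};

// SPU side: transform source vertices (skinning, culling, etc.) into the
// buffer the GPU is NOT currently reading.
void spu_process_vertices(const std::vector<Vertex>& source) {
    std::vector<Vertex>& dst = buffers[write_index.load()];
    dst.clear();
    for (const Vertex& v : source)
        dst.push_back(Vertex{ v.x, v.y, v.z + 1.0f });  // placeholder transform
}

// GPU side: draw only from the buffer filled during the previous frame.
void gpu_draw_frame() {
    const std::vector<Vertex>& src = buffers[1 - write_index.load()];
    (void)src;  // ... submit src to the GPU here
}

// At the frame boundary the producer and consumer swap buffers.
void end_of_frame() {
    write_index.store(1 - write_index.load());
}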
 
But I really wonder about that "There's plenty of extra power to be tweaked from the 360" claim. Do you have any examples? I see it mentioned occasionally, but that's about it.

I think it's a fair assumption, given how developers have always managed to squeeze increasing amounts of performance from old console hardware. Just look at some of the last Xbox games (for example, Doom 3), which are leaps and bounds beyond first-generation games. The games continued to improve right up until the end. I would expect this to be even truer for the 360, given how much tweaking can be done with parallelism. And there are always deeper and deeper tricks to be exploited in the hardware, which later games will continually mine, just as they did for the Xbox. Eventually they'll run out, but will it matter by then, or will there be a new generation of consoles?
 