Digital Foundry Article Technical Discussion Archive [2009]

It seems pretty clear that when you substitute bit mapped images / textures for 3D models, you're doing that for technical reasons. Even if this is a new engine, why go through the effort of changing the base content / object that runs on this new engine? Unless the new engine can't handle it with reasonable or desired results.

Someone (an artist perhaps, or modeler) went through the trouble of mapping/making a 3D object representing fences, windows, etc... to convert that to a flat image is work. Why do the extra work instead of reusing what you have available -- a bunch of numbers in a file? Someone had to convert those numbers into an image file, or flattened using some tool that modeled the original object(s) and surface mapped the appropriate textures.

I know that perfectly well. But the engine shows differences in those areas on both versions, and the downgrades aren't exclusive to the PS3. Just replay NG and NGS to notice the same differences.
 
Ok, so this was not THE WAY, it was a compromise to do something that was impossible to get by other means.
It's not a compromise if there was no other way to do it! PS2 was designed to work that way. It was given insane amounts of bandwidth and triangle drawing capacity.
But here, in article about NG2 we hear that there was some "architecture strength" exploited and not a compromise for an architectural weakness.
Yes. XB360 was designed with one eye on PS2, it looks like. The inclusion of eDRAM means XB360 can cope with a lot more drawing than RSX. Thus if writing for XB360, you may choose to use that to your advantage. Whereas RSX is very straightforward, favouring the approach that you get all your work done the one time that pixel is drawn and then forget about it and move on to another pixel, so you want to avoid overdraw on RSX.
 
Yes. XB360 was designed with one eye on PS2, it looks like. The inclusion of eDRAM means XB360 can cope with a lot more drawing than RSX. Thus if writing for XB360, you may choose to use that to your advantage. Whereas RSX is very straightforward, favouring the approach that you get all your work done the one time that pixel is drawn and then forget about it and move on to another pixel, so you want to avoid overdraw on RSX.
I'm guessing people like Criterion probably wanted it for things like the particles in Burnout, and developers who like to use a great deal of alpha-blended effects in their games.
 
It's not a compromise if there was no other way to do it! PS2 was designed to work that way. It was given insane amounts of bandwidth and triangle drawing capacity.

So what? If you have one system that can draw DOT3 bump mapping in one pass and another that needs four passes, which one is better? (The fact that the systems exist in the same time frame means their speed is roughly the same.)

Yes. XB360 was designed with one eye on PS2, it looks like. The inclusion of eDRAM means XB360 can cope with a lot more drawing than RSX.

I don't see how eDRAM helps here. Vertex processing has nothing to do with eDRAM, and the PS3 version of NG2 has much more per-pixel processing.

Thus if writing for XB360, you may choose to use that to your advantage. Whereas RSX is very straightforward, favouring the approach that you get all your work done the one time that pixel is drawn and then forget about it and move on to another pixel, so you want to avoid overdraw on RSX.

I still fail to understand why you need to pump a lot of sub-pixel-sized polygons onto the screen with lots of overdraw, all with very simplified per-pixel processing, and consider it a "strength".

P.S. I don't argue that the X360 architecture has strong vertex processing capabilities; I just don't see how this exact implementation is exploiting it in the right way.
 
So you don't see a benefit of getting your full fillrate (i.e. not bandwidth limited, instead computationally limited) and the benefit of doing more overdraw (something a lot of games like to do and have to scale back; see 1/4th and 1/16th buffers to compensate for HW limitations)?
 
So you don't see a benefit of getting your full fillrate (i.e. not bandwidth limited, instead computationally limited) and the benefit of doing more overdraw (something a lot of games like to do and have to scale back; see 1/4th and 1/16th buffers to compensate for HW limitations)?

No, I don't see it in the mentioned game. Maybe I should look somewhere else?
 
I think the math is very simple: for 720p you need 1280 * 720 / 4 = 230,400 polygons on screen per frame, maximum. If you're pumping more: you're using your GPU inefficiently. :)
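For what it's worth, the arithmetic behind the 230,400 figure checks out (bearing in mind the ~4-pixels-per-triangle density is the poster's assumption, not a hardware constant):

```python
# Back-of-envelope check of the "one triangle per ~4 pixels" figure.
# The 4-pixels-per-triangle density is an assumption, not a limit.
width, height = 1280, 720
pixels_per_triangle = 4

triangles_needed = width * height // pixels_per_triangle
print(triangles_needed)  # 230400
```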

This gets posted a lot but I still don't really subscribe to it. Realistically, the perfect LOD system has yet to be invented. You can look at any game out there, even AAA games from the world's best studios that push 2+ million polygons per frame, and you still see rough edges. Whatever the math may claim, if 230k polygons is all we need, then why has no one been able to ship a game with proper silhouettes even with 10x that number of polygons? In theory maybe the 230k number has merit, but in the real world it currently has no place, at least until someone invents a perfect LOD system. Until then, we still need lots of polygons that need to be processed multiple times. That's just how it is. The minute someone manages to ship a great-looking game with 230k polys per frame, though, I'm sure it will make news headlines everywhere.
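The popping silhouettes described above come from the discrete LOD schemes games actually shipped with at the time; a minimal sketch of such a scheme (distance thresholds, mesh names, and triangle budgets are all made up for illustration):

```python
# A minimal sketch of discrete distance-based LOD selection, the
# common approach of the era. Silhouettes look rough because triangle
# counts change in coarse steps instead of continuously tracking the
# "few pixels per triangle" ideal. All numbers are hypothetical.
def select_lod(distance):
    lods = [
        (10.0, "mesh_high"),          # e.g. 20k tris, up close
        (30.0, "mesh_mid"),           # e.g. 5k tris, mid-range
        (float("inf"), "mesh_low"),   # e.g. 1k tris, far away
    ]
    for max_dist, mesh in lods:
        if distance <= max_dist:
            return mesh

print(select_lod(5.0))   # mesh_high
print(select_lod(50.0))  # mesh_low
```

A "perfect" LOD system would instead vary detail continuously with screen coverage, which is exactly what nobody had shipped at that point.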

Aside from that, I just wanted to double check something. Are you guys comparing a brand new just shipped PS3 game (NGS2) to a 360 game (NG2) that was released in June of 2008? If I'm wrong on that ship date then never mind. But if the 360 version was shipped over a year ago, then you are talking about a huge difference in tools, sdk, as well as a wealth of knowledge improvements on dev forums. In other words even if it's a studios first console game, they can make it look better now than they could have in 2008. That makes comparisons tricky.
 
You can look at any game out there, even AAA games from the world's best studios that push 2+ million polygons per frame, and you still see rough edges.

Err... who said that this "AAA game from the world's best studio" is really carrying 2+ million polygons all the way into the command buffer? ;)
If you're talking about the game that starts with U and ends with D - it's not even remotely the case. I.e. this "world's best studio" indeed has a LOD system that is kinda close to perfect.
As for why it still doesn't look that good: this LOD is still not perfect.

Whatever the math may claim, if 230k polygons is all we need, then why has no one been able to ship a game with proper silhouettes even with 10x that number of polygons? In theory maybe the 230k number has merit, but in the real world it currently has no place, at least until someone invents a perfect LOD system. Until then, we still need lots of polygons that need to be processed multiple times. That's just how it is. The minute someone manages to ship a great-looking game with 230k polys per frame, though, I'm sure it will make news headlines everywhere.

I think this game makes a lot of headlines right now, you know.
 
So what? If you have one system that can draw DOT3 bump mapping in one pass and another that needs four passes, which one is better? (The fact that the systems exist in the same time frame means their speed is roughly the same.)
Neither is better if they achieve the same desired result.
I don't see how eDRAM helps here. Vertex processing has nothing to do with eDRAM, and the PS3 version of NG2 has much more per-pixel processing.
Overdraw and transparency isn't all about vertex processing. You need to read and write the buffers multiple times. There is a reason ATI chose 256 GB/s internal BW for their eDRAM!
I still fail to understand why you need to pump a lot of sub-pixel-sized polygons onto the screen with lots of overdraw, all with very simplified per-pixel processing, and consider it a "strength".
Particle-heavy games are hard to do on GPUs with strong pixel shaders and weak vertex shaders. Thus hardware that is able to churn out lots of simple polygons has a strength there. I don't know how NGS2 compares in that respect. I'll leave it to MazingDUDE/grandmaster to explain why it is in this implementation.
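To make the overdraw/bandwidth point concrete, here's a back-of-envelope sketch: every alpha-blended layer is a read-modify-write of the colour buffer, so framebuffer traffic scales linearly with overdraw, which is exactly the traffic eDRAM soaks up. All numbers below are illustrative assumptions, not measurements from any of these games:

```python
# Rough colour-buffer bandwidth cost of alpha-blended overdraw.
# Illustrative only: real costs also include Z reads, texture
# fetches, and any compression the hardware applies.
width, height, fps = 1280, 720, 30
bytes_per_pixel = 4      # assuming an RGBA8 colour buffer
layers = 10              # assuming ten full-screen transparent layers

# each blended layer reads and writes the colour buffer once
bw_bytes = width * height * bytes_per_pixel * 2 * layers * fps
print(bw_bytes / 1e9)    # ~2.2 GB/s of colour traffic alone
```

Scale the layer count and resolution up and it's easy to see why dedicated on-die bandwidth (or quarter-resolution particle buffers on other hardware) becomes attractive.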
 
Aside from that, I just wanted to double check something. Are you guys comparing a brand new just shipped PS3 game (NGS2) to a 360 game (NG2) that was released in June of 2008? If I'm wrong on that ship date then never mind. But if the 360 version was shipped over a year ago, then you are talking about a huge difference in tools, sdk, as well as a wealth of knowledge improvements on dev forums. In other words even if it's a studios first console game, they can make it look better now than they could have in 2008. That makes comparisons tricky.
Yes, they are a year apart. It's even more tricky since the game that goes by the name Ninja Gaiden 2 is very much a Microsoft exclusive, though people won't recognise it as one. Published by Microsoft everywhere except for Japan. I'm not sure how many other studios have access to Microsoft's focus testing groups, but surprisingly enough NG2 did. Using what are supposedly the strengths of one platform in this situation should be expected.
We are essentially comparing two exclusives, something that we usually don't get a chance to do.
 
Err... who said that this "AAA game from the world's best studio" is really carrying 2+ million polygons all the way into the command buffer?
DF has a copy of the U2 developer vid where the lead programmer says
...about 1.2 million triangles that we try to draw per frame.
But surely you trust the devs on this board where we've discussed several times before how nice it'd be to have an optimal LOD engine that doesn't exist yet.

I think this game makes a lot of headlines right now, you know.
You're being obtuse here. There is no game out there with perfect silhouettes and you know it. We're still at a point where we can spot polygons in every single title, and that's with hardware capable of rendering more than one polygon per pixel in raw terms. 720p would only need about 60 million triangles per second at 60 fps, 30 million at 30 fps.
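Spelling out the throughput figures in that last sentence (assuming one triangle per pixel at 1280x720, as the post states; the "60 million" and "30 million" are the rounded per-second numbers):

```python
# One-triangle-per-pixel throughput at 720p, the "raw terms" figure.
# Real scenes would need overdraw headroom on top of this.
pixels = 1280 * 720             # 921,600 pixels per frame
tris_per_sec_60 = pixels * 60   # ~55.3M/s, rounded up to "60 million"
tris_per_sec_30 = pixels * 30   # ~27.6M/s, "30 million"
print(tris_per_sec_60, tris_per_sec_30)
```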
 
You're being obtuse here. There is no game out there with perfect silhouettes and you know it. We're still at a point where we can spot polygons in every single title, and that's with hardware capable of rendering more than one polygon per pixel in raw terms. 720p would only need about 60 million triangles per second at 60 fps, 30 million at 30 fps.

Added to that, there's talk in other threads that properly leveraging tessellation requires triangles to be sub-pixel sized for certain uses to avoid rendering errors.

And personally, I consider that to be one of the future directions for next gen consoles, especially after having seen dynamic tesselation in action.

So going forward, I expect to see polygon numbers explode for AAA titles where the hardware supports it.

Obviously something like this won't happen on current consoles (I don't think even the X360 will be able to properly leverage its tessellation unit, though one can always hope), but I think it's one of the key features going forward.

Regards,
SB
 
Of course he is being obtuse. Joker mentioned AAA games as a rough way of noting that even the best games on the market don't have perfect edges AND use more than 250k-ish polys/frame, and psorcerer immediately jumps to promote a specific game -- whose developers themselves are promoting many more polys/frame. Heck, if it isn't a typo (80k vs. 18k), the characters themselves may be close to that "theoretical optimum."

Anyhow, if you want to do a lot of blended transparent particles (tons of games do) you need the ability to write (fillrate) those passes + the bandwidth to keep up.

Next thing we will hear is that consoles only need 60MP of fillrate (720p 60Hz) and 8GP fillrates are overkill... I mean, do the math, people! 8GP is wasteful!
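For the record, the sarcasm is easy to quantify: an 8 GP/s fillrate buys roughly 145 passes over every 720p pixel per frame at 60 Hz, and that "wasteful" headroom is exactly what blended overdraw consumes. A rough sketch with the post's round numbers:

```python
# Raw fillrate headroom over a single full-screen pass at 720p/60Hz.
# "8GP" is the round figure from the post, not a measured spec.
base_fill = 1280 * 720 * 60        # ~55.3 MP/s for one pass per pixel
gpu_fill = 8_000_000_000           # the "8GP" fillrate figure
overdraw_headroom = gpu_fill / base_fill
print(round(overdraw_headroom))    # ~145 layers of overdraw headroom
```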
 
Are you guys comparing a brand new just shipped PS3 game (NGS2) to a 360 game (NG2) that was released in June of 2008? If I'm wrong on that ship date then never mind. But if the 360 version was shipped over a year ago, then you are talking about a huge difference in tools, sdk, as well as a wealth of knowledge improvements on dev forums. In other words even if it's a studios first console game, they can make it look better now than they could have in 2008. That makes comparisons tricky.


NG2 was released in the 360's third year, as was NGS2 in the PS3's third. Team Ninja did two games on the 360 prior to NG2, but only one on the PS3. And just look at other examples like Lost Planet and BioShock. Both games shipped a year later on PS3, but we got nothing but gimped versions, especially with LP.
 
Added to that, there's talk in other threads that properly leveraging tessellation requires triangles to be sub-pixel sized for certain uses to avoid rendering errors.

I have serious doubts that sub-pixel polygons can be anywhere near efficient on any type of hardware.


On other points made by other respectable forum members: I do see what you're getting at, but this is not where I wanted to go.
I know what the term "ideal" means, but for me "closer to ideal" means it's better.
 
NG2 was released in the 360's third year, as was NGS2 in the PS3's third. Team Ninja did two games on the 360 prior to NG2, but only one on the PS3. And just look at other examples like Lost Planet and BioShock. Both games shipped a year later on PS3, but we got nothing but gimped versions, especially with LP.
BioShock wasn't a bad port IMO. But if your point was that they were inferior nonetheless, then point taken.
 
NG2 was released in the 360's third year, as was NGS2 in the PS3's third. Team Ninja did two games on the 360 prior to NG2, but only one on the PS3. And just look at other examples like Lost Planet and BioShock. Both games shipped a year later on PS3, but we got nothing but gimped versions, especially with LP.

The year of release also matters because the collective knowledge improves, and much of it is similar for both machines. For example, take something like light scattering. The way we did it in 2005 was very different from 2006, which in turn was again different in 2007. In each successive case it became much faster, not so much from shader optimizations due to more experience with the hardware, but from smarter implementations that ran faster with ~90% similar visual results. This is now knowledge that is publicly available amongst devs (spread either via dev forums or word of mouth) and largely applicable to both machines. So, light scattering in 2007 looks similar to what it did in 2005, but it's executed in a much smarter fashion now, which frees up cycles. The same can be said of other techniques like shadow implementations, etc. We all learn from prior games both what to do and what not to do.

Basically, anyone releasing product in 2009 benefits from both successful and failed experiments of prior products. So, personally, I think it's tricky to compare a 2009 game to a 2008 game because the 2008 guys weren't playing with a full deck.
 
Thinking of it like that is tricky as well; it assumes they could make a better NG2 this year than they did last year, even with their experience with the console over multiple games.

Is all we are seeing the work of the RSX, or do you think that's impossible? And how important do you think the CPUs in each console are to the performance of each game, i.e. what kind of usage percentage would you speculate?
 
Basically, anyone releasing product in 2009 benefits from both successful and failed experiments of prior products. So, personally, I think it's tricky to compare a 2009 game to a 2008 game because the 2008 guys weren't playing with a full deck.

You have to limit extenuating circumstances at some point. The tale of NG2/NGS2 is long and fraught with troubles. You could make a credible argument that any or all of the story is part of the reason why the games turned out the way they did. If you can't compare NG2 to NGS2, then you can't compare any other late ports, either.
 