Digital Foundry Article Technical Discussion Archive [2009]

Status
Not open for further replies.
You have to limit extenuating circumstances at some point. The tale of NG2/NGS2 is long and fraught with troubles. You could make a credible argument that any or all of the story is part of the reason why the games turned out the way they did. If you can't compare NG2 to NGS2, then you can't compare any other late ports, either.

Basically yes. If NG2 was released a year later, would it have been graphically better? Probably. If Bioshock on X360 was released a year later, would it have been graphically better? Probably...

Does moving to another platform have advantages? Probably... Does it have disadvantages (if the game was previously exclusive to another platform)? Probably...

So it basically makes it difficult to judge the relative effects of one platform versus another. It is still fascinating to see what compromises had to be made and what improvements could be made a year later.

And then you have something like NG2 -> NGS2, where the whole design philosophy of the game changed. And there's no way to know how much of it is due to platform strengths and weaknesses and how much is due to the new guys completely disagreeing with the past guys' choices.

Regards,
SB
 
Basically, anyone releasing product in 2009 benefits from both successful and failed experiments of prior products. So, personally, I think it's tricky to compare a 2009 game to a 2008 game because the 2008 guys weren't playing with a full deck.


Later ports are always done with fewer people, in a shorter time, with a smaller budget, and NGS2 is no exception. You can't really call that a fuller deck. If you look at other examples of later ports, it's usually the opposite.
 
BioShock wasn't a bad port IMO. But if your point was that they were inferior nonetheless, then point taken.

Tech-wise, yes, it is a bad port. The PS3 version runs sub-HD at a much worse frame rate, even with a more aggressive LOD system. By my rough estimation, the performance difference is more than 50% in favor of the 360.
 
What was the deal with LP? I didn't play either version extensively and the main difference I saw was that gamma was totally off on the PS3 and dark places were fairly bright.

On the other hand, the port typically ran well above 30Hz. What was downgraded?
 
I did another video regarding the drastically reduced polygon count <Mod: Doused potential flame>

http://www.youtube.com/watch?v=uvcS0gdmt7Q

<Mod: Doused potential flame>, there are 12 enemies on screen and the framerate doesn't even struggle... Besides the staircase, there were almost never more than 10 enemies on screen in the 360 version, and still you say the PS3 can't handle all the enemies from the 360 version and is reduced to 4-6.

4-6 maximum... <Mod: Doused potential flame>,
 
There are 12 enemies on screen

There's a big difference between a game arena with 12 enemies in the area versus rendering 12 enemies on-screen. At most there might be 8 werewolves in the marketplace section. It's difficult to trust any framerate assertions from the uploaded video, but tearing is more noticeable once the attack starts in that section and a few transparencies mount; the camera quickly cuts to showing fewer enemies anyway.
 
That's why Tim Sweeney is pushing for software rendering (and micropolygons).

Errr... and on what exactly will this "software rendering" run? I know that software that's complex enough doesn't need any hardware, but we're still a very long way from that; perfect LOD will be invented earlier. :)

I see this whole "micropolygon" approach as a fundamental latency vs. bandwidth problem, and as we all know, there won't be any solution for latency any time soon, if ever.
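For readers unfamiliar with what "micropolygons" means in practice, here is a toy REYES-style dicing loop. This is my own minimal sketch of the general concept, not Sweeney's proposal or Pixar's actual algorithm: a surface patch is recursively split until every piece covers at most one pixel in screen space, so shading happens at roughly pixel rate.

```python
# Minimal sketch of REYES-style "dicing" (illustrative only):
# split a screen-space extent until each piece is sub-pixel sized.

def dice(x0, y0, x1, y1, max_size=1.0):
    """Recursively split a screen-space bounding box until each
    piece is <= max_size pixels wide/tall; return the piece count."""
    w, h = x1 - x0, y1 - y0
    if w <= max_size and h <= max_size:
        return 1                      # one micropolygon
    if w >= h:                        # split along the longer axis
        mx = (x0 + x1) / 2.0
        return (dice(x0, y0, mx, y1, max_size)
                + dice(mx, y0, x1, y1, max_size))
    my = (y0 + y1) / 2.0
    return (dice(x0, y0, x1, my, max_size)
            + dice(x0, my, x1, y1, max_size))

# A 32x32 pixel patch dices into roughly one micropolygon per pixel:
print(dice(0, 0, 32, 32))  # -> 1024
```

The bandwidth concern raised above follows directly: micropolygon counts scale with covered pixels, not with scene complexity.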
 
Assuming Ninja Gaiden 2 was in development for two years and was released in June 2008, that puts the project start around the middle of 2006. I heard that tiling is something you have to take into account early in the development stages (if you want to implement it, obviously). I think it might have been a bigger deal back then than it is now.
Going forward (Joker may chime in), it would make sense for most important technical decisions to be taken early in the project, i.e. mid-2006 for NG2, which stretches the gap between NGS2 and NG2 in regard to technical choices, imho.
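For context on why tiling has to be planned so early: the 360's 10 MB of EDRAM cannot hold a full 720p frame once MSAA is enabled, so the frame must be rendered in multiple tile passes, which constrains the whole renderer. A back-of-the-envelope calculation, using the commonly cited buffer sizes (4 bytes color + 4 bytes depth/stencil per sample; these figures are general knowledge, not from this thread):

```python
# Rough sketch of why Xbox 360 "predicated tiling" exists: the
# 10 MB EDRAM must hold color + depth for every MSAA sample.
import math

EDRAM = 10 * 1024 * 1024          # 10 MB of EDRAM

def tiles_needed(width, height, msaa=1, bytes_per_sample=8):
    """bytes_per_sample = 4 B color + 4 B depth/stencil per sample."""
    frame_bytes = width * height * msaa * bytes_per_sample
    return math.ceil(frame_bytes / EDRAM)

for aa in (1, 2, 4):
    print(f"720p {aa}xMSAA -> {tiles_needed(1280, 720, aa)} tile(s)")
# 1x fits in one tile; 2x needs 2 tiles; 4x needs 3 tiles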
 
Last edited by a moderator:
Errr... and on what exactly will this "software rendering" run? I know that software that's complex enough doesn't need any hardware, but we're still a very long way from that; perfect LOD will be invented earlier. :)

I see this whole "micropolygon" approach as a fundamental latency vs. bandwidth problem, and as we all know, there won't be any solution for latency any time soon, if ever.
It's an interesting discussion (I've been wondering for some time about opening an "end of the GPU road v.2" thread in the beginner section :) ). I see a lot of senior posters here dreaming of narrower SIMD, even lower granularity for branching, etc. But I see something a bit troublesome: Larrabee is still 16-wide, and Intel already has to deal with quite a few cores, and thus the communication between them, and from the time it's taking them it looks far from easy (obviously). Compute density may end up taking quite a hit. I wonder if this will be doable soon (i.e. in time for next-generation systems); I feel like one would need something akin to the Intel Polaris prototype (a bunch of cores on a grid for communication, with the RAM ending up under/stacked).
End of the OT; I have various questions about it and some explanation would be welcome, but that's for another thread.
 
Last edited by a moderator:
Errr...and on what exactly this "software rendering" will run?

Ask him, dude :) He's probably thinking about Larrabee or so.

I see this whole "micropolygon" way as a fundamental latency vs. bandwidth problem, and as we all know: there won't be any solution for latency any-time soon, if ever.

I'd read up on Pixar's Renderman before drawing any conclusions...
 
I'd read up on Pixar's Renderman before drawing any conclusions...

And what's interesting there?
"Let's make everything a sub-pixel polygon"?
I think I don't need to prove that this will be beaten by a per-pixel processor on anything bigger than one pixel, linearly.
 
Interesting point

I did another video regarding the drastically reduced polygon count <Mod: Doused potential flame>

http://www.youtube.com/watch?v=uvcS0gdmt7Q

<Mod: Doused potential flame>, there are 12 enemies on screen and the framerate doesn't even struggle... Besides the staircase, there were almost never more than 10 enemies on screen in the 360 version, and still you say the PS3 can't handle all the enemies from the 360 version and is reduced to 4-6.

4-6 maximum... <Mod: Doused potential flame>,

Maybe they made a small mistake in their calculations?

On your video, one person said that in the PS3 version they notice there are never more than 2 enemy types.

I wonder if this is due to memory constraints. So maybe the PS3 version has many enemies, but of lesser variety.

Also, what about other levels? Can this trick work in different areas to get more enemies? Maybe it would be fun to do a maximum-enemies-before-slowdown contest for someone who has both versions and both consoles.
 
Anyhow, if you want to do a lot of blended transparent particles (tons of games do), you need the fillrate to write those passes plus the bandwidth to keep up.

I remember nAo suggested that the render buffer could be split into tiles that would fit the GPU's cache, as a means to speed up rendering of opaque pixels.

I believe there were signs that at least Insomniac were using that technique in the first R&C game.

It sounds like one viable way to reduce the bandwidth requirements to external memory.
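A toy sketch of that tile-binning idea (my own illustration, not nAo's suggestion verbatim or Insomniac's code): split the render target into tiles small enough that their color+depth footprint fits in cache, bin each triangle's screen-space bounding box into the tiles it touches, then process one tile at a time so framebuffer traffic stays on-chip.

```python
# Toy triangle binner for cache-sized screen tiles (illustrative).
TILE = 32  # 32x32 px * 8 B/px (color+depth) = 8 KB per tile

def bin_triangles(tris, screen_w, screen_h):
    """tris: list of ((x0,y0),(x1,y1),(x2,y2)) in pixel coords.
    Returns {tile_coord: [triangle indices touching that tile]}."""
    bins = {}
    for i, tri in enumerate(tris):
        xs = [p[0] for p in tri]
        ys = [p[1] for p in tri]
        tx0, tx1 = int(min(xs)) // TILE, int(max(xs)) // TILE
        ty0, ty1 = int(min(ys)) // TILE, int(max(ys)) // TILE
        for ty in range(max(ty0, 0), min(ty1, (screen_h - 1) // TILE) + 1):
            for tx in range(max(tx0, 0), min(tx1, (screen_w - 1) // TILE) + 1):
                bins.setdefault((tx, ty), []).append(i)
    return bins

tris = [((0, 0), (40, 5), (5, 40)),            # spans 2x2 tiles
        ((100, 100), (110, 100), (100, 110))]  # fits in one tile
print(bin_triangles(tris, 1280, 720))
```

Binning by bounding box is conservative (a triangle may be listed in a tile it doesn't actually cover), but it keeps the binning pass cheap.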
 
Assuming Ninja Gaiden 2 was in development for two years and was released in June 2008, that puts the project start around the middle of 2006. I heard that tiling is something you have to take into account early in the development stages (if you want to implement it, obviously). I think it might have been a bigger deal back then than it is now.
Going forward (Joker may chime in), it would make sense for most important technical decisions to be taken early in the project, i.e. mid-2006 for NG2, which stretches the gap between NGS2 and NG2 in regard to technical choices, imho.

would tiling have allowed 60fps?

Maybe they made a small mistake in their calculations?

On your video, one person said that in the PS3 version they notice there are never more than 2 enemy types.

I wonder if this is due to memory constraints. So maybe the PS3 version has many enemies, but of lesser variety.

Also, what about other levels? Can this trick work in different areas to get more enemies? Maybe it would be fun to do a maximum-enemies-before-slowdown contest for someone who has both versions and both consoles.

Wouldn't you have to load textures and process polys for all members of those 2 enemy types anyway? That seems more related to media speed and storage space.
 
I think I don't need to prove that this will be beaten by a per-pixel processor on anything bigger than one pixel, linearly.

Pixar's RenderMan is the king of all renderers. It is highly scalable and efficient, its various sampling techniques create superior rendering quality, and it's been polished for more than two decades by now.

I don't think anyone can discuss the tech used in PRMan without learning more about PRMan itself.
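The "per-pixel beats micropolygons on anything bigger than one pixel" claim can be put in rough numbers. Hardware rasterizers shade pixels in 2x2 quads, so a pixel-sized triangle still pays four shader invocations (helper lanes included), while a large triangle amortizes its quads almost perfectly. This toy model is my own framing of the trade-off, not anything from the PRMan documentation:

```python
# Toy overshading model for quad-based pixel shading (illustrative).

def overshade(covered_pixels, quads_touched):
    """Shader invocations per covered pixel when shading in 2x2
    quads: each touched quad costs 4 invocations, visible or not."""
    return 4 * quads_touched / covered_pixels

print(overshade(1, 1))         # pixel-sized triangle: 4.0x overshading
print(overshade(10000, 2500))  # 10,000 px triangle:   1.0x (no waste)
```

This is why micropolygon-sized geometry is painful on quad-shading hardware, and why large triangles favor the conventional per-pixel pipeline.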
 
Last edited by a moderator:
One would assume that in terms of quality, that also makes it king of all renderers. And if RenderMan uses micropolygons, that's a good argument to suggest that, with all the people in the world looking into getting pixel-perfect renders, micropolygons are accepted as an important and desirable feature. If there were a system that could achieve RenderMan's quality with a single-polygon-per-pixel method, and be that much faster to boot as you suggest, don't you think someone would have implemented it?

Not that I'm saying micropolygons are or aren't the bee's knees. I don't know enough to comment. In theory one triangle per pixel is ideal, but there are enough people working on rendering tech that someone would have produced something that manages that if it were possible with current (fully programmable, in the case of offline renderers) hardware, I'm sure.

For the scope of this thread, though, the future doesn't matter. In the here and now, the number of triangles isn't going to map 1:1, certainly not 1:4, with the number of pixels, so there's not much point lamenting the design choices of games for drawing millions of triangles!
 
One would assume that in terms of quality, that also makes it king of all renderers. And if RenderMan uses micropolygons, that's a good argument to suggest that, with all the people in the world looking into getting pixel-perfect renders, micropolygons are accepted as an important and desirable feature. If there were a system that could achieve RenderMan's quality with a single-polygon-per-pixel method, and be that much faster to boot as you suggest, don't you think someone would have implemented it?

Latency. You don't need to worry about latency when your frame renders in 2 hours, but you do need to worry about it if you need it in 30ms.
It's very naive to think that scalability obeys the same laws for low-latency and high-latency algorithms; no, it's not even remotely similar in most cases.
 
Latency. You don't need to worry about latency when your frame renders in 2 hours, but you do need to worry about it if you need it in 30ms.
It's very naive to think that scalability obeys the same laws for low-latency and high-latency algorithms; no, it's not even remotely similar in most cases.
No-one was talking about scalability. :???: That wasn't why RenderMan was raised. It's also OT. I outlined in my last line the relevance of micropolygons, RenderMan, etc. to this thread.
 