Ikusagami Screenshots

london-boy said:
Remember that on PS2, at 30fps, you're lucky to get 500k polys per frame PEAK (that's 15M polys per second, and most games don't even get close to that). Half of that (250k) at 60fps.
How did you get those numbers?
 
pixelbox said:
How did you get those numbers?

Which numbers? There are a few there. Mostly I got them from here, though. Peak in-game figures seem to be around 15M polys per sec. That translates to 500k per frame at 30fps.

Average numbers will be much lower than that.
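
For reference, the arithmetic behind those figures is just peak throughput divided by frame rate; a trivial sketch:

```cpp
#include <cstdio>

int main() {
    const double peakPolysPerSec = 15e6;  // claimed PS2 peak in-game figure
    const int frameRates[] = {30, 60};
    for (int fps : frameRates) {
        // per-frame budget = per-second throughput / frame rate
        std::printf("%d fps -> %.0f polys/frame\n", fps, peakPolysPerSec / fps);
    }
    return 0;  // prints 500000 at 30 fps and 250000 at 60 fps
}
```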
 
PS2 has such a weird design, who knows what it can do. First normal mapping, then tens of thousands of people on screen. Then again, they could be including the horse models. BTW, is there any game on PS2 that has fur shading? I thought SotC had it, but I don't know.
 
The Dreamcast...

It would also draw it at full 60 updates per second on VGA, with better internal color precision and better texture filtering.

Hey now. The tile-based deferred renderer would have had a field day with that scene. It would just have drawn the parts of the characters you could see. :smile:
 
jvd said:
Hey now. The tile-based deferred renderer would have had a field day with that scene. It would just have drawn the parts of the characters you could see. :smile:

Fill rate wouldn't be the only issue with those scenes. Transforming could very well have throttled the DC.
 
Ty said:
Fill rate wouldn't be the only issue with those scenes. Transforming could very well have throttled the DC.
It would only have to transform what is shown, wouldn't it?
 
Transform comes before visible surface determination, so bounding-box tests or smart game-based visibility culling would be needed to cut down on a lot of the T&L.

Overdraw definitely plays to a DC strength in pixel fillrate (and bandwidth, shading, and texturing), and the savings in external memory from not needing a bigger z-buffer, along with the great many depth positions that the DC's high floating-point precision sorts unconditionally, afford DC designers a lot of freedom in games with massive numbers of objects.
 
jvd said:
It would only have to transform what is shown, wouldn't it?

Generally no. You need to move your polys first before you can determine whether or not they can be seen.

E.g. you have a soldier running across the landscape. He runs behind a hillside. You have to move the soldier's polys before you can determine that he should be culled from your view (the usual compromise is sketched after this post).

Lazy8s said:
Transform comes before visible surface determination, so only smart game-based visibility culling would cut down on the amount of transform.

How do you determine what can be seen before you even determine where it's at? This is directed towards your statement about "smart game-based visibility culling". Do you have an example of how this works?

Lazy8s said:
Overdraw definitely plays to a DC strength in pixel fillrate, and the savings in external memory from not needing a bigger z-buffer, with the great many depth positions that the DC's high floating-point precision sorts unconditionally, afford DC designers a lot of freedom in games with massive numbers of objects.

But how is the transform power of the DC? Where is the setup done? The scene is made up of many animated characters, so it's not necessarily fill rate limited. It would be interesting to find out the number of polys in those scenes.
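
To make that tradeoff concrete: the usual compromise is to transform something far cheaper than the full mesh, e.g. a bounding sphere's center, and test it against the view frustum before committing to any per-vertex work. A minimal sketch with hypothetical types (note this handles view-volume culling only; a soldier hidden behind a hillside is an occlusion problem, which comes up further down the thread):

```cpp
struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };  // plane equation: dot(n, p) + d = 0, n pointing inward

// Hypothetical object-space bounding sphere for a character model.
struct Sphere { Vec3 center; float radius; };

// Transform just the sphere's center into world space: one matrix multiply
// instead of one per vertex for the whole mesh. 'm' is a column-major 4x4.
Vec3 transformPoint(const float m[16], Vec3 p) {
    return { m[0]*p.x + m[4]*p.y + m[8]*p.z  + m[12],
             m[1]*p.x + m[5]*p.y + m[9]*p.z  + m[13],
             m[2]*p.x + m[6]*p.y + m[10]*p.z + m[14] };
}

// Conservative view-volume test: if the sphere lies fully outside any one
// frustum plane, the object can't be visible and none of its polys need
// transforming. Assumes unit-length plane normals.
bool sphereInFrustum(const Plane planes[6], const Sphere& s) {
    for (int i = 0; i < 6; ++i) {
        float dist = planes[i].n.x * s.center.x
                   + planes[i].n.y * s.center.y
                   + planes[i].n.z * s.center.z + planes[i].d;
        if (dist < -s.radius) return false;  // fully outside this plane
    }
    return true;  // potentially visible; do the full transform
}
```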
 
SH-4 can transform 10 million raw polys/sec. The PowerVR CLX has a triangle setup limit of 7 million polys/sec. Looking at those screens, at any given moment you only have a couple hundred models visible. The rest are so far away that you could just use 2D sprites, like lb said. Of course the DC won't be able to light the scene at that quality level, but the rest is pretty trivial. There aren't that many particles from what I can see. For 300 characters each made up of 100 polygons, and factoring in LOD, that's only 30K polys/frame.
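
Running those numbers as a quick sanity check (assuming a 60fps target; a trivial sketch):

```cpp
#include <cstdio>

int main() {
    const int  characters   = 300;   // visible models per frame (estimate above)
    const int  polysPerChar = 100;   // post-LOD polygon count (estimate above)
    const int  fps          = 60;    // assumed target frame rate

    const int  polysPerFrame = characters * polysPerChar;   // 30,000
    const long polysPerSec   = (long)polysPerFrame * fps;   // 1,800,000

    std::printf("%d polys/frame, %ld polys/sec\n", polysPerFrame, polysPerSec);
    return 0;  // well under the CLX's quoted 7M polys/sec setup limit
}
```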
 
Ty:
How do you determine what can be seen before you even determine where it's at?
Generalized tests can, in some cases, check whether an object even has a chance of being visible based on its previous state or position, so transforming it to find out specifically where or what it is might not even be necessary.
But how is the transform power of the DC?
The issue was really more about the DC handling scenes with many objects in general than about that particular scene, of which no specifics are actually known. A Dreamcast would of course have to be tasked with lower, DC-level complexities in certain areas of performance.

PC-Engine:
SH-4 can transform 10 million raw polys/sec.
Its peak performance is thought to possibly be even higher, maybe around 12M.
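
As a toy illustration of that kind of state-based pre-test (names and logic purely hypothetical, just one way such a check could look): if you know how far outside the view volume an object was last frame, and can bound how fast it moves relative to the camera, you can sometimes skip even the bounding-volume test:

```cpp
struct CullState {
    float lastDistOutside;   // distance outside the frustum last frame (world units)
    float maxRelativeSpeed;  // conservative bound on object + camera speed
};

// If the object was so far outside the view volume last frame that it cannot
// possibly have re-entered it since, skip the transform and the frustum test
// entirely for this frame.
bool definitelyStillInvisible(const CullState& s, float dt) {
    return s.lastDistOutside > s.maxRelativeSpeed * dt;
}
```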
 
I think it's quite impressive, regardless of what some of the more vocal Sony haters say. Sure, there are some repeating textures, most models are the same, and it uses quite aggressive LoD, but the overall feel of the screenshots more than makes up for it. They look nearly next-gen at first glance, especially the first one.
 
Is it supposed to be funny? ... There's about 4 generations' difference in those 3 pics.

Considering that, it's not very funny either, is it? The difference in graphics should be way higher... but still, it's not.
 
dskneo said:
Is it supposed to be funny? ... There's about 4 generations' difference in those 3 pics.

Considering that, it's not very funny either, is it? The difference in graphics should be way higher... but still, it's not.

... yes, it should be WAY higher (think of the PS2's 32MB and 6-year-old HW).
 
PC-Engine said:
Why all the blurring? Is it to hide ugly repeating textures? Anybody got a video so we can see what the animation is like? I bet the LOD system shrinks the characters to like 10 polygons when the camera zooms out. :LOL:;)

Fun hearing all the SONY fanboys wetting their panties over it though. :LOL:

It's fun that on the PS2 we're getting a taste of something thought to be only possible on next-gen consoles. :)
 
Lazy8s said:
Ty:

Generalized tests can, in some cases, check whether an object even has a chance of being visible based on its previous state or position, so transforming it to find out specifically where or what it is might not even be necessary.
That sounded too generalized ;) and that matter is far too complicated to allow for such generalizations. Yes, generally you can do relatively cheap view-volume tests at the object level [ed: or any level, for that matter], but when it comes to scene-wide occlusion, things are far from trivial. Tons of early-occlusion algorithms have been developed throughout the years, none of them too successful in the general case (tm), to the point where in some cases it is still more efficient to throw everything (front-to-back) at the GPU and let it sort it out through its per-pixel occlusion techniques, in which case transformation is unavoidable (a rough sketch of that submission ordering follows below). Unfortunately, a field of soldiers would be an example of the bad general case.

PC-Engine:

Its peak performance is thought to possibly be even higher, maybe around 12M.
Mind you, that'd be vertices, not triangles. And you don't really want to get that close to the theoretical peak; you never know whether your words might reach the ears of the publishers ;)
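
For illustration, the "throw everything front-to-back at the GPU" approach mentioned above usually comes down to a per-object depth sort before submission; a rough sketch with hypothetical types:

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical renderable: all we need here is a world-space center.
struct Object { Vec3 worldCenter; /* mesh data, etc. */ };

// Sort objects nearest-first by view-space depth before submission, so the
// GPU's per-pixel occlusion (the z-test, or the tiler's own sort) rejects
// hidden fragments as early as possible. 'eye' is the camera position and
// 'viewDir' its normalized forward vector.
void sortFrontToBack(std::vector<Object>& objs, Vec3 eye, Vec3 viewDir) {
    auto depth = [&](const Object& o) {
        Vec3 d = { o.worldCenter.x - eye.x,
                   o.worldCenter.y - eye.y,
                   o.worldCenter.z - eye.z };
        return d.x * viewDir.x + d.y * viewDir.y + d.z * viewDir.z;
    };
    std::sort(objs.begin(), objs.end(),
              [&](const Object& a, const Object& b) { return depth(a) < depth(b); });
}
```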
 