At the time I remember interviews of devs saying that for Jak and Daxter 2 the game was peaking at around 15 million polygons per second, and for Burnout, 13 million.

For real? That would be interesting to confirm.
Btw isn't the PS2 supposed to render the polygons multiple times for certain effects on screen? I.e., each pass required rendering the polygons again, like in the case of Gran Turismo, where it had to output environment maps etc.
Which kind of drained the polygon budget on screen, something that was not a problem on XBOX or GC.
I wonder if this was the case also for DC.
But now I wonder if the polygons per second in the PS2 emulator are overestimated because of this. Or maybe the culling/clipping is supposed to free those resources. Didn't the DC and the other consoles also support that kind of culling?
So the PS2 was probably outputting a lot more polygons on screen than the DC did, but a lot fewer rendered at any given time.
This is nuts. The DC might have competed much better if it had used clipping in a similar way.
Those are a lot of polygons needlessly rendered compared to the PS2 versions. I'd like to know how the GC and XBOX handled it.
The recent posts brought a whole new perspective to the discussion. I'm delighted.
So maybe, if we took the assets in isolation, there is a chance the PS2 models had a lot more polygons thanks to the clipping and culling, whereas developers on the DC had to adapt the polycounts of their models downwards to keep the framerate smooth, knowing a lot more polygons would be rendered per frame. Considering how well DC games like Sonic Adventure 2, DOA2, and Shenmue fared, I can only imagine how much better results it would have achieved with clipping.
Article 2:
- Triangles off the screen— To test for this condition, apply view frustum elimination. That is, test every triangle, primitive, or object against the viewing frustum pyramid, and then eliminate the triangle, primitive, or object if it is outside the viewing frustum pyramid. This test generally eliminates a lot of triangles by using only a few tests.
- Triangles not facing the screen— To test for this condition, apply backface culling. That is, test every triangle or group of triangles to see if it faces the screen, and eliminate the geometry that is not facing the screen, such as the back of a person's head. This test generally eliminates 10-50 percent of the geometry, but the cost and overhead may be huge. The efficiency depends on the geometry; the more strips you find, the better.
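The two tests the article describes can be sketched in a few lines. This is a minimal illustration, assuming triangles are given as three (x, y, z) vertices in camera space with the camera at the origin looking down -z; the function names and the plane representation are mine, not from any SDK.

```python
# Sketch of view frustum elimination and backface culling for one triangle.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def outside_frustum(tri, planes):
    """Frustum elimination: reject the triangle if all three vertices lie
    on the outside of the same frustum plane. A plane is (normal, d),
    with a point inside when dot(normal, v) + d >= 0."""
    for normal, d in planes:
        if all(dot(normal, v) + d < 0 for v in tri):
            return True
    return False

def backfacing(tri):
    """Backface culling: with counter-clockwise winding, the face normal
    points toward the camera; cull when it points away instead."""
    a, b, c = tri
    normal = cross(sub(b, a), sub(c, a))
    # Camera is at the origin, so `a` doubles as the view vector to the face.
    return dot(normal, a) >= 0
```

The frustum test is cheap (a handful of dot products per plane), which matches the article's point that it eliminates a lot of triangles with only a few tests; the backface test is per-triangle, which is why its overhead can be large.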
The Dreamcast system does not do any z-clipping on D3DTLVERTEX vertices. The application is expected to clip meshes so that they do not intersect the front clipping plane. Culling against the back clipping plane will not be performed by the system, either. If the vertex type is D3DTLVERTEX, the D3DDDP_DONOTCLIP flag is implied even when it is not explicitly set. Meshes are still clipped by the hardware to the left, right, top, and bottom bounds of the current viewport or the screen. Back-face culling works for D3DTLVERTEX meshes as specified by the current render state settings.
Something with culling?

No, multipass rendering. The PS2 would render the same objects multiple times, each time adding a different texture or surface property. If you have a scene with specular reflections everywhere, every object with specular reflections is being drawn at least twice.
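The multipass cost can be sketched as a toy count, which also ties back to the question of whether per-second polygon figures are inflated. The object list and pass counts below are made-up example numbers, assuming each extra surface property (e.g. a specular or environment-map pass) resubmits the object's full geometry.

```python
# Toy illustration: multipass rendering inflates the triangle count
# actually submitted to the rasterizer relative to the scene's polycount.

def triangles_submitted(objects):
    """Total triangles drawn per frame: each object is drawn once per pass.
    `objects` is a list of (triangle_count, pass_count) pairs."""
    return sum(tris * passes for tris, passes in objects)

# (triangle count, number of passes) per object -- invented numbers
scene = [
    (3000, 1),  # diffuse-only scenery, single pass
    (1200, 2),  # car body: base pass + specular/env-map pass
    (800, 2),   # another reflective object
]

unique_tris = sum(tris for tris, _ in scene)   # 5000 triangles in the scene
submitted = triangles_submitted(scene)         # 7000 triangles actually drawn
```

In this made-up scene a "polygons per second" counter measured at the GPU would read 40% higher than the number of unique polygons modeled, without any extra visible detail.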
I hate being pedantic, but someone has to say it:
clipping =/= culling
Yup.
The way I used it: if an object is unseen and therefore removed, it's called culled, like backfacing polygons during backface culling; whereas if the geometry goes past the viewport, some of it has to be subdivided and only the part inside the viewport is drawn. I called that clipping.
I dunno, maybe that's been wrong.
What do you mean? You mean the polycount of assets is increased? Because polygons rendered per frame should be a lot less.
Yeah, but after the subdivision it's culled, no? Therefore it's clipped. For example, if a sphere is partially visible, it will subdivide the mesh along where the viewport edges are, right? Then the unseen part is culled, and they called that clipping. So it's culling, I guess, but done differently? That's different from just straight up doing visibility checks and not rendering objects, no? (Backface culling, occlusion culling.)

That's my understanding too. So most polys that don't show up in PS2 games were culled, not clipped. Clipping does not reduce polycounts, it increases them.
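The point that clipping increases rather than reduces polycounts is easy to see with a Sutherland-Hodgman-style clip against a single viewport edge: a triangle that pokes past the edge comes back as a four-sided polygon, i.e. one extra vertex (and one extra triangle once re-triangulated). This is a minimal 2D sketch, not how any particular console implemented it.

```python
def clip_against_edge(poly, inside, intersect):
    """One Sutherland-Hodgman step: clip a convex polygon (list of 2D
    points) against one boundary. `inside(p)` tests the keep side;
    `intersect(a, b)` returns where segment a-b crosses the boundary."""
    out = []
    for i, cur in enumerate(poly):
        prev = poly[i - 1]
        if inside(cur):
            if not inside(prev):
                out.append(intersect(prev, cur))  # entering the keep side
            out.append(cur)
        elif inside(prev):
            out.append(intersect(prev, cur))      # leaving the keep side
    return out

# Clip against the right viewport edge x = 1 (keep x <= 1).
def inside_right(p):
    return p[0] <= 1.0

def cross_right(a, b):
    t = (1.0 - a[0]) / (b[0] - a[0])
    return (1.0, a[1] + t * (b[1] - a[1]))

tri = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.0)]   # pokes past x = 1
quad = clip_against_edge(tri, inside_right, cross_right)
# The 3-vertex triangle comes back as a 4-vertex polygon.
```

Culling, by contrast, just drops the triangle whole, which is why it frees up throughput while clipping costs extra vertices.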
The PS2 was developed with the mid-'90s way of doing things in mind, perhaps before the hardware T&L/3D features came along. Just very fast.
Pretty much this. The EE and its two vector units were like the Cell BE of its era, though VU0 went heavily underutilized, and the rise of shader programmability in true T&L GPUs left the PS2 a bit wanting in terms of easily handled features.
I remember that the PowerVR GPU did "tile-based deferred rendering"; a quick search on Wikipedia to refresh my memory found that it does this with two proprietary methods: "Hidden Surface Removal" (HSR) and "Hierarchical Scheduling Technology" (HST).
The GPU automatically delays rendering as much as possible and tries to render polygons only when they become visible in the frame, so in practice the Dreamcast is rendering much less than what's in the scene.
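The deferred idea can be sketched as a toy per-pixel resolve: bin all the opaque fragments first, keep only the nearest surface per pixel, then shade once per pixel. The tile layout and fragment format here are invented for illustration; real PowerVR HSR works on binned geometry in on-chip tile memory, but the shading-count comparison is the point.

```python
# Toy model of deferred hidden surface removal within one tile,
# assuming opaque geometry only.

def resolve_tile(fragments, width, height, far=float("inf")):
    """fragments: list of (x, y, depth, color) covering the tile.
    Keeps only the nearest fragment per pixel, then compares how many
    shading operations an immediate-mode renderer would do (one per
    fragment) versus a deferred one (one per covered pixel)."""
    depth = [[far] * width for _ in range(height)]
    color = [[None] * width for _ in range(height)]
    for x, y, z, c in fragments:
        if z < depth[y][x]:        # keep only the nearest surface so far
            depth[y][x] = z
            color[y][x] = c
    immediate_shaded = len(fragments)
    deferred_shaded = sum(1 for row in color for c in row if c is not None)
    return color, immediate_shaded, deferred_shaded

# Three surfaces stacked on one pixel: immediate mode shades 3 times,
# the deferred resolve shades once, for the nearest surface only.
frags = [(0, 0, 9.0, "sky"), (0, 0, 5.0, "wall"), (0, 0, 2.0, "car")]
color, immediate, deferred = resolve_tile(frags, 1, 1)
```

So with heavy overdraw the Dreamcast pays fill-rate and shading cost only for what ends up visible, which is exactly why it "renders much less than what's in the scene."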