Questions about PS2

The N64's RDP doesn't run programs either - its 'programmability' is by register, like the GS.
Meanwhile, the N64's RSP (the programmable half of the RCP) runs transform/lighting microcode, like the VUs.
(I guess there's a distinction if you care where chip boundaries lie - but it's no different from the system/game perspective.)
Interesting!

RSP programming was unsupported, difficult, and strongly discouraged by Nintendo.
Wasn't it supported in later games?
 
You can read all about this stuff here: http://hwdocs.webs.com/ps2
Thank you for the explanation and the link. I already knew about that link, but for now I just don't understand a lot of those documents. :cry:

There is no magic. The EDRAM is just a canvas to paint on and copy-paste from. You don't erase anything. You just paint over it. It's like asking "In MS Paint, can you erase part of your painting so that you can use that place for something different?" There is no organization to the pixels in MS Paint. You just put stuff wherever you feel like it and you can make whatever mess you want.
Great, that explains a lot. But tell me one more thing: at the start of frame rendering, can EDRAM be used this way:
1 MB for the back buffer, 1 MB for the Z buffer, and 2 MB for textures? And only then, when the frame is ready, write the front buffer to EDRAM?

There is no magic pixel pipe mode. There are only triangles and sprites. The GS doesn't have a magic post-processing feature, so you have to make do with the tools it does have. It does have sprites. Sprites have the same texture and blend options as triangles. They're just screen-aligned quads. So, you can make a sprite the size of a whole screen that reads a texture the size of a whole screen. Bam! You made a post-processing mode!
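To make that concrete, here is a minimal sketch of the idea in C. gs_set_texture() and gs_sprite() are made-up stand-ins for the real GIF-packet building; the point is the data flow, not the API:

```c
#include <stdint.h>

#define SCREEN_W 640
#define SCREEN_H 448

/* Hypothetical helpers standing in for real GIF-packet building:
 * point the GS texture registers at an EDRAM base address... */
void gs_set_texture(uint32_t edram_base, int w, int h);
/* ...and emit one screen-aligned sprite: two XY corners, two UV corners. */
void gs_sprite(int x0, int y0, int x1, int y1,
               int u0, int v0, int u1, int v1);

/* A "post-processing pass": the screen itself becomes the source texture,
 * and one full-screen sprite redraws it with whatever blend/texture
 * settings the effect needs. */
void fullscreen_pass(uint32_t backbuffer_base)
{
    gs_set_texture(backbuffer_base, SCREEN_W, SCREEN_H);
    gs_sprite(0, 0, SCREEN_W, SCREEN_H,
              0, 0, SCREEN_W, SCREEN_H);
}
```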
Great! But how do you think effects like water splashes are done, not on a water surface, but across the whole screen?

The only magic is that someone figured out the layout of the internal caches in EDRAM. And, by drawing sprites that are a bunch of tall columns instead of one big sprite, you could line up with the caches really well. That way a row of tall sprites would be faster than one huge sprite.
You mean a fullscreen post-processing effect that's regularly done with one big sprite was instead done with that same sprite split into many?
 
Interesting!


Wasn't it supported in later games?
I know Conker used it heavily.

I remember Nintendo noting that, while they made the GameCube as efficient and easy to use as possible, Sony doubled down on the N64's complex design. The PS2 could be thought of as an N64 2.0 plus eDRAM.
 
Wasn't it supported in later games?
Sorry, I didn't mean it was completely disallowed (although it was at first). I meant that from the developer's perspective, there was basically no support. Nintendo just dumped their internal tools and docs on the developer, then walked away. The tools and docs were not up to devkit standards - there wasn't even a debugger, and there were no tutorials, getting-started guides, FAQs, or internal forums. The stories of the developers who did ship new microcode are impressive man-against-nature tales!
 
I know Conker used it heavily.

I remember Nintendo noting that, while they made the GameCube as efficient and easy to use as possible, Sony doubled down on the N64's complex design. The PS2 could be thought of as an N64 2.0 plus eDRAM.

Virtually all N64 games (maybe excluding Midway's Greatest Arcade Hits) used the RSP. The distinction is that most developers only ran standard Nintendo/SGI library code (called "microcode") on it.
 
Great, that explains a lot. But tell me one more thing: at the start of frame rendering, can EDRAM be used this way: 1 MB for the back buffer, 1 MB for the Z buffer, and 2 MB for textures? And only then, when the frame is ready, write the front buffer to EDRAM?

The front buffer, back buffer and Z buffer would all be 640x448x4 bytes = about 1 meg each. You need the front buffer to sit in EDRAM until the video output hardware is done scanning out the image over the video cable to the TV. That process takes most of 1/60th of a second. If you are running 30fps, the video out will have to scan out your front buffer twice in a row while it waits for you to finish drawing the back buffer. So, there's no way to avoid having a front buffer sitting around in EDRAM.

There was one trick you could do: Before progressive-scan HDTVs, traditional TVs were interlaced-scan displays. They only updated either the even or the odd scanlines every 1/60th of a second. So, if your game was a rock-solid 60fps, you could get away with having 640x224 front, back, and depth buffers because that's all the TV needed anyway. But, if you stutter for a frame, the single half-height buffer will go to both the even and odd scanlines and your game will look half-rez until the frame rate gets back up to 60.
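Quick back-of-envelope on those numbers (my arithmetic, assuming 32-bit color and 32-bit depth):

```c
#include <stdio.h>

int main(void)
{
    const int full  = 640 * 448 * 4;      /* one full-height 32-bit buffer */
    const int half  = 640 * 224 * 4;      /* field-rendering variant       */
    const int edram = 4 * 1024 * 1024;    /* total GS EDRAM                */

    printf("full buffer:  %d bytes (~%.2f MB)\n", full, full / 1048576.0);
    printf("front+back+Z: ~%.2f MB, leaving ~%.2f MB for textures\n",
           3 * full / 1048576.0, (edram - 3 * full) / 1048576.0);
    printf("half-height:  ~%.2f MB, leaving ~%.2f MB for textures\n",
           3 * half / 1048576.0, (edram - 3 * half) / 1048576.0);
    return 0;
}
```

Note there is well under 1 MB left for textures at full height, which is also why the 2-MB-for-textures layout in the question doesn't quite work once the front buffer has to stay resident.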

Great! But how do you think effects like water splashes are done, not on a water surface, but across the whole screen?

I'm not sure what you are describing. In the end, everything comes down to triangles and quads. Water splash particles sound like particle systems made of triangles.

When working on a PS2 post-processing system, I did put in a feature where you could write a function to warp the 2D UV coordinates for a regular grid of triangles. I'd then draw to the temp buffer using the screen as the texture and the warped UVs. That way you could do a full-screen 2D warp in a post-processing pass. You could do simple stuff like wavy or swirly distortions similar to some Photoshop filters. It wasn't per-pixel accurate. The 2D grid was 16x16 pixels per quad and UVs were only warped on the quad corner vertices.
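A sketch of what that could look like, with a made-up wavy warp function. emit_quad() is a hypothetical stand-in for the real quad emission; the 16x16 cell size is from the post above:

```c
#include <math.h>

#define SCREEN_W 640
#define SCREEN_H 448
#define CELL      16   /* pixels per grid quad, as described above */

typedef struct { float u, v; } UV;

/* Made-up example warp: a horizontal sine ripple, Photoshop-wave style. */
static UV warp(float u, float v, float time)
{
    UV out = { u + 4.0f * sinf(v * 0.05f + time), v };
    return out;
}

/* Hypothetical stand-in for emitting one textured quad to the GS. */
void emit_quad(int x0, int y0, int x1, int y1,
               UV uv00, UV uv10, UV uv01, UV uv11);

/* Screen-space corners stay on the regular grid; only the UVs move.
 * Drawn with the screen as the source texture, this gives a full-screen
 * 2D distortion that is only accurate at the quad corners. */
void draw_warp_grid(float time)
{
    for (int y = 0; y < SCREEN_H; y += CELL)
        for (int x = 0; x < SCREEN_W; x += CELL)
            emit_quad(x, y, x + CELL, y + CELL,
                      warp(x,        y,        time),
                      warp(x + CELL, y,        time),
                      warp(x,        y + CELL, time),
                      warp(x + CELL, y + CELL, time));
}
```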

You mean a fullscreen post-processing effect that's regularly done with one big sprite was instead done with that same sprite split into many?

One big sprite would work. But, it turned out that a row of tall sprites was actually faster because of how the read-write caches worked.
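As a sketch (gs_sprite() is again a hypothetical stand-in, and the 32-pixel column width is a guess; the real sweet spot depends on the cache geometry):

```c
#define SCREEN_W 640
#define SCREEN_H 448
#define COLUMN_W  32   /* guess; tune to match the EDRAM cache layout */

/* Hypothetical stand-in for emitting one screen-aligned textured sprite. */
void gs_sprite(int x0, int y0, int x1, int y1,
               int u0, int v0, int u1, int v1);

/* Instead of one 640-wide sprite, a row of tall columns. Each column
 * reads and writes a narrow vertical strip, which lines up much better
 * with the internal read/write caches. */
void fullscreen_pass_columns(void)
{
    for (int x = 0; x < SCREEN_W; x += COLUMN_W)
        gs_sprite(x, 0, x + COLUMN_W, SCREEN_H,
                  x, 0, x + COLUMN_W, SCREEN_H);
}
```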
 
So, if your game was a rock-solid 60fps, you could get away with having 640x224 front, back, and depth buffers because that's all the TV needed anyway. But, if you stutter for a frame, the single half-height buffer will go to both the even and odd scanlines and your game will look half-rez until the frame rate gets back up to 60.
I believe it was used in DOA2 and Sly Cooper 2 and 3. They did not rework it even in the PS3 port.
 
That is exactly what I wanted to know! So all buffers should be in EDRAM all the time, except the Z buffer, which can be replaced with a temp buffer when it isn't needed anymore? Right?
Next question: I remember someone said that in Jak 3 there were 250k polygons per frame. But how can that be possible if there are only ~280k pixels on screen? Polygons can't be that small.
 
Next question: I remember someone said that in Jak 3 there were 250k polygons per frame. But how can that be possible if there are only ~280k pixels on screen? Polygons can't be that small.

There are a lot of things to consider:

1) A bunch of polygons are drawn on top of each other (overdraw). The amount varies depending on the game and scene, but it's common for one location on the screen to cover 3-4 polygons on average. So the actual number of pixels being drawn could be several times the screen resolution.
2) Particles probably count as polygons, can be as small as one pixel each, and can have a ton of overdraw.
3) Some polygons really may end up being smaller than a pixel, although with PS2 level technology it's probably best if they can be rejected early in the pipeline.
4) That number might include polygons that are not displayed because they're facing away from the camera (and are therefore known to be occluded by what's on the other side).
5) That number might include polygons that are not actually in the screen viewing area.

Polygon figures tend to be kind of ambiguous; it's hard to know what they really mean...
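A quick back-of-envelope combining those factors: 640x448 is about 287k pixels, and with 3-4x average overdraw that is roughly 1M pixel-draws per frame. Spread over 250k polygons, that is only ~4 pixels per polygon on average, which is entirely plausible once particles and multi-pass effects are counted.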
 
That is exactly what I wanted to know! So all buffers should be in EDRAM all the time, except the Z buffer, which can be replaced with a temp buffer when it isn't needed anymore? Right?

Yep. You only need the Z buffer while you are drawing 3D stuff. Outside of that time, you can reuse that space for whatever.
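As a concrete illustration (the base addresses are made up, but 640x448x4 happens to be exactly 140 of the GS's 8 KB EDRAM pages):

```c
#define PAGE        8192                     /* one GS EDRAM page (8 KB)   */
#define BUF_PAGES   (640 * 448 * 4 / PAGE)   /* = 140 pages per buffer     */

#define FRONT_BASE  0                        /* page indices, made up      */
#define BACK_BASE   (FRONT_BASE + BUF_PAGES)
#define Z_BASE      (BACK_BASE  + BUF_PAGES)
#define TEMP_BASE   Z_BASE   /* post-processing: reuse the Z pages as a
                                temp buffer once 3D drawing is finished */
```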

Next question: I remember someone said that in Jak 3 there were 250k polygons per frame. But how can that be possible if there are only ~280k pixels on screen? Polygons can't be that small.

Exophase answered this as well as anything I had in mind.
 
I remember Corysama said that on the PS2, polygons are rasterized one at a time. I have some questions about that.
1) Is a polygon rasterized and textured all in one go, or 8 pixels per clock (because there are 8 pixel pipelines that can texture)?
2) After rasterization, the polygon is written to the back buffer, but is it also written to the Z buffer?
3) If, as Corysama said, a polygon is not only rasterized but also textured, does that mean the textures are written to EDRAM before the display lists are sent?
4) If a polygon needs, let's say, two passes: after it is rasterized, textured, and written to the back buffer, VU1 sends that polygon to the GS again. Does the GS need to read the polygon that is already in the back buffer and blend it with the same polygon's second pass?
 
Thank you for the link, but doesn't the PS2 also have some things specific to it?
Also, next question: does the PS2 have some kind of upscaler? How is progressive scan done in PS2 games? Also, how is 1080p done in GT4?
 
I remember Corysama said that on the PS2, polygons are rasterized one at a time. I have some questions about that.
1) Is a polygon rasterized and textured all in one go, or 8 pixels per clock (because there are 8 pixel pipelines that can texture)?
2) After rasterization, the polygon is written to the back buffer, but is it also written to the Z buffer?
3) If, as Corysama said, a polygon is not only rasterized but also textured, does that mean the textures are written to EDRAM before the display lists are sent?
4) If a polygon needs, let's say, two passes: after it is rasterized, textured, and written to the back buffer, VU1 sends that polygon to the GS again. Does the GS need to read the polygon that is already in the back buffer and blend it with the same polygon's second pass?

First you would set the size and location of the frame buffer and Z buffer. Then, you would make sure the textures are in EDRAM before you use them. EDRAM would look like the RAM dump I posted earlier, with the textures, palettes and screen buffers all in EDRAM at the same time. Now you are ready to draw something. Point at the texture to use. Draw some polygons using that texture.

When drawing a single triangle, the GS would figure out which pixels are covered by the triangle in 8-pixel blocks. It would simultaneously figure out which texels were needed for that block of the triangle, read both the framebuffer pixels and the texture texels, blend them according to the blend settings, and write the result back to the frame buffer. It would also write to the depth buffer at the same time. Then it would move on to the next 8-pixel block, and so on until it finished the triangle.

This all happens all at once during a single draw-triangle command. From the point of view of the GS interface, you say "Draw Triangle" and all of this finishes before the next Draw Triangle command is even looked at.
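Here is the description above as illustrative pseudocode. Every helper is a made-up stand-in; the real GS does all of this in fixed-function hardware:

```c
#include <stdint.h>

typedef struct Triangle Triangle;       /* coords, UVs, colors, ...       */
typedef struct { int x, y; } Block;     /* one 8-pixel block              */

/* Hypothetical stand-ins for the hardware stages: */
int  next_covered_block(const Triangle *t, Block *b);
void fetch_texels(const Triangle *t, const Block *b, uint32_t texels[8]);
void read_pixels (const Block *b, uint32_t color[8], uint32_t depth[8]);
void write_pixels(const Block *b, const uint32_t color[8],
                  const uint32_t depth[8]);
int  covered(const Triangle *t, const Block *b, int i);
int  depth_test(const Triangle *t, const Block *b, int i, uint32_t old_z);
uint32_t blend(uint32_t texel, uint32_t dst);   /* per the blend settings */
uint32_t new_depth(const Triangle *t, const Block *b, int i);

void gs_draw_triangle(const Triangle *tri)
{
    Block blk;
    while (next_covered_block(tri, &blk)) {
        uint32_t texels[8], color[8], depth[8];
        fetch_texels(tri, &blk, texels);        /* texture read       */
        read_pixels(&blk, color, depth);        /* framebuffer + Z    */
        for (int i = 0; i < 8; i++) {
            if (!covered(tri, &blk, i) ||
                !depth_test(tri, &blk, i, depth[i]))
                continue;
            color[i] = blend(texels[i], color[i]);
            depth[i] = new_depth(tri, &blk, i);
        }
        write_pixels(&blk, color, depth);       /* back to EDRAM      */
    }
    /* Only now does the GS look at the next draw command. */
}
```

In terms of question 4: a second pass is just the same triangle drawn again, with the blend stage reading the first pass's result back out of the frame buffer.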

Also, next question: does the PS2 have some kind of upscaler? How is progressive scan done in PS2 games? Also, how is 1080p done in GT4?

It did not have an upscaler. I think if you gave it a 448-line image to display as NTSC 224, the video out could blend pairs of lines together to make NTSC's interlacing work out better.
I don't recall the details of how progressive scan was done, but it was complicated. It could do 480p and 1080i, but not 1080p. I don't recall what was involved in getting a 1080i image out of the GS.
 