Questions about PS2

Discussion in 'Console Technology' started by Liandry, Apr 7, 2016.

  1. Liandry

    Regular Newcomer

    Joined:
    Feb 26, 2011
    Messages:
    319
    Likes Received:
    37
    Interesting!

    Wasn't it supported in later games?
     
  2. Liandry

    Regular Newcomer

    Joined:
    Feb 26, 2011
    Messages:
    319
    Likes Received:
    37
    Yes, so exactly that process is done by the GS pixel pipelines?
     
  3. Liandry

    Regular Newcomer

    Joined:
    Feb 26, 2011
    Messages:
    319
    Likes Received:
    37
    Thank you for the explanation and the link. I already knew about that link, but for now I just don't understand a lot of those documents. :cry:

    Great, that explains a lot. But tell me one more thing: at the start of frame rendering, can EDRAM be used this way:
    1 MB for the back buffer, 1 MB for the Z buffer, and 2 MB for textures? And only then, when the frame is ready, write the front buffer to EDRAM?

    Great! But how do you think effects like water splashes are done when they cover the whole screen, not just the water?

    You mean a full-screen post-processing effect, usually done with one big sprite, was done with the same sprite split into many?
     
  4. phoenix_chipset

    Regular Newcomer

    Joined:
    Aug 26, 2016
    Messages:
    546
    Likes Received:
    246
    I know Conker used it heavily.

    I remember Nintendo noting that, while they made the GameCube to be as efficient and easy to use as possible, Sony doubled down on the N64's complex design. The PS2 could be thought of as an N64 2.0 plus eDRAM.
     
  5. Nammo

    Newcomer

    Joined:
    Dec 30, 2010
    Messages:
    13
    Likes Received:
    29
    Sorry, I didn't mean it was completely disallowed (although it was at first). I meant from the developer's perspective, there was basically no support. Nintendo just dumped their internal tools and docs on the developer, then walked away. The tools and docs were not up to devkit standards - there wasn't even a debugger, no tutorials or getting started guides, FAQs or internal forums. The stories of the developers who did ship new microcode are impressive man-against-nature tales!
     
    #445 Nammo, Feb 9, 2017
    Last edited: Feb 9, 2017
  6. Exophase

    Veteran

    Joined:
    Mar 25, 2010
    Messages:
    2,406
    Likes Received:
    429
    Location:
    Cleveland, OH
    Virtually all N64 games (maybe excluding Midway's Greatest Arcade Hits) used the RSP. The distinction is that most developers only ran standard Nintendo/SGI library code (called "microcode") on them.
     
    milk likes this.
  7. corysama

    Newcomer

    Joined:
    Jul 10, 2004
    Messages:
    174
    Likes Received:
    117
    The front buffer, back buffer and Z buffer would all be 640x448x4bytes = about 1 meg each. You need the front buffer to sit in EDRAM until the video output hardware is done scanning out the image over the video cable to the TV. That process takes most of 1/60th of a second. If you are running 30fps, the video out will have to scan out your front buffer twice in a row while it waits for you to finish drawing the back buffer. So, there's no way to avoid having a front buffer sitting around in EDRAM.

    There was one trick you could do: Before progressive-scan HDTVs, traditional TVs were interlaced-scan displays. They only updated either the even or the odd scanlines every 1/60th of a second. So, if your game was a rock-solid 60fps, you could get away with having a 640x224 front, back, depth buffer because that's all the TV needed anyway. But, if you stutter for a frame, the single half-height buffer will go to both the even and odd scanlines and your game will look half-rez until the frame rate gets back up to 60.
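    To make the arithmetic above concrete, here is a small sketch of the EDRAM budget (the function and variable names are mine, and 4 bytes/pixel assumes a 32-bit colour/depth format):

    ```python
    # EDRAM budget sketch for a 640x448, 32-bit setup (layout is illustrative)
    EDRAM_BYTES = 4 * 1024 * 1024  # 4 MB of GS embedded DRAM

    def buffer_bytes(width, height, bytes_per_pixel=4):
        """Size of one screen buffer in bytes."""
        return width * height * bytes_per_pixel

    front = buffer_bytes(640, 448)   # front buffer being scanned out
    back  = buffer_bytes(640, 448)   # back buffer being drawn
    depth = buffer_bytes(640, 448)   # Z buffer

    used = front + back + depth
    print(used / (1024 * 1024))          # 3.28125 MB: "about 1 meg each"
    print((EDRAM_BYTES - used) // 1024)  # 736 KB left over for textures
    ```

    With the half-height 640x224 interlacing trick the same three buffers take only about half that, which is why it freed up so much texture space.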

    I'm not sure what you are describing. In the end, everything comes down to triangles and quads. Water splash particles sound like particle systems made of triangles.

    When working on a PS2 post-processing system, I did put in a feature where you could write a function to warp the 2D UV coordinates for a regular grid of triangles. I'd then draw to the temp buffer using the screen as the texture and the warped UVs. That way you could do a full-screen 2D warp in a post-processing pass. You could do simple stuff like wavy or swirly distortions similar to some Photoshop filters. It wasn't per-pixel accurate. The 2D grid was 16x16 pixels per quad and UVs were only warped on the quad corner vertices.

    One big sprite would work. But, it turned out that a row of tall sprites was actually faster because of how the read-write caches worked.
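    A minimal sketch of the grid-warp idea described above (function names and the example warp are my invention; the real system emitted GS primitives, not Python): the screen is covered by a regular grid of quads, and only the quad-corner UVs are perturbed before the screen texture is redrawn through them.

    ```python
    import math

    QUAD = 16  # pixels per grid quad, as in the post

    def warped_grid_uvs(width, height, warp):
        """Return per-corner (u, v) for a regular quad grid, warped by warp(u, v).
        Pixels inside each quad just interpolate the corner UVs, which is why
        the effect was not per-pixel accurate."""
        cols = width // QUAD
        rows = height // QUAD
        grid = []
        for j in range(rows + 1):
            row = []
            for i in range(cols + 1):
                u, v = i * QUAD, j * QUAD   # un-warped UV = screen position
                row.append(warp(u, v))
            grid.append(row)
        return grid

    # Example: a wavy horizontal distortion, like a simple Photoshop-style filter
    wavy = lambda u, v: (u + 4.0 * math.sin(v * 0.05), v)
    uvs = warped_grid_uvs(640, 448, wavy)
    ```

    Drawing the back buffer as a texture through these warped UVs then gives the full-screen distortion in one post-processing pass.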
     
    milk likes this.
  8. Rikimaru

    Veteran Newcomer

    Joined:
    Mar 18, 2015
    Messages:
    1,014
    Likes Received:
    395
    I believe it was used in DOA2 and Sly Cooper 2 and 3. They didn't rework it even for the PS3 port.
     
  9. Liandry

    Regular Newcomer

    Joined:
    Feb 26, 2011
    Messages:
    319
    Likes Received:
    37
    That is exactly what I wanted to know! So all buffers should be in EDRAM all the time, except the Z buffer, which can be replaced with a temp buffer once it's no longer needed? Right?
    Next question: I remember someone said that Jak 3 had 250k polygons per frame. But how can that be possible if there are only ~280k pixels on screen? Polygons can't be that small.
     
  10. ProspectorPete

    Regular Newcomer

    Joined:
    Feb 1, 2017
    Messages:
    414
    Likes Received:
    137
    GT4 had it as well
     
  11. Exophase

    Veteran

    Joined:
    Mar 25, 2010
    Messages:
    2,406
    Likes Received:
    429
    Location:
    Cleveland, OH
    There's a lot of things to consider:

    1) A bunch of polygons are drawn on top of each other (overdraw). The amount varies depending on the game and scene, but it's common for one location on the screen to cover 3-4 polygons on average. So the actual number of pixels being drawn could be several times the screen resolution.
    2) Particles probably count as polygons, can be as small as a single pixel each, and can have a ton of overdraw.
    3) Some polygons really may end up being smaller than a pixel, although with PS2 level technology it's probably best if they can be rejected early in the pipeline.
    4) That number might include polygons that are not displayed because they're facing away from the camera (and are therefore known to be occluded by what's on the other side)
    5) That number might include polygons that are not actually in the screen viewing area.

    Polygon figures tend to be kind of ambiguous; it's hard to know what they really mean...
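    As a rough sanity check on the points above (all the factors below are made-up examples, not measurements from any real game): overdraw plus culled polygons easily reconcile a 250k polygons/frame figure with a ~280k-pixel screen.

    ```python
    # Illustrative back-of-envelope: how a large per-frame polygon count can
    # coexist with a ~280k-pixel screen (every factor here is invented).
    screen_pixels = 640 * 448          # ~287k pixels
    overdraw      = 3.0                # avg layers drawn per screen location
    avg_poly_area = 8                  # pixels shaded per on-screen polygon
    culled_share  = 0.4                # fraction back-facing or off-screen

    shaded_pixels = screen_pixels * overdraw            # pixels actually written
    visible_polys = shaded_pixels / avg_poly_area       # polygons shading pixels
    counted_polys = visible_polys / (1 - culled_share)  # incl. culled polygons

    print(int(counted_polys))  # well over 100k even with these modest numbers
    ```

    Push the overdraw up or the average polygon size down a little and the 250k figure is easily reached.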
     
    Liandry, milk and corysama like this.
  12. corysama

    Newcomer

    Joined:
    Jul 10, 2004
    Messages:
    174
    Likes Received:
    117
    Yep. You only need the Z buffer while you are drawing 3D stuff. Outside of that time, you can reuse that space for whatever.

    Exophase answered this as well as anything I had in mind.
     
    Liandry likes this.
  13. Liandry

    Regular Newcomer

    Joined:
    Feb 26, 2011
    Messages:
    319
    Likes Received:
    37
    I remember corysama said that on the PS2, polygons are rasterised one at a time. I have some questions about that.
    1) Is the whole polygon rasterised and textured in one go, or 8 pixels per clock (since there are 8 pixel pipelines that can texture)?
    2) After rasterisation, the polygon is written to the back buffer, but is it also written to the Z buffer?
    3) If, as corysama said, the polygon is not only rasterised but also textured, does that mean the textures are written to EDRAM before the display lists are sent?
    4) If a polygon needs, let's say, two passes: after it is rasterised, textured, and written to the back buffer, VU1 sends it to the GS again. Does the GS then need to read that polygon back from the back buffer and blend it with the second pass of the same polygon?
     
  14. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    2,894
    Likes Received:
    2,431
  15. Liandry

    Regular Newcomer

    Joined:
    Feb 26, 2011
    Messages:
    319
    Likes Received:
    37
    Thank you for the link, but doesn't the PS2 also have some specific things?
    Also, next question: does the PS2 have some kind of upscaler? How is progressive resolution done in PS2 games? Also, how is 1080p done in GT4?
     
  16. dogen

    Regular Newcomer

    Joined:
    Oct 27, 2014
    Messages:
    335
    Likes Received:
    259
    Minor correction, but the PS2 actually had 16.
     
    Liandry likes this.
  17. Liandry

    Regular Newcomer

    Joined:
    Feb 26, 2011
    Messages:
    319
    Likes Received:
    37
    Yes, but only 8 can do texturing.
     
  18. dogen

    Regular Newcomer

    Joined:
    Oct 27, 2014
    Messages:
    335
    Likes Received:
    259
    That's what I get when I post first thing in the morning.
     
  19. corysama

    Newcomer

    Joined:
    Jul 10, 2004
    Messages:
    174
    Likes Received:
    117
    First you would set the size and location of the front buffer and Z buffer. Then, you would make sure the textures are in EDRAM before you use them. EDRAM would look like the RAM dump I posted earlier, with the textures, palettes and screen buffers all in EDRAM at the same time. Now you are ready to draw something. Point at the texture to use. Draw some polygons using that texture.

    When drawing a single triangle, the GS would figure out what pixels are covered by the triangle in 8-pixel blocks. It would simultaneously figure out what texels are needed for that block of the triangle, it would read from both the framebuffer pixels and the texture texels, blend them according to the blend setting and write them back to the frame buffer. It would also write to the depth buffer at the same time. Then it would move on to the next 8-pixel block. And, so on until it finished the triangle.

    This all happens during a single draw-triangle command. From the point of view of the GS interface, you say "Draw Triangle" and all of this finishes before the next Draw Triangle command is even looked at.
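    The per-block flow described above can be sketched as a small software model (the data layout and helper names here are stand-ins for fixed-function GS hardware, not a real interface): for each 8-pixel block the triangle covers, the texture fetch, blend, framebuffer write, and Z write all happen in one pass.

    ```python
    # Minimal software model of the GS inner loop described above.
    def blocks_of_8(pixels):
        """Group a triangle's covered pixels into 8-pixel blocks."""
        for i in range(0, len(pixels), 8):
            yield pixels[i:i + 8]

    def draw_triangle(covered, frame, depth, texture, blend):
        """covered = [(pixel_index, z, texel_index), ...] for one triangle."""
        for block in blocks_of_8(covered):
            for px, z, ti in block:
                if z >= depth[px]:            # fail depth test: skip this pixel
                    continue
                texel = texture[ti]           # texel fetched for this pixel
                frame[px] = blend(frame[px], texel)  # read-modify-write colour
                depth[px] = z                 # Z written in the same pass

    # Usage: one triangle covering 3 pixels, additive blend
    frame = [0] * 16
    depth = [1.0] * 16
    tex = [10, 20, 30]
    draw_triangle([(0, 0.5, 0), (1, 0.5, 1), (2, 0.5, 2)],
                  frame, depth, tex, blend=lambda dst, src: dst + src)
    ```

    The second pass of a multi-pass polygon is just another `draw_triangle` over the same pixels with a different blend, reading back what the first pass wrote, which matches question 4 above.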

    It did not have an upscaler. I think if you gave it a 448 image to display as NTSC 224, the video out could blend pairs of lines together to make NTSC's interlacing work out better.
    I don't recall the details of how progressive scan was done, but it was complicated. It could do 480p and 1080i, but not 1080p. I don't recall what was involved in getting a 1080i image out of the GS.
     
    Liandry likes this.
  20. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,492
    Likes Received:
    10,855
    Location:
    Under my bridge
    GT4 wasn't 1080p - it was 1080i. They interlaced full-res (640x540 ish?) framebuffers. Horizontal res was just stretched.
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.