Questions about PS2

Due to the limited size of the GS eDRAM it's possible that they had to split the framebuffer into two pieces drawn separately, but from a graphics algorithm point of view I wouldn't call that "multiple passes."
PS2 had enough eDRAM for a 1280*1024@24bit display; that's what the PS2 Linux system used. Champions of Norrath and its follow-up(s) used 1280*480 res, then downscaled horizontally to 640*480, then used the PS2 line flicker filter to downsample vertically to 640*240 on video scan-out IIRC. So you had sort-of 4x SSAA in that game, as long as you did not use progressive VGA or component output to your TV/monitor. Pretty smart and unique. :)
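
As a quick sanity check (assuming the GS's 4 MB of eDRAM), the 1280*1024@24bit claim does work out arithmetically:

```python
# Does a 1280x1024 24-bit framebuffer fit in the GS's 4 MB eDRAM?
width, height, bytes_per_pixel = 1280, 1024, 3  # 24-bit color = 3 bytes/pixel
fb_bytes = width * height * bytes_per_pixel
edram_bytes = 4 * 1024 * 1024
print(fb_bytes, edram_bytes, fb_bytes <= edram_bytes)  # 3932160 4194304 True
```

It fits with roughly 256 KB to spare, though with no room left over for a Z buffer or textures.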
 
Never heard about it. But it sounds awesome. I just want to know why there is a need for 1024-bit buses if only 8 pipelines can do texturing. Or can all 16 be used in pixel work?

That's true. Why does it need to read so many, if only 8 pipelines are used for texturing?

Also, I don't understand how a 512-bit bus is enough if there are 4 texels per pixel?

The 1024-bit buses have nothing to do with texturing.

Bilinear filtering takes a weighted average of 4 texels that are neighbors of each other. Usually a lot of those texels from the different pixel pipelines will overlap with each other. Presumably the texel pipelines are able to consolidate accesses so that the 512-bit read is usually enough to service all of them.
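
As a rough illustration of that weighted average (a hypothetical helper, not the GS's actual filter circuit):

```python
def bilinear(texels, fx, fy):
    """Weighted average of the 4 neighboring texels.

    texels: 2x2 grid [[t00, t10], [t01, t11]]
    fx, fy: fractional sample position within the quad, in [0, 1)
    """
    (t00, t10), (t01, t11) = texels
    top = t00 * (1 - fx) + t10 * fx      # blend along x, top row
    bottom = t01 * (1 - fx) + t11 * fx   # blend along x, bottom row
    return top * (1 - fy) + bottom * fy  # blend the two rows along y

# Sampling exactly between four texels weighs them all equally:
print(bilinear([[0.0, 1.0], [1.0, 0.0]], 0.5, 0.5))  # 0.5
```

Note that the 4-texel footprints of adjacent pixels typically share a row or column of texels, which is exactly the overlap that would let the hardware consolidate reads.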

Where did that number come from? :oops:

All of this information comes from leaked Sony documents. I won't link anything here but it's pretty easy to find on Google.

Are the caches also on the GS chip?

Yes.


Because having multiple buses access the same memory simultaneously presents a bunch of design challenges. And I've never heard of a DRAM with multiple buses at all.

Refresh what?

The eDRAM. All DRAM has to be refreshed periodically or else the charge will decay. A refresh is usually like a read operation. And with DRAMs read operations are destructive, so they have to be followed by a write operation unless you're okay with losing the data.

Presumably the read/write buffers work with the pipelines in such a way that they can work around the destructive reads a lot of the time, because they'll be followed by writes to the same location anyway (when doing alpha blending or updating depth). But the refreshes probably need read + write.
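
A toy model of that destructive-read-plus-write-back cycle (purely illustrative; real sense amplifiers do this internally per row, not per bit):

```python
class DramRow:
    """Toy model of destructive DRAM reads: sensing a cell drains its
    charge, so the value must be written back to preserve it."""

    def __init__(self, bits):
        self.cells = list(bits)

    def destructive_read(self, i):
        v = self.cells[i]
        self.cells[i] = 0  # the charge is gone after sensing
        return v

    def read(self, i):
        v = self.destructive_read(i)
        self.cells[i] = v  # write-back: restore what we just destroyed
        return v

row = DramRow([1, 0, 1])
assert row.read(0) == 1 and row.cells == [1, 0, 1]  # value survives the read
```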
 
And I've never heard of a DRAM with multiple buses at all.
More costly/upscale hardwired old-time Windows GDI accelerator graphics cards from the mid-to-late 1990s sometimes used dual-ported VRAM with a read port and a read/write port (the former reserved for RAMDAC scanout IIRC.)

How the ports were implemented on a fundamental level is not something I've ever seen described however, so I'm not going to make any claims here. ;)
 
True, but as far as I can tell the GS can't render to this framebuffer format, so it'd be pretty useless for games

You could perhaps implement a software blitter in VU1 to run graphics acceleration that way, using GS just as a dumb framebuffer... :p
 
Refresh what?
The eDRAM. All DRAM has to be refreshed periodically or else the charge will decay.
Alright.

Pneumatics electronics(?) lesson.

This is a basic DRAM cell, a single bit:

[image: LPa8XdO.png]


Let's assume that an empty balloon is a "0" and a full balloon is a "1."

Let's say you want to store a 0. You turn the valve on so that air can flow through, and wait a moment while the balloon empties itself. Then, you shut the valve off.

Now, you want to know what value was stored in the cell. You turn the valve on, and note that no air blows out, so it was a 0.

Now let's say that you want to store a 1. You start trying to blow air into the tube, then you turn the valve on. You wait a moment as the balloon inflates, then you turn the valve off.

Now, you want to know what value was stored in the cell. You turn the valve on, and note that there's air blowing at your face, so it was a 1. You also note that you just emptied the cell, so you blow more air in and close it.

Then you wait a few days and come back.

You want to know what value was stored in the cell. You turn the valve on, and because it was leaky even while off*, the air has left the balloon. So we read back a 0, even though what we last stored was a 1!

Alright, so we need a solution. We hire someone into the position of Refresh Person. Every few hours, they read the value in the balloon, and then write to the balloon. This way, it never gets so low that you weren't sure if it had been a zero or a one; you can leave the balloon for a year, and as long as the Refresh work gets done, you'll be able to come back to the last value stored.

Electrical DRAM cells work basically the same way, but the balloon is a capacitor and the valve is a transistor, and the time scales for refresh cycles are many orders of magnitude quicker than "hours."

*Well, if we want to get thorough, latex is also quite air-permeable, so most balloons are going to lose plenty of air anyway. As you will learn if you ever switch the butyl inner tubes on your bicycle for latex ones. Whatever.
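
The whole leak-and-refresh story above can be condensed into a toy simulation (the leak rate, threshold, and tick counts are made-up numbers, not real DRAM parameters):

```python
LEAK_PER_TICK = 0.05  # fraction of "air" lost each time step (made up)
THRESHOLD = 0.5       # below this, a stored 1 reads back as a 0

def read_after(ticks, refresh_every=None):
    charge = 1.0  # we stored a "1" (a full balloon)
    for t in range(1, ticks + 1):
        charge *= 1.0 - LEAK_PER_TICK  # the balloon leaks
        if refresh_every and t % refresh_every == 0:
            charge = 1.0               # Refresh Person: read, then re-inflate
    return 1 if charge >= THRESHOLD else 0

print(read_after(50))                    # 0 -- the stored 1 decayed away
print(read_after(50, refresh_every=10))  # 1 -- refresh kept it alive
```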

PS2 had enough eDRAM for a 1280*1024@24bit display
I guess they could save a lot if they ordered their draws and didn't need depth?
 
You could perhaps implement a software blitter in VU1 to run graphics acceleration that way, using GS just as a dumb framebuffer... :p

Well, that's a... creative idea.

If ditching the GS I think you'd actually be better off using the R5900 + VU0 to replace it and keeping the VU1 doing the T&L and maybe some other stuff like triangle setup. VU1 isn't very well suited for rendering. Trying to use a portion of a tiny data RAM as a software texture cache via DMAs is hard to do without huge slowdown. And 128-bit integer SIMD is actually a better fit for a lot of the rendering pipeline, not to mention that it has twice the clock speed.
 
VU1 calculates anything you like. It's a stand-alone CPU basically (if set to operate independently from EE), with some limitations IIRC like it couldn't access main RAM directly, you had to DMA into its own local memory, and may have had some gaps in its instruction set making it not ideally suited for general purpose processing. Its relatively small local memory would be a hindrance there for example, you'd have to page code in and out heavily.
Wasn't this about VU0, which can operate in micro mode or macro mode?

or an old dusty PS2 programmer
If only I could find one. :D It would be great to chat with one of them.
 
If ditching the GS I think you'd actually be better off using the R5900 + VU0 to replace it and keeping the VU1 doing the T&L and maybe some other stuff like triangle setup.
I don't think the PS2 Linux environment was really meant for running 3D apps. After all, PS2 only had 32MB main RAM, and the OS would occupy a fairly substantial amount of that. :p
 
So you had sort-of 4x SSAA in that game, as long as you did not use progressive VGA or component output to your TV/monitor.
And what happens if you use VGA or component?

The 1024-bit buses have nothing to do with texturing.
I know this, but there is a 512-bit texture bus, and as I know it's a read-only bus. So how does texture data get to the eDRAM?

Bilinear filtering takes a weighted average of 4 texels that are neighbors of each other. Usually a lot of those texels from the different pixel pipelines will overlap with each other. Presumably the texel pipelines are able to consolidate accesses so that the 512-bit read is usually enough to service all of them.
Yes! Now I see how it works.

All of this information comes from leaked Sony documents. I won't link anything here but it's pretty easy to find on Google.
I'll try to find something.

And I've never heard of a DRAM with multiple buses at all.
Wasn't it three independent buses? 1024-bit read, 1024-bit write, 512-bit texture?

Presumably the read/write buffers work with the pipelines in such a way that they can work around the destructive reads a lot of the time, because they'll be followed by writes to the same location anyway (when doing alpha blending or updating depth). But the refreshes probably need read + write.
Completely new info for me. I will think about it. :D

Alright.

Pneumatics electronics(?) lesson.

This is a basic DRAM cell, a single bit:
Absolutely amazing explanation, and especially the picture! You made my day!!! :D A lot of kudos to you! :D So refresh is now clear to me. But shouldn't it use some part of the bandwidth, so there is less bandwidth available?
 
And what happens if you use VGA or component?
Then you'd get all 480 lines drawn on-screen instead, thereby losing the "supersample" effect on the vertical axis (but retaining it on the horizontal.)

The flicker filter is there because of the way analog standard-definition television works. If you haven't ever studied the (rather ancient; TV was first developed some 80-ish years ago) tech: each frame, typically 480 scanlines at 30 frames per second for NTSC TV excluding overscan, is divided into two fields, because the bandwidth to transmit full frames as analog signals was deemed excessive at the time. Each field consists of either every even- or every odd-numbered scanline. The fields are drawn in full, alternating one after the other, by the electron beam of the cathode ray tube at twice the frame rate, relying on the persistence of the glow of the phosphor coating on the inside of the picture tube to retain the image while the frame's other set of scanlines is drawn.

Typically there is some flicker anyway, as the phosphor cannot have too long a persistence or you'd get smearing of moving imagery. Normally that is not a very big deal, as analog video typically does not have a lot of sharply contrasting thin horizontal lines that would show this flicker. Computer graphics however, and especially textured 3D polygon graphics, can have a lot of such contrasting lines, both along polygon edges and in textures inside polygons.

Therefore, flicker-filter hardware merges two scanlines to reduce contrast (in the process halving vertical resolution from 480 lines to 240), but requires the PS2 hardware to draw full-frame images even though only half the frame would be shown on standard-definition TVs. (Not half the frame literally, as in the lower or upper half only, but half the amount of lines from the originally rendered frame - bahhh! Hopefully you understand what I'm trying to say. ;))
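
The merge step can be sketched as a simple two-line average (the real hardware's blend weights may differ; this is just the idea):

```python
def flicker_filter(frame):
    """Merge each pair of adjacent scanlines into one output line,
    halving 480 rendered lines to 240 displayed lines.

    frame: list of scanlines, each a list of pixel intensities."""
    return [
        [(a + b) // 2 for a, b in zip(frame[y], frame[y + 1])]
        for y in range(0, len(frame) - 1, 2)
    ]

# Worst-case flicker content: alternating full-bright and black scanlines.
frame = [[255] if y % 2 == 0 else [0] for y in range(480)]
out = flicker_filter(frame)
print(len(out), out[0])  # 240 [127] -- the harsh line contrast is averaged away
```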

Some games, mainly fairly early ones, relied on field rendering instead. Instead of drawing 60 full frames per second you drew only a field's worth of lines at 60 frames/sec. This meant you only needed to allocate half the eDRAM space for frame and Z buffers, leaving more room for textures, potentially giving a higher visual fidelity. However, you had to maintain 60fps rendering at all times, since if code execution ran slow and you overran your 1/60th of a second time allotment, you had to display the previous frame again, making the action judder and giving the appearance of losing half your vertical resolution for a brief instant. It looked very odd indeed! Also, field rendering on a CRT tended to flicker hellishly and was fairly bothersome to watch for any length of time, and on progressive TVs it just doesn't work well at all since it abuses the peculiarities of standard definition TV and CRT tubes.

As developers got more comfortable in streaming textures to eDRAM in an efficient fashion, field rendering fell out of favor, because it was just hellish to look at, really. :p

(All of this coming out of the musty archives of my brain, I may have misremembered and fucked up the explanation in some way, it's been so many years now since I last bothered with this stuff - and good riddance anyway. I always hated analog TV. :D)
 
Therefore, flicker-filter hardware merges two scanlines to reduce contrast (in the process halving vertical resolution from 480 lines to 240), but requires the PS2 hardware to draw full-frame images even though only half the frame would be shown on standard-definition TVs. (Not half the frame literally, as in the lower or upper half only, but half the amount of lines from the originally rendered frame - bahhh! Hopefully you understand what I'm trying to say. ;))
Yes, I hope I understood you! Very detailed explanation.

Some games, mainly fairly early ones, relied on field rendering instead. Instead of drawing 60 full frames per second you drew only a field's worth of lines at 60 frames/sec. This meant you only needed to allocate half the eDRAM space for frame and Z buffers, leaving more room for textures, potentially giving a higher visual fidelity. However, you had to maintain 60fps rendering at all times, since if code execution ran slow and you overran your 1/60th of a second time allotment, you had to display the previous frame again, making the action judder and giving the appearance of losing half your vertical resolution for a brief instant. It looked very odd indeed! Also, field rendering on a CRT tended to flicker hellishly and was fairly bothersome to watch for any length of time, and on progressive TVs it just doesn't work well at all since it abuses the peculiarities of standard definition TV and CRT tubes.
This sounds absolutely crazy! :D But there are games that do it! Unbelievable! How many more secrets does the PS2 have?

I think I understood you more or less. Really great! Thank you so much for such a big and informative post!
But are you sure Champions of Norrath and Champions: Return to Arms used 4x SSAA and not 2x SSAA as Shifty said before? And yes, are you a PS2 game developer? :D
 
This sounds absolutely crazy! :D But there are games that do it!
Tekken Tag Tournament, Devil May Cry (the first one), Dead or Alive (first one on PS2; DoA 2 maybe?), Zone of the Enders (again, the first one), the first PS2 Gran Turismo as well IIRC, and probably a bunch of others, but these are the ones I know of/recall offhand.

And yes, it was a bit nuts. :p But PS2-era consoles were the first batch really capable of running in high res, so the tech was a bit wonky back then. Well, the Sega Dreamcast was an earlier example compared to PS2, and the SNES had an interlaced high-res display mode too, but virtually no game software used it (because of the pretty horrific flickering.)

Unbelievable! How many more secrets does the PS2 have?
Have you heard about the midgets yet? @Shifty Geezer tell him about the midgets! He'll love it.

But are you sure Champions of Norrath and Champions: Return to Arms used 4x SSAA and not 2x SSAA as Shifty said before?
It's only "4x" SSAA if you run it on (preferably) an SDTV using an SD video connection (i.e., S-video, composite, or technically RF, but who plays PS2 over RF? Ugh, ghacckk...)

...And I don't know if it technically counts as 4x SSAA, since vertically it relies on the flicker-filter feature, and if you count that as SSAA then every game that draws full frames and uses the flicker filter would also count as using 2x SSAA at minimum, so it's a bit of a cheat I guess.
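
For what it's worth, the "4x" figure simply falls out of the sample counts:

```python
# 1280x480 rendered, 640x240 effectively displayed on an interlaced SDTV
render_w, render_h = 1280, 480
display_w, display_h = 640, 240
samples_per_pixel = (render_w // display_w) * (render_h // display_h)
print(samples_per_pixel)  # 4 -- two horizontal samples times two vertical ones
```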

And yes, are you PS2 game developer? :D
Nope... Just a console hardware nerd on the internet. :p
 
Tekken Tag Tournament, Devil May Cry (the first one), Dead or Alive (first one on PS2; DoA 2 maybe?), Zone of the Enders (again, the first one), the first PS2 Gran Turismo as well IIRC, and probably a bunch of others, but these are the ones I know of/recall offhand.
But newer games had the same textures without this stuff, and even better textures? And they had the same or better textures because of streaming?

Have you heard about the midgets yet?
No, never heard of it. What is it!? :D

...And I don't know if it technically counts as 4x SSAA, since vertically it relies on the flicker-filter feature, and if you count that as SSAA then every game that draws full frames and uses the flicker filter would also count as using 2x SSAA at minimum, so it's a bit of a cheat I guess.
Ok, maybe Shifty can also tell something about it. :D

Nope... Just a console hardware nerd on the internet.
Ok. But when you post something, it looks like you are. :D Maybe you can also clear up my question about the PS2 eDRAM buses. If the eDRAM has one 1024-bit bus for framebuffer reads at 19.2 GB/s, one 1024-bit bus for framebuffer writes at 19.2 GB/s, and one 512-bit bus for texture reads at 9.6 GB/s, how do textures get to the eDRAM? Because if the GS writes 19.2 GB/s to the eDRAM and reads 19.2 GB/s, the eDRAM bandwidth is already used up, but the GS also reads 9.6 GB/s of texture data. How?
 
Ok. But when you post something, it looks like you are. :D Maybe you can also clear up my question about the PS2 eDRAM buses. If the eDRAM has one 1024-bit bus for framebuffer reads at 19.2 GB/s, one 1024-bit bus for framebuffer writes at 19.2 GB/s, and one 512-bit bus for texture reads at 9.6 GB/s, how do textures get to the eDRAM? Because if the GS writes 19.2 GB/s to the eDRAM and reads 19.2 GB/s, the eDRAM bandwidth is already used up, but the GS also reads 9.6 GB/s of texture data. How?
DRAM is only destructive on the cell level. Most DRAM implementations use refresh circuitry to internally write data back after it's been read.
 
DRAM is only destructive on the cell level. Most DRAM implementations use refresh circuitry to internally write data back after it's been read.
And what does that mean? I mean, how does it answer my question? :D I understand how framebuffer data gets to the eDRAM and how it gets out, but what about texture data? As I know, the texture data bus is read-only, so how does texture data get to the eDRAM? :D
 
As I know, the texture data bus is read-only, so how does texture data get to the eDRAM?
There's little or no difference between texture and framebuffer data as far as the GS is concerned; texture data goes in the same way everything else goes in. While the GS AFAIK can't render to palettized 4/8-bit buffers (a common texture format on PS2, to save space compared to full 16-bit textures), it could for example render a scene and then, in a next step, apply that as a texture to another scene. So a texture can be a buffer can be a texture.

HTH. :)
 
There's little or no difference between texture and framebuffer data as far as the GS is concerned; texture data goes in the same way everything else goes in. While the GS AFAIK can't render to palettized 4/8-bit buffers (a common texture format on PS2, to save space compared to full 16-bit textures), it could for example render a scene and then, in a next step, apply that as a texture to another scene. So a texture can be a buffer can be a texture.
So you're saying texture data goes to the eDRAM through the 1024-bit write bus, right?
But there is still one thing. :D The eDRAM has 19.2 GB/s write and 28.8 GB/s read. How can the GS read more than it writes? Because on every other system, buses always have the same read/write bandwidth.
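
For reference, the quoted figures are consistent with each bus simply transferring its full width every GS clock (assuming the commonly cited ~150 MHz GS clock; exact clock figures vary slightly between sources). The asymmetry is just 1024 write bits versus 1024 + 512 read bits:

```python
CLOCK_HZ = 150_000_000  # commonly cited GS clock (approximate)

def bandwidth_gb_s(bus_bits):
    """Peak bandwidth of a bus that moves bus_bits per clock, in GB/s."""
    return bus_bits / 8 * CLOCK_HZ / 1e9

print(bandwidth_gb_s(1024))        # 19.2 -- framebuffer read (or write) bus
print(bandwidth_gb_s(512))         # 9.6  -- texture read bus
print(bandwidth_gb_s(1024 + 512))  # 28.8 -- total read: framebuffer + texture
```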
 
But PS2-era consoles were the first batch really capable of running in high res, so the tech was a bit wonky back then. Well, the Sega Dreamcast was an earlier example compared to PS2, and the SNES had an interlaced high-res display mode too, but virtually no game software used it (because of the pretty horrific flickering.)
Good design could get around that. I wrote a couple of high-res (640x512) Amiga games in AMOS using the high-res mode, and the flicker was bearable. One was a silent-movie clone of Jet Pac - greyscale palette, flickering, and film grain plus artefacts. The other was a (combat-free) space shooter prototype. By avoiding high brightness and contrast, the flicker was pretty manageable.

Have you heard about the midgets yet? @Shifty Geezer tell him about the midgets! He'll love it.
You know we're not supposed to talk about the midgets. You shouldn't even have mentioned the midgets.
 