Questions about PS2

The GS's EDRAM was for framebuffers, textures and texture palettes only.
OK, but where is the geometry data stored in the GS? Is it stored in the pixel pipelines' registers?

Multipass was very easy on PS2 because the GS could switch textures in just a few instructions.
I read that it's possible to do up to 16 texture changes per frame. Does that mean that every texture switch is a multipass? Example: there's 4 MB of EDRAM. Let's say the resolution is 512x512, so the framebuffer will be 1 MB and the Z buffer 1 MB. That leaves only 2 MB for textures. And here come more questions.
1) Is 2 MB enough for textures, or is more needed per frame?
2) Is it possible to write 2 MB of textures to EDRAM, texture part of the frame, then write the next 2 MB of textures and texture another part of the frame?
3) If yes, is that multipass or not? Or is multipass something different?

So, you could do a lot of work to transform and clip a small batch of triangles once, then send them to the GS multiple times with a different texture each time.
Also some questions here.
1) Does that mean that only part of the frame's polygons are calculated, or all of them?
2) Does that mean that VU1 needs to recalculate the polygons each time they are sent to the GS with a different texture?
 
There's 4 MB of EDRAM. Let's say the resolution is 512x512, so the framebuffer will be 1 MB and the Z buffer 1 MB. That leaves only 2 MB for textures. And here come more questions.
1) Is 2 MB enough for textures, or is more needed per frame?
2) Is it possible to write 2 MB of textures to EDRAM, texture part of the frame, then write the next 2 MB of textures and texture another part of the frame?

The actual devs will be able to give you a more detailed explanation, but not only can you do that on the PS2, you pretty much must if you want to get the most out of it.
Yet, that's not multipass. It's just memory management.
Multipass is when you draw more than a single texture on the same poly. Think: detail mapping, where a tiled high-frequency texture is overlaid on top of a larger low-frequency one; or lightmapping in Quake, where the diffuse texture is overlaid by the lightmap; or a terrain renderer, where a tiled grass texture transitions to another tiled dirt texture based on vertex weights. To achieve those effects, you either need your rasterizer to be able to use more than one texture at once, which the PS2 can't, or you need to draw multiple polys on top of each other for every new texture you might need. What has been clarified here is that the PS2's architecture gives devs room to do that efficiently, avoiding redundant transforms. But it's all up to the dev to devise and implement.
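To make the distinction concrete, here's a minimal sketch of what multipass looks like in principle. This is not real PS2 code: every helper name below is a hypothetical stand-in for whatever the engine's GS packet code would actually do.

```c
/* Quake-style lightmapping as described above: the same polys are
 * rasterized twice, once per texture.  All helpers are hypothetical. */

typedef struct { float x, y, z, u, v; } Vertex;

typedef enum { BLEND_NONE, BLEND_MODULATE } BlendMode;
typedef enum { DEPTH_LEQUAL, DEPTH_EQUAL } DepthTest;

extern void set_texture(int tex_handle);                /* hypothetical */
extern void set_blend_mode(BlendMode mode);             /* hypothetical */
extern void set_depth_test(DepthTest test);             /* hypothetical */
extern void draw_triangles(const Vertex *v, int count); /* hypothetical */

void draw_lightmapped(const Vertex *tris, int vert_count,
                      int diffuse_tex, int lightmap_tex)
{
    /* Pass 1: diffuse texture, normal depth test, Z writes on. */
    set_depth_test(DEPTH_LEQUAL);
    set_blend_mode(BLEND_NONE);
    set_texture(diffuse_tex);
    draw_triangles(tris, vert_count);

    /* Pass 2: identical geometry, lightmap texture, modulate blend.
     * DEPTH_EQUAL makes sure only pixels drawn in pass 1 are touched. */
    set_depth_test(DEPTH_EQUAL);
    set_blend_mode(BLEND_MODULATE);
    set_texture(lightmap_tex);
    draw_triangles(tris, vert_count);   /* second rasterization = multipass */
}
```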
 
Geometry is stored in main memory and chunks are DMA'ed into VU (usually VU1) memory for transformation. Uploads (and sometimes downloads) from EDRAM had several paths, the best being DMA; throughout a frame, a lot of textures would be sent up to EDRAM.
In the PS2, the DMA unit was effectively the CP (Control Processor): you built the DMA list to move the geometry and 'shader' program to the VU while textures were uploaded, so that when the GS did its thing, all the data was where it should be. DMA programming was the most vital part of a PS2 renderer, and we often built quite powerful tools to help us find/fix bugs.
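For flavor, here's a rough sketch of building such a source-chain DMA list, written from memory of the hardware manuals; the field layout is simplified and untested, and the addresses/sizes are placeholders. Each 128-bit tag tells the DMAC what to transfer next, which is how one chain can interleave texture uploads with geometry.

```c
#include <stdint.h>

#define DMA_ID_REF  3u   /* transfer QWC qwords from ADDR, then next tag   */
#define DMA_ID_END  7u   /* transfer QWC qwords following the tag and stop */

typedef struct { uint64_t lo, hi; } DmaTag;   /* one quadword = 128 bits */

static DmaTag dma_tag(uint32_t id, uint16_t qwc, uint32_t addr)
{
    DmaTag t;
    t.lo = (uint64_t)qwc              /* bits  0-15: quadword count */
         | ((uint64_t)id << 28)       /* bits 28-30: tag ID         */
         | ((uint64_t)addr << 32);    /* bits 32-62: source address */
    t.hi = 0;                         /* upper 64 bits unused here  */
    return t;
}

/* A chain that uploads a texture, then a batch of geometry, then stops. */
void build_frame_chain(DmaTag *chain,
                       uint32_t tex_addr,  uint16_t tex_qwc,
                       uint32_t geom_addr, uint16_t geom_qwc)
{
    chain[0] = dma_tag(DMA_ID_REF, tex_qwc,  tex_addr);  /* texture -> GIF */
    chain[1] = dma_tag(DMA_ID_REF, geom_qwc, geom_addr); /* geometry       */
    chain[2] = dma_tag(DMA_ID_END, 0, 0);                /* stop the DMAC  */
}
```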
 
The actual devs will be able to give you a more detailed explanation, but not only can you do that on the PS2, you pretty much must if you want to get the most out of it.
Yet, that's not multipass. It's just memory management.
Multipass is when you draw more than a single texture on the same poly. Think: detail mapping, where a tiled high-frequency texture is overlaid on top of a larger low-frequency one; or lightmapping in Quake, where the diffuse texture is overlaid by the lightmap; or a terrain renderer, where a tiled grass texture transitions to another tiled dirt texture based on vertex weights. To achieve those effects, you either need your rasterizer to be able to use more than one texture at once, which the PS2 can't, or you need to draw multiple polys on top of each other for every new texture you might need. What has been clarified here is that the PS2's architecture gives devs room to do that efficiently, avoiding redundant transforms. But it's all up to the dev to devise and implement.
That explains a lot for me. As I understand it, you can upload textures to EDRAM multiple times per frame and texture different parts of the frame, and that is not multipass. For multipass there has to be multiple texturing per polygon, but does that require multiple texture uploads to EDRAM? Or is it possible to write many textures to EDRAM at once and use a different one for each pass? Or is a separate texture upload to EDRAM needed for each pass? In other words, there can be up to 16 texture uploads to EDRAM per frame, and some of them will be different textures for different parts of the frame and some will be different textures for each pass? I know my questions are complicated. :D
 
Uploads (and sometimes downloads) from EDRAM had several paths, the best being DMA; throughout a frame, a lot of textures would be sent up to EDRAM.
From EDRAM or to EDRAM? The other part is understandable, but is the geometry stored in the GS after it's calculated in VU1 and sent to the GS?
 
OK, but where is the geometry data stored in the GS? Is it stored in the pixel pipelines' registers?

Geometry is not stored in GS RAM. It's stored in main memory. The GS is only pixels. Geometry data lives in main mem, passes through the VU to get animated, and ends up as triangular splats of pixels in GS mem. The data flow is one-way: main mem -> VU mem -> GS mem -> TV. Reading back data from the GS to anywhere else is very rare. Like, you could do that for screen shots or maybe primitive GPGPU. But that's it.

I read that it's possible to do up to 16 texture changes per frame. Does that mean that every texture switch is a multipass? Example: there's 4 MB of EDRAM. Let's say the resolution is 512x512, so the framebuffer will be 1 MB and the Z buffer 1 MB. That leaves only 2 MB for textures. And here come more questions.
1) Is 2 MB enough for textures, or is more needed per frame?
2) Is it possible to write 2 MB of textures to EDRAM, texture part of the frame, then write the next 2 MB of textures and texture another part of the frame?
3) If yes, is that multipass or not? Or is multipass something different?

You bring up an important issue that I left out. 4 MB is not a lot of space. After the front, back and Z buffers, there's not much room for textures in there. So, the EE would need to coordinate DMAing textures into GS RAM at the same time that it was piping geometry through the VU. When I said the GS could switch textures in a couple of cycles, I was talking about textures that are already in GS RAM. Getting the textures there takes a bit of time. But, if all of the textures you need for an object are already there, switching between them is trivial. I don't remember what the DMA bandwidth for textures from main mem -> GS mem was. But, it was pretty high. So, you could cycle a lot of textures through the GS EDRAM every frame.
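Here's roughly what that coordination looks like at the engine level; this is a sketch with made-up helper names, assuming a simple double-buffered texture area in EDRAM. While the GS draws batch i, whose textures are already resident, the DMAC uploads batch i+1's textures into the other slot, so drawing never waits on a cold upload.

```c
typedef struct Batch Batch;   /* opaque: polys plus their texture list */

extern void upload_textures_async(const Batch *b, int slot); /* hypothetical */
extern void wait_for_upload(int slot);                       /* hypothetical */
extern void draw_batch(const Batch *b, int slot);            /* hypothetical */

void draw_frame(const Batch *batches, int n)
{
    int slot = 0;                        /* two texture regions: 0 and 1 */
    if (n > 0)
        upload_textures_async(&batches[0], slot);
    for (int i = 0; i < n; i++) {
        wait_for_upload(slot);           /* batch i's textures resident  */
        if (i + 1 < n)                   /* prefetch the next batch into */
            upload_textures_async(&batches[i + 1], slot ^ 1); /* slot B  */
        draw_batch(&batches[i], slot);
        slot ^= 1;
    }
}
```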

"Multipass" means rasterizing the same triangles multiple times. Early hardware could barely do anything in a single pass. So, we had to do multipass a lot. These days hardware can do a huge amount in a single pass. So, multiple passes are rarely needed any more.

Also some questions here.
1) Does that mean that only part of the frame's polygons are calculated, or all of them?
2) Does that mean that VU1 needs to recalculate the polygons each time they are sent to the GS with a different texture?

The VU1 only has 16k of RAM for code, incoming data from the EE, working set data and outgoing data to the GS. That's not a lot. So, you might only have 2k of room to place outgoing transformed polys going to the GS. However, once you've done the transform work and formatted it for DMA to the GS, it's pretty trivial to go back over that pre-formatted buffer and tweak it to send it a second time. So, you could keep the positions the same and modify the UVs for a second pass with a different texture super-cheap.

Drawing a single model will probably require more than 2 KB of polys. So, it would require drawing the model in lots of little chunks. But each chunk only needs to be transformed once, and its triangles can be rasterized multiple times if the model wants to use multiple textures. You don't need to push the whole scene through the VU multiple times.
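A C-level sketch of that trick (the real thing would be VU1 microcode; every name here is hypothetical): the chunk is transformed once into a GS-ready packet, then the packet is kicked twice, with only the texture register and UVs patched in between.

```c
typedef struct Chunk Chunk;          /* source verts + per-pass UV sets */
typedef struct GifPacket GifPacket;  /* pre-formatted data headed to GS */

extern void transform_and_clip(const Chunk *c, GifPacket *p);  /* hypothetical */
extern void patch_texture_reg(GifPacket *p, int tex);          /* hypothetical */
extern void patch_uvs(GifPacket *p, const Chunk *c, int pass); /* hypothetical */
extern void kick_to_gs(const GifPacket *p);                    /* hypothetical */

void draw_chunk_two_pass(const Chunk *chunk, GifPacket *pkt,
                         int pass1_tex, int pass2_tex)
{
    transform_and_clip(chunk, pkt);     /* the expensive part, done once */

    patch_texture_reg(pkt, pass1_tex);
    kick_to_gs(pkt);                    /* pass 1 */

    /* Positions stay untouched; only the UVs and the texture change. */
    patch_texture_reg(pkt, pass2_tex);
    patch_uvs(pkt, chunk, 2);
    kick_to_gs(pkt);                    /* pass 2, no re-transform */
}
```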
 
The GS is only pixels. Geometry data lives in main mem, passes through the VU to get animated, and ends up as triangular splats of pixels in GS mem.
But are those splats of pixels stored in EDRAM as the framebuffer?

Reading back data from the GS to anywhere else is very rare.
Is it even possible? How? Is there just a one-way bus between the EE and the GS, or not?

Like, you could do that for screen shots or maybe primitive GPGPU.
Can you give some examples of GPGPU on PS2 please? :oops:

After the front, back and Z buffers
So, the front buffer and back buffer are both stored in EDRAM?

When I said the GS could switch textures in a couple of cycles, I was talking about textures that are already in GS RAM.
So that's for multipass, right? If someone wants to texture part of the frame, do the textures have to be uploaded to EDRAM for each part of the frame?

But each chunk only needs to be transformed once, and its triangles can be rasterized multiple times if the model wants to use multiple textures. You don't need to push the whole scene through the VU multiple times.
So, to make it clear: the geometry is calculated on VU1 once, then sent to the GS multiple times?
Thank you for the answers.
 
So that's for multipass, right? If someone wants to texture part of the frame, do the textures have to be uploaded to EDRAM for each part of the frame?

EDRAM is the GS's working memory. The GS reads from it for texturing and Z-tests when rasterizing polys, and the results are written back into it. How much of that memory is used for each thing, in what formats, and how many times data is shuffled around in it is up to the dev.
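To tie that to the 512x512 example from earlier in the thread, here's one possible budget, worked out in code (illustrative only; real layouts varied a lot, and 16-bit buffers were a common way to claw back texture space):

```c
/* One possible 4 MB EDRAM budget for a single 512x512 32-bit target. */
enum {
    KB             = 1024,
    EDRAM_KB       = 4096,                     /* 4 MB total             */
    FRAMEBUFFER_KB = 512 * 512 * 4 / KB,       /* 1024 KB, 32-bit color  */
    ZBUFFER_KB     = 512 * 512 * 4 / KB,       /* 1024 KB, 32-bit depth  */
    TEXTURE_KB     = EDRAM_KB - FRAMEBUFFER_KB /* 2048 KB left for       */
                   - ZBUFFER_KB                /* textures and palettes  */
};
```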
 
But are those splats of pixels stored in EDRAM as the framebuffer?

Yes. When I said "splats" I was just being clear that the triangles end up as pixels in the framebuffer, as opposed to being verts and indices.

Is it even possible? How? Is there just a one-way bus between the EE and the GS, or not?

I know there was a read-back path because we used it to save screen shots.

Can you give some examples of GPGPU on PS2 please? :oops:

While working on an image post-processing system for the PS2, at one point I had this effect sorta running on the GS just as a fun experiment. Here's a modern equivalent. Not a good idea in reality because the numeric precision was so bad. As I said, very primitive.

So, the front buffer and back buffer are both stored in EDRAM?

Yep

So that's for multipass, right? If someone wants to texture part of the frame, do the textures have to be uploaded to EDRAM for each part of the frame?

Yep

So, to make it clear: the geometry is calculated on VU1 once, then sent to the GS multiple times? Thank you for the answers.

Yep. No problem.
 
Sony's biggest mistake with the PS2 was not getting the tools right before launch. Or indeed any reasonable time after that. If at all.
There was no real incentive to push the hardware, because everything would sell reasonably well thanks to the insanely big user base. Everyone "knew" what it could do, so a false floor, a level of "normal", was created. Thus the machine was hardly ever pushed. Even the games that did dare to push the envelope hardly scratched the surface, being content to do a few things right and mostly be mediocre games otherwise.

The EE had an on-die macroblock decompressor with insanely high throughput, much higher than would ever have been needed for playing a DVD. The intent was of course to decompress textures for games on the fly.
No one to my knowledge ever utilized that feature. With MIP mapping and decent culling, the texture throughput per frame would never need to go above a few megabytes, when the frame itself was only one megabyte. Most likely it would have been even less had 16-bit textures been used.
Huffman decompression for textures was done by the CPU and VUs, but why use precious time for that when there is much better hardware with higher compression sitting right there?!
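The unit being described is the EE's IPU (Image Processing Unit), the MPEG-2 macroblock decoder. A heavily simplified sketch of the intended flow, with every helper name hypothetical (the real unit is driven through DMA channels and control registers, not function calls):

```c
typedef unsigned char u8;

extern void ipu_reset(void);                            /* hypothetical */
extern void ipu_feed_bitstream(const u8 *p, int size);  /* hypothetical */
extern void ipu_read_macroblocks(u8 *out);              /* hypothetical */

/* Decode compressed macroblock data on the IPU instead of burning
 * CPU/VU time on Huffman decoding. */
void decompress_texture_via_ipu(const u8 *bitstream, int size, u8 *texels)
{
    ipu_reset();
    ipu_feed_bitstream(bitstream, size);  /* DMA into the IPU's in-FIFO */
    ipu_read_macroblocks(texels);         /* DMA out of the out-FIFO    */
    /* 'texels' now holds raw pixels, ready to be DMA'd up to EDRAM. */
}
```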

Geometry on the PS2 was meant to be synthesized. That much is clear from what Kutaragi said early on, before release. The geometry was meant to be largely dynamic, instanced and generated on the fly. If that had been done more thoroughly, PS2 games would have looked a hell of a lot better.
The MIP level could have been set for inclined surfaces, or for whole objects with approximately the same surface inclination towards the screen, to avoid shimmer. Culling could have been done more effectively, and the triangles could have been generated at the right size: too small and you thrash the cache, too large and you get texture jumping, etc.
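As a sketch of that per-object mip idea (my own construction, not from any shipped game): pick one fixed LOD for a surface from its on-screen texel density, biased by how steeply it is inclined away from the camera.

```c
#include <math.h>

int pick_object_lod(float texels_per_pixel_at_facing, /* density when face-on */
                    float cos_incline)                /* dot(normal, view)    */
{
    /* An inclined surface covers fewer screen pixels per texel, so the
     * effective density rises by 1/cos(angle); clamp to avoid div by 0. */
    float c = cos_incline < 0.1f ? 0.1f : cos_incline;
    float density = texels_per_pixel_at_facing / c;

    int lod = (int)floorf(log2f(density));   /* one mip per doubling */
    return lod < 0 ? 0 : lod;
}
```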

Doing stuff like that would of course have meant that the games would have been harder to port to other, less advanced platforms. But who cares if the install base is so large?!
The real answer, I suppose, is that the computer industry is the most conservative, stiff and dogmatic of all the branches of engineering when it comes down to it. Double up for game programmers. ;-)
 
I've made some calculations; someone correct me if I'm wrong. If there are 250k polygons per frame in Jak 3 and the game runs at 60 fps, that's 15 million polygons per second. If one vertex is 16 bytes, that's ~228 MB/s of geometry data. The bus between the EE and GS can send 1228 MB/s, which leaves 1000 MB/s, or ~16 MB per frame, for texture data. So that's the maximum amount of texture data that can be sent to the GS per frame. Is all this correct? And one more question: as far as I know, Naughty Dog didn't use multipass in the Jak games. Is that true?
 
I've made some calculations; someone correct me if I'm wrong. If there are 250k polygons per frame in Jak 3 and the game runs at 60 fps, that's 15 million polygons per second. If one vertex is 16 bytes, that's ~228 MB/s of geometry data. The bus between the EE and GS can send 1228 MB/s, which leaves 1000 MB/s, or ~16 MB per frame, for texture data. So that's the maximum amount of texture data that can be sent to the GS per frame. Is all this correct? And one more question: as far as I know, Naughty Dog didn't use multipass in the Jak games. Is that true?
In real life you probably have to halve that number. But that is still more than enough for a buffer of 1 megabyte.
Most games just filled the buffer with the main set of large textures (character, ground, etc.) and only changed a few textures during rendering, leaving much of the bandwidth wasted (and it really was huge bandwidth, considering the frame buffer was on-die).
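For anyone double-checking the quoted numbers, the arithmetic works out under the poster's implicit assumptions: one vertex per polygon (i.e., perfect strips), 16 bytes per vertex, and a ~1.2 GB/s EE-to-GS path.

```latex
\begin{aligned}
250\,000 \,\tfrac{\text{polys}}{\text{frame}} \times 60 \,\tfrac{\text{frames}}{\text{s}}
  &= 15 \times 10^{6} \,\tfrac{\text{polys}}{\text{s}} \\
15 \times 10^{6} \times 16 \,\text{B}
  &\approx 229 \,\text{MiB/s of geometry (the poster's ${\sim}228$ MB)} \\
1228 - 229 \approx 1000 \,\text{MiB/s}
  &\;\Rightarrow\; 1000 / 60 \approx 16.7 \,\text{MiB of textures per frame}
\end{aligned}
```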
 
Sony's biggest mistake with the PS2 was not getting the tools right before launch. Or indeed any reasonable time after that. If at all.
There was no real incentive to push the hardware, because everything would sell reasonably well thanks to the insanely big user base. Everyone "knew" what it could do, so a false floor, a level of "normal", was created. Thus the machine was hardly ever pushed. Even the games that did dare to push the envelope hardly scratched the surface, being content to do a few things right and mostly be mediocre games otherwise.

The EE had an on-die macroblock decompressor with insanely high throughput, much higher than would ever have been needed for playing a DVD. The intent was of course to decompress textures for games on the fly.
No one to my knowledge ever utilized that feature. With MIP mapping and decent culling, the texture throughput per frame would never need to go above a few megabytes, when the frame itself was only one megabyte. Most likely it would have been even less had 16-bit textures been used.
Huffman decompression for textures was done by the CPU and VUs, but why use precious time for that when there is much better hardware with higher compression sitting right there?!

Geometry on the PS2 was meant to be synthesized. That much is clear from what Kutaragi said early on, before release. The geometry was meant to be largely dynamic, instanced and generated on the fly. If that had been done more thoroughly, PS2 games would have looked a hell of a lot better.
The MIP level could have been set for inclined surfaces, or for whole objects with approximately the same surface inclination towards the screen, to avoid shimmer. Culling could have been done more effectively, and the triangles could have been generated at the right size: too small and you thrash the cache, too large and you get texture jumping, etc.

Doing stuff like that would of course have meant that the games would have been harder to port to other, less advanced platforms. But who cares if the install base is so large?!
The real answer, I suppose, is that the computer industry is the most conservative, stiff and dogmatic of all the branches of engineering when it comes down to it. Double up for game programmers. ;-)
If true, that's kinda crazy and sad at the same time.
 
Can anyone say something about my Jak 3 calculations?
I already did. You are correct, but for real-world use you have to take the throughput down a notch; halving it is not too pessimistic. But 8 MB is still more than enough for one frame. Texels should preferably be kept larger than pixel size to avoid fighting, but that's what MIP mapping is for.
And of course they used multipass in all the Jak games. That's the only way to achieve the effects seen in those games.
 