Render resolutions of 6th gen consoles - PS2 couldn't render more than 224 lines? *spawn

I am saying that, in my experience, the PS2 has a hard limit of 224 lines.
Its limit was available VRAM. Here's an old discussion with all the old devs.

A couple of responses challenging points made about GS limitations (from a troublemaker!):

GS:
1. Must keep all three buffers on-chip (eats 3.5 MB out of 4 MB).
2. Has no texture compression.
3. Has only 512K available for the texture buffer.
4. Has no external memory.
5. A developer must continually upload new textures into the texture buffer inside the GS before sending polygons.


Re: ...

1. 640x480 x 24bpp / 8 / 1000 x 3 = 2764 KB. That leaves 1332 KB for textures, and that's even a generous setup; some games would look fine at 640x240 x 16bpp / 8 / 1000 x 3 = 921 KB, which leaves 3174 KB for textures.
2. Then what would you call CLUT, if not a compression format?
3. See 1.
4. There is main memory, it's just that the memory controller is on the EE die, as opposed to GC and Xbox, where it resides on the GPU.
5. So it's a manual cache, that's called scratchpad RAM, right? You can find many papers around the web that will tell you that scratchpads are very well suited for multimedia applications.
I've read that synchronisation between geometry and textures can be a problem, especially if using the fast MSKPATH3 upload, but allegedly it's "only" a matter of finding the right balance and sending the textures really early on.

Re: ...

Actually, there should be more than half a meg free. 640x448 (the standard PS2 res, I believe), 16-bit everything, double-buffered with Z is less than 2 MB; triple-buffered it's a little over 2. Then you could cheat and do a half-height front buffer, or even all buffers, and free up even more. Dunno what math you used to come up with 3.5 megs though, sounds like arithmabogutics to me.
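To sanity-check the arithmetic in both replies, here's a minimal back-of-envelope sketch (Python, assuming 4 MB = 4096 KB of GS VRAM and KB = 1024 bytes, so the figures come out slightly lower than the round-number maths quoted above):

```python
# Back-of-envelope GS VRAM budget: width x height x bit depth, times the
# number of full-size buffers (front, back, Z). These are rough figures,
# not exact GS page allocations.
VRAM_KB = 4096  # 4 MB of GS embedded memory

def buffers_kb(w, h, bpp, n):
    """KB consumed by n buffers of w x h pixels at bpp bits per pixel."""
    return w * h * bpp / 8 / 1024 * n

configs = [
    (640, 480, 24, 3, "24-bit, front + back + Z"),
    (640, 240, 16, 3, "16-bit half-height, front + back + Z"),
    (640, 448, 16, 3, "16-bit, double-buffered + Z"),
    (640, 448, 16, 4, "16-bit, triple-buffered + Z"),
]
for w, h, bpp, n, label in configs:
    used = buffers_kb(w, h, bpp, n)
    print(f"{w}x{h} {label}: {used:.0f} KB used, {VRAM_KB - used:.0f} KB left")
```

Even the 24-bit triple-buffered worst case leaves around 1.4 MB for textures, which is consistent with both replies and well short of the 3.5 MB claim.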
 
Which statements? The ones about 480p buffers and how the limiting factor was VRAM are factual. We also have examples of games rendering at higher than 480x224.

Found another one, with confirmation from a PS dev, about PlayStation 1 supporting 480p:
No, Tobal No. 1 and Tobal 2 also ran at 640x480. However, VERY few PlayStation games ran at that resolution.
And it used the same principle as the PS2:
Basically the hardware provided a two-dimensional block of video memory which had to be used for both framebuffers and all textures used for rendering.
The bigger the framebuffers, the less space available for textures, which is why you see odd-sized buffers as a compromise.

So, um, yeah, the PS2 could and did render games above 480x224. As to how many, I've no idea, and it produced a lot of games rendering at lower than SDTV resolution too. But there's no hard limit on resolution, as that teapot experiment producing 720p showed. The devs just had to balance RAM requirements and pick suitable buffers with enough working space for what they were rendering.
 
I am seeing it all as upscaling until it is demonstrated otherwise. I recognize the PS2 could put it all into a 2D framebuffer and upscale it.

Perhaps my bias is based only on the lack of mipmapping and 16-color palettized textures in PS2 games.

Either way, these things do not apply to how the Dreamcast or any other 3D system would handle the same scene. They would have to be accounted for before any assertions that XYZ system cannot handle an exact port.
 
16-color palettized textures in PS2 games.
I know bugger all about the PS2, but we had palettized textures on the PC; they were 256-colour palettized though. Are you sure it wasn't the same on PS2? 16 seems a little low.
Maybe one of the clever people will know, but AFAIK 16 and 256 both take up 8 bits (well, you can represent 16 with 4 bits, but no systems use that).
 
Either way, these things do not apply to how the Dreamcast or any other 3D system would handle the same scene. They would have to be accounted for before any assertions that XYZ system cannot handle an exact port.
Of course. This line of discussion came from someone asserting that DC could render GTA at a higher res than PS2 though, not the other way around. Given the claim that DC can run GTA at better than PS2's res, what res was PS2? Was it only 480x224 because that was the highest resolution the hardware could manage? No, it turns out PS2 can handle higher resolution rendering. Okay, so GTA could be any resolution, and 480x224 is not the target DC needs to surpass to beat PS2's GTA rendering. Does that mean DC cannot surpass PS2? No, we'll have to wait and see.

All we've done with this talk is eliminate a criterion. There are no conclusions on anything, and nothing learnt about GTA's rendering on PS2 other than that it could be higher than 480x224, though it may be that res or lower. Actually, a pixel count of a direct grab would easily solve the render res.
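As a rough illustration of that idea, here's a minimal sketch of one way to do it (Python; the filename is hypothetical). It only works for nearest-neighbour vertical upscales, where duplicated scanlines betray the internal resolution; proper pixel counting instead measures the step lengths of near-vertical edges:

```python
# Estimate vertical render resolution of a direct grab by counting unique
# scanlines. A line duplicated by a simple vertical upscale is byte-identical
# to the one above it, so the count of distinct rows approximates the number
# of uniquely rendered lines.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("gta3_grab.png").convert("RGB"))

# Count rows that differ from the row above; +1 for the first row.
unique_rows = 1 + int(np.count_nonzero(np.any(img[1:] != img[:-1], axis=(1, 2))))
print(f"~{unique_rows} uniquely rendered lines out of {img.shape[0]} output lines")
```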
 
I know bugger all about the PS2, but we had palettized textures on the PC; they were 256-colour palettized though. Are you sure it wasn't the same on PS2? 16 seems a little low.
16-colour CLUT, but you could layer textures in passes. Sometimes people struggle to understand just how different the PS2 was!
 
That may not be the most rigorous test, but IMO GTA3 renders at 640x448. You need that res to get a properly proportioned 480p mode if you force one; any oddball like 512x448 would end up as a squashed image. Or, if the game is field-rendered, it will only output half of the lines. That happens when using GSM. Xploder HDTV Player actually handles field-rendered games properly.
 
Actually, an important plot point here is 480p30 versus 480p60. GTA is running 30 fps. It's not drawing 240 unique lines every 60th of a second, but 480 lines every 30th. Ergo, it's rendering '480p30'. Or whatever res it is, it's 'xxxp30', not 'xxxi60'. If looking at a 60 Hz game, then 480i60 is half the render res of 480p60, but the moment you get to 30 fps output, there's no difference: you're rendering 640x480 pixels in one frame, one 30th of a second.
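A quick worked sketch of that argument, counting unique pixels rendered per second in each mode (Python, illustrative figures only):

```python
# Unique pixels rendered per second in each mode. At 30 fps, interlaced
# output doesn't halve the render cost: a full 640x480 frame is still
# drawn every 1/30 s.
modes = {
    "480p60 (progressive, 60 fps game)": 640 * 480 * 60,
    "480i60 (fields, 60 fps game)":      640 * 240 * 60,
    "480i or 480p at 30 fps":            640 * 480 * 30,
}
for name, pixels in modes.items():
    print(f"{name}: {pixels / 1e6:.1f} M pixels/s")
```

At 30 fps the interlaced and progressive cases cost the same 9.2 M pixels/s; only a true 60 fps game gets a rendering saving from 480i.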
 
I know bugger all about the PS2, but we had palettized textures on the PC; they were 256-colour palettized though. Are you sure it wasn't the same on PS2? 16 seems a little low.
Maybe one of the clever people will know, but AFAIK 16 and 256 both take up 8 bits (well, you can represent 16 with 4 bits, but no systems use that).

You can in principle have CLUT textures with any number of bits per texel, but older GPUs normally have hardware support for 4 and 8 bits.

4 bits per pixel is 16 colours; 8 bits per pixel is 256. On PC, where (in the old days) you might have been drawing on the CPU, it probably wouldn't have made sense to use less than a byte for an element of a sprite or bitmap (memory is typically byte-addressable). On PS2, 4-bit, 16-colour textures were very common.

One of the few real hardware advantages the DC had over the PS2 was its support for 2bpp VQ-compressed textures. A simple 4bpp, 16-colour CLUT texture on PS2 would almost always look worse for general textures (HUD and menu stuff excepted) while taking up more memory. As Shifty says, there were more complicated uses of multiple layers of textures which changed the arithmetic in some situations though.
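To put rough numbers on that comparison, here's a back-of-envelope sketch for one 256x256 texture (Python; it assumes 32-bit palette entries and the commonly described DC VQ layout of a 256-entry codebook of 2x2 16-bit texels):

```python
# Rough texture footprints for a single 256x256 texture.
W = H = 256
texels = W * H

clut4 = texels * 4 // 8 + 16 * 4    # 4bpp indices + 16-entry palette
clut8 = texels * 8 // 8 + 256 * 4   # 8bpp indices + 256-entry palette
vq2   = texels // 4 + 256 * 4 * 2   # one index byte per 2x2 block + codebook

print(f"4bpp CLUT: {clut4} bytes")  # 32832
print(f"8bpp CLUT: {clut8} bytes")  # 66560
print(f"2bpp VQ:   {vq2} bytes")    # 18432
```

Under these assumptions the VQ texture comes in at a little over half the size of the 4bpp CLUT one, while drawing from a much larger effective colour set.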
 
Output resolution, or internal rendered resolution? We need examples and evidence of games rendering internally to a 640x480 buffer to disprove sheathx013's reference to the official documentation. Actually, we only need evidence that PS2 can render above 640x448.

We could also do with @sheathx013 referencing the docs with a quote and source to support their assertion.
Actually, we don't need evidence that it can render above 640x448 internally, because the claim was that it could only do 448 lines interlaced or 224 lines progressive. So just a game with an internal framebuffer of 640x448 would disprove this claim. There are quite a few games that do this, including Soul Calibur 2, Shadow the Hedgehog, Sonic Unleashed, etc., assuming PCSX2's "Take screenshot at internal rendering resolution" option is accurate.

[Attached image: Soul Calibur II_SLUS-20643_20240823190716.png]
 
Coming back to this comment, are you saying rendering in pieces and putting them together isn't how other 3D hardware works?
I don't think I've ever read about another renderer that puts polygons on top of polygons, or renders multiple framebuffers and combines them for one frame in a game.

Obviously everything is rendered in bits and pieces, but as I understand it, for game hardware it is usually one frame at a time at whatever resolution.
 
I don't think I've ever read about another renderer that puts polygons on top of polygons, or renders multiple framebuffers and combines them for one frame in a game.
The XB360 rendered in tiles: two for 720p and three for 1080p. Furthermore, PVR in the DC is a tile-based deferred renderer, meaning it takes the notion of rendering the scene piece by piece to a whole other level! ;)

On earlier PC hardware with limited texture units, you'd use overdraw to add more textures per surface. This came at quite a cost, as those cards lacked bandwidth.

Whole-scene deferred rendering is a common technique developed in the PS360 era, where a whole scene is rendered into multiple buffers that are composited, kinda paralleling what PS2 was doing (but very different in maths and composition). In a deferred renderer you'll render an opaque (albedo) geometry pass and then a specular pass, and combine the two. On PS2, you'd render an opaque geometry pass and then, on top of that, a specular highlight pass.

This technical paper on Killzone 2, one of the pioneering games to use it, covers deferred rendering perfectly.
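As a toy illustration of that composition step (Python with numpy standing in for GPU render targets; the buffer names are illustrative, not Killzone 2's actual G-buffer layout):

```python
# Toy composition step of a deferred renderer: rasterize per-pixel buffers
# first, then combine them into the final frame.
import numpy as np

H, W = 448, 640
albedo   = np.random.rand(H, W, 3)  # opaque geometry pass: surface colour
diffuse  = np.random.rand(H, W, 1)  # lighting pass: per-pixel diffuse term
specular = np.random.rand(H, W, 3)  # lighting pass: per-pixel specular term

# Combine the buffers into the final frame. PS2-style multipass reached a
# similar result by alpha-blending a specular pass over the opaque pass.
frame = np.clip(albedo * diffuse + specular, 0.0, 1.0)
```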

Obviously everything is rendered in bits and pieces, but as I understand it, for game hardware it is usually one frame at a time at whatever resolution.
That's a whole-frame immediate renderer. The XB360 was a tiled immediate renderer, very good at that but not so great at deferred rendering. There are lots of alternative approaches. Current tech is kind of a mix between PS2 and OXB et al.: you can render geometry with multiple materials and lighting in one pass, but then you render to multiple buffers and combine them. Indeed, GPUs evolved to allow multiple render targets, not just a 'back buffer'. PS2's use of overdraw is unique in having very limited per-triangle rendering and using massive overdraw to achieve rendering features, but it's just one member of a spectrum of hardware solutions.
 
Yes, the set-top boxes introduced controller lag along with their nominal HD resolutions as well. The 360 in particular was designed to upscale; I think Digital Foundry was the group that made this particular fact understood. I think their focus was to see which console was rendering internally at "true" HD, a standard that has also since shifted. This was around the time that I started enjoying my budget PC outperforming consoles as well.

The tile-based renderer in PowerVR does not, as far as I understand it, require special programming. It is just on, all the time, which is why the cards of that era didn't need DDR memory or on-chip transform and lighting. That is not to say that games didn't need optimizations for PowerVR, but the benefit was somewhat free, and then further optimization considerations needed to be made that rarely were in quick ports to the Dreamcast.
 
Um, what happened to my post on 480i/p? Try again!

Thinking about it, at 30 fps there's no difference between 480i and 480p. You render the same 640x480 pixels in 1/30th of a second and present them to the screen. 480p only affects render-target size for games that are 480p60. A 60 fps game at 480i60 is rendering half as much as the same game rendering at 480p. So for any 30 fps game on PS2, there can't even be a 640x240 render buffer.

If we want to get technical, you could render two tiles, 640x240 for the top half and 640x240 for the bottom half, and combine them for final output.
Correct. For what it's worth, Soul Calibur 2 renders at a resolution of 640x448 at 60 fps.
 
Yes, the set-top boxes introduced controller lag along with their nominal HD resolutions as well.
What?
The 360 in particular was designed to upscale; I think Digital Foundry was the group that made this particular fact understood.
What? It's tile-based rendering, not upscaling. At 720p, the XB360 renders 1280x720 pixels per frame.
I think their focus was to see which console was rendering internally at "true" HD
Those discussions basically started here on B3D with pixel counting. Leadbetter had perfect video-capture tech which could record uncompressed output, and he engaged the board to develop a methodology to count pixels and determine internal render resolutions (and to count individual frames to establish framerate); thus Digital Foundry was born. Upscaling and later reconstruction techniques allowed devs to get more complexity per pixel at a reduction in visual fidelity, and pixel counting allowed us to see what tradeoffs were happening.
The tile-based renderer in PowerVR does not, as far as I understand it, require special programming.
There's nothing particularly special about tile-based rendering (or even multipass rendering) versus other approaches, and it produces exactly the same results, with a nominal overhead over rendering to a single buffer when triangles overlap tile edges. Tile-based rendering doesn't reduce the number of pixels, upscale, or soften the image; it doesn't add lag; and it doesn't compromise the output in any way. It's just an alternative approach to rendering pixels, making use of a very fast but expensive (and therefore smaller) pool of RAM in situations where that's advantageous.
 
There's nothing particularly special about tile-based rendering (or even multipass rendering) versus other approaches, and it produces exactly the same results, with a nominal overhead over rendering to a single buffer when triangles overlap tile edges. Tile-based rendering doesn't reduce the number of pixels, upscale, or soften the image; it doesn't add lag; and it doesn't compromise the output in any way. It's just an alternative approach to rendering pixels, making use of a very fast but expensive (and therefore smaller) pool of RAM in situations where that's advantageous.
That's essentially how Xenos works, right? The EDRAM is essentially just like the tile buffer, in that it has lots of bandwidth: you write into that super-fast buffer, then transfer the finished result into off-chip memory before output. And it didn't require any special programming, unless I'm totally misunderstanding what happens if you exceed the EDRAM's capacity. The main difference is that Xenos has enough EDRAM to potentially render the scene in one tile.
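For what it's worth, here's a toy sketch of that flow: render into a small fast buffer one tile at a time, then resolve each tile out to a framebuffer in main memory (Python with numpy; all dimensions and the dummy shading are illustrative):

```python
# Toy sketch of tiled rendering: rasterize into a small fast on-chip
# buffer one tile at a time, then resolve each tile to a framebuffer in
# slower main memory.
import numpy as np

FB_W, FB_H = 1280, 720
TILE_H = 240  # pretend the fast buffer only holds 1280x240

framebuffer = np.zeros((FB_H, FB_W, 3), dtype=np.uint8)  # "main memory"

def render_tile(y0, y1):
    """Stand-in for rasterizing whatever geometry falls in rows y0..y1."""
    tile = np.zeros((y1 - y0, FB_W, 3), dtype=np.uint8)  # "EDRAM / tile buffer"
    tile[:, :] = (y0 % 256, 64, 128)                     # dummy shading
    return tile

for y0 in range(0, FB_H, TILE_H):
    y1 = min(y0 + TILE_H, FB_H)
    framebuffer[y0:y1] = render_tile(y0, y1)  # resolve tile to main memory
```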
 