Render resolutions of 6th gen consoles - PS2 couldn't render more than 224 lines? *spawn

That's essentially how Xenos works, right? The EDRAM is essentially just like the tile buffer in that it has lots of bandwidth: you write into that super-fast buffer, then transfer the finished result into off-chip memory before output. And it didn't require any special programming, unless I'm totally misunderstanding what happens if you exceed the EDRAM's capacity. The main difference is that Xenos has enough EDRAM to potentially render the scene in 1 tile.

'Free' 4xMSAA was something Microsoft spent a lot of time talking about and hyping when it came to the EDRAM.

But it required tiling, which came with its own performance cost, such as having to process certain things twice or more in the overlapping sections of the tiles.

It's also why some 360 games had below-720p resolution, as developers chose a resolution and MSAA combination that fit into the 10 MB without the need for tiling.
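That trade-off can be sanity-checked with a rough footprint calculation. The numbers below are a hedged sketch: they assume 4 bytes of colour and 4 bytes of depth per sample and ignore tile alignment, so treat the exact byte counts as approximations rather than the real hardware layout.

```python
def edram_bytes(width, height, msaa=1, color_bpp=4, depth_bpp=4):
    """Rough eDRAM footprint of a render target: each MSAA sample
    needs its own colour and depth entry (4 bytes each assumed)."""
    return width * height * msaa * (color_bpp + depth_bpp)

EDRAM = 10 * 1024 * 1024  # Xenos: 10 MB of eDRAM

print(edram_bytes(1280, 720) <= EDRAM)          # True: plain 720p fits
print(edram_bytes(1280, 720, msaa=4) <= EDRAM)  # False: 4xMSAA forces tiling
print(edram_bytes(1024, 600, msaa=2) <= EDRAM)  # True: sub-720p dodges tiling
```

Under these assumptions a plain 720p target fits, but adding MSAA blows past 10 MB, which is exactly the pressure that pushed some games toward sub-720p buffers.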

I do wonder if they saw what PS2 was doing with its EDRAM and thought that was the way to go.
 
The tile based renderer in PowerVR does not, as far as I understand it, require special programming. It is just on, all the time, which is why the cards of that era didn't need DDR memory or on-chip transform and lighting. That is not to say that games didn't need optimizations for PowerVR, but the benefit was somewhat free, and then considerations needed to be made about how to optimize further that rarely were made in quick ports to the Dreamcast.

It's the same (or a very similar) approach with PVR, it's just that on the DC the polygons are binned and then drawn on the GPU rather than using software on the CPU to do it (Neon 250 moved some of this over to the CPU via the driver to save on die / cost).

The reason the DC was so effective with only a 64-bit SD-RAM bus for the GPU was not just because fully occluded polygons were never drawn, but also because overdraw, blending and z test / write was done on a fast on-chip tile buffer.
 
It's also why some 360 games had below 720p resolution as developers chose a resolution and MSAA combination that fit in to the 10MB without the need for tiling.
While PS3 just went low, low, low res on too many titles for other reasons! But this is going OT from retro consoles and what DC could and couldn't do. Suffice to say all consoles had different approaches over the years.
The reason the DC was so effective with only a 64-bit SD-RAM bus for the GPU was not just because fully occluded polygons were never drawn, but also because overdraw, blending and z test / write was done on a fast on-chip tile buffer.
I guess it's the same principle as PS2 but with a teeny-weeny scratchpad to render teeny-weeny tiles.

So...what happened with huge triangles on DC? How was a full screen quad binned and rendered? Or put that another way, how many tiles was a 640x480 framebuffer divided into and drawn, and how many triangles could occupy a tile?
 
What?

What? It's tile based rendering, not upscaling. At 720p XB360 renders 1280x720 pixels per frame.

Those discussions basically started here on B3D with pixel counting. Leadbetter had perfect video capture tech which could record uncompressed output, and he engaged the board to develop a methodology to count pixels and determine internal render resolutions (and count individual frames to establish framerate) - thus Digital Foundry was born. Upscaling and later reconstruction techniques allowed devs to get more complexity per pixel at a reduction in visual fidelity, and pixel counting allowed us to see what tradeoffs were happening.
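The core idea behind pixel counting can be shown with a toy sketch: a nearest-neighbour upscale preserves the number of distinct steps along an edge, so counting runs of identical pixels recovers the internal width. This is only an illustration of the principle, not Digital Foundry's actual methodology.

```python
def upscale_nearest(row, out_w):
    """Nearest-neighbour upscale of one scanline to out_w pixels."""
    in_w = len(row)
    return [row[x * in_w // out_w] for x in range(out_w)]

def count_steps(row):
    """Count runs of identical pixels. On a smooth diagonal edge the
    run count recovers the pre-upscale (internal) width."""
    return 1 + sum(1 for a, b in zip(row, row[1:]) if a != b)

edge = list(range(1024))       # a 1024-wide 'edge', every pixel distinct
print(count_steps(upscale_nearest(edge, 1280)))   # 1024: the internal width
```

However the scanline is stretched to 1280 output pixels, only 1024 distinct steps survive, betraying the sub-native render resolution.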

There's nothing particularly special about tile based rendering (or even multipass rendering) on other platforms, and it produces exactly the same results, with a nominal overhead over rendering to a single buffer when triangles overlap the edges. Tile-based rendering doesn't reduce the number of pixels, upscale, or soften the image; it doesn't add lag, and it doesn't compromise the output in any way. It's just an alternative approach to rendering pixels, making use of a very-fast-but-expensive-so-smaller pool of RAM for situations where that's advantageous.
The HD devices, or their contemporary screens, introduced controller lag to an inordinate degree. I think most of this was due to the wireless controllers and LCD native resolutions. But the shift in gameplay is well documented. I have not bought any of these devices since the shift.

I forgot the 360 GPU had a sort of tile based renderer. What I am saying is that the gaming industry was driven by Sony and Nvidia from that point on, and not for particularly important reasons. Hardware transform and lighting on the graphics card is a requirement after a certain point with DirectX games, for example. The Kyro cards had updated drivers to work around these limitations, but in my experience the earlier PowerVR cards did not.

The tile based renderer in PowerVR 1 and 2 cards was "free" in the sense that it did not have to be coded for like the PS2. As far as I know the 360 and later devices were using something similar for upscaling. But what the PS2 was used for is not the same due to the rendering resolution.
 
The HD devices or their contemporary screens, introduced controller lag in an inordinate degree. I think most of this was due to the wireless controllers and LCD native resolutions. But the shift in gameplay is well documented. I have not bought any of these devices since the shift.
If we really want to nitpick here we could argue the Saturn introduced input lag with consoles as every game is hardware double buffered resulting in at least 1 frame of lag. The reality is a lot of games on older consoles can have input lag and it depends highly on how well the game is programmed.
 
3DAnalyzer doesn't work for the Neon 250. I think it may require a higher CPU than the PIII 800 I have it with. The card I have doesn't have the pinout for newer motherboards, so I've been stuck for a while now at i440bx.
 
The HD devices or their contemporary screens, introduced controller lag in an inordinate degree.
Why is that being raised in this thread? It has nothing to do with any consoles rendering and has no bearing on whether one old console can run the games from another old console.
I have not bought any of these devices since the shift.
What is the relevance of what gaming hardware you've bought?
I forgot the 360 GPU had a sort of tile based renderer. What I am saying is that the gaming industry was driven by Sony and Nvidia from that point on,
No, you were saying that PS2 could only render 640 x 224, and then that it doesn't really render 640 x 480 even when it does, because that sort of tile-based image construction is what you'd call 'upscaling' even though it's producing a full 640 x 480 framebuffer, and then you said PS2 was unique and no-one else chops the display up into tiles even though DC chops the scene up into 300 tiles. Now you are saying something about PS360 onwards being driven by nVidia and Sony as a non sequitur to all the previous discussion.
The tile based renderer in PowerVR 1 and 2 cards was "free" in the sense that it did not have to be coded for like the PS2.
It was free because the hardware was designed to work that way, but I think you are overstating what PS2 has to do coding-wise. You just render two buffers and combine, though with an overhead of duplicated triangles on boundaries. Furthermore that's moot as it's only for larger framebuffers, but plenty of games rendered directly to a 640x480 buffer. I've linked to historic discussions on this board from actual PS2 devs talking about this. And I've explained that at 30 fps there's no difference in requirements between 480i output and 480p output. Even if PS2 was incurring a rendering overhead by rendering to tiles, it's irrelevant to most games and certainly the GTA port, where we now have a measured framebuffer of 640 x 480 from PCSX2 emulation.
As far as I know the 360 and later devices were using something similar for upscaling. But what the PS2 was used for is not the same due to the rendering resolution.
I don't understand this. What do you mean "something similar for upscaling"? Similar to PVR's TDBR? 360 and PVR have nothing in common, and it had nothing to do with upscaling. 360 used tile based rendering as a full 720p buffer couldn't fit in the 10 MB eDRAM. When rendering 720p, there was no upscaling. Two tiles (1280 x 360 or 640 x 720) were combined to a final 1280 x 720 display buffer. If devs chose a lower resolution, such as to fit the entire framebuffer in the eDRAM at once, then it would upscale, but the rendering hardware and choice of eDRAM and tile rendering was not designed for 'upscaling'.
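The arithmetic behind those tile splits can be made concrete with a hedged sketch. It assumes 4 bytes each of colour and depth per sample, splits only into horizontal bands, and ignores alignment, so it's an illustration of the principle rather than the real predicated-tiling setup.

```python
EDRAM = 10 * 1024 * 1024
BYTES_PER_SAMPLE = 8   # assumed: 4-byte colour + 4-byte depth per sample

def tiles_needed(width, height, msaa):
    """How many horizontal bands a render target must be split into so
    each band's colour+depth samples fit in eDRAM (alignment ignored)."""
    bytes_per_row = width * msaa * BYTES_PER_SAMPLE
    rows_per_tile = EDRAM // bytes_per_row
    return -(-height // rows_per_tile)   # ceiling division

print(tiles_needed(1280, 720, 1))   # 1: plain 720p fits in one go
print(tiles_needed(1280, 720, 2))   # 2: the 1280x360 split mentioned above
print(tiles_needed(1280, 720, 4))   # 3: 4xMSAA pushes it to three tiles
```

So under these assumptions the two-tile 1280x360 split corresponds to 2xMSAA, while 4xMSAA at 720p would need a third tile.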

PS2 had no hardware scaler as back then there wasn't a need to output to a range of SDTV and 720/1080 displays, but some games chose to render at lower than output resolutions (ICO) and either letterboxed or upscaled manually.

Can we put this part of the discussion to bed now?
  1. PS2 did render internally at 640 x 480 in plenty of games, including GTA going by PCSX2
  2. even if rendering was performed with tiles, that was not upscaling and produces exactly the same output as if a single rendertarget was used.
  3. Rendering an image in pieces isn't anything odd or weird or particularly unique to PS2 or snubbed by developers. DC chops a 640 x 480 display into 300 tiles to render!
  4. And lastly, there are lots of different ways to produce a rendered image, each with their pros and cons.
 
The Tile Based Renderer in PowerVR was designed to eliminate overdraw, and limit bandwidth needs. It is not the same as the PS2 combining 320x224 or 640x224 images into a single frame. That is a benefit of its fast VRAM that has to be accounted for in any comparison of ports to another system. The image quality of the PS2 is affected by this, regardless of what it was technically doing.
 
The Tile Based Renderer in PowerVR was designed to eliminate overdraw, and limit bandwidth needs. It is not the same as the PS2 combining 320x224 or 640x224 images into a single frame.
In terms of drawing pieces of the final image and assembling them to make the final image, it is. Except that's not even what PS2 is doing most of the time!!
The image quality of the PS2 is affected by this, regardless of what it was technically doing.
How?
 
Drawing a single scene in multiple passes is not the same as drawing the same scene in hardware supported tiles to eliminate overdraw.

The PS2's aliasing issues, low texture resolution, and generally muddy image quality are well known. It was rendering at higher colour counts for the sparklers and lighting, not for the textures and overall scene. Even the lack of mipmapping would have to be accounted for in a comparison like this thread suggests. The Neon 250 in particular has special mipmapping driver settings that improve the image quality and performance. Dreamcast launch games and homebrew games have poor stepping in the bilinear filter.
 
Drawing a single scene in multiple passes is not the same as drawing the same scene in hardware supported tiles to eliminate overdraw.
Of course not. It's a completely different approach. Why are you now talking about multipass rendering in a response about tiled rendering?? 1) Drawing in tiles is not the same as drawing multiple times. 2) Using multipass rendering doesn't produce 'low quality textures' or 'muddy image quality'.
The PS2's aliasing issues, low texture resolution, and generally muddy image quality is well known.
That has nothing whatsoever to do with multipass rendering. You could have a multipass renderer with great IQ and texture res and high def output if you wanted. PS2 had all sorts of issues, not least the fact it was horrifically documented! e.g. Mip-maps could be used - there was nothing about multipass or tiled rendering preventing mipmapping - but devs didn't.

 
I have been saying the entire time that the PS2 is limited to 224 lines. If the Snowblind games and Dreisbach got around this by rendering multiple times, it is the exception, not the rule. Are you saying that the Snowblind games are not rendering a single frame in multiple passes? I thought this was just an understood thing due to the VRAM bandwidth.
 
I have been saying the entire time that the PS2 is limited to 224 lines.
And you're wrong. You have so many statements from PS2 developers and points of evidence that PS2's internal framebuffer was typically 640 x 480 and even above.
Are you saying that the Snowblind games are not rendering a single frame in multiple passes? I thought this was just an understood thing due to the VRAM bandwidth.
All PS2 games render in multiple passes! That's how the PS2 hardware works!! The Snowblind Engine (most likely) renders a tile-based buffer in multiple passes. GTA and a lot of PS2 games render a single 640x480 buffer in multiple passes. Some PS2 games render a lower than 640x480 framebuffer in multiple passes.

Are you struggling with the distinction between multipass rendering (drawing over and over) and tiled rendering (chopping the scene into pieces and assembling them into the final image)? The two are completely different and unrelated.
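The distinction can be sketched in a few lines of toy code: both approaches fill the same framebuffer, but the tiled path shades each pixel exactly once inside its own tile, while the multipass path walks the whole frame once per layer. This is a conceptual illustration only; `shade` is a stand-in for a real rasterizer.

```python
WIDTH, HEIGHT = 8, 4   # toy framebuffer

def render_single(shade):
    """Ordinary single-buffer rendering: shade every pixel once."""
    return [[shade(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]

def render_tiled(shade, tile_w=4, tile_h=2):
    """Tiled: each pixel is shaded exactly once, within its own tile,
    and the finished tiles are copied into the final buffer."""
    fb = [[0] * WIDTH for _ in range(HEIGHT)]
    for ty in range(0, HEIGHT, tile_h):
        for tx in range(0, WIDTH, tile_w):
            for y in range(ty, ty + tile_h):
                for x in range(tx, tx + tile_w):
                    fb[y][x] = shade(x, y)
    return fb

def render_multipass(layers):
    """Multipass: the whole frame is drawn once per layer and the passes
    accumulate (simple addition standing in for blending)."""
    fb = [[0] * WIDTH for _ in range(HEIGHT)]
    for shade in layers:               # e.g. base colour, then lighting
        for y in range(HEIGHT):
            for x in range(WIDTH):
                fb[y][x] += shade(x, y)
    return fb

checker = lambda x, y: (x + y) % 2
print(render_tiled(checker) == render_single(checker))   # True: identical output
```

The point of the last line: tiling changes nothing about the final image, whereas multipass deliberately layers multiple drawings on top of each other.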
 

Actually most games render 640 x 448 or some weird 512x448 on PS2. But yeah, it's not too bad. There are weirder resolutions, if you ask me, which is probably why some stuff just looks flat-out strange.

Actually, the Snowblind engine technique has already been dissected multiple times because it's a pain for PS2 emulators. It just seems to be supersampling: the back buffer is rendered huge and the display buffer has the actual resolution after it's been shrunk. There are two explanations, but it seems it might render two halves to make a full image. A far cry from tiles like some people describe here. Impressive nonetheless, even if it just seems to be a huge render-to-texture that's then downsized.

Explanation 1:
Improved no interlacing patches for Champions of Norrath and Champions: Return to Arms. Let's use an internal game engine feature instead. Read the comments included in the .pnach file for technical details. Let's see how the 2x2 SSAA frame buffers are set:
NTSC: 1280x448 back buffer -> 640x224 front buffer
PAL: 1024x512 back buffer -> 512x255 front buffer
If we disable the interlacing, we will get a lowres image. But the game engine does support a 30 fps performance mode with a 2xSSAA, instead of 2x2:
NTSC: 1280x448 back buffer -> 640x448 front buffer
PAL: 1024x512 back buffer -> 512x509 front buffer
We are going to use this mode, but with 60 frames per second!
Few personal remarks:
  1. Snowblind Engine is a technical marvel, but needs a decent art direction to look phenomenal. And only on a CRT, unfortunately, as it was designed for interlaced screens; otherwise the image quality does really, really suck. You need to force bilinear filtering to fix the sprites (a hacky way indeed) and fiddle with Round Sprite (Half) and Half-Pixel Offset (Special) settings to reduce the awful lines when uprendering. Blending Accuracy needs to be set to High for more accurate colours.
  2. Regular no-interlacing codes are bad, since the games are designed with the interlacing in mind. Adaptive deinterlacing is the way to go; losing half of the pixels is not.
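The 2xSSAA performance mode described above (1280x448 back buffer resolved to a 640x448 front buffer) amounts to a horizontal box filter over pixel pairs. A toy sketch of that resolve step, with a plain integer scanline standing in for real pixel data:

```python
def resolve_2x_horizontal(scanline):
    """2x horizontal SSAA resolve: average each pair of back-buffer
    pixels into one front-buffer pixel."""
    return [(scanline[2 * i] + scanline[2 * i + 1]) // 2
            for i in range(len(scanline) // 2)]

back = list(range(1280))             # stand-in for one 1280-wide scanline
front = resolve_2x_horizontal(back)
print(len(front))                    # 640, the front-buffer width
```

The 2x2 mode listed first would do the same again vertically, halving 448 back-buffer lines into 224 front-buffer lines per field.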

Explanation 2:
finally got to the bottom of this - so I know why it happens, but not really how to fix it :)

BGDA renders everything to a 1280 x 1024 buffer located at 0xA0000 in GS memory. It then transfers that to the display buffer at address 0 for display each frame. The display buffer is 640 x 512 on the pal version.
Now, the problem it has is that a source texture has a maximum width of 1024 pixels and it has a source of 1280 ... so it uses a trick.
It sets a TEX0 at 0xA0000 and transfers a 640 x 1024 section of that to (in the pal version) a 320 x 512 area in the display buffer. This is the 'half screen' that we see. To do the right hand side of the screen, it sets TEX0 to 0xB4000 and does the same thing only shifted right by half a screen in the destination. 0xB4000 is pixel 640 of the texture at 0xA0000.
In software mode this is fine. For hardware mode though, we see the texture at 0xb4000 as a different texture rather than simply an offset into 0xA0000 ... so we get a blank texture for the right hand side and hence the bug.

For those thinking that 0xB4000 is not half a scan into 0xA0000, you need to remember that the GS memory is not linear but swizzled in square blocks of pixels.

Ian
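The trick Ian describes can be reduced to a small sketch: a 1280-wide source exceeds the 1024-pixel texture width limit, so the copy is split into two half-width transfers, the second one starting from a shifted base. This is a toy model of the transfer bookkeeping, not real GS register programming, and the halving strategy is an assumption matching the two-half behaviour described above.

```python
MAX_TEX_WIDTH = 1024   # GS limit: a source texture can be at most 1024 wide

def split_transfers(src_width, dst_width):
    """Copy a buffer wider than MAX_TEX_WIDTH by splitting it into two
    halves, the way BGDA shifts its TEX0 base to reach pixel 640 of the
    back buffer. Returns (src_x, dst_x, chunk_width) per transfer."""
    chunk = src_width // 2 if src_width > MAX_TEX_WIDTH else src_width
    transfers = []
    for src_x in range(0, src_width, chunk):
        dst_x = src_x * dst_width // src_width   # scale into display buffer
        transfers.append((src_x, dst_x, chunk))
    return transfers

print(split_transfers(1280, 640))
# [(0, 0, 640), (640, 320, 640)]: two halves, the second landing at
# display column 320 - the 'shifted right by half a screen' step above
```

A hardware-mode emulator that treats the shifted base as a brand-new texture rather than an offset into the same buffer renders the second transfer blank, which is exactly the bug described.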
 
Actually most games render 640 x 448 or some weird 512x448 on PS2. But yeah it's not too bad .
Gets weirder with PAL.
There's two explanations but it just seems it might render 2 halves to make a full image. Far cry from tiles like some ppl describe here.
The only talk of tiles was exactly this, simply rendering a scene in more than one piece. Although sheathx013 is claiming PS2 is doing this for 'every' game and rendering a 640x224 buffer twice because PS2 literally can't render more than 224 lines.
Why are we still having this discussion?
I'm not even sure what 'this' is! Hopefully sheathx013 can finally see PS2 can render greater than 224 lines after a lot of evidence and that 224 lines is not the upper limit of PS2 resolution for DC to be matched against.
Basically, it turns out Dreamcast could.
The more I'm seeing, the more I'm thinking the real challenge is a Baldur's Gate: Dark Alliance port! :mrgreen:
 
Well, going by his logic, and considering that the DC was using a TBDR, which renders the scene in multiple (hundreds?) of pieces, not just two, someone could just as well claim that the DC could only render super low res, making even the Saturn an HD machine in comparison. Can he simply accept that the PS2 was designed differently and thus required a different method to construct a 640x480 or 640x448 scene, which isn't simply an upscale?
 
...and thus required a different method to construct an 640x480 or 640x448 scene which isn't simply an upscale?
It didn't though! :runaway:

Tiled rendering ended up a theoretical point, that even if only 224 lines would fit in EDRAM - which was sheathx013's assertion of a limit in the console - that wouldn't prevent a full 640x448 frame being constructed and presented. However this 224 line limit does not exist. As I linked to in my reference to earlier conversations on this board with real PS2 devs talking about VRAM allocations, PS2's EDRAM was plenty enough for full-frame 640x480 buffers.

Both consoles produce full frame buffers. DC does it by drawing 300 separate pieces of 32x32 pixels to make a 640x480 buffer. PS2 does it by drawing many 640x480 (or 448) images on top of each other to produce the final full screen buffer.
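The DC tile count quoted above follows directly from the tile size; assuming the 32x32 tiles mentioned earlier in the thread, the arithmetic is just:

```python
def tile_count(width, height, tile=32):
    """Tiles the frame is binned into, assuming 32x32 tiles and
    rounding up if the dimensions aren't tile-aligned."""
    return (-(-width // tile)) * (-(-height // tile))

print(tile_count(640, 480))   # 20 * 15 = 300 tiles for a 640x480 frame
```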
 
The source I provided and others here have stated that the PS2 does not render at 640x480. In my experience of PS2 games, especially prior to 2003, this meant 320x224 or 640x448 interlaced.

The Tile Based Renderer was not designed to help render a full framebuffer because it couldn't do it otherwise. This is pure and simple fact obfuscation. The PS2, Nvidia cards and even the Xbox had a "brute force" method of dealing with overdraw that the PowerVR cards resolved with SD-RAM. Eliminating overdraw on the graphics side does not equate to what you have admitted the PS2 does. It is simply not rendering polygons behind other polygons.

I have no doubt that emulators render the PS2's oddly low resolutions higher. This is why PS2 game footage online, by far (not entirely!), is from emulators not the real hardware. Video capture solutions couldn't handle the way PS1 and PS2 shifted resolutions from one scene to another. Crazy Taxi 2 does this as well, killing my HD PVR unless I have it hooked up to an upscaler. Either way, it has been evident to me for years that anything above 640x448 interlaced on PS2 is upscaled and I have wondered whether 640 itself is even upscaled. If I'm wrong about this so be it.

So instead of twisting my statements and making claims that the PS2 could and did in most games render at 640x480, can this be proven beyond a reasonable doubt?
 