Render resolutions of 6th gen consoles - PS2 couldn't render more than 224 lines? *spawn

There have been actual developers on this very forum who have made countless statements about PS2 being able to render 640x480.

Not average people, or fanboys, but actual developers with actual released games, and actual experience developing on PS2.

Why are you still not accepting that?

EDIT: Gran Turismo 4's "1080i" mode completely debunks your claims on PS2's limitations.

It renders at 576x480 per frame.

 
The source I provided and others here have stated that the PS2 does not render at 640x480. In my experience of PS2 games, especially prior to 2003, this meant 320x224 or 640x448 interlaced.
You're confusing 'render' with 'display'. As I've already said, at 30 fps there's no difference between interlaced or progressive. With an interlaced display, you have a 640x448 display buffer and show alternate lines every 60th of a second.
The Tile Based Renderer was not designed to help render a full framebuffer because it couldn't do it otherwise. This is pure and simple fact obfuscation. The PS2, Nvidia cards and even the Xbox had a "brute force" method of dealing with overdraw that the PowerVR cards resolved with SD-RAM.
nVidia et al did not have a 'brute force' method of overdraw. Only PS2 used 'brute force overdraw'. Other GPUs had other strategies, notably combining effects in a single pass via shaders and multiple texture units.
Eliminating overdraw on the graphics side does not equate to what you have admitted the PS2 does. It is simply not rendering polygons behind other polygons.
I never said it was. This is one of the craziest discussions in a long time!
I have no doubt that emulators render the PS2's oddly low resolutions higher. This is why PS2 game footage online, by far (not entirely!), is from emulators not the real hardware. Video capture solutions couldn't handle the way PS1 and PS2 shifted resolutions from one scene to another. Crazy Taxi 2 does this as well, killing my HD PVR unless I have it hooked up to an upscaler. Either way, it has been evident to me for years that anything above 640x448 interlaced on PS2 is upscaled and I have wondered whether 640 itself is even upscaled. If I'm wrong about this so be it.
You are. Stop relying on what you think your eyes saw and instead go by the technical knowledge of the people who actually made games on the hardware. ;)
So instead of twisting my statements
There's no twisting. I'm struggling to follow the thread and get to the final consensus.
and making claims that the PS2 could and did in most games render at 640x480, can this be proven beyond a reasonable doubt?
Probably not to you because the evidence is already there. I refer you to my earlier posts from previous talks on this board. Here's another one: https://forum.beyond3d.com/threads/standard-ps2-framebuffer-size.2571/

Just look up everything Fafalada and ERP and Archie4oz et al wrote back then. They went into detail. I see that post history in B3D's search doesn't go back as far as the content, so it's worth Googling instead.

Those of us who were a part of those conversations have a clearer understanding of PS2's operation than you, whose primary interest is knowing about DC. So we learn from you, and you learn from us, right?
 
The source I provided and others here have stated that the PS2 does not render at 640x480. In my experience of PS2 games, especially prior to 2003, this meant 320x224 or 640x448 interlaced.
Looking up some tech docs, the GS seems to support a maximum rendering resolution of 2048x2048? It doesn't have the EDRAM to do that, but it should be possible to do render-to-texture for a 128x2048 texture if you wanted to, for some strange reason. I see no reason a game couldn't render to a 640x480 pixel frame buffer. The developer might want to use something lower because their texture streaming system is weak, but that's a developer's choice, rather than a hard technical limitation.

The trick where you render a 640x240 frame buffer and display it as a higher resolution 640x480 interlaced image only works if you hit a consistent 60 FPS, because you have to render a new frame buffer on every refresh (with the viewport offset half a pixel up/down to match the displayed field). Games with inconsistent 60 FPS, or running at 30 FPS, would have full height frame buffers. All PS2 games with 480p support would have full height frame buffers as well.
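To put rough numbers on that trade-off, here's a back-of-envelope sketch. The 32-bit colour, double-buffered colour, and 32-bit Z figures are assumptions for illustration only; real games mixed formats (16-bit colour, 24-bit Z, and so on):

```python
# Illustrative EDRAM budget for common PS2 framebuffer setups.
# Assumed: 4 bytes/pixel colour, double-buffered, plus 4 bytes/pixel Z.
EDRAM = 4 * 1024 * 1024  # 4 MB of GS embedded memory

def fb_bytes(width, height, colour_bpp=4, z_bpp=4, colour_buffers=2):
    """Bytes consumed by the colour buffer(s) plus one Z buffer."""
    return width * height * (colour_bpp * colour_buffers + z_bpp)

for w, h, label in [(640, 240, "half-height field buffer"),
                    (640, 448, "full-height interlaced"),
                    (640, 480, "full 480p buffer")]:
    used = fb_bytes(w, h)
    print(f"{w}x{h} ({label}): {used // 1024} KB used, "
          f"{(EDRAM - used) // 1024} KB left for textures")
```

Under these assumptions the half-height buffer leaves roughly 2.2 MB of EDRAM for textures versus about 0.5 MB for a full 480p setup, which is the VRAM saving the interlaced-field trick buys.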
 
i remember one of the cleanest PS2 games I had at the time was ghosthunter, wonder what res it was running at, and seemed to have good AA too.
 
@sheathx013 : You reference that Sony "where are we now" doc. I've just looked at it. Which part makes you think PS2 can only render 224 lines?

We have
95% were using Full Height buffers

and
3.6M pixels output. A full screen is about 0.3M pixels. 640x480 is 0.3M.

I can't see anything at all that alludes to half-height framebuffers.
 
Another thing I've just thought of. Some of the PS2 resolutions seem oddly lower than 640x480, particularly the much-touted 448 lines. But we're forgetting that CRT TVs (not so much monitors) had overscan. The guaranteed viewable area was about 93% of the full image area, meaning drawing outside that meant the user might never see what you were rendering.

93% of 480 lines is 446.4, and 93% of 640 columns is 595. So a render resolution of 592 x 448 (each multiples of 16) or somesuch will be a full screen as far as the viewer is concerned and be an intelligent optimisation. If these resolutions became more prevalent in later titles, it'd show the devs were learning to optimise for their displays.
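That arithmetic can be written out as a small helper (a hypothetical function; the 93% visible area and 16-pixel alignment are the assumptions above):

```python
# Overscan-safe render size: scale the full frame by the assumed
# guaranteed-viewable fraction, then snap to a multiple of 16.
def safe_res(width=640, height=480, visible=0.93, align=16):
    w = round(width * visible / align) * align
    h = round(height * visible / align) * align
    return w, h

print(safe_res())  # -> (592, 448), matching the 592x448 figure above
```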
 
PS2 outputs something like 704x480 but renders at 640x448 (or 512x448 and then stretches). This footage isn't perfect but it shows what I mentioned.
 
You're confusing 'render' with 'display'. As I've already said, at 30 fps there's no difference between interlaced or progressive. With an interlaced display, you have a 640x448 display buffer and show alternate lines every 60th of a second.

That was exactly what I was trying to relate. If the PS2 isn't rendering in 224 lines, its 448 lines don't look like any single-frame render I have seen. This is clearly an advantage for the PS2, but it does not help the image quality in my opinion. Smoke, trails, sparks, even washed-out textures used as light sourcing, don't change the aliasing that I think is obvious. They are advantages, they are hardware features, even some that the Dreamcast wouldn't be able to handle in a straight port.

The straight-port concept in the other thread was what I was responding to. My main point was that the Dreamcast might be rendering a PC port of a PS2 game at 640x480 with 24-bit colour while the PS2 game is doing something else, with more effects.

nVidia et al did not have a 'brute force' method of overdraw. Only PS2 used 'brute force overdraw'. Other GPUs had other strategies, notably combining effects in a single pass via shaders and multiple texture units.

I never said it was. This is one of the craziest discussions in a long time!

Hardware T&L was Nvidia and ATI's "brute force method". From what I have seen it worked, though with lower image quality than the Matrox and PowerVR cards. Eventually T&L on the card took over and so it is very difficult to show later games on earlier cards on real hardware.

You are. Stop relying on what you think your eyes saw and instead go by the technical knowledge of the people who actually made games on the hardware. ;)

There's no twisting. I'm struggling to follow the thread and get to the final consensus.

The developer quotes are all over the place are they not? Then we have some claiming the PS2 could render internally at 1080i. It is a quagmire.

Probably not to you because the evidence is already there. I refer you to my earlier posts from previous talks on this board. Here's another one: https://forum.beyond3d.com/threads/standard-ps2-framebuffer-size.2571/

Just look up everything Fafalada and ERP and Archie4oz et al wrote back then. They went into detail. I see that post history in B3D's search doesn't go back as far as the content, so it's worth Googling instead.

Those of us who were a part of those conversations have a clearer understanding of PS2's operation than you, whose primary interest is knowing about DC. So we learn from you, and you learn from us, right?

I will read these as I have time. Thank you.
 
That was exactly what I was trying to relate. If the PS2 isn't rendering in 224 lines, its 448 lines don't look like any single-frame render I have seen.
A PS2 framebuffer is literally no different to a FB on any other hardware. All use the same 24 bit colour palette.
This is clearly an advantage for the PS2, but it does not help the image quality in my opinion.
Issues with IQ would come from elsewhere, like a lack of antialiasing, low-quality textures, or even the video output hardware. Or maybe softness if the 448-line image is being stretched to 480 lines. Unlike monitors, TVs didn't have pixel-perfect displays and PS2 rendering took that into account.
Hardware T&L was Nvidia and ATI's "brute force method".
It seems you've decided TBDR is the only 'right' way to render and other strategies are 'brute force'? The only console I've ever heard referred to as 'brute force' rendering is PS2 because it was very simple and just did what it did powerfully, like a strongman lifting a heavy load instead of using pulleys. PS2 was also very versatile in the hands of clever developers though. Its T&L units, the VUs, were more programmable.
From what I have seen it worked, though with lower image quality than the Matrox and PowerVR cards. Eventually T&L on the card took over and so it is very difficult to show later games on earlier cards on real hardware.
Again, the only IQ thing I can think of here is AA. T&L on the card? I don't understand where that fits into render quality. T&L hardware just speeds up how quickly you can do the maths, but the maths is the same no matter where you do it. The same lighting model will produce the same results whether calculated on CPU, accelerator, functional unit, or shader.
The developer quotes are all over the place are they not?
They're spread out, but consistent in message.
 
I am only saying the PS2 renders differently. I am not stating its differences are inferior, except that rendering resolution was established, here and by Digital Foundry, as all-important.

These platforms rendered graphics differently, and so a port to the Dreamcast, for example, might look better but have fewer effects. The hardware effects of the PowerVR2 were not explored as thoroughly as the PS2's were, particularly after 2003.

I would like to see what those limits of PowerVR 2 actually are. I think the PS2 has a well deserved reputation hardware wise, but it is still misunderstood to this day.
 
These platforms rendered graphics differently, and so a port to the Dreamcast, for example, might look better but have fewer effects.

PS2 has proven it can do 480p in high quality games at 60fps, and said games look better than anything on DC. So I'm not sure I understand that comment.

If the PS2 game was made within the first two years of the PS2's release, it likely wouldn't have a progressive scan option, but it also wouldn't be pushing the system.

If we're talking about a port of a game from the end of PS2's life, it would have a progressive scan option, so a DC port wouldn't look better.

You also need to factor in VRAM, as developers started to push lighting and geometry on DC they might have sacrificed progressive scan to gain a bit more VRAM to use for other things.
 
I am only saying the PS2 renders differently. I am not stating its differences are inferior, except that rendering resolution was established, here and by Digital Foundry, as all-important.
Rendering differently doesn't inherently mean different pixels.
These platforms rendered graphics differently, and so a port to the Dreamcast, for example, might look better but have fewer effects.
Look better in what way? Different aspects of graphics are affected by the choice of rendering method. E.g. deferred rendering might have less AA than forward rendering but more dynamic light sources. TBDR might have better edge quality from AA but fewer particle effects. When it comes to render resolutions, 640x480 on DC will be the same as 640x480 on PS2. Any differences in what you see are down to variation in textures, lighting, blending, and potentially colour depth.

You need to be a lot more specific about what you are seeing that's better/worse on which platform. But that's more for the other thread. This thread is about what resolution games were rendered at, plain and simple, to keep that subtopic out of the retro talk.
The hardware effects of the PowerVR2 were not explored as thoroughly as the PS2's were, particularly after 2003.
What has that got to do with rendering resolutions?

PS2 has proven it can do 480p in high quality games at 60fps, and said games look better than anything on DC.
"Look better" is not objective. Ways in which games 'look better' need to be qualified. For some people, more particles is better than high image quality, and vice versa for others.

But also, this is a discussion just on resolution, not rendering output. It really should be ended now with everyone appreciating there was no 224 line limit on PS2 and the console could and did render 640x480 games and other resolutions too.
 
Managed to find this copy and pasted in a forum.

The main website is no longer available as it's closed down.

By: James Mielke April 2, 2002 6:46 PM PST

While most people are content to plug their video game consoles in with the standard RCA cables (or, god forbid, with RF adapters), some swing to the higher end of audio-visual mayhem and prefer their games in as high a resolution as they can get. This usually means using either S-video, or optimally, component cables. Then, for folks who have the luxury of watching their movies on a progressive scan-enabled television, you have to have a progressive scan-enabled DVD player.

Take a look at this story for more on the specifics of progressive scan technology

Since the PS2 is a DVD player, you may have wondered if your PS2 can do progressive scan output. While the PS2 isn't currently equipped to do so, we're still glad to confirm that, yes, it can.

To get the full poop, we contacted Seth Luisi, producer on SOCOM: Navy Seals, over at Sony to see what he had to say on the subject. Word up, yo!

"The PS2 can indeed output in progressive scan and the new Sony Libraries that were just released allow PS2 developers to do Anamorphic Widescreen Progressive Scan output in their games. Actually, Tekken4, which was just released in Japan, is the first PS2 game with progressive scan output.

"We are actually considering adding progressive scan output to SOCOM but we may not be able to due to our tight schedule. In order to do progressive scan output you need to run with a full size frame buffer, not a half size frame buffer that is interlaced, which is what a lot of PS2 games use in order to save VRAM. In order to realistically use a full size frame buffer you have to use dynamic texturing and store a lot of your textures in main RAM, which is the method that SOCOM and most advanced 3rd generation PS2 games will use.

"Dynamic texturing has many pluses besides just allowing a full size frame buffer and progressive scan output. With dynamic texturing, the amount of VRAM in the PS2 does not matter because you are only using it as a texture cache, as it was meant to be used. We use over 4MB of textures for our character models in SOCOM alone, even though the PS2 only has 4MB of VRAM. The textures are swapped into and out of VRAM every game frame (1/60th or 1/30th of a second) which the PS2 can do since it has such fast memory access.

"Regarding DVD playback, I am not certain it will be included in the future. However, it is definitely possible and since SCEI has made Anamorphic Widescreen Progressive Scan output available for games it only makes sense to add it to the next release of the DVD Drivers."
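As a sanity check on the per-frame swapping Luisi describes: refilling the whole 4 MB cache every 60Hz frame is well within the console's transfer budget. The 1.2 GB/s figure for the main-RAM-to-GS path (the GIF) is an assumption here, though it's the number commonly quoted:

```python
# Bandwidth needed to refill the entire 4 MB texture cache every frame.
GIF_BANDWIDTH = 1.2e9            # bytes/s, assumed peak main RAM -> GS path
cache = 4 * 1024 * 1024          # whole VRAM texture area swapped per frame
fps = 60

needed = cache * fps             # bytes/s required for full swaps
print(f"required: {needed / 1e6:.0f} MB/s "
      f"({100 * needed / GIF_BANDWIDTH:.0f}% of the assumed GIF peak)")
# required: 252 MB/s (21% of the assumed GIF peak)
```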

So it appears that PS2 games released within the first 18-24 months after the PS2's launch couldn't have had 480p support, as the SDK didn't support it yet.

This means that even if PS2 could have offered 480p in the DC ports it received, the SDK at the time simply meant it wasn't possible to offer it.
 
I am trying to understand how the main character, who was always displayed on screen, had more than 4MB of textures. How did texture swapping work on that?
 
Imagine that for things like characters that are always on screen, their textures are always in RAM.

It's the stuff that changes that would likely be streamed in.

But remember that developers on here have said PS2 was never VRAM limited when it came to textures.
 
I am trying to understand how the main character, who was always displayed on screen, had more than 4MB of textures. How did texture swapping work on that?
PS2's EDRAM was effectively a scratchpad. You'd keep the render targets in there, and then load in whatever other assets were needed for the immediate drawing. Exactly like streaming assets from SSD to RAM for use, only you streamed from RAM to EDRAM.

One of the threads I saw, I think I linked to above, had the old PS2 devs talking about how PS2 was never handicapped by texture RAM because it could stream all the textures you could possibly use.
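A minimal sketch of that streaming pattern in pseudocode-ish Python; every name here is hypothetical, and a real engine would issue batched DMA uploads to the GS rather than dictionary bookkeeping:

```python
# Treat EDRAM's texture area as a cache refilled as the draw list is walked.
def draw_frame(draw_list, texture_budget=2 * 1024 * 1024):
    """draw_list: list of batches, each a list of (texture_id, size) pairs.
    Returns total bytes 'uploaded' from main RAM into the EDRAM window."""
    resident = set()       # texture ids currently in EDRAM
    used = uploaded = 0
    for batch in draw_list:
        for tex_id, size in batch:
            if tex_id not in resident:
                if used + size > texture_budget:
                    resident.clear()   # evict: EDRAM is only a cache
                    used = 0
                resident.add(tex_id)   # stands in for a DMA upload
                used += size
                uploaded += size
        # ... issue GS draw commands for this batch here ...
    return uploaded
```

Because textures are re-uploaded as needed each frame, the total texture set can exceed the 4 MB of VRAM; only the working set for the batch currently being drawn has to fit at once.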
 
PS2's EDRAM was effectively a scratchpad. You'd keep the render targets in there, and then load in whatever other assets were needed for the immediate drawing. Exactly like streaming assets from SSD to RAM for use, only you streamed from RAM to EDRAM.

One of the threads I saw, I think I linked to above, had the old PS2 devs talking about how PS2 was never handicapped by texture RAM because it could stream all the textures you could possibly use.
I am not sure I understand.
How I understood it was that the VRAM could stream textures in and out super fast. I.e. as you traversed the environment, it would stream in the textures required for the particular scene and take out the textures no longer required for that particular scene.
If one object, though, is constantly visible and has more than 4MB of textures, which is the size of VRAM, I don't get how that works. It is loaded and displayed constantly. It can't be streaming the same 8MB of textures in and out when they're constantly on screen, i.e. your character plus all the other textures, can it?
 
I am not sure I understand.
How I understood it was that the VRAM could stream textures in and out super fast. I.e. as you traversed the environment, it would stream in the textures required for the particular scene and take out the textures no longer required for that particular scene.
If one object, though, is constantly visible and has more than 4MB of textures, which is the size of VRAM, I don't get how that works.

It doesn't work on the 4MB at the same time.
 