PS1 / PS2 resolution and framerate

sribo

Hi

Thinking about the best-looking fully 3D games (to me) on PS1 and PS2, I realized that they are often at a higher resolution or at 60 fps. On PS1, for example, that's the case for Tomb Raider III-V, which run at 512x240 with fairly open areas (unlike the Crash Bandicoot games or fighting games), and on PS2 many are 60 fps, such as Onimusha 3, Shadow of Rome, and even Shadow of the Colossus (despite a less stable framerate).
I don't really understand these choices, since in theory they don't allow the best possible graphics. Maybe the higher resolution on PS1 was used to make the jaggies less obvious, and the hardware couldn't be pushed significantly further even without raising the resolution? The 60 fps on PS2 obviously makes more sense, but what I understand less is why there are few or no 30 fps games as impressive as these 60 fps games, and the same goes for PS1 with fairly open games at 320x240. Unless resolution and framerate are simply among the strengths of the PS1 and PS2?
Sorry if these questions seem too basic for you guys, but it seems strange to me, and it appears to be less true on other consoles (except maybe the GameCube with the Star Wars games and Metroid Prime, though its latest best-looking games were mostly 30 fps).
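As a rough sketch of the tradeoff being asked about here, some back-of-the-envelope arithmetic comparing raw pixel throughput for a few mode/framerate combinations (illustrative only; real rendering cost also depends on geometry, texturing, overdraw and memory bandwidth):

```python
# Raw pixel throughput for a few PS1/PS2-era mode/framerate combinations.
# Illustrative arithmetic only, not a measure of actual hardware load.
modes = [
    ("320x240 @ 30 fps", 320, 240, 30),
    ("512x240 @ 30 fps", 512, 240, 30),   # Tomb Raider III-V style mode
    ("320x240 @ 60 fps", 320, 240, 60),
    ("640x448 @ 60 fps", 640, 448, 60),   # a common PS2 interlaced target
]

for name, w, h, fps in modes:
    print(f"{name:18s} -> {w * h * fps / 1e6:5.1f} Mpixels/s")
```

The point being that a higher resolution at 30 fps and a lower resolution at 60 fps can land on broadly similar pixel budgets.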
 
Shadow of the Colossus has an unlocked framerate if I recall. I wouldn't call it a 60 fps game.
 
We have a recent thread about PS2 resolutions. I hope you find the discussion in it to be informative and educational.

 
I was pondering, probably because of the rise of graphics options and performance settings in the modern console generations, whether turning down or off some of the edge AA and the VI blend, and even having multiple resolution options, would have been a worthwhile avenue for the N64. I really love the N64 (and have been running one over UltraHDMI in my most recent playing sessions, albeit a few years ago).

Without edge AA, and with a resolution cut, perhaps 12 - 15fps becomes 20 - 30fps a bit more often. Given the modest texel counts in most games, I half wonder whether lower output resolution, in the era of CRT, would make that much difference.

The PS1 and Saturn generally seemed to manage better in terms of frame rates and resolutions (as I saw it in a PAL region), though without the N64 'quality of pixel' (approximated bilinear, high precision 'stable' geometry, perspective correction etc.).

Apologies, in retrospect, too tangential to the thread.
 
Shadow of the Colossus has an unlocked framerate if I recall. I wouldn't call it a 60 fps game.
Yeah, I think it's especially during the colossus fights that the framerate drops; during horseback travel, on the other hand, it normally stays close to 60 fps (according to 60 fps videos I captured, in which I didn't see duplicate frames in VirtualDub).
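For anyone wanting to repeat that kind of check on their own capture, a minimal sketch of the idea (assuming OpenCV and NumPy are installed; "capture.avi" is just a placeholder file name, not the actual capture used):

```python
# Count near-identical consecutive frames in a 60 fps capture; a high count
# means the game was rendering below the capture rate.
import cv2
import numpy as np

cap = cv2.VideoCapture("capture.avi")
prev = None
duplicates = total = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    total += 1
    if prev is not None and np.mean(cv2.absdiff(frame, prev)) < 1.0:
        duplicates += 1   # frame is (almost) identical to the previous one
    prev = frame

cap.release()
print(f"{duplicates} duplicated frames out of {total}")
```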

Gran Turismo Hi-Fi, and the Ridge Racer remaster included in the Ridge Racer Type 4 deluxe edition.
Racing games. I put them a bit in the same category as Crash Bandicoot and fighting games: somewhat biased cases to really be considered technical showcases (even if they really push the hardware).

We have a recent thread about PS2 resolutions. I hope you find the discussion in it to be informative and educational.

Great, I'll take a look.

Indeed. It was mostly a 20 fps game, and 15 fps during many colossus fights. :)

and about 17% less fps in PAL land...
The PAL version lets you choose 60 Hz. As for the framerate (according to videos I captured), it's very close to 60 fps outside the colossus fights (but they were short videos in two areas, so I'm not 100% sure).

I was pondering, probably because of the rise of graphics options and performance settings in the modern console generations, whether turning down or off some of the edge AA and the VI blend, and even having multiple resolution options, would have been a worthwhile avenue for the N64. I really love the N64 (and have been running one over UltraHDMI in my most recent playing sessions, albeit a few years ago).

Without edge AA, and with a resolution cut, perhaps 12 - 15fps becomes 20 - 30fps a bit more often. Given the modest texel counts in most games, I half wonder whether lower output resolution, in the era of CRT, would make that much difference.

The PS1 and Saturn generally seemed to manage better in terms of frame rates and resolutions (as I saw it in a PAL region), though without the N64 'quality of pixel' (approximated bilinear, high precision 'stable' geometry, perspective correction etc.).

Apologies, in retrospect, too tangential to the thread.
I think the framerate issues on N64 were due to its memory speed or something like that (?), which would explain why it's mostly frame drops. In any case it's clear that disabling some of its features would have helped, but Nintendo was too strict at the time (maybe we can at least improve the performance of games via ROM hacks).

As for the better framerates and resolutions on PS1 and Saturn, I suppose that's also down to memory, because apart from that I don't see how the N64 was significantly worse than the others.


It's on topic: knowing which capabilities account for these differences between the consoles, and to what extent they favored these choices.
 
I was pondering, probably because of the rise of graphics options and performance settings in the modern console generations, whether turning down or off some of the edge AA and the VI blend, and even having multiple resolution options, would have been a worthwhile avenue for the N64. I really love the N64 (and have been running one over UltraHDMI in my most recent playing sessions, albeit a few years ago).

Without edge AA, and with a resolution cut, perhaps 12 - 15fps becomes 20 - 30fps a bit more often. Given the modest texel counts in most games, I half wonder whether lower output resolution, in the era of CRT, would make that much difference.

The PS1 and Saturn generally seemed to manage better in terms of frame rates and resolutions (as I saw it in a PAL region), though without the N64 'quality of pixel' (approximated bilinear, high precision 'stable' geometry, perspective correction etc.).

Apologies, in retrospect, too tangential to the thread.
While he doesn't talk too much about framerate, long-lapsed forum member ERP has posted some performance-related information about the N64, covering both shipped games and non-shipped code. Do a search for "N64" by member ERP and you'll get a goldmine of information. Here's a nugget:
I've said this on here before.

Using the original Fast3D graphics code you'd be lucky to hit 100K polygons on an N64.

Using the Turbo3D code you'd get about 500-600K PS1 quality polygons (Nintendo never allowed this uCode in a shipping game).

If you are looking at pure transform rate it was possible to do significantly more than that. However, the uCode was also responsible for triangle setup, and that always dwarfed the transform time.

The last couple of N64 games I worked on used custom uCode, which distributed the work between the processor and the RSP somewhat differently than any of the SGI uCode, and would pretty easily push >100K on-screen polygons.

If we're talking about graphics and PS1-quality polygons, there really is no comparison: with the exception of the 4K texture cache, an N64 is better in every measurable way. And a damn sight harder to get the performance out of.
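To put those throughput figures in per-frame terms, a quick bit of arithmetic (illustrative only, since the quoted rates are rough to begin with):

```python
# Per-frame polygon budgets implied by the rough throughput figures above.
throughputs = {
    "Fast3D (~100K polys/s)": 100_000,
    "Turbo3D (~550K polys/s)": 550_000,   # midpoint of the quoted 500-600K
}

for name, per_second in throughputs.items():
    for fps in (20, 30, 60):
        print(f"{name:24s} at {fps} fps -> ~{per_second // fps:,} polys/frame")
```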
 
While he doesn't talk too much about framerate, long-lapsed forum member ERP has posted some performance-related information about the N64, covering both shipped games and non-shipped code. Do a search for "N64" by member ERP and you'll get a goldmine of information. Here's a nugget:
Why didn't Nintendo allow this for shipping games? This is confusing
 
I remember those discussions (though I was much younger!) - ERP provided such patient and clear explanations. It seemed like, on balance, many features, with the notable exception of edge AA, were 'free' - you got the quality per pixel from the RDP without gaining or losing much (in effect, owing to constraints of the memory subsystem, assuming a Z-buffer was being used rather than Z-sort). The whole option space for trading frame rate against image quality seems more flexible on Saturn and PS1.

I remember turning on point-sampled texturing in Quake 64 - it was amusing. Almost certain it did nothing in terms of frame rate.

Quake 2 64 did have an interesting change in output mode available with the Expansion Pak - it seemed to be maybe a higher colour depth, or a more elaborate VI blend mode preserving high frequencies. I think that cost some performance (hard to tell), but not of the order of the higher output resolution options in Turok 2 or Perfect Dark (it's pretty easy to tell there's a performance decrement in those games when that setting is enabled).
 
Doubling the vertical resolution from 240/224 to 480/448 lines also had another nice effect on a CRT:

You did not have every second line of the image black anymore, so you had a brighter, completely filled image with no visible scanlines.
 
You've lost me there. Must be talking about progressive scan or something. Interleaving meant you were looking at odd/even lines every frame, producing interlaced flicker. Half-res doubled up the lines so no flicker, but still interleaved scanlines. TBH I can't see a situation where progressive scan would cause missing lines either - the pixels should have been doubled up to fill 480 lines.
 
You've lost me there. Must be talking about progressive scan or something. Interleaving meant you were looking at odd/even lines every frame, producing interlaced flicker. Half-res doubled up the lines so no flicker, but still interleaved scanlines. TBH I can't see a situation where progressive scan would cause missing lines either - the pixels should have been doubled up to fill 480 lines.

It would look even better in progressive scan, you're right about that, don't get me wrong.


What I'm trying to say is that on a CRT, there are more benefits to the higher resolution than just reduced jaggies/pixel crawl. That alone would make it worthwhile, but the higher vertical resolution also helps the overall image quality.

While each field still has only 240 lines, and thus visible scanlines, it is not the same as 240p, where the scanlines always sit on the same lines. In 480i, the final image is composed of two fields combined.
Now, a CRT TV from back then has a certain afterglow (even more than a CRT monitor from those days), and the human eye/brain is also a bit slow, so you do not see the scanlines but a picture with much higher pixel density. It looks great, though of course at the cost of slight interlace flickering (and sawtooth edges in motion at 60 fps).

I really do not know if I can explain it any better... it is best to try it yourself if you still own a CRT TV from those days.

Take a game like Tekken 3 on PS1, for example. It does not have a really high horizontal resolution (only 384 pixels), but it shows how much more important vertical resolution is on a CRT TV.
Tekken 3 has 480 lines of vertical resolution in interlaced mode (480i), and if you compare it to any game with only 240 lines of vertical resolution, you will definitely see the difference!

Other examples would be Tobal No. 1/2 on PS1 (doing 512x480) or the Ridge Racer Hi-Spec demo (also 512x480, I think).

It really helps image quality. :)
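To make the field-combining point concrete, here is a toy sketch of how two 240-line fields weave back into one full-height picture (the array is tiny, and the real recombination happens in the phosphor afterglow and the eye, not in software):

```python
import numpy as np

# Pretend this small array is a full 480-line picture.
frame = np.arange(8 * 4).reshape(8, 4)

even_field = frame[0::2]   # lines 0, 2, 4, ... drawn in one field (1/60 s)
odd_field  = frame[1::2]   # lines 1, 3, 5, ... drawn in the next field

# Afterglow plus the slowness of the eye effectively recombine the fields:
woven = np.empty_like(frame)
woven[0::2] = even_field
woven[1::2] = odd_field
assert (woven == frame).all()   # the woven result matches the full picture
```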
 
Doubling the vertical resolution from 240/224 to 480/448 lines also had another nice effect on a CRT:

You did not have every second line of the image black anymore, so you had a brighter, completely filled image with no visible scanlines.

It didn't make the image brighter. Whether or not the image is interlaced, a CRT TV would output the same number of photons per square inch. Brightness is perceived by how many photons hit a rod/cone in a given time. Non-interlaced vs interlaced brightness is like comparing an unmoving light bulb to a light bulb that vibrates by half a millimetre 60 times a second; the brightness of the bulb won't change. Interlaced CRT TV output just meant that the image shifted up/down slightly each time the raster scanned the screen, which adds flicker not found in non-interlaced output. The number of photons entering your eyes remains constant, but different groups of rods/cones receive the photons on each display refresh.

Now, if you take a CRT monitor and adjust the image size to make it smaller, that will increase the apparent brightness a bit, since the photons emitted will be concentrated into a smaller area, and thus there are more photons per inch; however, the total number of photons emitted per frame by the monitor wouldn't change.
 
It didn't make the image brighter. Whether or not the image is interlaced, a CRT TV would output the same number of photons per square inch. Brightness is perceived by how many photons hit a rod/cone in a given time. Non-interlaced vs interlaced brightness is like comparing an unmoving light bulb to a light bulb that vibrates by half a millimetre 60 times a second; the brightness of the bulb won't change. Interlaced CRT TV output just meant that the image shifted up/down slightly each time the raster scanned the screen, which adds flicker not found in non-interlaced output. The number of photons entering your eyes remains constant, but different groups of rods/cones receive the photons on each display refresh.

Now, if you take a CRT monitor and adjust the image size to make it smaller, that will increase the apparent brightness a bit, since the photons emitted will be concentrated into a smaller area, and thus there are more photons per inch; however, the total number of photons emitted per frame by the monitor wouldn't change.

Regarding 240p vs 480i:
But two fields combine to form the final image in 480i. The lines alternate quickly, which is why I would say that the perceived pixel density seems higher when you look at a 480i image vs 240p. It just looks very good. You no longer see the black lines that are displayed persistently in 240p.

About brightness... thinking about it, I am not sure anymore. Between 240p and 480p there is a difference for sure.
If we take a completely white image, for example, filling the entire screen - and then take the exact same image and make every second line of it black - then the first version would appear brighter.

Like in this example:
[attached image: side-by-side comparison, a solid picture on the left vs. the same picture with every second line black on the right]

Please note that this is just an example. The image on the left is what it would look like on an LCD monitor and is not at a higher resolution; that is why I would still say the right one looks better, but that is not what I wanted this example for. It is just to compare brightness.

Now, of course, the left one is non-interlaced. In interlaced mode, each field still has those black lines, so I get what you are saying.
I always thought that even on a CRT the lines won't immediately go from fully lit to completely dark; perhaps that makes a difference.
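For what it's worth, that white-image thought experiment is easy to sanity-check numerically: blanking every second line halves the average luminance of the picture (a toy calculation, not a claim about what the eye perceives on a real CRT):

```python
import numpy as np

white = np.ones((480, 640))      # fully lit picture
scanlined = white.copy()
scanlined[1::2] = 0.0            # every second line black

# Average luminance: 1.0 for the solid picture, 0.5 with the black lines.
print(white.mean(), scanlined.mean())
```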
 
About brightness... thinking about it, I am not sure anymore. Between 240p and 480p there is a difference for sure.
If we take a completely white image, for example, filling the entire screen - and then take the exact same image and make every second line of it black - then the first version would appear brighter.
That's true, but that's not what happens. 240p should be doubled up to 480 lines. I also don't think 240p even existed! If it did, it might have had constant scanlines and half the brightness, but that'd be weird and rare. ;) At a time when consoles and computers were working with small framebuffers, displays were interlaced, so you'd be drawing 240 lines per 60th of a second, alternating odd and even every 60th of a second*. By the time progressive scan became commonplace, they were producing full-screen framebuffers and drawing 480 lines in 1/60th of a second.

Brightness could be affected by display type, but not resolution.

* Yes, I'm mixing PAL and NTSC numbers, but these are the most convenient!
 
That's true, but that's not what happens. 240p should be doubled up to 480 lines. I also don't think 240p even existed! If it did, it might have had constant scanlines and half the brightness, but that'd be weird and rare. ;) At a time when consoles and computers were working with small framebuffers, displays were interlaced, so you'd be drawing 240 lines per 60th of a second, alternating odd and even every 60th of a second*. By the time progressive scan became commonplace, they were producing full-screen framebuffers and drawing 480 lines in 1/60th of a second.

Brightness could be affected by display type, but not resolution.

* Yes, I'm mixing PAL and NTSC numbers, but these are the most convenient!
Although not an official standard, 240p (or numbers thereabouts, depending on region) did exist. It was where the gun was forced to write to field 1 again after flyback, rather than alternating between field 1 and field 2.
Exactly the same number of fields was being written per second, but the vertical resolution was halved.
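A small sketch of that difference, using illustrative NTSC-style line numbering: in 480i the fields alternate between even and odd scanlines, while the "240p" trick writes every field to the same set of lines and leaves the others dark.

```python
def field_lines(field_index, forced_240p):
    """Scanlines lit by a given field (toy 480-line numbering)."""
    if forced_240p:
        return range(0, 480, 2)               # always the same 240 lines
    return range(field_index % 2, 480, 2)     # 480i: even field, then odd

for i in range(2):
    print("240p field", i, "->", list(field_lines(i, True))[:4], "...")
    print("480i field", i, "->", list(field_lines(i, False))[:4], "...")
```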
 