Questions about Sega Saturn

OK, then: was a big part, if not the biggest part, of CPU performance in those days used for stuff that GPUs do today?
There are things done on GPU today which, back then, would have been done on CPU.

Whether they form the majority of a CPU's workload depends on a lot of factors.

If the only "GPU stuff" you're doing on CPU is T&L, and you've got a low-poly scene and aren't doing much with each vertex, that might not be a very catastrophic CPU workload.

On the other hand? Consider that it wasn't until the latter half of the 90s that graphics acceleration really became a thing for PC gaming. Even in 1996, the original release of Quake had no support for any graphics cards; everything to do with rendering was done fully on CPU. Not just T&L, but rasterizing and shading pixels. It's certainly easy to kill a CPU with that sort of thing.

How big were these gaps? If that's the right question to ask.
Depends on the TV displaying the image. NTSC SD CRTs scan 240 lines of video at a time, to either the even set or odd set of lines in a "480-line" total. If the electron guns have a very focused beam, and the console is telling the TV to only scan the odd lines each time ("240p"), you'll see a dark gap where the even lines would be scanned in normal 480i video. If the electron guns have a very fuzzy beam, the odd lines might be wide enough to fill the gap well enough that you don't see any darkness between scanlines.

These gaps are what people are talking about when they say "scanlines" while discussing vintage gaming.
 
Is this why some people say the Saturn is a 2D machine and not a 3D one?
I personally feel it's more appropriate/correct to call Saturn sprites a special-case quadrilateral rather than to say that polys are distorted sprites. The Saturn has no sprite hardware; it draws everything into a framebuffer. Actual sprites aren't drawn into a framebuffer; they are built/generated in real time by the video hardware as it scans out the display.

Real sprites are discrete objects, manipulated, defined and controlled using dedicated registers in the video hardware. Saturn has none of that.

How big were these gaps? If that's the right question to ask.
Depends largely on the size of the picture tube, and as Tupolev mentioned, focus. A large tube, like a big family telly, would have a much fuzzier beam to try and hide the gaps, but you'd still see them. A PC monitor, with its very sharp and focused beam to produce crisp text and graphical UI elements, would have extremely pronounced scanlines at 240ish lines per frame, even if the size of the picture tube is much smaller than the family telly.
 
If the only "GPU stuff" you're doing on CPU is T&L, and you've got a low-poly scene and aren't doing much with each vertex, that might not be a very catastrophic CPU workload.
How about in the Saturn and PS1 era? Back then, did the CPU only do T&L on low-poly scenes?

I personally feel it's more appropriate/correct to call Saturn sprites a special-case quadrilateral rather than to say that polys are distorted sprites. The Saturn has no sprite hardware; it draws everything into a framebuffer. Actual sprites aren't drawn into a framebuffer; they are built/generated in real time by the video hardware as it scans out the display.
To be honest, I don't think I understood what you said here. :D
 
We talked about this in your PS2-related thread. Sprite, in the original terminology, designated graphics rendered without the need to update a frame buffer, or without even having a frame buffer at all. It's all done chasing the beam. Most home consoles up to the 16-bit era worked like this.
 
How about in the Saturn and PS1 era? Back then, did the CPU only do T&L on low-poly scenes?
Original PS did not do geometry on its CPU; it was (as I recall) the first consumer device to feature a dedicated hardware geometry processor for that. Or at least the first mass market device to do it that way. It didn't support what has become known as geometry shading though, so it couldn't do skinning for example in hardware.

This led to stuff like arms, legs and such being discrete objects from the torso of a character.

To be honest, I don't think I understood what you say here. :D
OK! Well, how to try and explain it more clearly... :p We begin with sprites, then! In early computers, shifting around pixels was processor intensive. CPUs were very slow, and memory was also slow. Early 8- or 16/32-bit CPUs like the MOS 6502 or MC68000 could only work on one instruction at a time, and an instruction generally took multiple clock cycles to complete, say, 3-8 clocks or so. More complex instructions like multiplies and divisions on the 68k took 60-100+ clocks IIRC, and that was integer math. These chips did not support floating point math at all (in hardware; you could emulate it at a ginormous performance hit in software.)
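Just to put rough numbers on that, here's a back-of-envelope C snippet; every figure in it is an assumed ballpark for illustration, not taken from any real machine's datasheet:

```c
#include <stdio.h>

/* Rough, invented ballpark figures to show why software sprites hurt;
 * none of these numbers come from a real chip's documentation. */
int main(void)
{
    int clocks_per_frame = 1000000 / 60; /* ~1 MHz CPU at 60 fps: ~16666 clocks */
    int sprite_pixels    = 16 * 16;      /* one 16x16 sprite */
    int clocks_per_pixel = 5;            /* assumed cost to read + write a pixel */

    int per_sprite = sprite_pixels * clocks_per_pixel;  /* ~1280 clocks */
    printf("one sprite eats %d of %d clocks per frame (%d%%)\n",
           per_sprite, clocks_per_frame, 100 * per_sprite / clocks_per_frame);
    return 0;
}
```

On those assumptions, a dozen or so small sprites would already consume the whole frame budget before the game logic ran a single instruction.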

So to draw several objects, every frame, you would quickly end up burning a large percentage of your processor time just to update the screen. Never mind anything else you might want to do in a game! Thus sprites were born. The graphics chip has a bunch of registers: A, an address pointing to the sprite's pixel data in memory; B, a palette register holding colors, or an address to a set of colors in RAM; C, screen coordinates for where you want to display the sprite. There could be some other fluff as well, like a priority register for making the sprite appear in front of or behind the background, a collision register telling you when the sprite hits something (like another sprite or the background), and often horizontal/vertical flipping of the sprite, because such operations are also expensive for old-time processors.

So now you have an independent object you can move around just by poking a few values into some registers, giving you a perhaps 100-fold saving on work. The video chip takes care of all the work of actually drawing the sprite; it even does the work of telling you if you crash into something. Great!
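To make the register poking concrete, here's a minimal C sketch in memory-mapped I/O style. Every address, name, and bit here is invented for illustration; real chips (NES PPU, Genesis VDP, etc.) lay this out differently:

```c
#include <stdint.h>

/* Hypothetical memory-mapped sprite registers, invented for illustration. */
#define SPR0_DATA (*(volatile uint16_t *)0x00C000) /* A: VRAM address of the sprite's pixel data */
#define SPR0_PAL  (*(volatile uint8_t  *)0x00C002) /* B: palette select */
#define SPR0_X    (*(volatile uint8_t  *)0x00C003) /* C: screen X coordinate */
#define SPR0_Y    (*(volatile uint8_t  *)0x00C004) /* C: screen Y coordinate */
#define SPR0_CTRL (*(volatile uint8_t  *)0x00C005) /* priority + H/V flip bits */
#define SPR0_HIT  (*(volatile uint8_t  *)0x00C006) /* collision flag, set by the chip */

void move_player(uint8_t x, uint8_t y)
{
    /* Two register writes and the video chip redraws the sprite at the
     * new position on the next scanout; no pixel copying on the CPU. */
    SPR0_X = x;
    SPR0_Y = y;
    if (SPR0_HIT & 0x01) {
        /* the hardware says we bumped into something */
    }
}
```

Compare that to copying all the sprite's pixels by hand every frame.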

However, memory is slow, so you can't fetch enough data to show very many sprites per video line (giving rise to the dreaded sprite flicker). There's no overdraw elimination in these old sprite engines; there must be enough bandwidth for the background AND each sprite, even if they all end up covering one another in the end. You were often also limited in sprite size, number of colors and so on. Also, if you need advanced features such as scaling, rotation, transparency and so on, but your video chip does not support them, then having sprites doesn't help you much (this was frequently an issue when porting arcade games back to home consoles.)

So you have a set of factors that are hard limits for your game software. If you don't have enough sprites with the features you need, or perhaps if you have no particular need for sprites at all - like you're doing 3D graphics for example - then you've wasted a fair bit of transistors on hardware sprites.

So Saturn - and the original PlayStation - did away with that. The PlayStation did not have separate video processors for 2D/3D graphics and background compositing, but the basic principle is the same. You use the 3D rendering hardware to draw flat, square or rectangular polygon objects aligned perpendicular to the viewport (i.e., the monitor screen); this makes the object look like a traditional sprite, only you get "free" rotation, scaling and also transparency if your hardware supports that kind of thing (Saturn had some quirks and peculiarities on that front). An "alpha channel" texturing mode is used, so that the background color of the texture is replaced with whatever color is behind the object. Also, making these objects flat and aligned to the viewport lets you skimp on the calculations to generate them IIRC, thus making them faster and more efficient to draw.
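Here's a rough sketch of that sprite-as-quad idea; the types, fields and color-key detail are stand-ins I made up, not any console's actual command format:

```c
/* Sketch: on PS1/Saturn-style hardware a "sprite" is just a flat quad
 * whose four corners are aligned to the screen. All names invented. */
typedef struct { short x, y; } Point;

typedef struct {
    Point          v[4]; /* screen-space corners, perpendicular to the viewport */
    unsigned short tex;  /* texture id; the "background" color key drops out */
} SpriteQuad;

SpriteQuad make_sprite(short x, short y, short w, short h, unsigned short tex)
{
    SpriteQuad q;
    q.tex  = tex;
    q.v[0] = (Point){ x,     y     };
    q.v[1] = (Point){ x + w, y     };
    q.v[2] = (Point){ x + w, y + h };
    q.v[3] = (Point){ x,     y + h };
    /* Scale or rotate by moving these four corners; the renderer does
     * the rest, which is the "free" scaling/rotation mentioned above. */
    return q;
}
```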

As neither the Saturn nor the PS supported a Z-buffer, you would sort your polygon objects on the CPU using a fast algorithm before rendering so that they would appear in the right order on screen, then draw them back to front, covering up objects as you go. (This was true for 3D graphics as well, so when the sorting algorithm failed you would get polygons "fighting" with each other over which one stays on top. Very occasionally visible even on modern hardware/games, btw.)
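A minimal painter's-algorithm sketch of that CPU-side sort (invented Quad type, stubbed draw call):

```c
#include <stdlib.h>

/* No Z-buffer: sort on the CPU, then draw back to front so nearer
 * quads simply paint over farther ones. */
typedef struct {
    float depth; /* e.g. average camera-space Z of the four corners */
    /* vertex and texture data would live here */
} Quad;

static void draw_quad(const Quad *q) { (void)q; /* hypothetical submit-to-renderer */ }

static int farther_first(const void *a, const void *b)
{
    float za = ((const Quad *)a)->depth;
    float zb = ((const Quad *)b)->depth;
    return (za < zb) - (za > zb); /* descending depth: farthest first */
}

void render_scene(Quad *quads, size_t n)
{
    qsort(quads, n, sizeof(Quad), farther_first);
    for (size_t i = 0; i < n; i++)
        draw_quad(&quads[i]);
}
```

The single depth value per quad is exactly where this can go wrong: two overlapping quads can sort into the "wrong" order even though parts of each should be in front, which is the fighting described above.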

Now, polygons aren't separate objects floating on top of or under the background like sprites are. They're drawn permanently into the framebuffer (well, until you clear the buffer, of course). This is not that big an issue, because at this point in time drawing speed and memory bandwidth are vastly higher than they were in older hardware. You could probably draw a couple thousand "sprite"-sized objects per frame at 60 frames/sec without breaking much of a sweat on either PS or Saturn consoles.
 
Original PS did not do geometry on its CPU; it was (as I recall) the first consumer device to feature a dedicated hardware geometry processor for that. Or at least the first mass market device to do it that way. It didn't support what has become known as geometry shading though, so it couldn't do skinning for example in hardware.
That is very important info for me. Thanks!

So to draw several objects, every frame, you would quickly end up burning a large percentage of your processor time just to update the screen. Never mind anything else you might want to do in a game! Thus sprites were born. The graphics chip has a bunch of registers: A, an address pointing to the sprite's pixel data in memory; B, a palette register holding colors, or an address to a set of colors in RAM; C, screen coordinates for where you want to display the sprite.
Here I really start to understand what's going on.

So now you have an independent object you can move around just by poking a few values into some registers, giving you a perhaps 100-fold saving on work. The video chip takes care of all the work of actually drawing the sprite; it even does the work of telling you if you crash into something. Great!
Game development for old systems, I mean before the 5th gen, was a nightmare for sure.

So Saturn - and the original PlayStation - did away with that. The PlayStation did not have separate video processors for 2D/3D graphics and background compositing, but the basic principle is the same. You use the 3D rendering hardware to draw flat, square or rectangular polygon objects aligned perpendicular to the viewport (i.e., the monitor screen); this makes the object look like a traditional sprite, only you get "free" rotation, scaling and also transparency if your hardware supports that kind of thing (Saturn had some quirks and peculiarities on that front). An "alpha channel" texturing mode is used, so that the background color of the texture is replaced with whatever color is behind the object. Also, making these objects flat and aligned to the viewport lets you skimp on the calculations to generate them IIRC, thus making them faster and more efficient to draw.
This part absolutely explains everything I couldn't understand before!

Now, polygons aren't separate objects floating on top of or under the background like sprites are. They're drawn permanently into the framebuffer (well, until you clear the buffer, of course). This is not that big an issue, because at this point in time drawing speed and memory bandwidth are vastly higher than they were in older hardware. You could probably draw a couple thousand "sprite"-sized objects per frame at 60 frames/sec without breaking much of a sweat on either PS or Saturn consoles.
Thank you a lot! This is one of the absolute best explanations I've ever seen!
 
There are things done on GPU today which, back then, would have been done on CPU.

Whether they form the majority of a CPU's workload depends on a lot of factors.

If the only "GPU stuff" you're doing on CPU is T&L, and you've got a low-poly scene and aren't doing much with each vertex, that might not be a very catastrophic CPU workload.

On the other hand? Consider that it wasn't until the latter half of the 90s that graphics acceleration really became a thing for PC gaming. Even in 1996, the original release of Quake had no support for any graphics cards; everything to do with rendering was done fully on CPU. Not just T&L, but rasterizing and shading pixels. It's certainly easy to kill a CPU with that sort of thing.


Depends on the TV displaying the image. NTSC SD CRTs scan 240 lines of video at a time, to either the even set or odd set of lines in a "480-line" total. If the electron guns have a very focused beam, and the console is telling the TV to only scan the odd lines each time ("240p"), you'll see a dark gap where the even lines would be scanned in normal 480i video. If the electron guns have a very fuzzy beam, the odd lines might be wide enough to fill the gap well enough that you don't see any darkness between scanlines.

These gaps are what people are talking about when they say "scanlines" while discussing vintage gaming.
I agree the VDP1 is a blitter, not a sprite rasterizer.

As for scanlines, analog TVs draw a half line at the end of each frame to insert a vertical offset between the odd and even fields. When used in progressive scan this half line is usually omitted, and the result is a noticeable gap between two consecutive horizontal lines (what retro gamers call 'scanlines'). However, nothing forbids you from inserting it in a progressive-scan signal too: the result would be to make the horizontal lines double width, at the price of some flickering, because over two consecutive passes the electron gun hits different areas.
 
The 3D part is limited to the 256KB anyway. Maybe you could have higher resolution and only show the 2D layers on the other part of it. Like 640x240 @ 16bpp, but the 3D part only takes up 640x200 and the rest is a HUD or something.

Since the VDP1 does texturing "backwards", that means that instead of having textures limited to a power-of-2 size (for easy address calculation) and an arbitrary-size frame buffer, like on normal systems from the era, you have arbitrary-sized textures and a power-of-2-size frame buffer.

In games like VF2, DOA, or Last Bronx, the VDP1 draws to a 1024*256 pixel frame buffer at 8 bits per pixel. The VDP2 takes 704 pixels of this frame buffer, combines it with the background layers, and outputs it to the TV as a 480i signal.
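Some back-of-envelope arithmetic on those figures, using just the numbers above:

```c
#include <stdio.h>

int main(void)
{
    /* Figures from the post above: power-of-2 frame buffer at 8bpp,
     * of which only 704 pixels per line are actually scanned out. */
    const int fb_w = 1024, fb_h = 256, bytes_per_px = 1;
    const int used_w = 704;

    const int total = fb_w * fb_h * bytes_per_px;   /* 262144 B = 256 KB */
    const int used  = used_w * fb_h * bytes_per_px; /* 180224 B = 176 KB */

    printf("total %d KB, displayed %d KB, unused %d KB\n",
           total / 1024, used / 1024, (total - used) / 1024);
    return 0;
}
```

So of the 256 KB buffer, roughly 80 KB never reaches the screen in that mode.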
 
Since the VDP1 does texturing "backwards", that means that instead of having textures limited to a power-of-2 size (for easy address calculation) and an arbitrary-size frame buffer, like on normal systems from the era, you have arbitrary-sized textures and a power-of-2-size frame buffer.

In games like VF2, DOA, or Last Bronx, the VDP1 draws to a 1024*256 pixel frame buffer at 8 bits per pixel. The VDP2 takes 704 pixels of this frame buffer, combines it with the background layers, and outputs it to the TV as a 480i signal.

Forgot about the power-of-2 framebuffer restriction. Another annoying thing about VDP1: a lot of framebuffer RAM is almost always wasted. It would have been a lot better if VDP2 could output 512 wide. And I wouldn't call the textures completely arbitrarily sized, because they still have to be multiples of 8 in width, thanks to the way the sprite rendering works.

So I guess in my hypothetical you would have had 512x256 @ 16bpp with 64 pixels on both sides being unused for a 640x240 output. This is a lot less attractive, games don't usually put HUDs and what have you on the left and right sides.

I still don't count interlacing as 704x480 @ 60Hz. It's fair to call it 480 high or 60Hz but not both at the same time.
 
Certainly Sega was aware of what Nintendo was doing, as Silicon Graphics had first approached Sega of America to pitch their low-cost entertainment 3D solution.
One of the craziest things I learned about that generation, and it made me think "Oh my god, what if....". If you think about the shortcomings of the Saturn because of its exotic hardware configuration, and the restrictions Nintendo placed on N64 hardware, not just the insistence on using carts, but requiring developers to use specific features that crippled performance, it makes you wonder what a Sega console with N64 hardware would have been like. Much faster than what we got with the 64, because developers would have been free to tune the games to their preference, and without the space limits of carts. It isn't just that Sega could have had a system that matched the N64; I think gamers might have even gotten more out of it in the long run.
 
I have a feeling Sega wasn't shown what Nintendo eventually released. Nintendo didn't get it out the door until '96 after all. Maybe there was a much less appealing 500-600nm version with more issues.
 
More issues? Like what? 200 polys a second and 16x16 textures. All rendered beautifully with perspective-correct projection, texture filtering, AA, sub-pixel precision, of course. Basically the prettiest 20-polygon objects you could get in the '90s.
 
Original PS did not do geometry on its CPU; it was (as I recall) the first consumer device to feature a dedicated hardware geometry processor for that. Or at least the first mass market device to do it that way. It didn't support what has become known as geometry shading though, so it couldn't do skinning for example in hardware.

This led to stuff like arms, legs and such being discrete objects from the torso of a character.
Actually some developers started to find solutions for that, and we saw games like Tekken 3 where the limbs and head were part of the whole body.

But even before that, talented developers like Naughty Dog made characters that didn't have separate limbs, like in Crash Bandicoot.
 
I have a feeling Sega wasn't shown what Nintendo eventually released. Nintendo didn't get it out the door until '96 after all. Maybe there was a much less appealing 500-600nm version with more issues.
Well, yeah. But even so, I have a hard time believing that what Sega was shown was in any way worse than what they got with the Saturn, and probably not worse than what Sony had in the PlayStation at the time in terms of performance, but likely with a better feature set.
 
Actually some developers started to find solutions for that
You did geometry calculations for those objects on the CPU. Like the much later GameCube, the GTE had no innate skinning support. IIRC, skinning with hardwired T&L didn't appear until the original GeForce line, and software support for those cards was quite spotty, since 3dfx was still around back then and lagged behind feature-wise...

Whoah. The memories, eh? Just as a comparison, a GeForce 3 GPU had about 60 million transistors as I recall. 60 million is just 3 sq mm's worth of chip (give or take a little) these days... Scary!
 
You did geometry calculations for those objects on the CPU. Like the much later GameCube, the GTE had no innate skinning support. IIRC, skinning with hardwired T&L didn't appear until the original GeForce line, and software support for those cards was quite spotty, since 3dfx was still around back then and lagged behind feature-wise...

Whoah. The memories, eh? Just as a comparison, a GeForce 3 GPU had about 60 million transistors as I recall. 60 million is just 3 sq mm's worth of chip (give or take a little) these days... Scary!

AFAIK you can do skinning by changing the transform matrix in between vertices. DS games do this; there's no direct skinning support in the geometry processor there either.
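A rough sketch of that per-vertex matrix switching, with invented types and a hand-rolled transform, assuming rigid attachment (one joint per vertex, no blending):

```c
/* Rigid per-vertex skinning: no matrix blending, each vertex is just
 * transformed by whichever joint matrix it is attached to, matching
 * the "change the matrix between vertices" approach described above. */
typedef struct { float m[3][4]; } Mat34; /* 3x4 affine transform */
typedef struct { float x, y, z; } Vec3;

typedef struct {
    Vec3 pos;
    int  joint; /* which joint matrix this vertex follows */
} SkinnedVertex;

static Vec3 transform(const Mat34 *t, Vec3 p)
{
    Vec3 r;
    r.x = t->m[0][0]*p.x + t->m[0][1]*p.y + t->m[0][2]*p.z + t->m[0][3];
    r.y = t->m[1][0]*p.x + t->m[1][1]*p.y + t->m[1][2]*p.z + t->m[1][3];
    r.z = t->m[2][0]*p.x + t->m[2][1]*p.y + t->m[2][2]*p.z + t->m[2][3];
    return r;
}

void skin(const SkinnedVertex *in, Vec3 *out, int n, const Mat34 *joints)
{
    for (int i = 0; i < n; i++)
        out[i] = transform(&joints[in[i].joint], in[i].pos); /* "load matrix, push vertex" */
}
```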
 
AFAIK you can do skinning by changing the transform matrix in between vertices. DS games do this; there's no direct skinning support in the geometry processor there either.
Saturn Quake appears to have skinned enemies. Any idea how that was done?
 
Saturn always did poly transformations in software, so skinning shouldn't really be an issue, other than that skinning requires more complex calculations than rigid objects do, but Quake monsters are so ridiculously low-poly that it seems to have been doable anyhow. :)
 
I don't think the animations in Saturn Quake are different from the original anyway... which are pre-baked, AFAIK.
 