Should the SEGA DC and Saturn have launched with these alternative designs?

I had a friend introduce me to System16.com a while back; it's the place to get basic arcade system info. What I really wanted was real examples, not spec sheets lol. It would be fun to get one's hands on the hardware and screw around with it.

Question regarding the PS2's eDRAM: now that I know it was used for texture storage, how is it also used for framebuffer storage (I assume for the finished frame)? Is its size the reason why the PS2 could effectively render out to 1280 x 1024 while the Gamecube and Wii are stuck at 480p with their 2 MB framebuffer? I had always thought that 4 MB was way too small to store the textures for a complete scene (or is it done differently?), so it had to be a framebuffer. That's the point of the 360's eDRAM: framebuffer only, correct?
I remember during the PS2 launch that people were complaining about the PS2's low quality texture work in general compared to the DC.
There were various discussions regarding the limited memory, and some developers, but mostly Sony, tried to explain that 4 MB wasn't simply for texture storage in the traditional manner.

They said that 4 MB of eDRAM in conjunction with the massive amount of bandwidth could give results above 4 MB of conventionally stored textures, because it gave the ability to stream textures in and out very fast.
 
Over ten years of development and the most successful console ever - if it's not being done there's got to be a reason.

Selling the most doesn't mean it's the best, or not open to improvement.

I preferred to get every game I could on XBOX last gen.

Yes PS2 was better than DC.
 
I remember during the PS2 launch that people were complaining about the PS2's low quality texture work in general compared to the DC.
There were various discussions regarding the limited memory, and some developers, but mostly Sony, tried to explain that 4 MB wasn't simply for texture storage in the traditional manner.

They said that 4 MB of eDRAM in conjunction with the massive amount of bandwidth could give results above 4 MB of conventionally stored textures, because it gave the ability to stream textures in and out very fast.

I've heard that the GS could do some extremely impressive things with particle and other effects, some rather awesome looking stuff that was specifically tied to the "huge bandwidth" available. At the time I didn't know they were referring to the eDRAM. I think I remember reading that the first Mercenaries game on the PS2 benefited highly from this, in some preview article for the game. Personally I always thought the game was a bit of an eyesore...
 
I've heard that the GS could do some extremely impressive things with particle and other effects, some rather awesome looking stuff that was specifically tied to the "huge bandwidth" available. At the time I didn't know they were referring to the eDRAM. I think I remember reading that the first Mercenaries game on the PS2 benefited highly from this, in some preview article for the game. Personally I always thought the game was a bit of an eyesore...

Yeah, I didn't like Mercenaries either.

ZOE2 was a superb example of particle effect capabilities on the PS2. I think that would have taken a performance hit even on the Xbox.

The PS2 had some crazy capabilities in general. Inconsistent, sure, but sometimes it made you wonder. Tekken Tag was outstanding, for example, and probably the best looking fighter on the console. It has yet to be rivaled imo, even by Tekken 4 and Tekken 5. I don't know how they did it in 2000. The polygon counts were through the roof, the lighting was crazy in some levels such as Ogre's, and the textures gave the impression of bump mapping. All at 60fps.
 
Question regarding the PS2's eDRAM: now that I know it was used for texture storage, how is it also used for framebuffer storage (I assume for the finished frame)? Is its size the reason why the PS2 could effectively render out to 1280 x 1024 while the Gamecube and Wii are stuck at 480p with their 2 MB framebuffer?
Yes, but that kind of resolution would only be usable with a very limited range of graphical styles and techniques. There would be little to no space for textures, frontbuffer or z-buffer.
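To put some back-of-the-envelope numbers on that (the two layouts below are my own illustrative assumptions, not anything from Sony's documentation):

```c
#include <stdio.h>

/* Back-of-the-envelope GS eDRAM budget (4 MB total).  The two layouts
   below are illustrative assumptions, not from any SDK doc. */
static double mbytes(double w, double h, double bpp)
{
    return w * h * bpp / 8.0 / (1024.0 * 1024.0);
}

int main(void)
{
    /* Typical NTSC-ish case: 640x448, 32-bit back + front buffers, 32-bit Z. */
    double typical = mbytes(640, 448, 32) * 2.0 + mbytes(640, 448, 32);

    /* The "hi-res" case: a single 1280x1024 buffer at 16 bit, nothing else. */
    double hires = mbytes(1280, 1024, 16);

    printf("640x448, back+front+Z at 32 bit: %.2f MB used, %.2f MB of 4 MB left\n",
           typical, 4.0 - typical);
    printf("1280x1024, one 16-bit buffer:    %.2f MB used, %.2f MB of 4 MB left\n",
           hires, 4.0 - hires);
    return 0;
}
```

A single 16-bit buffer at 1280x1024 already eats 2.5 MB, and a Z-buffer at the same resolution would blow past the 4 MB on its own, which is why that kind of mode leaves so little room for anything else.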

I had always thought that 4 MB was way too small to store the textures for a complete scene (or is it done differently?), so it had to be a framebuffer. That's the point of the 360's eDRAM: framebuffer only, correct?
If you are doing it right you should never need more texture memory for a single frame than what you are using for the framebuffer, and in most cases half or less. Even if you are using straight 32 or 16-bit textures, which you won't on a PS2 (or any newer system for that matter). The memory is much better spent upping resolution than colour depth (at least as it is with texture resolution now).
Most of the time you'll be using 8, 4, or 2-bit textures with MIP mapping. You can do the math...
 
A much newer machine that costs the same should beat or at least equal the old one in all regards.

Not really.

That's why developers should have done more, if possible, to get higher resolution textures on the PS2.

Choosing to compromise the integrity of your PS2 game just to "beat the DC" at something would not seem to be a particularly rational or sane way to go about running a game development business.

You are talking as if 2bpp VQ textures were the norm/general case on DC. Let's just recap: that's four 2x2 quads with your colours of choice (minus alpha and cycling, which palette textures can do) to do a picture. That would either be a pseudo-noisy texture or a texture with lots of lines or angles. NOT the general case. And in most cases less useful than 2bpp palette textures, where you still have full control over the individual texels' placement.

This thread - in which you posted ironically enough - shows that DC VQ textures worked well:

http://forum.beyond3d.com/showthread.php?t=8800

2bpp VQ is fine for stuff like rocks, grass, dirt, (opaque) water, concrete, tarmac, bricks, slates, carpets, clothes and basically 99% of the textures in 99% of games, especially once you take into account mip maps and bilinear filtering. In the far-from-ideal case above it even does a reasonable job of a face + hair + clothes (sort of) + multi-coloured background, and a good modeller/texture artist would usually be able to avoid this kind of strain (especially on something like a main character's face).

Your claims about the uselessness of 2bpp VQ are nonsense and provably so, but I look forward to seeing how your 2bpp (4 colour) CLUT compares for the image used above. ;)

Once you take into account the increased resolution and/or variety that they allow, VQ compressed textures would frequently be preferable to 4bpp CLUTs too. There's no great mystery about this.
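For anyone who wants the arithmetic behind the "2bpp" figure, here's a rough worked example, assuming the usual DC VQ layout (a 256-entry codebook of 2x2 16-bit texel blocks, plus one index byte per 2x2 block of the image):

```c
#include <stdio.h>

/* Worked example of the DC VQ numbers, assuming the usual layout:
   a codebook of 256 entries, each a 2x2 block of 16-bit texels,
   plus one 8-bit codebook index per 2x2 block of the image. */
int main(void)
{
    long w = 512, h = 512;
    long texels   = w * h;
    long indices  = texels / 4;       /* one byte per 2x2 block            */
    long codebook = 256 * 4 * 2;      /* 256 entries * 4 texels * 2 bytes  */
    long total    = indices + codebook;

    printf("512x512 VQ texture: %ld KB (raw 16-bit would be %ld KB)\n",
           total / 1024, texels * 2 / 1024);
    printf("effective rate: %.2f bits per texel\n", total * 8.0 / texels);
    return 0;
}
```

The 2 KB codebook is a fixed overhead, so the bigger the texture, the closer it gets to a true 2 bits per texel.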

When something becomes the norm, or the norm not to do, most people will follow. Doubly so if you are pressed for time and money, as most developers are. For other examples of underutilized hardware, think of the SNES sound chip or the Saturn's, or even the wiimote, which could have been the world's greatest FPS controller if employed in a mouse-style setup, but now wastes its potential as a half-hearted cross between a limp springless joystick and a crippled light gun in all Wii FPS games, just because "that's the way everybody does it".

That's just a general "some things don't get used" response and under the circumstances I don't find it convincing.

We're talking about a huge gain that would revolutionise the way PS2 games looked, allow it to beat the Xbox for textures, and wouldn't even require a significant change in the way artists produced assets (like, say, normal mapping did). There's got to be a cost or a tradeoff or a downside somewhere.
 
There was nothing technically wrong with the DC, and time after time, we've seen weaker hardware lead the market.

Yep, I agree.

The Saturn on the other hand, even Yu Suzuki had a few suggestions for. The obvious ones being the bad dual-CPU choice, even for its time; going with quads instead of triangles; and the choice of sound chip and how it was under-used. RAM was the only thing that was competitive, especially via the expansion pack, but that only benefited 2D games.

Yeah again, Saturn was a mess! Especially funny when you think they turned down the N64 hardware, and that Sega of America and Sony America wanted to work together on a console.

Ironically, the system was successful in Japan so even then, it's really all about marketing.

I know it sold well and shifted a lot of games, but I wonder if it made a profit even in Japan...

If only the DC launched with the Naomi-2 board specs, then I wonder how well it would have stacked up with the PS2 :p

It would have had Virtua Striker 2 vs Fifa Soccer. So it would still have lost. :p

If you are doing it right you should never need more texture memory for a single frame than what you are using for the framebuffer, and in most cases half or less. Even if you are using straight 32 or 16-bit textures, which you won't on a PS2 (or any newer system for that matter). The memory is much better spent upping resolution than colour depth (at least as it is with texture resolution now).
Most of the time you'll be using 8, 4, or 2-bit textures with MIP mapping. You can do the math...

Not really, unless by texture memory you mean something like "unique texture sample reads". But even then it wouldn't be true.

Perhaps I don't understand what you mean.
 
In my opinion the N64 was the first console to do generally "good looking" 3D. Clean 3D. Even back then I couldn't really stand the pixelated, skewed messes that the PS1 and Saturn put out. PS1 and Saturn 3D remind me of the Matrox Mystique actually, but I think the Mystique is actually superior to the consoles (perspective correction?)

Well I meant good looking in respects to the competition. ;)

Personally the N64 was a huge letdown for me after the launch hype died down. More games impressed me on the Saturn/PSone than on the N64, maybe because it was supposed to be more capable than those machines, so I expected more. I could handle the pixelation found in the PSone/Saturn games more than I could stand the blurry textures found in most N64 games.

I actually hooked up the N64 to my HDTV over the weekend, man do those games look fugly on a newer TV :LOL:
 
Yes, the Saturn made a nice profit in Japan and is what primarily kept SEGA afloat during those days besides the shrinking arcade market, which was still decently sizable at the time. It wasn't a huge success in Japan like the PS1 was, but it sold enough that it managed to make money for them.

Now this thread has been off topic a little bit for a while, and I am going off topic slightly, but it is still highly relevant to the thread. We had a lot of technical discussion of Dreamcast and PS2 at this place called Dreamcast Technical Pages, which was originally Saturn Technical Pages or something. That forum was a godsend for SEGA fans, where we could discuss the games, the tech in consoles, and various things SEGA. I remember in 1998 there were leaked images of Sonic Adventure posted and most there thought they were pre-rendered images and couldn't possibly be real time. Nevertheless, a short time later most of them went nuts when they found out that the images were from an actual game and it looked just as good in motion. Good times, good times. I miss that forum, and a few people from there are members here. The only one I can really think of off the top of my head is Simon F, who was basically our go-to guy back in the Dreamcast days when it came to discussing the PowerVR side of the Dreamcast and Naomi 1/2. Nostalgia rocks and is also sad at the same time.

Even on that board we had a lot of technical discussion about the PS2 before and after it was released, even though we were die-hard SEGA fans. I remember a poster by the name of Suneet Shah who linked the specs of the Emotion Engine days before it was unveiled at some engineering convention. We were in disbelief that the CPU in the PS2 could have so much more grunt than the SH4 in the DC; it was funny, yet we were shitting bricks. When it was confirmed it was like a cloud of doom and gloom came over us, and we were sweating for the future of the Dreamcast from that moment onward. Enough of that.

Regarding PS2's texturing and the IPU: if it were usable to make/decompress really nice textures, I have little doubt it would have been used or experimented with at some point during the PS2's lifespan. The PS2 was a machine that developers did many things with, because the GS was pretty versatile and had not only huge bandwidth with the eDRAM but also a whopping 1.2 gigapixel fill rate. I do not doubt for a second that the IPU would have been in use if developers had found it could be used for textures in an efficient way. We never actually saw this come to fruition, which leads me to believe that it wasn't so good at it.

Bumpmapping on the Dreamcast...I think there may have been a stage in Soul Calibur where the floor was bump mapped, though I do not recall which one. It might be the last stage.

One game that shows off what the Dreamcast is really capable of is Test Drive: Le Mans. Not only is the gameplay excellent, but it is one of the few games that has anisotropic filtering done well and at a stable framerate. That is probably the best looking racing game on the Dreamcast from a technical point of view, it was just gorgeous. I still have mine but haven't played my Dreamcast in a while so now it's time to bring it out and play!
 
One game that shows off what the Dreamcast is really capable of is Test Drive: Le Mans. Not only is the gameplay excellent, but it is one of the few games that has anisotropic filtering done well and at a stable framerate. That is probably the best looking racing game on the Dreamcast from a technical point of view, it was just gorgeous. I still have mine but haven't played my Dreamcast in a while so now it's time to bring it out and play!

I loved that game back in the day; could be wrong, but I remember reading news about how the team was somehow pushing 5 million polys per second or something.

While most games still look great IMO, I remember this one looking a whole lot better than it does now :LOL:
 
Sega just never bothered with revisions like that in the 90's. All they pushed for was more advanced hardware in the arcades. Model 3 was the most powerful thing out there at the time, but it was too expensive for operators. Interestingly, they used the Yamaha sound chip from the Saturn in that one.

You do realise that there are several versions of both the Model 2 and Model 3 HW?
 
Not really.

Yeah really.
If the hardware designer has the same target group as the competition, he will match or exceed the competition in the most important ways, texturing here being perhaps the most important of all.

Choosing to compromise the integrity of your PS2 game just to "beat the DC" at something would not seem to be a particularly rational or sane way to go about running a game development business.

"Compromise the integrity"?! What exactly do you mean by that? You optimise your engine to get better graphics, what is compromising about that? If you have to push less polygons or do more passes to do that, then so be it.

This thread - in which you posted ironically enough - shows that DC VQ textures worked well:

http://forum.beyond3d.com/showthread.php?t=8800

2bpp VQ is fine for stuff like rocks, grass, dirt, (opaque) water, concrete, tarmac, bricks, slates, carpets, clothes and basically 99% of the textures in 99% of games, especially once you take into account mip maps and bilinear filtering. In the far-from-ideal case above it even does a reasonable job of a face + hair + clothes (sort of) + multi-coloured background, and a good modeller/texture artist would usually be able to avoid this kind of strain (especially on something like a main character's face).

All right, to be honest I was thinking of 1-bit VQ, which is close to 2bpp in many cases once you include the codebook.
Look, I'm not putting down DC VQ; on the contrary, it's an awesome scheme (only PVR TC exceeds it in cost/compression ratio) and it would probably have been better if the PS2 had something similar.
It is, however, not the reason for the bad texturing on the PS2.
Look at the contents of the very thread you linked. There are ways to get close-to-2bpp textures that come very close to, or are better than, DC VQ. One of them is luminance compression, at the cost of only one extra simple pass. Sadly some of the most interesting pics are down. Perhaps not surprising; it was six years ago. :LOL:

Your claims about the uselessness of 2bpp VQ are nonsense and provably so, but I look forward to seeing how your 2bpp (4 colour) CLUT compares for the image used above. ;)

Again, read the thread you linked.

That's just a general "some things don't get used" response and under the circumstances I don't find it convincing.

I did give some examples. There are plenty of examples of people doing boneheaded things "just because" or because "that's how we do it". What makes you think the programming world is going to be any different, if not worse, given the extra stress?

We're talking about a huge gain that would revolutionise the way PS2 games looked, allow it to beat the Xbox for textures, and wouldn't even require a significant change in the way artists produced assets (like, say, normal mapping did). There's got to be a cost or a tradeoff or a downside somewhere.

First off, I never claimed it was straightforward. It requires you to decompress the textures in batches and have them ready, uncompressed, before they are to be loaded to the GS. Sort of like streaming from main memory to itself. That's not straightforward, but with a little care in the game design it can be done. One approach could be invisible portals, like in many PC games.
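Something like this, in spirit (a toy sketch only; decompress_batch() and upload_and_draw() here are stand-ins, not real PS2 SDK calls):

```c
#include <stdio.h>
#include <string.h>

/* A toy sketch of the batched "decompress, then upload" flow described
   above.  decompress_batch() and upload_and_draw() are stand-ins
   (memcpy / printf), NOT real PS2 SDK calls; the point is only the
   double-buffered staging: while one batch of already-decompressed
   textures is being drawn with, the next batch is prepared in main
   memory, ready to be sent to the GS. */

#define BATCH_BYTES 1024

static void decompress_batch(unsigned char *dst, const unsigned char *src)
{
    memcpy(dst, src, BATCH_BYTES);            /* stand-in for IPU/CPU decode */
}

static void upload_and_draw(const unsigned char *batch, int frame)
{
    printf("frame %d: drawing with batch tagged 0x%02x\n", frame, batch[0]);
}

int main(void)
{
    unsigned char compressed[2][BATCH_BYTES]; /* "compressed" source data */
    unsigned char staging[2][BATCH_BYTES];    /* double-buffered staging  */
    memset(compressed[0], 0xAA, BATCH_BYTES);
    memset(compressed[1], 0xBB, BATCH_BYTES);

    int cur = 0;
    decompress_batch(staging[cur], compressed[cur]);         /* prime batch 0 */

    for (int frame = 0; frame < 4; frame++) {
        decompress_batch(staging[cur ^ 1], compressed[cur ^ 1]); /* prep next  */
        upload_and_draw(staging[cur], frame);                    /* use current */
        cur ^= 1;                                                /* swap        */
    }
    return 0;
}
```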
 
Yeah really.
If the hardware designer has the same target group as the competition, he will match or exceed the competition in the most important ways, texturing here being perhaps the most important of all.

I won't get into the rest of this discussion you guys have here, but I have to say that I don't really agree here.

Every console has its own budget and development priorities. On top of that, 99% of the time the design for a console is done before the company even learns what its competition is up to.

I don't understand how anyone can think a newer system should equal or outperform the competition in all regards when that has hardly been the case in any generation. Even when MS knew the specs for the PS2 and were able to quickly slap the Xbox together in reaction to said specs, the PS2 still had a few advantages in its design.

Maybe I'm mistaken on the points you two are making on this specific subject, but if I understand it correctly, then I can't agree with your reasoning.
 
Not really, unless by texture memory you mean something like "unique texture sample reads". But even then it wouldn't be true.

Perhaps I don't understand what you mean.
Here is why.
Most textures will be tiled or have a colour bit depth lower than the buffer (unless there are special reasons). Most texels will cover one pixel or more.
If you have any kind of texture management you will load only the relevant MIP levels. And with virtual texturing the texel bitrate per frame will be even lower.
Overdraw is still only about two, and most of that is re-tiled textures.
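Roughly, with some ballpark assumptions (overdraw of ~2, about one texel per pixel, half the fetches hitting texels that are already in use, ~4 bits per texel on average), the numbers come out something like this:

```c
#include <stdio.h>

/* Ballpark of the per-frame texture budget argument above.  Overdraw of
   ~2, roughly one texel per pixel, half the fetches re-reading resident
   or tiled texels, and an average of 4 bits per texel are assumptions
   for illustration, not measurements. */
int main(void)
{
    double pixels         = 640.0 * 448.0;
    double overdraw       = 2.0;
    double reuse          = 0.5;  /* fraction of fetches hitting already-used texels */
    double bits_per_texel = 4.0;  /* 8/4/2-bit CLUT textures with mip mapping */

    double unique_texels = pixels * overdraw * (1.0 - reuse);
    double texture_kb    = unique_texels * bits_per_texel / 8.0 / 1024.0;
    double framebuf_kb   = pixels * 32.0 / 8.0 / 1024.0;

    printf("unique texture data touched per frame: ~%.0f KB\n", texture_kb);
    printf("32-bit colour buffer at 640x448:       ~%.0f KB\n", framebuf_kb);
    return 0;
}
```

Which lands comfortably within the "half or less of the framebuffer" rule of thumb quoted above.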
 
I won't get into the rest of this discussion you guys have here, but I have to say that I don't really agree here.

Every console has its own budget and development priorities. On top of that, 99% of the time the design for a console is done before the company even learns what its competition is up to.

I don't understand how anyone can think a newer system should equal or outperform the competition in all regards when that has hardly been the case in any generation. Even when MS knew the specs for the PS2 and were able to quickly slap the Xbox together in reaction to said specs, the PS2 still had a few advantages in its design.

Maybe I'm mistaken on the points you two are making on this specific subject, but if I understand it correctly, then I can't agree with your reasoning.

First off, the Xbox had "advantages" because it was allowed to cost as much as it did. There's no art in making something better if you are allowed to run your business at a loss. Good engineering is making something for $1 that anyone can make for $100.

PS2 was a pretty well engineered system, with many features that were way ahead of the curve, and some that were completely off the beaten track. That leads me to think that the texturing part was not simply fudged or done half-heartedly, but done in the spirit of the rest of the system: a frugal, coherent system where all the parts are multifunctional, flexible and work with the rest of the system to achieve the goal.

Perhaps Sony's greatest mistake was not the hardware, but not providing strong enough libraries and an API for the system (initially none at all), when the Xbox came with DX and the GC with OpenGL.

Plus it was quite apparent what the standard was going to be at the time, by just looking at the latest PC games and then adding some. DC didn't catch Sony by surprise, at least not initially.
 
You are talking as if 2bpp VQ textures were the norm/general case on DC. Let's just recap: that's four 2x2 quads with your colours of choice (minus alpha and cycling, which palette textures can do) to do a picture. That would either be a pseudo-noisy texture or a texture with lots of lines or angles. NOT the general case. And in most cases less useful than 2bpp palette textures, where you still have full control over the individual texels' placement.
You get 256 tiles, at a size of either 2*2 or 4*1 texels each, with 4444, 1555, 565, or YUV422 color formats (and maybe normals, I haven't tested it), in the 2 bpp VQ format. The VQ format can be easily bent to get the equivalent of uncompressed 8 bit, 4 bit, or 2 bit textures (i.e. no color position limitations), although it does mess up bilinear filtering on 8 bit and 4 bit. I've used the VQ format to get a linear (not twiddled) 4 bit texture for a hardware accelerated Genesis renderer I've written.

While I haven't tried it out yet, my guess on how the higher compression levels work would be that you use a VQ compressed 4 bit or 8 bit texture. Since the codebook's tiles are using smaller pixels, you can fit more pixels into one tile (4*2 or 2*4 for 8 bit and 4*4 for 4 bit), increasing compression. It ends up adding an extra layer of indirection to decompression, since now you have to fetch an index value, a codebook entry, and then a palette color. The quality for 4 bit would undoubtedly be pretty terrible for any photoish images and limited to noisy textures like gravel, but 8 bit should work OK for things like brick, grass, dirt, etc. You're limited to the PVR's four 256 color palettes, though. And trying to mipmap those would be a waste of time; plain 2 bpp VQ already has poor mipmap transitions. (This is all assuming my guess on how the more highly compressed VQ textures work is right, of course.)
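For reference, plain 2bpp VQ decode boils down to something like the sketch below (assuming row-major texel order within a codebook entry and ignoring twiddled addressing and the other colour formats):

```c
#include <stdint.h>

/* Minimal sketch of plain 2 bpp VQ decode: a codebook of 256 entries,
   each a 2x2 block of 16-bit texels, and one 8-bit index per 2x2 block
   of the image.  Texel order within an entry is assumed row-major here;
   twiddled addressing and the other colour formats are ignored. */
void vq_decode(const uint16_t codebook[256][4],  /* [entry][4 texels of a 2x2 block] */
               const uint8_t  *indices,          /* (w/2)*(h/2) codebook indices     */
               uint16_t       *out,              /* w*h decoded 16-bit texels        */
               int w, int h)
{
    for (int by = 0; by < h / 2; by++) {
        for (int bx = 0; bx < w / 2; bx++) {
            const uint16_t *e = codebook[indices[by * (w / 2) + bx]];
            out[(by * 2 + 0) * w + (bx * 2 + 0)] = e[0];
            out[(by * 2 + 0) * w + (bx * 2 + 1)] = e[1];
            out[(by * 2 + 1) * w + (bx * 2 + 0)] = e[2];
            out[(by * 2 + 1) * w + (bx * 2 + 1)] = e[3];
        }
    }
}
```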
 
Yes, but that kind of resolution would only be usable with a very limited range of graphical styles and techniques. There would be little to no space for textures, frontbuffer or z-buffer.


If you are doing it right you should never need more texturememory for a single frame, than what you are using for the framebuffer and in most cases half or less. Even if you are using straight 32 or 16bit textures, which you won't on a PS2 (or any never system for that matter). The memory is much better spend upping resolution than colourdepth (at least as it is with texture resolution now).
Most of the time you'll be using 8, 4, or 2bit textures with MIP mapping. You can do the math...
Let's see if my math is correct here.......

So running a (640 x 480 resolution) x 32 bit colour depth = 9830400 bits

/8 = 1228800 Bytes
/1024 = 1200 KB
/1024 = ~1.17 MB (right?)

If the amount of texture memory being used should be in the same size range as the framebuffer, then I can see why Sony went with 4 MB of eDRAM, and why the GC and Wii have texture and framebuffers in the sizes that they do. What were the typical texture sizes used in PS2 games? 64^2 and 128^2 type sizes? How does this ratio compare to current consoles and PC graphics? Can't remember too well, but I think the original Far Cry had a VRAM usage counter for the current frame; IIRC ~40 MB was the usual amount... and I have no idea if that was counting the framebuffer too.

Questions, questions, questions........sorry to ask so much, but this is the best source on the web for such in depth stuff.
 
This thread makes me wish that Digital Foundry would do a last-gen face-off between the DC, PS2, Xbox, and GC again.
 
You get 256 tiles, at a size of either 2*2 or 4*1 texels each, with 4444, 1555, 565, or YUV422 color formats (and maybe normals, I haven't tested it), in the 2 bpp VQ format.

I acknowledged my mistake in an earlier post.

The VQ format can be easily bent to get the equivalent of uncompressed 8 bit, 4 bit, or 2 bit textures (i.e. no color position limitations), although it does mess up bilinear filtering on 8 bit and 4 bit. I've used the VQ format to get a linear (not twiddled) 4 bit texture for a hardware accelerated Genesis renderer I've written.

Well they are both VQ. DC VQ just uses 2x2 codebook entries instead of a single colour.

While I haven't tried it out yet, my guess on how the higher compression levels work would be that you use a VQ compressed 4 bit or 8 bit texture. Since the codebook's tiles are using smaller pixels, you can fit more pixels into one tile (4*2 or 2*4 for 8 bit and 4*4 for 4 bit), increasing compression. It ends up adding an extra layer of indirection to decompression, since now you have to fetch an index value, a codebook entry, and then a palette color. The quality for 4 bit would undoubtedly be pretty terrible for any photoish images and limited to noisy textures like gravel, but 8 bit should work OK for things like brick, grass, dirt, etc. You're limited to the PVR's four 256 color palettes, though. And trying to mipmap those would be a waste of time; plain 2 bpp VQ already has poor mipmap transitions. (This is all assuming my guess on how the more highly compressed VQ textures work is right, of course.)

Are you still talking DC VQ? I've never heard of it doing more than 2x2 texel quads. It would give you a very coarse image I should think, just like with ordinary ordered dither.
 
Let's see if my math is correct here.......

So running a (640 x 480 resolution) x 32 bit colour depth = 9830400 bits

/8 = 1228800 Bytes
/1024 = 1200 KB
/1024 = ~1.17 MB (right?)

If the amount of texture memory being used should be in the same size range as the framebuffer, then I can see why Sony went with 4 MB of eDRAM, and why the GC and Wii have texture and framebuffers in the sizes that they do. What were the typical texture sizes used in PS2 games? 64^2 and 128^2 type sizes? How does this ratio compare to current consoles and PC graphics? Can't remember too well, but I think the original Far Cry had a VRAM usage counter for the current frame; IIRC ~40 MB was the usual amount... and I have no idea if that was counting the framebuffer too.

Questions, questions, questions........sorry to ask so much, but this is the best source on the web for such in depth stuff.
Sony recommends having a slightly smaller buffer to use the memory better. Most well made games do that. ICO for example runs at 512x512, as does MGS2.
So that's 2 MB for the whole back buffer, and ~512 KB for the front buffer, which can safely be 16-bit once blending is done (IIRC the front buffer has to be 640 wide or a multiple of 64...), leaving 1.5 MB for textures and effects buffers (they share some of the space, since most effects are usually done after the rendering).
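Or in code form, just to check the arithmetic (assuming a 32-bit colour buffer plus a 32-bit Z-buffer for the back buffer):

```c
#include <stdio.h>

/* Quick check of the 512x512 layout described above: 32-bit colour +
   32-bit Z for the back buffer, a 16-bit front buffer, out of 4 MB of
   eDRAM.  The 32-bit Z depth is an assumption. */
int main(void)
{
    double mb    = 1024.0 * 1024.0;
    double back  = 512.0 * 512.0 * (4 + 4) / mb;   /* colour + Z */
    double front = 512.0 * 512.0 * 2 / mb;
    printf("back %.2f MB, front %.2f MB, left for textures/effects %.2f MB\n",
           back, front, 4.0 - back - front);
    return 0;
}
```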

The typical size for textures on PS2 was probably 64^2 and 128^2. But that's if you look at all games. The well made ones had up to 512^2 in game. The small textures are probably because of a misconception that you had to fit the texture into one memory page. Or just general unwillingness from the developers towards the architecture (it takes some training to get a texture flow going from frame to frame and not just shovel all the textures for a given scene into VRAM). The thinking was probably something like: "how dare Sony make us take on this extra burden, and learn new ways of doing things that already work perfectly with the ways we already know".
Most PS2 multiplatform games were done grudgingly, with little or no effort put into optimising the irritating PS2 version, which you had to do because of the install base. It would sell truckloads either way.
 