Gamecube/PS2/Xbox stuff... again

Cross-platform games AFAIK looked better on GC. Resident Evil 4 was a technical showcase on the GC. The PS2 couldn't reproduce it perfectly due to its limitations; some of the effects were even impossible to pull off. Hence, unlike the GC version, the PS2's cut scenes were actually pre-recorded to get them on par with the GC's real-time cut scenes.
 
PS2 has texture compression, even if you've convinced yourself it doesn't
It actually doesn't have texture compression; I don't have to "convince" myself of things that are true, you know. :) Palettized textures aren't a valid form of texture compression (it's simply colorspace reduction), and using the MJPEG decoder requires a lot of fiddling since the graphics chip can't accept MJPEG textures: you need to manually decompress each texture into a free buffer in main RAM before uploading the full-size uncompressed texture to the graphics chip. Naturally, as a result almost no PS2 games actually bother doing this. (I could challenge you to find a single one which does, but then it'd turn out your google-fu is greater than mine and you'd manage to dredge up a few examples... :LOL: Nevertheless, there weren't many.)
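Roughly, the flow looks something like this (just a sketch; ipu_decode_mjpeg() and dma_upload_to_gs() are made-up placeholder names, not real SDK calls):

Code:
/* Sketch only: why IPU "texture compression" is awkward on PS2.
 * ipu_decode_mjpeg() and dma_upload_to_gs() are placeholder names. */
void upload_compressed_texture(const void *mjpeg_data, int mjpeg_size,
                               void *scratch, int width, int height,
                               unsigned gs_dest)
{
    /* 1. Decompress with the IPU into a scratch buffer in main RAM --
     *    the GS cannot sample compressed data directly.              */
    ipu_decode_mjpeg(mjpeg_data, mjpeg_size, scratch);

    /* 2. Upload the now full-size, uncompressed texels to GS local
     *    memory over the GIF/DMA path, like any ordinary texture.    */
    dma_upload_to_gs(scratch, width * height * 4, gs_dest);

    /* Net effect: IPU decode time + a main-RAM staging buffer + a
     * full-size upload. The saving is disc/RAM storage, not VRAM.    */
}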

That 16MB pool of memory in GC is too slow for 90% of actual tasks
Yeah? It's still there. It's being used, so it has to be taken into consideration. A-RAM is part of the GC's cohesive whole; you can't just dismiss it out of hand for some handwavey reason.

Bringing up things like TEV and claiming a win for GC while complaining no one used it is the exact counterpoint to bringing up VU1 on PS2
No it's not, because VU1 went unused in games since it was simply too inefficient, difficult and cumbersome to use in real life, while the TEV was a comparatively straightforward, functional piece of hardware that largely went unused because developers were unwilling to spend the time/resources to employ it (for whatever reason; likely bad Nintendo documentation/devtools, and/or poor return on investment due to weak GC marketshare and game sales, etc.)

Compare multiplatform games and you'll more often than not see mipmap lines closer to the camera than on PS2/XB
That's unsubstantiated hearsay, and [original research]. Never ever have I heard this claimed about GC before; what are your sources? Anyway, at least you have games on GC using trilinear filtering, which you rarely if ever see on PS2 due to the graphics chip's broken-as-designed MIP map calculation logic. Even any mipmapping at all on PS2 was unusual, since you had to manually "massage" polygon data to make it work properly. Most PS2 games were ant city as far as textures were concerned...

The console has limitations that weren't there on its competition.
AS DO THEY ALL. So what?! Get off your anti-GC bandwagon already. The console was obviously quite well matched against the PS2 overall, in the real world, when not looking only at paper specs. It was just poorly utilized by developers for the most part, often including Nintendo itself.

To put this another way from an earlier generation, look at PS1 and N64. N64 trounces PS1 in power and feature set
Feature set sometimes; while it had many more 3D feature checkboxes (which you often couldn't afford to tick in games due to weak GPU and memory subsystems), it was obviously inferior in storage space, but also in sound, due to the lack of dedicated audio hardware. Furthermore, it also lacked PS's serial port or an equivalent of PS's MDEC hardware.

Power, not so much really. N64 had poor main RAM bandwidth despite monstrous paper figures and even worse memory latency, bad fillrate (made much worse if you enabled Z-buffering) and a really weak CPU, the latter two largely due to the terrible Rambus main memory SGI chose to use. N64 also had a very slow cartridge interface, and carts were ludicrously expensive and cramped for space.

The arguments have gone back and forth over whether N64 or PS was the stronger hardware. In a real-world scenario the N64 probably would have been faster if Nintendo had allowed PS-level image quality in games, but they did not: you had to use perspective-correct texturing, sub-pixel precision rendering and so on, which bogged the system down a lot. Nintendo's SGI-supplied 3D libraries were poorly optimized and even worse documented, meaning performance suffered and some hardware features went underused; the N64 had a color combiner, for example, which hardly anyone ever used. Only a few developers at the end of the console's life were allowed to write their own RCP vector code, and documentation was terrible and the hardware itself finicky and wonky in the extreme.
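For what it's worth, the cost difference is easy to see in pseudo-rasterizer form: affine texturing (PS1-style) interpolates u directly across a span, while perspective-correct texturing interpolates u/w and 1/w and needs a divide at every pixel. A minimal sketch of the two, not actual RDP/RSP code:

Code:
/* Affine (PS1-style): plain linear interpolation of u across a span.
 * Cheap, but textures "swim" on polygons angled away from the camera. */
float affine_u(float u0, float u1, float t)
{
    return u0 + t * (u1 - u0);
}

/* Perspective-correct (required by Nintendo on N64): interpolate u/w
 * and 1/w linearly, then recover u with a divide at every pixel.      */
float perspective_u(float u0, float w0, float u1, float w1, float t)
{
    float uw = u0 / w0 + t * (u1 / w1 - u0 / w0);       /* u/w interpolated */
    float iw = 1.0f / w0 + t * (1.0f / w1 - 1.0f / w0); /* 1/w interpolated */
    return uw / iw;                                     /* per-pixel divide */
}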

One of the few 60fps games on N64, F-Zero X, looks extremely primitive even by the standards of the time. So games generally ran faster and more fluidly (certainly with less fogging!) on PS. They also offered much more varied graphics due to the CD storage format instead of tiny-ass cartridges.

I'm curious, Grall, if you aren't judging power by achievement as you claim, and don't believe in paper specs, how are you measuring the power of a system?
"Power" isn't a precisely quantifiable metric when comparing different closed hardware architectures like games consoles. Even when running CPU benchmarks across different architectures you run into issues where one platform's compiler may be using suspect handwritten optimizations for that specific benchmark that skews the results.

...Not so much these days perhaps, as x86 is the lone survivor of the CPU wars (except in the big iron/supercomputer space, where a few competitors still barely cling to life), but we still see it more or less regularly in GPU benchmarks (see the "quack.exe" debacle and many others over the years), and more recently, on cell phone platforms.

"Power" is largely a judgement call than anything that can be concretely measured IMO, even when it comes to quite similar systems like PS4 and Xbone (where PS4 is pretty much universally seen as the stronger box) will you find instances where one game runs better on the bone I believe. It's almost inevitable.
 
That's unsubstantiated hearsay, and [original research]. Never ever have I heard this claimed about GC before; what are your sources? Anyway, at least you have games on GC using trilinear filtering, which you rarely if ever see on PS2 due to the graphics chip's broken-as-designed MIP map calculation logic. Even any mipmapping at all on PS2 was unusual, since you had to manually "massage" polygon data to make it work properly. Most PS2 games were ant city as far as textures were concerned...

Metal Gear Solid 3 was really noisy; it's consistent with not even being mip-mapped. That was so bad I couldn't play the game beyond the very early stuff.
 
Metal Gear Solid 3 was really noisy; it's consistent with not even being mip-mapped. That was so bad I couldn't play the game beyond the very early stuff.

Hm? I am pretty sure it was mip-mapped. It looked noisy because it had the same green and brown textures pretty much everywhere, with not enough color variation. The lighting was flat and the perception of depth was weak because of it. The image was a smudge of green-brown textures and objects.
 
Are you guys confusing VU1 with VU0? I am under the impression that VU1 in the Emotion Engine plays a much more integral role in the graphics pipeline of the PS2 compared to TEV inside the Gamecube. TEV is more akin to early pixel shaders and not geometry transformation. To say VU1 goes underutilized is perplexing to me.
 
That's unsubstantiated hearsay, and [original research]. Never ever have I heard this claimed about GC before; what are your sources? Anyway, at least you have games on GC using trilinear filtering, which you rarely if ever see on PS2 due to the graphics chip's broken-as-designed MIP map calculation logic. Even any mipmapping at all on PS2 was unusual, since you had to manually "massage" polygon data to make it work properly. Most PS2 games were ant city as far as textures were concerned...
Play the same game on both platforms side by side and tell me I'm wrong. Gamecube games have softer textures and visible mipmap lines.


AS DO THEY ALL. So what?! Get off your anti-GC bandwagon already. The console was obviously quite well matched against the PS2 overall, in the real world, when not looking only at paper specs. It was just poorly utilized by developers for the most part, often including Nintendo itself.
Ummm... the GameCube is my favorite console from that generation. I already said that. I own 70+ GameCube games, 2 systems and 2 Wiis that I play GameCube games on. I can play GameCube games in most of the rooms in my house, excluding bathrooms but including my kitchen. Yeah, I'm a hater. For reals.

You really think it was underutilized by Nintendo? First party games look great.

And again, you've taken every comment I've made out of context. Instead of going into a rage, slow down and read what I'm saying. The GameCube has lower specs than the Xbox or PS2, but overachieved while the latter 2 underachieved. It's a compliment. You want to paint me in a box and act like I'm a hater. But the truth is that I can look at the system objectively and see its shortcomings, and respect its achievements.


Power, not so much really. N64 had poor main RAM bandwidth despite monstrous paper figures and even worse memory latency, bad fillrate (made much worse if you enabled Z-buffering) and a really weak CPU, the latter two largely due to the terrible Rambus main memory SGI chose to use. N64 also had a very slow cartridge interface, and carts were ludicrously expensive and cramped for space.
Really? Slower memory bandwidth than PS1? Ummm... No. RDRAM's main advantage is bandwidth, at the cost of latency. "Very slow cartridge interface"? Compared to CDs? What are you talking about? It looks like you googled for N64's issues, but forgot to put them in the proper context of its competition. At this point you are just being contrary.

It actually doesn't have texture compression; I don't have to "convince" myself of things that are true, you know. :) Palettized textures aren't a valid form of texture compression (it's simply colorspace reduction), and using the MJPEG decoder requires a lot of fiddling since the graphics chip can't accept MJPEG textures

It's still texture compression, even if you don't like it. You can tell yourself it isn't, but it is, by definition, texture compression. In hardware. Even if you don't like it, or devs didn't use it. Again, you can't just change the definition of things just because they don't fit your argument.
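And to put a number on it, the saving is real whatever you call it: a 256x256 texture stored as 8-bit palette indices plus a 256-entry CLUT is about a quarter of the size of the same texture in 32-bit color. Back-of-the-envelope:

Code:
/* Back-of-the-envelope numbers for one 256x256 texture: */
int truecolor_bytes = 256 * 256 * 4;  /* 262144 bytes at 32 bpp        */
int index_bytes     = 256 * 256 * 1;  /*  65536 bytes of 8-bit indices */
int clut_bytes      = 256 * 4;        /*   1024 bytes of palette       */
/* Palettized total: 66560 bytes -- roughly 4x smaller in storage,
 * whatever you prefer to call the technique.                          */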

"Power" is largely a judgement call than anything that can be concretely measured IMO, even when it comes to quite similar systems like PS4 and Xbone (where PS4 is pretty much universally seen as the stronger box) will you find instances where one game runs better on the bone I believe. It's almost inevitable.
So "Power" means whatever you feel like it means whenever you feel like it. OK, I got it. You define what you want when you want to suit your argument.
 
Are you guys confusing VU1 with VU0? I am under the impression that VU1 in the Emotion Engine plays a much more integral role in the graphics pipeline of the PS2 compared to TEV inside the Gamecube. TEV is more akin to early pixel shaders and not geometry transformation. To say VU1 goes underutilized is perplexing to me.

That's my fault, I brought the VUs into the discussion with the wrong number attached. VU0 is coupled with the CPU while VU1 feeds the GS. On a somewhat related note, TEV is the part of the rendering pipeline that contains texture lookups, so any GameCube game that has textures uses TEV; it's pretty integral.
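As a rough mental model (simplified; the real hardware adds bias, scale, clamping and up to 16 chainable stages), each TEV stage is a fixed-function blend of up to four inputs taken from texture lookups, rasterized colors, registers or constants, something like:

Code:
/* Simplified model of one TEV stage, per pixel and per color channel. */
float tev_stage(float a, float b, float c, float d)
{
    /* output = d + lerp(a, b, c); hardware adds bias, scale and clamp */
    return d + (a * (1.0f - c) + b * c);
}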
 
It's still texture compression, even if you don't like it. You can tell yourself it isn't, but it is, by definition, texture compression. In hardware. Even if you don't like it, or devs didn't use it. Again, you can't just change the definition of things just because they don't fit your argument.
It's certainly nice that the Amiga and other old computers had hardware texture compression and a compressed frame buffer, then. ;)
Metal Gear Solid 3 was really noisy; it's consistent with not even being mip-mapped. That was so bad I couldn't play the game beyond the very early stuff.
If I remember correctly, PS2 mipmapping didn't account for surface slope, just distance, and thus mipmaps were switched too late for most surfaces.
Would have loved to see some "pre-pre-filtered"(?) mipmaps on the system (a small amount of extra blur added to each mipmap level to help with roping artifacts).
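To illustrate the difference (a sketch of the idea, not GS documentation; the function names are just illustrative): slope-aware LOD looks at how fast the texture coordinates change per screen pixel, while a distance-only scheme looks at depth alone, so steeply angled surfaces stay on too sharp a mip level and shimmer.

Code:
#include <math.h>

/* Slope-aware LOD (derivative-based, what later hardware does): how far
 * do the texture coordinates move per screen pixel? This grows on
 * surfaces seen at a grazing angle, so the mip level drops as it should. */
float lod_slope_aware(float dudx, float dvdx, float dudy, float dvdy)
{
    float rx = sqrtf(dudx * dudx + dvdx * dvdx);
    float ry = sqrtf(dudy * dudy + dvdy * dvdy);
    return log2f(fmaxf(rx, ry));
}

/* Distance-only LOD (the behavior described above): a function of depth
 * alone, blind to slope, so a floor stretching into the distance stays
 * on mip 0 far too long and shimmers.                                   */
float lod_distance_only(float z, float z_base)
{
    return log2f(z / z_base);
}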
 
Interesting. It's possible that texture LOD can be adjusted to give a sharp and noisy image, as well.
Suffice to say I don't like "sharpen" filters much either (the ones built into TVs); in general I like smooth and blurry.

The N64 was especially great in getting everything right, e.g. texture filtering, perspective correction, alpha etc. Voodoo1 was similar. They descend from the GPU stuff SGI was doing in 1992, I think.
Of course the N64 is too blurry and slow; it would be really nice if it had a faster CPU and was able to use higher-res textures.
 
I remember a lot of PS2 games looking extremely shimmery and jaggy around a lot of edges. Don't get me wrong, there are a lot of games on the PS2 I remember being very impressed by. I know jaggies were very common on all 3 consoles that gen, but having owned all 3 at the same time I remember a lot of multiplatform games looking much better on the GC and Xbox as far as IQ is concerned. That gen of consoles was one of the best times in gaming for me. I had a widescreen 720p CRT and had the component cables for all 3 systems; I thought progressive scan was the bee's knees.
 
FFX was the most eye-watering game I saw on PS2. Zero texture filtering on high-noise grass textures. It was like watching green static!
 
You need to define "slow" with regard to the 16MB of memory, because latency and bandwidth are two different things, and CPU tasks are typically more latency-bound than bandwidth-bound. Not to mention that, at the very least, it would speed up streaming from the disc; even slow memory would still be faster than the disc drive, allowing them to cache things in the slower memory.

I'm pretty sure A-RAM has an 8-bit bus at about 80MHz, giving it 80MB/s bandwidth. Compare that to 2.7GB/s from its main memory. Faster than the 2-3MB/s from disc, sure. Like I said, it ended up being a disc cache more than anything. Oh, and the CPU has no direct access to A-RAM. It can copy to and from system memory but cannot run code from it, so latency is irrelevant.

Before Grall comes and tells everyone I'm making bold claims from hearsay, here's a couple of links:
http://blog.lse.epita.fr/articles/38-emulating-the-gamecube-audio-processing-in-dolphin.html
ARAM is an auxiliary memory which is used to store sound data. The CPU cannot access ARAM directly, it can only read/write blocks of data from RAM to ARAM (or from ARAM to RAM) using DMA requests. As ARAM is quite large, games often use it to store more than sound data

http://www.gc-linux.org/wiki/Memory_and_Filesystems
In addition to its 24M of system RAM the GameCube contains 16M that is normally used to hold audio data buffers. The CPU cannot address the audio RAM directly so this memory cannot be used like normal system memory but it is possible to use the audio RAM as a swap device.
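In other words, anything living in ARAM has to be staged through main RAM before the CPU (or GX) can touch it, roughly like this (aram_dma_read() is a placeholder name, not the actual SDK call):

Code:
/* Sketch of the access pattern described in the quotes above.
 * aram_dma_read() is a placeholder name, not a real SDK call.        */
void load_from_aram(void *main_ram_dst, unsigned aram_src, unsigned size)
{
    /* The CPU cannot dereference an ARAM address; the only way in or
     * out is a block DMA between ARAM and main memory.               */
    aram_dma_read(main_ram_dst, aram_src, size);

    /* Only after the DMA completes can the CPU (or GX) use the data. */
}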
 
Please note that I did not once say the GC CPU could execute stuff out of A-RAM, just that you can't disregard it being there in the first place. Thank you for not putting words in my mouth in the future!
 
Some insight from an actual gamedev:
http://www.gamasutra.com/view/feature/131402/postmortem_factor_5s_star_wars_.php?print=1

Dealing with Gamecube's two-part memory architecture, which has 24MB of "fast" (very fast, actually) RAM and 16MB of "slow" RAM that is pretty close to a small ROM cartridge in terms of access and speed, can be a bit of a hassle. This is especially true if one has to make multiple subsystems - implemented by multiple programmers - using the ARAM at the same time. Using the main processor's virtual memory unit, we mapped a section of the ARAM area into the normal address space. We ended up using this dynamic mapping system to avoid having to deal with code overlays by moving code into this virtual memory area, as well as to make access to data in ARAM much easier and more flexible than with manual ARAM DMA transfers.

The time implementing this system was well spent. The last weeks of development saw a number of situations in which we would have lost hours and hours implementing specialized code, but instead the virtual memory system took care of all of them nicely.
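Conceptually what they describe is a small software-paged virtual memory: reserve a range of the address space, leave it unmapped, and have the page-fault handler DMA the touched page in from ARAM. A very rough sketch of the idea, with made-up names throughout (this is not the SDK and not their code):

Code:
/* Very rough sketch of a software-paged "ARAM as virtual memory" scheme
 * along the lines described above. Everything here is a placeholder:
 * ARAM_WINDOW_BASE, unmap_victim(), aram_dma_read() and map_page() are
 * invented names, not SDK calls.                                       */
#define PAGE_SIZE        4096u
#define ARAM_WINDOW_BASE 0x7E000000u   /* arbitrary example address */

void dsi_fault_handler(unsigned long fault_addr)
{
    /* Which ARAM-backed page did the CPU just touch? */
    unsigned long page      = fault_addr & ~(unsigned long)(PAGE_SIZE - 1);
    unsigned long aram_offs = page - ARAM_WINDOW_BASE;

    /* Free up a physical page in the small main-RAM pool, writing the
     * evicted page back to ARAM if it was dirty.                      */
    void *frame = unmap_victim();

    /* DMA the faulting page in from ARAM, then map it so the
     * interrupted load/store can simply be restarted.                 */
    aram_dma_read(frame, aram_offs, PAGE_SIZE);
    map_page(page, frame);
}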

http://www.nintendoworldreport.com/...julian-eggebrecht-all-about-rogue-squadron-ii
Julian Eggebrecht: The only RAM-expansion that would be possible would be more ARAM and 16 Megs of that are really enough. We only use 10 Megabytes of it for sound and the rest for game data and program code.
 
Please note that I did not once say the GC CPU could execute stuff out of A-RAM, just that you can't disregard it being there in the first place. Thank you for not putting words in my mouth in the future!

I never said it didn't exist, simply that you couldn't use it for "90% of tasks", that it was "too slow", and that it wasn't "system memory". Thank you for not putting words in my mouth in the future!
 
I know jaggies were very common on all 3 consoles that gen, but having owned all 3 at the same time I remember a lot of multiplatform games looking much better on the GC and Xbox as far as IQ is concerned.
I bust out the Xbox occasionally and I have built up quite a library. I've noticed that The Thing uses Quincunx 2x MSAA, and there are some that appear to use 4x MSAA, like Turok Evolution. NV2x-era MSAA wasn't as good as what we have today, but it's still very nice. Quincunx is of course controversial because of the blur, but it is an interesting effect. There are also the few 720p games that are really special for that era, like Enter the Matrix.

I don't know if I've ever seen anisotropic texture filtering, but I doubt it because it had a very large performance hit on that generation of NVIDIA GPUs. I think trilinear filtering is used though.
 
I found Quincunx to be very useful for playing Doom 3 on the PC.
There's also the hidden 4x 9-tap mode which combines that "wrong" 4x MSAA with even more blur. Not so useful.
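For reference, the Quincunx blur comes straight from the resolve filter: each pixel is reconstructed from its own center sample plus four samples shared with neighboring pixels, with the commonly cited weights being 1/2 for the center and 1/8 for each corner. A sketch of such a resolve:

Code:
/* Quincunx-style resolve: center sample weighted 1/2, the four corner
 * samples 1/8 each (weights sum to 1). This is where the mild
 * full-screen blur comes from compared to an ordinary 2x box resolve. */
float quincunx_resolve(float center, float c00, float c10, float c01, float c11)
{
    return 0.5f * center + 0.125f * (c00 + c10 + c01 + c11);
}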
 
Comparing the Xbox to the PS2... to me it feels almost like different generations.
Compare some of the multiplatform games made with emphasis on the Xbox, like Splinter Cell Chaos Theory; it's not even funny.

Now take games made primarily for the PS2, like Burnout, or let's say Black (an impressive-looking FPS for the PS2): there were no big compromises for the Xbox in image quality or performance.

No PS2/GC game comes even close to Half-Life 2, Doom 3 or Chronicles of Riddick, which by 2005 were still some of the best-looking PC games around, and the Xbox could run acceptable versions of them.

When people were too ambitious with the PS2 they ended up with games like SOTC: interesting looking, but with too much aliasing, low-quality textures and 10 FPS. It could have been so much better on the Xbox hardware.

Now going back to the GC: the GC didn't do as well as the Xbox running PS2 ports as far as I know, but if you look at perhaps one of the best-looking GC games ported to the PS2, RE4, the difference is also pretty big. They had to make so many changes (for the worse) for the game to work on the PS2. I doubt the same amount of effort was made for the PS2-to-GC ports, simply because the PS2 was the console making the most money.

I think overall the GC was clearly a more capable system compared to the PS2; I find it interesting that people are even debating it.
 