Gamecube/PS2/Xbox stuff... again

Maybe Nintendo wasn't sure what the successor to the Wii should be: continue the philosophy behind the Wii, or return to the path of the GameCube. They chose the former.
........
That didn't stop the PlayStation from dominating the Nintendo 64, nor the PlayStation 2 from dominating the GameCube, which had twice the computational performance.

You point to the GameCube as some brute-force, uber-powerful console. The GC has the lowest paper specs of its generation, and its upgraded iteration (Wii) still has lower specs in many regards when compared to the PS2 and Xbox. In fact, the few games that were released on both Xbox and Wii still look and/or perform better on Xbox. Furthermore, the GC is the only console from its generation not to support 32-bit color, unless you count the Dreamcast (which outputs 24-bit but IIRC renders internally at 32-bit). Point is, the GameCube isn't a super powerhouse; it's simply super efficient. And that's the philosophy Nintendo has carried on with, in an exaggerated manner.

Also, there have been HOW many Mario Karts now? How big can demand for this thing really be?
There will be eight Mario Karts when Mario Kart 8 is released. That's exactly one fewer Mario Kart than there are Call of Duty titles on Xbox 360, and Call of Duty is still selling well. It's also seven fewer games than the 15 Assassin's Creed titles, and that series only debuted on 360/PS3 in 2007. They also sell pretty well.
 
Regardless of what one might say about AC or CoD, though, there's a lot more story and depth in either of those two titles than in friggin' Mario Kart.

Also, I'm curious how you consider the Wii lower spec than the GC compared to the GC's generation of consoles. The Wii has 50% higher clocks, lots more (and faster) RAM, a faster and larger optical drive, plus Wi-Fi and Bluetooth.
 
Regardless of what one might say about AC or CoD, though, there's a lot more story and depth in either of those two titles than in friggin' Mario Kart.

Also, I'm curious how you consider the Wii lower spec than the GC compared to the GC's generation of consoles. The Wii has 50% higher clocks, lots more (and faster) RAM, a faster and larger optical drive, plus Wi-Fi and Bluetooth.

Point 1 is not relevant when complaining that Nintendo has released too many Mario Karts, when younger franchises have many more titles. Furthermore, Mario Kart's real depth, much like CoD's, is in its multiplayer experience, and you have more time between titles (usually an entire console generation for Kart vs. one year for CoD) to actually enjoy that depth.

On the second point, I'm not saying the Wii is weaker than the GC. I'm saying that even with the increased specs, it still doesn't reach the paper specs of the PS2 or OG Xbox, and that titles that appeared on both Xbox and Wii either looked better or ran better on Xbox. There are a few cases of this with games that were on Wii and PS2 as well, mostly in the color department (less banding on PS2). Go play House of the Dead 3 on Xbox and Wii and you'll see the difference. The Xbox version looks better and doesn't stutter. Far Cry is the same way, as is every game I've tried that was released on both platforms.
 
Point 1 is not relevant when complaining that Nintendo has released too many Mario Karts, when younger franchises have many more titles. Furthermore, Mario Kart's real depth, much like CoD's, is in its multiplayer experience, and you have more time between titles (usually an entire console generation for Kart vs. one year for CoD) to actually enjoy that depth.

On the second point, I'm not saying the Wii is weaker than the GC. I'm saying that even with the increased specs, it still doesn't reach the paper specs of the PS2 or OG Xbox, and that titles that appeared on both Xbox and Wii either looked better or ran better on Xbox. There are a few cases of this with games that were on Wii and PS2 as well, mostly in the color department (less banding on PS2). Go play House of the Dead 3 on Xbox and Wii and you'll see the difference. The Xbox version looks better and doesn't stutter. Far Cry is the same way, as is every game I've tried that was released on both platforms.


The bolded part could be down to a number of things and not necessarily "power". The Xbox's feature set was much more industry-standard/friendly, for a start. Power, or lack thereof, isn't always going to be the reason a game looks or performs worse on a particular platform. Power is obviously a factor, but worse performance or less appealing visuals between two versions of the same game can have multiple causes.

I've rarely if ever heard the GC referred to as "less powerful" than the PS2, and I've certainly never heard of the Wii being considered less powerful. With the Xbox, I understood it was mostly the feature set and API that set it apart from the GC.
 
Point 1 is not relevant when complaining that Nintendo has released too many Mario Karts, when younger franchises have many more titles. Furthermore, Mario Kart's real depth, much like CoD's, is in its multiplayer experience, and you have more time between titles (usually an entire console generation for Kart vs. one year for CoD) to actually enjoy that depth.

On the second point, I'm not saying the Wii is weaker than the GC. I'm saying that even with the increased specs, it still doesn't reach the paper specs of the PS2 or OG Xbox, and that titles that appeared on both Xbox and Wii either looked better or ran better on Xbox. There are a few cases of this with games that were on Wii and PS2 as well, mostly in the color department (less banding on PS2). Go play House of the Dead 3 on Xbox and Wii and you'll see the difference. The Xbox version looks better and doesn't stutter. Far Cry is the same way, as is every game I've tried that was released on both platforms.

When Nintendo put out their specs for the GC, they put out real-world numbers. Polygons per second were all the rage back then, but Microsoft and Sony released numbers for raw polygons, meaning no textures applied, no lighting, nothing: just a bunch of bare polygons. Nintendo released a real number for what could be achieved in-game, and some games blew past that 12 million polygons-per-second spec; I believe Rogue Squadron 3 pushed around 20 million polygons per second. The Xbox had a big advantage in having built-in shaders; developers could mimic most of those with the TEV, but they had to write that code themselves, and most didn't bother. All the best-looking games on Wii and GC took advantage of the TEV, but unfortunately most developers didn't utilize it at all.
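For scale, a quick sanity check on those throughput numbers (the 12M and 20M figures are the ones quoted above; the 60 fps frame rate is an assumption for illustration):

```python
# Rough per-frame polygon budgets implied by the quoted throughput figures.
# 12M polys/s was Nintendo's stated in-game GC figure; ~20M polys/s is the
# figure attributed above to Rogue Squadron. 60 fps is assumed here.

def polys_per_frame(polys_per_second: int, fps: int) -> float:
    """Average polygon budget available per rendered frame."""
    return polys_per_second / fps

gc_spec_budget = polys_per_frame(12_000_000, 60)  # 200,000 polys/frame
rogue_budget   = polys_per_frame(20_000_000, 60)  # ~333,333 polys/frame
```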

I agree about Mario Kart. Seeing as it's a once-a-generation game, Nintendo isn't milking the series like other publishers do with franchises like Assassin's Creed and Call of Duty. Mario Kart 8 has some new ideas that should keep it fresh, and the game looks gorgeous. If you dislike Nintendo, you will probably try to discredit Mario Kart, but I am pretty confident that sales will reflect the game's timeless appeal. It will be in the top five system-exclusive games sold this generation, even though the Wii U will sell far fewer units than the other consoles; my opinion, of course.
 
The bolded part could be down to a number of things and not necessarily "power". The Xbox's feature set was much more industry-standard/friendly, for a start. Power, or lack thereof, isn't always going to be the reason a game looks or performs worse on a particular platform. Power is obviously a factor, but worse performance or less appealing visuals between two versions of the same game can have multiple causes.

I've rarely if ever heard the GC referred to as "less powerful" than the PS2, and I've certainly never heard of the Wii being considered less powerful. With the Xbox, I understood it was mostly the feature set and API that set it apart from the GC.
Compare the performance of any part of the GC to the PS2. The PS2 has more fillrate, more bandwidth, more system memory, more CPU, and the most flexible geometry pipeline of that console generation. The Xbox had even more system memory, more fillrate, and more CPU. The GC is, part for part, less powerful than either of those consoles.
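For reference, the paper fillrate figures being compared here work out roughly like this (the clock speeds and pipeline counts below are the commonly cited ones for each part, so treat them as assumptions rather than gospel):

```python
# Peak pixel fillrate = GPU clock (MHz) x pixel pipelines, in Mpixels/s.
# Clocks and pipe counts are the commonly cited figures for each part.

def peak_fillrate_mpix(clock_mhz: int, pipelines: int) -> int:
    return clock_mhz * pipelines

ps2_gs     = peak_fillrate_mpix(147, 16)  # Graphics Synthesizer: 2352 Mpix/s
gc_flipper = peak_fillrate_mpix(162, 4)   # Flipper: 648 Mpix/s
xbox_nv2a  = peak_fillrate_mpix(233, 4)   # NV2A: 932 Mpix/s
```

On paper the GS dwarfs both of the others, which is the "more fillrate" claim being made above.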

When Nintendo put out their specs for the GC, they put out real-world numbers. Polygons per second were all the rage back then, but Microsoft and Sony released numbers for raw polygons, meaning no textures applied, no lighting, nothing: just a bunch of bare polygons. Nintendo released a real number for what could be achieved in-game, and some games blew past that 12 million polygons-per-second spec; I believe Rogue Squadron 3 pushed around 20 million polygons per second. The Xbox had a big advantage in having built-in shaders; developers could mimic most of those with the TEV, but they had to write that code themselves, and most didn't bother. All the best-looking games on Wii and GC took advantage of the TEV, but unfortunately most developers didn't utilize it at all.
It doesn't matter what the press releases state. Look at the design of the parts. Calculate the fillrate, the bandwidth, the compute power, all of that. The GameCube is the weakest of those three consoles overall. The fact that it performs as well as it does is a testament to balanced design, and the fact that it was re-released with minor changes as the Wii further reinforces the good design choices made.

I'm not hating on Nintendo with the claim that the GC is the weakest; it's merely a fact. I'm impressed with its achievement, in fact. And in the scope of this conversation about the Wii U, I think it's only important to understand why Nintendo made the design choices they made here. Nintendo designed the GC with as few bottlenecks as possible, sacrificing paper specs and raw power, and they won a generation with that design. So when it came time for a Wii successor, they designed another system on the same principles. It's good enough to make compelling games on, and I can't think of a single PS4/XBone game that couldn't be ported to the system and still be recognisable as the same game. Certainly we saw NES/SNES/Genesis ports from arcades that were further from the original material.

So carry on about how other companies lie about performance, or how the magic of the TEV makes all the difference even though no one used it (the same argument can be made for the PS2's VU1, btw). It doesn't matter, because when you get down to nuts and bolts the GC is capable of less than the PS2 or Xbox, but was able to achieve more of its potential thanks to having a more cohesive design.

You guys are arguing philosophy (there is no power without achievement) while I'm doing math (X > P > G).
 
Nintendo designed the GC with as few bottlenecks as possible, sacrificing paper specs and raw power, and they won a generation with that design.

Maybe I misunderstood the context, but the GameCube wasn't much more successful than the N64. It certainly was nicely designed hardware for the time, though; some people see it as an equal to the Xbox, thanks to the handful of games that did utilize the TEV for some impressive D3D8-like effects. The GC's relative cheapness also appeared to let Nintendo cut prices easily to chase sales, though they did simplify later GC models in some ways.

The Wii surely benefited from the main hardware being cheap when they had to pack in a costly new controller design. There is also a budget Wii without GC compatibility. The Wii U looks like the same strategy, but they failed to bring an attractive new gimmick this time.
 
Compare the performance of any part of the GC to the PS2. The PS2 has more fillrate, more bandwidth, more system memory, more CPU, and the most flexible geometry pipeline of that console generation.
That's just on paper. "More CPU" in particular: the PS2's CPU was notoriously difficult to program, with its separate, wonky vector co-processors (one of which was basically impossible to use for any substantial in-game work due to design flaws, meaning roughly a third of the chip's peak GFLOPS were more or less wasted), and hugely inefficient due to its in-order design and lack of L2 cache. Meanwhile, the GC had a robust desktop-class CPU with out-of-order execution, very low-latency main RAM (which CPUs love), and a for-the-time large L2. The PS2's GPU also lacked multitexturing, which meant it HAD to have lots of fillrate; the GC, meanwhile, had solid multitexturing support.

I'm not hating on Nintendo with the claim that the GC is the weakest; it's merely a fact.
Categorically untrue in reality. Paper specs quite literally aren't worth the paper they're printed on. Now, a generation AFTER the GameCube, the Wii is hilariously outgunned, sure, but the GameCube was not as weak as you claim, and good GC software backs that up.
 
Also, if I remember right, the PS2 had 32MB of RDRAM plus 4MB embedded, while the GameCube had 24MB of 1T-SRAM, plus another 16MB chunk, plus 3MB embedded; so technically the GameCube had more system RAM, not the other way around.
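Summing the pools exactly as listed in this post:

```python
# Memory pools as listed above, in MB.
ps2_total = 32 + 4       # RDRAM + embedded (GS eDRAM) = 36 MB
gc_total  = 24 + 16 + 3  # 1T-SRAM + 16MB auxiliary pool + embedded = 43 MB

more_total_ram = "GC" if gc_total > ps2_total else "PS2"
```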
 
Categorically untrue in reality. Paper specs quite literally aren't worth the paper they're printed on. Now, a generation AFTER the GameCube, the Wii is hilariously outgunned, sure, but the GameCube was not as weak as you claim, and good GC software backs that up.
Again, you are arguing achievement over power. That's a philosophical argument, not one based on facts and numbers. The GameCube is less capable by every definition than the PS2, but also more efficient and therefore able to achieve more of its capability. You can't just change the definition of stuff to suit your argument. The GameCube is outspec'd, period. It's a solid performer that does well almost everywhere, but the other consoles from its generation are more powerful by almost every metric available.

Good GC software simply backs up my claim of efficiency. The GameCube never outperforms its theoretical limits; it's just easier to get there. There are plenty of PS2 and Xbox games that look as good as or better than what's on GC. Do you think the GameCube could have handled the Doom 3 port the Xbox got? How come Burnout runs better on PS2 than on any other console? Why are all the post-filter effects missing from UFC on GameCube? How come some PS2 and Xbox games support HD resolutions but the GC does not? Why is the texture filtering on GC so much blurrier than on the other two? Cherry-picking some "good GC software" that showcases where the hardware excels is fine, but remember that the PS2 and Xbox also have software that shows off their capabilities.

Also, if I remember right, the PS2 had 32MB of RDRAM plus 4MB embedded, while the GameCube had 24MB of 1T-SRAM, plus another 16MB chunk, plus 3MB embedded; so technically the GameCube had more system RAM, not the other way around.
The 16MB is too slow for games to use for program code, and is mostly used as a disc cache and for audio. That size is pretty excessive for audio, so whatever wasn't used for that or for a disc cache was wasted. It isn't really system memory like the 24MB is; therefore, the PS2 has more system memory.


Look, this is getting way off topic. The point is, regardless of your opinion of the GC, it is spec'd lower than the Xbox or PS2. It's a simple fact. So when someone makes a statement to the effect that Nintendo abandoned the philosophy of the GameCube, with the assumption that the GC was a computational powerhouse, it's wrong. What the GameCube is is super efficient. Think about it. It's smaller than the competition. It has lower paper specs, including the most limited color palette of its generation, and the worst video and audio output of its generation. And it still managed not just to compete graphically but, by your own standards, to surpass the others.

Every Nintendo console since then has been built on the same principles: small, quiet, modest specs with "good enough" performance. Sure, you can argue that the Wii was severely underpowered, but it did do well on SD displays and still managed to get playable, recognizable ports of top franchises with few gameplay sacrifices (Call of Duty comes to mind). It's obvious that the lesson Nintendo took from that generation is that this is what they did "right", just like Sony learned from the PS2 that a bunch of fillrate with a decent CPU and a cluster of co-processors is a successful design. Sometimes you take away the wrong things, and only history can tell you that you are wrong.
 
Gamecube is less capable by every definition than PS2
No, not at all. It has more total memory (some of it less capable, some more so, than the PS2's pool). Its disc drive is faster. It has a VASTLY more feature-rich GPU.

Compared with the Xbox, it also has a faster disc drive, and it has significantly more total bandwidth available across its various memory pools (in addition to cleaner arbitration, since resources are more isolated).

The 16MB is too slow for games to use for program code, and is mostly used as a disc cache and for audio. That size is pretty excessive for audio, so whatever wasn't used for that or for a disc cache was wasted. It isn't really system memory like the 24MB is; therefore, the PS2 has more system memory.
The other consoles needed audio memory as well. If the pool winds up needing less bandwidth than other applications, that doesn't mean "dedicated audio memory" doesn't count; it means the separate "slow" pool is doing a very good job of serving some purposes of general system memory.

Considerations like this make things difficult to quantify, but it strikes me as weird to just throw out the DDR pool like it's nothing. Especially if you're going to act like the GCN's larger pool and the PS2's pool are practically the same thing as each other, when they have fairly different characteristics.

It's obvious that the lesson Nintendo took from that generation is that this is what they did "right", just like Sony learned from the PS2 that a bunch of fillrate with a decent CPU and a cluster of co-processors is a successful design. Sometimes you take away the wrong things, and only history can tell you that you are wrong.
Hehe, Sony most certainly did not include "bunch of fillrate" in "things PS2 made us feel good about."

The PS3's render output performance is pretty terrible; bad enough that it's blatantly obvious in the design of games on the platform.
Just look at how they approach alpha: lots of games seem constantly caught in a nasty tradeoff between alpha-to-coverage dithering, low-resolution blending that makes the image look rather 240p, and just not using much alpha at all. To some extent that's probably a GPU bandwidth issue in general... but seriously, Xenos is the fillrate king of the seventh gen (a generation which in many ways accomplished nowhere near as much as Microsoft and AMD seem to have hoped, but which still gave that component some advantages over RSX).

//=================

Not that you're all that wrong. The GameCube seems to have been designed so that if you shoved a game into its performance characteristics, the components along the graphics pipeline wouldn't bottleneck much on each other, and you'd wind up with relatively little wasted hardware (see: the lack of options compared with the original Xbox or, in some respects, even the PS2).
 
Again, you are arguing achievement over power.
Looks to me as if YOU'RE the one doing that. You're pointing at paper specs and declaring winners, when in reality the opposite might be true! "Power" is meaningless and pointless if it's just figures printed on a sheet.

Gamecube is less capable by every definition than PS2, but also more efficient and therefore able to achieve more of its capability.
That's just not true. How can something be less capable yet more efficient, and still end up less capable by every definition? That's a contradiction of epic proportions.

You can't just change the definition of stuff to suit your argument.
Lol wut, where am I doing that? I'm taking the real world into consideration; paper specs mean fuckall, seriously. If you want to stare yourself blind at them, fine, go ahead, but in the real world things are NOT as clear-cut as you make them out to be. So the PS2 has more main RAM than the GameCube. Okay, now we have an arbitrary distinction between "main RAM" and "other RAM" (which doesn't count at all, according to you), so who exactly is arbitrarily defining things here, really?! A piece of hardware has to be seen as a cohesive whole, or else it just won't work.

Suppose you want to compare the hardware triangles/sec spec instead: well, the PS2 measures a big fat zero there. Is that a fair metric, you think? Is it representative of the console's power as a whole? Of course not. How about hardware sound voices? GC: two. Left, and right. So it can't play sound effects while outputting music? Must be so! :rolleyes:

And who REALLY has more main RAM, you might ask. The PS2 has only 2MB of audio RAM IIRC (which serves as main RAM in PS1 emulation mode), so if it needs more, the rest has to spill over into the main RAM pool. It also lacks texture compression, which the GC features, meaning textures will either look a lot worse or consume a lot more space; the 8MB lead the PS2 has might well end up a deficit vs. the GC instead.

But 32 > 24, so it's dead simple, right?! Just in your book, obviously. :devilish:

CPU FLOPS paper spec: 6.4 vs. 2.1, or something like that. But lacking hardware T&L, the PS2 needed lots of FLOPS just to get anything drawn; that was its design. Most of those FLOPS couldn't be used for anything else if you wanted a decent number of tris on screen, so using them as a bragging metric is pointless! Meanwhile the GC, having tons of FLOPS in its hardware T&L unit, needed less from the CPU to reach the same target. As we know, 6.4 is basically a lie anyway, as nowhere near all of it could realistically be utilized, meaning the chasm between the two is significantly narrower even before factoring in the hardware T&L FLOPS. IN THE REAL WORLD, that is, not on paper. But let's disregard that, as you have said only paper specs count! :rolleyes:
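Taking the post's own figures at face value, the "narrower chasm" point can be put in numbers (the one-third-wasted share for VU0 is the estimate from earlier in the thread, not a measured value):

```python
# Paper CPU FLOPS figures as quoted in the post, in GFLOPS.
ee_paper    = 6.4   # Emotion Engine: FPU + VU0 + VU1 combined
gekko_paper = 2.1   # Gekko alone

# If roughly a third of the EE's peak (the hard-to-use VU0 share, per the
# earlier post) went unused in practice, the realistic gap shrinks:
ee_usable     = ee_paper * (2 / 3)       # ~4.27 GFLOPS
paper_gap     = ee_paper / gekko_paper   # ~3.0x on paper
realistic_gap = ee_usable / gekko_paper  # ~2.0x in practice
```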

Gamecube is outspec'd, period.
No it isn't.
 
Yes, the PS2 has a few flashy specs, fillrate being one, but its feature set is much more limited. For example, the GC has texture compression and the PS2 does not. The GC still had more memory overall, even if the DDR pool was less useful than the 24MB of 1T-SRAM. Keep in mind that thanks to S3TC texture compression, the GC could hold more and higher-quality textures in its 24MB of memory than the PS2 could in its 32MB of Rambus memory. The GC also had 3MB of embedded memory on the GPU: 2MB for the embedded framebuffer and Z-buffer, reducing bandwidth requirements to main memory, and 1MB for the texture cache. Saying the PS2 is more powerful than the GC is like saying an old 300 GFLOP GPU that benchmarks worse than a new 200 GFLOP GPU is somehow still more powerful. You can't just take the specs Sony lists, compare them to the GC, and then dismiss the features the GC has that the PS2 lacks. It's those features that make the difference.
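The texture-memory argument is easy to quantify: S3TC (DXT1) stores each 4x4 block of texels in 8 bytes, a fixed 4:1 ratio against 16-bit texels and 8:1 against 32-bit. These ratios are properties of the standard format, not anything GC-specific:

```python
# S3TC/DXT1 packs a 4x4 texel block into 8 bytes = 0.5 bytes per texel.
def s3tc_size(width: int, height: int) -> int:
    return (width // 4) * (height // 4) * 8

def raw_size(width: int, height: int, bits_per_texel: int) -> int:
    return width * height * bits_per_texel // 8

compressed = s3tc_size(256, 256)     # 32 KiB
raw_16bpp  = raw_size(256, 256, 16)  # 128 KiB, a typical uncompressed format
# So memory holding S3TC textures goes roughly four times as far as the
# same amount holding uncompressed 16bpp textures.
```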

With all that said, Nintendo has definitely stuck with the hardware philosophy they established with the GameCube. Heck, they have stuck with a PPC750-based CPU for three consoles in a row now; obviously changes were made to create the tri-core processor in the Wii U, but it's still a PPC750-based CPU nonetheless. Small, fast embedded memory on the GPU is still how Nintendo approaches its memory hierarchy. The Wii U pays homage to the GC in a lot of ways.
 
This is kind of like arguing that the PS3 has more CPU capability than the PS4 because Cell has a higher peak FLOPS figure than the 8x Jaguar.

Outside of SIMD, the GameCube has a significantly better CPU: much higher clock speed, OoOE, a low-latency L2 cache, and a 32KB L1 dcache instead of only 8KB.

VU0 and VU1 can be powerful for certain tasks, much like the SPEs in Cell, but they're highly constrained, with even tinier local memories and VLIW instructions.
 
Wow, back in the day I thought the PS2 had blurry textures because of a 4MB RAM limitation, but the actual reason is that it didn't support compressed textures. Seriously, any 21st-century GPU without that feature would be laughed out of the room. The PS2 is really fine '90s hardware; too bad they couldn't get that vital feature in.

As for the GC, it was really great, period. Too bad it didn't last very long and had too few games. And the graphics quality was limited by the composite cable it shipped with (it's heinous that this generation of consoles didn't come with an RGB SCART cable in the countries that supported it), though the 24-bit RGBA format made it a bit weird, with noticeable banding.

What made it a regular console, anyway, was that the primary input method was a regular controller, and a rather good one at that (even if somewhat derided as kiddy, etc.).
I must be too old by now. Back in my day, we played console games with controllers, not remotes, tablets, cyberpucks or whatever.
 
I can't add anything to this discussion, as Grall has pretty much knocked it out of the park. But I just wanted to say that although this has gotten rather OT, it's great fun to read, as I remember having these exact same arguments at the time :LOL: It's like stepping back in time to 2002 and it's great!
 
Some points:
The PS2 has texture compression, even if you've convinced yourself it doesn't.
That 16MB pool of memory in the GC is too slow for 90% of actual tasks.
Bringing up things like the TEV and claiming a win for the GC while complaining that no one used it is the exact counterpart to bringing up VU1 on the PS2.
Output isn't the same on GC vs. PS2/Xbox because it's inherently limited by its color palette.
The GameCube's lack of fillrate forced most games to use aggressive filtering. Compare multiplatform games and you'll more often than not see mipmap lines closer to the camera than on PS2/Xbox.

The console has limitations that weren't there on its competition. It's certainly a more balanced design than either the Xbox or the PS2, but both of those consoles could outperform it in many cases. GT4 can output 1080i; can any GameCube games do that? Do any look as good as GT4 in general?

Plus, you guys are taking my comments all wrong. The GameCube is my favorite console of that generation. It's an engineering marvel, a true overachiever in console design, and the fact that the Wii is mostly unchanged is a testament to that. But if you measure each console's capabilities with the same yardstick, the GameCube comes up short.

To put this another way, using an earlier generation: look at the PS1 and N64. The N64 trounces the PS1 in power and feature set, but you can certainly make the claim that many games looked better on PS1, even ones that didn't rely on the massive storage discrepancy. The cream of the crop did look better on N64, of course, but the PS1 was the overachiever there.

I'm curious, Grall: if you aren't judging power by achievement, as you claim, and don't believe in paper specs, how are you measuring the power of a system?
 
I think the GC's efficient design was a win for Nintendo specifically because it was competitive hardware that was still profitable; I think I read it was still profitable at under $100. Nintendo did cost-cut and remove component output from later models, though.

Overall, the fact that arguments endlessly come up comparing these consoles makes it obvious that there were diminishing returns. It's the same with the PS3/360, and it will undoubtedly be so again with the Xbox One/PS4. Exclusives that look amazing on each system muddy the waters in comparisons, and none of the hardware is so much faster as to look like it's clearly in another class.
 
If the PS2 had texture compression, it must have been so terrible that no one ever spoke of it. You need to define "slow" with regard to the 16MB of memory, because latency and bandwidth are two different things, and CPU tasks are typically more latency-bound than bandwidth-bound. Not to mention that, at the very least, it would speed up streaming from the disc: even slow memory is still far faster than the disc drive, allowing developers to cache things there. If every piece of hardware in the PS2 were faster than the GC's, then it would be faster; you're basically saying there is one bottleneck that brings performance way down. You don't gain performance through efficiency; you simply lose less. You can't outperform the sum of your parts. Nintendo hard-locked developers from going past 480p; it's not that the hardware couldn't do more.

Someone correct me if I'm wrong, but didn't 1080i in GT4 use some sort of odd rendering technique where it only rendered every other line per frame, basically lining itself up with the interlaced signal? I'm pretty sure it wasn't actually rendering 2,073,600 pixels.
 
Some points:
The PS2 has texture compression, even if you've convinced yourself it doesn't.
Sort of. From what I've read, the PS2 had some super-cheapo LUT-based "compression" (palettized textures). It certainly allowed things to be smaller than raw bitmaps, but it doesn't sound particularly stellar, and it sounds like it encouraged various sorts of limited visual designs.

That 16MB pool of memory in the GC is too slow for 90% of actual tasks.
Supposing that's the case, it would still be appropriate for the other 10% of tasks.

Allocating different things to different memory regions with different speeds doesn't make slower memory worthless; if it wasn't doing anything for the game, they wouldn't have shoved it into the box.

Bringing up things like the TEV and claiming a win for the GC while complaining that no one used it is the exact counterpart to bringing up VU1 on the PS2.
That's maybe sort of true, though the underlying claims are bollocks either way. Maybe I'm missing something, but the notion that developers straight-up didn't use the TEV comes across as kind of weird to my ears.

The GameCube's lack of fillrate forced most games to use aggressive filtering. Compare multiplatform games and you'll more often than not see mipmap lines closer to the camera than on PS2/Xbox.
You mean texel fillrate in particular? Because the Gamecube's pixel fillrate was fine; probably much better than Xbox's once you account for the bus isolation.

The GameCube supposedly had a pretty good texturing setup aside from the slow raw texel clock, though, so I'm confused as to why mipmap lines would be "closer". I could certainly understand if they were better hidden on Xbox due to trilinear filtering (erm, I can't recall at the moment whether GameCube games tended to use trilinear), but closer mip transitions would imply a severe texture-cache and/or texture-bandwidth issue, which doesn't make much sense to me given the GameCube's memory architecture.
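On the texel-rate point specifically, the commonly cited peak figures (clock x texture units; treat the TMU counts here as assumptions) do show a sizable gap:

```python
# Peak texel rate = GPU clock (MHz) x texture units, in Mtexels/s.
# Figures below are the commonly cited ones for each part.

def peak_texel_rate_mtex(clock_mhz: int, tmus: int) -> int:
    return clock_mhz * tmus

flipper_texels = peak_texel_rate_mtex(162, 4)  # Flipper: 648 Mtexels/s
nv2a_texels    = peak_texel_rate_mtex(233, 8)  # NV2A: 1864 Mtexels/s (4 pipes x 2 TMUs)
```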

can any GameCube games do that? Or do any look as good as GT4 in general?
Lol, plenty of people would argue that many Gamecube games stomp all over GT4.

And some probably come close to GT4 in terms of raw pixel throughput; GT4's pixel rate isn't actually much higher than 480p60's.

Someone correct me if I'm wrong, but didn't 1080i in GT4 use some sort of odd rendering technique where it only rendered every other line per frame, basically lining itself up with the interlaced signal? I'm pretty sure it wasn't actually rendering 2,073,600 pixels.
Yes, it used interlaced field rendering, which from what I can tell was quite common that generation (why do anything else when you're rendering a 60fps game for output to an interlaced 60Hz CRT?).

In addition, it was anamorphic; it certainly wasn't rendering 1920 pixels wide.
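The arithmetic behind field rendering supports the "not much higher than 480p60" point above (the 640-pixel internal width is an assumption for illustration; the game's exact internal resolution isn't established here):

```python
def pixels_per_second(width: int, lines: int, rate_hz: int) -> int:
    return width * lines * rate_hz

# Field rendering draws 540-line fields at 60 fields/s for 1080i output.
full_width_fields = pixels_per_second(1920, 540, 60)  # ~62.2 Mpix/s
anamorphic_fields = pixels_per_second(640, 540, 60)   # ~20.7 Mpix/s (assumed width)
progressive_480p  = pixels_per_second(640, 480, 60)   # ~18.4 Mpix/s

# With an anamorphic internal width, the 1080i field-rendered pixel load is
# only ~12.5% above plain 480p60.
```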
 