Strengths and weaknesses of GameCube relative to its peers *spawn

Theoretically, if the 1T-SRAM is on-die it should work for the framebuffer. Perhaps we could look at design docs. The big hint here is that Wii U also uses 1T-SRAM for its eDRAM.
I remember the GameCube has a color depth limitation that made banding a problem. A 16-bit color depth maximum when using alpha blending? That's something I remember noticing when I had a Cube. That, and the occasional lack of mip mapping (memory conservation?) causing distant texture aliasing.
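(Purely as an illustration of why reduced colour depth shows up as banding: quantising a smooth gradient to fewer bits per channel leaves far fewer distinct shades. The sketch below is generic, not GameCube-specific, and the bit depths are just examples.)

```python
# Illustrative only: why fewer bits per colour channel produce visible banding.
def quantize(value_8bit: int, bits: int) -> int:
    """Reduce an 8-bit channel value to 'bits' bits, then expand back to 0-255."""
    levels = (1 << bits) - 1
    step = round(value_8bit / 255 * levels)   # nearest representable level
    return round(step / levels * 255)         # back to 8-bit range for comparison

gradient = range(256)                         # a smooth 8-bit ramp
for bits in (8, 6, 5):
    distinct = len({quantize(v, bits) for v in gradient})
    print(f"{bits} bits per channel: {distinct} distinct shades across the ramp")
# 8 bits keeps all 256 shades; 6 bits collapses to 64 and 5 bits to 32,
# which is where the visible "bands" in gradients come from.
```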

It also only has supersample AA, which isn't exactly efficient. Xbox games sometimes use 2x MSAA, 4x MSAA, or 2x Quincunx. Technically NV2A has all of the AA modes from the desktop GeForce 3/4.
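To put rough numbers on why supersampling is the expensive option: SSAA shades every subsample, while MSAA shades once per pixel and only duplicates the coverage/depth/colour storage. A simplified cost sketch (my own toy model, not anything from the NV2A or Flipper docs):

```python
# Toy cost model: relative per-pixel work for supersampling vs multisampling.
# SSAA runs the whole pipeline per subsample; MSAA shades once per pixel but
# still stores one colour/Z slot per sample (real hardware differs in detail).
def ssaa_cost(samples: int) -> tuple[int, int]:
    shading = samples        # every subsample is fully shaded
    storage = samples        # and written to the framebuffer
    return shading, storage

def msaa_cost(samples: int) -> tuple[int, int]:
    shading = 1              # one shader evaluation per covered pixel
    storage = samples        # coverage/colour/Z kept per sample
    return shading, storage

for n in (2, 4):
    print(f"{n}x SSAA: shading x{ssaa_cost(n)[0]}, framebuffer x{ssaa_cost(n)[1]}")
    print(f"{n}x MSAA: shading x{msaa_cost(n)[0]}, framebuffer x{msaa_cost(n)[1]}")
```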

Indeed, AA is a weak point in the Cube's design; the upside is that all the power went to the details. I can't say I have any issues with the IQ of either Gekko-based machine on my HDTV (assuming you don't force 16:9 on the Cube), only with the N64 and PS2, the latter being the result of weird buffer sizes and often a lack of mip maps.

I have no idea which Xbox games had MSAA or not. Any examples?
 
I have no idea which Xbox games had MSAA or not. Any examples?
Offhand I think Predator: Concrete Jungle, The Thing, and Turok: Evolution use it. Maybe there is a list somewhere. It's something I notice sometimes when trying games on it. I also once had a Quake 2 port on it that could have any of the AA modes enabled and run at 720p.

By the way, I also added to my previous post that the Splinter Cell games use NVidia Shadow Buffers for real-time shadowing. I'm not sure if other games use that tech. There are some details regarding that (somewhat PC related) here: http://www.rage3d.com/board/showpost.php?s=262638866a08bf061ce2764f9d4b90da&p=1332266711&postcount=1
 
It's really not a bottleneck for GC or Wii at 480p, only in this HD hypothetical, which frankly is a waste on all 6th-gen machines.

Wii can output 854x480 true widescreen; that's not HD, but it is more than the GC was able to do. I could not find the overlord quote, but from what I can tell of the Wii, the GDDR3 is external while the 1T-SRAM is on the GPU die itself, unlike the Cube, which probably makes a difference?
The 1T-SRAM more or less probably serves the same function it did on the GameCube, and is probably 50% faster like the CPU and GPU. Being on the same die may help with some latency, but the interface is probably the same width; it had to be fully hardware backwards compatible with GCN games after all. The GDDR3 is obviously faster than whatever cheap 81 MB/s DRAM was used for the GCN's data/sound cache.

Regardless, the graphics die acts as the system hub/northbridge in both systems and all the memory is connected to it. The CPU has fairly direct access to the 1T-SRAM and GDDR3 for game execution, so theoretically you would use the 1T-SRAM as graphics asset memory and the GDDR3 as system memory, if we look at it from a traditional dedicated PC graphics point of view.
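For what it's worth, the "50% faster" rule of thumb lines up with the commonly cited (but never officially published) clock figures:

```python
# Commonly cited clock speeds in MHz (Nintendo never published official Wii
# figures, so treat these as community numbers rather than spec-sheet facts).
gamecube = {"CPU (Gekko -> Broadway)": 485, "GPU (Flipper -> Hollywood)": 162}
wii      = {"CPU (Gekko -> Broadway)": 729, "GPU (Flipper -> Hollywood)": 243}

for part in gamecube:
    print(f"{part}: {wii[part] / gamecube[part]:.2f}x")
# Both ratios come out to roughly 1.5x, i.e. the usual "50% faster" figure.
```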
 
GDDR3 could be used for whatever graphics tasks, except apparently the framebuffer, but I'm still not convinced on that with no hard evidence. It doesn't have to work the same as the GC, I mean; the Xbox One X runs Xbox One games with no eDRAM. As it stands this is all guesswork, including my end of it.

I suppose I'll have to research this when I have time.
 
I have a feeling that if it were somehow possible to output 720p on the Wii, the homebrew people would have used it. That 480p limit was a bummer for emulators and video playback.
 
Fun fact: the capacity difference between the 360's DVDs and the PS3's Blu-ray was greater than the gap between DVD and the Cube's mini discs, and developers like Treyarch said that limited the texture quality in COD: Black Ops. I have never read any devs comment on the mini discs; it just seems to be a popular thing to say was an issue.
Is there a link where Treyarch said that? I did a quick search and found nothing, but I did find the Eurogamer face-offs for BLOPS 1 and 2 and found that the disc size for 360 is less than the 9ish GB limit of a dual-layer DVD (they are about 7 GB apiece) while the PS3 versions are 15+ GB and were visually inferior. I'm pretty curious what is taking up all the space, to be honest, because it isn't like there was enough RAM to spare to use massive textures on PS3, and there isn't a visible advantage anyway. Maybe they had redundant data strategically located on the disc to speed up load times.

Anyway, there are GameCube games that have visibly inferior textures, more compressed audio, and more compressed videos. Plus, there are more than a few multi-disc GameCube games that are single-disc on XB or PS2. Space was clearly an issue in some cases. But again, I think another issue was developers simply compressing textures wholesale, with little regard to the end quality. I think that best explains the texture quality differences between GC exclusives and multiplatform titles in general, because exclusive games generally had exceptional texture quality when compared to PS2 and XB.
 
GDDR3 could be used for whatever graphics tasks, except apparently the framebuffer, but I'm still not convinced on that with no hard evidence. It doesn't have to work the same as the GC, I mean; the Xbox One X runs Xbox One games with no eDRAM. As it stands this is all guesswork, including my end of it.

I suppose I'll have to research this when I have time.
I'm assuming you mean no ESRAM. The X's main memory bandwidth is 300+ GB/s. ESRAM is 100 GB/s, with reports that it was 100 read and 100 write simultaneously, and the APU could also access main system memory at about 70 GB/s. So in the best-case scenario the Xbox One has 270 GB/s of memory bandwidth, if you are reading and writing ESRAM plus reading or writing main system memory. The X's 300+ GB/s exceeds that. I would imagine that Microsoft simply sets aside 32 MB of system memory to act as ESRAM when running older games that aren't X enhanced.
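Spelling out that best-case sum with the numbers above (all of these are peak figures, not sustained throughput; the 326 GB/s for the X is the usually quoted spec):

```python
# Peak bandwidth figures in GB/s (best-case numbers, not sustained throughput).
esram_read  = 100   # Xbox One ESRAM read
esram_write = 100   # Xbox One ESRAM write (claimed simultaneous with reads)
ddr3_main   = 70    # Xbox One main DDR3, roughly
one_x_gddr5 = 326   # Xbox One X main GDDR5, commonly quoted spec

xbox_one_best_case = esram_read + esram_write + ddr3_main
print(f"Xbox One, everything in flight: {xbox_one_best_case} GB/s")  # 270
print(f"Xbox One X, single pool:        {one_x_gddr5} GB/s")
# Even the One's best case sits below the X's single unified pool, which is
# presumably why dropping the ESRAM isn't a problem for backwards compatibility.
```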

Regarding Wii running at higher resolutions: the embedded framebuffer is too small to fit them, and there isn't any hardware to assist with tiling, which means there is a performance penalty. Or, if your theory is correct, you could render directly to the GDDR3. What kind of bandwidth does that have? AFAIK the bus width and clock rate have never been released. What we do know is that the GC's embedded framebuffer bandwidth is about 18 GB/s, and if it scales linearly with Hollywood's clock increase over Flipper, the embedded framebuffer bandwidth would be 27 GB/s. For comparison, the PS3 RSX's GDDR3 bandwidth is 22 GB/s, and that's a 128-bit bus at 650 MHz. I think it's likely Hollywood maintains the 64-bit bus from Flipper, and based on speculated clock speeds that would put Wii's Hollywood-to-GDDR3 bandwidth at 7.6 GB/s. Fairly slow but not exclusionary for HD rendering, though a far cry from the embedded framebuffer, even if that is limited to Flipper's spec. Either way you would have a performance penalty beyond the normal scaling of fill rate and bandwidth for running at higher resolutions.

And then... all of the video output is handled by a custom analog video encoder, and knowing Nintendo, it probably doesn't support resolutions higher than 480p. So assuming you have the performance to draw an image at higher than SD resolution, it's likely that the video encoder either doesn't accept an image that large as input or would resize it on output.
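To make the arithmetic behind those estimates explicit (everything Wii-side is speculation, as said; this just back-calculates from the figures in the post):

```python
# Back-calculating the speculative figures above; nothing here is official.
def bandwidth_gb_s(bus_bits: int, mega_transfers: float) -> float:
    """Peak bandwidth = bus width in bytes x effective transfer rate."""
    return (bus_bits / 8) * mega_transfers * 1e6 / 1e9

# GameCube EFB figure scaled linearly with the assumed 162 -> 243 MHz clock bump.
flipper_efb = 18.0
print(f"Hollywood EFB (scaled): {flipper_efb * 243 / 162:.0f} GB/s")   # ~27

# A 7.6 GB/s guess for the external GDDR3 on a 64-bit bus implies:
implied_mts = 7.6e9 / (64 / 8) / 1e6
print(f"Implied GDDR3 rate:     {implied_mts:.0f} MT/s (~{implied_mts / 2:.0f} MHz DDR)")
print(f"Sanity check:           {bandwidth_gb_s(64, implied_mts):.1f} GB/s")
```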
 
@see colon No, I can't find the Treyarch quote, and I'm honestly pretty irritated that I can't find any of this shit anymore. I need to start screenshotting this stuff lol

What I will say is that it wasn't a press statement; it was one programmer in a forum post. And the 360 at the time only had 6.8 GB of usable space, while a real dual-layer DVD holds... 8.47? Let's just call it 8.5. A year later MS optimized the copy protection on their DVDs and got that up to 7.8 GB for the Gears 3 release. Maybe Epic got on MS's case again, hah. I remember when Lords of Shadow came out, Mercury Steam said they made the PS3 the lead platform (we know the COD games were 360-lead titles) and made a point of using the PS3's Blu-ray for texture variety, and the final game is 2 discs on 360.
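As a quick sanity check on those disc numbers (capacities as quoted above; the reserved space was copy protection and system overhead, and varied a bit by title):

```python
# Dual-layer DVD capacity vs what 360 games could actually use (GB), per the
# figures above; the reserved space was copy protection and system overhead.
dual_layer_dvd = 8.5
usable = {"early 360 titles": 6.8, "after the format update (Gears 3 era)": 7.8}

for label, gb in usable.items():
    print(f"{label}: {gb} GB usable, {gb / dual_layer_dvd:.0%} of the disc")
# Roughly 80% of the disc early on, climbing to about 92% later.
```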

Somewhere I saw that the Wii's GDDR3 is 6.4 GB/s; in other words, it'd be exactly as fast as the Xbox's pool. But maybe you're right. I'm not sure of the exact speed, though I'd venture to guess Nintendo were looking at the Xbox's clocks as a target for the Wii, like they aimed for the 360's raw GPU grunt with the Wii U.

But the 1T-SRAM is 2.6 GB/s, and the eDRAM on the Cube, as you say, was 2 MB at 7.8 GB/s for the framebuffer and 1 MB at 10.4 GB/s for the texture cache. Combined with the GDDR3, that is effectively a crap ton more bandwidth than the single pool the Xbox has shared by everything. And I am wondering if the eDRAM scales with the Wii's clocks as well, but I have no idea about that.
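Adding those pools up with the figures from this thread (the Wii-side numbers are guesses, and an aggregate of separate pools isn't directly comparable to one shared pool, but it makes the point):

```python
# Per-pool bandwidth figures quoted in this thread (GB/s); the Wii-side values
# are guesses, and the eDRAM numbers are the GameCube ones without any scaling.
wii_pools = {
    "1T-SRAM (MEM1)":       2.6,
    "embedded framebuffer": 7.8,
    "texture cache":       10.4,
    "GDDR3 (MEM2)":         6.4,
}
xbox_unified = 6.4  # original Xbox: one pool shared by CPU, GPU and framebuffer

print(f"Wii aggregate: {sum(wii_pools.values()):.1f} GB/s across separate pools")
print(f"Xbox shared:   {xbox_unified} GB/s for everything")
```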

 
The OG Xbox was like a half gen or a full gen ahead of the PS2.
Have you any idea how ridiculous a statement that is? The generation after PS2 was XB360 and PS3. You're saying the OG Xbox was as good as PS360??

XB was in some ways better than PS2, but nothing like a half generation. Even at twice as powerful, it'd be a 'quarter generation' ahead going by a factor of 8x for a generational advance. It's silly to compare the performance of machines a year apart with those 6+ years apart (unless you're talking Wii ;)).
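Whether 2x counts as a quarter or a third of an 8x generation depends on whether you read the gap linearly or in terms of doublings; here are both readings with that purely illustrative 8x factor:

```python
import math

# Illustrative only: how far a 2x advantage gets you if a full generation is 8x.
gen_factor, advantage = 8, 2

linear = advantage / gen_factor                           # 2 of the 8x "budget"
logarithmic = math.log(advantage) / math.log(gen_factor)  # doublings vs doublings

print(f"Linear reading:      {linear:.2f} of a generation")       # 0.25
print(f"Logarithmic reading: {logarithmic:.2f} of a generation")  # ~0.33
# Either way, 2x falls well short of "half a generation" by this yardstick.
```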
 
Have you any idea how ridiculous a statement that is? The generation after PS2 was XB360 and PS3. You're saying the OG Xbox was as good as PS360??

XB was in some ways better than PS2, but nothing like a half generation. Even at twice as powerful, it'd be a 'quarter generation' ahead going by a factor of 8x for a generational advance. It's silly to compare the performance of machines a year apart with those 6+ years apart (unless you're talking Wii ;)).

8x seems kinda like lowballing it there, unless you're talking PS4 > 360. PS1 to PS2, or N64 to Cube, seems massively more than that, no? Actually, just looking at the GPU and ignoring the wimpy Jaguar cores, the PS4 was roughly a 9x leap over the 360.
 
Yeah, 8x can easily be lowballing it*. It's just an illustrative figure.

* How can one even measure overall 'power' as the sum of its parts? Is twice the clock speed and twice the RAM twice the power, or 2x2 = 4x the power?
 
Yeah, 8x can easily be lowballing it*. It's just an illustrative figure.

* How can one even measure overall 'power' as the sum of its parts? Is twice the clock speed and twice the RAM twice the power, or 2x2 = 4x the power?
Well, in the 360 and PS4 example I was using teraflops as a metric, but perhaps it's less than 9x if we look at, say... memory bandwidth. And I had to ignore Jaguar in that metric as well, so yeah, not very accurate in totality.

Also, to answer your question, I would call that a 2x increase, assuming there's no bandwidth bottleneck there. Like, I wouldn't stack the CPU and GPU increases, but rather add them and divide to find the average.

Keeping with that metric, for N64 to Cube I suppose I'd look at the theoretical MFLOPS each machine can handle, which seems to be a much bigger difference than 7th gen to 8th.
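A tiny sketch of that "add and divide" approach, with placeholder figures matching the clock-and-RAM example from the earlier post (not measured numbers for any real machine):

```python
# Sketch of the "add and divide" idea: average per-component speedups instead
# of multiplying them. The figures are placeholders, not real measurements.
def average_speedup(ratios: dict[str, float]) -> float:
    return sum(ratios.values()) / len(ratios)

example = {"CPU clock": 2.0, "RAM": 2.0}   # the hypothetical from the question
print(f"Stacked (multiplied): {2.0 * 2.0:.1f}x")
print(f"Averaged:             {average_speedup(example):.1f}x")
# The averaged reading calls that combination a 2x machine overall, which is
# the answer given above to the "2x or 2x2 = 4x?" question.
```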
 
Anyhow, why has single-transistor SRAM fallen out of favour for embedded use in gaming? Doesn't it inherently have much lower latencies than DDR? If you massively increased the clock and stuck it on a 512-bit bus it could be a beast.
 
Anyhow, why has single-transistor SRAM fallen out of favour for embedded use in gaming? Doesn't it inherently have much lower latencies than DDR? If you massively increased the clock and stuck it on a 512-bit bus it could be a beast.

Money. And I would be shocked if we got anything more than a 384-bit bus, and we probably won't see eDRAM of any kind. But indeed, that would be something.
 
XB was in some ways better than PS2, but nothing like a half generation.

I said it wrong; I meant it felt like half a gen ahead at the time, just as the PS2 was much superior to the DC in about the same way.

"Some ways"? You make it sound like the gap between the XB and PS2 was smaller than it was.
 
Based on feature set, I would definitely agree the Xbox was a half gen ahead of the PS2. Programmable shaders really allowed the system to shine, just like real Z-buffering and 32-bit precision allowed the N64 to really shine over the PSX in many cases.

The end results, however, are not a half gen ahead. The PS2 managed a few outlier cases where it matched the Xbox or emulated its normal and bump mapping capabilities (Hitman: Blood Money, The Matrix: Path of Neo). Then you have the GameCube sitting squarely between the two, in some cases giving the Xbox a real run for its money when used correctly.

As for the GameCube's and Wii's possible HD capabilities, I'm still of the supposition that output can only be read from the embedded framebuffer. Had Nintendo taken some real steps to bump up the clocks or increase the pipelines in the Flipper GPU, and in concert the eDRAM buffers, maybe there would've been some incentive to give the Wii at least 720p output.

Then again, had it been my responsibility, I would've cut the GameCube BC and just thrown in a single- or dual-core PowerPC 7448 (with 512 KB of cache per core) and a Radeon X1600, all tied to 256 MB of GDDR3. Fully 720p capable, DX9-level capabilities, good CPU SIMD, and easy porting to and fro.
 
Based on feature set, I would definitely agree the Xbox was a half gen ahead of the PS2. Programmable shaders really allowed the system to shine, just like real Z-buffering and 32-bit precision allowed the N64 to really shine over the PSX in many cases.

The end results, however, are not a half gen ahead. The PS2 managed a few outlier cases where it matched the Xbox or emulated its normal and bump mapping capabilities (Hitman: Blood Money, The Matrix: Path of Neo). Then you have the GameCube sitting squarely between the two, in some cases giving the Xbox a real run for its money when used correctly.

As for the GameCube's and Wii's possible HD capabilities, I'm still of the supposition that output can only be read from the embedded framebuffer. Had Nintendo taken some real steps to bump up the clocks or increase the pipelines in the Flipper GPU, and in concert the eDRAM buffers, maybe there would've been some incentive to give the Wii at least 720p output.

Then again, had it been my responsibility, I would've cut the GameCube BC and just thrown in a single- or dual-core PowerPC 7448 (with 512 KB of cache per core) and a Radeon X1600, all tied to 256 MB of GDDR3. Fully 720p capable, DX9-level capabilities, good CPU SIMD, and easy porting to and fro.
They were clearly in the same console generation graphically. The differences were more like those between the Mega Drive, the SNES and 16-bit Amigas/home computers, whereby each platform had its own unique strengths based on its architecture, but they were also clearly within the same generation. Although the home consoles, with their dedicated hardware, had smoother scrolling and were more capable of 60 fps rendering than the home computers.

The shader tech in the Xbox was honestly in its infancy and developers were still learning how to make best use of shaders.

I think the later Burnout games ran better and more consistently on PS2 than on the Xbox. There were also effects completely missing from the Xbox builds of the games. So there were areas outside of shaders where PS2 had real tangible advantages.
 
So there were areas outside of shaders where PS2 had real tangible advantages.

No doubt, but the Xbox GPU was much more straightforward and capable in most use cases compared to the vector unit -> GS pipeline. NV2A was much more in line with future silicon developments, and it showed in the best-looking OG Xbox titles. To be fair, the same kind of argument could be made about the Dreamcast vs the PS2, as the DC actually supported a lot of features in hardware that the PS2 did not. But the PS2's brute-force approach and flexibility did allow it to well exceed the DC. I think a lot of people see the Xbox as using brute force to run what were originally PS2 assets, but in the Xbox's silicon there was both brute force and elegance, which allowed it to be much more if devs wanted.
 