WiiGeePeeYou (Hollywood) what IS it ?

Status
Not open for further replies.

On an unrelated note, Nvidia continues to be the market leader despite not having taken similar approaches to memory bandwidth. Or at least that's how it went with the ring bus; but if we just go by press releases, then BitBoys had market-leading technology too. (I do prefer ATI cards and tech, btw, but Nvidia has shown they can perform just a bit worse and still beat ATI to market by six months, and the advantages of ATI's memory controller and higher bandwidth have really only shown themselves with AA enabled; the 9600 Pro kicked ass even with a 128-bit memory bus.)

Maybe it was important for compatibility, and I'm not talking just games but game engines designed around GC specs. An important part of the cost-savings idea with the Wii was for devs to be able to bring their last-gen engines over.

A key point about those devs: that really only includes Nintendo. Most of the GC devs who had worthwhile GC engines jumped ship once Nintendo announced the Wii wouldn't compete technologically, and so far most of the engine ports have been ports from Xbox or PS2. Nintendo didn't really have a large enough dev base to make engine portability important imo, besides Nintendo's own engines. PC-based hardware would have been much more attractive for picking up where Xbox left off.
 
You mean like the Radeon 7000 series and the GeForce 2 MX? I think it's been too long since you had one of those cards, because I still have a GF2 MX in an older machine, and it can't do jack.


Oh come now. The GF2 MX was super popular in its day (2001) and could run lots of games just fine. Try out Jedi Knight 2 at 640x480 and see how it compares to Cube. The Radeon 7000 (or VE) is more like a G400.

And with regards to bandwidth-saving tech, Radeon had HyperZ in early '01. Its paltry 2x2.5 (might as well have been 2x2) design could keep up with the GF2 because of that. NV was way behind with that stuff.
 
Oh come now. GF2MX was super popular in its day (2001) and could run lots of games just fine. ..... And with regards to bandwidth saving tech, Radeon had HyperZ in early 01.

AFAIK, Flipper has 0 Radeon tech in it. It was ArtX's baby, and they didn't have HyperZ. For that matter, why bother with bandwidth-saving tech at that time when you already have your hands full designing a chip to use eDRAM that solves the same problems?
Oh, and I thought Radeon was 1x3? Which would be even weaker, actually.

And I can't see where the GF2 MX would have an advantage over Flipper, besides being paired with far more powerful CPUs. (Granted, Jedi Knight 2 shouldn't take much more than a 500MHz processor and a GeForce2 MX to run, based on how old it is; I can't imagine the minimum requirements being much higher.)
 
Most of the GC devs who had worthwhile GC engines jumped ship once Nintendo announced wii wouldn't compete technologically

Only one did: Factor 5. Retro continues to be owned by Nintendo, Sega's still supporting them (F-Zero GX had a great engine), Capcom's got another Resident Evil on the way, n-Space is working on a Wii title (Geist really did have a quite competent graphics engine), and Square Enix is still on board (FF:CC had another fantastic engine). I also wouldn't be surprised to see the guys who did the WWE: DOR games do a Wii wrestler or two. We have seen only one game from a 3rd party that supported Cube significantly, and that's Super Monkey Ball... which seems to have a pretty competent graphics engine, even if the style is very simple. I also have high hopes for the Red Steel engine, which really shines at times. Anyway, wait for one of those aforementioned devs to release a game.

I never saw my GF2 MX do the kinds of water, shading, and postprocessing effects I saw the Gamecube do. Not even close. Cube's feature set was just a bit beyond DX7, no getting around that.
 
Alright, so Flipper was the right design for the time, but that still doesn't change that fast RAM is comparatively much cheaper now than it was then. At the time Gamecube came out, memory bandwidth was still the hurdle to overcome in graphics; now shader processing power is.

Bandwidth will always be important. What's the big issue Xbox 360 is designed around? Bandwidth. X360 is pushing 3x the resolution of Xbox, but way more than 3x the textures and shaders. A mere 3.5x increase in bandwidth isn't going to cut it--and remember, Xbox didn't have enough to start with. 16 GB/s would have been nice, actually. PS3's designed with similar concerns, although it tackles them differently.

RAM is only "fast" in comparison to how fast the processor can process. Using a rather silly, arbitrary, but still I think poignant example, look at the X360's max theoretical texel fillrate of 8 Gtexels/s. Assuming 32 bits per texel and 8x compression, that's 32 GB/s of information Xenos's TMUs alone can theoretically toss around. Now, obviously, you don't need to be reading textures in every single clock cycle, but hopefully that illustrates just how data-hungry a processor can be and how non-trivial having enough bandwidth still is. Which is why MS wisely went with the 10 MB of eDRAM. Too bad it's not more, but that's got to be a real boon to the X360.
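As a quick sanity check on those numbers (the Xbox and X360 memory bandwidth figures here, 6.4 and 22.4 GB/s, are my assumptions rather than from the post):

```python
# Back-of-the-envelope check of the bandwidth argument above.
# Assumed figures (mine, not from the post): Xbox unified memory ~6.4 GB/s,
# X360 GDDR3 ~22.4 GB/s, X360 peak texel rate 8 Gtexels/s.

xbox_bw = 6.4                     # GB/s
x360_bw = 22.4                    # GB/s
print(x360_bw / xbox_bw)          # ~3.5x increase over Xbox

texel_rate = 8e9                  # texels per second
bytes_per_texel = 4               # 32 bits per texel, decompressed
info_rate = texel_rate * bytes_per_texel / 1e9
print(info_rate)                  # 32.0 GB/s of decompressed texel data

# With 8:1 texture compression, the bytes actually crossing the bus
# would be roughly an eighth of that:
print(info_rate / 8)              # 4.0 GB/s
```

So the 32 GB/s figure corresponds to the decompressed data the TMUs consume; compression is what keeps the actual bus traffic survivable.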
 
Only one did, Factor 5. Retro continues to be owned by Nintendo, Sega's still supporting them, Capcom's got another Resident Evil on the way..... I never saw my GF2 MX do the kinds of water, shading, and postprocessing effects I saw the Gamecube do.

You're right, I was thinking Capcom had jumped ship also. It's likely Nintendo will receive support from Square Enix and Capcom just because they can use their existing engines. Well, not so sure about Square; with how long their games take to come out, I wouldn't be surprised to see FF:CC2 use an entirely new graphics engine.
OK, so the Cube really lacked any significant 3rd-party support to begin with. Cube had what, 2 major releases from Sega that weren't ports/multiplatform, 1 release from Square, and a handful of titles from Capcom, most of which were made by Capcom's now-defunct studio. It's still mainly Nintendo that benefits from keeping the same hardware and programming libraries, as I'm sure both Ubisoft and EA would have preferred something that resembled a low-end PC instead.
 
AFAIK, Flipper has 0 Radeon tech in it. It was ArtX's baby, and they didn't have HyperZ. ..... And I can't see where the gf2MX would have an advantage over flipper, besides being paired with far more powerful cpus.

I didn't really mean to say that Flipper = a NV11. But hey it might not be far off!

NV11 has the NVIDIA Shading Rasterizer, which was more flexible than Radeon's texture processors (which were touted as pretty neato). Radeon was 2x3. GF2MX was 2x2. Radeon shamed the GF2MX every time because of its greater memory efficiency, even with SDR. Don't forget 3D Tables if you forget such critical data! ;)
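For what it's worth, those pipeline configs translate directly into peak texel fillrates. The core clocks here are my assumptions, not from the post (roughly 183 MHz for the Radeon DDR and 175 MHz for the GF2 MX):

```python
# Peak texel fillrate = core clock x pixel pipes x TMUs per pipe.
# Clock speeds are assumptions, not from the post:
# Radeon DDR ~183 MHz, GeForce2 MX ~175 MHz.

def texel_rate(clock_mhz, pipes, tmus_per_pipe):
    """Peak texel fillrate in Mtexels/s."""
    return clock_mhz * pipes * tmus_per_pipe

radeon_ddr = texel_rate(183, 2, 3)   # 2x3 config -> 1098 Mtexels/s
gf2_mx = texel_rate(175, 2, 2)       # 2x2 config -> 700 Mtexels/s
print(radeon_ddr, gf2_mx)
```

On paper the 2x3 Radeon wins on texels; in practice the third TMU was hard to feed, which is why the memory-efficiency argument above mattered so much.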

I think that Flipper did such neat effects because devs had total access to every tiny corner of the architecture. This was possible because of the closed console hardware platform and the APIs. That's something no PC GPU has ever really enjoyed (except with APIs like Glide/S3D/Speedy3D/RRedline). Same with all PC hardware other than occasionally CPUs. But yeah I'm sure Flipper is more capable than NV11, but not ground-breakingly so.

And of course Flipper doesn't have Radeon tech in it. ArtX wasn't part of ATI yet.

JK2 came out in 2002, during the reign of the GF3/4 and Radeon 8500. I ran it on an 8500 and an Athlon XP, I believe. It ran on the Quake 3 Team Arena engine. It was ported to Cube and Xbox.
 
According to ERP that number is indeed correct; it's just that some devs count polys in a different way.

If you count only visible polygons then the number might be nearer to reality, though it's questionable whether Nintendo were doing that, since most developers/hardware manufacturers don't.

AFAIK, Flipper has 0 Radeon tech in it. It was ArtX's baby, and they didn't have HyperZ.

Flipper doesn't have HyperZ, but it does have early Z rejection. The other parts of HyperZ (Z compression and Fast Z Clear) were purely bandwidth-saving AFAIR, which is likely why they weren't included.
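A toy sketch of why that split matters: early Z rejection kills occluded fragments before texturing, saving fill and texture fetches, while Z compression and Fast Z Clear only shrink depth-buffer traffic. This is a hypothetical, simplified loop, not actual Flipper behavior:

```python
# Toy illustration of early Z rejection (hypothetical, not real Flipper logic).
# The depth test runs *before* the expensive texture/shade step, so occluded
# fragments never cost texture bandwidth; Z compression and Fast Z Clear
# would instead reduce the depth-buffer traffic itself.

def render(fragments, depth_buffer):
    """fragments: list of (pixel, depth). Returns how many got shaded."""
    shaded = 0
    for pixel, depth in fragments:
        if depth >= depth_buffer.get(pixel, float("inf")):
            continue                     # early Z reject: skip shading entirely
        depth_buffer[pixel] = depth
        shaded += 1                      # only visible fragments pay for shading
    return shaded

zbuf = {}
print(render([(0, 1.0), (1, 1.0)], zbuf))   # 2: nothing in front yet
print(render([(0, 2.0), (1, 2.0)], zbuf))   # 0: all rejected before texturing
```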

I didn't really mean to say that Flipper = a NV11. But hey it might not be far off! ..... I think that Flipper did such neat effects because devs had total access to every tiny corner of the architecture.

Pretty far off in terms of geometry performance and bandwidth, though, and with a few features NV11 didn't have (8 texture layers in a single pass, for instance).

Also, I don't think many if any developers outside of Nintendo had access to microcode for GC. Though I'd be interested to find out for sure; I do wonder whether Factor 5 and Capcom had access to low-level code (Factor 5 probably did).
 
Cube had what, 2 major releases from sega that weren't ports/multiplatform

Sega's Cube exclusives were F-Zero GX, Super Monkeyball, Super Monkeyball 2, Amazing Island, Billy Hatcher, Beach Spikers, and Phantasy Star Online 3. They know how to make a Gamecube engine.

Surprisingly, EA Canada has really stepped up to the plate on Wii. They're the only studio to release a game for both last gen and Wii that wasn't a straight graphical port of the PS2 version...Madden looks great. They've already got all the Renderware tools in place, but this time, they don't have to use 4-bit textures. ;-) I also expect EA Big to do well.

Ubisoft's French studios aren't disappointing, either. Seeing the great things they did with the Jade engine (King Kong, BG&E, Prince of Persia), no surprises there. It's Ubisoft Montreal, who did the terrible Gamecube ports of the Tom Clancy games, that really doesn't have its act together.

There are definitely devs who are not prepared at all to deal with the Wii, at least graphically speaking. Some of EA's and Ubi's studios, Konami, Activision, and of course all the devs who stuck exclusively to PS2 just aren't going to be up to snuff.
 
Surprisingly, EA Tiburon has really stepped up to the plate on Wii. They're the only studio to release a game for both last gen and Wii that wasn't a straight graphical port of the PS2 version...Madden looks great

EA Canada did Madden for Wii, not EA Tiburon.
 
http://www.am.necel.com/edram90/edramoptions.html

"A 15 x 15 mm die can incorporate as much as 256 Mb, for example, assuming that the eDRAM occupies half the chip's area."

So 3 megs would be about 21 sq. mm.

Good find, though actually I think it would be around 10.6mm^2, because as the article mentions it's assuming only half the 15 x 15 mm die is taken up by the 32MB (256 Mb) of embedded memory: 15 x 15mm / 2 = 112.5mm^2 for the 32MB, so 112.5mm^2 x (3MB / 32MB) ≈ 10.6mm^2.

So if Hollywood does only have 3MB of embedded memory, that means the memory takes up roughly 15% of the die space. How much of Flipper's die was taken up by the 3MB embedded memory? I'm pretty sure it was either 25% or 33%.
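Taking the quoted NEC figure at face value (256 Mb = 32 MB in half of a 15 x 15 mm die), the arithmetic can be sketched as:

```python
# eDRAM density estimate from the NEC 90nm page quoted above:
# 256 Mb (= 32 MB) fitting in half of a 15 x 15 mm die.

die_area = 15 * 15              # 225 mm^2
edram_area = die_area / 2       # 112.5 mm^2 holds 32 MB
mb_in_half_die = 32

area_for_3mb = edram_area * 3 / mb_in_half_die
print(round(area_for_3mb, 2))   # 10.55 mm^2, i.e. the "around 10.6" figure

# Die size at which that 3 MB pool would be 15% of the chip:
print(round(area_for_3mb / 0.15, 1))   # ~70.3 mm^2
```

Note the 15% claim only holds if Hollywood's die is around 70 mm^2, which is implied rather than stated in the thread.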
 
Good find, though actually I think it would be around 10.6mm^2, because as the article mentions it's assuming only half the 15 x 15 mm die is taken up by the 32MB embedded memory.

You are right, I forgot the /2.
 