WiiGeePeeYou (Hollywood) what IS it ?

Didn't GC have 40 MB of RAM? That would make it slightly more than doubled, which would make a Wii 2 a little over 160 MB.

The 16 megs of A-RAM had an 8-bit interface, so it was good only for DVD buffering and for audio. (The machine was unable to use data stored in A-RAM for rendering purposes.)
 
I always heard the GC was capable of 6-12M in game.

6-12 million PPS in game was just a very conservative estimate from Nintendo. The real number was much higher (RL, a launch game, hit 12 million PPS).

C'mon!
Voodoo 1:
4 megs of EDO RAM at 66 MHz (2x 32-bit data bus).
It was divided into two pieces: one for texture memory and one for the frame buffer.
The two logic units (the texel unit and the pixel unit) were on different chips.
The other big difference is that the V1 had to do its double buffering in that memory, which means the V1 was only able to do 640*480*16-bit, while the GC is able to do 640*480*24-bit. (A big difference, because of the multitexturing.)

Not really sure what you're getting at here. I didn't compare Flipper and Voodoo1, and obviously they don't compare anyway since Flipper is years ahead. I was only talking about memory in response to Swaaye's post.

Didn't GC have 40 MB of RAM? That would make it slightly more than doubled, which would make a Wii 2 a little over 160 MB.

GC had 27MB of fast memory comparable to that in Wii (but slower obviously) and 16MB of A-ram with an extremely low bandwidth of 81MB/s used as audio and cache memory. Wii has 91MB of fast memory and 512MB of flash memory (a portion of the flash memory can be set aside to be used similarly to GC's A-Ram according to devs).
 
It's low on the priority list in a few certain instances. It was clearly high on the priority list for Retro, Factor 5, and most of the 3rd parties. The number of games I've seen where it was a problem is in the single digits.

And yes, I did in fact mean take Rebel Strike, up its quality (texture resolution, effects, # of bad guys onscreen and detail of terrain) by however much is possible with an additional 64 MB of RAM and 50% clockspeeds (incl bandwidth gains--that means ~30 GB/s to the eDRAM on Hollywood), and that's not going to be something a Geforce 3 could handle. The effects play entirely to Flipper's strengths and would really crap out nvidia's first DX8 chipset. For one thing, the self-shadowing is basically being done with the TMU, meaning the geometry engine isn't getting tied up futzing around with shadow volumes. For another, many of the effects take advantage of the high bandwidth, which Flipper just has more to play around with than GF3. Design to a machine's strengths, and you'll do things that another machine that isn't as strong in those areas can't do. I mean, that's pretty basic, isn't it?

You're taking my geforce 3 +50% comment way too literally. In polygon pushing power, that 7100 chip will blow a geforce 3 away, and I'm sure a flipper as well.

Edit: And looking up benchmarks online, the 7100 (a die-shrunk 6200TC, which performed about the same as a 9600pro) will only perform like a GeForce 3 +50% in DX7 games, but has roughly 3x the performance in games that start to use shaders. So in a game like, oh... Super Smash Bros. Melee, the 7100 would perform like an Xbox +50%; in a game like Rebel Strike, it would be like 3x an Xbox. Of course, it's still highly possible Nintendo beefed the hell out of the vertex and pixel shader capabilities in Hollywood, and while it only shows itself as a Cube +50% right now, it will show 3x to 4x the performance in a year.

Anyhow, this is missing the whole point of my comment, just that, for a retail price around $40, Nintendo could have gotten a more powerful/featured graphics chip with more memory, so the analyst could be wrong about the price Nintendo is paying, and if not, then Nintendo is getting ripped off, being provided a last-gen GPU overclocked by 50%. If Wii's GPU doesn't show at least a 3-fold improvement over the Cube by the end of this gen (and hopefully even more than that, as the Wii's GPU is significantly larger than the 6200TC/7100, and 3x Flipper is overall still less than 3x GeForce 3), then it will be very disappointing. Oh well, at least low-end PC graphics chips give us an idea of the kind of performance we should expect out of Hollywood; 9600pro-level performance wouldn't be horrible for a console stuck at 480p.
 
The 16 megs of A-RAM had an 8-bit interface, so it was good only for DVD buffering and for audio. (The machine was unable to use data stored in A-RAM for rendering purposes.)

GC had 27MB of fast memory comparable to that in Wii (but slower obviously) and 16MB of A-ram with an extremely low bandwidth of 81MB/s used as audio and cache memory. Wii has 91MB of fast memory and 512MB of flash memory (a portion of the flash memory can be set aside to be used similarly to GC's A-Ram according to devs).

So if I'm getting this correctly, all of Wii's memory could be used for graphics? If so, how will the audio and stuff be accessed?
 
Anyhow, this is missing the whole point of my comment, just that, for a retail price around $40, Nintendo could have gotten a more powerful/featured graphics chip with more memory
Are you referring to the 7100GS? As pointed out, discounted clearance is not the same as retail. MSRP on a 7100GS is closer to $100 (for 128 MB). Using such a chip, as proposed, would also throw backwards compatibility out the window and would set development of dev kits back.
 
So if I'm getting this correctly, all of Wii's memory could be used for graphics? If so, how will the audio and stuff be accessed?
The same as any other computer/console device. You have a pool of RAM and data is read from/written to it, splitting the available bandwidth and storage capacity between the different tasks.
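
A minimal sketch of that idea in C, just to make it concrete. The split below is made up for illustration (only the 91MB pool size comes from the posts above), not how any actual Wii game budgets its memory:

Code:
/* Illustrative only: one unified pool of RAM, carved up per task by the game.
   Sizes are arbitrary placeholders, not real Wii figures. */
#include <stdio.h>

#define POOL_MB 91   /* the "fast memory" figure quoted in this thread */

int main(void)
{
    int graphics  = 52;  /* MB for textures, geometry, display lists (assumed) */
    int audio     = 8;   /* MB for sound samples and streams (assumed)         */
    int game_data = POOL_MB - graphics - audio;  /* whatever is left           */

    printf("graphics: %d MB, audio: %d MB, game data: %d MB\n",
           graphics, audio, game_data);
    return 0;
}

Every task reads from and writes to that same pool, so they also split the pool's bandwidth between them.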
 
Are you referring to the 7100GS? As pointed out, discounted clearance is not the same as retail. MSRP on a 7100GS is closer to $100 (for 128 MB). Using such a chip, as proposed, would also throw backwards compatibility out the window and would set development of dev kits back.

It's really surprising that a part that's only a few months old would be selling at discounted prices less than half the retail price. Additionally, it still stands as fact that the die size of the 7100 is considerably smaller than Hollywood's, and as someone pointed out, GDDR2 is less expensive than the GDDR3 that Wii uses (despite the 7100 actually having faster RAM). Would you expect a product with a smaller die size and older memory technology to be more expensive to produce than Hollywood + GDDR3? Besides, the 7100 is too new a card to be sold below cost, so at best that $40 price point is just slightly above the cost to produce, which is where the estimated prices for the Wii's GPU + memory should be, considering the volume they're being produced in.

That said, it's been established that the 7100 is a smaller chip than Hollywood (and smaller than the 7300, remember 7100 is a die shrunk 6200tc) and it uses older memory technology, so it's possible that the costs for the Hollywood gpu are correct and that Nintendo just didn't go with a very good design (or devs aren't even close to taking advantage of the design). It could be that edram is a horrible waste of space on a gpu, and just not the way to go to gain optimal performance per cost.
 
The same as any other computer/console device. You have a pool of RAM and data is read from/written to it, splitting the available bandwidth and storage capacity between the different tasks.

Hmm, I see. Well, guess that means the RAM improvement is quite huge then (bigger than I thought).
 
So if I'm getting this correctly, all of Wii's memory could be used for graphics? If so, how will the audio and stuff be accessed?

All 91MB of fast memory can potentially be used for graphics, yes (compared to 27MB in GC), while a portion of the 512MB of flash can be set aside for caching (similar to HDD caching, I'd guess) according to initial dev comments. How each developer will actually use the memory is down to them.
 
Anyhow, this is missing the whole point of my comment, just that, for a retail price around $40, Nintendo could have gotten a more powerful/featured graphics chip with more memory, so the analyst could be wrong about the price Nintendo is paying

I don't think that the 7100 or any other chip is as powerful as the Gekko. The 3 megs of embedded RAM can work miracles. (There was a reason for the 4 megs of embedded RAM in the EE and in the Xbox2 GPU.)
 
I don't think that the 7100 or any other chip is as powerful as the Gekko. The 3 megs of embedded RAM can work miracles. (There was a reason for the 4 megs of embedded RAM in the EE and in the Xbox2 GPU.)

Maybe you should re-read his older post on this page. The EDRAM is just there cuz it's easy to make a small amount of super fast memory available with strict resolution limitations. That way they don't have to bother with a larger pool of fast RAM, ala PC cards. Gekko has nothing on a 7100.
 
I don't think that the 7100 or any other chip is as powerful as the Gekko. The 3 megs of embedded RAM can work miracles. (There was a reason for the 4 megs of embedded RAM in the EE and in the Xbox2 GPU.)

You mean Flipper; Gekko was GC's CPU. Also, it's the GS in PS2 that had the 4MB of embedded memory; the EE, again, is the CPU.

Maybe you should re-read his older post on this page. The EDRAM is just there cuz it's easy to make a small amount of super fast memory available with strict resolution limitations. That way they don't have to bother with a larger pool of fast RAM, ala PC cards. Gekko has nothing on a 7100.

I wouldn't say nothing, Flipper has its advantages (bandwidth for example). Though of course 7100 is more advanced as a GPU, Flipper was designed over 7 years prior after all.
 
You mean Flipper; Gekko was GC's CPU. Also, it's the GS in PS2 that had the 4MB of embedded memory; the EE, again, is the CPU.



I wouldn't say nothing, Flipper has its advantages (bandwidth for example). Though of course 7100 is more advanced as a GPU, Flipper was designed over 7 years prior after all.

I wouldn't say bandwidth is a large advantage without the power to use it. Bandwidth saving techniques have gone far since flipper (and even nv2a had substantially better bandwidth efficiency), and pc graphics chips have long shown that you don't need 30GB/s of bandwidth for decent performance, especially at low resolutions with no AA. The low def no AA standard Wii games have shown so far makes the 7100 even more appealing in comparison, not less. If hollywood was showing framebuffer effects and multisampling up the wazoo, I'd accept that as the edram showing its value, but right now the wii is struggling to put out graphics that cards with 1/10th the memory bandwidth can do. (assuming the edram really is 30GB/s, I seem to recall the gamecube's being only around 10GB/s and a 50% overclock would only bring that up to 15GB/s)

And the GS and Flipper were both designed for entirely different visual styles than have taken hold. The GS sacrificed advanced effects for that edram to be able to push more polygons and pixels and had absolutely no bandwidth saving techniques. The flipper sacrificed power for...you know even retrospectively I'm not sure what the rationale is for choosing edram over a more powerful chip. We know why the 360 has edram, even if games aren't really utilizing it, but the cube wasn't capable of the same type of multisampling effects. Maybe it just aided in ease of development? The cube was pretty well known for not needing much optimization for code to run well. While xbox games looked better on the whole, they would generally struggle with transparency effects that the cube could pull off no problem (see the Bespin clouds in Rogue Leader). I also heard the average cube dev team for a port was 1/10 the size of the ps2 team and 1/3 the size of the xbox team. Then again, the cube usually had the worst version of any multi platform game, but it's a testament to its ease of development that almost no effort beyond a straight code conversion could be applied to the cube and it would still put out acceptable performance.

Serious question though: What design win did the cube's edram provide over utilizing a more powerful gpu and external ram?
Well, I can think of a few things......
1. Bandwidth saving tech was much less advanced at the time, if even existent. Prior to the Geforce 3, bandwidth saving techs consisted of dual gpus to have twice the data bus and deferred rendering.

2. Memory was more expensive at the time, especially for fast ram, than it is now. Xbox had some of the most advanced bandwidth saving tech at the time and some of the most expensive memory, yet ultimately was still severely bandwidth constrained. Designing a powerful graphics chip at the time without edram meant going the xbox way and having an expensive memory subsystem to match the gpu, having a wider data bus and thus increasing production difficulties, or going for tile based deferred rendering. And based on what has been produced using TBDR since then, I'd imagine TBDR is even more limiting to the power of a gpu than edram is. Naomi 2 represented the peak of TBDR hardware at the time, and I believe overall it gave worse performance than the cube. (And Naomi 2 had a dual solution to the memory problem: TBDR and a wider data bus by using two chips.)

Now though, fast ram is so cheap that it's hard to recommend an edram based design, unless you're aiming for high resolutions with MSAA like microsoft did. Without antialiasing, is there any PC card on the market today that will reach a bandwidth limit before it reaches a gpu performance limit?
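
To put a rough number on that last question, here's a back-of-envelope framebuffer-traffic estimate at 640x480 with no AA. The bytes-per-pixel and overdraw values are assumptions picked purely for illustration, and texture fetches aren't counted:

Code:
/* Rough framebuffer-traffic estimate at 640x480, 60 fps, no AA.
   Bytes per pixel and overdraw are assumed values; illustrative only. */
#include <stdio.h>

int main(void)
{
    double pixels   = 640.0 * 480.0;
    double bytes_pp = 4.0 + 4.0 + 4.0;  /* color write + Z read + Z write (assumed) */
    double overdraw = 3.0;              /* assumed average overdraw                 */
    double fps      = 60.0;

    double gb_per_s = pixels * bytes_pp * overdraw * fps / 1e9;
    printf("approx. framebuffer traffic: %.2f GB/s\n", gb_per_s);  /* ~0.66 GB/s */
    return 0;
}

Even being generous with texture reads on top of that, a 480p/no-AA workload sits far below 30 GB/s, which is the gist of the question.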
 
Maybe you should re-read his older post on this page. The EDRAM is just there cuz it's easy to make a small amount of super fast memory available with strict resolution limitations. That way they don't have to bother with a larger pool of fast RAM, ala PC cards. Gekko has nothing on a 7100.

Oh, Hollywood and the GF7800GTX have the same effective bandwidth. :)
And the 7800 has a four-way crossbar controller, but Hollywood has something like 16.
Impressive, isn't it?
 
6-12 million PPS in game was just a very conservative estimate from Nintendo. The real number was much higher (RL, a launch game, hit 12 million PPS).

According to ERP that number is indeed correct; it's just that some devs count polys in a different way.

It could be that edram is a horrible waste of space on a gpu, and just not the way to go to gain optimal performance per cost.

In Flipper it uses about 1/2 of the transistors but only about 1/3 of the die size, and IIRC it scales very well with new processes, so I would think that it is just under-use of the new HW in Wii. Although I wouldn't expect a big improvement in pixel shading, because that could mean normal-mapping tech and a potentially much higher dev cost (overall, as the small teams may also need to do it to compete with bigger teams).
 
If hollywood was showing framebuffer effects and multisampling up the wazoo, I'd accept that as the edram showing its value

It already is whenever you pop in one of the AAA Gamecube titles.
but right now the wii is struggling to put out graphics that cards with 1/10th the memory bandwidth can do.

You mean like Radeon's 7000 series and the Geforce 2 MX? I think it's been too long since you had one of those cards, because I still have a GF2 MX in an older machine, and it can't do jack.

I seem to recall the gamecube's being only around 10GB/s and a 50% overclock would only bring that up to 15GB/s)

I got the specs from the wrong source. However, the 10.4 GB/s was to the texture cache. The framebuffer had 7.8 GB/s, for a total 18.2 GB/s. 50% overclock would give you 27.3 GB/s.
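
For reference, the arithmetic behind those numbers (assuming, as above, that bandwidth scales linearly with a 50% clock bump):

Code:
/* GC eDRAM bandwidth figures quoted above, plus a hypothetical 50% clock bump. */
#include <stdio.h>

int main(void)
{
    double texture_cache = 10.4;                        /* GB/s */
    double framebuffer   = 7.8;                         /* GB/s */
    double gc_total      = texture_cache + framebuffer; /* 18.2 GB/s */
    double wii_estimate  = gc_total * 1.5;              /* 27.3 GB/s */

    printf("GC eDRAM total: %.1f GB/s, +50%% clock: %.1f GB/s\n",
           gc_total, wii_estimate);
    return 0;
}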

you know even retrospectively I'm not sure what the rationale is for choosing edram over a more powerful chip.

The bandwidth makes it easier to actually use the available power. Take the N64 as the extreme counterexample. Its logic silicon was theoretically pretty awesome for the time, but the high-latency, low-bandwidth RAM meant that only programming gods could get non-craptacular graphics out of the machine. With Gamecube, even EA could get the machine making pretty pictures with minimal effort. Think about it: this is a machine that could push only 30m polys/sec max theoretical, but could do ~12m in game. That's 40% of its available peak. Xbox could do 106m theoretically, but you'd never get anywhere close to 40m in-game, ever. NV2a had all kinds of power that couldn't get tapped because it was bandwidth-starved. The fact is, your processor can't do anything if there's no data for it to process...it just sits around picking its nose. Another good example is the entire Geforce FX series, which was completely bottlenecked by bandwidth. Great chips on paper, but the memory inefficiency meant Radeon 9x00 cards crapped all over them.
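
The efficiency comparison in numbers (the 40m Xbox figure is the "never reached" in-game ceiling mentioned above, not a measurement):

Code:
/* Peak vs. in-game polygon throughput, using the figures from the post above. */
#include <stdio.h>

int main(void)
{
    double gc_peak = 30e6,  gc_game = 12e6;   /* polys/sec                      */
    double xb_peak = 106e6, xb_game = 40e6;   /* 40m is the ceiling cited above */

    printf("GC:   %.0f%% of theoretical peak\n", 100.0 * gc_game / gc_peak);
    printf("Xbox: under %.0f%% of theoretical peak\n", 100.0 * xb_game / xb_peak);
    return 0;
}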

We know why the 360 has edram, even if games aren't really utilizing it, but the cube wasn't capable of the same type of multisampling effects.

First, as we all know, special effects on Gamecube depended on multitexturing. High bandwidth and low latency to your texture memory are absolutely key to being able to take advantage of multitexturing. Second, AAA Cube games had lots of framebuffer effects. Off the top of my head, the water and lava in RE4, the targeting computer in the Rogue Squadron games, the proton bombs in Rebel Strike, the depth-of-field in Windwaker, Baten Kaitos, and Mario Sunshine, and numerous special effects in Metroid Prime 1 & 2 were all framebuffer effects.
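
As a rough illustration of why texture bandwidth matters for that style of effect, here's a software stand-in for a multitexture combine: several texture fetches feed every output pixel. The sample_* helpers are hypothetical stubs, not a real GC/TEV API:

Code:
/* Illustrative sketch of multitexture combining: base * lightmap + envmap.
   The sample_* functions are hypothetical stubs standing in for texture fetches. */
#include <stdio.h>

typedef struct { float r, g, b; } Color;

static Color sample_base(float u, float v)     { Color c = {0.8f, 0.6f, 0.4f}; (void)u; (void)v; return c; }
static Color sample_lightmap(float u, float v) { Color c = {0.9f, 0.9f, 1.0f}; (void)u; (void)v; return c; }
static Color sample_envmap(float u, float v)   { Color c = {0.2f, 0.3f, 0.5f}; (void)u; (void)v; return c; }

static Color shade_pixel(float u, float v)
{
    Color base  = sample_base(u, v);      /* texture fetch 1 */
    Color light = sample_lightmap(u, v);  /* texture fetch 2 */
    Color env   = sample_envmap(u, v);    /* texture fetch 3 */

    /* modulate the base by the lightmap, then add a weak reflection term */
    Color out = { base.r * light.r + 0.25f * env.r,
                  base.g * light.g + 0.25f * env.g,
                  base.b * light.b + 0.25f * env.b };
    return out;
}

int main(void)
{
    Color c = shade_pixel(0.5f, 0.5f);
    printf("combined pixel: %.2f %.2f %.2f\n", c.r, c.g, c.b);
    return 0;
}

Three or more fetches per pixel, every frame, is where fast, low-latency texture memory earns its keep.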

Then again, the cube usually had the worst version of any multi platform game

Generally because it lacked online. Graphically, the PS2 was usually the worst.

In retrospect, what was the point of MS blowing all that money on NV2a? NV2a was significantly more powerful than a GF3, but in real-world games, a PC with a Geforce 3 would generally output the same games in higher resolutions, better framerates, and with better textures than the Xbox, and a Geforce 4 would just blow it away.

To put it in perspective, the NV2a theoretically processed polygons 3.5x as fast as Flipper, had 3x the texel fillrate and 5.7x the pixel fillrate, 2.7x as much RAM, and 10x the overall floating-point performance. Save bandwidth and texture passes, the slowest things on NV2a were still 3 times as fast as their antecedents on Flipper.

Theoretically, Xbox should have been able to run Rebel Strike in HD with ~3x the number of TIE fighters on screen, 3x the geometric and texture detail on the terrain and/or capital ships, and possibly even more shader effects on top of that. It should have been able to run Half-Life 2 and Doom 3 comfortably on high settings and a steady 30 fps. There are a lot of things NV2a could do that it never will do, because the system it's embedded in just didn't cut it.
 
Alright, so the flipper was the right design for the time, but that still doesn't change that fast ram is comparatively much cheaper now than it was then. At the time gamecube came out, memory bandwidth was still the hurdle to overcome in graphics, now shader processing power is.
 
Alright, so the flipper was the right design for the time, but that still doesn't change that fast ram is comparatively much cheaper now than it was then. At the time gamecube came out, memory bandwidth was still the hurdle to overcome in graphics, now shader processing power is.

Maybe it was important for compatibility, and I'm not talking just games but game engines designed around GC specs. An important part of the cost-savings idea with the Wii was for devs to be able to bring their last-gen engines over.
 