A Summary of the Huge Wii Thread

I don't see any reason to respond to new guys who have jumped into the discussion this late in the game and are just repeating stuff that was said a million times in the WiiGeePeeYou thread.
 
This is what we know about Hollywood, based on all the research gathered:

- It is not an overclocked Flipper (it's too big).
- It is custom built for the Wii (says ATI).
- It is in between the Xbox's and 360's GPUs in performance, probably right in the middle.
- The 24MB of the GC's main memory is now located on the GPU's die.
- There is eDRAM on Hollywood, but no one is sure how much (could be more than the 3MB on Flipper).
- There are 2 or more dies on Hollywood (Napa and Vegas).
- The GC's ARAM seems to have been replaced by 24MB of 1T-SRAM.
- Hollywood was made on a 90nm process.
- Most likely clocked at 243MHz.
Ok. Excuse me, but what the fuck! A couple hundred pages about this stuff have already gone by!
*boil*

Correct from your list:
- It is not an overclocked Flipper (it's too big).
- It is custom built for the Wii (says ATI).
- Hollywood was made on a 90nm process.
- Most likely clocked at 243MHz.
- There is eDRAM on Hollywood, but no one is sure how much (could be more than the 3MB on Flipper).

For the remaining stuff:
Versus the 360's graphics chip, call it a quarter of the raw performance, filling one third of the pixels per frame. Write that on a piece of paper and flush it down the toilet, because even though it's the closest you'll ever get, it still doesn't tell you anything about real-world game graphics. Done.

- There are 2 or more dies on Hollywood (Napa and Vegas).
There are exactly 2, as is patently obvious from the images with the heatspreader removed.

- The GC's ARAM seems to have been replaced by 24MB of 1T-SRAM.
False. It has been replaced with a single 64MB GDDR3 chip on a 32-bit-wide bus.

- The 24MB of the GC's main memory is now located on the GPU's die.
No. It is one of the two chips/dies on the multi-chip module known as Hollywood.
I.e. one of the two chips is 24MB of 1T-SRAM; the other contains all the logic plus an as-yet-unknown (smaller) amount of embedded memory for the framebuffer.
http://www.beyond3d.com/forum/showpost.php?p=928306&postcount=4 (from the first page of this very thread!)
 
I don't have a Wii, but if the graphics still have:
- the same SD resolution
- no FSAA
- the same 6 bits per component RGBA framebuffer (you can tell from the banding, like 16-bit on PC; is it so?)
then I bet my left kidney that the eDRAM has stayed at 3MB.
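
For what it's worth, here's a quick back-of-the-envelope check in Python on why those three conditions point at an unchanged framebuffer budget. The ~2MB color+Z / ~1MB texture-cache split of Flipper's 3MB eDRAM is my assumption here, not something established in this thread:

# Does an SD framebuffer without FSAA fit in ~2MB of embedded memory?
# (Assumption: Flipper reserved ~2MB of its 3MB eDRAM for color+Z,
# with the remaining ~1MB used as texture cache.)
width, height = 640, 480            # typical SD render target
bytes_color, bytes_z = 3, 3         # 24-bit color + 24-bit Z per pixel
fb_bytes = width * height * (bytes_color + bytes_z)
print(f"framebuffer: {fb_bytes / 2**20:.2f} MiB")   # ~1.76 MiB -> it fits

Bump the resolution or add multisampling and the buffer no longer fits in 3MB, which is why those three symptoms together would suggest the eDRAM didn't grow.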
 
- the same 6 bits per component RGBA framebuffer (you can tell from the banding, like 16-bit on PC; is it so?)

32-bit (8:8:8:8), 24-bit (6:6:6:6), and 16-bit (4:4:4:4) banding all look different. And yes, they all have banding. The more bits you have, the less banding you have, but 8 bits per component does not magically grant you the ability to render infinitely many colors, and you will still get visible color banding in certain situations.
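
To put a number on "more bits means less banding", here's a minimal Python sketch (the 1000-sample ramp is an arbitrary illustration value): quantizing a smooth gradient to n bits per component leaves only 2^n distinct steps, and each step boundary is a potential visible band.

def quantize(value, bits):
    # Snap a 0..1 intensity to the nearest level an n-bit channel can store.
    levels = (1 << bits) - 1
    return round(value * levels) / levels

gradient = [i / 999 for i in range(1000)]    # a smooth 0..1 ramp
for bits in (4, 6, 8):
    steps = len({quantize(v, bits) for v in gradient})
    print(f"{bits} bits/component -> {steps} distinct steps")   # 16, 64, 256

Even at 8 bits the ramp collapses to 256 steps, which is exactly why banding never fully disappears; it just gets finer.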
 
There is no way that Hollywood's fillrate, vertex, and shader performance are at the midpoint between Xbox and X360. Even a doubled-up Flipper at the known clockspeed would only be equivalent to the XGPU in fillrate and still have weaker shaders.

While I'm not going to argue that it would be halfway between XGPU and Xenon, I think even a doubled-up Flipper at 243MHz would be significantly more powerful than XGPU. It would have twice the pixel fillrate of XGPU and at least twice the polygon throughput (around 60 Mpolys/s possible in game). Yes, the texel fillrate would be the same and it still wouldn't be as flexible, but it would be significantly more powerful than XGPU overall. Also, considering the size, I wouldn't say a doubled-up Flipper is the absolute best we could expect from Hollywood.
 
Ok. Excuse me, but what the fuck! A couple hundred pages about this stuff have already gone by!
*boil*

- The GC's ARAM seems to have been replaced by 24MB of 1T-SRAM.
False. It has been replaced with a single 64MB GDDR3 chip on a 32-bit-wide bus.


Dude, don't curse at me. BTW, how are you not banned for that? Anyway... I said the ARAM has been replaced because the 1T-SRAM is where the ARAM used to be in the GameCube. The 64MB of GDDR3 did not replace the ARAM's purpose. ARAM was never the GC's main memory; the 24MB of 1T-SRAM was.
 
32-bit (8:8:8:8), 24-bit (6:6:6:6), and 16-bit (4:4:4:4) banding all look different. And yes, they all have banding. The more bits you have, the less banding you have, but 8 bits per component does not magically grant you the ability to render infinitely many colors, and you will still get visible color banding in certain situations.


16-bit is usually 5:6:5.
Sure, you're right. I should add that when games are targeted for the low bit depth, it's often not that much of a problem (most GameCube games, PC oldies such as HL, UT, Quake 1/2). But there was awful banding in Prince of Persia on GameCube, for instance (like a Quake 3 powered game in 16-bit).
 
Dude, don't curse at me. BTW, how are you not banned for that? Anyway... I said the ARAM has been replaced because the 1T-SRAM is where the ARAM used to be in the GameCube. The 64MB of GDDR3 did not replace the ARAM's purpose. ARAM was never the GC's main memory; the 24MB of 1T-SRAM was.

ARAM was a separate little 16MB SDRAM chip on an 8-bit bus at 81 MHz. The GDDR3 in the Wii is a separate chip, off-package and off-die from the GPU. So, on the Wii, we have the 24MB of 1T-SRAM and the 64MB of GDDR3. The GDDR3 looks like ARAM then, huh? There's no other RAM to do the job.

ARAM was for buffering various things (audio, disc) that needed very little bandwidth and could tolerate poor latency. The GDDR3 undoubtedly replaces ARAM in function for GameCube games on the Wii. The still-24MB of 1T-SRAM, then, is again the "main RAM". It only makes sense, because any other way would hugely change how the machine behaves when running Cube titles.
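
A rough peak-bandwidth comparison in Python shows why any modern chip can stand in for ARAM. The ARAM figures are from the post above; the GDDR3 clock is a placeholder I picked for illustration (the thread only establishes the 32-bit bus width, not the clock):

# ARAM: 8-bit bus at 81 MHz, single data rate
aram_bw = 81e6 * (8 // 8)               # ~81 MB/s peak
# Wii GDDR3: 32-bit bus, double data rate; 243 MHz is an assumed clock
gddr3_bw = 243e6 * 2 * (32 // 8)        # ~1944 MB/s peak with that clock
print(f"ARAM : {aram_bw / 1e6:.0f} MB/s")
print(f"GDDR3: {gddr3_bw / 1e6:.0f} MB/s")

Even with generous error bars on the assumed clock, the GDDR3 has more than an order of magnitude of headroom over the job ARAM used to do.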
 
While I'm not going to argue that it would be halfway between XGPU and Xenon, I think even a doubled-up Flipper at 243MHz would be significantly more powerful than XGPU. It would have twice the pixel fillrate of XGPU

Do the math: 650 MP/s / 162 MHz * 243 MHz = 975 MP/s, only half of XGPU's fillrate. And that's without the hardware MSAA that XGPU has (XGPU does 3.7 GSamples/s). You'd need to double the pipelines to achieve XGPU's fillrate.
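
That scaling step, spelled out as a trivial Python check (note the XGPU comparison gets corrected a few posts down; only the 975 MP/s figure itself is uncontested):

flipper_fill = 650e6                        # Flipper peak pixel fillrate at 162 MHz
flipper_clk, hollywood_clk = 162e6, 243e6   # known GC clock, presumed Wii clock
scaled = flipper_fill / flipper_clk * hollywood_clk
print(f"{scaled / 1e6:.0f} MP/s")           # 975 MP/s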

We will see more real-world geometry, primarily because the T&L unit won't be used to run special vertex shader effects. We just...won't have those effects.
 
32-bit (8:8:8:8), 24-bit (6:6:6:6), and 16-bit (4:4:4:4) banding all look different. And yes, they all have banding. The more bits you have, the less banding you have, but 8 bits per component does not magically grant you the ability to render infinitely many colors, and you will still get visible color banding in certain situations.

Isn't it 24-bit (8:8:8:6, Wii specific), or do you mean in general?

I've been wondering why Wii games don't fill the entire screen (16:9). What resolution are the games rendered at?
 
Do the math: 650 MP/s / 162 MHz * 243 MHz = 975 MP/s, only half of XGPU's fillrate. And that's without the hardware MSAA that XGPU has (XGPU does 3.7 GSamples/s). You'd need to double the pipelines to achieve XGPU's fillrate.

We will see more real-world geometry, primarily because the T&L unit won't be used to run special vertex shader effects. We just...won't have those effects.

So basically, overall, compared to the Xbox, the Wii is more powerful. Not by much, meaning not by a lot in any specific area. The Wii's only advantage is memory and bandwidth, I assume.

Also, Julian's comment about "insane fillrate" could have meant compared to Flipper.
 
Dude, don't curse at me. BTW, how are you not banned for that? Anyway... I said the ARAM has been replaced because the 1T-SRAM is where the ARAM used to be in the GameCube. The 64MB of GDDR3 did not replace the ARAM's purpose. ARAM was never the GC's main memory; the 24MB of 1T-SRAM was.
Dude, I find it pretty offensive that you find it offensive! Seriously, 200 pages of all of us going in circles, and now the information is on the first page of this very aptly named thread. I'm not sure which is the bigger insult: asking for yet another repeat of the cycle of madness, maybe because it's more convenient than locating the summary on page one of the thread with "summary" in the title, or spelling out wtf. I do have an opinion, though.

Yes, the GameCube had 24MB of 1T-SRAM as its main memory plus the slow, generic A-RAM. The GameCube's main memory exists roughly unchanged in the Wii; it's still 24MB of 1T-SRAM. The A-RAM, OTOH, has been replaced with something entirely different from what it was before: GDDR3, not 1T-SRAM. That's what I said and that's what I meant, because it's true.
I.e. when the Wii runs in GameCube "emulation" mode, the 24MB of 1T-SRAM becomes the "emulated" GameCube's main memory. All the while, a portion of the GDDR3 chip acts as the GameCube's A-RAM. Which it can, because the original A-RAM had such a slow spec that any current memory technology has plenty of headroom to match and exceed its timing parameters under any and all circumstances.

Replacing the slow A-RAM wasn't a problem. Replacing the 1T-SRAM main memory, though, was a problem, because there are access patterns where 1T-SRAM has such low latencies that current mainstream memory can't match it.
 
Do the math: 650 MP/s / 162 MHz * 243 MHz = 975 MP/s, only half of XGPU's fillrate. And that's without the hardware MSAA that XGPU has (XGPU does 3.7 GSamples/s). You'd need to double the pipelines to achieve XGPU's fillrate.
You're overestimating XGPU's fillrate by a factor of two. It's two billion texels per second, theoretically, not pixels.
Besides, the claim was about a doubled-up Flipper, i.e. eight pipelines for roughly 2 Gpixels/s.

Hollywood (and even Flipper) have the upper hand in framebuffer bandwidth and texture bandwidth though. That allows them to achieve a larger portion of their theoretical peak rates in practice.
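
Putting the corrected numbers side by side with a pipelines-times-clock estimate in Python (theoretical peaks only; the eight-pipeline Flipper is the hypothetical being discussed here, not a confirmed Hollywood configuration):

xgpu_fill = 4 * 233e6             # XGPU: 4 pixel pipelines at 233 MHz
doubled_flipper = 8 * 243e6       # hypothetical: 8 Flipper pipelines at 243 MHz
print(f"XGPU            : {xgpu_fill / 1e9:.2f} GP/s")         # ~0.93 GP/s
print(f"doubled Flipper : {doubled_flipper / 1e9:.2f} GP/s")   # ~1.94 GP/s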
 
Replacing the slow A-RAM wasn't a problem. Replacing the 1T-SRAM main memory, though, was a problem, because there are access patterns where 1T-SRAM has such low latencies that current mainstream memory can't match it.

Does that explain why certain GameCube games run slightly faster when emulated?
 
Does that explain why certain GameCube games run slightly faster when emulated?

The 1T-SRAM is still there...
Besides that, it's highly possible the CPU and graphics chip still run at... well, full speed even in Cube mode, and that the games aren't synced to the CPU or FSB clock.
 
Do the math: 650 MP/s / 162 MHz * 243 MHz = 975 MP/s, only half of XGPU's fillrate. And that's without the hardware MSAA that XGPU has (XGPU does 3.7 GSamples/s). You'd need to double the pipelines to achieve XGPU's fillrate.

We will see more real-world geometry, primarily because the T&L unit won't be used to run special vertex shader effects. We just...won't have those effects.

He did specify a doubled-up Flipper at 243MHz, so you'd have to double your numbers.
Also, XGPU's texel fillrate was close to 2 gigatexels, but it had under 1 gigapixel of... well, pixel fillrate, so it's likely not on par with a Wii that has around 2 gigapixels of fillrate. Usage patterns and memory bandwidth wouldn't have supported XGPU's full 2 gigatexels anyway.
 
My mistake, I thought XGPU had 2 GTexels and 2 Gpixels.

Blazcowicz, I am completely confused. You were complaining about the 6:6:6:6 banding on Wii, but you brought up a GameCube game. It will be interesting to see if Two Thrones on Wii has the same banding issues that the Cube version did.
 
My mistake, I thought XGPU had 2 GTexels and 2 Gpixels.

Blazcowicz, I am completely confused. You were complaining about the 6:6:6:6 banding on Wii, but you brought up a GameCube game. It will be interesting to see if Two Thrones on Wii has the same banding issues that the Cube version did.

Are they rereleasing the game?

BTW, an XGPU with 2 Gpixels would have been roughly comparable in power to a Radeon 9700 Pro, assuming shader capability increased along with it. Well, minus the extreme lack of memory bandwidth, so more like a 9500 Pro SE, if such a thing existed.
 