WiiGeePeeYou (Hollywood) what IS it ?

If the 24MB 1-T SRAM is really integrated on the Hollywood die, that would explain everything for me. Flipper was ~51M transistors on a 0.18 process. About half was logic and half was 1-T. For the sake of argument, let's say that a 0.09 process allows 4x the number of transistors in the same die size (I think Flipper was 10mm x 10mm). 24MB of 1-T takes up ~192M transistors. Throw in an extra 24M transistors of logic and you arrive at 216M transistors. Even if you throw in a few million more, this still borders on feasible. They could even increase the die size a bit without hurting yields too much. I'd imagine that die sizes have been increasing on average anyway due to the move to 12-inch wafers (although I don't know what the wafer size was for NEC's 0.18 process, or what the wafer size is for the Hollywood 0.09 process).
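A minimal back-of-the-envelope sketch of that transistor budget, assuming roughly one transistor per bit of 1T-SRAM and the 4x density figure above; none of these numbers are confirmed specs:

Code:
# Rough transistor-budget sanity check for a 90nm Hollywood with 24MB of
# embedded 1T-SRAM. All figures are the ballpark assumptions from the post
# above, not confirmed specs.

FLIPPER_TRANSISTORS = 51e6   # ~51M transistors on NEC's 0.18 process
DENSITY_SCALING = 4          # assume 0.18 -> 0.09 gives ~4x transistors per area

# 1T-SRAM uses roughly one transistor per bit (ignoring array overhead).
sram_bits = 24 * 1e6 * 8             # ~192M bits (decimal MB, as in the post)
sram_transistors = sram_bits * 1     # ~192M transistors

logic_transistors = 24e6             # assume Flipper-class logic, ~24M transistors
total = sram_transistors + logic_transistors              # ~216M
same_area_budget = FLIPPER_TRANSISTORS * DENSITY_SCALING  # ~204M in the same die area

print(f"estimated total:       {total / 1e6:.0f}M transistors")
print(f"same-area 90nm budget: {same_area_budget / 1e6:.0f}M transistors")
# ~216M vs ~204M: slightly over budget, so a modest die-size increase would
# be needed, which matches the "borders on feasible" conclusion above.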
 
Ooh-videogames said:
Can you get an increase in raw polygon numbers and real-world numbers just from increasing a GPU's clock speed?

Assuming bandwidth increases as well, yes, of course. Raw numbers are basically found by multiplying clockspeed by how many of a certain operation can be done in a cycle. Faster processors mean more data can be processed. Otherwise, everything would be clocked at 1 Hz.
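For example, a rough sketch using the commonly cited 4 pixels per clock for Flipper's pixel pipelines (treat that figure as an assumption, not an official spec):

Code:
# Raw peak numbers are just clock speed multiplied by per-clock throughput.
def peak_fillrate_mpixels(clock_mhz, pixels_per_clock):
    return clock_mhz * pixels_per_clock

print(peak_fillrate_mpixels(162, 4))  # Flipper at 162 MHz -> 648 Mpixels/s peak
print(peak_fillrate_mpixels(243, 4))  # same design at 243 MHz -> 972 Mpixels/s peak
# A 1.5x clock bump gives a 1.5x raw peak, provided bandwidth scales with it.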
 
I agree this is a good question. After all, if this is true, there are similar CPUs on the same process that are as fast as this, and the GPU was supposed to be 200MHz at the time (apparently bad yields put the slower chips in the final console), so they would only need better cooling solutions to have a fully functional Wii.

The GPU is downclocked to 200MHz now? Not only would it be really disappointing that they can't hit 243MHz (or whatever it was supposed to be) after 3 major process shrinks from what Flipper was originally produced on, but 200MHz was what the original design specs for Flipper called for.

BTW, a flipper at 200 to 250mhz would be weaker than even some current passively cooled PC gpus. If it had 2 TEVs, it might be on par with some of the weakest PC hardware.
 
No, Flipper was supposed (originally) to be 200MHz IIRC but ended up being 163MHz due to bad yields; I know nothing about Hollywood.
 
pc999 said:
No, Flipper was supposed (originally) to be 200MHz IIRC but ended up being 163MHz due to bad yields; I know nothing about Hollywood.

Flipper was originally supposed to be 202.5 MHz, and to contain somewhere between 8 MB and 16 MB of eDRAM.


At Space World 2000, the final amount of embedded 1T-SRAM was only 3.12 MB. However, the main system memory was no longer 32-64 MB of NEC DDR memory, but 24 MB of the low-latency 1T-SRAM.

then around E3 2001, or shortly after, it was announced that Flipper would run at 162 MHz.
 
hupfinsgack said:
Well, although I am still sceptical of the "leaked" specs, I am playing devil's advocate here:

As am I

They obviously had to implement a new memory controller, the bus structure is different, the DSP is probably revised and there's more embedded eDRAM. So even with the new specs it'd be more than just a simple overclocking.

Exactly. And in the meantime, the devkits would ship with Flipper, which would still end up being close, although not identical, to the Hollywood GPU.
 
Megadrive1988 said:
then around E3 2001, or shortly after, it was announced that Flipper would run at 162 MHz.

The CPU though went up to 475 MHz from the original 400(?) I believe. The change in clock speeds was all multiplier related. The CPU went up and the GPU went down, for whatever reasons. I remember reading that "devs wanted more CPU power", which may or may not have been true.
 
swaaye said:
The CPU though went up to 475 MHz from the original 400(?) I believe. The change in clock speeds was all multiplier related. The CPU went up and the GPU went down, for whatever reasons. I remember reading that "devs wanted more CPU power", which may or may not have been true.

It went up to 485MHz. While more CPU power is definitely something the Cube needed due to its lack of vertex shaders, I'd say it's more likely that the GPU just couldn't hit 200MHz.
I think the next possible ratio would have been a 600MHz CPU and a 200MHz GPU btw, which would have been pushing it for what a G3-based CPU could hit at the time.
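A quick sketch of the multiplier point, assuming the CPU clock is a fixed integer multiple of the GPU clock (3:1 in the shipped GameCube; the exact clock-generation details are an assumption here):

Code:
# CPU clock derived from the GPU clock via a fixed 3:1 multiplier (assumed).
CPU_TO_GPU_MULT = 3

for gpu_mhz in (162, 202.5):  # shipped clock vs. the originally announced clock
    cpu_mhz = gpu_mhz * CPU_TO_GPU_MULT
    print(f"GPU {gpu_mhz} MHz -> CPU {cpu_mhz} MHz")

# 162 -> 486 (marketed as 485 MHz), while a ~200 MHz GPU would have implied a
# ~600 MHz CPU at the same ratio: the "next possible ratio" mentioned above,
# and a stretch for a G3-derived core at the time.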
 
Fox5 said:
It went up to 485MHz. While more CPU power is definitely something the Cube needed due to its lack of vertex shaders, I'd say it's more likely that the GPU just couldn't hit 200MHz.

the cube has a fairly potent fixed T&L hw, thankyouverymuch.

more cpu is just more cpu - you can always find good use for any extra of it, as long as you can afford that.
 
The reason I read back in the day was that at 400/200, there was GPU power going wasted because it was basically unusable, and the CPU was both too weak and acting like a bottleneck. Upping the CPU to 485 and downing the GPU to 162 struck a much better balance between needed CPU power (remember that with fixed T&L, a lot more work is done on CPU) and full utilization of the GPU. There's no point in having a super-powerful GPU if the rest of the system is too weak to make use of it...except that it helps you sell more units to people who just read spec sheets.
 
fearsomepirate said:
The reason I read back in the day was that at 400/200, there was GPU power going wasted because it was basically unusable, and the CPU was both too weak and acting like a bottleneck. Upping the CPU to 485 and downing the GPU to 162 struck a much better balance between needed CPU power (remember that with fixed T&L, a lot more work is done on CPU) and full utilization of the GPU. There's no point in having a super-powerful GPU if the rest of the system is too weak to make use of it...except that it helps you sell more units to people who just read spec sheets.

Well, I don't think having more fillrate would have hurt, even with the less powerful cpu.
 
http://1up.com/do/newsStory?cId=3152589

While speaking on a panel at the Ziff Davis Electronic Games Summit in Napa earlier today, Ubisoft North America President Laurent Detoc revealed that his company currently has seven titles in development for Nintendo's upcoming Wii platform. This show of support came as a surprise to many, as the company had previously only demonstrated two of those seven titles: Rayman Raving Rabbids and Red Steel.

On the same panel, Midway President David Zucker stated that his company has six titles in development for the Wii.

About Ubisoft, remember this (to this day it has been very accurate). I wonder (if those are the games) how much they will be downgraded, but if the gameplay/design/features remain intact this is great news (:grin: Assassin's Creed :D).

 
If that's a comprehensive release list, why isn't Red Steel showing?
The Wii games (marked Rev(olution)) are...

Tom Clancy's Rainbow Six 5 (PS3, Xbox 360, Rev, PC, PSP) - November 2006
Assassin (PS3, Xbox 360, Rev, PC, PSP) - March 2007
Rayman 4 (PS3, Xbox 360, Rev, PC, PS2, handheld) - November 2006
Brothers in Arms 3 (PS3, Xbox 360, PC, Nintendo Revolution, DS ) - February 2007
Ninja Turtles (PS3, Xbox 360, Nintendo Revolution, PC, handheld) - Spring 2007

We've seen pics of Rayman and I think BIA for Wii.
 
At least now we know why one analyst said that Ubisoft was one of the few third parties that are actually prepared for the Wii launch.

BTW, one of the seven Ubisoft titles could be one of the Naruto Gekitou Ninjataisen games that have not been released in the west. There was some talk of putting the unreleased ones on the Wii instead of releasing them for the dying GCN. Ubisoft is the western publisher for that series, so it could be one of the titles missing from that list.
 
hupfinsgack said:
Well, although I am still sceptical of the "leaked" specs, I am playing devil's advocate here: They obviously had to implement a new memory controller, the bus structure is different, the DSP is probably revised and there's more embedded eDRAM. So even with the new specs it'd be more than just a simple overclocking.

From reading these supposed specs, it seems they are saying that the 24MB of 1T-SRAM is separate from the GPU itself. They describe Hollywood as an LSI which includes a GPU with 3MB of embedded RAM plus 24MB of RAM, so the RAM isn't inside the GPU. But if not, then how is it "internal"? A daughter die? Why do that if the chip is so uncomplicated? And whether it was inside the GPU or just on the same package, why do that at all? What advantage is there in putting 24MB of memory on a chip if it's only going to have the same bandwidth as the external memory (slightly less, even, according to these "specs")? There's no advantage there at all; it makes no sense.
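To illustrate the bandwidth objection, here's a toy comparison; every bus width and clock below is a hypothetical placeholder, not a value from the "leaked" specs:

Code:
# Peak bandwidth in GB/s for a simple single-data-rate bus (illustrative only).
def bandwidth_gb_s(bus_width_bits, clock_mhz, transfers_per_clock=1):
    return bus_width_bits / 8 * clock_mhz * transfers_per_clock / 1000

external_pool = bandwidth_gb_s(64, 486)    # hypothetical external 1T-SRAM bus
internal_pool = bandwidth_gb_s(64, 486)    # same-width bus, merely moved on-package
wide_internal = bandwidth_gb_s(512, 243)   # what a genuinely wide on-die bus could do

print(external_pool, internal_pool, wide_internal)   # 3.888, 3.888, 15.552
# If the "internal" pool's bus is no wider or faster than the external one,
# the numbers match and embedding it buys latency at best, which is the
# objection above: on-chip memory only pays off with a wider/faster bus.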
 
RancidLunchmeat said:
Clearly not, because what you are asking makes no sense.

Taking the position that Hollywood is 'just' an overclocked Flipper manufactured on the 90nm process, what is your question?

Why would N rush, or even care if Hollywood was included in the early devkits? Including a Flipper, which is NOT Hollywood, would still allow the devs to get their kits.. probably earlier and at a reduced cost.

His question is quite clear. If Hollywood was nothing but a 243MHz Flipper, then why not include it in the very first development kit? We're talking about a chip designed back in 2000, a chip that possibly could have been made to run at 243MHz in 2001 at a cost (we know it did 200MHz back then). In 2005/6, with two die shrinks, it's an extremely trivial task. So what was stopping Nintendo from including Hollywood? Reduced cost? I don't see why a 243MHz Flipper on a 90nm process would cost much more than a normal 180nm 162MHz Flipper, certainly not to the degree where Nintendo would not include it (hindering game development for Wii) to reduce costs. To allow developers to get the kits earlier? They got the initial kits in 2006; the chip was designed in 2000 and could run at 200MHz on 180nm even back then.
 
Teasy said:
To allow developers to get the kits earlier? They got the initial kits in 2006; the chip was designed in 2000 and could run at 200MHz on 180nm even back then.

Right.

Again, I don't seem to understand this 'quite clear' question. Was Hollywood available at 90nm and 243Mhz when the devkits came out?

Was there any need for N to include it, rush it, spend any extra expense on including it when they could just include the Flipper instead?

I don't see how the fact that the chip was designed in 2000 and was capable of 200Mhz on 180nm has any impact on the 243Mhz 90nm chip.

So you're saying it should be easy to shrink the die and increase the clock speed? Sure. Maybe it was. Maybe it was so easy to do that N was in no rush to do it. They knew they wouldn't have fab or yield problems.

I still don't grasp the 'If Hollywood is 'only' an overclocked Flipper at 90nm why wasn't it in the early devkits' question.

Because it wasn't available? Because those 'only' propositions make it a chip that is definitionally different than the Flipper (so developers who say the kits didn't include Hollywood are correct), but not practically different (so developers have a good representation of the final hardware even without Hollywood).

It seems like a very plausible answer to the question is readily available, so I don't understand the mystery.
 