WiiGeePeeYou (Hollywood): what IS it?

Status
Not open for further replies.
mattcoz said:
If what we've heard is correct...

GameCube
Main RAM: 24MB 1T-SRAM @ 324MHz/64bit = 2.6GB/s
A RAM: 16MB SDRAM @ 81MHz/8bit = 81MB/s

Wii
Main RAM: 24MB 1T-SRAM-Q @ 486MHz/64bit = 3.9GB/s
A RAM: 64MB 1T-SRAM-Q @ 486MHz/64bit = 3.9GB/s
I assumed a 50% increase in memory clock to match the 50% we got for CPU/GPU, but it looks like it's double instead.

Main RAM: 24MB 1T-SRAM-Q @ 650MHz/64bit = 5.2GB/s
A RAM: 64MB 1T-SRAM-Q @ 650MHz/64bit = 5.2GB/s

Nice. :)
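As a sanity check on the figures quoted above, peak bandwidth is just clock × bus width; a minimal sketch (assuming decimal gigabytes, as the quoted numbers do):

```python
def bandwidth_gb_s(clock_mhz: float, bus_bits: int) -> float:
    """Peak bandwidth in GB/s (decimal) for a given clock and bus width."""
    return clock_mhz * 1e6 * (bus_bits / 8) / 1e9

# GC main RAM: 324 MHz x 64-bit
print(round(bandwidth_gb_s(324, 64), 1))  # 2.6
# Wii main RAM at the rumored 650 MHz x 64-bit
print(round(bandwidth_gb_s(650, 64), 1))  # 5.2
```

The 81 MB/s A-RAM figure falls out the same way: 81 MHz on an 8-bit bus is 81e6 bytes/s.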
 
This is good news. It wouldn't be any beast, but it's at least better than IGN's figures; an AMD 3000+ is enough for some nice games, and an X1300 is too.

Although these are strange comparisons, I take this to mean that any game that would run on a 3000+ with an X1300 will run on it (with the CPU somewhat weak for the GPU?). That leads to a very interesting question about multiplatform games: it will be a while before any PC game asks for more than an X1300 + AMD 3000+, so should we expect many low-end versions of PC games (here he could include BIA3, UT07, BS...)? (The 733MHz Xbox CPU ran many games that asked for 1.4GHz or more, IIRC.) Plus 4xAA (SS?).

And the best of all
I can tell you that it has double the number of pixel pipelines (of gamecube), and that it processes physics. It really takes a huge load off of the cpu.
Is the CPU comparable to an AMD 3000+ with or without physics (does this handle animation too)?

A fully programmable GPU is good too, and 4MB of texture cache is very good as well (high-res textures).

Nice news.
 
I'm really surprised that nobody has called this article fake yet. Not that I'm saying it is, but usually there are always some skeptics. For some reason it feels slightly unnatural that everyone is accepting this interview as real :)
 
That is easy to explain: it was a respectable member who posted it, plus we all want it to be true.

Anyway, it is always good (no offense) to take everything on the net with a grain of salt, although I wouldn't be surprised if it's true.
 
darkblu said:
firesome, if you're pulling a fast one you're gonna be sorry! :)

giving it the benefit of the doubt, a couple of things:

* there's something in the air that promises the system to be indeed devoid of any major bottlenecks *fingers crossed*. the GC was already a little mean machine... hell, the GC must be the most flattered hw in console history, given what compliment ATI made to it with the 360, MS' marketing stupidity aside.

* the CPU power quoted in the interview is impressive. if this turns out to be right the Wii will be the most power-efficient and value-per-cm^3 device in many households (presently in my house this position is held by the mighty mini on my desktop). kudos to that!

The wii must be smaller than the Mini, because I don't see even an AXP 3000+ and a sub X1600 class gpu outperforming the Core Duo (or even Core Solo) in the mini...I was about to say the sub x1600 class gpu wouldn't either, but I don't think the mini has an x1600 (and apple's drivers for it seem pretty bad anyway, or at least optimized for offline rendering and not games). Still, it seems you were only talking cpu power, though perhaps IBM will introduce a mini-blade based on the Wii cpu.

After all, he's comparing a PPC to an AMD processor. Maybe it's a modestly-clocked dual-core solution. Or maybe they put some extra execution threads in there? Perhaps a fat vector unit (but then physics would be done on the CPU, right)? Or maybe he means that the Wii CPU will perform like the AMD CPU does in normal PC gaming situations, which of course have all kinds of WinXP tomfoolery going on in the background (this seems to me most likely).

There may be a reason for that. The AXPs are rather outdated, so it seems strange he could compare to them and not a processor that's more in the public eye right now, especially as chipset quality varied so much on the AXPs that performance could be anywhere. Maybe no other modern mainstream processor is that slow, or maybe the memory bandwidth of the Wii cpu will be most similar to an AXP.
WinXP doesn't add that much CPU overhead; its biggest hit is to memory requirements. I could see him meaning that Wii's low-latency RAM will give it that much of an advantage. Wii could possibly have memory latency an order of magnitude better than the typical AXP system (i.e., one using a VIA KT400 chipset or earlier), and probably will have memory latency at least 2x to 3x lower than the best (nForce2 with low-latency PC3200 RAM).

As would a CPU similar in power to an XP 2800+ (this is a closed platform at 640x480, after all).

Especially, as indicated, if it had discrete hardware to handle sound and physics. With those two elements out of the picture, an Athlon XP 2800+ would do at least 60 fps in any game engine out there. Actually, with such low memory latency, the Wii cpu could potentially even outperform an AXP 2800+ at certain tasks. (though maybe that's why he gave the range, worst case to best case scenario)

I wonder if the physics hardware is dedicated, or if it tasks away from shader power. If the shaders are actually complex enough to run a physics engine, that means the GPU's feature set is far more advanced than anyone thought. If it takes away from shader ability, that could explain why currently shown games haven't been showing much graphically: they're taking away from graphics power to make up for the weak CPU. (Though IMO, animation is the biggest improvement Metroid Prime 3 has shown so far over Metroid Prime 1 and 2.)

I'm really surprised that nobody has called this article fake yet. Not that I'm saying it is, but usually there are always some skeptics. For some reason it feels slightly unnatural that everyone is accepting this interview as real

The hardware specs indicated are in line with what seems possible from what we know, rather than being decidedly too low or too high.
 
Fox5 said:
The wii must be smaller than the Mini, because I don't see even an AXP 3000+ and a sub X1600 class gpu outperforming the Core Duo (or even Core Solo) in the mini...I was about to say the sub x1600 class gpu wouldn't either, but I don't think the mini has an x1600 (and apple's drivers for it seem pretty bad anyway, or at least optimized for offline rendering and not games). Still, it seems you were only talking cpu power, though perhaps IBM will introduce a mini-blade based on the Wii cpu.

well, putting aside the fact that the mini i had in mind (my mini) was the original G4 at 1.25GHz (motorola 7447a), you should not forget that:

the mini has traditionally had totally 'value' GPUs. the G4 minis have the RV280 and the intels have the GMA950. none of them is exactly a stellar performer, plus the latter has no vertex shaders, so the CPU has to do those. wii's CPU will not only be free of TnL work but also, as it seems, will be generally vector-loads assisted, and not just by a smart vector ops set but possibly by a fully-concurrent vector engine that can fetch its own data (this could be that same GPU itself, of course). all that running over a very smooth memory lane, to conclude all that (whereas the minis are not exactly cache champs, since IIRC apple dropped the L3-cache CPU option right before the introduction of the mini) [ed: sorry, unintentionally referring to the original G4s again, you can disregard that]

so there's the chance that wii may actually trounce the new mini at heavy vector crunching.

btw, in case you consider these two machines too far apart due to the price factor, don't forget that the mini has a 2.5" HDD, a burner, and a hefty apple markup (bundled sw too), whereas, as much as people expect ninty to start selling the wii at a profit from day one, we can't be quite sure of that yet.
 
fearsomepirate said:
To add some fuel to the fire, here's an interesting interview...

http://www.easynintendo.net/viewtopic.php?t=9

I've helped work on this site (this is its second incarnation), so I don't have any major cause to distrust the Sitelord...can't wait to see her pics from E3! Of course, you have no cause to trust me, which is why I'm not making a new thread for this. You have to admit, if this is a fake interview, it's the best fake interview ever.


wow, I just got home, just saw this myself and was about to update this thread, but realized immediately I had been long since beaten to the punch on that. good job fearsomepirate.
 
Ooh-videogames said:
So do you guys think Red Steel could feature normal mapping on characters up on release?

dunno about that, but what they showed at e3 had some real nice self-shadowing (inferring from the press material, as no e3 for me this year)
 
This info seems real, as Konami has said in an IGN interview that Wii does have built-in physics hardware. It sounds about right. The low-end X1300 boards, which include 128MB onboard GDDR2 RAM plus DVI, VGA, and TV output, are selling at retail for less than $70. The GPU itself comes out to less than 100 million transistors too when you take out Avivo. At 90nm, using 1T-SRAM-Q eDRAM, it looks to have the same die area as Flipper at 180nm. The logic in Flipper is only 26 million transistors and the eDRAM only 25 million. As for the CPU, a low-power Sempron 2800+ level chip should fit the bill nicely. If this was what Perrin Kaplan was talking about when she said the hardcore will not be disappointed, then she is right.:D

We also know that Wii consumes about 50W with CPU/GPU chips and RAM made at 90nm. GC consumes 37W at 180nm.

So to recap:

Main memory runs at 650MHz which is 2X speed of GC
GPU is between X1400 - X1600
2X the pixel pipelines of Flipper
8MB of eDRAM: 2MB frame buffer, 2MB Z buffer, 4MB texture cache
4x Antialiasing and 8x Anisotropic Filtering
Fully programmable geometry engine
Physics built into GPU
CPU is between AthlonXP 2400 - 2800
90nm process for both chips
50W power consumption for Wii

Extrapolating further, the eDRAM uses ~67 million transistors, so it looks like it may have to be a separate daughter die like Xenos. Looks like the GPU and CPU have double the clockspeed of GC, though they have more logic to increase performance even further.
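For reference, the ~67 million figure falls straight out of one transistor per bit for 8 MB (a quick check, assuming an idealized 1T cell per bit with no array overhead):

```python
MB = 1024 * 1024                     # bytes per (binary) megabyte
bits = 8 * MB * 8                    # 8 MB of eDRAM, 8 bits per byte
cells = bits                         # 1T-SRAM: roughly one transistor per bit cell
print(f"{cells / 1e6:.1f} million")  # 67.1 million
```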
 
NANOTEC said:
Extrapolating further, the eDRAM uses ~67 million transistors, so it looks like it may have to be a separate daughter die like Xenos. Looks like the GPU and CPU have double the clockspeed of GC, though they have more logic to increase performance even further.

Actually, since it would be using 1T-SRAM-Q, that 67 million transistors would fit in the same physical space as ~17 million would, would it not? And since the chip is on a 90nm process vs. a 180nm process, would it then not fit in 1/4th the actual physical space that Flipper's 3MB used?
 
Correction: 1/3rd the physical space Flipper's 3MB used.

~66.6M transistors / 4 (for 1T-SRAM-Q) = the same space as ~16.6M transistors; / 2 (to account for the 90nm fab vs. the 180nm fab) = the same space as ~8.3M transistors, or roughly 1/3rd the actual physical die space of Flipper's 3MB.
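The arithmetic above, spelled out (the ÷4 density factor and the ÷2 process-shrink factor are the poster's assumptions, reproduced as stated rather than verified):

```python
transistors = 66.6e6                    # ~67M 1T cells for 8 MB of eDRAM
equiv = transistors / 4                 # poster's 1T-SRAM-Q density factor
equiv /= 2                              # poster's 90 nm vs 180 nm area factor
flipper_edram = 25e6                    # Flipper's 3 MB, per the figures upthread
print(round(equiv / 1e6, 1))            # 8.3  (million-transistor equivalent)
print(round(equiv / flipper_edram, 2))  # 0.33, i.e. roughly 1/3rd the area
```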
 
Going by how large the daughter die of Xenos currently is, 8MB of eDRAM would be quite large, so unless the logic side of Hollywood is small, it would need to be a separate die or be a pretty big single die. Of course, I'm assuming the logic alone will be around 100 million transistors. Also keep in mind that 1T-SRAM-Q is 4 times the density of 6T-SRAM, not 4 times the density of 1T-SRAM-R. It is only 2 times the density of the latter.
 
NANOTEC said:
Going by how large the daughter die of Xenos currently is, 8MB of eDRAM would be quite large, so unless the logic side of Hollywood is small, it would need to be a separate die or be a pretty big single die. Of course, I'm assuming the logic alone will be around 100 million transistors. Also keep in mind that 1T-SRAM-Q is 4 times the density of 6T-SRAM, not 4 times the density of 1T-SRAM-R. It is only 2 times the density of the latter.

Then it would fit in 2/3rds the physical space of Flipper's 3MB of eDRAM. Would adding the equivalent of 17 million transistors to a ~100M transistor chip be too much?
 
Fearsome

The person who conducted this interview claims that it was recorded. Could you ask to hear it, please, so we can confirm whether it's real or not?
 
fearsomepirate said:
To add some fuel to the fire, here's an interesting interview...

http://www.easynintendo.net/viewtopic.php?t=9

I've helped work on this site (this is its second incarnation), so I don't have any major cause to distrust the Sitelord...can't wait to see her pics from E3! Of course, you have no cause to trust me, which is why I'm not making a new thread for this. You have to admit, if this is a fake interview, it's the best fake interview ever.

That interview is meat among the breadcrumbs of information that we have had to date! I like the 480p 60fps w/4xAA & 8xAF comment, the self-shadowing, normal maps, physics on the GPU, and fully programmable T&L. It really lifted my expectations, but all hope lies in the implementation of the controller in games, which I'm sure will work well.
 
NANOTEC said:
Going by how large the daughter die of Xenos currently is, 8MB of eDRAM would be quite large
The xenos daughter die contains pixel rasterizers, Z-buffering and AA logic and stuff like that in addition to 10MB of eDRAM. That's why it's large.

so unless the logic side of Hollywood is small, it would need to be a separate die or be a pretty big single die.
It'll be just one die. Nintendo isn't the kind of company who would go for a multichip approach, that would be unacceptable to them.
 
NANOTEC said:
This info seems real, as Konami has said in an IGN interview that Wii does have built-in physics hardware. It sounds about right. The low-end X1300 boards, which include 128MB onboard GDDR2 RAM plus DVI, VGA, and TV output, are selling at retail for less than $70. The GPU itself comes out to less than 100 million transistors too when you take out Avivo. At 90nm, using 1T-SRAM-Q eDRAM, it looks to have the same die area as Flipper at 180nm. The logic in Flipper is only 26 million transistors and the eDRAM only 25 million. As for the CPU, a low-power Sempron 2800+ level chip should fit the bill nicely. If this was what Perrin Kaplan was talking about when she said the hardcore will not be disappointed, then she is right.:D

We also know that Wii consumes about 50W with CPU/GPU chips and RAM made at 90nm. GC consumes 37W at 180nm.

So to recap:

Main memory runs at 650MHz which is 2X speed of GC
GPU is between X1400 - X1600
2X the pixel pipelines of Flipper
8MB of eDRAM: 2MB frame buffer, 2MB Z buffer, 4MB texture cache
4x Antialiasing and 8x Anisotropic Filtering
Fully programmable geometry engine
Physics built into GPU
CPU is between AthlonXP 2400 - 2800
90nm process for both chips
50W power consumption for Wii

Extrapolating further, the eDRAM uses ~67 million transistors, so it looks like it may have to be a separate daughter die like Xenos. Looks like the GPU and CPU have double the clockspeed of GC, though they have more logic to increase performance even further.

I wonder if the A-RAM is still in there (it should be, for backward-compatibility purposes); I just find it a bit strange that they changed it to 1T-SRAM running at full memory clock and more than doubled its size.

I think AGEIA did say a while ago that deals with Sony and Microsoft had not been reached, but there was some more talking to be done...
 
Panajev2001a said:
I wonder if the A-RAM is still in there (it should be, for backward-compatibility purposes); I just find it a bit strange that they changed it to 1T-SRAM running at full memory clock and more than doubled its size.

I think AGEIA did say a while ago that deals with Sony and Microsoft had not been reached, but there was some more talking to be done...

Why would you need the A-RAM for compatibility purposes?

The only problem Nintendo could have from the CPU perspective (using the A-RAM for graphics in GCN is hardly recommended) is the ~30 SIMD instructions included in Gekko, but I believe those could be added easily to a G5's VMX without problems.
 
Fox5 said:
Especially, as indicated, if it had discrete hardware to handle sound and physics. With those two elements out of the picture, an Athlon XP 2800+ would do at least 60 fps in any game engine out there. Actually, with such low memory latency, the Wii cpu could potentially even outperform an AXP 2800+ at certain tasks. (though maybe that's why he gave the range, worst case to best case scenario)


It can also be very interesting, as many are unifying physics with animation; if this is possible on Wii, then the CPU could be a beast for many games. (Plus, with dedicated HW it's not hard to get physics at least as good as on any CPU.)


I wonder if the physics hardware is dedicated, or if it tasks away from shader power. If the shaders are actually complex enough to run a physics engine, that means the GPU's feature set is far more advanced than anyone thought. If it takes away from shader ability, that could explain why currently shown games haven't been showing much graphically: they're taking away from graphics power to make up for the weak CPU. (Though IMO, animation is the biggest improvement Metroid Prime 3 has shown so far over Metroid Prime 1 and 2.)

I really doubt it; the Konami guy and this interview both said it has physics built in. Anyone can say their GPU can run physics, but this has been really specific, so I guess it really is dedicated physics (animation?) HW. Anyway, I guess that at E3 they are using the first SDKs.


The hardware specs indicated are in line with what seems possible from what we know, rather than being decidedly too low or too high.

Plus, it explains all the somewhat obscure comments about it (e.g. Mark Rein's).

Wii is starting to shape up like the console we want: sub-$200, powerful, and innovative. I am happy :D.


Teasy said:
Fearsome

The person who conducted this interview claims that it was recorded. Could you ask to hear it, please, so we can confirm whether it's real or not?

Good idea.
 