WiiGeePeeYou (Hollywood) what IS it ?

That confirms it? Wii has 64MB RAM total running at 5.6GiB/s?

I don't think it's running at the full 700MHz. It doesn't jibe with any of the bus speeds. It's probably running @ 2x the GPU clock or something like that. I think the GPU's 243MHz is the base bus speed and everything is a multiplier of that (the CPU's 729MHz is 3x 243, etc.). 486MHz would give us around 3.9GB/s bandwidth for the GDDR "external" RAM.

Also, IIRC from the NEC/MoSys press info that IGN reported, the Hollywood die contains both the 3MB of 1T-SRAM eDRAM and the 24MB of 1T-SRAM "main" memory. Both the eDRAM and the "main" RAM are probably running @ 243MHz. And if the 24MB of "main" RAM is 64 bit, that's also around 3.9GB/s bandwidth.

So we probably have 24MB of 1T-SRAM @ 3.9GB/s plus 64MB of GDDR3 @ 3.9GB/s for the Wii (not counting the 3MB eDRAM).
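
For anyone who wants to check the arithmetic, here's the back-of-the-envelope I'm using. To be clear, the clocks, bus widths and transfers-per-clock here are my guesses from this post, not confirmed specs:

Code:
# Peak bandwidth = clock * transfers-per-clock * bytes-per-transfer.
# All clock/bus numbers below are guesses from this thread, NOT confirmed specs.

def bandwidth_gb_s(clock_mhz, bus_bits, transfers_per_clock=1):
    return clock_mhz * 1e6 * transfers_per_clock * (bus_bits / 8) / 1e9

# 64MB GDDR3: guessed 2x GPU clock (486MHz), 32-bit bus, double data rate
print(bandwidth_gb_s(486, 32, transfers_per_clock=2))  # ~3.9 GB/s

# 24MB 1T-SRAM "main" memory: guessed 243MHz on a 64-bit bus,
# two transfers per clock like the GC's main memory managed
print(bandwidth_gb_s(243, 64, transfers_per_clock=2))  # ~3.9 GB/s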
 
So is there still an NDA in effect or something? Now that it's launched, couldn't a developer simply step forward and describe the differences between Hollywood and Flipper item by item?
 
No. Console NDAs are never lifted. The only time a dev can (legally) talk about something covered by an NDA is if that information has been officially released.
 
And if the 24MB of "main" RAM is 64 bit, that's also around 3.9GB/s bandwidth.

Anyone care to comment on this? Embedding 24MB of RAM on the Hollywood die only to get the same performance as an external chip seems crazy to me - won't the extra die space used on Hollywood for this just push yields down and hence price up?

Is any of the 24MB of embedded RAM likely to be usable as texture cache? Framebuffer? How can Hollywood support non-anamorphic widescreen PAL 576p with only a 2MB framebuffer?
 
well I got my Wii, and yet I still don't know what's inside Hollywood. A block diagram and full specs like we had for Flipper would be nice, but it seems Nintendo is happy never to release such things :|
 
Anyone care to comment on this? Embedding 24MB of RAM on the Hollywood die only to get the same performance as an external chip seems crazy to me - won't the extra die space used on Hollywood for this just push yields down and hence price up?

It might just be on the same package...
 
It might just be on the same package...

I think you're probably right. The IGN article was worded very haphazardly. No one saw any RAM on the mobo other than the 64MB of Samsung GDDR, so unless the 1T-SRAM "main" RAM is on the underside of the mobo, it has to be part of the package or die somehow.

OMG edit:

The IGN article may be just totally off base (shock!). Someone @ GAF found this video, where the 1:24 mark shows the underside of the mobo and what could possibly be the 24MB of 1T-SRAM. If that is indeed the "main" RAM, then that whole freakin' article was babbling on about the 3MB of eDRAM that's been present since the GC! They could have made a big whup about RAM on die for nothing.
 
It might just be on the same package...

Why would they put that piece of memory on package and the GDDR3 RAM on the motherboard? PS3 did that because there was no more room for another memory bank, which is not the case here.



And to Shogmaster: Why would they put the memory on the back side instead of the front where it could be properly cooled? Video cards did it only because there was no more room, not because they wanted to.



I think the straightforward answer is the correct one: Wii has only 64MB of RAM entirely on one GDDR3 RAM chip. There is no other RAM except for some eDRAM, but that is for the framebuffer. There's also another issue with having yet another bank of RAM: The GDDR3 RAM right there is pretty high-end. This is not some auxiliary RAM of the original GC but clearly meant to be main memory. Having another bank of RAM would be pretty stupid from a design point of view for such a low-end device.
 
I think the straightforward answer is the correct one: Wii has only 64MB of RAM entirely on one GDDR3 RAM chip.

I don't think so; every rumour/spec has mentioned 24MB of 1T-SRAM. Also, NEC's press release mentions embedded DRAM using MoSys technology (embedded 1T-SRAM) and separate 1T-SRAM system memory.

There's also another issue with having yet another bank of RAM: The GDDR3 RAM right there is pretty high-end. This is not some auxiliary RAM of the original GC but clearly meant to be main memory. Having another bank of RAM would be pretty stupid from a design point of view for such a low-end device.

You're talking about 64MB of 700MHz-specced RAM (likely running at no more than 486MHz) on a 32-bit bus, which is only 3.9GB/s. Why on earth would it be stupid to include 24MB of lower-latency RAM (which could be used for more random-access files such as game data etc.) with its own 3.9GB/s of bandwidth? That would give you just over 3.5 times the amount of fast RAM as GC and just over 3 times the main memory bandwidth, as well as giving Wii perfect backwards compatibility with GC.
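
Rough numbers behind those ratios; the GC figures are from its published specs, the Wii figures are the guesses above:

Code:
# GC's known figures vs. the guessed Wii setup (Wii side is speculation).
gc_fast_ram_mb  = 24          # GC's 1T-SRAM main memory
gc_main_bw_gbs  = 2.6         # GC main memory: 324MHz effective x 64-bit

wii_fast_ram_mb = 24 + 64     # guessed: 24MB 1T-SRAM + 64MB GDDR3
wii_main_bw_gbs = 3.9 + 3.9   # guessed: ~3.9GB/s from each pool

print(wii_fast_ram_mb / gc_fast_ram_mb)   # ~3.7x the fast RAM
print(wii_main_bw_gbs / gc_main_bw_gbs)   # ~3.0x the main memory bandwidth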
 
I don't think so; every rumour/spec has mentioned 24MB of 1T-SRAM. Also, NEC's press release mentions embedded DRAM using MoSys technology (embedded 1T-SRAM) and separate 1T-SRAM system memory.



You're talking about 64MB of 700MHz-specced RAM (likely running at no more than 486MHz) on a 32-bit bus, which is only 3.9GB/s. Why on earth would it be stupid to include 24MB of lower-latency RAM (which could be used for more random-access files such as game data etc.) with its own 3.9GB/s of bandwidth? That would give you just over 3.5 times the amount of fast RAM as GC and just over 3 times the main memory bandwidth, as well as giving Wii perfect backwards compatibility with GC.

Well said. There's no way that 1T-SRAM is missing from the Wii.

Now it would be nice to see some pics of the procs with the heat spreaders off as well as the underside of the mobo. Anyone with a Wii willing to take one for the team? :D
 
I don't think so; every rumour/spec has mentioned 24MB of 1T-SRAM. Also, NEC's press release mentions embedded DRAM using MoSys technology (embedded 1T-SRAM) and separate 1T-SRAM system memory.

Rumors can be wrong. 1T-SRAM isn't really made anymore, as it's actually pretty expensive compared to conventional DRAM. Putting it on the package is even more expensive, especially considering that there is absolutely no reason whatsoever not to put it on the motherboard.

You're talking about 64MB of 700MHz-specced RAM (likely running at no more than 486MHz) on a 32-bit bus, which is only 3.9GB/s. Why on earth would it be stupid to include 24MB of lower-latency RAM (which could be used for more random-access files such as game data etc.) with its own 3.9GB/s of bandwidth? That would give you just over 3.5 times the amount of fast RAM as GC and just over 3 times the main memory bandwidth, as well as giving Wii perfect backwards compatibility with GC.

Why would you use 700MHz-specced RAM and downclock it so much? There are slower grades of GDDR3 RAM at pretty much every 100MHz step. And like I just said, it's pretty expensive to do it this way. 1T-SRAM does not offer massively better latency than GDDR3 RAM (especially considering how close the memory chip is to the GPU), and virtually perfect backward compatibility can be done with GDDR3.

So let me conclude by saying that while it's not impossible for there to be another 24MB of 1T-SRAM hidden somewhere, it's becoming increasingly obvious that there is no reason for there to be one. 1T-SRAM is expensive, and even more so to hide somewhere (not to mention why you would hide it). GDDR3 is cheaper and plenty fast. It's also probably not downclocked significantly, so it can replace all the previous RAM in the GC without major issues. The only sticking point is that the memory design suddenly became a lot less "interesting" than the GC's, and that it's basically slower than what the Xbox had (though latency is still better, plus the addition of the framebuffer).
 
If it's not a discrete component on the motherboard, the 24MB of 1T must be embedded in Hollywood. That would explain the increase in GPU die size despite the smaller feature size (90nm vs. 180nm) and the (supposed) lack of new GPU features. 1T is essentially DRAM with some additional banking, caching, and refresh logic to give it SRAM-like latency for random accesses. That explains the talk about NEC's embedded DRAM process tech. Physically, the storage unit in 1T IS a DRAM cell.

Edit: AFAIK, GDDR3 has a tRC of 30-50 clock cycles. So for the degenerate case of random access patterns that consecutively hit single locations within the same bank, GDDR3 can never be as fast as 1T - unless of course it's clocked 30-50 times faster.
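
Putting rough numbers on that (the tRC figure is the middle of that 30-50 range, and the clocks are this thread's guesses, not measured values):

Code:
# Worst-case same-bank, different-row access latency in wall-clock time.
# tRC and clock values are ballpark assumptions, not datasheet figures.
gddr3_clock_mhz  = 486    # guessed Wii GDDR3 command clock
gddr3_trc_cycles = 40     # middle of the 30-50 cycle estimate

sram_clock_mhz     = 243  # guessed 1T-SRAM clock
sram_access_cycles = 2    # SRAM-like random access

gddr3_ns = gddr3_trc_cycles / gddr3_clock_mhz * 1e3   # ~82 ns per row cycle
sram_ns  = sram_access_cycles / sram_clock_mhz * 1e3  # ~8 ns

print(gddr3_ns, sram_ns, gddr3_ns / sram_ns)  # ~10x worse for this pattern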
 
If it's not a discrete component on the motherboard, the 24MB of 1T must be embedded in Hollywood. That would explain the increase in GPU die size despite the smaller feature size (90nm vs. 180nm) and the (supposed) lack of new GPU features. 1T is essentially DRAM with some additional banking, caching, and refresh logic to give it SRAM-like latency for random accesses. That explains the talk about NEC's embedded DRAM process tech. Physically, the storage unit in 1T IS a DRAM cell.

Which is even dumber than hiding a chip under the heat spreader. 1T-SRAM is less dense than conventional eDRAM, which means it would balloon the die to insane sizes. We're looking at >300mm^2, more like >350mm^2 including the GPU itself, making it the largest GPU this gen if they did this. It would literally be the size of an R580.

Edit: AFAIK, GDDR3 has a tRC of 30-50 clock cycles. So for the degenerate case of random access patterns that consecutively hit single locations within the same bank, GDDR3 can never be as fast as 1T - unless of course it's clocked 30-50 times faster.

1T-SRAM is not that fast on average. In fact, that degenerate case of yours should never happen if you have proper caching.
 
Rumors can be wrong. 1T-SRAM isn't really made anymore, as it's actually pretty expensive compared to conventional DRAM. Putting it on the package is even more expensive, especially considering that there is absolutely no reason whatsoever not to put it on the motherboard.

Rumours can be wrong, yeah, but every single rumour since Wii was first announced? That seems pretty unlikely IMO. 1T-SRAM is definitely still made, by the way, and would it really be expensive to include the memory on the same package as the GPU? I suppose the one thing that would be achieved by putting the memory on the same package is that it would be extremely close to the system LSI.

Why would you use 700MHz-specced RAM and downclock it so much? There are slower grades of GDDR3 RAM at pretty much every 100MHz step. And like I just said, it's pretty expensive to do it this way.

For very cool operation, at a guess? I don't know, but it seems that's what they're doing. The GPU runs at 243MHz, so the RAM has to be a multiple of 243MHz (likely 2x, like GC).

1T-SRAM does not offer massively better latency than GDDR3 RAM (especially considering how close the memory chip is to the GPU), and virtually perfect backward compatibility can be done with GDDR3.

1T-SRAM certainly offers far better latency than GDDR3 at similar clock speeds. GDDR3 is graphics memory after all; there's no way it's going to have latency anywhere close to as low as 1T-SRAM's.

Even if they used a 2.5x multiplier for the memory (3x is too high, as that would put it over the 700MHz spec), that's still only 4.8GB/s of bandwidth using relatively high-latency memory. I definitely wouldn't call that plenty fast. What would be far better is having 24MB of very low-latency memory for game code and other latency-dependent tasks and 64MB of relatively high-latency memory for stuff like textures, which are much less dependent on latency. More bandwidth (7.9GB/s) and as easy to develop for as GC.
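
Same back-of-the-envelope as earlier for that 2.5x case (the multiplier, DDR signalling and 32-bit bus are all still guesses):

Code:
# Guessed 2.5x multiplier scenario, same formula as before.
clock_mhz = 243 * 2.5                    # 607.5MHz, still under the 700MHz rating
bw_gbs = clock_mhz * 1e6 * 2 * 4 / 1e9   # DDR, 32-bit (4-byte) bus
print(bw_gbs)                            # ~4.86 GB/s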
 
1T-SRAM is not that fast on average. In fact, that degenerate case of yours should never happen if you have proper caching.

1T-SRAM is extremely low-latency memory; I can't count the number of times developers have commented on how incredibly forgiving it was when creating an engine for GC. I seem to remember a certain developer on these forums saying something along the lines of "GC's main memory is so efficient it's actually hard to make code run poorly in it".
 
Rumours can be wrong, yeah, but every single rumour since Wii was first announced? Also, we know that Wii does include 1T-SRAM in one form or another, so obviously it is still made. Not sure why it would be so expensive to put the memory on the same package as the GPU either?

The 1T-SRAM is almost certainly used as an embedded framebuffer. It's looking like that's the only use of it.

For very cool operation, at a guess? I don't know, but it seems that's what they're doing. The GPU runs at 243MHz, so the RAM has to be a multiple of 243MHz (likely 2x, like GC).

Conceivable, but that's probably going too far. The Wii is not a hot enough device to require that level of downclocking. The memory and GPU clock speeds do not have to be synchronized. Anyway, this is more of a tangential issue.

1T-SRAM certainly offers far better latency than GDDR3 at similar clock speeds. GDDR3 is graphics memory after all; there's no way it's going to have latency anywhere close to as low as 1T-SRAM's.

Even if they used a 2.5x multiplier for the memory (3x is too high, as that would put it over the 700MHz spec), that's still only 4.8GB/s of bandwidth using relatively high-latency memory. I definitely wouldn't call that plenty fast.

I'm not too sure. The CAS latency of 1T-SRAM is about 2 cycles, compared to 4-12 cycles for GDDR3. If the GDDR3 is really running 3-4 times faster, I don't see latency being much of a difference. GC emulation with GDDR3 is probably not an issue.

EDIT: Also, most of the latency of main memory comes from the fact that it's physically on another chip from the CPU, not that DRAM is inherently slow. 1T-SRAM was not the magic bullet people have made it out to be. In fact, its high cost of production is probably why Nintendo has abandoned it outside of e-1T-SRAM, from the looks of it.
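
The cycle counts only mean something in wall-clock time. A quick conversion, with the clocks guessed as above and CL8 taken as the middle of that 4-12 range:

Code:
# CAS latency converted to nanoseconds at the guessed clocks.
# Cycle counts and clock speeds are assumptions, not measurements.
sram_cas_ns  = 2 / 243e6 * 1e9   # ~8.2 ns  (2 cycles at 243MHz)
gddr3_cas_ns = 8 / 486e6 * 1e9   # ~16.5 ns (CL8 assumed, at 486MHz)
print(sram_cas_ns, gddr3_cas_ns)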
 
The memory and GPU clock speeds do not have to be synchronized. Anyway, this is more of a tangential issue.

Well, yeah, you can run memory out of sync with the system clock. But this would just make the latency problems even greater (running high-latency memory out of sync would make system latency as a whole even worse). It just goes against what Nintendo seems to want in a system, which is easy to program for and efficient. Plus, look at the GPU and CPU speeds: just like GC, Wii's CPU is running at a 3x multiple of the system LSI clock. I don't think there's any chance of asynchronous memory.

I'm not too sure. The CAS latency of 1T-SRAM is about 2 cycles, compared to 4-12 cycles for GDDR3. If the GDDR3 is really running 3-4 times faster, I don't see latency being much of a difference. GC emulation with GDDR3 is probably not an issue.

No way is the GDDR3 memory going to be running 3-4 times faster than GC's 1T-SRAM; it ran at 324MHz.
 
I'm not too sure. The CAS latency of 1T-SRAM is about 2 cycles, compared to 4-12 cycles for GDDR3. If the GDDR3 is really running 3-4 times faster, I don't see latency being much of a difference. GC emulation with GDDR3 is probably not an issue.

It's not the CAS latency that I'm talking about. Accessing a DRAM (GDDR3 included) is complicated. The memory is organized into banks, rows, and columns. You access a memory location by first "opening" the associated bank with the associated row. Then you can do consecutive burst accesses to columns within that row for as long as you want. However, if the next memory location you want is in the same bank but a DIFFERENT row, you have to re-open the bank. tRC is the latency timing parameter that defines the minimum amount of time between two accesses to DIFFERENT rows within the same bank. This is dozens of clock cycles. If you have an access pattern which scatters among many different rows in the same bank (without touching the other banks), tRC is going to hit you hard and bring effective bandwidth way down. To alleviate this, there are multiple banks, each of which can be opened to a different row, and each of which can be opened and closed independently of the others. The idea when using such memory is to structure your accesses so that you interleave between the available banks to hide the tRC penalty. If you want to avoid the problem of structuring memory access patterns, something like SRAM is a good solution.
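
If you want to see how much that bites, here's a toy model of a banked DRAM with a tRC penalty; the timing values are illustrative, not from any GDDR3 datasheet:

Code:
# Toy model: total cycles for an access stream on banked DRAM with tRC.
# Timing parameters are illustrative only.
TRC   = 40   # min cycles between opening two different rows in one bank
BURST = 4    # cycles of useful burst transfer per access

def cycles_for(accesses):
    """accesses: list of (bank, row) pairs. Returns total cycles with stalls."""
    last_open = {}  # bank -> (open row, cycle it was opened at)
    t = 0
    for bank, row in accesses:
        if bank in last_open and last_open[bank][0] != row:
            # Same bank, different row: wait out the row cycle time.
            t = max(t, last_open[bank][1] + TRC)
        last_open[bank] = (row, t)
        t += BURST
    return t

n = 64
same_bank   = [(0, i) for i in range(n)]           # new row every access, one bank
interleaved = [(i % 8, i // 8) for i in range(n)]  # spread across 8 banks

print(cycles_for(same_bank))    # ~2500 cycles: tRC dominates
print(cycles_for(interleaved))  # ~300 cycles: banks hide each other's tRC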
 
Memory is not my forte. Perhaps you guys are right. However, I must point out that this is moot. No one really deals in low-latency RAM anymore. That is a war that was fought and lost a long time ago. The only thing people care about is hiding the latency. With a decent amount of well-designed cache, it's entirely possible to mask most of the higher latency of slower RAM (rough numbers below). You guys are also arguing about a chip that is clearly MIA in the Wii. I think it's much more plausible at this point to believe that Nintendo found a workaround and avoided using discrete 1T-SRAM altogether, instead of hiding it somewhere.
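
On the cache point, the usual back-of-the-envelope is average access time weighted by hit rate; the numbers here are purely illustrative:

Code:
# Average memory access time (AMAT). With a decent hit rate the cache,
# not the DRAM, dominates. All numbers illustrative.
hit_rate = 0.95
hit_ns   = 2    # on-chip cache hit
miss_ns  = 80   # full trip to external DRAM

amat_ns = hit_rate * hit_ns + (1 - hit_rate) * miss_ns
print(amat_ns)  # ~5.9 ns: most of the DRAM latency is hidden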
 
You guys are also arguing about a chip that is clearly MIA in the Wii.

I would have agreed with you if not for the fact that months before these pics ever came out, sources were claiming that the system's 24MB of 1T-SRAM would somehow be built into the system LSI. There's almost no way it could be 24MB of embedded memory, and even less chance of 24MB of embedded memory using only a 64-bit bus. So it would seem that it's 24MB of memory on the same package as Hollywood, making up the system LSI.
 