Wii U hardware discussion and investigation *rename

The GPU will end up faster and more flexible than the ones from the previous generation of consoles. But the CPU likely not (*some* stuff could run better on the Wii U CPU, because it is an OoO design, but floating-point throughput is most likely lacking in comparison).
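For a rough sense of what "floating-point throughput" means on paper: peak FLOPS is usually estimated as cores × FLOPs per cycle per core × clock. A minimal sketch below, with purely illustrative numbers (none of these are confirmed Wii U or PS360 specs):

```python
# Back-of-envelope peak-FLOPS estimator.
# All numbers below are illustrative placeholders, not confirmed console specs.

def peak_gflops(cores: int, flops_per_cycle: int, clock_ghz: float) -> float:
    """Theoretical peak throughput in GFLOPS: cores * FLOPs/cycle/core * GHz."""
    return cores * flops_per_cycle * clock_ghz

# Hypothetical OoO tri-core with narrow (2-wide) FMA units at a modest clock:
print(peak_gflops(cores=3, flops_per_cycle=4, clock_ghz=1.2))  # 14.4 GFLOPS

# Hypothetical in-order tri-core with wider SIMD at a much higher clock:
print(peak_gflops(cores=3, flops_per_cycle=8, clock_ghz=3.2))  # 76.8 GFLOPS
```

The paper number rewards wide SIMD and high clocks, which is why an OoO core that wins on branchy game logic can still lose badly on raw FP throughput.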
 
As I see it, the Wii U will be competitive with the XB3/PS4.
The GPU will be on par; the CPU will probably have fewer cores, but if we compare it to the last "competitive" Nintendo console, the parameters are similarly comparable, like the GC/XB/PS2.

The PS2 had three cores; the GC had only one.
The XB had 64 megs; the GC only 24.

However, the GC was the best from a pure graphics/gameplay standpoint.

The Wii is a different beast, but it has been a very profitable bet for Nintendo :D
 
As I see it, the Wii U will be competitive with the XB3/PS4. The GPU will be on par
The question (and thread) isn't about market competitiveness, but performance competitiveness. Unless MS and Sony take a dramatic shift in direction against the rumours we're getting, then Wii U will not be on a par in the GPU department. The visual difference may be less pronounced than Wii vs. PS360, but it'll almost certainly be very obvious.
 
The point is not the last-minute changes; in the best case, the paper performance of the PS4/XB3 could be, say, twice that of the Wii U.
But at the end of the day the Wii U will have a matured and well-honed design, like in the case of the GC.

So I don't think we will see a dramatic difference in performance between the machines.

MS and Sony burned themselves last gen with top-of-the-range specs.
Making major changes eight months before production is a sure recipe for similar mistakes.
 
The point is not the last-minute changes; in the best case, the paper performance of the PS4/XB3 could be, say, twice that of the Wii U.
But at the end of the day the Wii U will have a matured and well-honed design, like in the case of the GC.

So I don't think we will see a dramatic difference in performance between the machines.

MS and Sony burned themselves last gen with top-of-the-range specs.
Making major changes eight months before production is a sure recipe for similar mistakes.

You know that this isn't true.
 
The question (and thread) isn't about market competitiveness, but performance competitiveness. Unless MS and Sony take a dramatic shift in direction against the rumours we're getting, then Wii U will not be on a par in the GPU department. The visual difference may be less pronounced than Wii vs. PS360, but it'll almost certainly be very obvious.

Shifty, what do you think the realistic chance of this happening is, MS and Sony changing their plans?
MS seems far along, with Sony's plans less clear.
The reason I ask is: what's the benefit, in the console market, of bringing out a dramatically more powerful system than your competition? Hypothetically, if MS or Sony release a system whose in-game results show a clear improvement over the Wii U, equating to say 2x the performance, what would the benefit be of being 3-5 times more powerful, which seems to be about the current rumor?
More is great, but is much, much more powerful potentially wasteful?
 
The point is not the last-minute changes; in the best case, the paper performance of the PS4/XB3 could be, say, twice that of the Wii U.
But at the end of the day the Wii U will have a matured and well-honed design, like in the case of the GC.

So I don't think we will see a dramatic difference in performance between the machines.

Based on?
 
Shifty, what do you think the realistic chance of this happening is, MS and Sony changing their plans?
They won't change their plans. Their plans were put in motion years back and they'll either see them through or pull the plug. They aren't going to switch from a...1TF machine to a 5TF machine on a last minute change of heart.

Hypothetically, if MS or Sony release a system whose in-game results show a clear improvement over the Wii U, equating to say 2x the performance, what would the benefit be of being 3-5 times more powerful, which seems to be about the current rumor?
More is great, but is much, much more powerful potentially wasteful?
I'd somewhat agree, although there's some marketing power behind being the most powerful. However, a 2x increase over Wii U will result in little noticeable improvement. A 5x increase will be a clear advantage. With diminishing returns, you need increasing amounts of improvement to get a significant advantage, which is necessary if you want people to upgrade.

I expect XB3 and PS4 to be '5 mythical measures' times more powerful than Wii U, as in the difference between what Wii U puts on screen and what the others put on screen being like the difference between, say, a Dreamcast and an Xbox. It'll be plenty enough that Joe Gamer will instantly see that the other consoles have way more oomph, and I think the other consoles will get more games that are next-gen in non-graphical ways too, with more RAM and better overall processing. That's assuming a conventional performance increase from MS and Sony.
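A toy way to see the diminishing-returns point: if perceived visual improvement grows roughly with the logarithm of raw power (an assumption for illustration, not a measured law), the multipliers look like this:

```python
import math

# Toy model: perceived 'visible steps' of improvement ~ log2(raw power multiplier).
# The log relationship is an illustrative assumption, not a measured one.
def perceived_steps(power_multiplier: float) -> float:
    return math.log2(power_multiplier)

for mult in (2, 5, 10):
    print(f"{mult}x raw power -> ~{perceived_steps(mult):.1f} visible steps")
# 2x -> ~1.0, 5x -> ~2.3, 10x -> ~3.3
```

On that reading, a 2x gap buys barely one noticeable step, while 5x buys more than twice as many, which matches the argument above.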
 
Is it provable that the Wii U runs Wii games natively instead of via emulation? I'm trying to convince someone, but he keeps arguing it's emulation and that I can't know it's not.
 
Is it provable that the Wii U runs Wii games natively instead of via emulation? I'm trying to convince someone, but he keeps arguing it's emulation and that I can't know it's not.

Of course it's emulation; anything that's not running on the native hardware it was created for is emulation.

The Wii U doesn't have any Wii hardware for backwards compatibility, so it's emulated.
 
Of course it's emulation; anything that's not running on the native hardware it was created for is emulation.

The Wii U doesn't have any Wii hardware for backwards compatibility, so it's emulated.

But it has been pointed out, in that Iwata Asks segment about the hardware, that the tech is optimised for the best emulation of the Wii.
 
Ever since Renesas was mentioned as a possible supplier of the RAM on the MCM,
I'm starting to doubt that Nintendo is using IBM eDRAM for the GPU.
Outside of Digital Foundry, there has never been any confirmation of the rumor.
And IBM only states that their eDRAM is for the CPU.


From 2006:

For companies that start out with SOI wafers for their high-performance processors, such as AMD, IBM or Freescale, the FB-RAM approach has several advantages: fast read and write times, and a cell size smaller than embedded DRAM and about one-fifth that of a six-transistor SRAM. With the trench and stacked capacitors of conventional embedded DRAM becoming more difficult to build, proponents claim that FB-RAMs are cheaper, and require few additional mask layers.

At the Symposium on VLSI Circuits, Toshiba engineer Takashi Ohsawa said the company's 128-Mbit floating-body cell (FBC) test chip demonstrated a nearly 100 percent bit yield. The cell size of 0.17 square microns is half that of comparable embedded-DRAM bit cells. "Scaling is easier than with conventional embedded DRAM," Ohsawa said. "With DRAMs, the process cost is getting high because of the deep-trench capacitor." With FBC, however, "it is important to have an accurate reference voltage." In the Toshiba FBC, the write time was consistently measured at 10 nanoseconds or less.

Also at the symposium, Renesas discussed an enhanced twin-transistor RAM based on an SOI technology. Though the design has a larger cell size than the single-transistor FB-RAM approach, Renesas said that it is less costly to build, with fewer process changes. Also, the company claims that the so-called eT2RAM is better-suited to modern system-on-chip (SoC) solutions, which rely on multiple voltages and other power-management schemes to keep power consumption under control. "This could be a mainstream memory for advanced SoCs," said Kazutami Arimoto, a Renesas engineering manager.

The company recently fabricated a 4-Mbit test chip using a 90-nm SOI process, and a full evaluation is under way. With the two-transistor architecture, the eT2RAM operated successfully with a 0.5-V power supply, achieving a respectable 56-MHz operation at such a low operating voltage.

If a high-k dielectric and metal gate electrode were used in the transistor, the cell size could be scaled further--partly because the high-k dielectric can be physically thicker and transistor variability can be better controlled, Arimoto said.

Since Renesas--ranked as the world's largest microcontroller vendor--does not have a line of high-performance MPUs that use SOI technology, one challenge is to find a home for its SOI-based eT2RAM.

http://maltiel-consulting.com/Substantially_Faster_DRAM_using_SOI_semiconductor_expert_maltiel.html
 
Ever since Renesas was mentioned as a possible supplier of the RAM on the MCM,
I'm starting to doubt that Nintendo is using IBM eDRAM for the GPU.
Outside of Digital Foundry, there has never been any confirmation of the rumor.
And IBM only states that their eDRAM is for the CPU.

Who's saying IBM is fabbing the GPU? Also, I haven't seen any DF articles that actually state that it's IBM eDRAM in the GPU...

Furthermore, what does quoting a huge chunk of a six-year-old article have to do with anything?
 
The press release only mentions the CPU.

The all-new, Power-based microprocessor will pack some of IBM's most advanced technology into an energy-saving silicon package that will power Nintendo's brand new entertainment experience for consumers worldwide. IBM's unique embedded DRAM, for example, is capable of feeding the multi-core processor large chunks of data to make for a smooth entertainment experience.
The only thing we really know about the eDRAM is that MoSys is not involved with the Wii U (from one of their financial press releases, IIRC).

edit:


In addition, Nintendo is expected to introduce a new gaming system in 2012, which does not incorporate our technology, and will likely cause a reduction in royalties we receive related to the existing gaming devices.
http://www.sec.gov/Archives/edgar/data/890394/000104746912002721/a2207956z10-k.htm
 
Wonder if they're using 1T-SRAM again? They had it in the GC and Wii.

Also of note: certain foundries do refer to 1T-SRAM as eDRAM.
 
alstrong :oops: almighty

You know, I just noticed these are two accounts.
What a tag team.

From NeoGAF's Nightbringer:
[screenshot: captura-de-pantalla-2012-10-11-a-las-13-11-58.png]

The text is in Spanish since I made it for my personal blog. I took the image of the entire board that I saw here and extrapolated from it for the final composition. Doing it, I observed that the system uses two different types of memory for its main RAM: 2 GDDR5 modules (256MB each) and 2 DDR3 modules of 768MB each.

I have to agree, it does look like there are two different sets of RAM modules. If that's true, then what was stated here paints us a picture:

Likewise, a computer transfers and manages data by layering storage, with the CPU at the top, high-speed low-capacity cache memory serving as short-term memory underneath, followed by low-speed large-capacity main storage for managing hardware, and auxiliary storage for managing the OS on the bottom.
- Nintendo

A picture of four layers:

CPU/GPU with their on-chip memory, which can be seen as L1
Mem1 on the MCM, possibly shared between the two LSIs? (24 up to 256MB?), which can be seen as L2
Mem2: the large pool of RAM on the motherboard, dedicated to gaming (<1GB)
Mem3: the other pool of RAM on the motherboard, for the OS (1GB)
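To make that ordering concrete, here's a minimal sketch of the proposed four layers; the sizes, names, and locations are hypothetical placeholders taken from the speculation above, not confirmed figures:

```python
# Hypothetical four-layer Wii U memory map, per the speculation above.
# Every size and location here is an assumed placeholder, not a confirmed spec.
from dataclasses import dataclass

@dataclass
class MemLayer:
    rank: int        # 0 = closest to the CPU/GPU, i.e. fastest
    name: str
    location: str
    size_mb: int     # assumed capacity

layers = [
    MemLayer(0, "on-chip caches (L1)",   "CPU/GPU dies", 1),
    MemLayer(1, "Mem1 (eDRAM/1T-SRAM?)", "MCM",          32),
    MemLayer(2, "Mem2 (game RAM)",       "motherboard",  1024),
    MemLayer(3, "Mem3 (OS RAM)",         "motherboard",  1024),
]

for l in sorted(layers, key=lambda l: l.rank):
    print(f"{l.rank}: {l.name:24s} @ {l.location:12s} ~{l.size_mb} MB")
```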


By the way, who is to say that Nintendo is not using two cores for the CPU?
Everyone was expecting the iPhone 5 to have a quad-core CPU, but they ended up using two.
And it's the fastest, or close to the fastest, smartphone out there at the moment.
 
I have to agree, it does look like there are two different sets of RAM modules. If that's true, then what was stated here paints us a picture:
If so, that's quite a change to expectations. It means split RAM, 512MB GDDR5 + 1536MB of slow system RAM. I guess GDDR5 is being used for low-power performance rather than high performance, but now we have to worry about RAM architecture. Nintendo have two buses, and devs have split pools. We need a way to determine bus size and transfer rate; it makes little sense to have fast GDDR5 in there.
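On bus size and transfer: peak bandwidth is bus width ÷ 8 × effective data rate, so the split-pool question hinges entirely on those two unknowns. A quick sketch with hypothetical widths and rates (not leaked figures):

```python
# Peak bandwidth in GB/s = (bus width in bits / 8) * effective data rate in GT/s.
# Both configurations below are hypothetical what-ifs, not confirmed Wii U specs.

def bandwidth_gbs(bus_bits: int, data_rate_gts: float) -> float:
    return bus_bits / 8 * data_rate_gts

# Two 32-bit GDDR5 chips (64-bit bus) at an effective 4.0 GT/s:
print(bandwidth_gbs(64, 4.0))  # 32.0 GB/s

# Two 32-bit DDR3 chips (64-bit bus) at an effective 1.6 GT/s:
print(bandwidth_gbs(64, 1.6))  # 12.8 GB/s
```

Even on identical bus widths the GDDR5 pool would be several times faster, which is why knowing the actual widths and clocks matters so much here.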

By the way, who is to say that Nintendo is not using two cores for the CPU?
The devs EG spoke with.
 
If so, that's quite a change to expectations. It means split RAM, 512MB GDDR5 + 1536MB of slow system RAM. I guess GDDR5 is being used for low-power performance rather than high performance, but now we have to worry about RAM architecture. Nintendo have two buses, and devs have split pools. We need a way to determine bus size and transfer rate; it makes little sense to have fast GDDR5 in there.

I'm not necessarily agreeing with the poster that GDDR5 is being used.
I'm thinking some special RAM on the module, like TTRAM from Renesas,
and DDR3 on the board.

A lot of developers are stating that they are happy working with a lot of RAM.
But I'm not sure if they are talking about the main RAM pool or the on-chip RAM for the GPGPU. So far, I don't see any games that would be constrained by 512MB.
So I'm thinking they are talking about the amount on the chip, and the VRAM on the MCM.
 