Wii U hardware discussion and investigation *rename

How do we 'know' that it isn't 55nm? Do we have proof that it was created on X process? If it were 40nm then surely we would know about the extra GPU performance by now?
 
How do we 'know' that it isn't 55nm? Do we have proof that it was created on X process? If it were 40nm then surely we would know about the extra GPU performance by now?

We don't know, but with how old 55nm and 40nm both are, I don't see how using a 55nm GPU could possibly be cheaper. Throw this issue of power consumption in and I really don't see any reason why they'd go with 55nm. Again, I want to see a reason for it. Something that would benefit them. We can see reasons for what they did with the CPU and RAM, but the GPU being 55nm serves literally no benefit to the system, and even less if we go with the theory that a quarter of the GPU was cut out.
 
We don't know, but with how old 55nm and 40nm both are, I don't see how using a 55nm GPU could possibly be cheaper. Throw this issue of power consumption in and I really don't see any reason why they'd go with 55nm. Again, I want to see a reason for it. Something that would benefit them. We can see reasons for what they did with the CPU and RAM, but the GPU being 55nm serves literally no benefit to the system, and even less if we go with the theory that a quarter of the GPU was cut out.

They have eDRAM on the same chip. There may have been good reasons for using 55nm instead of 40nm. Had they not had eDRAM then I would have certainly thought there was no way they would be using 55nm. The performance profile of the chip doesn't suggest having, say, an RV740 in there, and the memory bandwidth certainly doesn't seem to support that idea either.
 
But it would actually cost more to do something like that than it would to shrink an RV730 to 40nm.

Really? What would the comparative costs be? You're sure about this, so hopefully you can share the figures you're using.

There's absolutely no reason for it to be 55nm.

Unless you can show otherwise, cost might be a reason.

I know how you plan to respond to this, so instead please give a reason.
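Just to put very rough numbers on the cost question, here's a back-of-envelope sketch. The 146mm^2 RV730 die size is public; the wafer prices are placeholder guesses purely for illustration, not real foundry figures.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Crude gross-die estimate: wafer area / die area, minus ~10% for edge loss."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area / die_area_mm2 * 0.9)

rv730_55nm = 146.0                          # mm^2, known RV730 die size
rv730_40nm = rv730_55nm * (40 / 55) ** 2    # ideal optical shrink, ~77 mm^2

# Wafer prices below are placeholders, not real figures.
for node, area, wafer_cost in [("55nm", rv730_55nm, 3000),
                               ("40nm", rv730_40nm, 4500)]:
    n = dies_per_wafer(area)
    print(f"{node}: ~{area:.0f} mm^2, ~{n} dies/wafer, ~${wafer_cost / n:.2f} per die")
```

Even with a hefty wafer-price premium for 40nm, a near-halving of die area tends to win on per-die cost, which is why the 55nm case really hinges on something like eDRAM availability rather than raw cost.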

 
How do we 'know' that it isn't 55nm? Do we have proof that it was created on X process? If it were 40nm then surely we would know about the extra GPU performance by now?

This is what I want to know. It feels like it *should* be a 40nm chip but it sure as hell isn't performing like one, unless a huge amount of the chip is taken up by an unexpectedly large amount of edram and BC hardware is taking up more space than we're expecting. And yeah, the main memory BW doesn't seem like the kind of pipe you'd connect up to a fast chip. Whether it's 40nm or 55nm it's going to be around or below RV730.

As you say in your next post, there's the edram to consider. IBM have had a 32nm process that can incorporate edram ready for volume production (or so they say) since early this year, but AFAIK they're still shipping Power 7s (with huge amounts of edram) on 45nm. The economies of transitioning to edram-friendly processes may simply be different.

Just because it's interesting, we can look at TSMC's figures and see that 65/55nm still, this quarter, accounts for 22% of their total wafer revenue and 40nm accounts for 27%. 28nm is still just 13%. Given the lower revenues for older nodes it's safe to say that there are still a hell of a lot of chips being made on 65~40nm and that going with smaller nodes doesn't automatically make sense.

http://www.xbitlabs.com/news/other/...f_28nm_Chips_by_Two_Times_in_One_Quarter.html

So while it probably isn't 55nm, I wouldn't completely rule out the idea that, given Nintendo's very low performance target, 55nm might make more business sense. The die size and performance don't clearly rule it out IMO, and "bu bu bu 55nm is old PC GPU 28nm!?" doesn't do it for me either. Power consumption is a bigger issue IMO.

It would be great to know for sure how much edram the WiiU has (and what its BW is too!).
 
This is what I want to know. It feels like it *should* be a 40nm chip but it sure as hell isn't performing like one, unless a huge amount of the chip is taken up by an unexpectedly large amount of edram and BC hardware is taking up more space than we're expecting. And yeah, the main memory BW doesn't seem like the kind of pipe you'd connect up to a fast chip. Whether it's 40nm or 55nm it's going to be around or below RV730.

Well we did have that rumour which suggested that it is literally an RV7xx chip. What if the chip itself was just a little bit smaller than the RV730? That would give room for 32MB of eDRAM and it would explain the relative lack of performance. Why would they stick an RV740 with 640 SPs onto a 12GB/s shared memory bus??? Perhaps we really need to start asking what kind of chip makes sense given the shown performance and the lack of memory bandwidth?

So while it probably isn't 55nm, I wouldn't completely rule out the idea that, given Nintendo's very low performance target, 55nm might make more business sense. The die size and performance don't clearly rule it out IMO, and "bu bu bu 55nm is old PC GPU 28nm!?" doesn't do it for me either. Power consumption is a bigger issue IMO.

It would be great to know for sure how much edram the WiiU has (and what its BW is too!).

Perhaps we can consider the eDRAM as much an energy saving feature as a performance feature. Off chip memory bandwidth is very energy expensive so if they can keep the data on the die they make it more energy efficient.
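To put an illustrative number on that: the pJ/bit figures below are order-of-magnitude rules of thumb, not measured Wii U data.

```python
# Energy cost of moving data: order-of-magnitude sketch.
# Both pJ/bit figures are assumptions, not Wii U measurements.
OFF_CHIP_PJ_PER_BIT = 50.0   # external DDR3 interface (assumed)
ON_DIE_PJ_PER_BIT = 5.0      # on-die eDRAM access (assumed)

def watts(bandwidth_gb_s, pj_per_bit):
    """Power needed to sustain a given bandwidth at a given energy per bit."""
    bits_per_second = bandwidth_gb_s * 1e9 * 8
    return bits_per_second * pj_per_bit * 1e-12

MAIN_POOL_GB_S = 12.8  # DDR3-1600 on a 64-bit bus

print(f"off-chip at full rate: ~{watts(MAIN_POOL_GB_S, OFF_CHIP_PJ_PER_BIT):.1f} W")
print(f"same traffic on-die:   ~{watts(MAIN_POOL_GB_S, ON_DIE_PJ_PER_BIT):.1f} W")
```

A few watts either way is a big deal in a console drawing somewhere around 30W total, so keeping framebuffer traffic on-die plausibly pays for the eDRAM's area.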
 
Well we did have that rumour which suggested that it is literally an RV7xx chip. What if the chip itself was just a little bit smaller than the RV730? That would give room for 32MB of eDRAM and it would explain the relative lack of performance. Why would they stick an RV740 with 640 SPs onto a 12GB/s shared memory bus??? Perhaps we really need to start asking what kind of chip makes sense given the shown performance and the lack of memory bandwidth?

I think that's a really good question and I guess it's what I've been trying to work out.

Despite being 10% faster in terms of clock and supposedly having a lot more edram than the 360, it hasn't seen a resolution bump in anything, even though that should be the easiest thing to do even on day one. It can certainly do some things a little better than the 360 - like Trine 2 - so it probably has more usable ALU power, but it chokes on fillrate-heavy bits of BO 2, so it appears to have either a ROP or edram BW issue (probably BW). And it has a poxy amount of main memory BW, so unless there's some kind of huge texture cache that can retain data between frames, it doesn't appear to make sense having massively more texture sampling units than the 360.

And then there's the holistic view of the system - one where you might expect the parts of the system to be representative of the system as a whole. And you just can't get away from dat CPU and dat RAM. I mean, Nintendo wouldn't even spend the extra grain of sand on giving all three cores a larger L2. How would a big GPU - that no-one has really used yet - fit into that picture?

The Wii U wouldn't need to be tied to any particular configuration of shaders, TMUs or ROPs, but just because it's easy (and fun) to look at PC benchmarks, here is a link to a 160 shader GPU (6450) where we can see that in a closed, console environment it might just be able to pull off decent Xbox 360 ports with something way, way below RV740 ...

http://www.anandtech.com/show/4263/amds-radeon-hd-6450-uvd3-meets-htpc

I feel dangerously close to suggesting that something between half and three quarters of RV730 might just be able to get the job done, on 55nm (and be within the die and power constraints). ;D
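As a rough sanity check on the fillrate choke idea, here's the arithmetic; the ROP count and clock are assumptions, not known Wii U specs.

```python
# Fillrate vs bandwidth sketch. ROP count and clock are assumed, not confirmed.
rops = 8
clock_ghz = 0.55
fillrate_gpix = rops * clock_ghz       # ~4.4 Gpixels/s peak

# Alpha blending a 32bpp target reads and writes every pixel: ~8 bytes/pixel,
# before any depth-test traffic on top.
blend_bytes_per_pixel = 8
needed_gb_s = fillrate_gpix * blend_bytes_per_pixel

print(f"peak fill: {fillrate_gpix:.1f} Gpix/s")
print(f"blend traffic at peak fill: ~{needed_gb_s:.0f} GB/s")
```

~35 GB/s of blend traffic dwarfs the 12.8 GB/s main pool, so heavy transparency has to live in the eDRAM, and if the eDRAM bandwidth is modest, that's exactly where you'd expect BO 2 style scenes to choke.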

Perhaps we can consider the eDRAM as much an energy saving feature as a performance feature. Off chip memory bandwidth is very energy expensive so if they can keep the data on the die they make it more energy efficient.

I think the edram is primarily a cost saving feature, but as you say it should reduce power draw of the system too.
 
Honestly I think that 55nm is close to impossible.
The system power consumption is really low, and it "keeps up" with the PS360 with only 13GB/s to the main pool of RAM while burning half the power or less.
The clock has to be pretty conservative, there should be a nice amount of edram, and it doesn't seem workable to me in a 150mm^2+ chip on a 55nm process.
 
Really? What would the comparative costs be? You're sure about this, so hopefully you can share the figures you're using.

Unless you can show otherwise, cost might be a reason.

Okay, you're right, I can't be sure. But who even makes 55nm anymore? Plus, we know that the CPU is ridiculously weak along with the RAM, which means that the GPU has to be heavily bottlenecked in these rushed, low-budget launch games. Going off of these and assuming that the GPU is <= Xenos really doesn't make sense. If you use that logic, those games shouldn't be able to run without severe sacrifices. Slightly lower frame rates and more pop-in doesn't quite cut it. We even have at least one game that looks and runs better than the PS3/360 versions (Trine 2).
 
Okay, you're right, I can't be sure. But who even makes 55nm anymore? Plus, we know that the CPU is ridiculously weak along with the RAM, which means that the GPU has to be heavily bottlenecked in these rushed, low-budget launch games. Going off of these and assuming that the GPU is <= Xenos really doesn't make sense. If you use that logic, those games shouldn't be able to run without severe sacrifices. Slightly lower frame rates and more pop-in doesn't quite cut it. We even have at least one game that looks and runs better than the PS3/360 versions (Trine 2).

It's a little lower down, but I posted a link showing that TSMC are still doing an awful lot of business on 65/55nm.

If the Wii U is CPU bottlenecked and has a fast GPU then that's a good reason to bump the resolution up (if you have the memory, and the Wii U is supposed to). But nothing has. In fact it looks very much like the Wii U GPU is in the same ballpark as the 360 - likely a little better (certainly newer), but maybe with a couple of drawbacks too.

Again, I'd be thinking RV730 territory and definitely not RV740.
 
I also figure that RV730 is much likelier at this point. The rumors always suggested an HD4600 variant and performance seems to be in line with that. RV730 also has official support for DDR3 where RV740 does not. Even with the bandwidth deficiency, an HD4700 should be performing better at 720p.

To put it in context, AMD's integrated HD6550D has 400SPs running at 600mhz and manages quite a few 360 games at 1080p at 30+ fps running off the same 1600mhz DDR3 as the WiiU. Usually 60fps when running at 720p.
 
I've always thought the GPU's theoretical throughput would never hit 500 GFLOPS, even though many definitely did not agree with that assessment. Knowing what we do now about virtually every launch game, would that be an accurate assessment to make?

If it is indeed an RV730, while I'm sure it'll surpass Xenos in many ways when games are directly optimized for it, would it be inaccurate to compare it directly to the PS3's setup, where the RSX is weaker but the Cell helps with GPU rendering tasks in exclusive games built for that purpose?
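For reference, peak shader throughput is just SPs x 2 FLOPs (one MADD) x clock; the 550MHz figure below is an assumed console-style clock, not a confirmed spec.

```python
def gflops(alu_lanes, clock_ghz):
    """Peak single-precision throughput: one MADD (2 FLOPs) per lane per clock."""
    return alu_lanes * 2 * clock_ghz

print(f"Xenos (240 lanes @ 500MHz):         {gflops(240, 0.50):.0f} GFLOPS")
print(f"RV730 / HD 4670 (320 SPs @ 750MHz): {gflops(320, 0.75):.0f} GFLOPS")
print(f"RV730 at an assumed 550MHz:         {gflops(320, 0.55):.0f} GFLOPS")
print(f"half an RV730 (160 SPs @ 550MHz):   {gflops(160, 0.55):.0f} GFLOPS")
```

A full RV730 would need roughly a 780MHz clock to reach 500 GFLOPS, so staying under that figure is entirely consistent with the launch-game behaviour discussed above.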
 
It's a little lower down, but I posted a link showing that TSMC are still doing an awful lot of business on 65/55nm.

If the Wii U is CPU bottlenecked and has a fast GPU then that's a good reason to bump the resolution up (if you have the memory, and the Wii U is supposed to). But nothing has. In fact it looks very much like the Wii U GPU is in the same ballpark as the 360 - likely a little better (certainly newer), but maybe with a couple of drawbacks too.

Again, I'd be thinking RV730 territory and definitely not RV740.

I'm not implying RV740. However, I am contesting your idea that it's a severely cut-down RV730.
 
I also figure that RV730 is much likelier at this point. The rumors always suggested an HD4600 variant and performance seems to be in line with that. RV730 also has official support for DDR3 where RV740 does not. Even with the bandwidth deficiency, an HD4700 should be performing better at 720p.

To put it in context, AMD's integrated HD6550D has 400SPs running at 600mhz and manages quite a few 360 games at 1080p at 30+ fps running off the same 1600mhz DDR3 as the WiiU. Usually 60fps when running at 720p.

Also usually with a lot more RAM and a CPU 3x+ as powerful as what's in the Wii U. That's not the fairest comparison.
 
Also usually with a lot more RAM and a CPU 3x+ as powerful as what's in the Wii U. That's not the fairest comparison.

It is fair for most of his points.

The extra size for larger frame and G buffers is far, far smaller than the extra 512MB the WiiU has over the 360, and CPU power has nothing to do with res; the only thing the CPU would affect is framerate in CPU-limited games.
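Putting numbers on the buffer-size point; the render-target layouts below are generic assumptions, not any specific engine's.

```python
def buffer_mb(width, height, bytes_per_pixel):
    """Size of a render target set in MiB."""
    return width * height * bytes_per_pixel / 2**20

# Assumed layouts: RGBA8 colour + 32-bit depth = 8 bytes/pixel;
# a deferred setup with 4x RGBA8 G-buffer targets + depth = 20 bytes/pixel.
for name, w, h in [("720p", 1280, 720), ("1080p", 1920, 1080)]:
    print(f"{name}: colour+depth {buffer_mb(w, h, 8):.1f} MB, "
          f"G-buffer {buffer_mb(w, h, 20):.1f} MB")
```

The 720p to 1080p delta is a few tens of MB at worst, tiny next to an extra 512MB of main RAM, though note a 1080p G-buffer would overflow 32MB of eDRAM, which points back at bandwidth rather than capacity.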
 
To put it in context, AMD's integrated HD6550D has 400SPs running at 600mhz and manages quite a few 360 games at 1080p at 30+ fps running off the same 1600mhz DDR3 as the WiiU. Usually 60fps when running at 720p.

The major discrepancy in your comparison is that the 1600MHz DDR3 in the HD6550D is running off of a 128-bit bus and therefore has double the bandwidth. The eDRAM is supposed to make up the slack but there may simply not be enough of it to handle 1080p for these game engines.

Since no commercial RV730 incorporated eDRAM or shared memory with a CPU, there's no real point favouring RV730 over RV740 just because it uses DDR3; obviously the memory arrangement is custom either way.
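For anyone following along, the bandwidth arithmetic behind that point:

```python
def ddr3_bandwidth_gb_s(data_rate_mt_s, bus_width_bits):
    """Peak DDR3 bandwidth: transfers per second times bus width in bytes."""
    return data_rate_mt_s * 1e6 * (bus_width_bits / 8) / 1e9

print(f"DDR3-1600 on a 64-bit bus (Wii U):    {ddr3_bandwidth_gb_s(1600, 64):.1f} GB/s")
print(f"DDR3-1600 on a 128-bit bus (HD6550D): {ddr3_bandwidth_gb_s(1600, 128):.1f} GB/s")
```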
 
So what do you guys think about the 1GB RAM for the OS? Can Nintendo optimize the OS to use less RAM and free some of that up for games? For what we know the OS has to do, does it need a full GB to function properly? Keeping in mind we don't know what other features Nintendo has in mind for the console and OS in general. But let's say they can free up enough for the Wii U to have 1.5GB for devs to work with: how will that impact game development?
 
So what do you guys think about the 1GB RAM for the OS? Can Nintendo optimize the OS to use less RAM and free some of that up for games? For what we know the OS has to do, does it need a full GB to function properly? Keeping in mind we don't know what other features Nintendo has in mind for the console and OS in general. But let's say they can free up enough for the Wii U to have 1.5GB for devs to work with: how will that impact game development?

Well, that depends entirely on the software stack used. Which we know nothing about. The fact that you have to reboot to gain Wii compatibility shows the lack of flexibility. It should have done some kind of virtualization out of the box if the software really is that different.

Looking at modern, optimized operating systems that are UNIX-based, you can get away with a very small RAM footprint.

The way forward is for Nintendo to heavily optimize the software-side of the console and get those load times under control.

More RAM for games means bigger and better textures can be used.
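For a sense of scale on what an extra 512MB buys in textures (standard block-compression arithmetic, with roughly a third extra for mip chains):

```python
def texture_mb(size_px, bits_per_pixel, mipmapped=True):
    """Texture memory: width x height x bpp, plus ~1/3 for a full mip chain."""
    base = size_px * size_px * bits_per_pixel / 8 / 2**20
    return base * 4 / 3 if mipmapped else base

print(f"2048^2 DXT1 (4 bpp):   {texture_mb(2048, 4):.1f} MB")
print(f"2048^2 RGBA8 (32 bpp): {texture_mb(2048, 32):.1f} MB")
print(f"4096^2 DXT5 (8 bpp):   {texture_mb(4096, 8):.1f} MB")
```

So an extra 512MB is room for a couple of hundred high-res compressed textures, which is real headroom, though the bandwidth objection in the next reply still applies.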
 
Not much. If they can't make a game run better than on the old HD consoles using the existing 1GB, then an extra 512 megs won't help.

The low memory bandwidth makes most of that memory inaccessible anyway. By the time you can do something with that data, it could have been loaded from the disc just as well.
 