Fact: Nintendo to release HD console + controllers with built-in screen late 2012

How is it logical to fit a Radeon HD 4870 into the WiiU chassis, even if you take the 40nm rendition, which is the 5770?
 
I've got some possible explanations for that here:

http://forum.beyond3d.com/showpost.php?p=1567722&postcount=1784

That's really thin. If Nintendo put that much horsepower in their box and are going to launch ahead of their competition, I expect they would want to showcase it and brag about how much more powerful it is than the current competition. An rv770 would roflstomp ps360 graphics with a bit of effort. Also, I would expect it to be much faster than an rv770 in a PC, so I think even Crysis would separate itself quite a bit given the advantages of a closed environment.
 

We'll just have to wait and see.
 
Does anyone know what the final process node is going to be? Are we talking 32nm, 28nm or 45nm? If the development kits use silicon that is going to be shrunk later, then we have an obvious reason for the reports of how hot they ran.
 

CPU is confirmed by IBM to be 45nm. There's nothing about the GPU yet.
 
But does that mean that it won't be 32nm by the time they release?

Yes. Otherwise, why would they even mention the process node?

"We're making these few dozen dev kit prototype chips on 45nm, so we thought we'd announce that the CPU is on 45nm in our press release"?

32nm is far too cutting-edge for a mass-produced new console in 2012. Armchair message-boarders always get way too aggressive with process nodes.
 
Significant redesigns are required between full process nodes for the move to be worthwhile; otherwise you get really shoddy transistor scaling and power characteristics, owing to the differences in manufacturing procedure that are typical between full nodes. Half-nodes are just optical shrinks. Consider, for example, optimal doping conditions, or how gate designs can change to accommodate scaling issues with threshold voltage and leakage current.

If they say they're designing on 45nm, they'll need a whole new round of design and testing to scale it down to 32nm. Just look at how long it took MS and Sony to move from 90nm to 65nm and then to 45nm.
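To put rough numbers on what a full-node move is supposed to buy you, here's a back-of-envelope sketch of the textbook ideal scaling factors (first-order assumptions only; an unoptimized port falls well short of these, which is the whole point):

```python
# Back-of-envelope: textbook ideal scaling between full process nodes.
# First-order assumptions only; an unoptimized port won't hit these numbers,
# which is exactly why a redesign is needed.

def ideal_full_node_scaling(old_nm, new_nm):
    s = old_nm / new_nm            # linear scale factor (45 -> 32 gives ~1.4)
    return {
        "die_area": 1 / s**2,      # area ideally halves per full node
        "gate_delay": 1 / s,       # gates ideally get ~1.4x faster
        "dynamic_power": 1 / s,    # C*V^2*f with smaller C, same V and f
    }

print(ideal_full_node_scaling(45, 32))
# -> roughly {'die_area': 0.51, 'gate_delay': 0.71, 'dynamic_power': 0.71}
```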
 
How is it logical to fit a Radeon HD 4870 into the WiiU chassis, even if you take the 40nm rendition, which is the 5770?

RV770 doesn't mean it'll have HD4870 clock speeds and power consumption. Actually, the HD4870 was never mentioned; only the HD4850, RV700 and HD4800 were.

At 55nm, the MXM module with a 500MHz RV770 (HD4850 Mobility) had a TDP of 45W (1GB GDDR3 memory included).
Since it's highly unlikely to get a 55nm GPU out in 2012, at least a shrink to 40nm should be expected, and at the same clocks it'd consume less.

A GPU with a 30W TDP wouldn't be that difficult to put inside that case.
Plus, there are also the rumours claiming that the SDK units had overheating issues.
I can't believe an RV730 would bring overheating issues, unless it was clocked really high (>850MHz?).
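For what it's worth, that 30W ballpark falls out of arithmetic along these lines (the memory-power split and the shrink factor are my guesses, not published figures):

```python
# Rough estimate: 40nm RV770 @ 500MHz from the 55nm HD4850 Mobility figure.
# The module TDP (45W) includes 1GB GDDR3; the memory share is an assumption.
module_tdp_55nm = 45.0        # W, MXM module: RV770 @ 500MHz + 1GB GDDR3
assumed_mem_power = 5.0       # W, guess for the GDDR3 chips
gpu_power_55nm = module_tdp_55nm - assumed_mem_power   # ~40W for the GPU

# Assume a 55nm -> 40nm shrink cuts dynamic power by ~25-30% at equal clocks
# (treat this factor as a guess, not a datasheet number).
shrink_factor = 0.72
gpu_power_40nm = gpu_power_55nm * shrink_factor
print(f"~{gpu_power_40nm:.0f}W GPU, ~{gpu_power_40nm + assumed_mem_power:.0f}W with memory")
# -> ~29W GPU, ~34W with memory: in the ballpark of a 30W TDP part
```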
 
Since it's highly unlikely to get a 55nm GPU out in 2012, at least a shrink to 40nm should be expected, and at the same clocks it'd consume less.

There's a reason why I mentioned 5770.

I can't believe an RV730 would bring overheating issues, unless it was clocked really high (>850MHz?).

RV730 is clearly much lower spec than rv770, so of course.
 
Do those run at full load for hours on end or just sporadically? How loud are they? The 6850M is essentially a 5770 with much lower clocks as well. It doesn't do me any good if I can only run the console for two hours before it toasts itself.

I've got a 17" laptop with a vastly inferior 4670M and a Core i7 720 (8 threads) that gets toasty and very unstable within just an hour of full load, and it's noisy as hell. If you want another RROD hell, be my guest, but even the 360 was originally tested running at load for more than a day, and they still had issues. A lot of things are possible if you only run the device for really short periods of time, but get real.
 
There's a reason why I mentioned 5770.
But it's unlikely it'll be an Evergreen-based GPU.
I just took the RV770 consumer model with the lowest TDP available.
Interestingly enough, there's a mobility version of Juniper (HD5830 Mobility) going at 500MHz with a 24W TDP, GDDR3 memory included.

And Juniper packs more transistors than RV770, so one could assume that a 40nm RV770 @ 500MHz could be at <22W, GPU alone.
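The arithmetic behind that <22W guess, with the memory share again assumed:

```python
# HD5830 Mobility: Juniper @ 500MHz, 24W TDP including GDDR3.
juniper_module_tdp = 24.0     # W
assumed_mem_power = 3.0       # W, guess for the GDDR3 share
juniper_gpu_power = juniper_module_tdp - assumed_mem_power   # ~21W

# RV770 has fewer transistors than Juniper (~956M vs ~1040M),
# so a 40nm RV770 @ 500MHz plausibly lands at or below that:
rv770_estimate = juniper_gpu_power * (956 / 1040)
print(f"~{rv770_estimate:.0f}W for the GPU alone")   # -> ~19W
```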


RV730 is clearly much lower spec than rv770, so of course.
So you agree that, should the info about overheating consoles be true, an RV730 @ ~500MHz in there isn't the most probable option. That, or at least the GPU wouldn't be responsible for the overheating.


There are Phenom II X4s and 6850s inside laptops that are smaller and pack much more stuff.
Yes, the console isn't that small. The new Mac mini is a lot smaller, and it carries a 624M-transistor Core i5 Sandy Bridge @ 2.5GHz (35W) and a 716M-transistor Turks @ 500MHz (~15W) with GDDR5, plus DDR3 system memory.




I've got a 17" laptop with a vastly inferior 4670M and a Core i7 720 (8 threads) that gets toasty and very unstable within just an hour of full load, and it's noisy as hell.

That HD4670M is a 3-year-old GPU made on 55nm, with very high clocks for its time, consuming 35W, together with a 2-year-old CPU with a 45W TDP.
It's not really fair to compare old hardware on old processes with something that's coming out in Q2 2012. Even if the same nodes were used (55nm GPU + 45nm CPU), they'd be a lot more power efficient by now.
 
But it's unlikely it'll be an Evergreen-based GPU.
The point is that it is similarly spec'd on shaders, TMUs, ROPs, and is a 40nm rendition of what rv770 was.

And Juniper packs more transistors than RV770, so one could assume that a 40nm RV770 @ 500MHz could be at <22W, GPU alone.
Juniper is DX11, so of course it packs more.

That HD4670M is a 3-year-old GPU made on 55nm, with very high clocks for its time, consuming 35W, together with a 2-year-old CPU with a 45W TDP.
It's not really fair to compare old hardware on old processes with something that's coming out in Q2 2012. Even if the same nodes were used (55nm GPU + 45nm CPU), they'd be a lot more power efficient by now.
Power consumption is still power consumption...

Power efficient in what sense? Idle clocks? Fine. But we're talking about something that's going to be running at full load. That's still power that has to be dissipated regardless of process node or hardware: an absolute figure, and getting rid of it obviously becomes harder the longer you operate the device within such a small chassis.
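It's just basic thermal arithmetic: the heatsink sees the same watts whatever node produced them. A crude sketch (the thermal resistance and ambient figures are made up for illustration, not WiiU measurements):

```python
# Steady-state junction temperature: T_j = T_ambient + P * theta_ja,
# where theta_ja is junction-to-ambient thermal resistance in degC/W.
# The theta and ambient values below are illustrative only.
def junction_temp(power_w, theta_ja=2.0, ambient_c=35.0):
    return ambient_c + power_w * theta_ja

for p in (15, 30, 45):
    print(f"{p}W -> {junction_temp(p):.0f} degC")
# 15W -> 65 degC, 30W -> 95 degC, 45W -> 125 degC:
# the same watts need dissipating whatever the node, and a small chassis
# (high theta_ja, warm internal ambient) only makes it worse over time.
```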

So you agree that, should the info about overheating consoles be true, a RV730 @ ~500MHz in there isn't the most probable option. That or at least the GPU wouldn't be responsible for the overheating.
Overheating at how many hours of operation? Who says the GPU is the only contributor? There is a 45nm CPU packed away in there too. What sort of cooling is in there? Too many assumptions.
 
You may also want to consider the implications of a lower-clocked GPU core with respect to triangle setup rates, but that's a separate issue.
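To illustrate: GPUs of that generation set up roughly one triangle per clock, so peak setup rate falls linearly with any downclock, no matter how many shader ALUs the chip carries:

```python
# Setup-limited throughput: assume ~1 triangle per clock for this era,
# so peak setup rate tracks core clock directly.
for clock_mhz in (850, 625, 500):
    print(f"{clock_mhz}MHz core -> ~{clock_mhz} Mtris/s peak setup")
# A 500MHz part sets up ~500 Mtris/s vs ~850 Mtris/s at 850MHz,
# regardless of shader count.
```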
 
It's the Radeon 4600 series, so that's 320 SPs, 8 ROPs and 32 TMUs. On 40nm, the equivalent desktop GPU would be akin to the 55xx/56xx series, which sports 400 SPs.

Anyways, you're all so keen on cutting down the clocks of an rv770-class GPU. Clocks and power envelopes are part of the equation, but those can always be tweaked. Do consider that rv770 is a much larger chip than rv730. That is going to be a real factor in the BOM and will weigh some amount in what Nintendo wants to achieve with the WiiU. The rv770 is a gigantic leap in shading and pixel throughput relative to Xenos. It's not just "a little". It's not even "ballpark".
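Rough peak numbers on that leap, at stock desktop clocks (a downclocked console part would scale these linearly):

```python
# Peak programmable shading throughput, MADD counted as 2 FLOPs.
xenos_gflops = 48 * 5 * 2 * 0.500     # 48 vec4+scalar ALUs @ 500MHz -> 240
rv770_gflops = 800 * 2 * 0.750        # 800 SPs @ 750MHz -> 1200

# Peak pixel fill.
xenos_gpix = 8 * 0.500                # 8 ROPs @ 500MHz  -> 4.0 Gpix/s
rv770_gpix = 16 * 0.750               # 16 ROPs @ 750MHz -> 12.0 Gpix/s

print(f"shading: {rv770_gflops / xenos_gflops:.0f}x, fill: {rv770_gpix / xenos_gpix:.0f}x")
# -> shading: 5x, fill: 3x (at stock clocks)
```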

At what point does lowering the clocks of an rv770 outweigh using a higher-clocked low-end part that is also a smaller chip (cheaper to manufacture)? You won't get more than double the clock, but what is good enough, and how much do you expect Nintendo to spend? Will they have enough bandwidth to support rv770-like pixel throughput? How much will they spend on the memory? How many RAM chips fit inside the WiiU chassis? How many did they fit within the Wii? Was there much room left over? How inexpensive do you believe 2Gbit GDDR5 @ its highest speed is? Does GDDR5 even make sense if they go with another edram for framebuffer usage? How much edram makes sense for them? How big will that chip end up being, and what is its cost?
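On the bandwidth question, a crude fill-driven estimate (the bytes-per-pixel traffic is a simplifying assumption; real blending, Z and texture traffic vary wildly):

```python
# Crude bandwidth demand from ROP throughput alone.
rops, clock_ghz = 16, 0.500           # hypothetical downclocked rv770
bytes_per_pixel = 12                  # assume 4B colour write + 8B Z read/write
fill_gpix = rops * clock_ghz          # 8 Gpix/s
bw_needed = fill_gpix * bytes_per_pixel
print(f"~{bw_needed:.0f} GB/s just to feed the ROPs")   # -> ~96 GB/s

# For comparison, the desktop HD4850's 64 GB/s of GDDR3 was already a limit
# here, which is why an edram framebuffer (as in the 360) changes the math.
```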
 

I agree. Furthermore, I doubt any manufacturer's first option when designing a console is to take a custom chip and simply reduce speeds to meet a power envelope. Slowing down a gpu means reducing performance. A more efficient option is shaving off silicon, which reduces power requirements and performance while also reducing chip size and the overall cost of each gpu.

Downclocking a gpu may make sense when you are talking about off-the-shelf parts and the smaller chips don't offer the performance you seek. It may also make sense where a reference design doesn't perform within expectations and a redesign isn't an option. But we are talking about a custom gpu, which gives Nintendo greater design flexibility, and simple downclocking is a wasteful option in comparison to using a smaller, cheaper part that can be clocked to fall within the required power envelope and still perform on the same level as a downclocked 4850.
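A toy comparison makes that trade-off concrete: the die sizes below are the real 55nm rv770 and 40nm rv740 figures, while the clocks are illustrative choices:

```python
# Wasteful downclock vs smaller chip at full clock: similar ballpark
# performance, very different silicon cost. Clocks are illustrative.
parts = {
    # name:              (SPs, clock_GHz, die_mm2)
    "rv770 downclocked": (800, 0.500, 256),   # big 55nm die, slowed down
    "rv740 at speed":    (640, 0.750, 137),   # small 40nm die, full clock
}
for name, (sps, ghz, die) in parts.items():
    gflops = sps * 2 * ghz
    print(f"{name}: {gflops:.0f} GFLOPS on {die}mm^2 "
          f"({gflops / die:.1f} GFLOPS/mm^2)")
# rv770 downclocked: 800 GFLOPS on 256mm^2 (3.1 GFLOPS/mm^2)
# rv740 at speed:    960 GFLOPS on 137mm^2 (7.0 GFLOPS/mm^2)
```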
 