Broadway specs

Darkblu

Do you think that power consumption is/will be that important, both in stand-by and in use?

I ask this because I understand the concern in stand-by, but in use I think it will not matter that much to almost anyone if it used, let's say, 60W instead of 35W, for several reasons (for one, almost any "in-home entertainment" device will use as much power as that).

And since they could offer much more performance at the same stand-by power (a multi-core design that turns off the extra core(s), or offloads vertex shading, physics, animation, etc. to the GPU or other chips that aren't used in stand-by), that leads me to think that the main reason is price.
 
Who told you that?

Same source that has been giving me the other Broadway details I've been leaking. I mentioned it because the discussion was going in a direction that doesn't really jibe with what I know. Nintendo is still tweaking this chip, btw, and it's based on the GX line, not the Gekko CPU either. I'm still wondering what's sucking up all the power if the wifi has been optimized for lower power consumption as well.
 
Same source that has been giving me the other Broadway details I've been leaking. I mentioned it because the discussion was going in a direction that doesn't really jibe with what I know. Nintendo is still tweaking this chip, btw, and it's based on the GX line, not the Gekko CPU either. I'm still wondering what's sucking up all the power if the wifi has been optimized for lower power consumption as well.

Just leak all the info and free us from the wild speculation. :LOL:
 
Do you think that power consumption is/will be that important, both in stand-by and in use?

well, from my understanding, it seems that with the wii nintendo are trying to push the public acceptance of this class of devices (i.e. game consoles) in general. i think that eventually they want to make them as commonly accepted as fridges, vcr/dvd players, TVs, etc. are now.

which automatically means they cannot rely on appealing to the hardcore gamer's mentality anymore. normally, things like power consumption versus perceived value matter a lot for people's buying decisions. so even though almost everybody drives a car, very few of those are dodge vipers - not just because people can or cannot afford the price alone, but because for the majority of car consumers, driving a viper to work daily is not quite justified. even though they'd fancy it, when they have to take the actual decision 'do i buy a viper to drive to work', many people would come to their senses and say, 'though it's cool/nice/attractive, i know it's not justified.'

so i think yes, nintendo kept a serious tab on the power consumption of their new console. now, whether that was more or less important than the price factor alone, and if so by how much - this i would not guess. but judging from the mere fact that they could've actually saved some bucks if they had not bothered to make the device in the exact shape and form it appears in now, i'd say that nintendo may just as well have spent some extra buck here or there to meet their power consumption and appearance targets with the device. my speculations, of course.

I ask this because I understand the concern in stand-by, but in use I think it will not matter that much to almost anyone if it used, let's say, 60W instead of 35W, for several reasons (for one, almost any "in-home entertainment" device will use as much power as that).

maybe they could've gotten away with 60W too. for me, though, as a potential customer of theirs, that extra power-efficiency the device offers me, intended by nintendo or not, is very welcome.

And since they could offer much more performance at the same stand-by power (a multi-core design that turns off the extra core(s), or offloads vertex shading, physics, animation, etc. to the GPU or other chips that aren't used in stand-by), that leads me to think that the main reason is price.

here a much stronger price factor already steps in; consider this - take one of the less power-efficient consoles of this gen and try to make it really power-efficient (in the wii ballpark) when in 'reduced workload' mode, while at the same time trying to preserve its original peak computational power - i think you'll meet some serious budget issues. put on top of that the fact that you don't want to subsidize this hw for the consumer and voila - you just priced yourself out of the market.
 
Just leak all the info and free us from the wild speculation. :LOL:

Most of it was leaked already; just check my post history in this thread, or the CPU and Hollywood threads. Matt got in a lot of trouble for mentioning the last tidbits that I haven't flat out said, so until he is in the clear I know I couldn't get away with leaking what Nintendo is doing with this CPU.
 
But it can hit 1 GHz; in fact, it was able to reach 1.1 GHz at 130nm. The problem is obviously power. The 750CL datasheet has some very interesting info in that regard: at 500MHz the 750CL consumes 2W on average; it consumes 6W at 900MHz and 9.8W at 1GHz. That's almost five times the power for twice the clock, i.e. less than twice the performance. That's a pretty huge drop in the performance/W ratio. Also notice that the power consumption numbers are absent for the 733 to 800MHz range, exactly the range in which Broadway is going to fall. I doubt there's no relation between the two.
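
To put numbers on that drop, here's a quick back-of-the-envelope using the datasheet figures above, treating clock speed as a rough stand-in for performance:

```python
# Back-of-the-envelope perf/W from the 750CL datasheet figures above,
# with clock speed as a crude proxy for performance.
points = {500: 2.0, 900: 6.0, 1000: 9.8}  # MHz -> typical watts

for mhz, watts in points.items():
    print(f"{mhz} MHz: {watts:4.1f} W -> {mhz / watts:6.1f} MHz/W")
# 500 MHz -> 250.0 MHz/W; 900 MHz -> 150.0 MHz/W; 1000 MHz -> 102.0 MHz/W
```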

Hi Gabriele, I agree it's very likely Broadway falls into that 700-800 MHz range, particularly since it has been rumored to be 729 MHz for a long time now. Interestingly, Vdd is listed as 'to be determined' for all frequencies. The recommended Vdd is 1.15V, but I suspect it's a good bit higher at 1 GHz.
 
well, from my understanding, it seems that with the wii nintendo are trying to push the public acceptance of this class of devices (i.e. game consoles) in general. i think that eventually they want to make them as commonly accepted as fridges, vcr/dvd players, TVs, etc. are now.

which automatically means they cannot rely on appealing to the hardcore gamer's mentality anymore. normally, things like power consumption versus perceived value matter a lot for people's buying decisions. so even though almost everybody drives a car, very few of those are dodge vipers - not just because people can or cannot afford the price alone, but because for the majority of car consumers, driving a viper to work daily is not quite justified. even though they'd fancy it, when they have to take the actual decision 'do i buy a viper to drive to work', many people would come to their senses and say, 'though it's cool/nice/attractive, i know it's not justified.'

so i think yes, nintendo kept a serious tab on the power consumption of their new console. now, whether that was more or less important than the price factor alone, and if so by how much - this i would not guess. but judging from the mere fact that they could've actually saved some bucks if they had not bothered to make the device in the exact shape and form it appears in now, i'd say that nintendo may just as well have spent some extra buck here or there to meet their power consumption and appearance targets with the device. my speculations, of course.

Overall I agree with your post, although I still think that (even if they spent more for it) the low power consumption is mostly something they take advantage of, rather than the main goal.

Also, they could soon use 65nm (before it has had a chance to enter the mass market); anyway, it is just my guess too, so we can't argue much more.

What I think is dangerous is neglecting the "old gamers" so much when you could satisfy both.

maybe they could've gotten away with 60W too. for me, though, as a potential customer of theirs, that extra power-efficiency the device offers me, intended by nintendo or not, is very welcome.


But the question is: would you (or any significant number of people) stop buying a Wii if it consumed up to 60W during use? Personally, I doubt it.


here a much stronger price factor already steps in; consider this - take one of the less power-efficient consoles of this gen and try to make it really power-efficient (in the wii ballpark) when in 'reduced workload' mode, while at the same time trying to preserve its original peak computational power - i think you'll meet some serious budget issues. put on top of that the fact that you don't want to subsidize this hw for the consumer and voila - you just priced yourself out of the market.

Just as an example: if there were a PPU (130nm, 27W max) in the Wii, the console would consume ~60W during gameplay, but in stand-by couldn't it be turned off? So, as long as you use something that isn't expensive (IIRC, from the table Urian posted here, a 25mm² chip on 90nm, which is ~4x the logic in Gekko, would cost less than $15), you can get a performance boost that would hardly price you out of the market.
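
Quick sanity check on those figures (both numbers are just the rumored/quoted values, not confirmed specs):

```python
# Rough sum of the figures quoted above; none of these are confirmed specs.
wii_gameplay_w = 35  # rumored Wii draw during gameplay, in watts
ppu_max_w = 27       # quoted max draw of a 130nm PPU
print(f"combined worst case: ~{wii_gameplay_w + ppu_max_w} W")  # ~62 W, i.e. 'about 60W'
```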

That is one of the reasons why I think that the main force behind the Wii is price/profit.

PS: I am not suggesting that the Wii should have any of this, just that there are possibilities for it to be much more attractive from an "old gamer" POV that would still fit Nintendo's strategy.

Most of it was leaked already; just check my post history in this thread, or the CPU and Hollywood threads. Matt got in a lot of trouble for mentioning the last tidbits that I haven't flat out said, so until he is in the clear I know I couldn't get away with leaking what Nintendo is doing with this CPU.


OK, thanks anyway.
 
I think it'd be more interesting to see how Broadway does in performance per watt versus an ultra-low-voltage P-M at 1GHz.
Irrelevant if the primary concern is the maximum power draw. Even a pricey ULV P-M has a TDP roughly twice what Broadway will probably have.

If power is a non-negotiable constraint for Nintendo, then the P-M is out.

Depending on how the memory subsystem is arranged, the amount of bandwidth available to the CPU could cap the performance of a stronger CPU anyway.

BTW, why couldn't they have Broadway at like 1.2GHz, and then downclock it to say 400MHz when the system is in its power-save mode? Or even drop it to maybe 600MHz, or possibly 700MHz with a half multiplier?

The TDP would probably be in the realm of the P-M at that point. I don't know why Nintendo is that restrictive on the processor's power budget, but it is.

I'm still more of a fan of the idea that they did about the minimum work possible to make this system while maintaining GameCube compatibility and getting good, cheap yields.
Quite possible, but that wasn't what I was addressing in my original post. I simply said that dynamic clocking won't make a 3GHz processor draw as little power as a chip meant to run at less than 1 GHz.
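
To illustrate with a first-order CMOS dynamic-power model (P ≈ C·V²·f) - the capacitance and voltage numbers below are invented for illustration, not real chip specs:

```python
# First-order dynamic power model: P ~ C_eff * Vdd^2 * f.
# All numbers are invented for illustration; leakage is ignored.
def dynamic_power(c_eff_farads, vdd_volts, freq_hz):
    return c_eff_farads * vdd_volts**2 * freq_hz

# A design built for high clocks carries more switched capacitance (deeper
# pipeline, faster transistors) and needs more voltage headroom, so even
# downclocked to 729 MHz it draws more than a chip built to run slow.
downclocked = dynamic_power(5e-9, 1.30, 729e6)  # hypothetical fast design
native_slow = dynamic_power(3e-9, 1.15, 729e6)  # hypothetical slow design
print(f"downclocked fast design: {downclocked:.1f} W")  # ~6.2 W
print(f"native slow design:      {native_slow:.1f} W")  # ~2.9 W
```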

I don't think it's that power efficient. Look at what some laptops and mini PCs are doing for power efficiency while still maintaining decent performance. I wouldn't be surprised if one of the old Mac Minis, die-shrunk to 90nm SOI, would be in the same ballpark for power consumption, and with barely any additional R&D.
Probably not. CPU consumption these days is only one part of the power equation.
The optical drive, hard drive, and RAM also draw a lot.

Depending on how aggressive Nintendo has been with the power consumption of other parts of the system, we could figure out where their priorities lie.

A DIMM could be expected to draw 10 or more watts of power. I'm not certain if any of the specs are accurate, but even having 128 MB of RAM could put out twice the heat of Broadway; only 64 MB would match it.

There are ways of reducing this, but I do not believe any of them can be done for free. If the Wii's overall consumption, factoring in the GPU, RAM, and other components, has also been minimized, then it may just be that Nintendo is at least partly focused on power draw, not just the BOM.
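
Rough arithmetic behind that RAM comparison, where the Broadway wattage and the per-capacity DRAM draw are both assumptions:

```python
# Rough heat comparison between RAM and CPU; all figures are assumptions.
broadway_w = 5.0       # assumed Broadway power budget, in watts
dimm_w_per_128mb = 10  # 'a DIMM could be expected to draw 10 or more watts'

for mb in (64, 128):
    ram_w = dimm_w_per_128mb * mb / 128
    print(f"{mb:3d} MB RAM: ~{ram_w:.0f} W ({ram_w / broadway_w:.1f}x the CPU)")
# 64 MB matches the assumed CPU budget; 128 MB doubles it.
```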
 
Depending on how the memory subsystem is arranged, the amount of bandwidth available to the CPU could cap the performance of a stronger CPU anyway.

P-M didn't really seem to be a bandwidth-constrained design; its large amount of cache helped with that. It may have benefited a lot from the low-latency memory used in the Wii, though Broadway is probably in the same ballpark of per-MHz performance (at least a G4 and a Pentium 3 were, and both Broadway and the P-M are beefed-up versions of those processors, right?). You're right that if Nintendo couldn't spare the power budget for a higher-clocked Broadway, they likely couldn't spare it for a Pentium M either. They never would have switched architectures anyway, especially when that's about the least useful major change Nintendo could have made.

Oh, and the Mac Mini had power bricks rated at 85W, idled at 25W, and used about 40W while booting. For 130nm non-SOI with a hard drive, I don't think that's too bad. The Wii is around 50W to 60W, right? The Gamecube I think had a brick rated around 40W and drew around 25W during typical use, so we may be looking at what, 30W to 40W in use for the Wii? I'd say that's achievable with the Mac Mini; what would really need to be worked on is the idling. If the Wii can actually stay at full clock speed in its WiiConnect24 mode and draw under 5W, that would be very impressive, though supposedly there are ultra-lightweight notebooks that can do around 15W while idling with the screen on (but at very low brightness levels). The Wii may not have the screen, but it will have WiFi to power, and is far cheaper than the >$1500 entry price point of those laptops.

I'm no expert on the power draw of electronics, but it seems that the parasitic power draw (I'm including idling here) is much harder to lower than the max under use. Producing a 3W CPU doesn't seem that hard (mostly just lowering clock speed and voltage); producing an entire computer of components that draws under 50W combined seems to be where the real expenses come into play. There's a minimum speed on the different buses in the computer, so things like memory and chipsets seem more difficult to optimize because they can't just lower clock speed so significantly.
 
P-M didn't really seem to be a bandwidth-constrained design; its large amount of cache helped with that.
Depends on the application; cache can't manufacture bandwidth for streaming apps.

It may have benefited a lot from the low-latency memory used in the Wii, though Broadway is probably in the same ballpark of per-MHz performance (at least a G4 and a Pentium 3 were, and both Broadway and the P-M are beefed-up versions of those processors, right?).
A lot more went into the Pentium-M than just a little beefing up. There were a number of significant changes to the internals that aren't apparent from the basic numbers. The higher IPC compared to a Pentium 3 indicates that.

I'm no expert on the power draw of electronics, but it seems that the parasitic power draw (I'm including idling here) is much harder to lower than the max under use. Producing a 3W CPU doesn't seem that hard (mostly just lowering clock speed and voltage),
Lowering clock speed past a certain point requires a redesign of the PLLs used to control the clock dividers. Complex clock circuitry is harder to make work reliably with tighter timings at high speeds.
It's either plan to go slow at the outset or start making sacrifices.

You also can't just lower voltage and expect things to work forever. The thresholds used by the transistors are based on a differential between the low and high signal states. Lower voltages mean lower differences, so silicon tuned for one target voltage range is going to get wonky if it's forced too low.

I've already noted that the lower bound of power draw is difficult to change, especially if the chip is designed to reach high clock speeds. Extra pipeline stages, leaky fast-switch transistors, and different logic configurations all draw power, even at the same clock speed.
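
A sketch of why that floor is hard to move: bolt a fixed leakage term onto the dynamic model and downclocking only shaves the dynamic share. All numbers here are invented for illustration:

```python
# Total power = dynamic + static leakage; numbers invented for illustration.
def total_power(c_eff, vdd, freq_hz, leak_w):
    dynamic = c_eff * vdd**2 * freq_hz  # scales with clock and Vdd^2
    return dynamic + leak_w             # leakage burns regardless of clock

# A leaky high-clock design downclocked 4x only cuts the dynamic share:
for freq in (2.0e9, 0.5e9):
    watts = total_power(c_eff=4e-9, vdd=1.4, freq_hz=freq, leak_w=12.0)
    print(f"{freq / 1e9:.1f} GHz: {watts:.1f} W")
# 2.0 GHz: 27.7 W vs 0.5 GHz: 15.9 W - a 4x downclock, but only ~1.7x less power.
```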
 
I'm not sure about current designs, where both AMD and Intel are putting much more effort into squeezing every last bit of performance out (and thus have those extra pipeline stages, fast-switch transistors, etc.), but the Athlon XPs were able to downclock reliably from over 2GHz to 600MHz and were under 10W at that point, and though not all could, it seemed a significant number could downclock to 300MHz as well, where the power draw was between 3W and 4W. Not too bad considering that the processor wasn't designed for low power at all (at least not exclusively).

Haven't heard about any Athlon 64s below 800MHz (maybe 600?), but that's still pretty low. AMD has started to differentiate their processors though, with the ones using fast-switching transistors being only the absolute top-end processors, slow-switching going to the low-voltage and mobile parts, and then seemingly a mainstream range of processors. The mainstream processors are much cheaper, but at the same clock speeds about 10% to 20% less efficient than the slow-switching ones, and 10% to 20% off the top speeds of the fast-switching processors. (And likewise, the slow-switching ones can't hit the clock speeds of the mainstream ones.)

So uh, as evidenced by the power-saving modes of currently existing desktop processors, significant savings can be seen just from downclocking, and at least another 20% in power savings (at the same speed) can come from explicitly designing a processor for just slightly slower (10% to 20% lower) max speeds. Could those kinds of savings be extrapolated to a processor explicitly designed for sub-GHz speeds? 50% the speed for 50% power savings at the same MHz?
 
at least a G4 and a Pentium 3 were, and both Broadway and the P-M are beefed-up versions of those processors, right?

well, 3dilettante already covered the P-M, and if we take the broadway =~ 750CL theory for a fact, then no, broadway is not a G4 derivative, it's a G3 derivative. G4s have better IPC than G3s and some other niceties (aside from AltiVec, the MPX bus support, IIRC)
 
I'm not sure about current designs, where both AMD and Intel are putting much more effort into squeezing every last bit of performance out (and thus have those extra pipeline stages, fast-switch transistors, etc.), but the Athlon XPs were able to downclock reliably from over 2GHz to 600MHz and were under 10W at that point, and though not all could, it seemed a significant number could downclock to 300MHz as well, where the power draw was between 3W and 4W. Not too bad considering that the processor wasn't designed for low power at all (at least not exclusively).
That's not a big stretch, since the lowest-clocked Palomino core was 800 MHz.
The 300 MHz clock sounds good, but there's no way that could be done on a commercial basis if it wasn't reliable.

Haven't heard about any Athlon 64s below 800MHz (maybe 600?), but that's still pretty low.
If some theoretical A64 were designed to run at a top speed of 800 MHz, it would be much better than a standard A64 in cost, power, or performance (possibly all three). Considering the very large silicon investment and other design trade-offs (cache latencies, decoder width) that were made to reach the target clocks, a lot could have been done better.

In the case of a console, the thin margins already offered would be nonexistent for a chip as large as an A64.

Why downclock a chip that costs ~40 bucks to make and would sell for even less, when a design that achieves the same or better performance or wattage can be made for way less?

So uh, as evidenced by the power-saving modes of currently existing desktop processors, significant savings can be seen just from downclocking, and at least another 20% in power savings (at the same speed) can come from explicitly designing a processor for just slightly slower (10% to 20% lower) max speeds. Could those kinds of savings be extrapolated to a processor explicitly designed for sub-GHz speeds? 50% the speed for 50% power savings at the same MHz?

I don't know if you can safely calculate savings without taking into account other design features, such as issue width and manufacturing process.

There are low-clocked chips that consume significant amounts of power, such as Itanium, which clocks nowhere near as high as current x86s.
 
The 300 MHz clock sounds good, but there's no way that could be done on a commercial basis if it wasn't reliable.

Well, I just know it wasn't a 100% success rate (which could have been tied to the chipsets as well), but it still seemed to hit 300MHz fairly often. I'd guess more than half could, though I obviously have no way of knowing how the processors would perform on the whole.

There are low-clocked chips that consume significant amounts of power, such as Itanium, which clocks nowhere near as high as current x86s.

I was referring more to architectures that are virtually the same, like a Turion and an Athlon 64. I wouldn't exactly say Itanium is a low-clocked chip though; current designs are around the 2GHz range, right?
 
Lowering clock speed past a certain point requires a redesign of the PLLs used to control the clock dividers. Complex clock circuitry is harder to make work reliably with tighter timings at high speeds.
It's either plan to go slow at the outset or start making sacrifices.

You also can't just lower voltage and expect things to work forever. The thresholds used by the transistors are based on a differential between the low and high signal states. Lower voltages mean lower differences, so silicon tuned for one target voltage range is going to get wonky if it's forced too low.
I agree; we also have to take into account that Broadway will be produced by IBM, so it's 100% certain it's on an SOI process, which has its own bag of problems when lowering frequencies too much.
 
well, 3dilettante already covered the P-M, and if we take the broadway =~ 750CL theory for a fact, then no, broadway is not a G4 derivative, it's a G3 derivative. G4s have better IPC than G3s and some other niceties (aside from AltiVec, the MPX bus support, IIRC)
Also, G4s had a significantly improved cache subsystem to keep up with the vector units, though the latest in the 750 family somewhat closed the gap in this respect. But then again, 'modern' G4s (i.e. e600 cores) have also undergone quite a few changes...
 
Same source that has been giving me the other Broadway details I've been leaking. I mentioned it because the discussion was going in a direction that doesn't really jibe with what I know. Nintendo is still tweaking this chip, btw, and it's based on the GX line, not the Gekko CPU either. I'm still wondering what's sucking up all the power if the wifi has been optimized for lower power consumption as well.
Are you sure about your source? The fact that Broadway is based on the GX doesn't sound right; the GX is manufactured at the 130nm node and is 52.5mm² in size. AFAIK Broadway will be a 90nm processor, and a 130nm -> 90nm scaling doesn't shrink your die by a factor of four, especially if you are adding stuff. Now, that doesn't rule out that Broadway might have some of the improvements seen in the GX, but saying that it is based on the GX clashes quite a bit with the little info we have at hand.
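
For reference, the ideal full-shrink arithmetic (real-world shrinks do somewhat worse than this):

```python
# Ideal area scaling for a 130nm -> 90nm shrink: area goes with the
# square of the linear feature-size ratio.
gx_area_mm2 = 52.5        # 750GX die size at 130nm, as quoted above
factor = (90 / 130) ** 2  # ~0.48, i.e. roughly half, not a quarter
print(f"ideal shrink factor: {factor:.2f}")
print(f"ideal 90nm GX die: {gx_area_mm2 * factor:.1f} mm^2")  # ~25.2 mm^2
```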
 
Are you sure about your source? The fact that Broadway is based on the GX doesn't sound right; the GX is manufactured at the 130nm node and is 52.5mm² in size. AFAIK Broadway will be a 90nm processor, and a 130nm -> 90nm scaling doesn't shrink your die by a factor of four, especially if you are adding stuff. Now, that doesn't rule out that Broadway might have some of the improvements seen in the GX, but saying that it is based on the GX clashes quite a bit with the little info we have at hand.

Considering IGN is using info from him, why should I or anyone dispute the only place that has been releasing semi-accurate info on the hardware?
 
Same source that has been giving me the other Broadway details I've been leaking. I mentioned it because the discussion was going in a direction that doesn't really jibe with what I know. Nintendo is still tweaking this chip, btw, and it's based on the GX line, not the Gekko CPU either. I'm still wondering what's sucking up all the power if the wifi has been optimized for lower power consumption as well.

Most of it was leaked already; just check my post history in this thread, or the CPU and Hollywood threads. Matt got in a lot of trouble for mentioning the last tidbits that I haven't flat out said, so until he is in the clear I know I couldn't get away with leaking what Nintendo is doing with this CPU.

AFAIR Matt claimed that Broadway was based on the 750CL (which is based directly on Gekko), but you're saying it's based on the 750GX. Not saying you're wrong and Matt is right; I'd just like to clear this point up, because you've mentioned Matt a couple of times as if your info is the same as his.
 
Are you sure about your source? The fact that Broadway is based on the GX doesn't sound right; the GX is manufactured at the 130nm node and is 52.5mm² in size. AFAIK Broadway will be a 90nm processor, and a 130nm -> 90nm scaling doesn't shrink your die by a factor of four, especially if you are adding stuff. Now, that doesn't rule out that Broadway might have some of the improvements seen in the GX, but saying that it is based on the GX clashes quite a bit with the little info we have at hand.

A few pages ago you can see that, according to some estimates (if they are right), the chip could be up to 22mm², and if, as theafu says, it has only 512KB of L2, then it could very well be an upgraded GX/CL.


AFAIR Matt claimed that Broadway was based on the 750CL (which is based directly on Gekko), but you're saying it's based on the 750GX. Not saying you're wrong and Matt is right; I'd just like to clear this point up, because you've mentioned Matt a couple of times as if your info is the same as his.

If you customize a GX to be backwards-compatible with Gekko (and with less L2), it would probably end up like a CL, so both pieces of info can be right. (IIRC the CL also has the kind of cache improvements the GX got, right?)
 