When are the 65nm PS3's and XB360's shipping?

Looks like the Inq has found a tidbit on the 65nm Xboxes. It seems that the power supply will be smaller and have a different connector, so that one can't accidentally plug the new power supply into a 90nm Xbox. http://theinquirer.net/default.aspx

They also say it's on a "boat" heading this way. ;)

Which they no doubt got from Takahashi.
Falcon is coming your way this fall. The first Xbox 360s with it are probably on ships coming from China.
Falcon is the name for the board that houses the 65-nanometer microprocessor from IBM. The board does not include a 65-nanometer version of the ATI graphics chip for the Xbox 360. That version of the graphics chip is coming later. A good question here is why not.

Now you would have thought that Zephyr would have come with the 65-nanometer IBM microprocessor. But it didn’t. The Xbox 360 Elite was introduced in limited quantities and it used the older 90-nanometer chips. The design for the 65-nanometer version of the IBM chip has been ready since last year. Why didn’t Microsoft introduce it earlier?

Well, Microsoft had to make a trade-off. It chose to put more of its hardware engineering team on the problem of repairing defective Xbox 360 consoles and troubleshooting how to redesign the console so that it would function properly. For whatever reason, that led to a delay in the launch of Falcon. (Maybe that tells you that hardware and chip engineers are in limited quantities at Microsoft).

But here’s the problem for Microsoft. They have a lot of inventory of the older 90-nanometer machines.

Lots of new and good info; clarifies a lot.
 
Blast you and your insider info! :p

Does Xenos actually produce a lot of heat? The ratio of heatsink size between it and Xenon suggests the CPU is in much greater need of power reduction.

Maybe I should get an Elite now and wait for 45nm Core/Arcade. :/
 
Blast you and your insider info! :p

Does Xenos actually produce a lot of heat? The ratio of heatsink size between it and Xenon suggests the CPU is in much greater need of power reduction.

Maybe I should get an Elite now and wait for 45nm :/


The GPU heatsink is unfortunately small only because the DVD drive needs to sit above it. Tests have shown that the 360's GPU gets much hotter than the CPU due to the inadequate heat sink.


http://techon.nikkeibp.co.jp/english/NEWS_EN/20070801/137224/

Only five minutes after we started playing the game, the temperature of the heat sink on the graphics LSI rose to 70°C. The thermal gradient was about 10°C/min.

In 15 minutes, the microprocessor heat sink temperature stabilized at 58°C, but the heat sink on the graphics LSI rose to 80°C, 57°C above the room temperature.

Assuming a room temperature of 35°C in mid-summer, the heat sink temperature is estimated to reach more than 90°C. In that case, the temperature of chips in the graphics LSI could exceed 100°C.
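As a quick sanity check on where that "more than 90°C" figure comes from (my own simplification: assume the heatsink-to-ambient delta stays roughly constant):

Code:
# Rough sketch: extrapolate the quoted heatsink reading to a hotter room.
# Keeping the chip-to-ambient delta constant is my simplifying assumption.
measured_heatsink_c = 80.0     # GPU heatsink after 15 minutes
measured_room_c = 23.0         # implied by "57 C above the room temperature"
delta_c = measured_heatsink_c - measured_room_c   # ~57 C rise over ambient

summer_room_c = 35.0           # assumed mid-summer room temperature
print(summer_room_c + delta_c) # 92.0 -> "more than 90 C"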
 
Haven't GPUs historically run much hotter than CPUs? My guess is that due to the environment GPUs typically operate in (on a PCIe card with a much smaller heatsink), they have been designed to run at much hotter temperatures. My question is: what is the TDP of the CPU vs. the GPU, and which one contributes the most to the internal heat and power draw of the Xbox 360?

If the CPU contributes the most heat and power draw, then maybe it isn't worth it to wait for the 65nm Xenos. Does the Beyond3D article on the Xbox 360 have this info? (If I weren't at work, I'd look through the article myself.)
 
The GPU heatsink is unfortunately small only because the DVD drive needs to sit above it. Tests have shown that the 360's GPU gets much hotter than the CPU due to the inadequate heat sink.


http://techon.nikkeibp.co.jp/english/NEWS_EN/20070801/137224/

I don't know about you, but the temperature in my house gets nowhere near 35°C (95°F) in mid-summer.

Unless you like playing your 360 outside during a heatwave, I doubt you have to worry about the LSI exceeding 100°C.
 
I don't know about you, but the temperature in my house gets nowhere near 35°C (95°F) in mid-summer.
That all depends on location (you don't need a heatwave to hit 35 degrees C in many parts of the world) and on whether you have air-conditioning; and it's beside the point anyway. 100 degrees C isn't a particular failure threshold or anything, it's just a nice round number on the Celsius scale. The major point is that the GPU runs hot. 65nm would be very welcome no matter where in the world you are (except perhaps the far north where the console helps warm the house ;))
 
Newer units have been shipping with a heatpipe solution for the GPU that extends the cooling to an open area in front of the CPU heatsink.

I'm just picky about the tech inside, extra GPU heatsink or not. :p

The major point is that the GPU runs hot. 65nm would be very welcome no matter where in the world you are (except perhaps the far north where the console helps warm the house ;))

Like Edmonton! It's about 15-18°C inside the house most of the time without any AC. In winter, it's gotten down to 11°C, and only because our heater wasn't turned on. :LOL:


I guess for me the question is (now that the 65nm chips won't be here for a while): do I get a unit now, or wait for the holiday bundles with 5 games? :p
 
That all depends on location (you don't need a heatwave to hit 35 degrees C in many parts of the world) and on whether you have air-conditioning; and it's beside the point anyway. 100 degrees C isn't a particular failure threshold or anything, it's just a nice round number on the Celsius scale. The major point is that the GPU runs hot. 65nm would be very welcome no matter where in the world you are (except perhaps the far north where the console helps warm the house ;))


For the majority of the gaming market in and around Europe, the US, and Japan, sustained temperatures at or above 35°C would be considered a heatwave, and that's outside.

If the average temperature during the summer months is 35°C in your living room and you have a 360, then maybe you should readjust your priorities. LOL
 
Does Falcon have the 65nm Xenos? Will the PSU be smaller?

I rather doubt they will change the PSU. Let's say they've dropped the total power requirement by 20% (I doubt they could even do that well); how much would they really save moving from a 220 W to a 180 W PSU?
 
Doesn't power supply efficiency come into play too?

edit:
I guess it may not make much of a difference for Microsoft, but the % load will change a lot, and the efficiency of the adapter will go down if they keep the same rated output.
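Something along these lines; the load and efficiency numbers are pure guesses on my part, not measured 360 figures:

Code:
# Hypothetical sketch: wall-socket draw for an assumed DC load and PSU efficiency.
def wall_draw_w(dc_load_w, efficiency):
    # Power pulled from the wall for a given DC load and supply efficiency
    return dc_load_w / efficiency

# 90nm console on the existing supply, assumed ~82% efficient near its sweet spot
print(wall_draw_w(160.0, 0.82))   # ~195 W at the socket
# 65nm console drawing less on the SAME rated supply, at a lower %load,
# where efficiency is assumed to dip a little (78%)
print(wall_draw_w(130.0, 0.78))   # ~167 W at the socket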
 
If they can reduce heat in the system while playing, I am wondering if it can reduce disc scratching as well. I read somewhere that hot drives and discs are a contributing cause of scratches. :smile:
 
I rather doubt they will change the PSU. Let's say they've dropped the total power requirement by 20% (I doubt they could even do that well); how much would they really save moving from a 220 W to a 180 W PSU?
Are the current power draw and core voltages of Xenos and Xenon known? IIRC, the power consumption of an IC is roughly P = C*V^2*f + S [switched capacitance multiplied by supply voltage squared, multiplied by frequency, plus static consumption], and the static consumption's share of the whole roughly doubles per process node.

Surely someone around here, with a bit more than my rudimentary understanding of EE, could make an educated guess as to how much they're likely to shave off by shrinking the two main ASICs.
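For what it's worth, this is the sort of back-of-envelope sketch I'm imagining (every input below is a placeholder assumption, not a known Xenon/Xenos number):

Code:
# Hypothetical 90nm -> 65nm scaling sketch using P = C*V^2*f + S.
def dynamic_power(c_eff, v, f):
    # P_dyn = C * V^2 * f in normalised units
    return c_eff * v**2 * f

p90 = dynamic_power(c_eff=1.0, v=1.2, f=1.0)   # 90nm baseline (normalised)
p65 = dynamic_power(c_eff=0.8, v=1.1, f=1.0)   # assume ~20% less switched capacitance, 1.1 V core
print(p65 / p90)                               # ~0.67: roughly a third off the dynamic part
# The static S term tends to get WORSE per node unless the process/design compensates,
# so the total saving would be smaller than the dynamic ratio suggests.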
 
Are the current power draw and core voltages of Xenos and Xenon known? IIRC, the power consumption of an IC is roughly P = C*V^2*f + S [switched capacitance multiplied by supply voltage squared, multiplied by frequency, plus static consumption], and the static consumption's share of the whole roughly doubles per process node.

Surely someone around here, with a bit more than my rudimentary understanding of EE, could make an educated guess as to how much they're likely to shave off by shrinking the two main ASICs.

Short version: It's hard to predict accurately without knowing almost everything they won't tell you during their design stages. It really depends on how they implement things.

There are lots of engineering decisions and experiments, and just knowing the voltages won't help all that much.

Maybe I'm just sidestepping some variables that should be best guesses or common knowledge, but my mind is a bit fragged right now.


Long version:
That's one equation... Actually, when considering the entire chip, there is a fractional term in front of the dynamic power, which indicates the fraction of transistors actively switching. So C is the total capacitance of all gates.

But things are very very...very complicated.

Dynamic Power
Code:
Pd = A*C*V^2*f

V = operating voltage
A = fraction of transistors that are active
C = capacitance of all gates
f = frequency of operation
There is also some dependency between frequency, operating voltage, and the transistors' threshold voltage (to switch), so it's not that simple to cut power via a simple voltage change. The threshold voltage has to be reduced along with the operating voltage, and keeping the same operating frequency is tricky.
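As a very rough illustration of that coupling, using the textbook alpha-power delay approximation with made-up voltages (none of these are Xenon/Xenos figures):

Code:
def f_max(v, vth, alpha=1.3):
    # Alpha-power-law delay approximation: f_max ~ (V - Vth)^alpha / V
    return (v - vth) ** alpha / v

base = f_max(v=1.2, vth=0.35)
print(f_max(v=1.0, vth=0.35) / base)   # ~0.85: dropping V alone costs clock speed
print(f_max(v=1.0, vth=0.25) / base)   # ~1.02: lowering Vth as well roughly recovers it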

Things are even trickier with the leakage current, which is a combination of sub-threshold and gate oxide leakages.

Static Power
Code:
 S = V*(Isub + Iox)

Isub = K1 * W * (1 - e^(-V/Vtheta)) * e^(-Vth / (n * Vtheta))

K1 & n are experimentally derived...
W = gate width
Vtheta = linearly temperature dependent voltage, ~25mV @ room temp. 
Isub can increase enough to raise the temperature, and then Vtheta increases accordingly, which pushes Isub up further... yikes.

Vth = threshold voltage of the transistor
For V to decrease, Vth must decrease too, otherwise the transistor will just be shut off. But decreasing Vth increases Isub exponentially...
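Just to put a rough number on "exponentially" (n, Vtheta, and the 100 mV step below are all assumed values):

Code:
import math

n, vtheta = 1.4, 0.026            # assumed slope factor and ~26 mV thermal voltage

def rel_isub(vth):
    # Relative subthreshold leakage from the e^(-Vth / (n * Vtheta)) factor above
    return math.exp(-vth / (n * vtheta))

print(rel_isub(0.30) / rel_isub(0.40))   # ~15x more leakage for a 100 mV drop in Vth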

So, decrease W...but then you get electron tunneling at that scale. Problems problems... It's not unlike decreasing the gate oxide thickness - see below for one of the solutions.

Things get tricky with the oxide leakage too:
Code:
Iox = K2 * W * (V/Tox)^2 * e^(- a * Tox / V)

K2 & a are experimentally derived (tough one to predict..)
Tox = physical gate oxide thickness
The problem with the leakage current here is that as the process size decreases (including the gate oxide thickness), Iox increases exponentially. Yet the gate dimensions have to keep shrinking so that Vth can decrease too.
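Same sort of illustration for the oxide term; since a is experimentally derived, the a/V constant below is simply made up:

Code:
import math

def rel_iox(tox_nm, a_over_v=12.0):
    # Relative gate-oxide leakage from (V/Tox)^2 * e^(-a * Tox / V); a/V is assumed
    return (1.0 / tox_nm) ** 2 * math.exp(-a_over_v * tox_nm)

print(rel_iox(1.0) / rel_iox(1.2))   # ~16x more leakage when Tox thins from 1.2 nm to 1.0 nm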

In order to "increase" or "maintain" the physical Tox so that Iox and Vth are both reduced, they can use materials with larger dielectric constants to create an "effective silicon dioxide thickness".

In other words, the physical thickness is maintained or increased while the rest of the transistor gets smaller; Iox is reduced. But due to dielectric properties, the short channel effect is avoided as the electrons 'see' the effective silicon dioxide thickness, which is "thinner" - thus Vth is reduced.
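A concrete (hypothetical) example of that effective-thickness idea, assuming a hafnium-oxide-like dielectric constant of ~20 rather than any specific process figure:

Code:
K_SIO2 = 3.9   # dielectric constant of silicon dioxide

def eot_nm(physical_nm, k_highk):
    # Equivalent (electrical) oxide thickness of a high-k gate dielectric layer
    return physical_nm * K_SIO2 / k_highk

print(eot_nm(3.0, 20.0))   # ~0.6 nm "seen" electrically from 3 nm of physical material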

Vth can also be decreased by increasing the doping concentrations, and that has its own issues. Things are extremely sensitive here.

There are numerous other solutions - take Intel's tri-gate for instance. It's a bit difficult to grasp with so many things to consider. :s
 