Xbox One (Durango) Technical hardware investigation

Status
Not open for further replies.
Your reasoning strikes me as very unsound. The fact that MS has a huge case with a huge fan implies they can deal with more heat than the PS4, no? Ergo the possibility that MS can upclock but Sony can't.

Assuming yield concerns have been satisfied:

Adjusting noise margins and airflow limits might buy margin for both without changing the physical design of the box and cooler.

I'm curious how closely specced the power supplies are to the silicon.
One difference is that the PS4's internal power supply implies the cooling apparatus and the box itself have a stronger physical reason to be sized to the whole console's power target; changes here past a certain point may impact the power supply, cooler, and case because of the physical integration.

There could be more leeway with the separate power supply on the Xbox One to vary things because there are fewer potential knock-on effects to deciding on a somewhat higher total power target.


However, if the upclock comes in due to improved manufacturing leading to better performance iso-power, this may not be something they need to worry about. It also might be something the competing design could try for. This is something that would be dependent on specific details of the actual physical implementation and manufacturing trends.

How about a day-one firmware patch from both that sets the final clocks they pick the day before release, for maximum levity?
 
How about a day-one firmware patch from both that sets the final clocks they pick the day before release, for maximum levity?

800 000 001 Hz
 
The actual pictures of the Xbox board would indicate how many channels there are.
That being said, putting a non-power-of-two memory capacity on a power-of-two bus width is possible. The inverse has also been done by certain Nvidia GPU SKUs.

The memory controllers and whatever address partitioning they use can route accesses appropriately, at the cost of non-uniform bandwidth if accesses to the additional space start hammering the controllers linked to the higher-density channels while idling the others.
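As a hedged sketch of how such routing could work (the channel count, capacities, and stripe granularity below are invented for illustration, not actual Durango parameters):

```python
# Hypothetical address-to-channel routing for a mixed-density memory setup:
# four channels, where two channels carry extra capacity. Addresses below
# the symmetric top stripe across all channels at full bandwidth; addresses
# above it can only live on the higher-density channels, halving bandwidth
# there while the other controllers sit idle.

CHANNELS = 4
LOW_DENSITY = 512 * 2**20            # bytes per channel in the symmetric region
SYMMETRIC_TOP = CHANNELS * LOW_DENSITY  # top of the full-bandwidth region
STRIPE = 256                         # interleave granularity in bytes

def channel_for(addr):
    if addr < SYMMETRIC_TOP:
        # Full-bandwidth region: stripe across all channels.
        return (addr // STRIPE) % CHANNELS
    # Overflow region: only the higher-density channels (2 and 3) serve it.
    return 2 + ((addr - SYMMETRIC_TOP) // STRIPE) % 2
```

Consecutive stripes in the low region rotate over all four controllers; in the overflow region they ping-pong between just two, which is exactly the non-uniform-bandwidth cost described above.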

FWIW traditionally there has been no difference between the retail and devkit boards on MS consoles.
Beta kits obviously do differ but it's usually marginal.
 
Yeah, a couple of weeks before launch we are going to hear about the Xbox, 1 gigahertz edition. How cool would that be? Now that damned freaking name would make sense.
My wife was floored when I told her about it; I agree it is as nonsensical as it can get...
That and 6GB available to games.

If MSFT indeed overclocks the system and doesn't do that 1GHz GPU and 2GHz CPU, they have no guts :LOL:

How about an online petition? :LOL:
 
Is there any possibility that the changes have been in the works since March-April?

I would not be surprised if the folks in charge of the heatsink/cooling solution have half a dozen different solutions that they tested/evaluated, with reports and data. I'd expect something similar for the VRM, and perhaps a few configurations/proposals for the memory. Most of that might date from the preceding 12-18 months.

If the decision is made to consider x or y change then sub-groups responsible might say "ok, then that is BOM such and such" or say "ok, then it is PCB such and such and you change from this model to that model on the fan" or "ok, then you add these additional solid polymer caps and change the FETs and chokes to these part numbers".

Perhaps a bit too simplistic but consider when a new family of graphics cards are released. Say it is the brand new NVR777 chip. The partners don't take forever to come up with new clocks, cooling solutions and the VRM/PCB that goes with it. The partners have a pretty good "catalog" of designs for cooling and VRM, etc.

And then sometimes you find after the release that the competition had a set of contingency plans that said (if they release X we release I, if they release Y we release J, and if they release Z we release K), and they suddenly do just that.

Yes, over simplifying a little bit. But the group that did the cooling or the group that did the VRM did not look at just one precise and fixed configuration over the last couple of years. There would have been a range of solutions and contingencies (as well as costs and vendors and backups). It can be fairly late in the game when you have "the final" solid/big data set on the yields, clocks, power consumption, etc. In the industry people think about things as crazy as "Remember when supplier X was hit by that earthquake and we had to find another supplier overnight?" or "Remember the floods in Thailand?" or "What if the leakage is X% higher?" They have all kinds of contingencies and alternatives mapped out for multi-billion dollar projects.

They might have targeted 100W for the SoC and made designs for +15% and +30% contingencies for VRM and cooling. I doubt they assumed they would precisely hit an exact target 1-2 years in the future.
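That kind of contingency budgeting is trivial arithmetic; the 100 W SoC target is from the post above, and the +15%/+30% margins are the hypothetical cases it mentions:

```python
# Toy sketch of power-budget contingency planning. The 100 W SoC target
# comes from the discussion; the margin percentages are hypothetical.

def contingency_budget(target_w, margin):
    """Power the VRM/cooling design must handle at a given contingency margin."""
    return target_w * (1.0 + margin)

soc_target_w = 100.0
plans = {m: contingency_budget(soc_target_w, m) for m in (0.0, 0.15, 0.30)}
# plans[0.15] is the +15% contingency (a 115 W design point),
# plans[0.30] the +30% contingency (130 W).
```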

Some readers might be more stressed about the clocks and the heatsink than the engineers responsible for them.
 
Yeah, a couple of weeks before launch we are going to hear about the Xbox, 1 gigahertz edition...

If they wait until then no games will be taking advantage of either change...
Games will be going into QA probably in August.
I think any change at this point is going to be pointless.
Extra RAM might buy them something in the medium to long term, but I just don't see it happening.
 
I would hope these cooling solutions are designed to operate at ambient air temperatures well above air-conditioned comfort. I imagine they'd have headroom to handle an overclock at 25 Celsius, easy. The question is whether they can do it at whatever ambient temperature they consider the max for their operating range. Who knows. There's also noise to consider. There is some balance in there. It's wait and see. It isn't impossible from a cooling perspective. Power and signal integrity would be the bigger problems, I'd think.

Again, I don't really see what a small overclock gains them in the big scheme of things. It might sound nice on paper, and developers might appreciate it for the odd thing, but I doubt it would lead to significant gains on screen, where the consumer is going to notice. Unless an overclock is incredibly easy and falls well into what they consider safe limits, I can't see them bothering.
 
Who says the PS4 is superior to the XB1? They are different; one is a truck and the other a car. Depending on what I want to do, one or the other offers advantages, but in general both will work just fine.

A little awkward, but I just came back from an electronics store, and around here word of mouth has already spread to the average shopper that the XB1 is weaker than the PS4. :rolleyes:

I have to admit it is a bit aggravating to hear the masses jump to conclusions from either side, but I'm guessing it's that kind of snap judgment that companies respond to. Apparently the numbers mean little to the masses, so they latch onto the simple takeaway.

Over here we say "Digital Rights Management" and on the other side it gets translated to = can't share games... which I'm guessing is probably why it got abolished.

--------------------------------------------------------

The power supply of the XB1 is external, is it not? Maybe the power supply was altered to give leeway for the ESRAM and possible GPU/CPU clock changes?
 
Your reasoning strikes me as very unsound. The fact that MS has a huge case with a huge fan implies they can deal with more heat than the PS4, no? Ergo the possibility that MS can upclock but Sony can't.

Stable operation at a given thermal profile isn't the only factor that determines binning selection; chips are also carefully tested for transistor leakage, which is a separate issue. One chip may run stably at 1600MHz but unstably at 1700MHz, even with superior cooling, simply because of the fabrication process and the physical makeup of the chip. The next chip may run stably at 1800MHz with the superior cooling.

You can't just take a chip binned as stable at 1600MHz, overclock it, crank up the cooling and hope it holds. The Xbox One isn't the USS Enterprise and Steve Ballmer isn't Scotty!
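A minimal sketch of the binning idea described above; the clock bins and the leakage threshold are invented for illustration, not real test criteria:

```python
# Hypothetical speed-binning sketch. A die's shippable clock is limited both
# by thermals (which better cooling can address) and by leakage / process
# variation (which cooling cannot fix). All thresholds are illustrative.

def assign_bin(max_stable_mhz, leakage_ma):
    """Return the speed bin (MHz) for a die, or None if it fails all bins."""
    # A leaky die gets capped below its thermally stable clock,
    # no matter how good the cooling is.
    if leakage_ma > 50:
        max_stable_mhz = min(max_stable_mhz, 1600)
    for bin_mhz in (1800, 1700, 1600):
        if max_stable_mhz >= bin_mhz:
            return bin_mhz
    return None  # reject: below the lowest bin
```

The point of the sketch: two dies that both "survive" 1800MHz on the tester can land in different bins once leakage is factored in, which is why overclocking a 1600MHz-binned part with better cooling is not guaranteed to work.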
 
This all goes to what Microsoft internally considers good-enough yields and where the actual yield distribution ends up.

If they can hit their "good enough" line at a higher performance target and see a higher upside than the incremental good die increase from staying put, it's their call. We're not going to deduce that with the information on hand.
 
Stable operation at a given thermal profile isn't the only factor that determines binning selection; chips are also carefully tested for transistor leakage, which is a separate issue. ...

But it doesn't seem like they are asking for much. We have heard 800 MHz and 100W cooled silently. For that size of SoC (seen in the Wired photos), it seems like the clocks and power dissipation are solidly on the low side.

[Meaning that we are assuming that they are far from the regime of issues that you mention.]

Put it this way: consider the process, transistor count, die size, clock, and power dissipation, and compare that to Tahiti. It seems like they are asking for very, very little at the very low-power end of things.

[Or put it this way, it seems that silence and 100W is what they targeted and thus is a self imposed limiting factor, not a silicon limiting factor. But that could be wrong if there turns out to be an unusual issue that is unknown.]
 
But it doesn't seem like they are asking for much. We have heard 800 MHz and 100W cooled silently. For that size of SoC (seen in the Wired photos), it seems like the clocks and power dissipation are solidly on the low side.
Electrical leakage. Not heat.

Put it this way, consider the process, transistor count, die size, clock and power dissipation and compare that to Tahiti. It seems like they are asking for very very little on the very low power end of things.
What they are asking for is relative to what they have. If Microsoft bought chips binned as stable at 1800MHz, then they would be safe to clock them to that speed and beef up the cooling. However, if Microsoft bought chips binned as stable at 1600MHz (a 99.9% certainty), then some of those chips will run fine at a higher clock rate with better cooling, and some of those chips will not, because they will suffer electrical leakage that the cooling can't prevent. That would have been one of many reasons why they were binned for a lower clock rate.
 
Stable operation at a given thermal profile isn't the only factor that determines binning selection;...
Yeah, I appreciate that. But you raised the issue of the XB1's heatsink, implying it was that big to deal with heat issues. Assuming the heat difference between the PS4 and XB1 isn't massive, then the cooling requirements would be similar, and MS's choice of an oversized HSF could point to wanting quieter operation (potentially Sony could have a better overall airflow and cooling design), which means MS has more capacity to deal with heat by running louder.

I'm not really arguing in favour of that point, only that, the way you phrased yourself, if case design is an issue, I'd give MS the advantage (at a given volume level, of course; nothing to stop Sony putting a leaf-blower in there...).
 
Electrical leakage. Not heat.

Yes, I know that. But leakage and heat and clocks and voltage are interrelated. What are the voltage and clock? At such a low clock one would expect a correspondingly low voltage and leakage. Look at the voltages the related commercial products run at versus the clocks they can reach.


What they are asking for is relative to what they have. If Microsoft bought chips binned as stable at 1800MHz, then they would be safe to clock them to that speed and beef up the cooling. However, if Microsoft bought chips binned as stable at 1600MHz (a 99.9% certainty), then some of those chips will run fine at a higher clock rate with better cooling, and some of those chips will not, because they will suffer electrical leakage that the cooling can't prevent. That would have been one of many reasons why they were binned for a lower clock rate.

True, but I think they have imposed a limit well below what the chip can do. Tahiti is solidly in the regime you are talking about. A chip of a similar die size, with a higher transistor count, a year later on the same TSMC process, but at 100W rather than 250W (not sure exactly how to divide the 250W between Tahiti and the GDDR5), is not running near max clock and thus is also not near max voltage and leakage.

My guess is that the current clock came from the choices of 100W and silence, not the limit of the chip/design/cores and process.

This is like taking the silicon inside the A10-6800k chip at 100W for the desktop and then making a mobile version at 25W and lower clocks and lower voltage and lower leakage.

They are running (based upon the 100W) in a low power/mobile regime with low clocks and low voltage. I think they must be far from the limits/regime you are talking about.
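The low-clock/low-voltage argument leans on the first-order CMOS relation for switching power, P ≈ C_eff · V² · f. A toy calculation (the voltages and frequencies are invented operating points, not actual console figures):

```python
# First-order CMOS dynamic power sketch: P = C_eff * V^2 * f.
# Illustrates why backing off both voltage and frequency together,
# as in a low-power/mobile regime, cuts power sharply.

def dynamic_power(c_eff, v, f):
    """Switching power for effective capacitance c_eff, voltage v, frequency f."""
    return c_eff * v * v * f

# Same hypothetical silicon at a "desktop-style" point vs. a backed-off point.
hi = dynamic_power(c_eff=1.0, v=1.15, f=1.0e9)  # near-max-clock regime
lo = dynamic_power(c_eff=1.0, v=0.95, f=0.8e9)  # low-power regime
ratio = lo / hi  # dropping V and f together cuts dynamic power to roughly 55%
```

Leakage is not in this formula, but it also falls super-linearly with voltage, which is the connection the post is drawing between low clocks, low voltage, and low leakage.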



I have no idea how many lots have gone through the fab already and how many are tested/binned out. But I am implying that there is a strong likelihood that they are nowhere near Scotty saying "I'm giving her all she's got, Captain!"

But if 100k or 250k are binned out and they were binned right at 800 MHz, then yes, time to re-test and re-bin. But I doubt they were binned out there, and in just one bin. If I did it, they would be binned out into multiple bins, knowing that the final clock takes time and a large database of data to establish.



It would not be shocking at all to find out that one or both companies are still working quite hard to determine the optimum clocks, etc.
 
If they wait until then no games will be taking advantage of either change...
Games will be going into QA probably in August.
I think any change at this point is going to be pointless.
Extra RAM might buy them something in the medium to long term, but I just don't see it happening.

Well, 1st and 2nd party games will be in the inner circle of such a change, so they'll know sooner rather than later. For multiplats, they had the PS4 as one of their performance and RAM targets anyway, right?
 
There is an audio block that does echo cancellation for Kinect, and I imagine whatever other audio processing Kinect needs to do. The rest would be on the GPU and CPU. The question is how much of it is part of the OS reservation and how much is run in the game VM.
There is an Engadget video around in which an engineer discussing the hardware said a good chunk of new silicon is now in Kinect 2. I'll find it...


here

http://www.viddler.com/v/10738ac4?secret=81714898
 
A little awkward, but I just came back from an electronics store, and around here word of mouth has already spread to the average shopper that the XB1 is weaker than the PS4. :rolleyes:
Yep. The same folks were probably convinced the 360 was way weaker than the PS3 too.

FWIW traditionally there has been no difference between the retail and devkit boards on MS consoles.
Beta kits obviously do differ but it's usually marginal.
Depends on the devkit. Internal devkits have serial breakout cables that allowed us to debug the hypervisor, and my devkit also had 1GB of RAM and a couple of other bells and whistles. (Talking about the 360 here, in case anyone wonders.)
 
My source at Microsoft says they are playing with overclocking the GPU to 900 MHz, and the ESRAM to 900 MHz. There is talk of upping the memory to 12 gigs of DDR3, leaving 7 gigs for gaming and 5 for the OS. The memory upgrade is being discussed because the three OSes are clunky and taking up a lot of memory to run as smoothly as they want.

They are waiting for the higher-ups to give the go on it. The hold-up, I was told, was that they were not getting a consistent yield percentage.
 