Xbox One (Durango) Technical hardware investigation

Status
Not open for further replies.
This is a 6% clock adjustment, it's a non-story.

The story is important for the following (sorry if it's been said already):
1) Downclock rumors were false. Unequivocally. I don't see how you can swing from thinking about downclocks to announcing upclocks in a month's time frame.
2) Yields aren't terrible. In fact, yields have to be good to consider upclocks.
3) Cooling was designed to allow some amount of upclocking (though I don't think it would affect heat output that much if voltages weren't changed).
4) APU is less likely to cause supply issues due to point #2.
 
Okay maybe not a non-story, it's just a non-surprise :???:
I admit it's a good thing the wacko rumors have less and less room to grow.
 

It's good because they paradoxically get more intense right before a console is about to be released, even though everything has been set in stone since manufacturing started. You start to hear the phrase "special sauce" mentioned way more often than it should be.
 
Hmmm... listening to the podcast, I think it lends more credence to the report posted by DF about the increase of the eSRAM. Mark Whitten was saying that they have moved from "theoretical guesstimate to actual testing" and, as such, they were too conservative on a lot of things. So it's very possible that the mode that allows the eSRAM to hit the higher bandwidth figure is something they discovered when they got the final kit. While the report from DF wasn't especially clear, it's worth noting that these documents were not meant for the general public, and the use-case examples might have come in a different update.

Anyway, it's interesting the way things are going with this console. It will be nice to have a book or an extensive article later detailing the development of this console, sort of like the Xenos article by Dave or the book by Dean.
 
Okay maybe not a non-story, it's just a non-surprise :???:
I admit it's a good thing the wacko rumors have less and less room to grow.

The fact that they actually did an upclock, and even (somewhat) announced it, will only fuel the fire of the remaining rumors. :)
 
3) Cooling was designed to allow some amount of upclocking (though I don't think it would affect heat output that much if voltages weren't changed).

The cooling wasn't designed to allow upclocking. The cooling was designed for silent operation while idle or viewing media and near silent when gaming.

It can accommodate upclocks that generate a higher thermal dissipation load, but that would also compromise why they designed the cooling system in the way that they did. To put it another way, it can likely handle a significantly higher clocked SOC just fine without breaking a sweat. But you'd also increase the noise profile. Possibly making it as noisy as the PS3 or Xbox 360 S cooling solutions. And that isn't something they want.

I'm betting the 53 MHz upgrade to the original specs is just that when they got final silicon back, not only did it not impact yields in any significant way, but that it also did not exceed their original thermal design characteristics. In other words, 853 MHz is still hitting the thermal design goal that they predicted 800 MHz would hit.

They could keep it at 800 and have it run slightly cooler than their original target, but if 53 MHz still allows you to hit your thermal design and yield predictions, then it becomes a "why not?" situation. It's basically free as someone mentioned above.

Regards,
SB
 
I wonder if that means Sony published their final clock too quickly, and if they'll stick with it. They sure are bound to almost the same variables as Microsoft.
Not really. They have more CUs, so more heat. They have, as far as I can tell, a _much_ smaller enclosure, so less ability to deal with extra heat.
 
I'm betting the 53 MHz upgrade to the original specs is just that when they got final silicon back, not only did it not impact yields in any significant way, but that it also did not exceed their original thermal design characteristics. In other words, 853 MHz is still hitting the thermal design goal that they predicted 800 MHz would hit.

There are multiple kinds of yield. Parametric yield (whether a device meets design parameters, of which power consumption is one) is an increasingly significant type.
As far as Microsoft is concerned, a chip lost to a random defect and a chip that falls out of spec are equally unusable.
 
Hmmm, in the end XNA is really nothing like "native" development for the Xbox 360. It's basically a .NET managed runtime.

Naturally, XNA provided tools sufficient for indies to get projects started and to publish their games. The XNA framework has both form and function; what it lacks is sophistication. Indies could go pro, provided they were given upgraded tools, proper hardware, and sufficient teams.

Xbox One should bring the indie environment pretty close to the pro environment.

According to spokesperson Whitten, as far as hardware is concerned there are no walls separating the two. Both will have a devkit of equal status.


^^^
As of today, MS still says retail kits have 8 GB, and we don't know for sure whether devkits have 12 GB.
Costs and time also play a role in this matter.

Even if the console ships with 12 GB, the extra memory could possibly be treated as reserved memory, and therefore not count as a bump to the core design.
 
I wonder if that means Sony published their final clock too quickly, and if they'll stick with it. They sure are bound to almost the same variables as Microsoft.

Sony haven't published the clocks, all we have is:
Sony said:
Single-chip custom processor
CPU : x86-64 AMD “Jaguar”, 8 cores
GPU : 1.84 TFLOPS, AMD next-generation RadeonTM based graphics
engine
800 MHz has been derived from the quoted performance, the number of CUs, and what's known about the GCN architecture.
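For reference, a rough sketch of that derivation. Note that the 18-CU count is the widely reported Orbis figure, not something in Sony's quote, and the 64 lanes / 2 FLOPs-per-cycle numbers are standard GCN assumptions:

```python
# Rough sanity check: the quoted 1.84 TFLOPS at the inferred 800 MHz.
# Assumes 18 CUs (widely reported for Orbis), 64 ALU lanes per CU,
# and 2 FLOPs per lane per cycle (a fused multiply-add), as in GCN.
cus = 18
lanes_per_cu = 64
flops_per_lane = 2            # FMA counts as two FLOPs
clock_hz = 800e6

flops_per_cycle = cus * lanes_per_cu * flops_per_lane   # 2304
tflops = flops_per_cycle * clock_hz / 1e12
print(round(tflops, 2))  # 1.84
```

2304 FLOPs/cycle at 800 MHz gives 1.8432 TFLOPS, which rounds to the quoted 1.84.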
 
Not really. They have more CUs, so more heat. They have, as far as I can tell, a _much_ smaller enclosure, so less ability to deal with extra heat.

There's less ability to deal with it at passive or nearly passive air flow levels, at any rate.
On the other hand, if the reason for the upclock was that physical characterization with the latest spins showed higher GPU clocks iso-power, Orbis could see *some* benefit if it's in terms of process improvement and the two chips also share the same fab.
The larger amount of active logic in the Orbis GPU may pose a chance of increased variation and the benefits to logic and SRAM may not be proportionate, so it may not be equivalent.

For the sake of argument, let's assume the Durango APU is 100W, of which the Jaguar cores are between 1/4 and 1/3 of the TDP.
Assuming Durango isn't operating at the edge of needing a voltage bump, the power increase should be roughly linear for this small increment. That's ~6.6% of 66-75 Watts, or around 5 Watts.
Orbis, with 1.5x the CU complement would pull up to 8W more, assuming it isn't riding the edge of some voltage bump. I'm handwaving whether 32MB of faster SRAM or 16 extra ROPS add more to either side of the equation.

This bump should only matter at load, given the granularity of AMD's current GPU power management. As far as getting heat out of the enclosure, isn't it conceivable that Sony had a guard band of 5-10 Watts, or had the option to bump up the RPMs of the fan by an increment just in case?
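The linear-scaling estimate above can be written out explicitly. All figures are the post's stated assumptions (a ~100 W APU, CPU cores taking 1/4 to 1/3 of it, power scaling linearly with clock for a small bump with no voltage change):

```python
# Sketch of the linear power-scaling estimate, using the post's assumptions.
apu_w = 100.0
gpu_side_w = (apu_w * (1 - 1/3), apu_w * (1 - 1/4))   # ~66.7 W to 75 W non-CPU
scale = 853 / 800 - 1                                  # ~6.6% clock increase

extra_w = [round(w * scale, 1) for w in gpu_side_w]
print(extra_w)                                         # [4.4, 5.0] -> "around 5 Watts"
orbis_extra_w = [round(w * 1.5, 1) for w in extra_w]   # 1.5x the CUs -> up to ~8 W
print(orbis_extra_w)                                   # [6.6, 7.5]
```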
 
Remember the DF article ... where they said eSRAM bandwidth of up to 133 GB/s?
Well, 1053 MHz × 128 bytes = 134.784 GB/s.
In other words, ~134 GB/s, very near the DF article's figure.

Just throwing it out there ... :devilish:
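With the units made explicit, the arithmetic checks out; keep in mind that 128 bytes/cycle corresponds to the eSRAM's 1024-bit path, and the 1053 MHz "effective" rate is the poster's speculation, not a confirmed figure:

```python
# Speculative check of the poster's math. 128 bytes/cycle matches a
# 1024-bit eSRAM path; the 1053 MHz effective rate is a guess.
effective_mhz = 1053
bytes_per_cycle = 128

bandwidth_gb_s = effective_mhz * 1e6 * bytes_per_cycle / 1e9
print(bandwidth_gb_s)  # 134.784
```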
 
This bump should only matter at load, given the granularity of AMD's current GPU power management. As far as getting heat out of the enclosure, isn't it conceivable that Sony had a guard band of 5-10 Watts, or had the option to bump up the RPMs of the fan by an increment just in case?

It's definitely possible and conceivable. But considering that they already have a large performance lead over the Xbox One SOC with regards to the GPU, is there a need to potentially compromise the design characteristics (acoustic and thermal) of their hardware?

IMO, if the performance characteristics (thermal dissipation) of the chip turned out better than expected, I would expect them to instead use that to potentially lower the acoustic footprint of their cooling solution rather than increase the performance lead their GPU has over the competition.

Regards,
SB
 
Remember the DF article ... where they said eSRAM bandwidth of up to 133 GB/s?
Well, 1053 MHz × 128 bytes = 134.784 GB/s.
In other words, ~134 GB/s, very near the DF article's figure.

Just throwing it out there ... :devilish:

But not the 192GB/s theoretical peak.
 
It's definitely possible and conceivable. But considering that they already have a large performance lead over the Xbox One SOC with regards to the GPU, is there a need to potentially compromise the design characteristics (acoustic and thermal) of their hardware?

IMO, if the performance characteristics (thermal dissipation) of the chip turned out better than expected, I would expect them to instead use that to potentially lower the acoustic footprint of their cooling solution rather than increase the performance lead their GPU has over the competition.

Regards,
SB

We're talking single-digit changes either way. The more fundamental decisions were related to the enclosure choice and cooler.
 
This is good news. Microsoft is getting some payback on their generous cooling system: it's a minor speed bump, but also only a minor bump in heat/wattage. The heat will most likely be kept in check by the cooler with just a few more RPMs. And the power usage is paid by us...

Good news!
 

The change is small, and likely happened because the chip would draw the same or very nearly the same amount of power as was originally planned.

To make up for the wattage, replace one light bulb in the residence with a model one grade dimmer, and you'd probably save double or more the extra power consumption.
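As a worked example of that trade-off (the bulb wattages here are hypothetical, and the ~5 W figure is the rough load estimate from earlier posts):

```python
# Illustrative arithmetic only; bulb wattages are hypothetical examples.
upclock_extra_w = 5                        # rough extra draw at load (earlier posts)
brighter_bulb_w, dimmer_bulb_w = 60, 40    # one "grade" dimmer incandescent
saved_w = brighter_bulb_w - dimmer_bulb_w  # 20 W saved

print(saved_w >= 2 * upclock_extra_w)  # True: the swap saves 4x the bump
```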
 
Question..

Is it right to assume that more RAM also increases heat?
So would going from 8 GB to 12 GB increase heat as well?

Because the clock bump on the GPU was so "small", could it be that MS also chose to upgrade other areas, such as CPU speed and RAM?

Just tossing this question out there...
 
But not the 192GB/s theoretical peak.
Sometimes there are strange coincidences. The XB1 gets an upclock to 16/15 of the original 800 MHz speed, and the eSRAM is now rumored to have a peak bandwidth of 15/16 of twice the original figure.

16/15 * 800 MHz = 853.3 MHz.
15/16 * 2*102.4 GB/s = 192 GB/s

Hmm. Food for conspiracy theories? :rolleyes:
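The two reciprocal ratios do check out numerically (102.4 GB/s being the originally quoted one-direction eSRAM bandwidth):

```python
# The reciprocal-ratio coincidence spelled out.
base_clock_mhz = 800.0
base_bw_gb_s = 102.4                      # original one-direction eSRAM bandwidth

upclock = 16 / 15 * base_clock_mhz        # the announced 853 MHz (853.33...)
peak_bw = 15 / 16 * (2 * base_bw_gb_s)    # the rumored 192 GB/s peak
print(round(upclock, 1), round(peak_bw, 1))  # 853.3 192.0
```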
 