Digital Foundry Article Technical Discussion Archive [2013]

Are there really no solutions to that, like, I dunno, Teflon-coated blades? We have constant airflow and tiny particles that weigh almost nothing. It shouldn't be that hard to keep them moving through the system without settling.

In a moisture free (no humidity) environment perhaps. But dust has an insidious way of accumulating on just about everything. As well, I do wonder how much a Teflon coating would impact the thermal dissipation properties of whatever it is coating. For the fan blades that wouldn't be an issue. So it would make lots of sense there. For the metal cooling fins, however, it may not be worth it.

And dust accumulation can be particularly bad if the device is anywhere near a kitchen (some houses have the kitchen adjacent to the living room) and is used during or shortly after anything is cooked with oil. Vaporized oil from cooking can lead to some incredibly bad accumulation on cooling surfaces. It may not sound like a significant factor, but I've seen it often enough that it isn't an extremely uncommon situation.

Regards,
SB
 
The fan is placed in a way that creates a vortex in the center, and dust will enter the bearing area eventually. Blowers don't have this issue because the motor and bearings are completely isolated from the airflow. My favorite way to prevent dust from entering is a plastic intake mesh: the static electricity is usually enough, it's cheap, and there's no need for filters. This looks like the exact opposite; the air intakes are everywhere and the holes seem too big to keep large particles out.

I never noticed the 4 heatpipes before; it's expensive but should also be very quiet. The only negative point is the size and the height being wasted: the heatsink itself seems to be only 3/4 of an inch, but the rest of the design makes the console 3.2 inches high, and it will need additional space on top of the console because the top is the only air outlet. So in an A/V cabinet I guess it needs about 5 inches of height available.

Blowers with backward-curved blades and a wide outlet are very quiet, and they have better static pressure characteristics than axial fans, so they are ideal when there's a lot of fin depth and a dense heatsink. With the same area available, a good design with one isn't inherently quieter than the other.
 
The only negative point is the size and the height being wasted: the heatsink itself seems to be only 3/4 of an inch, but the rest of the design makes the console 3.2 inches high, and it will need additional space on top of the console because the top is the only air outlet. So in an A/V cabinet I guess it needs about 5 inches of height available.

The size isn't a negative, as the point of it being large is so that it can also function as a completely passive cooler. Likewise, you'll only need the standard amount of A/V component clearance between the Xbox One and whatever is above it, just as you would for an A/V amplifier (with or without integrated receiver) and whatever you put on top of it.

The air coming out the top when the fan is active isn't going to be moving very quickly, nor will it need more than light airflow, hence static pressure won't be much of a factor. The whole design is likely geared around 300-600 RPM (more likely around 300 than 600). For comparison, my Core i5 2500K overclocked to 4.4 GHz quite likely puts out more heat than the Xbox One SOC, has a smaller heatsink (although tower instead of flower) and is passively cooled most of the time, with its 120 mm fan only ramping up to 300 RPM at load.

Large internal space with plenty of venting makes that possible. I can't do something similar in my portable desktop in a Sugo SG07, for example. But I knew I was sacrificing quiet for portability with that.

Blowers with backward-curved blades and a wide outlet are very quiet, and they have better static pressure characteristics than axial fans, so they are ideal when there's a lot of fin depth and a dense heatsink. With the same area available, a good design with one isn't inherently quieter than the other.

Blowers have an advantage in noise and airflow when space is at a premium. Hence you see blower-style fans used for high-end PC GPU reference designs, since AMD and Nvidia don't know how space-constrained the case the card goes into will be. Aftermarket coolers and AIBs, though, can generally expect their users to have large and well-ventilated cases (like the Xbox One), depending on how they are marketed, and thus you will almost never see them use a blower-style fan, as multiple axial fans combined with larger cooling-fin surface area are significantly quieter and more efficient. But only if you have a lot of internal space in the case. In a cramped case (like the PS4), a blower is going to be far more efficient and quieter than multiple small (40 mm or possibly 60 mm) axial fans.

It still won't be as quiet as a large slow moving axial fan, however.

Regards,
SB
 
It's about form factor. A cube form factor makes an axial design the best solution; a flat and very wide form factor makes a centrifugal design the best. The Xbox One compromised on height to allow an axial design. They also compromised stackability. They would still need almost the same 3 inches of height if they had used a noisier and cheaper 92 mm fan with half the heat pipes. I think they traded cost for silence with the 4 heat pipes and wider fan.
 
It's about form factor. A cube form factor makes an axial design the best solution; a flat and very wide form factor makes a centrifugal design the best. The Xbox One compromised on height to allow an axial design. They also compromised stackability. They would still need almost the same 3 inches of height if they had used a noisier and cheaper 92 mm fan with half the heat pipes. I think they traded cost for silence with the 4 heat pipes and wider fan.

We'll just have to agree to disagree on that, as I cannot see the Xbox One requiring more than 1/4 to 1/2 an inch of clearance between it and the component above it, not with the amount of ventilation available on the top and sides. Although if they were silly enough to put the fan right against the case, then you might have a point, though at the low RPMs and airflow I'm expecting this to run at, I'm not sure even that would be a great barrier. Otherwise, just the slight movement of air provided by the slow fan will prevent any heat buildup as well as move enough air through the fins to cool the SOC at load.

Put it this way. If the CPU is indeed designed to not go over 100 watts, then my current tower cooling assembly could cool it entirely passively. And it has less cooling surface area than the Xbox One. The Xbox One also has a significantly thicker and larger fan than mine. Meaning it'll push out way more airflow even at the same 300 RPM that I need when running at full load. That is one seriously overdesigned fan on that thing now that I look at it more closely. It wouldn't surprise me if it's designed to run at 150 RPM.

Anyway, my system doesn't have to worry about potentially running in an A/V cabinet alongside other devices putting out similar or greater amounts of heat, or about the limited ventilation in most A/V cabinets. It also has more internal air volume to absorb heat before it's vented through natural convection. Hence, even though my system could easily be completely passively cooled with my case and cooling setup, the Xbox One will need some level of active cooling at load.

It'll be interesting to see what Microsoft recommends in the way of space allowances for stacking though.

Regards,
SB
 
Maybe half an inch would be enough, but it's still blowing all of its wattage directly under the component placed above it, and that's not very nice. The lower the airflow, the hotter the air; you can't defy the laws of physics. That spot will exhaust 100 watts right there, with no other outlet anywhere on the Xbox; the rest are all inlets. Most A/V receiver manuals suggest a minimum of 6 inches of free air above, and to NEVER put any component on top. That's why I suggest 1 or 2 inches would be a better ballpark figure for a 100 watt device with a slow fan.
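For a rough sense of the numbers being argued about here, a simple energy balance gives the exhaust temperature rise directly from wattage and airflow. The CFM figures below are illustrative guesses, not measured Xbox One airflow:

```python
# Back-of-envelope check: how hot is the exhaust air when a slow fan
# has to carry away 100 W? Energy balance: P = rho * Q * cp * dT.
# The airflow values are assumptions chosen to span "slow fan" territory.

RHO_AIR = 1.2             # kg/m^3, air density at room temperature
CP_AIR = 1005.0           # J/(kg*K), specific heat of air
CFM_TO_M3S = 0.000471947  # cubic feet per minute -> m^3/s

def exhaust_temp_rise(watts, cfm):
    """Temperature rise of the exhaust air above ambient, in kelvin."""
    q = cfm * CFM_TO_M3S  # volumetric flow in m^3/s
    return watts / (RHO_AIR * q * CP_AIR)

for cfm in (10, 20, 40):
    print(f"{cfm:3d} CFM -> exhaust roughly +{exhaust_temp_rise(100, cfm):.1f} K over ambient")
```

So at very low airflow the exhaust really is noticeably warm, which supports the clearance concern; doubling the flow halves the rise.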
 
"Inside sources at Microsoft have spoken to Digital Foundry about why the Xbox One hardware is so large..."
"our Microsoft sources genuinely believe that the TV integration elements set it apart, and that once you have experienced what it's capable of you can never go back"

The whole article is pretty much anonymous MS sources using Digital Foundry to publish their talking points.

This is like the CIA leaking their talking points to the NY Times.
Honestly, not so much the actual TV integration, but the whole Kinect/natural language UI along with the immediacy promised with XBone, is what I'm most excited about. The slight added convenience of HDMI-in will be more interesting if it works well with many different STBs (including ours).

If it lives up to the claims, it should be very compelling.
 
What a very oddly worded article. What useful information does it bring to the table?
An always on consumer product that's designed to be quiet and reliable, just like the vast majority of other always on consumer products.

Was anybody really going to tolerate a 360 or PS3 background level drone 24hrs a day?
 
There is no direct correlation between ROPs and resolution. More ROPs only help shaders that are ROP bound. Only simple shaders are ROP bound. Most shaders are either ALU or TEX bound (depending if the shader has more math or more texture accessing). A shader can also be bandwidth bound (if the render target bit depth is high and/or the shader is sampling lots of high bit depth textures)....
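sebbbi's point above can be sketched as a toy model: estimate the per-pixel time each hardware unit would need, and the slowest unit is the bottleneck. All the throughput numbers here are made up purely for illustration; they are not any real GPU's specs:

```python
# Toy bottleneck model: a shader is "X bound" when unit X takes the
# longest per pixel. The default rates below are illustrative only.

def bottleneck(alu_ops, tex_fetches, bytes_moved,
               alu_rate=1000.0,   # ALU ops per unit time (made up)
               tex_rate=50.0,     # texture fetches per unit time (made up)
               bw_rate=200.0,     # bytes per unit time (made up)
               rop_rate=25.0):    # pixels written per unit time (made up)
    times = {
        "ALU": alu_ops / alu_rate,       # math cost
        "TEX": tex_fetches / tex_rate,   # texture sampling cost
        "BW":  bytes_moved / bw_rate,    # render target / texture bandwidth
        "ROP": 1.0 / rop_rate,           # one pixel written per invocation
    }
    return max(times, key=times.get)

# A trivial shader (little math, one fetch) ends up ROP bound,
# while a heavy lighting shader ends up ALU bound:
print(bottleneck(alu_ops=10, tex_fetches=1, bytes_moved=4))
print(bottleneck(alu_ops=400, tex_fetches=4, bytes_moved=16))
```

This is why "more ROPs" only helps the first kind of shader: for the second, the ROP term is nowhere near the longest one.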

Thanks for the detailed explanation sebbi.

I find the claim extremely dubious, to be honest; something that's more aimed at giving forum warriors some extra ammunition in the console wars than something that's actually usable in the real world. I mean, are we seriously supposed to believe that Microsoft only just found this out and had no idea this was possible during the console's design and testing phases? Here's what Eurogamer say:

Richard got that info from a post made by MS to devs on their Developer Central dev resource (which someone forwarded on to him)

That Digital Foundry comparison makes sense; what is being said here is accurate.

http://forum.teamxbox.com/showpost.php?p=14037951&postcount=186

Please stop posting crap from astrograd, he seems to have no idea about what he's talking about there.

For one, devs seem to be using all (or nearly all) of the 18 CUs for graphics, as compute is very hard to do effectively.
 
Richard got that info from a post made by MS to devs on their Developer Central dev resource (which someone forwarded on to him)

Thanks for the extra info (at least I hadn't seen it). Is there any more context to the post, or has DF disclosed all they know? Their description is a bit lacking. Did DF, or Richard specifically, get whole sentences, or merely "the source says"?

For one, devs seem to be using all (or nearly all) of the 18 CUs for graphics, as compute is very hard to do effectively

Sony devs better start soon if they haven't already!! :smile: I wonder if Cerny is going to share any wisdom he gleans from his game, if he actually does try to practice what he's preaching ;) I'm guessing we will be waiting a wee bit for those CUs to be exploited, since there are plenty of new resources to play with for quite a while. I would assume devs won't bother trying the whole GPU compute thing in earnest until they think they have to.
 
Thanks for the extra info (at least I hadn't seen it). Is there any more context to the post, or has DF disclosed all they know? Their description is a bit lacking. Did DF, or Richard specifically, get whole sentences, or merely "the source says"?

Richard disclosed all he knew in that ESRAM article; he doesn't know any more, like how the improvement is possible, etc.

The main takeaway point he got from the update was: "ESRAM is capable of more than was previously thought by virtue of actually being able to test production silicon rather than make theoretical calculations"

Sony devs better start soon if they haven't already!! :smile: I wonder if Cerny is going to share any wisdom he gleans from his game, if he actually does try to practice what he's preaching ;) I'm guessing we will be waiting a wee bit for those CUs to be exploited, since there are plenty of new resources to play with for quite a while. I would assume devs won't bother trying the whole GPU compute thing in earnest until they think they have to.
Well, Sony devs don't seem to be jumping on the compute bandwagon either; just look at Guerrilla: the only compute job KZ:SF does is memory defragmentation.

Though it's safe to say devs will make more use of GPGPU as the generation goes on, just like the SPUs in the PS3.
 
Cerny specifically said the PS4 was designed to be easy to learn, hard to master, and that compute was more for 2-3 years down the road, for those teams looking to really dig deep.

So by that timetable we shouldn't be surprised.

I didn't know compute shaders were supposed to be at SPU level of difficulty; you hear a lot about them on PC and I never got that vibe.
 
Well, Sony devs don't seem to be jumping on the compute bandwagon either; just look at Guerrilla: the only compute job KZ:SF does is memory defragmentation.
Killzone SF is using Havok (350 MB usage according to the slide) and Havok uses compute on PS4 (presumably also Xbox One); I'm betting SpeedTree does too. Unreal Engine 4 uses compute. Lots of games, by virtue of their engine and middleware, will benefit greatly from compute hardware without having to do any work.

Give it a couple of years for developers to play around with the hardware and see what's possible and I think it'll get a lot more use on both consoles.
 
Killzone SF is using Havok (350 MB usage according to the slide) and Havok uses compute on PS4 (presumably also Xbox One); I'm betting SpeedTree does too. Unreal Engine 4 uses compute. Lots of games, by virtue of their engine and middleware, will benefit greatly from compute hardware without having to do any work.

Give it a couple of years for developers to play around with the hardware and see what's possible and I think it'll get a lot more use on both consoles.

Well, besides the middleware solutions, devs aren't using GPGPU for much in their engines, or at least not yet.


Mod edit: Sources talk removed.
 
Killzone SF is using Havok (350 MB usage according to the slide) and Havok uses compute on PS4 (presumably also Xbox One).

Having compute resources available on both systems as well as PCs is a fairly major difference between the industry's interest in SPUs then and in CUs today. So progress, if it can be made, will occur more rapidly, since all cross-platform games could benefit. Once we see how things are laid out in the hardware for both systems, then we... well, actually, people other than me :) will get a shot at making an educated guess on timelines for compute interest and acceptance.

The plus for the PS4 is that it has all of the same big-ticket items as the XB1, just more of them in general (with the CPU and such excepted). Whether or not devs will take advantage of those expanded resources to gain some competitive advantage is another story. I would think uptake in CU usage would happen quicker if that were the way things went, as opposed to devs waiting for a problem to arise and using CUs to solve it.
 
Well, besides the middleware solutions, devs aren't using GPGPU for much in their engines, or at least not yet.


/*snip*/

It's just like anything new: the engines won't incorporate it for a little while, or won't make major use of it for a while, but you would have to be dense to think it won't happen soon.

AMD even specifically mentions it as a way to get better performance.

GCN Performance Tip 3: Invest in DirectCompute R&D to unlock new performance levels in your games.
Notes: DirectCompute, OpenCL and OpenGL compute shaders allow better access of shader features to the programmer.
Direct control over execution kernel, explicit thread synchronization and the use of shared memory are very powerful features that can unlock new performance levels in modern algorithms.
DirectCompute, OpenCL and OpenGL compute shaders should be equally fast in terms of raw compute performance (ignoring potential compiler efficiency differences).

http://developer.amd.com/wordpress/media/2013/05/GCNPerformanceTweets.pdf
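For anyone curious what that "shared memory" tip looks like in practice, here's the classic threadgroup reduction pattern it alludes to, sketched in plain Python standing in for HLSL/GLSL (the loop over `tid` plays the role of the parallel threads, and the comments mark where the barriers would go). The group size of 8 is arbitrary:

```python
# Sketch of the tree-reduction pattern used in compute shaders:
# each thread group loads a tile into fast "groupshared" memory,
# then halves the number of active threads each step.

def group_reduce(data, group_size=8):
    """Sum `data` the way a compute-shader dispatch of threadgroups would."""
    partial_sums = []
    for base in range(0, len(data), group_size):      # one threadgroup per tile
        shared = list(data[base:base + group_size])    # load into "groupshared"
        shared += [0] * (group_size - len(shared))     # pad the final tile
        stride = group_size // 2
        while stride > 0:
            # (barrier here: all threads must finish before the next step)
            for tid in range(stride):                  # "threads" 0..stride-1
                shared[tid] += shared[tid + stride]
            stride //= 2
        partial_sums.append(shared[0])                 # thread 0 writes out
    return sum(partial_sums)                           # second pass, on CPU here

print(group_reduce(list(range(20))))  # same answer as sum(range(20))
```

The point of the pattern is that the inner steps touch only the fast shared tile rather than main memory, which is exactly the "direct control over shared memory" the AMD note is calling out.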
 