Xbox One (Durango) Technical hardware investigation

I fearlessly predict that folks will point to "synergistic qualities" in the XB1 and make the claim that any increase in the clock will have a 2 or 3 or more fold increase in performance than the "mere" upclock would suggest. Taking the possible upclocks and synergistic qualities of the xb1 all into account gives you a practical 2nd GPU performance wise. Someone check MisterX and see if I'm right. :devilish:
Sounds like you could be MisterX :devilish:
 
Man, they have data for all the games that are on the 360 and on PC (and so with multiple CPU configurations).
Look at how some games behave with AMD vs Intel processors; it is not a blanket statement, and it should surprise nobody.

He was talking about the XB1; if you read the context, he was trying to justify the upclock.
 
I fearlessly predict that folks will point to "synergistic qualities" in the XB1 and make the claim that any increase in the clock will have a 2 or 3 or more fold increase in performance than the "mere" upclock would suggest. Taking the possible upclocks and synergistic qualities of the xb1 all into account gives you a practical 2nd GPU performance wise. Someone check MisterX and see if I'm right. :devilish:
Well, it looks to me like something in between a prediction and plain flame bait :LOL:

By the way, if somebody has DF's ear, maybe they could ask them to run an experiment (if it is doable):
take an HD 7770 (10 CUs @ 1GHz IIRC) and an HD 7750, run the former at 800MHz and the latter at a boosted frequency (the boost would need to be proportionally greater than on the XB1, as 2 CUs out of ten is a bigger share than 2 out of 14), and make proper measurements using Nvidia's tools (I guess Techreport's tools and methods are proprietary).
That should tell us whether they are in the ballpark (they may have made further trade-offs wrt the impact on yields) or whether it is a plain lie.
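For a rough sense of the raw ALU side of that comparison, here is a back-of-the-envelope sketch (assuming the usual GCN figures of 64 ALUs per CU and 2 FLOPs per ALU per clock; real results would also depend on bandwidth, ROPs and front-end clocks, which is exactly why the measurement would be interesting):

Code:
# Peak-FLOPS arithmetic for the proposed HD 7770 vs HD 7750 experiment.
# Assumes standard GCN figures: 64 ALUs per CU, 2 FLOPs (FMA) per ALU per clock.
ALUS_PER_CU = 64
FLOPS_PER_CLOCK = 2

def peak_gflops(cus, clock_mhz):
    return cus * ALUS_PER_CU * FLOPS_PER_CLOCK * clock_mhz / 1000.0

print(peak_gflops(10, 800))   # HD 7770 downclocked: 10 CUs @ 800 MHz -> 1024 GFLOPS
print(peak_gflops(8, 1000))   # HD 7750 upclocked:    8 CUs @ 1 GHz   -> 1024 GFLOPS
print(peak_gflops(12, 853))   # XB1 as shipped:      12 CUs @ 853 MHz -> ~1310 GFLOPS
print(peak_gflops(14, 800))   # hypothetical 14 CUs @ 800 MHz         -> ~1434 GFLOPS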
 
Wow, how biased... you know MSFT can profile games from the 360 on PC, they talk with the industry at large (from hardware vendors to software publishers), they are writing the APIs, and so on.
I think that a couple of years ago MSFT engineers had a better (and wider) view and understanding of where the industry (at large again, soft and hard) was heading than anybody here (including devs).
Now, it seems they were a bit conservative, not with the silicon (at large, including RAM) but with power draw; even with the slight overclock they are hardly pushing the bar. It is not like comparable hardware in the PC world pulls an insane amount of power.
But that also applies to Sony (different design choices); it seems both systems are set to burn less power than their predecessors.
How is the XB1 not a proper gaming system? I would be surprised if anybody could do better at iso-power outside of Intel; yes, Intel can do better, but that does not tell us much about either Sony's or MSFT's choices, as that tech is out of their reach.

First of all, I never claimed it wasn't a proper gaming system, since it obviously is and will continue to be so. I made the "720" remark because, while it might seem biased, I think it is merely descriptive, although I was certainly trying for apt. ;)

The bits I bolded in your response are indeed important. MS is in the catbird seat when it comes to the APIs that affect gaming right now, so when I state that they only have to worry about 11.2 I am not disparaging them but pointing out that 11.2 will likely be the standard for years to come, and MS is very aware of that fact. AMD is certainly aware of that fact too, and on some level would love to "collude" (not illegally of course) with MS to make sure that AMD ideas make their way into Windows APIs. In fact it was an MS engineer who made the point about the XB1 being a DirectX processing machine, so am I not to take what is said by MS at face value? ;)

As for the NOT XB1 system, this is not a VS thread and since I shan't even make a joke about going OT I shall not be tempted to do so!! :p
 
Sounds like you could be MisterX :devilish:

Evil minds think alike !!! :oops: I await Mister X and his next move !!!! :LOL:

ADDED: The S in SPU on the Cell processor stands for Synergistic, so playing with the word with respect to console-warrior back-and-forths was just too tempting :LOL:
 
Jesus, is the situation that dire for the next gen consoles CPU wise? Good god.
Did you miss the 6-8x faster bit? It's not all about peak flops. The occasions where you can get peak are few and far between and, more importantly, it costs a fortune in development effort to get there. The new CPU is quite capable, don't write it off just yet.

Although now you may understand why 360 backcompat is not really on the table.
 
It boils down to this for me: why were thermals so important that MS would allow them to "limit" (air quotes) power? What driving factor could have been so strong that heat dissipation to the extreme was the primary goal of their design? EU power restrictions? RROD?

I think they went too far. In fact, I think nearly all of their design decisions for nearly every product over the last 15 months were totally off base. Only the unification of the OSes from an engineering standpoint makes sense.

I think they want this to be the 'do everything' living room box and it has to behave as such. The way a cable box or smart blu-ray player would behave is silent, fast, reliable, etc. I think given the choice of completely silent and 900p, or a noticeable fan and 1080p, they're going 900p.
 
These flops. Now, it was a LONG time ago, but when I was doing asm programming I would try to avoid floating point arithmetic like the plague; it was slow, used loads of cycles, and was hard (no hardware support).
However, when discussing the merits of CPUs today, flop performance seems to be a significant KPI. Has something changed? Are CPUs no longer doing so much compare, move, branch, integer arithmetic etc. and doing loads of flops instead?
If there is a requirement to do lots of flops on these consoles' CPUs, surely it would be better done using the compute functions of the GPU?
People quote FLOPS because it's usually the biggest number a CPU has, even though it's only used for specialized routines that normally could just as easily be done with GPGPU. The reason it is the biggest number is because the floating point units are vector units where you can operate on four or eight 32 bit values at the same time. Divide by 4, and you'll get close to the scalar/integer performance of the CPU, although with micro-ops and different units being able to execute simultaneously, it's not quite as simple as that.

So yes, the floating point unit's performance has almost nothing to do with real production code, but people keep quoting it because bigger numbers are better, dontchaknow?
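To make that concrete, here is a rough sketch of where a headline CPU FLOPS number comes from and what the "divide by 4" rule of thumb gives you (the core count, SIMD width and issue rate below are illustrative assumptions, not confirmed Durango figures):

Code:
# Illustrative peak-FLOPS arithmetic for a vector-capable CPU.
# Assumed figures: 8 cores, 4-wide (128-bit) SIMD, one vector multiply plus
# one vector add issued per core per cycle, 1.75 GHz clock.
CORES = 8
SIMD_WIDTH = 4            # 32-bit lanes per vector operation
VECTOR_OPS_PER_CYCLE = 2  # e.g. one MUL + one ADD
CLOCK_GHZ = 1.75

peak_gflops = CORES * SIMD_WIDTH * VECTOR_OPS_PER_CYCLE * CLOCK_GHZ
scalar_estimate = peak_gflops / SIMD_WIDTH   # the "divide by 4" rule of thumb

print(peak_gflops)      # 112.0 headline GFLOPS
print(scalar_estimate)  # 28.0, closer to what scalar-heavy code actually sees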
 
My hunch is that the XB1 will ultimately be a bit easier to program/optimize for, in a similar fashion to the PS360 era. This I believe will be a direct result of how the two consoles handle compute loads.

With the audio block offloading tasks from the CPU and a clock speed bump, the XB1 will have a core or two of CPU advantage over the PS4. The PS4 thankfully has extra compute headroom in its additional GPU compute units, so this can be leveraged to make sure games aren't prematurely CPU bound. Thus I see this as similar to the PS3, which had extra compute resources in the SPEs that could handle CPU, graphics, or audio tasks but required some work for developers to become proficient with the architecture. This gen, instead of SPEs, the GPGPU will take some work to fully leverage. In the end, I expect the results on both to be very similar. There may be cases where the extra bandwidth in the XB1 can be utilized for a specific effect, and cases where the extra ALU power in the PS4 can be utilized for a specific effect, but "pretty much the same" is what I expect to be the prevailing assessment of on-screen graphics.
 
As for the NOT XB1 system, this is not a VS thread and since I shan't even make a joke about going OT I shall not be tempted to do so!! :p
Well, I put a smiley in my posts; jokes are OT, but I guess sometimes ;)

I dunno. I think you have to look at the total cost of development and sale. There's no way to know how much MS spent versus Sony on the development of their systems. However, what is telling is that MS is spending tons more effort convincing buyers, and maybe developers, that their design choices were worthwhile. Never mind silly things like releasing a system designed around the Sea Islands architecture AFTER Volcanic Islands discrete parts become available.
To me it looks like MSFT spent more (even putting Kinect aside) on R&D, with more in-house effort.
Now, wrt convincing buyers, their PR strategy was a disaster; I think nobody can question that.
They are also facing a deficit in paper FLOPS, whereas I would bet Sony spent less... actually my opinion on the matter is OT. I would add that the system is expensive, and whereas the services seem to be top notch in the US, I'm not sure the same can be said of the other big regions.
Now for the devs, I don't think MSFT has much to do; enthusiasts are going to buy it in sane enough quantities that supporting the console is worthwhile, and in the future, like everybody, they will adapt to how things go.
I won't go further; nothing about this is technical, it is pretty much business and marketing.
Obviously they knew the hw roadmap and they missed on nearly every gamble this time around
Well, that is a bit unfair; which gamble did they miss? The cost of GDDR5 is still unknown, and so is how it will affect price reductions.
MSFT has a lot of room to adapt; I think talk of a Kinect-less SKU this early is premature, close to FUD actually, though if they have no other choice than to compete on price alone they could remove it.
Durango's price should come down pretty well.

Sony's "gamble" has yet to prove successful in the long run; acting like MSFT is sort of done for is way too premature, and it disregards their financial capacity. I can't see into the future, though I think it is too early to tell one way or the other.
 
I think they want this to be the 'do everything' living room box and it has to behave as such. The way a cable box or smart blu-ray player would behave is silent, fast, reliable, etc. I think given the choice of completely silent and 900p, or a noticeable fan and 1080p, they're going 900p.

Does anybody seriously think the cooling with a 120mm fan would really sweat at 200W, or that the CPU/GPU power management can't play 1080p films silently? I doubt anybody assumes the console was designed for 4K streaming, do we?
 
So yes, the floating point unit's performance has almost nothing to do with real production code, but people keep quoting it because bigger numbers are better, dontchaknow?

Yes I suspected as much. There seems to be quite a lot of the "quoting big numbers and blindly assuming bigger is better" game going on these days...

Good news that those CPUs aren't looking too bad...you don't want to be discounting that from the "performance" equation.
 
Does anybody seriously think the cooling with a 120mm fan would really sweat at 200W, or that the CPU/GPU power management can't play 1080p films silently? I doubt anybody assumes the console was designed for 4K streaming, do we?

So the fan is 120mm confirmed?
 
So the fan is 120mm confirmed?

[image attachment]


That's how it looked to me on the images.
 
These flops. Now, it was a LONG time ago, but when I was doing asm programming I would try to avoid floating point arithmetic like the plague; it was slow, used loads of cycles, and was hard (no hardware support).
However, when discussing the merits of CPUs today, flop performance seems to be a significant KPI. Has something changed? Are CPUs no longer doing so much compare, move, branch, integer arithmetic etc. and doing loads of flops instead?
In addition to bkilian's reply, you have multiple functional units in a modern processor, which can run concurrently. The reason people quote flops is because human beings desire metrics to aid their basic comparison-based intelligence. Back in your day of ASM, I'm sure the cool kids were comparing CPUs by clock speed. We've finally moved on from that, only to blindly follow the Peak Flop Metric, which gives the uneducated a true and yet meaningless way to compare processors.

As for floating point performance, FPUs are now (and have been for a long time) as fast as integer units, and you can use floats without a performance penalty compared to running the same code with integers or bytes. No need to multiply all your values by 1000 to perform calcs and then divide by a thousand for the final value. ;)
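As a trivial, made-up illustration of that last point:

Code:
# Old integer-only habit: fake fractions by scaling everything by 1000.
price_milli = 19990                       # 19.99 stored as an integer
tax_milli = price_milli * 175 // 1000     # 17.5% tax, still in scaled integers
print((price_milli + tax_milli) / 1000)   # 23.488

# With a fast FPU there is no penalty for just writing what you mean.
price = 19.99
print(price * 1.175)                      # ~23.48825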

If there is a requirement to do lots of flops on these consoles' CPUs, surely it would be better done using the compute functions of the GPU?
GPU compute is the same thing in terms of how it handles code. The GPU gets its outlandish flop counts by being massively parallel. If your code doesn't fit a very wide data structure, you'll not get decent performance. There are probably a lot of new compute-friendly algorithms that'll be developed in the coming years, but the CPU still has an important role to play, even in floating point.
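A hand-wavy way to picture the "very wide data structure" point (purely illustrative, not real GPGPU code):

Code:
# Fits GPU compute well: thousands of identical, independent operations.
velocities = [0.1 * i for i in range(10000)]
positions = [v * 0.016 for v in velocities]   # every element is independent

# Fits poorly: each step depends on the previous result, so there is no
# width to spread across thousands of GPU threads.
value = 1.0
for v in velocities:
    value = value * 0.5 + v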
 
In addition to bkilian's reply, you have multiple functional units in a modern processor, which can run concurrently. The reason people quote flops is because human beings desire metrics to aid their basic comparison-based intelligence. Back in your day of ASM, I'm sure the cool kids were comparing CPUs by clock speed. We've finally moved on from that, only to blindly follow the Peak Flop Metric, which gives the uneducated a true and yet meaningless way to compare processors.

As for floating point performance, FPUs are now (and have been for a long time) as fast as integer units, and you can use floats without a performance penalty compared to running the same code with integers or bytes. No need to multiply all your values by 1000 to perform calcs and then divide by a thousand for the final value. ;)

GPU compute is the same thing in terms of how it handles code. The GPU gets its outlandish flop counts by being massively parallel. If your code doesn't fit a very wide data structure, you'll not get decent performance. There are probably a lot of new compute-friendly algorithms that'll be developed in the coming years, but the CPU still has an important role to play, even in floating point.

Then, from a practical standpoint, how can processor performance be realistically quantified and compared?
 
Then, from a practical standpoint, how can processor performance be realistically quantified and compared?
Benchmarks. There are a number of benchmarks used by comparison sites that provide a much better insight into the real performance of a processor.
 