Xbox One (Durango) Technical hardware investigation

So what are the odds that the Xbox GPU and CPU will get a clock increase? Since they're based on PC parts, shouldn't that be easy without increasing heat too much? I do remember someone saying that increasing the CPU to 2GHz would increase the TDP by 66%, but what about the GPU? Would increasing the GPU to 1GHz be out of the question?
One thing I wonder: a lot of people think the Xbox One would end up pretty close to the PS4 in specs if they upped the clocks, which would undercut the theoretical differences a bit. But if they are going to do that, how can they afford to change the clocks without adding latency?

IIRC, I read here that when you increase the clocks you also increase the latency; that's why they went with a 1.6GHz CPU instead of, say, a 3.2GHz CPU with fewer cores. So increasing the GPU clocks could also add some latency. Excuse me if I am wrong.
 
No, that's not how it works. Increasing clocks on the same chip decreases the latency of instructions in real time. It only increases latency to off-chip resources in terms of clock cycles, not actual time, assuming those resources aren't similarly sped up.

What you are probably thinking of is called pipelining, in which they break the chip into more and more stages, which allows them to clock it faster. Basically, the more pipeline stages, the faster they can clock a processor in general, but more stages also means more latency for the shorter instructions and for cache misses etc. At a certain point the additional stages aren't worth the clock increase.
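A minimal sketch of that trade-off, using made-up illustrative cycle counts and pipeline depths rather than anything Durango-specific: a higher clock shortens each cycle, so the same instruction finishes sooner in real time, but a fixed-latency off-chip access costs more cycles, and a deeper pipeline throws away more work on a mispredict.

```python
# Illustrative numbers only, not actual Jaguar/GCN figures.

def cycle_time_ns(clock_ghz):
    """Length of one clock cycle in nanoseconds."""
    return 1.0 / clock_ghz

def real_time_latency_ns(cycles, clock_ghz):
    """Real-time latency of an operation that takes a fixed number of cycles."""
    return cycles * cycle_time_ns(clock_ghz)

# Same 4-cycle ALU op: raising the clock from 1.6 GHz to 2.0 GHz cuts its
# real-time latency even though the cycle count is unchanged.
print(real_time_latency_ns(4, 1.6))   # 2.50 ns
print(real_time_latency_ns(4, 2.0))   # 2.00 ns

# A memory access with a fixed ~100 ns latency costs MORE cycles at the
# higher clock, because the DRAM didn't get any faster.
print(100 / cycle_time_ns(1.6))       # ~160 cycles
print(100 / cycle_time_ns(2.0))       # ~200 cycles

# Deeper pipeline, higher clock: assume (hypothetically) that the mispredict
# penalty in cycles roughly equals the pipeline depth.
for depth, clock_ghz in [(14, 1.6), (20, 2.2)]:
    # The real-time cost of a mispredict barely improves, or even worsens.
    print(depth, clock_ghz, real_time_latency_ns(depth, clock_ghz))
```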
 
This might be outside the scope of this thread, but I have a question: will there ever be a time when we get hardware like GPUs that are "100% efficient"? I ask this because of how many people got that small bit of information wrong when the MS documents were comparing GCN vs Xbox 360.

But I mean, is that even possible overall? From what you're describing, Xenus, it seems like the answer would be no, because of things like pipeline stages, instructions per cycle, latency and so on. What would it take to reach that magical point?

And to get back on topic, the guy from DF said there weren't any clock changes, so that means it's still 800MHz. Would clock increases usually happen this late?

Last question, I swear: with the Xbox only having 12 CUs and 2 ACEs, how will it handle ports that make extensive use of both desktop and PS4 compute abilities? Maybe the Xbox has a chip dedicated to stuff like that so as not to burden the 12 CUs? Outside of physics, what would compute be used for? I know in Tomb Raider they used it for the hair, TressFX? Would ESRAM make that a non-issue? Maybe the cloud?
 
This might be outside the scope of this thread, but I have a question: will there ever be a time when we get hardware like GPUs that are "100% efficient"? I ask this because of how many people got that small bit of information wrong when the MS documents were comparing GCN vs Xbox 360.

But I mean, is that even possible overall? From what you're describing, Xenus, it seems like the answer would be no, because of things like pipeline stages, instructions per cycle, latency and so on. What would it take to reach that magical point?
A game will never reach 100% efficiency. The only thing that could ever reach 100% is a test that does nothing useful other than trying to achieve 100% efficiency. Even then it might not do it.
 
What about the hardware itself, can it ever run at 100% efficiency? I assume hardware running at 100% wouldn't put out much heat, since processors are clocked high because they're not 100% efficient, right?

Back on topic: will we ever get some kind of full breakdown of the hardware from MS or anyone? Isn't hardware always under NDA even after being released? Either way, I can't wait until years down the line when we see some nifty tricks using ESRAM and its 192GB/s of bandwidth (that is so insane!). With that much bandwidth, wouldn't the Xbox One be a particle monster like the PS2 was because of its EDRAM?
 
What about the hardware itself, can it ever run at 100% efficiency? I assume hardware running at 100% wouldn't put out much heat, since processors are clocked high because they're not 100% efficient, right?
They're clocked higher for more power. If you increase efficiency, rather than drop the clocks to get the same performance with less heat, you'd keep the same high clocks and just get MORE POWER!!! ;) But you'll also generate more heat with higher efficiency, as chips generate heat when doing stuff rather than sitting idle, so your premise falls apart (artificial tests are used to drive CPUs and GPUs hard to see how hot they get).
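A rough sketch of why clock bumps cost disproportionate power, assuming the usual dynamic-power approximation P ≈ C·V²·f; the voltage figures here are invented for illustration, not actual Jaguar or GCN values:

```python
# Relative dynamic power from P ~ C * V^2 * f; the capacitance C cancels
# out when only the ratio between old and new is considered.

def relative_dynamic_power(f_new, f_old, v_new=1.0, v_old=1.0):
    """Ratio of new dynamic power to old dynamic power."""
    return (f_new / f_old) * (v_new / v_old) ** 2

# Clock-only bump, 1.6 GHz -> 2.0 GHz at the same voltage: +25% power.
print(relative_dynamic_power(2.0, 1.6))                # 1.25

# If the higher clock also needs more voltage (hypothetically 1.0 V -> 1.15 V),
# the increase is much steeper -- in the same ballpark as the ~66% TDP figure
# quoted earlier in the thread for Jaguar at 2 GHz.
print(relative_dynamic_power(2.0, 1.6, v_new=1.15))    # ~1.65
```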
 
This might be outside the scope of this thread, but I have a question: will there ever be a time when we get hardware like GPUs that are "100% efficient"? I ask this because of how many people got that small bit of information wrong when the MS documents were comparing GCN vs Xbox 360.

But I mean, is that even possible overall? From what you're describing, Xenus, it seems like the answer would be no, because of things like pipeline stages, instructions per cycle, latency and so on. What would it take to reach that magical point?

Never say never but...

I think to have 100% efficiency you'd need to have hardware branch prediction that was 100% accurate (so that you never hit a stall condition) and a scheduler that ensured you never wasted a single instruction slot.

I'm not sure if something that could do that could even exist - you might be getting into reversible computing realms...
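A hedged back-of-the-envelope for the branch-prediction part of that argument. The numbers (peak IPC of 2, 20% branches, 15-cycle mispredict penalty) are illustrative, not measured on any console; the point is just how quickly imperfect prediction eats into peak utilisation:

```python
# Effective IPC after adding the average stall cycles caused by mispredicts.

def effective_ipc(base_ipc, branch_fraction, mispredict_rate, penalty_cycles):
    base_cpi = 1.0 / base_ipc
    stall_cpi = branch_fraction * mispredict_rate * penalty_cycles
    return 1.0 / (base_cpi + stall_cpi)

ideal = 2.0  # hypothetical peak IPC of the core
for accuracy in (0.90, 0.95, 0.99, 1.00):
    ipc = effective_ipc(ideal, 0.20, 1.0 - accuracy, 15)
    print(f"{accuracy:.0%} prediction accuracy -> {ipc / ideal:.0%} of peak")
# 90% -> ~62% of peak, 95% -> ~77%, 99% -> ~94%, 100% -> 100%
```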
 
I'd be interested to know some theoretical TFLOP numbers based on a 2GHz CPU and a 1GHz GPU, and, if anyone is feeling especially clever, the TDP difference. A description of how these are calculated would make an interesting experiment.

I suspect the difference for such an overclock would be fairly negligible, but if the hardware is capable, it might be an option down the line for the manufacturers, assuming they don't break any local regulations/agreements.
 
I'd be interested to know some theoretical TFLOP numbers based on a 2GHz CPU and a 1GHz GPU, and, if anyone is feeling especially clever, the TDP difference. A description of how these are calculated would make an interesting experiment.

I suspect the difference for such an overclock would be fairly negligible, but if the hardware is capable, it might be an option down the line for the manufacturers, assuming they don't break any local regulations/agreements.

That's a 25% overclock on each, which I believe would bring the theoretical specs to:

CPU: ~102 GFLOPS to ~128 GFLOPS
GPU: ~1.23 TFLOPS to ~1.54 TFLOPS

I think there's an AnandTech article that said the TDP increase for Jaguar to go to 2GHz was 66%. I don't know what the 1.6GHz wattage requirements are or how to calculate the change for the GPU.
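The theoretical numbers can be reproduced directly from the widely reported Durango figures (8 Jaguar cores at 8 single-precision FLOPs per cycle per core, 12 GCN CUs with 64 ALUs each doing 2 FLOPs per cycle via FMA). Treat these as the rumoured spec, not an official breakdown:

```python
# Peak theoretical throughput at stock and hypothetically overclocked speeds.

def cpu_gflops(cores, flops_per_cycle_per_core, clock_ghz):
    return cores * flops_per_cycle_per_core * clock_ghz

def gpu_tflops(cus, alus_per_cu, flops_per_alu_per_cycle, clock_ghz):
    return cus * alus_per_cu * flops_per_alu_per_cycle * clock_ghz / 1000.0

print(cpu_gflops(8, 8, 1.6))          # 102.4 GFLOPS at stock 1.6 GHz
print(cpu_gflops(8, 8, 2.0))          # 128.0 GFLOPS at 2.0 GHz
print(gpu_tflops(12, 64, 2, 0.8))     # ~1.23 TFLOPS at stock 800 MHz
print(gpu_tflops(12, 64, 2, 1.0))     # ~1.54 TFLOPS at 1 GHz
```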
 
Removed. My numbers were for the PS4.
 
According to Phil Spencer's tweets, it seems some believe indies will have access to all 8GB...

http://www.oxm.co.uk/59041/microsof...access-everybody-gets-full-pool-of-resources/

They have more Phil Spencer tweets too.

Tommy McClain

Hmmmm, I've been crunching some theories together on the memory situation. Here you have a statement from Microsoft saying that every XB1 can be treated as a devkit, and then you have a statement saying there is no limit... well, you can't use all 8GB of memory without incurring some sort of limit, especially when some of it will be reserved.

I'm thinking all consoles will ship with 12GB of memory. Some of it will be split between the OS and debugging, leaving 8GB for developers to utilize. It seems like that would tie these two statements together.

A game will never reach 100% efficiency. The only thing that could ever reach 100% is a test that does nothing useful other than trying to achieve 100% efficiency. Even then it might not do it.


Yeah, there are many factors to be weighed; most of it comes down to debugging and perfect allocations.

---------------------------------

Speaking of which, I'm interested in Project Spark. I want to see how much can be pushed using that engine for indie development. I signed up for the Windows 8 beta, from which I think your project can be ported over to the XB1. Anyway, I have a few ideas of my own.
 
Hmmmm, I've been crunching some theories together on the memory situation. Here you have a statement from Microsoft saying that every XB1 can be treated as a devkit, and then you have a statement saying there is no limit... well, you can't use all 8GB of memory without incurring some sort of limit, especially when some of it will be reserved.

I'm thinking all consoles will ship with 12GB of memory. Some of it will be split between the OS and debugging, leaving 8GB for developers to utilize. It seems like that would tie these two statements together.
Uhmm, no.

These statements basically mean one can get registered as a developer with MS, which probably enables the download of the XB1 SDK software and tools (and a special firmware update) needed for development. This "no limit" was the answer to the question of whether indies would be limited to the OS partition of the console. MS's answer means that indies can use the same game partition/VM as all devs are using, so presumably the 5GB, if nothing has changed from the rumors.
 
If they shipped with 12GB of RAM, you could reserve 4GB for tombstoning. You could put one of the OSes in the extra RAM along with recently run programs, instantly switch things in and out, and still have a good 8GB for the two running OSes and game content.
 
Your theory would explain the recent tweet from:

Jonathan Blow: "When the XB1 version of The Witness comes out, internet fanboys' heads are going to explode or something." (The Witness, Xbox One)

Could The Witness be better on XB1?
 
Or all the fanboys won't understand how a turncoat Sony-lover like Blow could possibly still intend to release an Xbox version?
 
MS's answer means that indies can use the same game partition/VM as all devs are using, so presumably the 5GB, if nothing has changed from the rumors.

So what you're saying is all they did was reassure indies that they now have the same access to the hardware as other devs. Was there even a limit to their privileges last gen? From my understanding, the XNA community has always had good access to the hardware as it was.

If they shipped with 12GB of RAM, you could reserve 4GB for tombstoning. You could put one of the OSes in the extra RAM along with recently run programs, instantly switch things in and out, and still have a good 8GB for the two running OSes and game content.

That's what I was thinking; maybe developers already knew of this and indies were probably just left in the dark for the time being... It's a supposition.
 
Uhmm, no.

These statements basically mean one can get registered as a developer with MS, which probably enables the download of the XB1 SDK software and tools (and a special firmware update) needed for development. This "no limit" was the answer to the question of whether indies would be limited to the OS partition of the console. MS's answer means that indies can use the same game partition/VM as all devs are using, so presumably the 5GB, if nothing has changed from the rumors.

Not exactly... MS said there would be no hardware restrictions applied only to indie developers. We assumed that this meant the same VM the game OS uses, but it could be another WinRT partition set up with the same hardware access the game one has.

If the Hyper-V host can load and unload those partitions fast enough, the player would not even notice switching back and forth between Windows 8 store games and "general" games.
 
Or all the fanboys won't understand how a turncoat Sony-lover like Blow could possibly still intend to release an Xbox version?

Yes this was my interpretation when I saw it. I don't think he had mentioned an XB1 version until then or very close to then.

It's the Power of Cross-platform that compels you ... it allows the lions to lie with the lambs :LOL:
 