Xbox One (Durango) Technical hardware investigation

Is the XB1's design common in this respect? Is it typical to interface the ethernet port directly to the SoC rather than going through the southbridge?
With a few exceptions, you don't want I/O hooked up directly to the CPU, but exactly where you put the ethernet control electronics - in a separate southbridge chip or on the SoC itself - really doesn't matter. You do want a dedicated controller to manage ethernet traffic rather than having the CPU do it; controllers exist to take unnecessary load off the CPU.
 
I don't think it matters, as latency over the cloud is several orders of magnitude larger, and bandwidth several orders of magnitude smaller, than anything internal.

Who cares about the internal latency (which is in microseconds) when the cloud introduces 10+ milliseconds on even the best consumer-level networks?
 
What about that 2.5MB of eSRAM? Could that be for on-chip storage of cloud-computed data, or something like that?

Sure, and the Xbone also uses telepathy to give you a 3D world. Really, if you don't understand what the 'cloud' is, don't even talk about it...

Why would you need the fastest (and most expensive) RAM possible for something that arrives with unpredictable latencies measured in 30-100ms?

That 2.5MB is 95% likely to be used for the thing they need to run as fast as hell: their hypervisor.
What's curious is whether it will also contain the page tables for the game OS or not; ~2MB isn't enough, IMHO.
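For what it's worth, here's a back-of-envelope sketch of that "not enough" claim, assuming x86-64-style paging (4KB pages, 8-byte entries); the actual Durango page-table layout isn't public, so treat the numbers as illustrative:

```cpp
// Back-of-envelope: can ~2MB hold the game OS page tables?
// Assumes x86-64 style paging: 4KB pages, 8-byte entries, 512 entries/table.
#include <cstdio>

int main() {
    const double GiB    = 1024.0 * 1024.0 * 1024.0;
    const double mapped = 8 * GiB;  // memory to map (XB1 has 8GB total)
    const double pte    = 8.0;      // bytes per page-table entry

    // Leaf-level page tables alone, ignoring the upper levels (<1% extra):
    double tables4k = mapped / 4096.0 * pte;                // 4KB pages
    double tables2m = mapped / (2.0 * 1024 * 1024) * pte;   // 2MB large pages

    printf("4KB pages: %.1f MB of leaf page tables\n", tables4k / (1024 * 1024));
    printf("2MB pages: %.1f KB of leaf page tables\n", tables2m / 1024);
    return 0;
}
```

With 4KB pages the leaf tables for 8GB alone come to ~16MB, so ~2MB indeed can't hold them; with 2MB large pages they shrink to a trivial 32KB.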
 
Does knowing what the cloud generally is make you so arrogant that you can tell other people what they do or don't have the right to talk about?

And I was talking about data that isn't latency-sensitive but is position-sensitive (the player's avatar/character position in the game world): AI, environmental physics deformations, and so on. I simply asked whether this kind of data needs local storage, and your reply was not very polite.
 
I am not saying it's used for such purposes. But why wouldn't you want to remove as much latency as possible within the hardware? It's the overall latency of the system that matters, and lowering it in one area makes your system more resilient to latency introduced by another area.

That being said, I think it's part of the coherent memory system.
 
It's definitely not used for that. Processing done in the cloud will be reserved for latency-tolerant tasks. Storing cloud data in that cache instead of main memory means you are talking about shaving ~50ns off something that takes 30,000,000ns to 100,000,000ns.

L1 cache reference 0.5 ns
Branch mispredict 5 ns
L2 cache reference 7 ns
Mutex lock/unlock 100 ns
Main memory reference 100 ns
Compress 1K bytes with Zippy 10,000 ns
Disk seek 10,000,000 ns
Read 1 MB sequentially from network 10,000,000 ns
Read 1 MB sequentially from disk 30,000,000 ns
Send packet CA->Netherlands->CA 150,000,000 ns
http://stackoverflow.com/questions/4087280/approximate-cost-to-access-various-caches-and-main-memory


That latency from the cloud will vary for each packet by orders of magnitude more than the 50ns shaved off by a cache.
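To put that in proportion, a quick sketch using the figures above (the ~50ns is roughly the gap between a main-memory reference and a cache hit):

```cpp
// Proportion check: what does shaving a cache-level ~50ns save against
// a 30-100ms cloud round trip? Figures taken from the latency table above.
#include <cstdio>

int main() {
    const double saved_ns    = 50.0;   // main-memory reference minus cache hit
    const double cloud_lo_ns = 30e6;   // 30ms best-case cloud round trip
    const double cloud_hi_ns = 100e6;  // 100ms worst case

    printf("best case:  %.7f%% of the round trip\n", 100 * saved_ns / cloud_lo_ns);
    printf("worst case: %.7f%% of the round trip\n", 100 * saved_ns / cloud_hi_ns);
    return 0;
}
```

Even in the best case the saving is about 0.0002% of the round trip.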
 
Yes, there's no way cache will be used specifically for cloud data. Cache wants to hold the most immediate data. Cloud data will be stored in RAM and pulled into cache when it's needed.
 
L2 cache reference 7 ns
Mutex lock/unlock 100 ns
Main memory reference 100 ns

...not to be picky, but I seem to remember that if you align your mutex on a cache line *and* there is no concurrent access to that cache line when you read/write it, the cost is almost the same as an L2 cache reference on Intel architectures, not the same as a memory access.
That's part of lock optimization.
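A minimal sketch of that optimization, assuming 64-byte cache lines (C++17 has std::hardware_destructive_interference_size for this, but 64 is hard-coded here for simplicity):

```cpp
// Sketch of the lock optimization described above: give the mutex its own
// cache line so an uncontended lock/unlock stays in cache instead of
// bouncing the line between cores (assumes 64-byte cache lines).
#include <mutex>

struct alignas(64) PaddedMutex {
    std::mutex m;
    // alignas(64) pads the struct out to a full cache line, so no
    // neighbouring data shares (and invalidates) the line.
};

static PaddedMutex g_lock;

void critical_section() {
    std::lock_guard<std::mutex> guard(g_lock.m);
    // ... an uncontended acquire/release here costs roughly an L2 hit,
    // not a trip to main memory, as the post above suggests.
}

int main() { critical_section(); }
```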

I am not saying it's used for such purposes. But why wouldn't you want to remove as much latency as possible within the hardware?
No. You want to remove latency from latency-sensitive data.
That's why using lower-latency RAM with a GPU yields little benefit for 3D processing, and why the GPU's memory controller is optimized to absorb huge latency hits: it can easily handle high latencies without impacting its job.

...and of course, what Pixel said. Removing 50ns, or even 50,000ns (1000x that), out of 30,000,000+ns makes absolutely no sense.
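The reason a GPU can shrug off memory latency like that is queueing: by Little's law, you only need bandwidth times latency worth of bytes in flight to hide the latency entirely. A rough sketch (68GB/s is the XB1's quoted DDR3 bandwidth; the 300ns latency figure is just an assumption for illustration):

```cpp
// Little's law sketch: in-flight bytes = bandwidth * latency.
// A GPU tolerates high memory latency by keeping enough requests queued.
// 68 GB/s is XB1's quoted DDR3 bandwidth; 300ns latency is an assumption.
#include <cstdio>

int main() {
    const double bandwidth = 68e9;    // bytes per second
    const double latency_s = 300e-9;  // seconds per request (assumed)

    double in_flight = bandwidth * latency_s;  // bytes that must be in flight
    printf("~%.1f KB must be in flight to hide %g ns at %g GB/s\n",
           in_flight / 1024, latency_s * 1e9, bandwidth / 1e9);
    return 0;
}
```

Keep ~20KB of requests outstanding and the latency per request stops mattering; only bandwidth does.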
 

Mod:crap link removed as the poor internet bots don't want to be led to that shite

Some questions... a friend sent me the link above. It's full of some bizarre stuff.

I suspect the answers to these questions are 'No', 'No', 'No' and 'Maybe'.

If someone could clear this up I would appreciate it.

  1. Is there any genuine evidence to support the idea that there is an additional 3D-stacked SoC in Xbox One?
  2. Does Xbox One include "Full HSA" - allegedly giving an advantage over PS4?
  3. Does Xbox One have a 'multi-threaded' rendering engine that again, could give a performance boost? (spot a pattern yet?)
  4. Will a new driver pack / tools update, again allegedly to be released 'soon', deliver up to 20% performance gains on Xbox One?

It all seems very unlikely, given what we know about the X1 APU die simply having to fit in more stuff (ESRAM, SHAPE, Move Engines, etc.), which limits the real estate for the GPU elements on the chip.
 
I suggest you abandon all ties with that so-called 'friend'. :p

I would say MisterX.... spouts horseshit, but that's not true. After a horse has done its business, along comes a dung beetle that rolls it up, takes it to its nest, eats it itself, and later poops out the excrement derived from excrement. That's still a higher calibre of content than MisterX... generates.

After a catalogue of fails, you'd think people would wake up and smell the Java and not give this guy any recognition whatsoever, but some deluded fools keep returning to him for 'insight'. I think it's a medical condition. If it can't be cured with a swift slap to the face, medication should be used.

Please avoid that thing at all costs. Tell everyone you love to avoid it as well.

http://www.ign.com/boards/threads/x...from-32mb-to-6gb-worth-of-textures.453263349/

Interesting overview of tiled textures / resources.

Can the ESRAM really hold the 'onscreen' equivalent of 6GB of textures in more traditional approaches?
You're very late to the party. I suggest you search the forum rather than ask the same old questions again. I suppose reading this entire thread is a bit too much though. ;) The answer's 'no'. Find the thread on SVO by the same author, AlexGlass, on this board and there should be more details for you.
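For a rough sense of why: with tiled resources only the visible tiles need to be resident, but the arithmetic still doesn't get you anywhere near a "6GB equivalent". A back-of-envelope sketch, assuming D3D-style 64KB tiles (128x128 texels at 32bpp):

```cpp
// Back-of-envelope on the "32MB holds 6GB of textures" claim.
// D3D tiled resources use 64KB tiles; at 32bpp a tile covers 128x128 texels.
#include <cstdio>

int main() {
    const double esram  = 32.0 * 1024 * 1024;  // bytes of ESRAM
    const double tile   = 64.0 * 1024;         // bytes per tile
    const double texels_per_tile = 128.0 * 128.0;
    const double screen = 1920.0 * 1080.0;     // pixels at 1080p

    double tiles  = esram / tile;              // 512 tiles
    double texels = tiles * texels_per_tile;   // ~8.4M texels
    printf("%g tiles, %.1fM texels, %.1f texels per screen pixel\n",
           tiles, texels / 1e6, texels / screen);
    return 0;
}
```

512 resident tiles give roughly four texels per 1080p screen pixel: enough for one layer of exactly-visible texture data, but nothing like "6GB worth" once you count material layers, mip levels, filtering borders, and the render targets that also compete for ESRAM.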
 
:D

Thanks. As I suspected.

Forgive my ignorance of the complexity of the issues... but does the architecture basically mean a generation of results like we've seen with Tomb Raider and COD, etc.? Or will the gap lessen with better tools?
 
Saying the gap will lessen with better tools amounts to expecting that one console will benefit more from better tools than the other.
As neither architecture is "exotic", this is not something I would expect.

I'd expect both consoles to more or less maintain the current gap rather than see it change drastically, given that lots of third-party developers already seem to land on similar end results between the two consoles.

But yes, the IQ would most definitely improve as time goes on.
 
Thanks... I guess I was thinking the X1 tools might not be up to par to begin with, so there might be some 'gap' reduction from that.
 