Xbox One (Durango) Technical hardware investigation

Aeoniss's quote says the guy has no insider knowledge. I don't know where Aeoniss is getting that from. But if you go just by the technical documentation, you have MS saying, "we've got these great DB and CB blocks that can do funky stuff," and you've got Sony not saying that. If you don't know any better, the logical conclusion is to believe they are something unique to MS.

There are no details on the blocks in XB1 to compare the GCN RBE to. ;)

Here's AMD's GCN whitepaper with a pretty picture of the 7970, showing "color ROP units" and "z/stencil ROP units". I don't know how these differ from the remark about XB1. The source claimed, "Fast render paths and pre-out on depth tests at a vertex level? This isn't something the PS4 GPU can do." If you search the web for the DB/CB discussion surrounding those leaks, smart-sounding people said it's no different from GCN anyway.

Given that he's an indie, likely hasn't got a devkit, and would be under NDA if he had one (making discussion of XB1 dangerous), I'm inclined to believe he's speculating and just comparing what's known, without real technical depth. I may be wrong, but it's not coming across as something new and insightful.

We're friends, and I can confirm he has no insider knowledge.
 
Depth and Stencil sounds like where he is deriving the quote above from. Would that be accurate, and is this functionality peculiar to XBO? If it is, is it significant in any way?
As already said, early Z and Hi-Z have been standard features of all AMD GPUs for a decade or so (I don't know exactly when they arrived; it was before I got interested in GPUs). And yes, it can be a significant performance enhancer, but it is one that is present in all current GPUs (nV uses equivalent mechanisms), so it makes no difference relative to the PS4 or any other GPU, for that matter.
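
To make the early-Z point concrete: the classic way software exploits it is a depth pre-pass, and it works the same on any modern GPU. A minimal OpenGL sketch, assuming an active GL context; draw_scene() is a hypothetical callback that issues the scene's draw calls:

```c
#include <GL/gl.h>

void draw_scene(void);  /* hypothetical: issues the scene's draw calls */

void render_with_depth_prepass(void)
{
    /* Pass 1: depth only. Fills the Z buffer (and the Hi-Z structure
       the hardware maintains behind the scenes); no color writes. */
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    glDepthMask(GL_TRUE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    draw_scene();

    /* Pass 2: color. With GL_LEQUAL and depth writes off, every
       occluded fragment fails early-Z before its pixel shader runs. */
    glDepthFunc(GL_LEQUAL);
    glDepthMask(GL_FALSE);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    draw_scene();
    glDepthMask(GL_TRUE);  /* restore default state */
}
```

Nothing here is XB1- or PS4-specific; the early rejection happens in the same fixed-function depth hardware on both.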
 
Could the below found here http://memoir-systems.com/index.php/technology shed some light on the supposed ESRAM performance increase?

Memory bandwidth is the rate at which data can be read from or stored into a memory. It is a measure of the rate of data transfer from memory, and can be increased by expanding the data bus width of the memory. This increases performance but does not allow more accesses to the memory (on the address bus). A distinct and more inclusive measure of memory performance is MOPS, which refers to the rate at which unique accesses can be performed to memory. Doubling MOPS doubles the number of accesses supported by the memory, and as a consequence also doubles the total memory bandwidth. The use of MOPS to measure memory performance is analogous to the use of IOPS (Input/Output Operations Per Second) to benchmark computer storage devices such as hard disk drives and solid state drives.

So based on the above, there would be two options:
Option 1: Instead of 1 read and 1 write channel, there are actually 2 read and 2 write channels, which would double the theoretical bandwidth, but in reality this would be limited by the 32 MB size of the ESRAM and the target 1080p resolution, and would only be attainable under special circumstances.

Option 2: There is 1 read and 1 write channel, but they run at double the rate, which would give the impression of more bandwidth but would again be limited by the ESRAM size etc.

Option 2 would allow MS to give a base performance figure which could then be revised once system verification is complete.
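
To put rough numbers on the two options (the 1024-bit interface width and 800 MHz clock are leaked figures, and this sketch is purely illustrative):

```c
#include <stdio.h>

int main(void)
{
    const double bytes_per_access = 128.0; /* 1024-bit interface (assumed) */
    const double clock_hz = 800e6;         /* 800 MHz clock (assumed) */

    /* One access per cycle gives the official figure. */
    double base_gbps = bytes_per_access * clock_hz / 1e9;

    /* Both options amount to doubling the accesses per cycle (the
       MOPS, in Memoir's terms) rather than widening a single access,
       so both peak at the same doubled bandwidth. */
    printf("1 access/cycle:   %.1f GB/s\n", base_gbps);       /* 102.4 */
    printf("2 accesses/cycle: %.1f GB/s\n", base_gbps * 2.0); /* 204.8 */
    return 0;
}
```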
 
Could the below found here http://memoir-systems.com/index.php/technology shed some light on the supposed ESRAM performance increase?
No.
MS would have needed to be a customer of theirs and to have asked them to implement the eSRAM for Durango. I guess MS would have known about that.
And that company says they implement whatever the customer wants and formally verify that it meets the customer's wishes exactly. It's not that they run some fancy algorithm on an already-produced SRAM array to make it magically faster. They have to put that bandwidth into the hardware before tapeout, like everyone else. Simply speaking, they just claim to do it in a more clever way (with an emphasis on high bandwidth via multiple narrow accesses in parallel, if I see it right), using multiple standard macros and their custom controller in front of them.
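
Purely as an illustration of that "multiple narrow accesses in parallel" idea (this is not Memoir's or MS's actual design; the bank count and word size are made up), banked SRAM looks something like this:

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_BANKS  8   /* hypothetical: eight narrow SRAM macros */
#define WORD_BYTES 16  /* hypothetical: 128-bit word per macro */

/* Low-order interleaving: consecutive words land in consecutive banks,
   so a streaming access pattern spreads across all banks at once. */
static unsigned bank_of(uint32_t addr)
{
    return (addr / WORD_BYTES) % NUM_BANKS;
}

/* Two same-cycle accesses conflict only if they hit the same bank;
   the controller in front of the macros would serialize those. */
static bool same_cycle_conflict(uint32_t a, uint32_t b)
{
    return bank_of(a) == bank_of(b);
}
```

The aggregate bandwidth then scales with how many banks you can keep busy per cycle, which is exactly the "put it into the hardware before tapeout" point.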
 
I think the idea that Microsoft didn't know what their system bandwidth was should be dropped, even though it was reported that way. It doesn't make any sense at all and is astronomically unlikely. It's far more likely that the information got confused somewhere as it spread through intermediaries.
 
Gipsel, thanks for the response.

I wasn't suggesting that MS used this company, just that they may have used the principles it describes, as I know MS does a lot of their own chip design. Also, a few other companies present their own version of this, so the tech is not exclusive to them.

If the bandwidth increase is true what do you think could be the simplest explanation?
 
That "Mynd" guy on PSU forums who was quoted earlier seems to be being discussed on GAF Anyway, does this make any sense (somehow I doubt it) his explanation for the new BW?

They are using the falling edge of the clock cycle, a la DDR? That would give you these sorts of numbers, I'm pretty sure.

That's something you could do, and it would rely entirely on the integrity of the ESRAM (i.e. you would be able to test it post-build and implement it if you found your structure could handle it).

There is other stuff he's saying that seems controversial too

http://www.psu.com/forums/showthrea...lopers/page7?p=6132276&viewfull=1#post6132276
 
Call me crazy, but the idea of a possible upclock for Durango has been fermenting in my head a little...

-Eastmen's recent upclock post
-Greenberg's tweet seeming to imply hardware changes before the May reveal, which was never explained.
-The lack of any clocks or flops being revealed for XB1 means they are still unknown, for better or worse.
-Albert Panello's (Microsoft guy) recent post on GAF. To me, it can be read as hinting at an upclock.


http://www.neogaf.com/forum/showpost.php?p=66980146&postcount=542

Albert Panello:

I do want to remind people – this interview was done six weeks ago, before E3. It's not like we just decided to talk about it. So it wasn't that I just decided to call up OXM and give them my opinion :)

I would like to pose this question to the audience. There are several months until the consoles launch, and any student of the industry will remember, specs change.

Given the rumored specs for both systems, can anyone conceive of a circumstance or decision one platform holder could make, where despite the theoretical performance benchmarks of the components, the box that appears “weaker” could actually be more powerful?

I believe the debate on this could give some light to why we don’t want to engage in a specification debate until both boxes are final and shipping.

What does the bolded part ("specs change"), and the post in total, mean?

It could also be read to mean they somehow don't want Sony to one-up them?

Note, I'd only indulge in this speculation as an idle sideshow. We have people like DF, and other sources, who have never hinted at any upclock or who have said the clock is 800.

Edit: reading Panello's post again, an alternate explanation that seems very plausible is "we're better (due to something not on paper); we don't want the competition to find out and up their specs to one-up us", rather than hinting at any upclock.

Hope that's not too versus.
 
That "Mynd" guy on PSU forums who was quoted earlier seems to be being discussed on GAF Anyway, does this make any sense (somehow I doubt it) his explanation for the new BW?



There is other stuff he's saying that seems controversial too

http://www.psu.com/forums/showthrea...lopers/page7?p=6132276&viewfull=1#post6132276

In theory maybe that's possible, but it's still something that has to be designed in from the start. The memory controller and interface would have to support it.
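
For what it's worth, the arithmetic behind the both-edges idea is simple. A minimal worked equation, assuming the leaked 1024-bit (128-byte) interface and 800 MHz clock:

```latex
\[
128~\text{B/transfer} \times 800~\text{MHz} \times
\underbrace{2~\text{transfers/cycle}}_{\text{both clock edges}}
= 204.8~\text{GB/s}
\]
```

Note that this peaks at 204.8 GB/s, not the ~192 GB/s DF reported, so if the rumor is accurate at all, the extra transfers would have to be usable only part of the time.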
 
We're friends, and I can confirm he has no insider knowledge.
Ok no "insder knowledge" but does he actually know what he is talking about? :D

How does he come to the conclusions he does when comparing the two machines (DB/CBs, etc.)? Is it based only on the same knowledge that's public, or does he know developers and just not consider that "insider knowledge"?


In theory maybe that's possible, but it's still something that has to be designed in from the start. The memory controller and interface would have to support it.

Yes, but if Mynd is correct, only once you are able to test the final hardware and the integrity of the ESRAM can you be sure it's something you can communicate to devs to plan on? (Full disclosure: I don't know what "integrity of ESRAM" means; I'm just trying to connect the dots here.)
 
-Albert Panello's (Microsoft guy) recent post on GAF. To me, it can be read as hinting at an upclock.


http://www.neogaf.com/forum/showpost.php?p=66980146&postcount=542

Albert Panello:

Has it been confirmed that he is indeed Albert Panello? :???:

Edit: reading Panello's post again, an alternate explanation that seems very plausible is "we're better (due to something not on paper); we don't want the competition to find out and up their specs to one-up us", rather than hinting at any upclock.

Hope that's not too versus.

Well "IT" must be on some piece of paper somewhere ? Besides upping the clocks what "specs" could he be talking about ? I can't imagine MS would be adding anything at this late date.

I'm not saying there's no chance of some hardware reveal for the XB1, but as always, what is really wrong with MS's engineering decisions if such things don't come to pass? It's going to do more general-purpose stuff than the PS4, so why not adjust the thermal and silicon budgets to reflect that emphasis on multiple roles? It's a bit boring for now, but ...
 
Either the clocks are bopping between different frequency ranges, or they are doing some sort of DDR every once in a while?

From the DF Twitter account:



Meaning there is another kind of read/write? Is there some "in between" read/write state that is STILL able to carry information?

I think the "pure reads and writes" probably refers to 102.4GB/s for reading only and similarly for writing only. That link Rangers posted about triggering on the falling edge instead of the rising edge is potentially interesting though as far as explanations go. I wonder if you can tell your hardware to trigger on the rising edge, then during the cycle flip that over to trigger on the falling edge so that it triggers twice during each cycle?
 
Yes, but if Mynd is correct, only once you are able to test the final hardware and the integrity of the ESRAM can you be sure it's something you can communicate to devs to plan on? (Full disclosure: I don't know what "integrity of ESRAM" means; I'm just trying to connect the dots here.)

Yes, that could be a possibility. It could be the type of thing that they did design but weren't sure would work until final production silicon verified it. If it doesn't work, the fallback mode is the advertised 102 GB/s BW.
 
That "Mynd" guy on PSU forums who was quoted earlier seems to be being discussed on GAF Anyway, does this make any sense (somehow I doubt it) his explanation for the new BW?



There is other stuff he's saying that seems controversial too

http://www.psu.com/forums/showthrea...lopers/page7?p=6132276&viewfull=1#post6132276

I speak on the subject only because of what I have read here, so don't blame me if I don't get this right :p :LOL:

His big beef with the PS4, it seems, is the penalty for a GPU stall (funnily, he doesn't mention a CPU stall, which would be somewhat more problematic) because of the latency issue, such as it is. Now of course, when he says a pipeline stall, we need to keep in mind that there is more than one pipeline, so stalls, should they occur (and lots of circuitry exists to make sure they don't), may end up being lost in the noise of a running game.

Now, big changes in the framebuffer could indeed cause some excessive access of system memory, but they could do the same to the ESRAM as well. If I were to make it simple enough for... me to understand :???:, I would say that, all things being equal, the PS4 would be at a relative disadvantage IF the set of changes to the frame buffer were large enough to overwhelm the PS4 GPU and cause a bunch of stalls, but small enough that the ESRAM could handle most if not all of the frame-to-frame delta. This is a very simplified picture, and I have no idea how likely such an occurrence would be.

Of course, all things being equal, if latency is your only significant problem, your set of solutions to that problem is fairly straightforward. With the XB1 you have problems of lesser magnitude but of more types, and therefore a more complicated mix of solutions for those problems. But here I am merely waving my hands about with little real knowledge of the subject. :oops:
 
Has it been confirmed that he is indeed Albert Panello? :???:



Well "IT" must be on some piece of paper somewhere ? Besides upping the clocks what "specs" could he be talking about ? I can't imagine MS would be adding anything at this late date.

I'm not saying that there is no chance of some hardware reveal with the XB1 occurring but as always what is really wrong with the engineering decisions made by MS if such things don't come to pass ? It's going to do more general purpose stuff than the PS4 so why not adjust the thermal and silicon budgets to reflect this emphasis on multiple roles ?? It's a bit boring for now but ...

I'm not speculating that there's anything really unknown (although you could still quibble over clocks). It's more like: let's say the ESRAM actually makes Durango more effective in practice than the PS4. Would you want to hide that fact, so Sony doesn't get wind of it and go "oh hey guys, we need to upclock our GPU another 200 MHz to get back ahead!"? Or some theoretical scenario like that. That's one way Panello's post reads to me.

The other way it reads is as hinting at a Durango upclock.

That's more what I meant by "not on paper".
Has it been confirmed that he is indeed Albert Panello?

No idea. GAF mods are usually pretty aggressive about confronting users over such claims, but I don't personally know whether they verified him.

I did take a quick look at his post history, and nothing stood out as out of character for what a corporate employee might post.
 
wild speculation
what if MS decided to utilize even the CUs reserved for redundancy? :p

double the production cost, but they could almost be there
 
So what I've gathered is that there are a myriad of possibilities to explain DF's article:

- Possible downclock
- Possible upclock
- eSRAM interfacing with something else on the chip?
- additional pipeline functionality
- a myriad of other grab-bag notions and ideas
- something that we haven't thought of, or, something that isn't readily available to speculate on.

What would really put this issue to rest is an architecture documentation leak/release. Aren't Microsoft holding a conference in August where we could possibly learn about the final hw and architecture?

Also, as an aside... I've been looking for documentation on where the eSRAM sits on the APU, but it's been convoluted. Has it been confirmed whether it communicates through a memory controller or directly with the GPU?
 
I'm not speculating that there's anything really unknown (although you could still quibble over clocks). It's more like: let's say the ESRAM actually makes Durango more effective in practice than the PS4. Would you want to hide that fact, so Sony doesn't get wind of it and go "oh hey guys, we need to upclock our GPU another 200 MHz to get back ahead!"? Or some theoretical scenario like that. That's one way Panello's post reads to me.

The other way it reads is as hinting at a Durango upclock.

That's more what I meant by "not on paper".

Ah yes, well, at this stage I should think that Sony pushing their thermal budget and adding additional strain to their yields (if the latter is even a problem) is unlikely, no matter what performance advantage the MS ESRAM implementation might have. I wouldn't mind a bit of a late-breaking "arms race" myself, however :devilish:

nothing stood out as out of character for what a corporate employee might post.

I don't think of him as merely a 'corporate employee', but on the other hand I haven't checked his past posts either, so what do I know.
 