Xbox One (Durango) Technical hardware investigation

An extra 4GB could be used as a fast buffer?

More is always better, if it costs nothing to run and produce.

You could gradually fill as much RAM as you can fit into a system and use the excess as a RAM drive, for orders-of-magnitude faster access than HDD/BD.
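In game terms, that surplus would most naturally act as a read-through cache over the HDD/BD. A minimal sketch in Python (hypothetical names, not any actual console API):

```python
# Minimal sketch of a read-through asset cache: surplus RAM acts as a
# "RAM drive", so repeat loads skip the slow HDD/BD entirely.
# Hypothetical illustration only, not any real console API.

class AssetCache:
    def __init__(self, budget_bytes):
        self.budget = budget_bytes  # how much surplus RAM we may use
        self.used = 0
        self.blobs = {}             # path -> raw file bytes

    def load(self, path):
        if path in self.blobs:       # hit: served at RAM speed
            return self.blobs[path]
        with open(path, 'rb') as f:  # miss: one slow trip to disc/HDD
            data = f.read()
        if self.used + len(data) <= self.budget:
            self.blobs[path] = data  # keep it resident for next time
            self.used += len(data)
        return data
```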
 
the memory bus
if you take out the OS-reserved memory/BW, and then the BW needed for a buffer, you get very little of that ~70GB/s left on the NFE-bus

There's eSRAM to make that your FE-bus.

~100MB/s from the HDD versus 70GB/s. If whatever you wanted to load was already in main memory... But 8GB is what there is, with part of the 3GB reserve holding apps for fast switching.
 
wow! why didn't you say so earlier? we have the magic eSRAM!
now we can eliminate the NFE-bus to keep costs down! :D
 
Regardless of how silly the specs are, do not assume that any leak made now is from a decision made recently.

Well, they communicated the 8GB memory figure less than two months ago, and 3 months after the PS4 reveal. If they intended to up it in response to Sony's last-minute decision, they would have gone public about it at their May event, or even later on the E3 stage...
It simply doesn't make sense...
 
the memory bus
if you take out the OS-reserved memory/BW, and then the BW needed for a buffer, you get very little of that ~70GB/s left on the NFE-bus

I'm not sure I agree, unless of course you're not referring to an extra 4GB of RAM being used as a faster data source in place of pulling things off the hard drive. You more than likely don't need any extra bandwidth at all to handle the extra memory; the extra RAM would be its own reward if it saves you trips to the hard drive and gives you extra space to store important game data.
 
It is interesting, but is it due mostly to the eSRAM, or to it being a closed box? The comment implies that it's primarily a result of the eSRAM; otherwise, why mention it...

Dave, can you expand any more on this?

FWIW that's what I've been hearing too - unrelated to Dave.

I think he was guessing...

I said 'perhaps' we'd hear more by Friday! I haven't heard any more yet, unfortunately.

Plus, I did say it was a rumour, so neither the upclock nor the RAM increase is confirmed.
And it's not just that the veracity of the rumour is unconfirmed; the rumour itself does not claim MS is definitely increasing the clocks or RAM.

Apparently, the story is that MS has gone to devs for feedback on a few different things, two of them being an increased GPU clock and an increase to 12 GB of RAM (another thing on the list was storage for 'tombstoning' - I'm guessing flash cache)

Devs have to list which spec changes they prefer in order of importance. So nothing is confirmed as of yet, just options MS is getting feedback on.

However, I do have someone else telling me (also from a dev source) that we shouldn't be surprised to see 12GB of RAM in the final box and that it isn't as hard for MS to do as we think.

Why would they upclock or add 4GB RAM (which would be asymmetrical, making it a very wonky solution) only a few months before launch? That's just crazy talk.

What would be the point of 12GB anyway? There wouldn't be any reasonable use for it. Just loading it all up with data - even from HDD - would take considerable time, so again, what would be the point? It's just fanboy nonsense, that's all.
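For a sense of scale on that loading time, a quick back-of-the-envelope calculation (using the ~100MB/s HDD figure quoted earlier in the thread; real sustained rates vary):

```python
# Rough time to stream a full memory pool from a ~100MB/s HDD.
# Illustrative arithmetic only; real transfer rates vary.
hdd_bytes_per_s = 100e6

for pool_gb in (5, 8, 12):
    seconds = pool_gb * 1024**3 / hdd_bytes_per_s
    print(f"{pool_gb}GB: ~{seconds:.0f}s")  # 5GB ~54s, 8GB ~86s, 12GB ~129s
```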

For what it's worth, the biggest complaint MS received from devs about the XB1 was the large system reservation - it was a bigger issue than even the low GPU flops.

Now, I don't know why devs would want more than 5GB of RAM (especially slow RAM), but it's not like MS doesn't have an impetus to increase the available memory.
 
If the problem is the OS reservation, then MS can still "free" the 1GB that they are currently reserving for future updates; this would give devs more RAM for games and would cost MS nothing.
Adding more RAM instead is no doubt going to cost a lot of money.

It would be a good trade-off in my opinion.

P.S.
My theory still stands anyway.
 
I just discovered the Engadget article about Xbox One's silicon lab. Very interesting stuff, but what really made me wonder was this:

The chamber is full of hundreds of variations of prototype Xbox hardware -- today, it's set to very cold -- and is vital in determining how the Xbox One stands up to extreme thermal conditions. With laughs all around, he's freed from the icy, zebra-filled prison. Surprising no one, the various beta kits of the console itself, the controller and the new Kinect all sport zebra-pattern tape to hide their shape (as rumored).
Photo of this chamber (you can also see some better shots in the video)


The article states that they were able to visit that lab a few days before the Xbox reveal event - by that time, beta devkits had already been sent out to devs... so why still test different boxes? Maybe the beta kits differ from the retail boxes in a significant way? Was this already discussed?
 

They will be continually testing new prototypes until there is no new version of the box left to make (i.e. when the next Xbox comes out). It's not really surprising that they are testing how well a few designs of the box handle extreme temperature ranges :).
 
Yeah - but why not just test the ones with the actual finalized (beta) board? It just doesn't make much sense to me to keep testing versions of the board that aren't going into production. But I'm probably missing something - I don't know a thing about this stuff.
 
Apparently, the story is that MS has gone to devs for feedback on a few different things, two of them being an increased GPU clock and an increase to 12 GB of RAM (another thing on the list was storage for 'tombstoning' - I'm guessing flash cache)
Developer response to such a question would be predictable without even asking. Devs whose projects are memory-bound will vote for more RAM, and devs whose projects are GPU-bound - whether on their own terms or in comparison to PlayStation 4 builds - will vote for a faster clock. Any other options put on the table, unless their impact on performance is obvious without testing, are unhelpful.

Basing a decision that affects a device with a 5+ year lifecycle on the problems developers are having this month would be terrible. And if Microsoft are making engineering decisions by popular vote, what is the chief engineer doing?
 
it's not like MS doesn't have an impetus to increase the available memory.
I think the impetus would rather be to decrease the reservation (which is merely software) than to add more hardware into the box, a much more complicated and expensive solution that requires more effort, cost, verification and so on. And 12GB is not a power of two, so there may be performance implications as well, both in terms of electrical timing (you would have two memory devices per channel rather than just one, which may affect latency etc.) and interleaving (a different number of pages between devices).
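To make the asymmetry concrete, here is a back-of-the-envelope sketch of the channel arithmetic. It assumes x16 DDR3 devices on a 256-bit bus, as in the widely reported Durango layout; the actual board details are not public.

```python
# Rough channel arithmetic for a 256-bit DDR3 interface.
# Assumes x16 devices per the rumoured layout; illustrative only.
bus_width_bits = 256
device_width_bits = 16
chips_per_rank = bus_width_bits // device_width_bits  # 16 chips fill one rank
gb_per_4gbit_chip = 0.5                               # 4Gbit device = 512MB

print(chips_per_rank * gb_per_4gbit_chip)  # 8.0GB: one clean, uniform rank

# Reaching 12GB means either a second rank populated on only half the
# channels (16 + 8 chips) or mixed 4Gbit/8Gbit densities, so some
# channels carry two devices and page interleaving becomes uneven.
```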
 
Reliability testing and qualification take months. The lower the achievable temperature acceleration, the longer the required testing (and/or the larger the sample size); the relationship is exponential with temperature.

With something like a console, it is very difficult to get either sufficient temperature acceleration or sufficient quantity.
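The "exponential with temperature" part refers to the standard Arrhenius acceleration model; here is a minimal sketch, using a generic activation energy and illustrative temperatures rather than anything product-specific:

```python
import math

# Arrhenius acceleration factor between use and stress temperatures.
# The activation energy and temperatures below are generic examples.
K_BOLTZMANN = 8.617e-5  # Boltzmann constant in eV/K

def accel_factor(t_use_c, t_stress_c, ea_ev=0.7):
    t_use = t_use_c + 273.15       # convert to kelvin
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN) * (1 / t_use - 1 / t_stress))

# A fully assembled console can only be stressed so hot, so the
# acceleration factor stays small and test time balloons; bare ICs
# on simple test boards can be pushed much hotter.
print(accel_factor(45, 60))  # ~3.2x
print(accel_factor(45, 85))  # ~17x
```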

Adding RAM (a different density or more chips) is a much more minor change as far as qualification goes. The SoC, VRM, cooling solution and case design are much more serious contributors.

Plus, if you did end up with a dual-foundry solution, you would have 2x the testing to do.

Now, the ICs and modules I work on are tested outside the system in simpler test boards, which makes things easier. I don't know if MS is doing that, but we at least see the racks of systems. Quite possibly both are being done and both are still in progress.

Maybe AMD is also doing tests on the SoC in simpler test boards, but with X SoCs per board - perhaps 10 per board in huge ovens/cooling chambers. (One of my higher-power projects needed to be in a cooling chamber due to a total power dissipation of 1kW; no need to run the oven to heat it up for temperature acceleration.)

I still don't think these changes are much in the grand scheme of things. (An upclock << 1175 MHz and a different memory density or number of modules. Hey, do you fear for the reliability of your laptop or PC each time you replace the memory with standard non-overclocked memory of a different density? No, you don't, because those modules were qualified separately already, because your motherboard and APU or CPU were already qualified for the maximum number of DIMMs and chips per DIMM, and because there are industry standards for DDR3, etc.)

Keep in mind that AMD and Nvidia are experts at re-binning, re-clocking, re-fusing (and re-branding :rolleyes:), matching the same original silicon, with a new laser mark, to all kinds of new combos of memory, clock, cooler, PCB, VRM, etc., plus a new "name/brand". They do it all day, and it is not that hard to do the derivatives or the re-brand.

The real difficulty would be a new SoC. THAT would be difficult. That rumor is fairly crazy, but we cannot say how crazy without access to insiders and the timeline. If it started in early 2012 (almost two years before launch), and if the rumor is not fake, then it is not so hard (not really any harder than the first SoC), especially if it is based on the same blocks already fabricated and verified - that could actually make it *EASIER* than the first SoC design was! (In other words, more CUs of the exact same CU design and another ESRAM block identical to the existing one.)

So if it started early, no problem. If it started late, big problem. If the new one is not done, then you *really* do not want to say anything: if you announce the new SoC and then have to ship the old one due to a glitch/problem with the new one, you will *really* look bad. So a new SoC in progress looks pretty much like a bad rumor; nothing gets said except a leak or two, which might be totally bogus too. And if the leak is from a developer half a dozen steps removed from the MS or AMD labs, then you cannot expect enough accuracy in the leak to be able to tell whether it is real or not.

Not that I am lending any credibility to the pastebin or similar. I am just filling in some design- and industry-related facts for fun, since I am a big hardware fan and a hardware designer. So don't go full fanboy on me if you don't agree; it is just a big interest of mine. And if the hardware rumor thread causes you to go full fanboy, then you don't need to read it, do you? (Unless you are a reputation management professional.)
 
For what it's worth, the biggest complaint MS received from devs about the XB1 was the large system reservation - it was a bigger issue than even the low GPU flops.

Do you know whether those were specifically complaints about the system reservation of DDR3?
(It seems likely that Durango reserves a number of resources, including GPU time.)
 