What HW cost reduction options does XB1 have?

The more I dig for info, the more I think they'll be stuck with 16 DDR3-2133 chips as the least expensive option for quite some time.

DDR4-4266 is very, very far off in an uncertain future, considering Haswell-E is coming out with a DDR4-2133 interface. It'll take a long time before DDR4 reaches the maximum speed on the roadmap, let alone at a lower price.

DDR4 isn't expected to reach price parity until 2016, so it'll be later than 2016 before it becomes cheap enough to justify changing the interface. 2017? 2018?

32-bit-wide DDR3/4 is still unobtainium, as far as I can tell. So there's no possible cost reduction from going to 8Gb (or 16Gb DDR4) chips if they can't source them in an x32 configuration.
 
Yes, the most recent predictions I've found put late 2016 as the earliest point at which there would be a significant price advantage to DDR4, which probably means early 2017 before we'd be likely to see these things in the wild.

32-bit seems to be a far bigger part of the DDR4 plan than it ever was for DDR3. Mobile will, I think, push 32-bit interfaces into becoming actualtanium. Unfortunately pricing means MS are stuck with a huge mobo to house those 16 equally spaced DDR3 chips for a few years yet.

Once DDR4 becomes mainstream I think we'll see speeds increase rapidly from 2133. The premium for DDR3 2133 is small now, despite most processors still not supporting it officially. Even 2400 isn't much extra.

I hold out hope that 4266 won't be much more "enthusiast" by 2017 than 2133 was in the middle of 2013. A couple of memory vendors are hoping to have 3200 enthusiast kits available by the close of this year, iirc ...
 
I just realised I cannot recall any non-handheld console that ever changed the type of RAM inside. I'm probably ignorant.
 
That's true I believe, but I think that had more to do with consoles using exotic RAM that didn't have an upgrade/replacement path in the market (or there were no significant cost savings to be had).

I don't believe any of the previous consoles had 16 RAM chips either.
 
The problem is that even at the same bandwidth the latency is worse, so I don't know whether you'd need double the frequency to get roughly the same latency on half the bus, and I don't know whether memory accesses are abstracted enough to allow it.
And with DDR4 only starting to appear on desktop next year, how long will it take before it becomes inexpensive?
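For the bandwidth half of that question, here's a back-of-envelope sketch in Python showing why halving the bus while doubling the data rate keeps peak bandwidth the same; the 256-bit DDR3-2133 setup is the one discussed above, the 128-bit DDR4-4266 configuration is purely hypothetical, and none of this captures latency:

```python
# Back-of-envelope peak bandwidth: (bus width in bytes) x (transfers per second).
# Figures are illustrative: 16 x16 DDR3-2133 chips on a 256-bit bus (as discussed
# above) vs a hypothetical 8 x32 DDR4-4266 chips on a 128-bit bus.

def peak_bandwidth_gbs(bus_width_bits: int, data_rate_mts: int) -> float:
    """Peak theoretical bandwidth in GB/s (ignores latency and bus efficiency)."""
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * data_rate_mts * 1e6 / 1e9

ddr3 = peak_bandwidth_gbs(256, 2133)   # ~68.3 GB/s
ddr4 = peak_bandwidth_gbs(128, 4266)   # ~68.3 GB/s -- same peak, but this says
                                       # nothing about access latency
print(f"256-bit DDR3-2133: {ddr3:.1f} GB/s")
print(f"128-bit DDR4-4266: {ddr4:.1f} GB/s")
```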
 
I really don't think that the Xbone will ever change memory type. For that they would need to redesign the APU, which is home to the northbridge and the DDR3 memory controller.
 
If it saves them money in the end why shouldn't it happen?

Because it's hard to think of a scenario where it does save them money. The regression testing alone to make sure it didn't break anything would be mega-bucks. For me the Occam's Razor on this is 'Has anyone done it before?' and the answer is no.
 
I still can't believe that they went with this dumbed-down internal design: just a plastic box with a mobo and a few elements placed over it [BD drive, HDD and cooler]. Zero attempt to control the internal airflow.

[Image: Xbox One internals breakdown]

[Image: Microsoft Xbox One opened]



Why is the Xbone mobo so large? PS4 has a much more compact mobo design [and everything else].
Besides the orientation of the X1 fan, the cooling solutions are essentially continuations of the designs from the PS3/X360. They also went down the road the engineers were familiar with. PS4 uses a centrifugal fan design just like PS3.
 
The problem is that even at the same bandwidth the latency is worse, so I don't know whether you'd need double the frequency to get roughly the same latency on half the bus, and I don't know whether memory accesses are abstracted enough to allow it.

It'd certainly be interesting to know. I would think that software running on a virtual machine, on a platform where it had to contend for memory access with other (unpredictable) processes running simultaneously on another VM, would have to be insulated to some degree from clock-precise memory access latency.

I guess a lot depends on what MS did and what kind of changes they wanted to be able to protect themselves from ...

I really don't think that the Xbone will ever change memory type. For that they would need to redesign the APU, which is home to the northbridge and the DDR3 memory controller.

Redesigning chips is nothing new for consoles. The 360S had a simulated FSB to allow the CPU and GPU (inc NB) to coexist on the same silicon.

And in the PC space, which is AMD's APU home turf, changing the integrated memory controller to a different one while retaining the same uarch is hardly something new ...

Because it's hard to think of a scenario where it does save them money. The regression testing alone to make sure it didn't break anything would be mega-bucks. For me the Occam's Razor on this is 'Has anyone done it before?' and the answer is no.

I don't think Occam's Razor is good for predicting the future, especially when so much has changed.

Xbone is the joint first console ever to have 16 memory chips iirc, and also the first console to use multiple virtual machines to isolate OSes from one another (and perhaps from some elements of the underlying hardware).

If MS can change, and they plan to make the Xbox One for a long time, DDR4 might at some point allow them to save $10 or more on the memory and perhaps a few dollars more on the PCB, power supply and cooling, case, packaging and shipping. Back when MS were talking of selling hundreds of millions of devices (lol) that could easily add up to hundreds of millions or billions of dollars over a decade.

I would hope they gave themselves room to move to a cheaper alternative to an enormous number of memory chips on a huge mobo if they had big plans over the long term. But maybe not ...
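Purely to illustrate how those guessed per-unit savings could scale, here's a trivial sketch; every figure in it is just the guess from this post, not real BOM data:

```python
# Illustrative lifetime-savings arithmetic using the guessed figures above.
memory_saving_per_unit = 10.0         # "$10 or more" on memory (guess)
other_saving_per_unit  = 3.0          # "a few dollars" on PCB, PSU, cooling, case, shipping (guess)
units_shipped          = 100_000_000  # the optimistic "hundreds of millions" scenario

total_saving = (memory_saving_per_unit + other_saving_per_unit) * units_shipped
print(f"Lifetime saving: ${total_saving / 1e9:.1f} billion")   # ~$1.3 billion
```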
 
If MS can change, and they plan to make the Xbox One for a long time, DDR4 might at some point allow them to save $10 or more on the memory and perhaps a few dollars more on the PCB, power supply and cooling, case, packaging and shipping. Back when MS were talking of selling hundreds of millions of devices (lol) that could easily add up to hundreds of millions or billions of dollars over a decade.

I would hope they gave themselves room to move to a cheaper alternative to an enormous number of memory chips on a huge mobo if they had big plans over the long term. But maybe not ...

Is it possible to put a dollar figure on the software QA :?:
 
If you know what you have to test for then you should be able to. At least, that's what a software engineering beard said at uni ...

Can I put a price on it though? Hell no!

If you put memory access behind some kind of sandcurtain interface though, with a view to making future revisions to the controller invisible, that should mean you just test the hardware against spec, and don't need to test any software.*

*(lol)
 
QA is always hard to put a "correct" value on.

1. Ship with no QA and you have no issues, i.e. any QA that would have been done would have been "wasted".

2. Ship with no QA and you have issues; then QA might have saved you the cost of having to deal with those issues + any damage to your brand that might mean lost sales in the future etc. And if the short-term issues are severe enough, it might mean that you go out of business.

So with just those two simple, unrealistic scenarios, the dollar figure gets complex to work out, real quick.
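To make that concrete, here's a toy expected-value comparison; every number in it is invented purely for illustration:

```python
# Toy expected-cost comparison for the two scenarios above.
# All figures are invented for illustration only.
qa_cost          = 2_000_000    # cost of running the QA pass (assumed)
p_serious_defect = 0.30         # chance a serious defect ships if you skip QA (assumed)
defect_fallout   = 20_000_000   # recall/patch cost plus brand damage if one ships (assumed)

expected_cost_no_qa   = p_serious_defect * defect_fallout   # the risk carried by skipping QA
expected_cost_with_qa = qa_cost                             # simplification: assume QA catches it

print(f"Expected cost without QA: ${expected_cost_no_qa / 1e6:.1f}M")
print(f"Expected cost with QA:    ${expected_cost_with_qa / 1e6:.1f}M")
```

In reality QA doesn't catch everything and brand damage is almost impossible to price, which is exactly why the dollar figure gets complicated so quickly.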
 
Is it possible to put a dollar figure on the software QA :?:

I cannot find the software development life-cycle one, but you can find it in any book about SE; can't grab it on Google atm, sorry.

[Image: relative cost of security fixes chart]


This one is specific to the cost of fixing in different stages, related to ITSEC.
Should give you a hint, anyway.

In terms of relative money, this:

[Image: issue cost by SDLC stage metrics]


Both are quite accurate. In the past, MS estimated around $100k per defect, but now I think their process (and batching of issues) has reduced that to about a quarter, as long as no study or research is needed to fix it (i.e. the bug is flatly reported with hooks to locate it almost immediately).

Still, it depends on whether you have to rerun all the tests, write custom tests for the issues, take care of any regressions and so on.

@JPT: no corporation will ever even consider that. I see it routinely done in startups, but a corporation can't afford it.

OT: imagine the cost of the Pentium FDIV bug - it required Intel to react massively (not to mention many legally dubious moves against AMD, even more in the P4 age) to prevent market shifts.
 
I wonder if the graphs still look the same, especially for games? Bugs post-release seem very plentiful.
 
@JPT: no corporation will ever even consider that. I see it routinely done in startups, but a corporation can't afford it.

I know, it's why I put unrealistic :) But I have been in meetings where people just think QA is a waste, and so it's up to people with more insight than me to correct it :)
 
This one is specific to the cost of fixing in different stages, related to ITSEC. Should give you a hint, anyway.
ITSEC would be expensive as you're not just looking for bugs but doing full on penetration testing, looking for exploits etc. There are a lot of known exploits and typically you'll try every known exploit applicable to your application - this takes time (and money).

But I would expect the scale (if not the actual figures) to be fairly representative of bug fixing in general. We (I work in Government) worked this out a few decades back, and except where we use outside contractors for day-to-day solutions, our methodology for new applications is quite different from that of traditional software companies. Much of the code is mission-critical, runs 24/7 and has to be right 100% of the time with no margin for error. Consequently we use a number of bespoke languages (with their origins in traditional languages like Ada, Pascal and Perl) designed to make errors more apparent to the coder and which produce compiled code that is ultra-safe rather than highly optimised.

Apple's new 'Swift' language is very interesting in this regard. Sorry, off topic. But it's interesting to me professionally.

Frankly, given how games are written, I'm impressed there aren't more bugs - particularly given the traditional crunch toward the end. You're already burned out and now you're going to start working 16-hour days for several weeks. What could possibly go wrong!! :runaway:
 
I wonder if the graphs still look the same, especially for games? Bugs post-release seem very plentiful.

Of course. It's not fun when you have to scramble for bugs post-release, which just eats into resources either directly or causes issues for future plans. I'm sure the guys on BF4 would rather be working on the next DLC than still trying to fix that damn thing, not to mention the damage caused to the brand.
 
ITSEC would be expensive as you're not just looking for bugs but doing full on penetration testing, looking for exploits etc.

Nah, figures for traditional SE are exactly the same. I can't just paste them as they are in classic IT books - the analysis there is even more refined, but the numbers are (more or less) the same, as xN multipliers. Those figures don't include pentesting, just tracking down bugs, fixing costs, re-testing, QA, re-shipping etc.
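A minimal sketch of that "xN multipliers" idea; the multiplier values below are ballpark figures of the sort commonly quoted in SE textbooks, not the exact numbers from either chart:

```python
# Ballpark "relative cost to fix a defect by stage" multipliers, as commonly
# quoted in SE textbooks. The values here are illustrative, not from the charts above.
RELATIVE_FIX_COST = {
    "requirements": 1,
    "design":       5,
    "coding":       10,
    "testing":      20,
    "post-release": 100,
}

base_fix_cost = 1_000   # assumed cost of a defect caught at the requirements stage

for stage, multiplier in RELATIVE_FIX_COST.items():
    print(f"{stage:>12}: ~${base_fix_cost * multiplier:,}")
```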

Frankly, given how games are written, I'm impressed there aren't more bugs

In my experience - trust me, an entire OS kernel is easier to crunch than a serious game. Even a well-written one.
Until you smash your face against a real game engine... they are behemoths. Especially given that they are multi-OS, multi-3D-API and some even multi-script...
The ones you find open-source on the internet or the like are not really comparable.

I work for... who I work for :)
 