News & Rumors: Xbox One (codename Durango)

The Xbox 360 comes to mind: remember when Microsoft changed it at the last minute from 256 MB of GDDR3 to 512 MB? By Microsoft's own admission this was very costly at the time, but they were pushed into it by Epic.
They've made that kind of change before, so maybe they've made another last-minute change this time.

What if the mainboard were flexible enough to accept either DDR3 or GDDR5, and one or two APUs?
Isn't it plausible that they went with this kind of flexible first design precisely to allow such a last-minute "reaction"? Future revisions could then shrink components and cut unused features to save cost.

The issue I see with a dual-APU setup is that it is very wasteful. If they have the ability to connect a high-speed link to the APU, the budget would seem better spent connecting a beefier GPU rather than a duplicate APU.

The GPU is the weak link in the rumored Durango design, not the CPU, so doubling up on CPU resources would be wasteful, unless they feel they need extra compute for Kinect. But I would think their compute needs for Kinect would be better served with CUs rather than CPUs.
 
But I would think their compute needs for Kinect would be better served with CUs rather than CPUs.

from VGLeaks

Core Kinect functionality that is frequently used by titles and the system itself is part of the allocation system, including color, depth, active IR, ST, identity, and speech. Using these features or not costs a game title the same memory, CPU time, and GPU time.

I don't think Kinect will use a game's resources, except for some features that VGLeaks says run on both CPU and GPU, and those come out of already-reserved resources:

Kinect’s CPU, GPU and memory usage on Durango are part of the system reservation.
 
It would be worse in many respects, but also superior in many others.

The best thing about choosing x86 hardware in general is the flexibility at design time to go with whatever configuration seems necessary at the last minute. Given the state of the world and the game market, it doesn't surprise me that they would head in the direction of x86, if for nothing else than the flexibility it brings to last-minute spec changes.

Granted, it's not as nice/efficient as an APU designed from the ground up, but if their research is telling them that the Durango APU spec as we know it is riskier than a last-minute change-up consisting of off-the-shelf components, then they could choose to go with off-the-shelf parts.

R&D costs are not cheap, but neither is a losing proposition at retail.

It's still an APU; just because they use x86 doesn't mean they can change whatever they feel like quickly. A change in CPU would still require a complete redesign of the system, which would take upwards of a year.
 
from VGLeaks


Thanks for the link, but I got that from earlier reading. I was just stating that if Kinect 2.0 (or a future revision) needs more compute power than MS anticipated, the silicon budget would be better spent on CUs (GPU) rather than CPU.

More bang for the buck so to speak.

Regardless of Kinect's needs though, from what the rumors state, Durango's GPU compute deficit is such that a boost might be necessary on graphics needs alone, much less factoring in whatever advanced features/abilities they plan for Kinect.
 
What if the mainboard were flexible enough to accept either DDR3 or GDDR5, and one or two APUs?
...And if wishes were horses then beggars would ride.

Specs were leaked a long time ago now, so what's the point of all this idle speculation? Weren't you the one who noted that the 21st is just days away and we should just wait and see? ;)

Isn't it plausible that they went with this kind of flexible first design precisely to allow such a last-minute "reaction"?
No, of course not. It would make their console twice as expensive, twice as complicated, and much, much harder to program. There's guaranteed to be no hardware multi-processor support (bus snooping, memory coherency, that sort of thing) in the Durango APU; it's a low-cost chip, not intended for multiprocessing applications, so it would make programming it a real bitch. Also, as you can see from any PC review, SLI'd GPUs are far from linear scaling in almost all situations, and you need to double up on both textures and screen buffers in RAM for each GPU, wasting hardware resources, and so on.

It would cost twice as much, draw twice as much power, and require twice the cooling; it's a really terrible idea.

Future revisions could then shrink components and cut unused features to save cost.
Bhahah! Yeah, and Santa Claus lives at the North Pole. Any revision you care to make to a bastardized setup like that will be far easier to accomplish with just a single APU in the box.
 
It's still an APU; just because they use x86 doesn't mean they can change whatever they feel like quickly. A change in CPU would still require a complete redesign of the system, which would take upwards of a year.

That's assuming it remains an APU. The fact that it is entirely x86- and AMD-GPU-based means that if push came to shove, MS could literally pull a mini-ATX mobo, CPU, and GPU off the shelf and put them in a box.

Not ideal, I know, but the point still stands: the hardware architecture chosen allows ultimate flexibility for last-minute changes, should MS decide it necessary.
 
That's assuming it remains an APU. The fact that it is entirely x86- and AMD-GPU-based means that if push came to shove, MS could literally pull a mini-ATX mobo, CPU, and GPU off the shelf and put them in a box.

Not ideal, I know, but the point still stands: the hardware architecture chosen allows ultimate flexibility for last-minute changes, should MS decide it necessary.

I have yet to see a console that uses a literal off-the-shelf piece of kit outside of devkits; there's probably a very good reason for that. Also, scrapping the APU would not be possible at this point: they should already have manufacturing contracts and deals in place to start spinning the thing up, and you cannot just drop those. Dropping the APU would cost them tens of millions of dollars at the least, if not upwards of $100 million (i.e., a large number; these figures are straight from my arse).
 
Doing that would mean throwing away nearly all the work done so far on Durango games, meaning you won't be launching this year, as Durango is NOT a PC-compatible computer. It's actually a rather custom jobbie, far more custom than the original Xbox, which was also based on x86 tech.

There's no point in doing a quick-and-dirty off-the-shelf component change to hardware if it means resetting the counter on software development - you've quite literally gained nothing.
 
$100M is pocket change to MS, to be fair. I mean, really: if they think it's strategically important enough, $100M is nothing in the scheme of a 7-year life cycle. If the positive benefit of the change is strong, it will make multiples of that back. (E.g., I bet the move from 256 to 512 MB of RAM last gen ended up gaining them a lot of dollars, not losing them, since it allowed the 360 to compete for 8 years instead of 4-5, and it allowed the hardware price to remain high since the hardware remained competitively premium compared to the competition.)

Of course, I doubt they're changing the specs (besides clocks/RAM), except maybe, at most, somehow adding CUs (and of course they would have had to start that at least a few months ago, since our info is a few months old).
 
...And if wishes were horses then beggars would ride.

So you think a memory controller that can use both GDDR5 and DDR3 doesn't exist, or that it would double the cost of the whole console, or something like that?

To me, including this kind of memory controller (the Kaveri APU's memory controller, for example) in the very first version of the console would be a very smart idea.
 
So you think a memory controller that can use both GDDR5 and DDR3 doesn't exist, or that it would double the cost of the whole console, or something like that?

To me, including this kind of memory controller (the Kaveri APU's memory controller, for example) in the very first version of the console would be a very smart idea.

It wouldn't make much of a difference. GDDR5 and DDR3 have very different characteristics in some respects (heat, voltage, etc.), and you wouldn't want to have to design with the worst case of both in mind when you might not even use one of them.

And you'd be spending money on something you might end up never using, which is not a wise move when you're selling millions of units. On top of that, there wouldn't be much reason to have GDDR5 in there if they went with eSRAM as well; the GDDR5 could well end up being faster than the eSRAM, making it rather redundant.
 
Can someone explain to me how a developer can effectively use those 5 GB (which are available to the game, according to rumours)?

I'm asking because with a bandwidth of 68 GB/s, you can only touch about 2.27 GB per frame in a 30 FPS game. Of course you don't need all 5 GB of data every frame, but textures, objects, etc. have to be redrawn every frame, and in the case of Killzone Shadow Fall they take up almost 2 GB.
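For anyone wanting to check the math, here is the back-of-the-envelope calculation behind that figure (the 68 GB/s number is from the rumors, not a confirmed spec):

```python
# How much data can a 68 GB/s bus move within a single frame at 30 fps?
# Figures are the rumored ones from this thread, not confirmed specs.

BANDWIDTH_GB_S = 68.0   # rumored main-memory bandwidth
FPS = 30                # target frame rate

gb_per_frame = BANDWIDTH_GB_S / FPS
print(f"~{gb_per_frame:.2f} GB touchable per frame")  # ~2.27 GB
```

So streaming the full 5 GB every frame is impossible; most of that memory has to hold data that is resident across many frames.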

BTW, i found another pastebin entry:

http://pastebin.com/sckq0WNF

Probably fake, but the one thing that's interesting is the alleged power consumption.
Could 125 W be enough for the specs as shown by VGLeaks?
 
Since I don't know such details: does anyone know what kind of redesign is needed to switch to GDDR5? If the controller is the same and the 256-bit data bus is the same, what is the expensive change in the motherboard?

And again, could they upclock past 1 GHz using a Peltier cell?
AMD has a patent [US patent number 6,800,933] for an on-chip Peltier system (could that be a good anti-RROD measure?):

Various embodiments of a semiconductor-on-insulator substrate incorporating a Peltier effect heat transfer device and methods of fabricating the same are provided. In one aspect, a circuit device is provided that includes an insulating substrate, a semiconductor structure positioned on the insulating substrate and a Peltier effect heat transfer device coupled to the insulating substrate to transfer heat between the semiconductor structure and the insulating substrate.

It is possible to air cool a TEC (Peltier) powered by a 12V molex connector, using a copper-based cooler and a high-CFM fan, to the degree that CPU temperatures are healthy enough to be considered normal air-cooled temperatures (read 45-60 degrees Celsius) while keeping the TEC itself healthy. If a TEC's hot side isn't cooled enough, the cold side doesn't become cold enough.
AMD's possible idea is to mount the TEC in the CPU package and power it from the Vcc high-plane and GND, avoiding any extra wiring for "back of newspaper" so-called system builders. Couple this with a PIB cooler that is good enough to remove the heat from the hot side, and they have a cooler.

just speculating how the cooling system could work
 
The wording seems to imply the need for an SOI substrate, which hasn't been indicated yet for Durango.
A Peltier would be very inefficient from an energy standpoint, and I wouldn't count on it bringing a core 1 GHz over its design range.

Any expected power efficiency and savings on the heatsink and fan design would be gone, and then made significantly worse on top of whatever manufacturing problems it would add.
 
This is how MS/AMD/IBM would double up APUs and allow for extensibility in an HSA future: http://semiaccurate.com/forums/showpost.php?p=183058&postcount=703

That's a high-level drawing with multiple APUs and an arrow labeled "HT".
The presence of a non-coherent memory component in each memory pool doesn't necessarily leave HSA a non-option, but it's something HSA by its design is supposed to operate without.

There's not much detail there to spell out where it's a good idea for Durango, and it presupposes things like coherent HyperTransport and (coherent?) PCIe, one or both of which may not be making an appearance.
 
That's a high-level drawing with multiple APUs and an arrow labeled "HT".
The presence of a non-coherent memory component in each memory pool doesn't necessarily leave HSA a non-option, but it's something HSA by its design is supposed to operate without.

There's not much detail there to spell out where it's a good idea for Durango, and it presupposes things like coherent HyperTransport and (coherent?) PCIe, one or both of which may not be making an appearance.

If you read on to the later posts, you will get the patent link and some good discussion. It's the next logical evolution of the APU (APD). It goes on to explain lots of terms found in the VGLeaks posts, and points out some of the intentional changes in them too.

Believe what you will; we'll find out more in six days' time, up until the //build/ conference.
 
Look at the schematics we have of Durango, duplicate it... and try to connect them together with a slow external bus.

The problem is that it's a clusterf^&* of bottlenecks. For that amount of silicon they would be better off with off the shelf CPU and GPU.

Last-generation HyperTransport link speed is equal to the move engine bandwidth in the VGLeaks specs. This was discussed back when the two-APU rumor first came up.
 
HT 3.1 can match the rumored DME bandwidth if it's running at full spec and with a so-far unused 32-bit link width.

edit: Or two 16-bit links to the same endpoint?
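As a sanity check on that claim, here is the link-bandwidth arithmetic, assuming HT 3.1's maximum 3.2 GHz link clock and the thread's rumored 25.6 GB/s DME figure (none of this is a confirmed Durango spec):

```python
# HyperTransport 3.1 peak per-direction bandwidth:
# link clock * 2 (double data rate) * link width in bytes.

HT31_CLOCK_GHZ = 3.2      # max HT 3.1 link clock
DDR = 2                   # two transfers per clock
FULL_WIDTH_BITS = 32      # full-width link (16-bit links are far more common)

full_link_gb_s = HT31_CLOCK_GHZ * DDR * FULL_WIDTH_BITS / 8
print(f"HT 3.1, one 32-bit link: {full_link_gb_s:.1f} GB/s per direction")  # 25.6

# Two 16-bit links to the same endpoint reach the same aggregate:
two_half_links_gb_s = 2 * (HT31_CLOCK_GHZ * DDR * 16 / 8)
print(two_half_links_gb_s == full_link_gb_s)  # True
```

Either arrangement lands on 25.6 GB/s per direction, matching the rumored DME bandwidth discussed above.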
 
Doing that would mean throwing away nearly all the work done so far on Durango games, meaning you won't be launching this year, as Durango is NOT a PC-compatible computer. It's actually a rather custom jobbie, far more custom than the original Xbox, which was also based on x86 tech.

There's no point in doing a quick-and-dirty off-the-shelf component change to hardware if it means resetting the counter on software development - you've quite literally gained nothing.

Not true.

From what I understand, developers up to this point have been told to code to the API layer (not that they need arm twisting).

This means the work put into development is quite flexible for what hardware MS intends to launch with.

Note: None of my conjecture above WRT hardware is any indication of what I expect from MS at this point.

Come the 21st, I expect we will see something nearly identical to the Durango we have seen in spec. However, that does not mean it is impossible for MS to either:
A) change their mind and source off the shelf components (they can be combined and shrunk later into a single APU)
or
B) have had two concurrent designs this whole time and decide to go with the more powerful of the two.

This decision, I believe, will have been guided not by some glorified fanboy wishlist inside MS to have the most powerful console, but by their research into what will make them the most profit in the end.

Saving $50 per console can add up:
50 x 50,000,000 = $2.5B
But this formula is not in a vacuum...

At this point, MS makes this annually in xbl fees.

If the rumored spec = $2.5B savings at the expense of projected lost revenue greater than $2.5B, then prepare to be surprised come the 21st.
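That trade-off can be spelled out in a few lines (all figures here are the post's own hypotheticals, including the illustrative lost-revenue number; nothing is a real MS figure):

```python
# The post's break-even logic: take the cheaper spec only if the
# component savings exceed the projected revenue lost to weaker hardware.

savings_per_console = 50            # $ saved per unit, hypothetical
consoles_sold = 50_000_000          # lifetime units, hypothetical

total_savings = savings_per_console * consoles_sold
print(f"${total_savings / 1e9:.1f}B total savings")  # $2.5B

projected_lost_revenue = 3.0e9      # purely illustrative value
keep_cheaper_spec = total_savings > projected_lost_revenue
print(keep_cheaper_spec)  # False -> "prepare to be surprised come the 21st"
```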
 