News & Rumors: Xbox One (codename Durango)

I've always kept it in the realm of possibility because of the Yukon Roadmap, and also because of the AMD/IBM team that MS assembled. I also consider it because MS took direct action to see to Yukon's removal from various sites. I believe that was a significant move on their part.

I've repeated this more than a few times, so I hope people will forgive me for sounding like a broken record. lol

I honestly don't have any specific requirements for how it would be configured: APU+APU, APU+GPU, or even something reflecting the AMD patent for APU+APD. I'm naturally for whatever works best in terms of graphics and performance.

However it works out, or DOESN'T work out, I look forward to hearing the unvarnished truth. I want to know how they did it, or didn't do it. It's all very interesting to me how this hardware puzzle will finally come together.


One thing that has bothered me quite a bit regarding the assumption that "MS doesn't care about spec/performance"...

If this were true, why bother hiring all of these top engineers to design the APU? Why bother with ESRAM? Why bother with the move engines, etc.?

It seems to me that if they truly value performance so little, why invest so much into the design? Why not simply go with a Trinity variant?

The engineering effort involved would be nil, the time to market would be whenever MS saw fit, and as a side benefit, MS would not be at the mercy of the manufacturer for all chips, as some could be resold as lower-spec APUs...

MS' actions to me indicate a company that will make very measured moves to ensure success (maximum profit). However, some moves they've made lately do indicate that they can perhaps be a bit knee-jerk. Windows 8 is an example where they seemingly did not do enough research, or simply did not put that research into action for the final product. Regardless, it does show the company to be fallible and potentially reactionary to market conditions.

I'm excited to see what they deem to be the answer for nextgen gaming.
 
Does anyone know an answer to that?

What I expect from the big memory pool is more dynamic environments while keeping changes to the environment persistent through a whole level at least. Something which should allow even better experiences with games like Far Cry 3.
 
You need great engineers to design even low-end parts.


Yes, but if they don't care about performance, why invest so much into R&D for something which is readily available (an x86 dx11 apu) without the expense?

As I said, Trinity is on the shelf right now. Been there for almost a year, about to be replaced.

No R&D involved.

If it's all about max profits upfront and performance be damned, why didn't they go with this option?
 
Yes, but if they don't care about performance, why invest so much into R&D for something which is readily available (an x86 dx11 apu) without the expense?

As I said, Trinity is on the shelf right now. Been there for almost a year, about to be replaced.

No R&D involved.

If it's all about max profits upfront and performance be damned, why didn't they go with this option?

Because you maximise profits by making the best APU you can, using the fewest parts, with the best layout, best thermals, etc.

So for the best profits you want the best engineers even for a low end design.
 
Because you maximise profits by making the best APU you can, using the fewest parts, with the best layout, best thermals, etc.

So for the best profits you want the best engineers even for a low end design.

I think that this R&D team is probably part of a go-forward team for consoles, Surface, WP, and maybe even MS-branded PCs. For example, what's stopping them from building an iMac-type Windows PC with the Durango SoC in it?
 
Because you maximise profits by making the best APU you can, using the fewest parts, with the best layout, best thermals, etc.

So for the best profits you want the best engineers even for a low end design.

For the best profits, there already exists a design on the shelf. And a new one will replace it shortly. According to the rumored spec, they didn't choose this existing APU.

The point is, they do care about performance. Their actions clearly indicate that. To what extent is to be revealed shortly.
 
For the best profits, there already exists a design on the shelf.
Which one?
One of the APUs on 32nm SOI, a regular Bobcat or Jaguar, or the HSA-enabled Kaveri that was a big question mark as to whether it would show up in 2013 or 2014?

It's certainly possible and expected of engineering to try and optimize for multiple constraints and design goals that no existing product can meet well.
 
For the best profits, there already exists a design on the shelf. And a new one will replace it shortly. According to the rumored spec, they didn't choose this existing APU.

The point is, they do care about performance. Their actions clearly indicate that. To what extent is to be revealed shortly.


Well, the off-the-shelf models likely didn't provide enough performance.

The top current AMD APUs offer 384 SPs, whereas the Durango GPU rumors point to 768. Twice as many.
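Just to put that in rough ALU-throughput terms (treating both GPU clocks as roughly 800 MHz, which is my assumption here, not a confirmed figure):

```
# Back-of-the-envelope peak throughput: 2 FLOPs per shader ALU per clock (one FMA).
def gflops(shader_count, clock_mhz):
    return shader_count * 2 * clock_mhz / 1000.0

trinity_top = gflops(384, 800)    # top Trinity APU, ~800 MHz shader clock (assumed)
durango_rumor = gflops(768, 800)  # rumored Durango GPU: 768 SPs at ~800 MHz (assumed)

print(f"Trinity-class APU: ~{trinity_top:.0f} GFLOPS")   # ~614 GFLOPS
print(f"Rumored Durango:   ~{durango_rumor:.0f} GFLOPS") # ~1229 GFLOPS
```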

Then on some level I guess they liked what Jaguar offered over Piledriver. Likely it was more gaming-oriented while being cheaper and using less power.

The ESRAM was engineered to allow the use of cheapo DDR3 instead of expensive GDDR5. And a lot of it.

And again, don't forget that if it hadn't been for the PS4's late upgrade, it would have held its own. 8GB of RAM was a fairly lofty goal.

There's really nothing unusual about the design imo.

I often harken back to the next-gen goals from Microsoft that bkilian mentioned; one of them was along the lines of "it needs to be powerful enough to show consumers a difference".

MS may have had a lot of goals including a low BOM, but all of them had to accommodate "powerful enough to entice consumers to upgrade" as a baseline. Notice that does not mean super powerful. It just means way better than 360.

So yes, they do care about performance. Otherwise they could have thrown a Tegra 4 in there and called it a day, if you want to go to the extreme, obviously.
 
Lots of parts going into these products aren't currently in production for consumer consumption. Apparently that isn't a huge barrier.
Which parts aren't in production or at least sampling? Except for the SoC, which is an expensive custom part (and secret), I don't see much else that wouldn't be available for procurement right now. Certainly not the memory, of which they need an average of one or two million chips per week, at a very good price. Sure, we all thought 8GB of GDDR5 was improbable, but still, those chips definitely exist; they were sampling in late 2012 or early 2013.

Micron does offer "Graphics DDR3" at 2133, which they call 1GHz class, but it is only available at 2Gb and 4Gb densities.

The best I found is the Micron 8Gb 1.35V part, which could probably be used at 1.5V, but it's not binned that way.
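For a rough idea of what an 8GB DDR3-2133 setup implies in chip count and bandwidth, here is a quick sketch; the 256-bit bus width is the rumored figure and an assumption on my part:

```
# Sketch of an 8GB DDR3-2133 configuration, assuming the rumored 256-bit interface.
def chips_needed(total_gb, density_gbit):
    return total_gb * 8 // density_gbit          # 8GB = 64Gbit split across chips

def ddr3_bandwidth_gb_s(transfer_rate_mt_s, bus_width_bits):
    return transfer_rate_mt_s * (bus_width_bits / 8) / 1000.0

for density in (2, 4, 8):                        # Gbit per chip
    print(f"{density}Gb chips: {chips_needed(8, density)} chips for 8GB")
print(f"DDR3-2133 on a 256-bit bus: ~{ddr3_bandwidth_gb_s(2133, 256):.0f} GB/s")
# -> 32 chips (2Gb), 16 chips (4Gb), 8 chips (8Gb); ~68 GB/s
```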
 
@Rangers & 3dilettante

Trinity obviously was given as an option to MS and presumably (going by the rumored spec) shot down.

I can only infer, based on whispers, that the executives (Money Over Performance, MOP™) did their research, their test groups found the disparity between a Pitcairn-class GPU and what Trinity offers to be too great, and those groups rejected Trinity-class performance in favor of Pitcairn. Either that, or Trinity's performance was found to be too similar to the XB360's in testing.

So then, among these test groups, I wonder how they felt regarding Pitcairn vs. Cape Verde...

I suppose we will find out in a week.


Regarding the ESRAM, I'm not 100% sure whether that decision was based on avoiding the expense of GDDR5, or on the desire to hit 8GB and the lack of a way to reach that quantity with GDDR5 (at the time).
 
I personally would not want to get stuck at the GPU level Trinity was set to: the last of the VLIW4 GPUs, built on a pricier process, with a CPU that would only make the next node transition harder.

I think gaming and consoles in general would have been better off if AMD hadn't lost a year or so in terms of getting its tech together. I'm not certain if the upcoming console generation has timed the advent of unified address spaces and better uncore and package integration too well, and it's a year or so too early to get the promised benefits of better QoS and preemptibility for graphics.
 
Which parts aren't in production or at least sampling? Except for the SoC, which is an expensive custom part (and secret), I don't see much else that wouldn't be available for procurement right now. Certainly not the memory, of which they need an average of one or two million chips per week, at a very good price. Sure, we all thought 8GB of GDDR5 was improbable, but still, those chips definitely exist; they were sampling in late 2012 or early 2013.

Micron does offer "Graphics DDR3" at 2133, which they call 1GHz class, but it is only available at 2Gb and 4Gb densities.

The best I found is the Micron 8Gb 1.35V part, which could probably be used at 1.5V, but it's not binned that way.

Do you believe RAM clocked at 2133 is fundamentally different from RAM clocked at 1866? The lack of an available consumer part is quite likely down to the lack of demand for such a part.
 
Oh, I wasn't talking about game performance.
I was just talking about usable data per frame, which is limited by the bandwidth of main RAM. (Of course, 2 GB per frame is still plenty for the GPU to use.)

The ESRAM is pretty useful for any operation that needs a lot of fast reads/writes of small data (that's also where the low latency should be very useful).
I can see some nice alpha particle effects. :)
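For what it's worth, that per-frame figure just falls out of dividing main-RAM bandwidth by the framerate; quick sketch, treating the rumored ~68 GB/s DDR3 bandwidth as a given (an assumption, not a confirmed spec):

```
# Where a "~2 GB per frame" ceiling comes from: main-memory bandwidth / framerate.
# The 68 GB/s figure is the rumored DDR3 bandwidth, assumed here, not confirmed.
main_ram_bandwidth_gb_s = 68.0

for fps in (30, 60):
    per_frame_gb = main_ram_bandwidth_gb_s / fps
    print(f"{fps} fps: ~{per_frame_gb:.2f} GB of unique data touchable per frame")
# -> ~2.27 GB at 30 fps, ~1.13 GB at 60 fps (an upper bound; real usage is lower)
```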
And the usable data per frame has very little to do with the total texture memory budget. Take any game you'd care to play: in how many of them can you see the entire level at full texture resolution all the time? Say Skyrim: just because you can't see the scenery behind you doesn't mean its textures are not in RAM. Gears of War uses streaming textures now, but you could preload those textures with more RAM, so there's no pop-in. Right there, you have a use case for 2x the texture memory beyond what you can address in a frame. Megatexturing would benefit greatly from more RAM, since you can have more tiles in memory and manage your tiles more efficiently.

And that's just textures.
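To make the megatexture point concrete, here's a toy sketch of an LRU tile cache (purely illustrative, not how any real engine does it): with the same access pattern, the larger resident pool simply misses less, meaning fewer streaming fetches and less visible pop-in.

```
# Toy LRU tile cache: more RAM = more resident tiles = fewer streaming fetches.
from collections import OrderedDict

class TileCache:
    def __init__(self, capacity_tiles):
        self.capacity = capacity_tiles
        self.tiles = OrderedDict()   # tile_id -> tile data
        self.misses = 0              # each miss = a streaming fetch (potential pop-in)

    def request(self, tile_id):
        if tile_id in self.tiles:
            self.tiles.move_to_end(tile_id)      # recently used, keep resident
        else:
            self.misses += 1
            if len(self.tiles) >= self.capacity:
                self.tiles.popitem(last=False)   # evict the least-recently-used tile
            self.tiles[tile_id] = object()       # stand-in for real texture data

# Same access pattern, two cache sizes: only the bigger pool can hold the working set.
pattern = [i % 300 for i in range(3000)]
for capacity in (128, 512):
    cache = TileCache(capacity)
    for t in pattern:
        cache.request(t)
    print(f"{capacity} resident tiles -> {cache.misses} streaming fetches")
```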
 
And the usable data per frame has very little to do with the total texture memory budget. Take any game you'd care to play: in how many of them can you see the entire level at full texture resolution all the time? Say Skyrim: just because you can't see the scenery behind you doesn't mean its textures are not in RAM. Gears of War uses streaming textures now, but you could preload those textures with more RAM, so there's no pop-in. Right there, you have a use case for 2x the texture memory beyond what you can address in a frame. Megatexturing would benefit greatly from more RAM, since you can have more tiles in memory and manage your tiles more efficiently.

And that's just textures.

You are right, of course, but I was just talking about the amount of data a developer can use per frame. :smile:
Caching data is of course useful.
 
Yes, but if they don't care about performance, why invest so much into R&D for something which is readily available (an x86 dx11 apu) without the expense?

As I said, Trinity is on the shelf right now. Been there for almost a year, about to be replaced.

No R&D involved.

If it's all about max profits upfront and performance be damned, why didn't they go with this option?

How do we know how much R&D went into the APU vs. Kinect, the controllers, that SHAPE audio processor, software services, etc.? I'm sure the investment was significant, but it's all relative, and no matter what, you're going to hire the best engineers you can get to do the job. They can't risk errors, and any kind of optimization and customization has to be rock solid.
 