Xbox One (Durango) Technical hardware investigation

Status
Not open for further replies.
Surely MS has been in detailed discussions with all major third parties about the rendering hardware in Durango, ensuring their engines are ready to leverage what's under the hood? Heck, I'm sure a lot of this design was born out of what devs said their engines *would* look like in three years (from ~2009).

In the beginning, it would have been more along the lines of: this is what we're thinking of doing, based on analysis of what games and developers are doing, what works well, and what isn't working well. Do you have any suggestions? So game conceptualization will likely occur here, but no real "development" will be done.

Later on, as hardware and focus were solidified, it'd be more along the lines of: OK, this is roughly what it's going to be like, this is what things are intended to do, and this is roughly the predicted speed (in PC hardware terms) of how fast the hardware will be able to do it. Preliminary development is likely to be done here, with non-graphical tasks receiving the lion's share of the effort. Art assets are likely also started, if they weren't started previously. Ideas on how to handle the hardware, as well as potential ways to exploit it, will be investigated.

Alpha, beta, final development kits. Heavy work done on the rendering engine. Optimization and code cleanup as final kits are received. Bug fixing, game testing, etc.

It may not actually work like that. But it's sort of how I imagine things working.

Regards,
SB
 
It is easy to say that now about the PS3 with hindsight. But seven years ago, the proponents of the PS3's design were saying a lot of the same things about how much more efficient it could be per core because of the many DMA fetches in flight, the two pools of RAM, etc. And it's not that they were completely wrong, but the theoretical potential proved difficult to reach even with a lot of additional work. Sure, there were other shortcomings to blame, but it is only easy to point at those now because of hindsight. In the end, the general sentiment from devs was: give us something more straightforward. And this was MS's position as well, until now it seems.

Also, we don't even know about the cost-effectiveness of these designs yet. This is only a win for MS if it comes out significantly cheaper.

I get what you're saying, but at worst it becomes a "downclocked" Orbis to developers. It's not Mandarin vs. Cantonese like PS360.
 

That's true, but for Durango the lowest performance plateau is pretty well known. There aren't going to be any surprises with regard to the 8 x86 cores (unlike the 1 PPE and 7 SPEs), where all the cores can handle the same workloads. The base memory has well-understood performance characteristics (unlike the XDR used in the PS3, though that wasn't nearly as bad as coming to terms with Cell), putting a rather easy lower bound on unoptimized performance. The only similarity to the PS3 is that the base GPU is fairly well known from the PC space.

So you have a well-understood lower bound of performance that you didn't have with the PS3. In this way it's actually far more similar to the X360 than to the PS3. Where the PS3 had multiple dissimilar cores with lower functionality but greater speed at what they do, the X360 had 3 CPU cores (6 threads), all of which could do the same thing (just like Orbis). The memory, while not standard DDR, was well understood from the graphics space.

So, really, we have a fairly well-understood lower bound, and it can only get better if the extra bits that MS has put in work out. The question is how well they will be able to help in various workloads. The answer is likely: better in some and not as good in others.

Where the PS3 was all promise with gaudy numbers, Durango is mostly standard, with various bits added to increase the utilization of what is already well understood. Developers know that when X happens, resource Y becomes underutilized, and so on. Durango, from what is rumored, attempts to address some of those cases by trying not to let X stop resource Y from doing work.

Regards,
SB
 
On topic: I want to see the hardware planes more than anything; they seem to be the last bit of technically interesting hardware, imo.
Really? What's the "audio block" then, chopped liver? :) Man, you graphics oriented guys are so picky. Play through a AAA game without sound sometime and see how much less involving it is. *humph*
 

Heh, if this audio block brings back things like environmental 3D mapping of sound effects (along the lines of Creative Labs' EAX, but hopefully even more advanced), then I would be an extremely ecstatic and happy panda. Even more so if developers take advantage of it to do real 3D positional audio with convincing environmental effects.

We haven't had anything like that in a decade or so on the PC now (and never on consoles). It's one of the things that has greatly regressed in PC gaming. And such a shame, as realistic sound interaction with the environment can be hugely impressive and immersive.
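For what it's worth, the core of that kind of positional audio is cheap math. A minimal sketch (the function name and the simple pan law are illustrative only, not any real console or EAX API; real engines add occlusion, reverb, and HRTFs on top of this):

```python
import math

def position_source(listener_pos, listener_fwd, source_pos, ref_dist=1.0):
    """Toy 3D positional audio: inverse-distance gain plus an
    equal-power stereo pan derived from the source's azimuth
    relative to the listener's forward vector (XZ plane)."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    dz = source_pos[2] - listener_pos[2]
    dist = max(math.sqrt(dx * dx + dy * dy + dz * dz), ref_dist)
    gain = ref_dist / dist  # inverse-distance rolloff, clamped at ref_dist

    # azimuth of the source relative to where the listener is facing
    az = math.atan2(dx, dz) - math.atan2(listener_fwd[0], listener_fwd[2])
    pan = math.sin(az)  # -1 = hard left, +1 = hard right

    # equal-power pan law keeps perceived loudness constant across the arc
    left = gain * math.cos((pan + 1) * math.pi / 4)
    right = gain * math.sin((pan + 1) * math.pi / 4)
    return left, right
```

A source straight ahead comes out equally in both channels at about 0.707 gain each; a source directly to the right pans fully to the right channel.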

If the audio block can bring that back, I would buy a Durango in a heartbeat even if it was only half the speed of Orbis.

Please say it's true. ;) I'd given up on advancements in gaming sound, having consigned myself to the sad state of audio in games for the last decade or so. This would be a welcome ray of hope that there would be some focus on this, as there was in the late '90s and early '00s.

But, probably shouldn't get my hopes up too high. :p

Regards,
SB
 
I really can't say. :)

Siri's just the buzzword. Kinect was demonstrating voice control before Siri ever debuted.
And Microsoft TellMe was doing it on Windows Phones and Ford Sync even before that. I played with a TellMe implementation on the Xbox during one of the betas, doing things like "Find me movies starring Tom Cruise and Val Kilmer" (Top Gun) and "Give me action movies from the 80's starring Kelly McGillis" (Top Gun). Apparently, no matter what I asked for, the answer was always Top Gun. (Would have been Airwolf, but that isn't available for streaming...)

Apple has an amazing ability to introduce a product that other folks have already done, but do it in a way that all their customers think they invented it.
 
We currently have no reason to think core efficiency will be that high. All we know is that bandwidth, as it exists in that design, is fairly low, and they added some workarounds to try to tackle the problem. Whether or not it is an actually efficient design, we probably won't know for at least a year or two after launch, once devs come to grips with it. I predict launch games will struggle, though, similar to what the PS3 went through.

Wouldn't much of this functionality be handled transparently by the Durango OS? Presumably the game coders will be writing to APIs and the machine will sort out what to do behind the scenes, rather than coders having to manually figure out which set of pixels to shuffle from DDR3 to the fast RAM all the time. If components like the move engines really are just there to get data quickly from DDR3 to fast RAM, then I figure it would be better to just let the machine handle and schedule all that.

On the PS3 side, much of the complication was that you had to manage everything manually yourself, with little help on the tool side; hence it took years to get performance out of that box. But I don't think that's the right way to proceed now, especially if Microsoft wants forward compatibility, and more so if they want impressive performance on launch day.

I'm probably alone on this, but coding "direct to the iron" is, to me, not only becoming obsolete but a hindrance to moving forward. It forces millions of lines of code to basically be thrown away every few years, and as machines get more complex there are bound to be situations where the machine's OS can schedule things better than game coders can anyway. From the current descriptions of the move engines, it just seems like something better left to the OS to manage.
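As a toy illustration of why runtime-scheduled staging could pay off, here's a hypothetical two-stage cost model (the tile counts and per-stage timings are made up, not derived from any leak) showing how a copy engine that overlaps transfers with compute hides most of the DDR3-to-fast-RAM traffic:

```python
def frame_time(tiles, t_copy, t_compute, overlapped=True):
    """Toy cost model for staging texture tiles from DDR3 into fast RAM.
    Serial: each tile's compute waits on its own copy to finish.
    Overlapped (double-buffered, as a runtime-scheduled move engine
    could do): the copy of tile i+1 hides behind the compute of tile i."""
    if not overlapped:
        return tiles * (t_copy + t_compute)
    # two-stage pipeline: the first copy and the last compute are exposed,
    # and in between the slower of the two stages dominates
    return t_copy + (tiles - 1) * max(t_copy, t_compute) + t_compute

# made-up numbers: 8 tiles, 2 time units per copy, 5 per compute
print(frame_time(8, 2, 5, overlapped=False))  # 56 units, serial
print(frame_time(8, 2, 5))                    # 42 units, copies hidden
```

The point of the model: as long as the copy stage is faster than the compute it feeds, scheduling it behind the scenes makes the DMA cost nearly vanish, which is exactly the case where the OS can do it as well as hand-tuned code.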
 
Stupid question, but I've just now been hearing some chatter from GAF-goers who think Durango is going to have DDR3-2133 or something?

The bandwidth figures discussed so far: what speed was the memory when those calculations were made? Or I guess that doesn't matter if the leaks gave a static figure.

I kind of find it hard to believe MS would shell out for that much fast DDR3 RAM. It isn't exactly cheap, if the retail prices I'm looking at are to be believed.
 
Well, it's not like it's cheaper to go to a 512-bit bus, or to switch to GDDR5. And they're already bandwidth-starved in this design, so downgrading the DDR3 is hardly desirable either.
 
What I'm asking then, I guess, is does it matter?

Even at DDR3-2133, is Durango still in the same bandwidth situation?
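For reference, peak DDR bandwidth is simple arithmetic: transfers per second times bus width in bytes. The 256-bit bus below is the rumored Durango figure from the leaks, not a confirmed spec:

```python
def ddr_bandwidth_gbs(mt_per_s, bus_bits):
    """Peak DRAM bandwidth in GB/s: megatransfers/s times bus width in bytes."""
    return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

# rumored Durango config: DDR3-2133 on a 256-bit bus
print(ddr_bandwidth_gbs(2133, 256))  # ~68.3 GB/s, matching the leaked figure
print(ddr_bandwidth_gbs(1600, 256))  # ~51.2 GB/s had it been DDR3-1600
```

So the memory speed and the leaked static bandwidth number are two views of the same thing; if the leak's GB/s figure is right, the 2133 data rate follows from the bus width, and vice versa.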
 
The raw texel rate is almost five times that of the 360. Anisotropic filtering algorithms have also evolved enormously since 2005.

The system seems optimized for megamesh/megatexture-type rendering, given the hardware-assisted decompression features together with the GPU's use of virtual address translation (which might remove the need for indirection in the texture lookup, cutting the cost of anisotropic filtering).
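The "almost five times" figure checks out against the rumored unit counts. Durango's 48 TMUs at 800 MHz (12 CUs with 4 texture units each) are assumptions from the leaks; Xenos's 16 TMUs at 500 MHz are known:

```python
def texel_rate_gtex(tmus, clock_mhz):
    """Peak bilinear texel rate in Gtexels/s: texture units times clock."""
    return tmus * clock_mhz / 1000

xenos = texel_rate_gtex(16, 500)    # Xbox 360 Xenos: 8 Gtex/s
durango = texel_rate_gtex(48, 800)  # rumored Durango: 38.4 Gtex/s
print(durango / xenos)              # 4.8, i.e. "almost five times"
```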

Cheers


How will aquatic games fare? Rare is rumored to be making a water-based racing game, and the next Assassin's Creed is built around pirate ships.
 
Which is more important for people to perceive graphical improvement: shaders or textures? If you showed people two screenshots, one from a game with heavy textures and medium shaders and one with medium textures and heavy shaders, which would they perceive as better?

Also, moving forward with Intel integrated GPUs, would developers favour improving textures over shaders, or shaders over textures, to better cater to this important audience?
 

Every pixel is drawn with a shader, period. I'm not sure how you can separate the two anymore.
 
Gubbi said:
By the way, are the jpeg decoding numbers in the VGLeaks article real, or just examples? Because decoding just 4M pixels (two 1920x1080 jpegs) per frame is exceptionally slow, IMHO.
It's quoted as peak performance, and if we believe that, I agree it sounds slow: just barely faster than the PS2 JPEG decoder (2.5 Mpix/frame, i.e. 150 Mpix/s), and that was 13 years ago, targeted at 6x lower resolution. Consider also that the quoted LZ decoder speeds aren't great either (IIRC a single SPE would outperform that handily); I'd suspect these units cost next to nothing to have in there.

Though on the flip side, the PS2 decoder was actually overkill, and if you're targeting virtual texturing with this, the only thing JPEG is any good for is color maps. So for a single texture lookup this throughput could technically sustain the desired 1:1 pixel/texel ratio (assuming the unit supports the granularity we'd want, possibly down to 1-4 macroblocks, and of course almost no overdraw).
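Plugging in the numbers from the posts above (the 30/60 fps divisions are assumptions about target frame rate, not leaked figures):

```python
def mpix_per_frame(peak_mpix_per_s, fps):
    """Decoder budget in megapixels per rendered frame."""
    return peak_mpix_per_s / fps

# PS2's decoder, from the figures above: 150 Mpix/s at 60 fps
print(mpix_per_frame(150, 60))     # 2.5 Mpix/frame, matching the quote

# two 1080p JPEGs per frame (the leak's example) implies, at an assumed 30 fps:
print(2 * 1920 * 1080 * 30 / 1e6)  # ~124 Mpix/s peak

# and 1:1 texel:pixel color-map decode at 1080p/60 needs about the same:
print(1920 * 1080 * 60 / 1e6)      # ~124 Mpix/s
```

That near-coincidence is the point being made: the throughput is only just enough for color maps at 1:1, with essentially nothing to spare for overdraw.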

I'd question whether it's even relevant, though. DXT+LZ gives data rates as low as 2 bits/texel and acceptable quality with various data types, whereas JPEG teeters between acceptable and horrible once you go into the 1-2 bits/texel range, even for color maps.
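To put those bits-per-texel rates in byte terms, a hypothetical helper for a single 2048x2048 mip level:

```python
def texture_mb(width, height, bits_per_texel):
    """Storage for a single mip level at a given compression rate."""
    return width * height * bits_per_texel / 8 / 2**20

print(texture_mb(2048, 2048, 32))  # raw RGBA8: 16.0 MB
print(texture_mb(2048, 2048, 4))   # DXT1: 2.0 MB
print(texture_mb(2048, 2048, 2))   # DXT1 + LZ at ~2 bits/texel: 1.0 MB
print(texture_mb(2048, 2048, 1))   # ~1 bit/texel, where quality collapses: 0.5 MB
```

The takeaway from the argument above: JPEG only wins meaningful space over DXT+LZ in the sub-1-bit regime, which is exactly where its quality falls apart.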

Shifty Geezer said:
No decompression of JPEG into DXTC means you're stuck with bitmaps in ESRAM to draw.
Given the quality loss from recompression into DXT, I doubt it'd be worth it. And decompression on demand means you're only interested in keeping decoded blocks around for a short time anyway.

Grall said:
Well, it's still much faster than loading the raw data straight from optical disc, or even the hard drive.
That would depend on how lossy you're willing to go: you'd need to go to 1 bit/texel or lower to get significantly smaller data than compressed DXT, and quality starts going to hell there, even if you're using newer algorithms than JPEG.
 
I really can't say. :)

And Microsoft TellMe was doing it on Windows Phones and Ford Sync even before that. I played with a TellMe implementation on the XBox during one of the Betas, doing things like "Find me movies starring Tom Cruise and Val Kilmer" (Top Gun) and "Give me action movies from the 80's starring Kelly McGillis" (Top Gun). Apparently, no matter what I asked for, the answer is always Top Gun. (Would have been Airwolf, but that isn't available for streaming...)

Apple has an amazing ability to introduce a product that other folks have already done, but do it in a way that all their customers think they invented it.

I wouldn't attribute that to Apple, BK. I'd just say that's typical fanboy mentality right there.

I see the exact same thing on forums with Sony/Nintendo/MS fans. E.g. I read many Nintendo fanboys speak as if Nintendo invented motion controls with the Wii. Pretty depressing really.
 
I wonder if the Jaguar cores and SRAM are just an evolution of what MS was working on with IBM. The POWER7's 32 MB L3 is accessed through a memory controller used to maintain coherency. Intel Haswell has 32 MB of what it calls LLC (last-level cache) and implements transactional memory in hardware. Transactional memory (TM) is supposed to simplify parallel processing and allow better scalability on multi-core chips. IBM has a chip (z12) with hardware-based TM and a 48 MB L3 cache.

A large amount of cache seems necessary for transactional memory. Intel's Haswell LLC is shared by both the CPU cores and the GPU cores. It's probably the only hardware we have that resembles Durango's setup.

http://www.xbitlabs.com/news/cpu/di...Emerge_2_4_Cores_Graphics_DDR3_Low_Power.html

Intel Haswell microprocessors for mainstream desktops and laptops will be structurally similar to existing Core i-series "Sandy Bridge" and "Ivy Bridge" chips and will continue to have two or four cores with Hyper-Threading technology along with graphics adapter that shares last level cache (LLC) with processing cores and works with memory controller via system agent, according to a slide (which resembles those from Intel) published by ChipHell web-site.
 
It's possible a Haswell-EX server chip might have that much last-level cache, but nothing has been suggested that consumer chips would.
Intel's restricted transactional memory extensions are initially implemented to work only within the L1 cache. The LLC is currently the cache the system checks for coherence, and putting speculative values there would expose transactions in progress, breaking TM.

TM really needs changes in the cache and CPU pipelines to function, and nothing disclosed for Jaguar indicates it has the necessary changes; nor is it clear that the memory in Durango is being used as a last-level cache.
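For readers unfamiliar with TM, the validate-and-commit-or-retry semantics can be sketched in software. This is an optimistic-concurrency toy for illustration only, not Haswell's RTM, which tracks conflicting reads and writes per cache line in the L1 and aborts the transaction in hardware:

```python
import threading

class VersionedCell:
    """Toy illustration of transactional semantics: read a version,
    do speculative work, commit only if nothing changed underneath you,
    otherwise throw the work away and retry the whole transaction."""

    def __init__(self, value=0):
        self._lock = threading.Lock()  # guards only the commit path
        self._version = 0
        self._value = value

    def transact(self, fn):
        while True:
            v, snapshot = self._version, self._value  # optimistic read
            new_value = fn(snapshot)                  # speculative work
            with self._lock:                          # validate + commit
                if self._version == v:
                    self._value = new_value
                    self._version = v + 1
                    return new_value
            # version moved: a conflicting commit happened, so retry
```

The appeal is that uncontended transactions never serialize on the lock-free fast path of real hardware TM, while conflicts are detected and resolved automatically, which is the scalability argument made above.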
 