Xbox One (Durango) Technical hardware investigation

I believe that we'll be laughing at "Ryse quality graphics" by the end of this generation. "Hur hur," we'll say, while sipping our champagne, "remember when we thought Ryse looked awesome?".

I've had almost that exact conversation, minus the champagne, about the quality of "Oblivion".

In my opinion, we'll see more sub-1080p games on the X1, simply because it's easier to fit the render targets into the ESRAM if they're smaller. But I'd point at the ROPs as culprits too.
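To put rough numbers on that, here's a back-of-the-envelope sketch in Python. The G-buffer layout (four RGBA8 colour targets plus a 32-bit depth buffer) is an assumed example for illustration, not any particular game's actual setup:

```python
# Rough render-target footprint vs. the 32 MB of ESRAM.
# The layout below (4 color targets + depth) is an assumption.

ESRAM_MB = 32
BYTES_PER_PIXEL = 4   # RGBA8 or a 32-bit depth format
NUM_TARGETS = 5       # 4 color targets + 1 depth target (assumed)

def gbuffer_mb(width, height):
    """Total bytes for all targets at a given resolution, in MB."""
    return width * height * BYTES_PER_PIXEL * NUM_TARGETS / (1024 ** 2)

for w, h in [(1920, 1080), (1600, 900), (1280, 720)]:
    size = gbuffer_mb(w, h)
    verdict = "fits" if size <= ESRAM_MB else "does NOT fit"
    print(f"{w}x{h}: {size:5.1f} MB -> {verdict}")
```

Under those assumptions 1080p comes out at ~39.6 MB and overflows the 32 MB, while 900p (~27.5 MB) and 720p (~17.6 MB) fit, which is exactly the incentive to shrink the targets.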
 
Are you sure it's 1080p 100% of the time? I've seen a couple of in-game shots and they didn't look 1080p, even without pixel counting.

So what does 1080p look like? Do not answer that question. You're now talking purely on the subjective level with quality of pixels which is entirely outside the realm of this technical thread. Let's remain on topic.
 
So what does 1080p look like? Do not answer that question. You're now talking purely on the subjective level with quality of pixels which is entirely outside the realm of this technical thread. Let's remain on topic.

He is not saying anything about the "quality" of the pixels!

Some screenshots of Forza 5 do seem to have lower-than-native-res (1080p) elements.
The screenshots came from direct-capture footage (and are marred by some compression), but some areas of the shots are clearly lower than native, yet other parts do seem to look native (this might just be some really odd video processing, though).
 
So what does 1080p look like? Do not answer that question. You're now talking purely on the subjective level with quality of pixels which is entirely outside the realm of this technical thread. Let's remain on topic.
No, there are images that are very evidently 'upscaled' with chunky jaggies. See the Pixel Counting thread.

source for the aforementioned images?
Notably a screenshot from the video showing the XB1 interface. It's in the Pixel Counting thread. It's not worth discussing here.
 
As others here have stated for what feels like the hundredth time: MS's Tiled Resources are the direct equivalent of AMD's sparse texture OpenGL extension. Tier 1 supports all DirectX cards (which is how they were able to show it off at Build running on Nvidia hardware), while Tier 2 exposes GCN's PRT. There is nothing magical about Xbox One's implementation compared to PC land. There is no well of hidden features in the Xbox One's GPU. The scaler is not secret sauce, there is no raytracing chip, there is no dGPU, etc. Please stop being so quick to believe everything posted; two-thirds of the posts by Astro and his ilk are simply complete nonsense that plays towards your confirmation bias. I'm not going to insult you or anything like that, given the age posted in your profile, and I applaud you for wanting to get a true understanding of the hardware. Honestly, I like TXB and UnionVGF (obviously, since I'm a mod there), but they're not websites you go to for any real discussion of the technical aspects of either platform.
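For anyone wondering what the feature actually buys you, here's a toy calculation. The 64 KB tile size is the real D3D11.2 figure; the texture dimensions and the 5% residency are invented for illustration:

```python
# Toy illustration of sparse/tiled textures: only the resident
# 64 KB tiles consume memory, not the whole virtual texture.
# Texture size and residency fraction are made-up numbers.

TILE_BYTES = 64 * 1024   # D3D11.2 / PRT tile size
TEX_W = TEX_H = 16384    # a large virtual texture (assumption)
BPP = 4                  # RGBA8
TILE_DIM = int((TILE_BYTES // BPP) ** 0.5)  # 128x128 pixels per tile

full_mb = TEX_W * TEX_H * BPP / (1024 ** 2)
total_tiles = (TEX_W // TILE_DIM) ** 2
resident_fraction = 0.05  # pretend 5% of tiles are needed this frame
resident_mb = total_tiles * resident_fraction * TILE_BYTES / (1024 ** 2)

print(f"full texture: {full_mb:7.1f} MB")     # ~1024 MB
print(f"resident set: {resident_mb:7.1f} MB") # ~51 MB
```

Same feature, same math, on any GPU that exposes the relevant tier; nothing platform-exclusive about it.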

One of my pet peeves is people trolling others by lying (repeatedly, I might add) about what I've said. The vast majority of what I've said since January has played out verbatim, much to your chagrin. Take your butthurt elsewhere, Ketto, and stop trying to color what Solarus said as some sort of conspiratorial rant. The guy was simply asking a legitimate question and doesn't deserve (nor do I) to be grouped with the misterx's of the internet, as you are so desperate to do. Conflation and misrepresentation don't make you appear intelligent, nor as above the fray as you imagine.
 
One of my pet peeves is people trolling others by lying (repeatedly, I might add) about what I've said. The vast majority of what I've said since January has played out verbatim, much to your chagrin. Take your butthurt elsewhere, Ketto, and stop trying to color what Solarus said as some sort of conspiratorial rant. The guy was simply asking a legitimate question and doesn't deserve (nor do I) to be grouped with the misterx's of the internet, as you are so desperate to do. Conflation and misrepresentation don't make you appear intelligent, nor as above the fray as you imagine.

From the astrophysics thread:
Cool, astrograd is here. Hey astro, I lurk on TXB mostly to read your posts; you always break it down to make it understandable, thanks man. You were saying on TXB how this would put Xbox over PS4 because PS4 can only use 12 CUs for rendering. I don't want to turn this into a versus, but with ESRAM being so fast, wouldn't that by extension increase CU speeds in some way, bringing them up to parity if not making them faster than PS4's? Also, with the ESRAM getting a bandwidth increase to like 210GB/s, won't they need to increase the DME transfer speed too? I mean, the GPU will be getting fed so much quicker from ESRAM now.

Sorry, I got mixed up with your other post saying it would be Xbox's 12 CUs vs PS4's 12 CUs, because 2 would be used by the OS and the other 4 are only good for compute/GPGPU stuff. But between the stuff you said about the display planes helping Xbox One games hit 60fps easily compared to PS4 games, the ESRAM bandwidth increase, and what you said about the Xbox One CPU having more bandwidth than the PS4 CPU (40GB/s compared to 20GB/s), I assumed it meant Xbox would be ahead. I'll send you a PM, I always want to learn more. I will admit I'm a bit confused now since you've clarified.

Cool, and the GPU sees that as one big pool, so it's really 210GB/s? Man, MS engineers are amazing. All these tech upgrades, plus it being more efficient because of the DMEs and whatnot, make Xbox sound just ridiculously powerful. Proof is in the pudding: Ryse is pretty much the most amazing thing I saw at E3.

Some time ago you suspected that the display planes will allow for more games to be 1080p and 60fps. Is that the limit of the display planes, or can they do, say, 4K gaming? I remember MS saying that Xbox One was capable of it; would it be wrong to credit the display planes for that?

Oh, that's if you include OS stuff, as per bkilian's random guesstimate about the PS4 reserving 2 CUs for that. Someone else said that may not be necessary. That TXB thread was me trying to ignite some discussion on the subject, since nobody was actually on topic. Anyhow, don't do versus stuff or it'll just get deleted. PM me if ya want. ;)

X1 can do 4K gaming and it'd definitely use the display planes for accomplishing that. The more novel use would actually be in handling 3D games; apparently it's quite helpful for that sort of thing. I've heard that only 1st-party devs had access to the display planes until final dev kits went out a few weeks ago. Wonder if that includes 2nd-party devs like Crytek or Double Helix, etc.
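For the curious, a minimal sketch of the compositing idea behind display planes, with toy resolutions standing in for the real ones. The 3D plane renders below native and gets scaled at scan-out, while the UI plane stays native and is laid on top; everything here is illustrative, not the actual hardware path:

```python
# Two-plane scan-out compositing, miniaturized. A low-res "game"
# plane is upscaled (nearest-neighbor here; the real scaler is
# fancier) and an opaque native-res "UI" plane wins where present.

NATIVE_W, NATIVE_H = 8, 4   # stand-in for 1920x1080
GAME_W, GAME_H = 4, 2       # stand-in for a lower-res game plane

game = [[x * 10 + y for x in range(GAME_W)] for y in range(GAME_H)]
ui = [[None] * NATIVE_W for _ in range(NATIVE_H)]
ui[0][0] = 255              # one opaque UI pixel (e.g. a HUD element)

def compose(x, y):
    """Per-pixel scan-out: UI plane if opaque, else upscaled game plane."""
    if ui[y][x] is not None:
        return ui[y][x]
    return game[y * GAME_H // NATIVE_H][x * GAME_W // NATIVE_W]

frame = [[compose(x, y) for x in range(NATIVE_W)] for y in range(NATIVE_H)]
for row in frame:
    print(row)
```

The practical win is that HUD and text stay pixel-sharp even when the 3D plane drops resolution, and the blend happens at scan-out rather than in a GPU pass.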

Astrograd's argument has all along been that you can take a chip manufactured with latches that only latch on one edge of the clock and, after discovering that the timing margins are good, somehow magically change those latches in the already-manufactured chip into latches that latch on both edges of the clock (something that would require a physical change to the circuit: not a change in timing margins, but removing transistors, adding transistors, and changing the metal wiring).

Of course, if they had designed for this from the start, then it would be no problem. Nobody is arguing that the bandwidth can't be higher than the original figure (that would be no problem at all to achieve), only that you can't, through wishful thinking, change one circuit into a different circuit after it has already been manufactured.

I can keep going.
If the DF info is true, then it could be the result of separate read and write buses to the ESRAM. The ESRAM could be internally organized as 8 separate banks.

You could then perform a read and a write every cycle as long as there is no bank conflict: the first access is always allowed, and the second has a 7/8 (87.5%) chance of not conflicting with the other access.
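Quick sanity check of that 7/8 figure, and what it implies for peak bandwidth, using the 853 MHz and 128 bytes/cycle/direction numbers that have been reported; the uniformly random access pattern is an idealized assumption:

```python
import random

# Monte Carlo check: with 8 banks and two uniformly random accesses
# per cycle, the second access misses the first's bank 7/8 of the
# time. Real access patterns are not uniform; this is idealized.

BANKS = 8
CYCLES = 1_000_000
random.seed(1)

hits = sum(random.randrange(BANKS) != random.randrange(BANKS)
           for _ in range(CYCLES))
p_second = hits / CYCLES
print(f"second access succeeds: {p_second:.3f} (expected {7/8:.3f})")

# Implied peak bandwidth at 853 MHz, 128 bytes/cycle per direction:
per_dir = 853e6 * 128 / 1e9           # ~109.2 GB/s each way
effective = per_dir * (1 + p_second)  # read every cycle + write 7/8 of them
print(f"effective ESRAM bandwidth: ~{effective:.0f} GB/s")
```

That lands right on the ~204 GB/s peak figure that's been quoted, which is at least consistent with this banked read+write explanation.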

Cheers
 
I believe that we'll be laughing at "Ryse quality graphics" by the end of this generation. "Hur hur," we'll say, while sipping our champagne, "remember when we thought Ryse looked awesome?".

I've had almost that exact conversation, minus the champagne, about the quality of "Oblivion".

In my opinion, we'll see more sub-1080p games on the X1, simply because it's easier to fit the render targets into the ESRAM if they're smaller. But I'd point at the ROPs as culprits too.
I think Microsoft should give up the pixel battle forever and go for what suits the eSRAM best!

Focus on the sound (freeing up the audio cores wouldn't hurt either) and on resolutions within the limits of the eSRAM. Freeing that 10% GPU reservation could also help a lot.

I honestly think Xbox One's architecture is really, really good; if you use it within the limits of the eSRAM, even a PC or the PS4 would have a hard time keeping up with it.
 
I honestly think Xbox One's architecture is really, really good; if you use it within the limits of the eSRAM, even a PC or the PS4 would have a hard time keeping up with it.

If you're talking in pure bandwidth terms, then even if you could fit every last bit of your game data entirely in the ESRAM, yet somehow still make full use of the main system memory bandwidth without wasting anything on copying, there are still 7 different GPU models available today (not including dual GPUs) which offer more total bandwidth than the One would have available to it. Although, admittedly, GPUs that meet that criterion start at around £220, or roughly 50% of the total cost of the One.
 
The same applies to the PS4; both consoles are lagging behind the PC this gen, and the Wii U is closer to the PS3/360 than to the PS4/X1.
On the flip side, graphical fidelity probably has more to do with business than technology...
 
If you're talking in pure bandwidth terms, then even if you could fit every last bit of your game data entirely in the ESRAM, yet somehow still make full use of the main system memory bandwidth without wasting anything on copying, there are still 7 different GPU models available today (not including dual GPUs) which offer more total bandwidth than the One would have available to it. Although, admittedly, GPUs that meet that criterion start at around £220, or roughly 50% of the total cost of the One.
7? If that's meant to be taken literally, that's a good number, but I thought there would be more graphics cards with superior bandwidth.

I wasn't talking about bandwidth alone, but about Xbox One's SoC architecture, which can have its advantages if you can keep your framebuffer within the eSRAM size and use the eSRAM in conjunction with the DDR3 bandwidth.

With simultaneous read/write eSRAM, its low latency, and the Move Engines, I could see some win-win scenarios where the console could perform excellently compared to more powerful machines. Either that or someone has fudged the numbers to look better on paper, but I don't think that's the case.
 
You are making the same mistake, by taking a few elements out of a system and trying to make assumptions about how it's going to behave. For one, you are forgetting about the CPU, the dedicated memory pools, the bigger memory, etc. on the PC side. No doubt it'll be easier to optimize for a closed platform, but there's no secret sauce.
 
Everyone still underestimates the importance of the artwork and the less obvious aspects of the technology, the stuff not related to resolutions or bandwidth. The limitations of the previous generation locked out so much that developers will need a year or two just to explore the new possibilities.

Case in point: facial animation, my speciality ;) Most games are still locked in a mindset based on the constraints of the PS3/X360: a low amount of memory, mostly, and of course limited human resources to produce assets.

For example, Ryse and The Order are both using a facial animation system relying on bones and animated normal maps, despite the huge amount of memory available. They are probably re-using the same head geometry for every character, because this way they don't need to redo the weight-painting process. That means they can't build facial wrinkles into the geometry; they can only use normal maps to create the creases on the brow, the bridge of the nose, the crow's feet around the eyes, and so on.

But 8GB is enough to store a LOT of blendshapes, so they could create head models fitted to each character, with the folds and wrinkles accommodated in the geometry, so that the actual forms and silhouettes can change as well for each facial expression. It's also possible to rely on scan data for the facial expressions if they can license the likeness of some actor or model.
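To put rough numbers on "a LOT": a quick estimate, where the vertex count, per-vertex data, and shape counts are all invented for illustration:

```python
# Back-of-the-envelope blendshape storage cost. All counts below
# are assumptions picked for illustration, not from any real game.

VERTS = 10_000       # a dense hero-head mesh
FLOATS_PER_VERT = 6  # position delta + normal delta (xyz each)
BYTES_PER_FLOAT = 4
SHAPES = 200         # a FACS-style expression library
CHARACTERS = 20      # a full principal cast

per_shape_kb = VERTS * FLOATS_PER_VERT * BYTES_PER_FLOAT / 1024
per_char_mb = per_shape_kb * SHAPES / 1024
total_mb = per_char_mb * CHARACTERS

print(f"one shape:     {per_shape_kb:6.0f} KB")  # ~234 KB
print(f"one character: {per_char_mb:6.1f} MB")   # ~46 MB
print(f"whole cast:    {total_mb:6.0f} MB")      # ~916 MB
```

Even with generous counts a whole cast stays under a gigabyte of an 8 GB pool, so memory genuinely isn't the blocker; tooling and pipelines are.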

However, this is tech that game devs are unfamiliar with for now; it's only used in movie VFX and CG animation. So they need to invest in R&D and change tools, pipelines, and engine tech to implement this new approach.


So, let's extrapolate from this. There's an entire universe of asset production and rendering tech waiting to be discovered by video game developers, one that can make a much, much bigger difference in visuals than upping the resolution. Facial animation is just one possible field; there's a lot of room to advance practically everything in the visuals.
And tech is just one side of the coin: the artists have a lot more room this generation in how they build assets or fine-tune lighting.
I mean, just look at Battlefield 4 and COD: Ghosts as a nice example of the difference artists can make. The renderer features are of course better in BF on the current gen, but on the PS4/X1 the game still looks much, much better, even though IW's new renderer has almost all the checkboxes covered.

So artists will become a LOT more important in the coming years, and I also expect a lot of game developers to hire movie VFX professionals, both for guidance and for content creation. Hardware will matter only in that it enables the artists' skills to become more important.
 
7? If that's meant to be taken literally, that's a good number, but I thought there would be more graphics cards with superior bandwidth.

Taken at face value, the One has HUGE memory bandwidth. Of course, the argument put forward by many is that you can't compare the One's split memory pools directly to the single memory pools of other systems by simply adding the split pools' bandwidths together.

And yep, 7 is a literal number, comparing against the "total" 272GB/s of the One's main memory plus eSRAM.
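For reference, the arithmetic behind that headline number; these are peak figures only, and nothing guarantees both pools can be saturated simultaneously in practice:

```python
# How the "272 GB/s" total decomposes into the two pools' peaks.

ddr3 = 2133e6 * (256 / 8) / 1e9        # DDR3-2133, 256-bit bus: ~68.3 GB/s
esram = 853e6 * 128 / 1e9 * (1 + 7/8)  # ~204.7 GB/s (read + 7/8 write)
print(f"DDR3:  {ddr3:6.1f} GB/s")
print(f"ESRAM: {esram:6.1f} GB/s")
print(f"total: {ddr3 + esram:6.1f} GB/s")  # ~273; "272" rounds 68 + 204
```

Which is exactly the adding-pools-together exercise that the post above flags as questionable.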
 
You are making the same mistake, by taking a few elements out of a system and trying to make assumptions about how it's going to behave. For one, you are forgetting about the CPU, the dedicated memory pools, the bigger memory, etc. on the PC side. No doubt it'll be easier to optimize for a closed platform, but there's no secret sauce.
Funny you mention the secret sauce; I've had a good time trying to find it in these pre-launch months. Lots of crazy news around the tech.

I was just implying that the Xbox One could be a speed demon if you manage to fit the entire framebuffer within the eSRAM and use it efficiently.

Think of the GameCube and F-Zero GX running at 60 fps. The GC was better at running games at higher framerates, despite being less powerful overall than the original Xbox.

I had both consoles then, and while I knew there were many things the GameCube couldn't do compared to the original Xbox, :eek: I was conscious that the Xbox was limited in certain scenarios (compared to the GC) where high bandwidth and low latency were paramount.
 
Funny you mention the secret sauce; I've had a good time trying to find it in these pre-launch months. Lots of crazy news around the tech.

I was just implying that the Xbox One could be a speed demon if you manage to fit the entire framebuffer within the eSRAM and use it efficiently.

Think of the GameCube and F-Zero GX running at 60 fps. The GC was better at running games at higher framerates, despite being less powerful overall than the original Xbox.

I had both consoles then, and while I knew there were many things the GameCube couldn't do compared to the original Xbox, :eek: I was conscious that the Xbox was limited in certain scenarios (compared to the GC) where high bandwidth and low latency were paramount.

The same goes for pretty much all software where you only target one set of specifications, though. Bandwidth was never an issue for the X1 to begin with.
 
They're probably just going for good enough, given the immature tools and the compressed development schedule. Also, a lot of these games are being released for 5 platforms. I would imagine they're painting with a pretty big brush right now.
 