Xbox One (Durango) Technical hardware investigation

I think it's a bit silly to speculate on Witcher 3 for any console this far out; it's a 2015 game and I strongly doubt they even know yet (though it won't surprise me if the rumor pans out). Watch Dogs I'd expect to be 900p really. The 960p figure is a completely unsourced Thuway-type rumor AFAIK, and 900p just makes more sense / is the standard, but again it won't shock me if the rumor turns out to be correct.
 
The whole point of the post was the implied inability of the X1 to achieve 1080p.
I did forget AC4 and Ryse, but trying to count cut scenes from Tomb Raider, and games that aren't released yet and don't have confirmed resolutions, is really pointless.
The point is that there are a lot of games on the system that are 1080p.
There is absolutely nothing wrong with games being 900p on the X1.
 
The whole point of the post was the implied inability of the X1 to achieve 1080p.
I did forget AC4 and Ryse, but trying to count cut scenes from Tomb Raider, and games that aren't released yet and don't have confirmed resolutions, is really pointless.
The point is that there are a lot of games on the system that are 1080p.
There is absolutely nothing wrong with games being 900p on the X1.

If there were nothing wrong with it, I don't know why you are so bothered by my giving an accurate accounting of the situation. While there may be numerous 1080p games on the system, it's not like games that aren't 1080p are rare. They are quite common, especially for the biggest and most ambitious titles. Ignoring 60fps titles, discounting future titles and conveniently forgetting about most of the released 900p titles is just sticking your head in the sand.
 
Watch Dogs will be 1440x960.

In the case of Watch Dogs, I have a feeling it's more of a memory-system issue (DDR3 bandwidth and/or ESRAM size) when rendering a large open world at a certain setting/IQ.

Given all that's gone on up until now, I think XBO owners would happily live with 960p and IQ parity.
 
Can we get off of the feelings and enjoyment to stick with the technical?
 
Pretty interesting interview with Boyd Multerer, one of the Xbox One architects, where he talks about the challenges of developing for the console, the resolution thingy, the difficulties they and developers faced getting games ready for the Xbox One's launch date, the hardware design, and how it will be used in the future.

(from XpiderMX link in a different thread)

http://www.totalxbox.com/74852/feat...softs-boyd-multerer-on-creating-the-xbox-one/

He's now in a position to look back on what he's happy to admit is "the hardest project I have ever worked on. It was a total slog to get it done."

Already ambitious, the console architecture changed dramatically following the poor reception at last year's E3. The always-online requirement was dropped, requiring major changes on top of an already considerable workload in getting the machine ready for launch.
"It wasn't fun," he says. "But actually, now that I'm past it and I can look back - I think that it actually vindicates one of the strategies that we had. One of the goals was to create an architecture that can change over time, so as those changes were happening, we could roll with it without it dramatically effecting the kits we were giving to game developers. We were able to keep that train running and keep the games in development while we were scrambling to adjust to policy changes and business changes over at the other side, and it actually helped to make all that manageable."
Big picture

The other, arguably bigger task is getting devs used to the architecture itself. Early Xbox One games struggled to hit full 1080p resolution, particularly cross-generation titles that had to run on multiple formats, the cause of which Multerer describes as "complicated".

"Part of it is the obvious one where everyone's still getting to know this hardware and they'll learn to optimise it. Part of it is less obvious, in that we focused a lot of our energy on framerate. And I think we have a consistently better framerate story that we can tell.
"On the CPU side, we really pushed how fast the central processor is running, so you'll see much smoother framerates, you'll see much less hitching on the Xbox One, and that's a big deal in these games: you really want them to be smooth.

"On the number of pixels side, there's also a kind of change in the conversation - what's the right one to be focused on? Is it the number of pixels or is it the quality of pixels? And this gets into the way that the game engines run.


"There's a whole bunch of post-effects that you can run. We do the right kinds of anti-aliasing, you can do shadow rendering and all that, and you do those things in the really, really fast RAM that's in the back of the pipeline, and you end up with an image that looks significantly better in some ways, and it's becoming less clear whether this is a number of pixels count story or whether this is a quality of pixels story. Both matter, but it's not all about one number or the other number."
It's a noble sentiment, but one that's been lost on the howling wastes of the internet, where early adopters have raged over every pixel. It's not a problem Multerer expects to last; now that developers have had time to get used to the architecture, they'll be able to get more out of it.

"I fully expect that to happen," he says. "The GPUs are really complicated beasts this time around. In the Xbox 360 era, getting the most performance out of the GPU was all about ordering the instructions coming into your shader.

It was all about hand-tweaking the order to get the maximum performance. In this era, that's important - but it's not nearly as important as getting all your data structures right so that you're getting maximum bandwidth usage across all the different buffers. So it's relatively easy to get portions of the GPU to stall. You have to have it constantly being fed."
If that sounds complicated, it gets worse: getting your Xbox One game to run smoothly requires shuffling data between different components with absolute precision; even the smallest delay can hold up the entire process.

"This is a effectively a super-computer design," he says. "This is a design out of the super-computer realm. So I expect that we're going to continue to see fairly large improvements in GPU output as people really tune these data sets
."
It isn't surprising that most publishers, running flat out to release games on every available format before Christmas, didn't have the time to sort this out. "Xbox One was a crazy launch. All console launches are crazy.

Me and the team, we've all worked seven days a week from June until November, and we're all really tired, and all the game companies, they were there too. So in a lot of cases it was 'Okay, we've got to get the game running' and then there's not a whole lot of time to do the tuning."
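
As an aside on the data-layout point in that answer: a textbook illustration of "getting all your data structures right" for bandwidth is switching from an array-of-structures to a structure-of-arrays, so that a pass touching only a few fields streams contiguous memory instead of striding past data it never reads. Here is a minimal CPU-side sketch of the idea; the names are made up, and the same principle applies to GPU buffers:

```cpp
#include <cstddef>
#include <vector>

// Array-of-structures: a position-update pass also drags colour data
// through the caches and across the bus, wasting bandwidth.
struct ParticleAoS {
    float px, py, pz;   // position
    float vx, vy, vz;   // velocity
    float r, g, b, a;   // colour (never touched by the update pass)
};

// Structure-of-arrays: each pass streams only the fields it needs,
// so reads are contiguous and the memory bus stays busy with useful data.
struct ParticlesSoA {
    std::vector<float> px, py, pz;
    std::vector<float> vx, vy, vz;
    std::vector<float> r, g, b, a;
};

void update_positions(ParticlesSoA& p, float dt) {
    const std::size_t n = p.px.size();
    for (std::size_t i = 0; i < n; ++i) {
        p.px[i] += p.vx[i] * dt;
        p.py[i] += p.vy[i] * dt;
        p.pz[i] += p.vz[i] * dt;
    }
}
```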
 
err... I am not sure I understand.
A GPU is well capable of hiding latencies to achieve high usage, and running compute threads in parallel can even increase that usage.
And if we add in that this round the GPU can call the CPU, devs will over time be able to throw everything they can at the shaders (not as if they were SPUs, but still...).
Is he comparing ESRAM+DDR3 usage to NUMA?
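To put a rough number on the latency-hiding point: the data a GPU needs in flight to cover memory latency is roughly the bandwidth-delay product, i.e. data in flight ≈ bandwidth × latency. With ballpark figures (around 68 GB/s of DDR3 bandwidth and a few hundred nanoseconds to DRAM, purely illustrative), that is only on the order of 20 KB of outstanding requests, which thousands of resident threads, plus any extra async compute work, can comfortably supply. Raw latency alone is rarely the limiting factor as long as there is enough independent work resident.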
 
I do wish interviewees would qualify their relative descriptions.
you'll see much less hitching on the Xbox One
Much less hitching than what? Rival machines? The XB360? A different XB1 design?
 
err... I am not sure I understand.
A GPU is well capable of hiding latencies to achieve high usage, and running compute threads in parallel can even increase that usage.

Well, latencies again. Just think about it: if the cache on the GPU is used to hide latencies from main memory, you aren't really using the cache effectively. So why not store the commands in ESRAM and use the GPU's cache for the tasks themselves? This should save a lot of bandwidth, and the ESRAM has low latencies, so the commands can be loaded really quickly.

But well, MS told us they customized the GPU; it seems they did more than we thought. They knew they couldn't use a high-end GPU, so they tried tweaking its effectiveness (or at least it is now up to developers to tweak it).
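
To make the placement question concrete, here is a minimal sketch of the kind of decision involved on a split DDR3 + ESRAM design. The pool names and the helper are invented for illustration and are not a real SDK API:

```cpp
#include <cstddef>

// Hypothetical memory pools on a split DDR3 + ESRAM design.
// Nothing here is a real SDK interface; it only illustrates the trade-off.
enum class Pool { ESRAM, DDR3 };

struct Placement {
    Pool        pool;
    std::size_t bytes;
};

// Small, bandwidth-hungry, frequently re-read resources (render targets,
// command/indirect buffers) compete for the fast pool; bulk assets that are
// streamed through once per frame stay in DDR3.
Placement place_resource(std::size_t bytes, bool bandwidth_critical, std::size_t esram_free) {
    if (bandwidth_critical && bytes <= esram_free) {
        return { Pool::ESRAM, bytes };
    }
    return { Pool::DDR3, bytes };
}

int main() {
    std::size_t esram_free = 32u * 1024u * 1024u;                              // 32 MB pool
    Placement colour  = place_resource(1920u * 1080u * 4u, true,  esram_free); // ~8 MB render target
    Placement texture = place_resource(64u * 1024u * 1024u, false, esram_free); // bulk streamed asset
    (void)colour; (void)texture;
    return 0;
}
```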
 
So why not store the commands in ESRAM and use the GPU's cache for the tasks themselves? This should save a lot of bandwidth, and the ESRAM has low latencies, so the commands can be loaded really quickly.
We have no latency figures for the ESRAM, but I see no reason to think they'd be better than L3 level, which is considerably slower than the local caches of the GPU should be, although I know squat about GPU caches in particular.

Also, learn to use your Shift button and capitals.
 
We have no latency figures for the ESRAM, but I see no reason to think they'd be better than L3 level, which is considerably slower than the local caches of the GPU should be, although I know squat about GPU caches in particular.

Also, learn to use your Shift button and capitals.

That's right, but still far better than DRAM access. The main difference between the eSRAM and the caches is that the caches are integrated inside the GPU and don't have to go through the memory controller. That makes the eSRAM slower, but not by much; it still offers far better latencies than DRAM can. Also, the eSRAM runs in sync with the GPU, which should eliminate some latency issues as well.

Off-topic:
Sorry, capitals are missing most of the time ;) (my netbook keyboard doesn't respond well, especially the Shift keys; kids :( )
 
Pretty interesting interview with Boyd Multerer, one of the Xbox One architects, where he talks about the challenges of developing for the console, the resolution thingy, the difficulties they and developers faced getting games ready for the Xbox One's launch date, the hardware design, and how it will be used in the future.

(from XpiderMX link in a different thread)

http://www.totalxbox.com/74852/feat...softs-boyd-multerer-on-creating-the-xbox-one/

This interview is 6 months old. It's all the same arguments and justifications we've heard again and again from Microsoft's hardware designers.
 
This interview is 6 months old. It's all the same arguments and justifications we've heard again and again from Microsoft's hardware designers.

And it's nearly the same arguments Sony was using last gen ("it's super-computer-like!!!"). It's pretty funny how similar this gen is to the last, just with the arguments/places switched.
 
And it's nearly the same arguments Sony was using last gen ("it's super-computer-like!!!"). It's pretty funny how similar this gen is to the last, just with the arguments/places switched.

Yeah, I thought so, too. In essence, they have switched. Higher price, more obscure system, huge box, slower GPU... all except the weird CPU is there.

In my opinion, all the "DX12 will save us" talk is... well. I dunno. Not helpful, I guess? I don't think it'll make a huge difference, and I also don't think their current API is bad. It's the regular early console API. Obviously it's not done yet, and obviously there's a lot to gain, still. But a saving grace like a full new API is not what'll help. Most issues on PC that are supposedly fixed in DX12 aren't even an issue on consoles.
 
And it's nearly the same arguments Sony was using last gen ("it's super-computer-like!!!"). It's pretty funny how similar this gen is to the last, just with the arguments/places switched.

There is a big difference. The Xbox One is easy to develop for but hard to master. If you use it just like the PS4, it will work: slower (on the GPU side), but it will work. Use the eSRAM and you gain at least some bandwidth back.

If you want more, you must optimize for the special things in that GPU. Not all of it is possible now; some of it comes with DX12 (or at least with future API/OS iterations). I'm not saying this makes the Xbox One faster than the PS4; it just seems to use the available resources more effectively. Sony will also try everything to make the OS/API overhead smaller, but the question is whether they can do that and how long it will take them. It is much easier for MS to do.
The good thing about the Xbox One is that they don't need to watch for backwards compatibility with their API, because every game ships with its own OS. If Sony wants to update their OS and API, they must always keep in mind that it all has to stay backwards compatible: even if some algorithms are inefficient, they must still behave like they did. This could make it easier for MS to update their OS/API without re-checking everything, new and old. They could even throw the Win8 kernel out of the window and use something else entirely, but I doubt they would do that.
The only thing that MS can't update so easily is the OS that hosts the VMs.
 
The good thing about the Xbox One is that they don't need to watch for backwards compatibility with their API, because every game ships with its own OS. If Sony wants to update their OS and API, they must always keep in mind that it all has to stay backwards compatible: even if some algorithms are inefficient, they must still behave like they did.
I'm not sure who this is a "good thing" for. If you're a game developer working on a multi-year product, then you don't want the APIs changing radically enough that you need to re-assess using the latest iteration for fear of what they've broken in backwards compatibility. Nobody is going to go back and re-write great chunks of code.

Such a move would only reduce adoption of the latest version of the API, along with whatever performance advantages it brings, which would be contrary to the advantages of what their competition are doing.
 
Yes, last gen Sony's talk of super-computer architecture and "harder to develop for but worth it" is the exact same argument being presented here.

Sony's exotic Cell architecture at least offered potentially twice the performance of competing console CPUs. Microsoft's "exotic" memory architecture offers no such advantages in this generation. "Super Computer Architectures" aren't impressive if they deliver less performance than conventional designs.
 
I'm not sure who this is a "good thing" for. If you're a game developer working on a multi-year product, then you don't want the APIs changing radically enough that you need to re-assess using the latest iteration for fear of what they've broken in backwards compatibility. Nobody is going to go back and re-write great chunks of code.

Such a move would only reduce adoption of the latest version of the API, along with whatever performance advantages it brings, which would be contrary to the advantages of what their competition are doing.
But you can introduce new features faster. The developer has a saved state of an OS, and that is worth a lot. With the PS4, that is not as stable; it could break or produce bugs with future iterations. That is why updating system software is so risky, but it no longer is with a saved-state, virtualized OS: games will still run the same after years. Don't get me wrong, it should be the same for the PS4, but adding new features or optimizing the internal code is just so much harder for Sony.
Still, MS won't break compatibility until it is absolutely necessary. But if an algorithm can be replaced by a faster one, it doesn't have that big an impact if it doesn't behave exactly like the older one: the developers can then (if they want) update to the new version and check that everything still runs, or they can just keep using the old state.
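
As a toy illustration of that opt-in model (everything here is invented; it only shows the shape of the idea): a title pins the API revision it shipped against, and it only gets the faster-but-not-bit-identical path when the developer explicitly moves to the new revision and retests.

```cpp
#include <cmath>
#include <cstdint>
#include <cstring>

// Hypothetical API revisions a title could pin itself to.
enum class ApiRevision { Launch2013, Update2014 };

// Old titles keep the exact behaviour they shipped and were tested against;
// a developer can opt into the faster replacement and re-verify the game.
float reciprocal_sqrt(float x, ApiRevision rev) {
    if (rev == ApiRevision::Launch2013) {
        return 1.0f / std::sqrt(x);          // original, bit-stable path
    }
    // Update2014: faster approximation, close but not bit-identical.
    float half = 0.5f * x;
    std::uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);
    bits = 0x5f3759dfu - (bits >> 1);
    std::memcpy(&x, &bits, sizeof x);
    return x * (1.5f - half * x * x);        // one Newton-Raphson refinement
}
```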
 