Xbox One (Durango) Technical hardware investigation

What about two of the four MMUs for "processing" LZ encode/decode?! Sorry if I use the wrong term; I'm not tech savvy. But I read from a source I'm having a hard time finding again that those two MMUs will have cloud computing capabilities.
 
:O

Are you kidding me?? You mean... uurgh. That's unbelievable... so the structure is


Hypervisor ->
VM Hypervisor ->
->GameOS | WinOS


...but you cannot put a hypervisor on top of a hypervisor :oops: ...there is NO hardware support for that; no way they could have made it on custom Jaguar cores so quickly.

But why the hell do they call it a hypervisor if it does not work as a hypervisor?

A hypervisor is supposed to handle events and calls, etc., from... :???::???:

I know this post is old, but:

x86, AMD especially, has supported nested virtualization for ages. Hell, I have multiple copies of ESXi running on top of ESXi right now on my home server. This allows for labbing up complex failover setups and topologies for the work I do.
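For what it's worth, on a plain Linux box (rather than ESXi) you can sanity-check the pieces involved with a small script like the sketch below: it only reads the standard "svm" CPU flag from /proc/cpuinfo and the kvm_amd module's nested parameter, so it says nothing about how ESXi handles nesting internally.

Code:
# Quick check (Linux + KVM, not ESXi) that the CPU advertises AMD-V (SVM)
# and that the kvm_amd module has nested virtualization enabled.

def cpu_has_svm():
    with open("/proc/cpuinfo") as f:
        return any("svm" in line.split() for line in f if line.startswith("flags"))

def kvm_amd_nested_enabled():
    try:
        with open("/sys/module/kvm_amd/parameters/nested") as f:
            return f.read().strip() in ("1", "Y")
    except (FileNotFoundError, PermissionError):
        return False  # kvm_amd not loaded or not readable

if __name__ == "__main__":
    print("AMD-V (SVM) present:", cpu_has_svm())
    print("kvm_amd nested on  :", kvm_amd_nested_enabled())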
 
Saying the gap will lessen with better tools is like expecting that one console will benefit more from better tools than the other.
As neither architecture is "exotic", this is not something I would expect.

I'd expect both consoles to more or less maintain the current gap rather than see it change drastically, given that lots of third-party developers seem to end up with similar results between the two consoles.

But yes, the IQ would most definitely improve as time goes on.

I agree that there will be a gap in performance between the consoles throughout this generation. The thing is, though, I think it's too early for us to know exactly what that gap is.
As for third-party software all having a similar performance gap, I don't think that's completely accurate. The results from launch titles are all over the place. Several titles achieve parity. Then you have CoD: Ghosts, which has a huge gap in resolution (1080p vs 720p). Then you have BF4 and AC: Black Flag with a smaller gap. Now, with the latest release, Tomb Raider, both versions are 1080p and the difference is in framerate. So I'm sure there will continue to be a performance gap between the systems, but I'm not sure we know exactly what that gap is.
 
Hell, I have multiple copies of ESXi running on top of ESXi right now on my home server

...apart from clicking around VMware's interface, do you have an idea of what is really happening behind the scenes? My sysadmin builds very complex topologies and is going to take the VMware architecture cert.

...yet I highly doubt he understands a thing about the machine's virtualization process.

x86, AMD especially, has supported nested virtualization for ages.

...cool! Mind explaining how? Feel free to go straight to explaining how VMRUN and VMCALL are managed inside the hypervisor-inside-the-hypervisor, including shadow memory remapping and how hardware-assisted services are performed. I am already familiar with them, no problem at all.

I am always happy when I can learn something cool and really new to me :idea:
 
...cool! Mind explaining how? Feel free to go straight to explaining how VMRUN and VMCALL are managed inside the hypervisor-inside-the-hypervisor, including shadow memory remapping and how hardware-assisted services are performed. I am already familiar with them, no problem at all.

I am always happy when I can learn something cool and really new to me :idea:
My understanding is that the first hypervisor needs to intercept/pass through the required opcodes.
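In very rough terms, and with all names simplified for illustration, the interception side looks something like this toy dispatch loop: the outer hypervisor (L0) gets a #VMEXIT whenever the guest hypervisor (L1) executes a virtualization opcode and decides what to do with it. Real nested virtualization also involves VMCB shadowing, nested page tables and a lot more.

Code:
# Toy sketch of the "intercept/pass through" idea (exit reasons simplified).

def l0_handle_exit(exit_reason):
    if exit_reason == "VMRUN":
        # L1 tried to launch its own guest (L2): L0 runs L2 itself and later
        # reflects L2's exits back to L1, so L1 still believes it is in charge.
        return "entered L2 on behalf of L1"
    if exit_reason == "VMMCALL":
        # Hypercall coming from the nested guest: forward it to L1, its owner.
        return "reflected hypercall to L1"
    # Everything else (CPUID, MSR accesses, ...) L0 just emulates directly.
    return "emulated " + exit_reason + " for L1"

for reason in ("CPUID", "VMRUN", "VMMCALL"):
    print(reason, "->", l0_handle_exit(reason))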


I'll have to go looking for you, but I stumbled across a Usenet topic about it with a dev from QEMU. It was a few years ago that I found it.

http://thread.gmane.org/gmane.comp.emulators.kvm.devel/21119


ESX itself is not my forte, and I don't know if they do the same (not like they actually tell anyone how things work under the bonnet); it's just a tool to do what I need (virtual F5s, Cisco ASRs, etc.), much like KVM for some of the more insane things (emulating PowerPC-based Cisco wireless LAN controllers, etc.).


Edit: given ESXi has tools like
https://labs.vmware.com/flings/vmware-tools-for-nested-esxi
I'm assuming some level of supportability.
 
My understanding is that the first hypervisor needs to intercept/pass through the required opcodes.

You have to consider that the fact that you can do something does not mean you can do it fast enough.
You can, for example, emulate ARM in QEMU on x86, but that does not mean it is fast enough for your purposes.
When you are running VMs inside VMware, you do not expect to have a game at 30 or 60 fps, and you are ready to accept the tradeoff for your needs.

A simple example that directly ties to AMD architectures:
your VA is converted to a PA inside the kernel, and the processor does the VA->PA translation (TLB, etc.) automatically for the VM; then memory is accessed by PA. Then the HY translates that back to the real PA using a sort of 'page extension' mechanism and performs the final memory translation. All of this is hardware-assisted (so your HY doesn't do it manually on each memory access).
But if your hypervisor is on top of that hypervisor, the CPU won't be able to do page translation for that PA; it will do it only for the "real" HY.
And now you see that running a HY inside a HY can be good as an exercise, or for testing, but really NOT as a production environment.
Or at least it ends up being about as good as binary translation in the end (which is not good). Yet the chances of adding trouble are high, and the benefits are relative.

On a side note, there are some tricks you could employ, like making the processor use the HY-inside-HY tables by taking advantage of tags, but that would break the emulation and the scope of isolation.
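A minimal sketch of that translation chain, with dicts standing in for page tables and every name invented: with a single hypervisor the hardware walks two mappings per access, while a hypervisor inside a hypervisor adds a third mapping the hardware cannot walk, which the outer hypervisor typically has to pre-merge into a shadow table in software.

Code:
PAGE = 4096

def translate(addr, table):
    # One level of mapping: look up the page number, keep the page offset.
    page, offset = divmod(addr, PAGE)
    return table[page] * PAGE + offset

# Single hypervisor: guest page table + nested page table, both walked by the
# CPU on every access (hardware-assisted, no HY involvement per access).
guest_pt = {0: 7}     # guest VA page 0 -> guest PA page 7
npt      = {7: 42}    # guest PA page 7 -> host PA page 42
print(hex(translate(translate(0x123, guest_pt), npt)))

# Hypervisor inside a hypervisor: a third mapping appears (the outer HY's view
# of the inner HY's memory). The hardware still only walks two levels, so the
# outer HY pre-merges the lower two into a shadow table and must rebuild it,
# in software, whenever the inner HY changes its own tables.
l1_npt = {7: 42}      # inner HY: innermost guest PA -> inner HY's "physical" page
l0_npt = {42: 99}     # outer HY: inner HY's page     -> real host page
shadow_npt = {gpa: l0_npt[l1pa] for gpa, l1pa in l1_npt.items()}
print(hex(translate(translate(0x123, guest_pt), shadow_npt)))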
 
What about two of the four MMUs for "processing" LZ encode/decode?! Sorry if I use the wrong term; I'm not tech savvy. But I read from a source I'm having a hard time finding again that those two MMUs will have cloud computing capabilities.
You are right. IIRC, it has been mentioned here that two of the DMEs (Data Move Engines) were specialised in decompressing data from the cloud. I could be wrong, though.
 
Compressed data can just as readily be locally sourced. The primary hardware feature for cloud computing, as noted earlier, is the existence of networking equipment.
 
You are right. IIRC, it has been mentioned here that two of the DMEs (Data Move Engines) were specialised in decompressing data from the cloud.
They decompress data from anywhere. Data on disc is stored compressed, for example, as it's faster to load and then decompress.
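A rough software analogue of that "store it compressed, decompress on load" idea, using zlib as a stand-in for whatever LZ variant the move engines actually implement (the file name and data below are made up); the point of a DME is doing the decompress step in fixed-function hardware rather than on a CPU core.

Code:
import zlib

asset = bytes(range(256)) * 4096               # stand-in for texture/geometry data

# Packing step (tool side): write the asset to disc compressed.
with open("asset.lz", "wb") as f:
    f.write(zlib.compress(asset, 9))

# Load step (runtime side): fewer bytes come off the disc; the decompression
# cost is what a dedicated engine would hide from the CPU.
with open("asset.lz", "rb") as f:
    loaded = zlib.decompress(f.read())

assert loaded == asset
print(len(asset), "bytes stored as", len(zlib.compress(asset, 9)), "on disc")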
 
You are right. IIRC, it has been mentioned here that two of the DMEs (Data Move Engines) were specialised in decompressing data from the cloud. I could be wrong, though.

Thanks for the answer. I had read an article from IDC.com saying that the potential of the Xbox One and cloud computing is not so much in using the cloud (Azure, in this case) to enhance Xbox One graphics fidelity over time, but simply in preparing the Xbox One as an "ultimate microconsole" of some sort, where Azure will be the core from which games are not only designed but also played, I guess. So the potential for graphics fidelity will be... limitless?

That's all, of course, IF ISPs provide gigabit speeds within the next five years. Also why the Xbox One has a gigabit Ethernet port, I guess.
 
Thanks for the answer. I had read an article from IDC.com saying that the potential of the Xbox One and cloud computing is not so much in using the cloud (Azure, in this case) to enhance Xbox One graphics fidelity over time, but simply in preparing the Xbox One as an "ultimate microconsole" of some sort, where Azure will be the core from which games are not only designed but also played, I guess. So the potential for graphics fidelity will be... limitless?

That's all, of course, IF ISPs provide gigabit speeds within the next five years. Also why the Xbox One has a gigabit Ethernet port, I guess.

Trying to distribute the processing load across both the server and client is more trouble than it's worth. You just create a bunch of bottlenecks that aren't worth the engineering cost to work around since the result would still have many of the latency drawbacks of a true streaming service like PlayStation Now. In that case the Xbox One only needs to interface with the network, decode a compressed video feed and send back controller commands. No exotic hardware needed. If that's the future they envision they'll just release a VitaTV/Roku/AppleTV style puck or HDMI stick down the line. A giant, expensive Xbox One would just be a waste for that.
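To make the "thin client" point concrete: in that streaming model, the entire client-side job fits in a loop like the hypothetical sketch below (the socket usage and helper names are invented for illustration; a real client would use a hardware video decoder and a proper streaming transport).

Code:
import socket

def decode_video(chunk):   # placeholder: a real client uses a hardware decoder
    return chunk

def display(frame):        # placeholder: present the decoded frame on screen
    pass

def read_controller():     # placeholder: poll the pad, pack the button state
    return b"\x00" * 16

def run_thin_client(host, port):
    # Everything heavy (rendering, encoding) happens server-side; the client
    # only receives compressed video and sends back a few bytes of input.
    with socket.create_connection((host, port)) as conn:
        while True:
            chunk = conn.recv(65536)
            if not chunk:
                break
            display(decode_video(chunk))
            conn.sendall(read_controller())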
 
Trying to distribute the processing load across both the server and client is more trouble than it's worth. You just create a bunch of bottlenecks that aren't worth the engineering cost to work around since the result would still have many of the latency drawbacks of a true streaming service like PlayStation Now. In that case the Xbox One only needs to interface with the network, decode a compressed video feed and send back controller commands. No exotic hardware needed. If that's the future they envision they'll just release a VitaTV/Roku/AppleTV style puck or HDMI stick down the line. A giant, expensive Xbox One would just be a waste for that.

Agreed, I doubt the exoticness of the XB1 is used to create some hybrid rendering solution where part of the processing is done in the cloud as well as locally. Some latency-tolerant cases may be handled that way, sure. But ultimately, why do a bunch of engineering on a solution that will be trumped by simply doing all the rendering on the server side and streaming the game to the console?

Net neutrality is about to die, and ISPs will in the future offer companies the option of paying additional fees to do things like exempt their data from a user's bandwidth caps and give access to better latency and higher bandwidth. And while I realize most people hate the idea of losing net neutrality, paid services like Live and PSN+ will probably benefit, with the caveat of users paying higher fees.
 
Firstly, we could be talking about a "local cloud", where local devices (like AR glasses) leverage a local server (like the XB1 or another device) to do the processing...

Secondly, I hope we get cloud processing like what was proposed a while back: the 'Orleans' actor-based cloud programming model... http://www.zdnet.com/microsofts-orleans-cloud-programming-model-gets-a-halo-test-drive-7000009300/

Even then, why wouldn't you render everything on the XB1 and stream to local devices? It's not like you can gain an appreciable amount of rendering power outside of something like a PC.

"Use your AMD Radeon-based GPU to add a couple of TFLOPS to your XB1" sounds delightful but doesn't seem practical. How many users would that accommodate?
 
Oh wow, I didn't know that the Wii U supports a "local cloud".

Eh... we never stop learning, indeed.

One finally thinks they have understood what a cloud architecture is and how it works, and then "local cloud" comes waving at you from the window... :rolleyes:
 
http://en.m.wikipedia.org/wiki/Cloud_computing

"Cloud computing is a phrase used to describe a variety of computing concepts that involve a large number of computers connected through a real-time communication network such as the Internet."

"Marketers have further popularized the phrase "in the cloud" to refer to software, platforms and infrastructure that are sold "as a service"
 
Either definition has an over-network connection to an abstracted service that self-allocates computational resources.
A peripheral slaved to a specific console wouldn't meet the definition.
 
Local Cloud
I'm not sure, but maybe MS will release devices that look like a Sega 32X or Sega Saturn cartridge... or something very low cost (about the price of a game) to boost processing power.
 