XB1 enhancements aid remote computing? *spawn

dcbronco

Newcomer
It seems my question gave the impression that I am some kind of troll here to rile people up. Okay. Not sure how, because as far as I've read, I've never seen anyone else mention anything about the possibility of the Xbox One having the capability to work in parallel with another processor or other processors. Well, except for Microsoft, who have been pushing the idea that their machine is built to work seamlessly with the cloud.

My question wasn't about dual onboard GPUs. It was about using another machine's APU as a local cloud and whether those extra parts might play some role in that process. Using the word rumor is bad in this thread. Noted. Was I speculating? Absolutely. Again, I'm a layman, so it's mostly speculation until I learn better. As far as being a disciple of someone else, I'm not. I don't know enough to choose a side. And when it comes to something like technology, I may love an idea, but I'm never in love with one. Ideas in a changing landscape have a short shelf life.

Now if Shifty was saying it is possible but who's going to program for that, then I say it depends on how valuable it turns out to be. I read complaints about multiple cores ten years ago, yet here we are. Given the machine is built with the cloud in mind, what difference does it make where the additional processing power comes from? Processing is already being done outside of the Xbox One on some games. It seems to me that a console that becomes more powerful with the addition of things I already own or may be about to purchase is an excellent selling point, especially for a company that just spent billions acquiring a phone maker and started making tablets themselves. If the whole system works best with my custom parts, it's a win for me. Throw in that my OS will soon work seamlessly across all of those platforms.

So while the name 360 was about being a 360-degree entertainment system, One is about being a single coding and resource system. I posted here because I have questions and wanted to hear from a more exclusive group of minds than on most sites. Hopefully you guys and girls don't mind a question from time to time.
 
Cloud computing nets you minuscule bandwidth, completely inappropriate for operating a GPU remotely. Remote computing can't rely on large datasets and needs to use data specifically designed for very limited bandwidth and high latency. e.g. XB1's DDR3 runs at ~70 GB/s, while the fastest internet and future WiFi in the world operate at 1 Gbps, which is 1/560th the speed of XB1's 'too slow on its own' RAM. Typical internet speeds are approaching one 10,000th the speed of XB1's local RAM. WiFi N may manage 150 Mbps, or less than 20 MB/s. That's utterly inadequate to run a GPU! If you want to use remote computing, you have to have the executable for the remote workload on the remote computer, and send over tiny packets of data.
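As a quick sanity check on those ratios, here is the arithmetic spelled out (a minimal Python sketch using only the figures quoted above; the ~70 GB/s value is the rounded DDR3 peak mentioned in this post):

```python
# Bandwidth comparison using the figures quoted above (decimal units).
xb1_ram_bw   = 70e9        # ~70 GB/s  -> bytes per second
gigabit_link = 1e9 / 8     # 1 Gbps    -> 125 MB/s
wifi_n       = 150e6 / 8   # 150 Mbps  -> 18.75 MB/s

print(f"1 Gbps link vs RAM : 1/{xb1_ram_bw / gigabit_link:.0f}")  # ~1/560
print(f"WiFi N vs RAM      : 1/{xb1_ram_bw / wifi_n:.0f}")        # ~1/3733
print(f"WiFi N throughput  : {wifi_n / 1e6:.1f} MB/s")            # ~18.8 MB/s
```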

There's no such thing as a computer 'designed for cloud computing'. By which I mean, there aren't specific designs that make cloud computing more effective on a local machine. Cloud computing is sending and receiving data over the network and possibly decompressing it. The components necessary exist in a $30 TV dongle running cloud apps on your TV. You can design a machine specifically for cloud computing in something like a thin client or a Gaikai-style streamed-gaming box (Chromebook), but a console designed to synergise with computers over a network isn't a thing. The clever part of cloud computing is on the cloud end, with job apportioning across hardware.

Things like this are common knowledge to us here, so we don't tend to engage in old discussions. Maybe try the search facility and read up on threads like this one - https://forum.beyond3d.com/threads/...he-transition-to-cloud-really-possible.54209/
I read complaints about multiple cores ten years ago, yet here we are.
Actually, the discussion of the time was exactly correct. Multicore is a pain to program, and that's why people complained, but it's necessary as we reach the limits of single-core performance. No one said it was impossible, and the move to multicore doesn't prove/disprove anything about 'wild theories' becoming reality. About the only 'wild theory' that came true I can think of is that PS4 got 8 GB of GDDR5. And that wasn't that wild!
 
There's no such thing as a computer 'designed for cloud computing'. By which, there aren't specific designs that make cloud computing more effective on a local machine.
Latencies differ between different network adapters. If a network adapter were highly integrated and optimized, it could shave off a few milliseconds of latency (cheap wireless adapters in particular seem to be badly optimized for latency). This is generally nice for a gaming device (but would also be nice for any latency-sensitive online computation). Obviously none of this has anything to do with the GPU. After the network adapter has written the packets to RAM, there's not much left to optimize (that would have a significant impact even when the server is located in the same city).
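To put rough numbers on that, local-network round trips can be measured with a few lines of plain Python (a sketch only: the peer address is hypothetical, and it assumes something on the other machine echoes each UDP packet straight back):

```python
# Measure LAN round-trip latency with tiny UDP packets. Assumes the peer
# runs a matching echo loop (recvfrom + sendto, same socket module).
import socket, statistics, time

PEER = ("192.168.1.50", 9999)   # hypothetical address of the other machine

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)

samples = []
for _ in range(50):
    t0 = time.perf_counter()
    sock.sendto(b"ping", PEER)
    sock.recvfrom(64)                                  # wait for the echo
    samples.append((time.perf_counter() - t0) * 1000)  # milliseconds

print(f"median RTT {statistics.median(samples):.2f} ms, "
      f"min {min(samples):.2f} ms, max {max(samples):.2f} ms")
```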
 
There's processing in the "Cloud", like Azure/Live services...

Then there's processing on your local network (100 Mbit-1 Gbit): PCs, mobile devices, etc. We saw glimpses of this with SmartGlass and The Division. In the SmartGlass case there is a dedicated QoS API that guarantees packets from the XB1 to the SmartGlass device; these packets of data are graphics as well as input (DirectXSurface, pointer events, etc.)...

Would be great if modern Win10/DX12 PCs, Win10/DX12 mobile devices and the XB1, all on the same network, could somehow contribute to rendering a scene (in an optimized way, of course)...
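A very rough sketch of what "contributing over the LAN" could look like at the application level, using nothing but Python's standard library. The helper address, port, auth key and the toy grid workload are all invented for illustration; actually contributing to rendering a scene would be far more involved than this:

```python
# Toy "local cloud": a spare PC on the LAN accepts a batch of work and
# returns the result; the 'console' side ships a job out and waits.
from multiprocessing.connection import Client, Listener

AUTHKEY = b"lan-demo"                      # shared secret, illustration only

def run_helper(bind=("0.0.0.0", 6000)):
    """Run on the helper PC/tablet: receive a grid, advance it one step."""
    with Listener(bind, authkey=AUTHKEY) as server:
        while True:
            with server.accept() as conn:
                grid, dt = conn.recv()
                conn.send([[cell + 0.1 * dt for cell in row] for row in grid])

def offload_step(grid, dt, helper=("192.168.1.50", 6000)):
    """Run on the 'console' side: send the job and collect the result."""
    with Client(helper, authkey=AUTHKEY) as conn:
        conn.send((grid, dt))
        return conn.recv()
```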
 
PS2s/PS3s were able to be linked together, but this was only made possible because Sony allowed those consoles to run Linux. And it will be a cold day in hell before MS does the same.
 
Thanks for the comments. One of the things I was wondering about was less latency-sensitive problems. Like, could a local cloud provide a day/night cycle or sun and weather, relieving the Xbox One APU of those tasks? Or crowd AI. Or maybe there could be damage seen during gameplay, but the collective damage physics could be run on another machine and updated on the Xbox. The console would then be responsible for collision detection and damage incurred as a result.
 
Thanks for the comments. One of the things I was wondering about was less latency-sensitive problems. Like, could a local cloud provide a day/night cycle or sun and weather, relieving the Xbox One APU of those tasks? Or crowd AI. Or maybe there could be damage seen during gameplay, but the collective damage physics could be run on another machine and updated on the Xbox. The console would then be responsible for collision detection and damage incurred as a result.
That's covered in the thread I linked to.
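For what it's worth, the pattern described in that question (collision and damage resolution stay on the console, while slow-changing state like weather, the day/night cycle or crowd AI is computed elsewhere and swapped in whenever a result arrives) can be sketched as a non-blocking frame loop. This is only an illustration: the `request_remote_update` function and its fake 50 ms delay stand in for a real call to another machine, such as the LAN helper sketched earlier.

```python
# Latency-tolerant offload: the frame loop never blocks on the network; it
# keeps using the last known weather state and swaps in a fresher one
# whenever the helper's reply shows up.
import queue, threading, time

results = queue.Queue()

def request_remote_update(state):
    """Stand-in for the offloaded simulation; replies ~50 ms later."""
    def worker():
        time.sleep(0.05)
        results.put({**state, "cloud_cover": min(1.0, state["cloud_cover"] + 0.01)})
    threading.Thread(target=worker, daemon=True).start()

weather = {"cloud_cover": 0.3, "time_of_day": 12.0}
request_remote_update(weather)

for frame in range(180):                  # ~3 seconds of 60 Hz frames
    try:
        weather = results.get_nowait()    # fresher state arrived: use it
        request_remote_update(weather)    # and immediately request the next
    except queue.Empty:
        pass                              # nothing yet: keep the old state
    # ... render the frame with `weather`; collisions/damage stay local ...
    time.sleep(1 / 60)

print("weather after 3 s:", weather)
```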
 