According to that, 40 servers for 1 hour would cost $10; there's your profit from your game gone.
In fact, we even give them the cloud computing power for FREE so they can more easily transition to building games on Xbox One for the cloud.
"A couple of things happen when, say, a building gets destroyed in a game. You've got the physics calculation of all the pieces that something's going to break into and all of what happens to those pieces as they collide with one another. And you kind of, in the truest sense, want it to be somewhat non-deterministic, meaning that if I shot, like, say, a missile from one angle instead of a slightly different angle, that the destruction looks different based on the pure physics of the impact. So what we've been working on is this capability of actually computing [in the cloud] the physics calculation of millions and millions of particles that would fall and then just having the local box [the player's console] get the positional data and the render, so, 'Okay I need to render this piece at this particular location. I don't know why.' The local box doesn't know why it's going to be at this location or where it's going to be in the next frame. That's all held in the cloud. You have this going back and forth between the two.
"That's just an example, because it's the example that we showed. Let's run getting a pure physics model in the cloud, because we can use multiple CPUs there and then locally using the power [of the console] to render and make things look good locally.
Cloudgine are delighted to announce that they are working with Microsoft on the upcoming title, Crackdown for Xbox One!
We are immensely excited about this project and the opportunity to create an entirely new experience in the Crackdown universe, one which we truly believe will set a new bar for open-world gaming.
So like, we went from "cloud physics is impossible" to "it'll be too expensive for the developers" to "it'll be too expensive for MSFT"
I'd argue it's not, otherwise why would they give it for free?
what's next? "cloud physics will cause [strike]global warming[/strike] climate change?"
Honestly this slippery slope is getting way too slippery.
That's not a technical discussion and if it weren't for the reply, I'd delete this post. The arguments against cloud computing have all been laid out. Instead of taking it on faith, we're reasoning the probabilities. That reasoning can be wrong (see 8GB PS4), but the reasoning process should be intelligent and based on factual arguments instead of wishy-washy acceptance of PR spiel.
To date, there has not been one demonstration of complex cloud physics running over the internet. We can look at the facts of the Cloudgine engine - if they calculate physics meshes and positions and send the particle data to the console, they have to send significant amounts of data per frame. Or at least, send mesh data once and then update rotation and position every frame. The maths for that is already in this thread along with conclusions.
If they've pulled this off, there must be a clever data compression scheme in effect. Or the effect is going to be significantly pared back, calculating few enough pieces to fit the data stream. As mentioned in the earliest days of this discussion, the primary resource bottleneck is bandwidth, so problems either have to fit that or find solutions around it.
If you want to partake in the conversation, do so properly and intelligently on the technical level instead of from an unjustifiable high-horse of blind, unreasoned faith.
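As a rough illustration of that bandwidth point, here is a back-of-the-envelope sketch of what streaming one transform per chunk per frame would cost. The field sizes are my own assumptions, not anything Cloudgine has published.

[code]
# Back-of-the-envelope for the "send mesh data once, then update rotation and
# position every frame" approach. Field sizes are assumed, not published figures.

POSITION_BITS = 3 * 32    # X, Y, Z as 32-bit floats
ROTATION_BITS = 3 * 32    # Euler angles as 32-bit floats
BITS_PER_CHUNK = POSITION_BITS + ROTATION_BITS

def raw_stream_kbps(chunks: int, fps: int) -> float:
    """Uncompressed per-frame transform stream, ignoring packet overhead."""
    return chunks * BITS_PER_CHUNK * fps / 1000.0

print(raw_stream_kbps(10_000, 30))   # 57600.0 kbps, i.e. ~57.6 Mbps before any compression
[/code]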
According to that, 40 servers for 1 hour would cost $10; there's your profit from your game gone.
Interesting point; so even if the transition were possible, it would never be cost-effective?
okay, but now, think about a first party title
Your post wasn't a technical argument but an appeal to faith it'll work.
Blind faith?
You're right, the cost debate doesn't belong here. It's been pruned back a number of times IIRC.
I was ridiculing those that turned this discussion from a technical aspect to a cost debate, not sure why it's so hard to see:
from feedback next showing has to be tech showing specific bandwidth, CPU, latency etc. Team is working on demo plan.
My position is still "cloud physics is impossible" (actually it's not, but I'm buying into your false dichotomy here); we still have to see the evolution from the BUILD demo to what arrives in shipping products that have to deal with the real-world internet, not private leased lines. Given that I still randomly have a hard time streaming 480p YouTube even though I have 150Mb/s, this will be interesting. I hope it's real, as I would love to be able to just flatten a city; it would be the sequel to Red Faction: Guerrilla we never got.
/u/JonnyLH said: I just calculated an estimate of the data rate for the Crackdown demo shown at Build. Obviously there are a couple more variables involved, for example how the building breaks and the shape of the chunks. Would they derive from the local box and then get sent up to Azure? Presumably a server application, which would have the collision meshes of the map so it can sync up with the local box, would first receive the variables around the explosion like size, direction, radius, etc.
Data Rate
UPDATED: Rather than real-time calculation of every chunk, 32 times a second, /u/caffeinatedrob recommended drawing paths, which I've substituted into the calculations.
32 bits * 6 (floats)
9 bits * 2 (9-bit integers)
Compression Ratio: 85%
Chunks: 10,000
Total Bits per Chunk: 210 bits
Total Bits for Chunks: 2,100,000
Total Bits Compressed: 315,000
Typical Ethernet MTU = 1500 bytes = 12000 bits
Data Frames per Initial Explosion of 10,000 Chunks: 27
Typical UDP Overhead = 224 bits
Total Overhead per Explosion = 6048 bits
Total Bits Needing to Be Sent Per Explosion: 321,048
Throughput Needed Per Initial Explosion: 313Kbps
All chunks collide within 4 seconds: 2,500 chunks re-drawn every second
2500*210 = 525000
Compressed: 78750 bits
Data Frames per second needed for re-draw: 7
UDP Overhead = 1568 bits
Total Bits Needed per re-draw: 80318 bits
Throughput Needed per re-draw: 78kbps
Overall throughput needed in the first second: 391kbps
Every second after initial explosion would be: 78kbps
For the data, I've used float values for the X,Y,Z starting co-ordinates and the same for the finishing co-ordinates of the path on the map. I've assigned 9 bit integers for the rotation values on the path and the radius of the arc of the path.
The compression ratio is an assumption in this scenario. With the data being purely floats/ints, the compression is very high, somewhere in the 80% range, which is what I've substituted in.
To compare this to services used daily: Netflix, for example, uses 7Mbps for a Super HD stream, which is pretty much standard these days. Both next-gen and previous-gen consoles support Super HD.
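To make the arithmetic above easier to check, here is a quick sketch that reproduces it using the post's own assumptions (210 bits per chunk, 85% compression, 1500-byte Ethernet frames, 224 bits of UDP/IP overhead per frame); none of these figures are mine.

[code]
import math

# Reproduces the Data Rate arithmetic above, keeping the post's own assumptions.
BITS_PER_CHUNK = 6 * 32 + 2 * 9       # = 210 (6 floats + 2 nine-bit ints)
COMPRESSION = 0.85                    # assumed ratio from the post
MTU_BITS = 1500 * 8                   # = 12,000
OVERHEAD_PER_FRAME_BITS = 224         # UDP + IP headers per Ethernet frame

def burst_kbps(chunks: int) -> float:
    payload = chunks * BITS_PER_CHUNK * (1 - COMPRESSION)
    frames = math.ceil(payload / MTU_BITS)
    total = payload + frames * OVERHEAD_PER_FRAME_BITS
    return total / 1024               # the post divides by 1024 to get "Kbps"

print(burst_kbps(10_000))   # ~313.5 -> the 313Kbps initial explosion figure
print(burst_kbps(2_500))    # ~78.4  -> the 78kbps per second of re-draws
[/code]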
Latency
Average RTT (Round Trip Time) to Azure: 40ms
Calculation Time at Server: 32ms (For 32FPS)
Total RTT = 72ms
In Seconds = 0.072 Seconds
That means it takes 0.072 seconds from the beginning of the explosion for the result to come back and start happening on your screen. Once the first load has occurred, you only have to receive data if the chunks collide with anything, which would result in the re-draw of paths. The latency on that would be the calculation time, call it 16ms (which is generous, considering only a few paths may have to be re-drawn), plus the half-trip time of 20ms, which would result in waiting 36ms or 0.036 seconds before the re-drawn path gets updated on-screen.
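The same exercise for the latency figures, again using the post's assumed numbers rather than any measurement:

[code]
# Reproduces the latency arithmetic above; the Azure round trip and the
# server calculation budgets are the post's assumptions, not measurements.

AZURE_RTT_MS = 40            # assumed average round trip to Azure
SERVER_STEP_MS = 32          # one simulation step at ~32 updates per second
REDRAW_CALC_MS = 16          # assumed smaller recalculation for re-drawn paths

initial_delay_ms = AZURE_RTT_MS + SERVER_STEP_MS     # 72 ms before debris first appears
redraw_delay_ms = AZURE_RTT_MS / 2 + REDRAW_CALC_MS  # 36 ms for a corrected path

print(initial_delay_ms / 1000, redraw_delay_ms / 1000)   # 0.072 s and 0.036 s, as in the post
[/code]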
Packet Loss
In regards to packet loss, in 2014 you simply don't have any. ISPs these days tend to be Tier 2 or 3, often peering directly with the large services that make up a lot of the bandwidth: Google, Facebook, Twitter, Netflix, etc. Honestly, unless you have a poor wireless signal inside your house, which often causes some slight packet loss, you're not going to get any. Even if you drop a couple of packets, you'd lose a handful of chunks for that one frame, and in terms of gameplay it's not really going to be noticeable.
Conclusion
After taking suggestions on board and drawing paths rather than doing real-time chunk calculation, the data rates needed are significantly lower, and the requirements for the internet connection are perfectly acceptable, needing to transmit at only 391kbps.
If anyone's got any suggestions on how to increase accuracy, or anything else, let me know.
The OLD solution which requires 5.8Mbps is documented here:
http://pastebin.com/vQQs5ffZ
TL;DR: Cloud computing is definitely feasible on normal ISP connections. Would require 391kbps when the explosion starts.
Seen this on Reddit & thought it might spur some good technical conversation...
http://www.reddit.com/r/xboxone/comments/27yczf/i_just_calculated_an_estimate_of_the_internet/
I reformatted the quote to make it easier to read.
Tommy McClain
'Okay I need to render this piece at this particular location. I don't know why.' The local box doesn't know why it's going to be at this location or where it's going to be in the next frame.
For the ones that will be making collisions, at least those that can be predicted and known to happen with complete certainty, the system can detail which objects will have a collision, the variables that need adjusting, and even when to apply those changes. So if the server can tell that some object is going to hit something after a while, and there is some spare bandwidth, it can tell the XBO that when a certain object reaches a certain point, or at a certain time, or meets some other criteria, it should make a specific set of adjustments, and the XBO will then be primed and ready to make that change when it needs to.
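A minimal sketch of how such pre-sent, conditional updates could look on the console side; the message layout and names here are hypothetical, not anything Microsoft or Cloudgine has described. The idea is simply that the correction costs no bandwidth at the moment it is applied.

[code]
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class PrimedUpdate:
    chunk_id: int
    trigger_time: float                        # game time at which to apply the change
    new_position: tuple[float, float, float]
    new_rotation: tuple[float, float, float]

def apply_due_updates(now: float, pending: list[PrimedUpdate], scene: dict) -> list[PrimedUpdate]:
    """Apply every pre-sent update whose trigger has passed; keep the rest queued."""
    still_pending = []
    for u in pending:
        if now >= u.trigger_time:
            scene[u.chunk_id] = (u.new_position, u.new_rotation)  # no network traffic needed now
        else:
            still_pending.append(u)
    return still_pending
[/code]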