ISS uses asynchronous compute to process particles; the compute shader is dispatched to the GPU by the CPU. Asynchronous compute doesn't come for free on either the CPU or the GPU, and there's nothing to suggest there's a huge amount of idle GPU time that can easily be exploited with asynchronous compute. Any significant amount of processing done in a compute shader, synchronously or asynchronously, is processing time unavailable to other shaders/algorithms. If you were to do AI on the GPU, that's GPU time unavailable for graphics rendering. The great thing about GPGPU is that you can do things traditional pixel/vertex shaders cannot, and it should allow for some more efficient rendering (or so I've read).
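
To make the "dispatched by the CPU" part concrete, here's a rough sketch of what that looks like in D3D12. This is not what ISS actually does; the function, the root signature layout, and the particle buffer are all illustrative, and it assumes the compute PSO and resources already exist:

    #include <d3d12.h>

    void DispatchParticleUpdate(ID3D12GraphicsCommandList* cmdList,
                                ID3D12PipelineState* particleCS,        // hypothetical compute PSO
                                ID3D12RootSignature* rootSig,           // hypothetical root signature
                                D3D12_GPU_VIRTUAL_ADDRESS particlesUAV, // hypothetical particle buffer
                                UINT particleCount)
    {
        // CPU side: record the compute work into a command list.
        cmdList->SetComputeRootSignature(rootSig);
        cmdList->SetPipelineState(particleCS);
        cmdList->SetComputeRootUnorderedAccessView(0, particlesUAV);

        // One thread per particle; 64 must match [numthreads(64,1,1)] in the shader.
        const UINT groupSize = 64;
        cmdList->Dispatch((particleCount + groupSize - 1) / groupSize, 1, 1);
    }

The CPU only records and submits the command; the particle math itself runs on the GPU's compute units, which is exactly why it competes with graphics shaders for the same ALUs.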
There can actually be quite a bit of "idle" time on a GPU, at least if you look at the resources used by compute shaders. Even if you ignore rendering phases where ALUs aren't heavily used to begin with (for instance, depth-only rendering for shadow maps), there's typically quite a bit of time where the GPU has to sync/stall in order to allow subsequent rendering passes to run in lock-step. Async compute offers a convenient way of executing shaders that bypass all of that syncing (hence the "async" part of the name), which lets you "fill up" that idle time with compute jobs. I don't really want to go into too many specifics due to NDA, but my friends at Q-Games are a bit more cavalier and have shared some of their profiling data in these slides (see slide 83).
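
For anyone who hasn't used it, here's a minimal sketch of the mechanism in D3D12 terms (the names are mine, not from any shipped engine): work submitted to a separate COMPUTE queue can overlap with work on the DIRECT (graphics) queue on hardware that supports it, and you only pay for synchronization where a later pass actually consumes the compute results.

    #include <d3d12.h>

    void CreateQueues(ID3D12Device* device,
                      ID3D12CommandQueue** gfxQueue,
                      ID3D12CommandQueue** computeQueue)
    {
        D3D12_COMMAND_QUEUE_DESC desc = {};

        desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(gfxQueue));

        desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // the "async" compute queue
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(computeQueue));
    }

    // Per frame: no fence between the shadow pass and the compute job, so the
    // GPU is free to run them concurrently. Signal/wait only where a later
    // pass actually reads the compute results.
    void SubmitFrame(ID3D12CommandQueue* gfxQueue, ID3D12CommandQueue* computeQueue,
                     ID3D12CommandList* shadowPass, ID3D12CommandList* particleJob,
                     ID3D12CommandList* mainPass, ID3D12Fence* fence, UINT64 value)
    {
        gfxQueue->ExecuteCommandLists(1, &shadowPass);       // depth-only, ALUs mostly idle
        computeQueue->ExecuteCommandLists(1, &particleJob);  // overlaps with the shadow pass

        computeQueue->Signal(fence, value);   // compute results are ready...
        gfxQueue->Wait(fence, value);         // ...before the main pass consumes them
        gfxQueue->ExecuteCommandLists(1, &mainPass);
    }

The point is that the shadow pass and the particle job aren't fenced against each other, so the compute work can soak up the ALU time the depth-only pass leaves idle, which is the "filling up" I mentioned above.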
Obviously it depends quite a bit on what kinds of compute jobs you're running and what else is happening concurrently on the GPU. However, it certainly isn't as cut-and-dried as "running async compute shaders always takes away processing time from graphics," if that's what you're suggesting.