6. GPU compute context switch and GPU graphics pre-emption: GPU tasks can be context switched, making the GPU in the APU a multi-tasker. Context switching means faster interoperation between application, graphics, and compute work, so users get a snappier, more interactive experience. As UIs become increasingly touch-focused, applications responding to touch input need access to the GPU with the lowest possible latency in order to give users immediate feedback on their interactions. With context switching and pre-emption, time criticality is added to the tasks assigned to the processors: direct hardware access for multiple users or multiple applications can be either prioritized or equalized (see the sketch after the link below).
http://www.anandtech.com/show/5847/...geneous-and-gpu-compute-with-amds-manju-hegde
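To make the latency argument concrete, here is a minimal C++ sketch of a hypothetical priority-aware, time-sliced GPU scheduler. It is not an actual HSA or driver API; the task names, quantum length, priorities, and timings are assumptions for illustration only. A long compute kernel is preempted at quantum boundaries, so a latency-critical UI task submitted later still completes within a couple of milliseconds.

```cpp
// Toy simulation of priority-aware GPU time-slicing (hypothetical model,
// not the HSA runtime API). A long compute kernel is preempted at quantum
// boundaries so a latency-critical UI task runs almost immediately.
#include <cstdio>
#include <queue>
#include <vector>

struct Task {
    const char* name;
    int remaining_ms;   // GPU work left, in milliseconds
    int priority;       // higher value = more urgent
    int submit_ms;      // simulated submission time
};

struct ByPriority {
    bool operator()(const Task& a, const Task& b) const {
        return a.priority < b.priority;   // max-heap on priority
    }
};

int main() {
    const int quantum_ms = 2;             // preemption granularity
    std::priority_queue<Task, std::vector<Task>, ByPriority> ready;
    int now = 0;

    ready.push({"compute_kernel", 20, 0, 0});   // long batch job
    bool ui_submitted = false;

    while (!ready.empty() || !ui_submitted) {
        // A touch event arrives at t=3 ms and needs GPU work for feedback.
        if (!ui_submitted && now >= 3) {
            ready.push({"ui_touch_feedback", 1, 10, 3});
            ui_submitted = true;
        }
        if (ready.empty()) { now += 1; continue; }

        Task t = ready.top();
        ready.pop();
        int slice = t.remaining_ms < quantum_ms ? t.remaining_ms : quantum_ms;
        now += slice;
        t.remaining_ms -= slice;
        if (t.remaining_ms == 0)
            std::printf("%-20s finished at t=%2d ms (latency %d ms)\n",
                        t.name, now, now - t.submit_ms);
        else
            ready.push(t);   // preempted: back into the ready queue
    }
    return 0;
}
```

In this toy run the UI task finishes about 2 ms after submission even though a 20 ms kernel was already running; without preemption it would have waited for the whole kernel to drain, which is exactly the interactivity problem the quoted point is describing.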
2.4. Preemption and Context Switching
TCUs provide excellent opportunities for offloading computation, but the current generation of TCU hardware does not support pre-emptive context switching, and is therefore difficult to manage in a multi-process environment. This has presented several problems to date:
• A rogue process might occupy the hardware for an arbitrary amount of time, because processes cannot be preempted.
• A faulted process may not allow other jobs to execute on the unit until the fault has been handled, again because the faulted process cannot be preempted.
HSA supports job preemption, flexible job scheduling, and fault-handling mechanisms to overcome the above drawbacks. These concepts allow an HSA system (a combination of HSA hardware and HSA system software) to maintain high throughput in a multi-process environment, much as a traditional multi-user OS does while exposing the underlying hardware to its users.
To accomplish this, HSA-compliant hardware provides mechanisms to guarantee that no TCU process (graphics or compute) can prevent other TCU processes from making forward progress within a reasonable time.
http://developer.amd.com/wordpress/media/2012/10/hsa10.pdf
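The forward-progress guarantee can be pictured with a similarly hypothetical round-robin sketch: jobs are preempted at quantum boundaries, and a rogue job that never yields is evicted once it exhausts a watchdog budget, so well-behaved jobs from other processes still finish. The job names, quantum, and budget are illustrative assumptions, not values defined by the HSA specification.

```cpp
// Toy model of the forward-progress guarantee (a sketch under assumed
// semantics, not HSA-specified behavior): the scheduler time-slices TCU
// jobs from different processes, and a watchdog evicts a "rogue" job that
// never completes, so the remaining jobs are not starved.
#include <cstdio>
#include <deque>

struct Job {
    const char* name;
    int remaining_ms;   // -1 models a rogue job that never finishes
    int used_ms;        // total GPU time consumed so far
};

int main() {
    const int quantum_ms = 2;     // preemption granularity
    const int watchdog_ms = 10;   // per-job budget before eviction
    std::deque<Job> run_queue = {
        {"rogue_kernel", -1, 0},  // would spin forever on run-to-completion HW
        {"video_decode",  6, 0},
        {"image_filter",  4, 0},
    };
    int now = 0;

    while (!run_queue.empty()) {
        Job j = run_queue.front();
        run_queue.pop_front();

        now += quantum_ms;
        j.used_ms += quantum_ms;
        if (j.remaining_ms > 0) j.remaining_ms -= quantum_ms;

        if (j.remaining_ms == 0) {
            std::printf("t=%2d ms: %s completed\n", now, j.name);
        } else if (j.used_ms >= watchdog_ms) {
            std::printf("t=%2d ms: %s evicted by watchdog (fault handled)\n",
                        now, j.name);
        } else {
            run_queue.push_back(j);   // preempted, rescheduled round-robin
        }
    }
    return 0;
}
```

Here the decode and filter jobs complete despite the rogue kernel hogging every slice it gets, and the rogue job is eventually removed; on hardware without preemption, the same rogue submission would block the queue indefinitely, which is the first drawback listed in the whitepaper excerpt above.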