NocturnDragon
I've found this paper and even though I still haven't read it, it looks pretty interesting.
J. Sheaffer, D. Luebke, and K. Skadron. "A Hardware Redundancy and Recovery Mechanism for Reliable Scientific Computation on Graphics Processors." In Proceedings of Eurographics/ACM Graphics Hardware 2007 (GH), Aug. 2007, to appear.
http://www.cs.virginia.edu/~skadron/Papers/sheaffer_gh2007.pdf
Looks like even in the future, GPGPUs and GPUs can continue to be one and the same. From the abstract:

We present a hardware redundancy-based approach to reliability for general purpose computation on GPUs that requires minimal change to existing GPU architectures. Upon detecting an error, the system invokes an automatic recovery mechanism that only recomputes erroneous results. Our results show that our technique imposes less than a 1.5× performance penalty and saves energy for GPGPU but is completely transparent to general graphics and does not affect the performance of the games that drive the market.
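If I'm reading the abstract right, the core idea is classic redundant execution with selective recovery: run the computation twice, compare the two result streams, and recompute only the elements that disagree. Here's a rough CPU-side sketch in Python just to illustrate the concept — the function names and the fault-injection hook are mine, not from the paper, and the real mechanism is in hardware:

```python
def redundant_map(f, xs, evaluate):
    """Apply f to each element of xs with redundancy-based error detection.

    `evaluate(f, x)` stands in for a possibly-faulty execution unit.
    Each element is computed twice; on a mismatch, only that element
    is recomputed (recovery is assumed fault-free here for simplicity).
    """
    first_pass = [evaluate(f, x) for x in xs]
    second_pass = [evaluate(f, x) for x in xs]

    results = []
    for x, r1, r2 in zip(xs, first_pass, second_pass):
        if r1 == r2:
            results.append(r1)          # both copies agree: accept
        else:
            results.append(f(x))        # mismatch detected: recompute just this one
    return results


if __name__ == "__main__":
    # Fault injection for demonstration: corrupt the 4th evaluation.
    call_count = {"n": 0}

    def faulty_eval(f, x):
        call_count["n"] += 1
        r = f(x)
        return r + 1 if call_count["n"] == 4 else r

    print(redundant_map(lambda x: x * x, list(range(6)), faulty_eval))
```

The nice property the paper seems to claim is that this doubling only applies to GPGPU workloads, so graphics rendering pays nothing for it.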