Let's not forget the size and power draw of those installations. It's going to take more than several iterations of Moore's Law to make those physically small enough to be considered for consumer use.
Aside from raw computational power, they're optimized for large batch jobs, not interactivity. Some computation runs take days to weeks to complete.
The human mind and the human eye do a lot to fool themselves.
Nobody is able to pixel-count visual recall, and that process is all about the re-creation of the feeling that you are seeing something in detail. Recall is also decidedly non-interactive, and we have game consoles that can play movies already.
There are savant painters who can recreate highly detailed scenes to which they've been only briefly exposed. Assuming one such painter happened to have the vivid form of working memory, he'd have both the detail and the vividness.
Precision down even to the number of windows in random buildings.
Note that while this example is from painting, it is said that in music there have been savants who need to hear a piece only once to play it perfectly on a given instrument. With text, a 3-second exposure to 2 pages is apparently enough for every line and word position to be recalled.
That's not simulation or rendering of a 3D scene; it's just recollecting and eventually adapting the thousands of images we have stored in our memories. And even that isn't based on pixels, but on other "features".
Yet put a precise enough high-resolution neural interface in place and I would suggest some individuals could provide data that can be reconstructed into highly accurate 2D scenes.
While the savant may not have the artistic training of a professional, it is not impossible to imagine the two abilities coexisting: a photorealistic artist (of whom there are many) who could picture high-detail photorealistic images in short order like the savant (of whom there are few).
Provide a good high resolution neural interface and this could be output directly to a monitor.
What's more exciting is whether such architectures can be instantiated on more efficient substrates. We know that moving from axons to optical fiber gains a 2.5+M speedup in information transmission. Likewise, moving from chemical synaptic gap diffusion to solid-state processing yields an immense speedup (2.5M too? less? more?). The question remains whether the other processing steps can be sped up to a similar degree... if they can, a synthetic biology brain could potentially have 2.5+M times the processing capacity of the human brain... 250M petaflops = 250,000 exaflops = 250 zettaflops.
In a single second a subjective month would go by for such a synthetic brain.
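A quick sanity check of the arithmetic above. The 2.5M speedup factor is the thread's claim; the ~100-petaflop baseline for the human brain is my assumption, chosen because it is the figure that makes the stated unit chain (250M petaflops = 250,000 exaflops = 250 zettaflops) come out exactly:

```python
# Sanity-check the back-of-envelope numbers (the speedup factor comes
# from the thread; the 100-petaflop brain estimate is an assumption).
PETA, EXA, ZETTA = 1e15, 1e18, 1e21

speedup = 2.5e6        # claimed axon -> optical fiber speedup
brain_flops = 1e17     # assumed ~100 petaflops for a human brain

synthetic_flops = speedup * brain_flops
print(synthetic_flops / PETA)    # 250000000.0 -> 250M petaflops
print(synthetic_flops / EXA)     # 250000.0    -> 250,000 exaflops
print(synthetic_flops / ZETTA)   # 250.0       -> 250 zettaflops

# Subjective time: 2.5M subjective seconds per wall-clock second
subjective_days = speedup / 86_400
print(round(subjective_days, 1))  # 28.9 days, i.e. roughly a month
```

So the "subjective month per second" claim holds up: 2.5 million subjective seconds is about 29 days.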
Brains are nothing like a processor or a von Neumann machine.
As long as their computational capabilities do not go beyond those of Turing-complete machines, their function can be translated and run on traditional machines, provided sufficient memory.