AMD: Volcanic Islands R1100/1200 (8***/9*** series) Speculation/Rumour Thread

[edit: this rant does not address the feature of GPU virtualization. Still, I wonder if both multi-tasking and virtualization are supported, i.e. one OS for multiple users, and one virtual OS with a virtual GPU for each user]

To build on that a bit, we do have all the component pieces available and running on a cheap Windows Home PC:
- multi-user: processes from several user accounts running (Administrator, SYSTEM, limited user accounts) and Fast User Switching
- multi-tasking, including running multiple programs that use the GPU
- game streaming or capture through a hardware video encoder.
 
Yes, and I already see a lot of practical use cases for offices: engineers, CAD, architects... One important difference is that it is hardware based, not software based, and it should be pretty easy to implement in a professional ecosystem.
 
Thanks. It feels like you will be able to drop a suitable pro card into any server or PC (provided it has VT-d or another IOMMU, plus hardware and a hypervisor that support each other), which is an important difference from NVIDIA GRID systems.
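
For anyone wanting to check whether their own Linux box is even set up for that kind of passthrough, here is a rough check (a sketch only; it assumes the kernel was booted with intel_iommu=on or amd_iommu=on, since /sys/kernel/iommu_groups is only populated then):

[code]
/* List IOMMU groups and the PCI devices inside each one.
 * If /sys/kernel/iommu_groups is empty or missing, the IOMMU is
 * off (or unsupported) and vfio-style passthrough won't work. */
#include <stdio.h>
#include <dirent.h>

int main(void)
{
    const char *root = "/sys/kernel/iommu_groups";
    DIR *groups = opendir(root);
    if (!groups) {
        printf("No IOMMU groups: enable VT-d/AMD-Vi in firmware and kernel.\n");
        return 1;
    }
    struct dirent *g;
    while ((g = readdir(groups)) != NULL) {
        if (g->d_name[0] == '.')
            continue;
        char path[512];
        snprintf(path, sizeof path, "%s/%s/devices", root, g->d_name);
        DIR *devs = opendir(path);
        if (!devs)
            continue;
        printf("group %s:", g->d_name);
        struct dirent *d;
        while ((d = readdir(devs)) != NULL)
            if (d->d_name[0] != '.')
                printf(" %s", d->d_name); /* PCI address, e.g. 0000:01:00.0 */
        printf("\n");
        closedir(devs);
    }
    closedir(groups);
    return 0;
}
[/code]

The GPU also needs to sit in its own group (or share one only with devices you can hand over wholesale), otherwise the hypervisor can't isolate it cleanly.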

If I understand correctly, NVIDIA GRID works by taking the command buffers and passing them through the hypervisor? Did I get that right? And how is memory protection handled when direct memory pointers are in play?
 
Kind of interested to know how the graphics pipeline handles multiple processes.
Their drivers have been terrible at it for months IME. Running a 3D rendering app of some sort (read: a game, typically, in my case, but also Google Maps in a browser window, for example) alongside standalone compute jobs has had much less of a disruptive effect on said 3D rendering app on Hawaii than on any other GPU I've owned in the past, NV or AMD. However, doing this has led to graphics driver hangs, or full-blown kernel panics (BSOD + system restart), within minutes, or sometimes even seconds.

Haven't dared try it under Windows 10 yet, but it would be awesome if I could run Folding@Home on my GPU while playing a game like Diablo 3, which doesn't really tax the shader arrays at all, even at 1440p, without having my machine keel over and die on me. World of Warcraft also runs quite well, while it actually keeps running, that is.

Every other GPU I've owned prior to Hawaii showed incredible choppiness in any form of 3D rendering whenever the F@H client was active. Hawaii, on the other hand, hardly drops a single frame, even in the thickest action. Then suddenly, out of the blue... your screen goes blue. Ugh. :( This is particularly troublesome if you're just zooming or panning around in a map as mentioned, since it makes it difficult to use your computer for basically anything while the folding client is running if the graphics driver could go postal at any moment. Maybe they finally fixed that issue in Windows 10, I don't know, but I sure as hell hope so.
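
Aside: one mitigation on the compute side, if a client exposes something like it, is slicing long kernels so each individual submission finishes well under the Windows TDR watchdog, which defaults to about 2 seconds. A rough OpenCL sketch; `queue` and `kernel` are assumed to be already set up, and the names are mine, not F@H's:

[code]
/* Sketch: split one big 1-D dispatch into slices so no single
 * clEnqueueNDRangeKernel call runs anywhere near the ~2 s TDR
 * watchdog. Assumes queue and kernel are already built. */
#include <CL/cl.h>

cl_int run_in_slices(cl_command_queue queue, cl_kernel kernel,
                     size_t total_items, size_t slice_items)
{
    for (size_t offset = 0; offset < total_items; offset += slice_items) {
        size_t remaining = total_items - offset;
        size_t global = remaining < slice_items ? remaining : slice_items;
        cl_int err = clEnqueueNDRangeKernel(queue, kernel, 1,
                                            &offset,  /* global offset */
                                            &global,  /* slice size */
                                            NULL, 0, NULL, NULL);
        if (err != CL_SUCCESS)
            return err;
        /* Draining the queue between slices gives the scheduler a
         * window to run graphics work instead of one long kernel. */
        err = clFinish(queue);
        if (err != CL_SUCCESS)
            return err;
    }
    return CL_SUCCESS;
}
[/code]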
 
I don't know if it is linked to the F@H client specifically or something similar... I can run an OpenCL-intensive raytracing job on both my GPUs and run anything else at the same time (it just slows the render down a lot, and doesn't leave enough performance for gaming anyway).

Anyway, for a case like that, I run the OpenCL-intensive render on my second GPU and use the first one for gaming.

My renderer lets me choose which device to use (CPU, GPU1, GPU2, in any combination), which simplifies life in those cases.
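
For the curious, this is roughly what that device picking looks like through the OpenCL API; a minimal sketch, with error checking omitted:

[code]
/* Sketch: enumerate OpenCL devices so a renderer can offer
 * CPU / GPU1 / GPU2 as selectable targets. Error checks omitted. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, platforms, &nplat);

    for (cl_uint p = 0; p < nplat; ++p) {
        cl_device_id devices[8];
        cl_uint ndev = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &ndev);

        for (cl_uint d = 0; d < ndev; ++d) {
            char name[256];
            cl_device_type type;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof name, name, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_TYPE, sizeof type, &type, NULL);
            printf("platform %u, device %u: %s (%s)\n", p, d, name,
                   (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "CPU/other");
        }
    }
    return 0;
}
[/code]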
 
Great News Everyone!

Running GPU compute + 3D rendering at the same time appears to no longer hang the driver or crash the system. I've been gaming and folding now for like an hour and a half at least and encountered no issues!

*makes happy little dance*
 
On Windows 10?
 
Yes.

Played WoW for hours with folding running simultaneously; no problem whatsoever from what I could tell. A few dropped frames here and there, but nothing that degraded the gaming experience.
 
Good news then... I wonder whether this is linked more to the WDDM 2.0 driver model than to a fix in the Catalyst driver itself. Something to do with the reaction latency of the resource management, maybe.
 
Came back to my PC this morning after leaving folding running overnight. The box was intermittently sluggish and unresponsive, but seemingly still working all the same.

No idea if there might still be driver quirks needing to be worked out, or if it's down to my factory-overclocked Hawaii board, for example. (PowerTune settings are still ignored while running GPU compute, for instance, but that should be unrelated.) I'mma reboot this box; Windows Update is periodically whining at me that it can't install updates anyway. Maybe it's for the best, all things considered... :D
 
*makes happy little dance*
Aw f--k. I was premature in my celebrations.

Yesterday I probably gamed for six hours straight with no issues, then today I ran two timewarp dungeons, and just as I was done with the second, *boom*, there she goes belly-up on me with a 0x116 VIDEO_TDR_ERROR BSOD in atikmpag.sys.

Damn. Damndamndamn, DAMN. :(
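
0x116 is just the TDR watchdog concluding that the GPU hung. For what it's worth, the timeout it enforces lives in the registry, and a quick sketch can read it back (Windows-only, links against advapi32; if the value is absent, the default of roughly 2 seconds applies):

[code]
/* Sketch: read the Windows TDR timeout (in seconds) from the
 * registry. VIDEO_TDR_ERROR (0x116) fires when a GPU job blows
 * past it. If TdrDelay is absent, the default (~2 s) is in effect.
 * Build: cl tdr.c advapi32.lib */
#include <stdio.h>
#include <windows.h>

int main(void)
{
    DWORD delay = 0, size = sizeof delay;
    LONG rc = RegGetValueA(HKEY_LOCAL_MACHINE,
                           "SYSTEM\\CurrentControlSet\\Control\\GraphicsDrivers",
                           "TdrDelay", RRF_RT_REG_DWORD, NULL, &delay, &size);
    if (rc == ERROR_SUCCESS)
        printf("TdrDelay = %lu s\n", (unsigned long)delay);
    else
        printf("TdrDelay not set; the default timeout applies.\n");
    return 0;
}
[/code]

Raising that value is a workaround rather than a fix, but it can at least help tell a slow compute kernel apart from a genuine hang.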
 
If I understand correctly, NVIDIA GRID works by taking the command buffers and passing them through the hypervisor? Did I get that right? And how is memory protection handled when direct memory pointers are in play?

I have no idea. Perhaps that explains why the current GRID is a vertically integrated solution.
On a regular PC, I don't really know what stops WoW from grabbing the output of Google Maps and uploading it to Blizzard, or the other way around.
 
That's a different thing. "Grabbing" is a kernel function (you don't access the frame buffer from userland unless it is specifically mapped into it), and it has nothing to do with memory protection.
If NVIDIA's shaders don't allow direct memory access the way GCN's do, memory protection is made easy.
However, you may have to 'fix' the pointers in the queues (and their data) when they travel to the hypervisor. A fully scaled hardware solution would have your queues inside each VM, and would not require trips to the hypervisor except for funky requests like 'change the card configuration' or such.
...or video memory sharing, which is what I think happens on the XB1, since you have a game and Windows on the same memory pool, and I bet my ... that each GPU address space is fully protected.
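
To make the pointer fix-up idea concrete, here is a toy sketch. Every structure in it is made up for illustration (real command packets look nothing like this); it only shows why guest pointers can't cross the VM boundary untouched:

[code]
/* Toy sketch of the 'fix pointers' step: packets written in a guest
 * carry guest-physical addresses, which the host side must translate
 * (and bounds-check) before the GPU may consume them. All structures
 * here are hypothetical. */
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint32_t opcode;     /* hypothetical DRAW / DISPATCH / etc. */
    uint64_t buffer_gpa; /* guest-physical address of a data buffer */
    uint32_t size;
} gpu_packet;

/* Guest-physical -> host address, with a bounds check standing in
 * for the real memory-protection work. Returns 0 on a bad address. */
static uint64_t translate_gpa(uint64_t gpa, uint64_t guest_base,
                              uint64_t guest_limit, uint64_t host_base)
{
    if (gpa < guest_base || gpa >= guest_limit)
        return 0; /* packet points outside this VM: reject it */
    return host_base + (gpa - guest_base);
}

/* Walk a queue of packets and patch each pointer in place. With a
 * fully scaled hardware solution (per-VM queues + IOMMU translation)
 * this software pass disappears entirely. */
int fix_queue(gpu_packet *q, size_t n, uint64_t guest_base,
              uint64_t guest_limit, uint64_t host_base)
{
    for (size_t i = 0; i < n; ++i) {
        uint64_t host = translate_gpa(q[i].buffer_gpa, guest_base,
                                      guest_limit, host_base);
        if (host == 0)
            return -1; /* malformed submission: drop it */
        q[i].buffer_gpa = host;
    }
    return 0;
}
[/code]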
 