post n. 28: http://www.xtremesystems.org/forums...1-Nvidia-unveils-the-GeForce-GTX-780-Ti/page2
(I admit, however, that I don't know who the user SKYMTL is.)
SKYMTL is the boss, reviewer, and writer of HardwareCanucks: http://www.hardwarecanucks.com/
In general he's really well informed.
Could be, wasn't GF110 also known as GF100b?
Added nvidia-uvm.ko, the NVIDIA Unified Memory kernel module, to the NVIDIA Linux driver package. This kernel module provides support for the new Unified Memory feature in an upcoming CUDA release.
What's the difference from the "Unified Virtual Addressing" they've had so far? (Or was that a Win7/8 64-bit-only feature until now?)
No, UVA has worked on Linux for quite a while now. UVM lets the GPU page in non-page-locked CPU memory. The first implementation, which NVIDIA demonstrated at GTC in the spring and called UVM-lite, works on Kepler as well, but requires that the memory be allocated through a special malloc that uses a kernel extension to handle page faults coming from the GPU. If you allocate memory with this allocator, you don't have to explicitly move data to and from the GPU; it gets paged back and forth as required by your program. The full UVM that Maxwell brings should remove the need for a special allocator, letting the GPU access any memory in the system.
Any memory in the process space I assume? (Unless you are running something on the GPU from root.)
There will still be page faults; it's just that the GPU and CPU will share the same page tables with Maxwell. When you write an application using UVM, you won't need to copy data to the GPU explicitly. You just allocate data on the CPU as normal and run the program on the GPU using pointers from your CPU program. When the GPU accesses data that's sitting on the CPU, the GPU page faults and requests the page from the CPU. The CPU then pages the memory out as normal with virtual memory, except that instead of paging it to disk, it pages it to the GPU. When the CPU accesses memory that's sitting on the GPU, the CPU page faults and pages it back in from the GPU. For simple applications, you won't have to worry about where your memory is, which will make it easier to program GPUs.
Thanks for the detailed explanation.
So, with Maxwell, there won't be page faults handled by a software emulation to copy data to the GPU, but native hardware access?
As I said, I'd guess that this particular kernel extension is for UVM-lite, which is very similar, except it can only operate on memory allocated with NVIDIA's allocator: it can't access arbitrary memory in the process. But UVM-lite runs on Kepler and so it's a step towards the full UVM.
Has UVM-lite already been released for Kepler GPUs in the Windows drivers? I never heard NVIDIA make a big fuss about it. Or is it still to be released?
We got programmable. We got unified. We got general compute. We went from VLIW to "scalar". We have scalable tessellation.
What's the next frontier in graphics architecture? There are some rumblings about Maxwell adding chunkier caches, but that on its own isn't very exciting.
Au contraire, more caches, more coherency, and more CPUs on die are very exciting.
Unified memory space. GPU can access system RAM and CPU can access video RAM.