Will hUMA mean CPUs and discrete GPUs can share a unified pool of memory, too? Not quite. When the question came up during the briefing, AMD said hUMA "doesn't directly unify those pools, but it does put them in a unified address space." The company then stressed that bandwidth won't be consistent between the CPU and discrete GPU memory pools—that is, GDDR5 graphics memory will be quicker, while DDR3 system memory will lag behind, so some hoop-jumping will still be required. (As an interesting side note, AMD then added that "people will be able to build APUs with either type of memory [DDR or GDDR] and then share any type of memory between the different processing cores on the APU.")
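hUMA itself is a hardware capability rather than a programming interface, but the practical payoff of a unified address space is easy to picture: a single pointer can be handed back and forth between CPU and GPU code without staging explicit copies between separate buffers. The sketch below illustrates that idea using CUDA's managed-memory API as an illustrative stand-in (it is not AMD's hUMA programming model, and the kernel and variable names are ours): one allocation is written by the CPU, updated by the GPU, and read back by the CPU through the same pointer.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// GPU kernel that increments each element of the shared buffer in place.
__global__ void increment(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main() {
    const int n = 1 << 20;
    int *data = nullptr;

    // One allocation visible to both CPU and GPU through a shared address
    // space; no explicit host-to-device or device-to-host copy is issued.
    cudaMallocManaged(&data, n * sizeof(int));

    for (int i = 0; i < n; ++i) data[i] = i;       // CPU writes the buffer
    increment<<<(n + 255) / 256, 256>>>(data, n);  // GPU touches the same pointer
    cudaDeviceSynchronize();                       // wait before the CPU reads

    printf("data[42] = %d\n", data[42]);           // prints 43
    cudaFree(data);
    return 0;
}
```

The same pattern is what a shared CPU/GPU address space enables in general; the caveat AMD raised still applies, since which physical pool (DDR3 or GDDR5) actually backs the allocation determines the bandwidth each side sees.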