JEDEC's Wide I/O 2 isn't the same as HBM, right? Or has JEDEC given up and simply adopted Hynix's work as their standard?
JEDEC has multiple standards. As it stands, HBM is JESD235: http://www.jedec.org/standards-documents/results/jesd235
Wonderful! I thought it was going to be Hynix exclusive for some reason.
Starting with CUDA 6, Unified Memory simplifies memory management by giving you a single pointer to your data, and automatically migrating pages on access to the processor that needs them. On Pascal GPUs, Unified Memory and NVLink will provide the ultimate combination of simplicity and performance. The full-bandwidth access to the CPU’s memory system enabled by NVLink means that NVIDIA’s GPU can access data in the CPU’s memory at the same rate as the CPU can. With the GPU’s superior streaming ability, the GPU will sometimes be able to stream data out of the CPU’s memory system even faster than the CPU.
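For anyone who hasn't played with it yet, here's a minimal sketch of what that single-pointer model looks like in practice (the kernel, sizes, and names are just illustrative, not from NVIDIA's post): one cudaMallocManaged allocation that both the CPU and the GPU touch, with pages migrating on demand.

```cuda
// Minimal sketch of CUDA Unified Memory (CUDA 6+): a single pointer is visible
// to both host and device, and pages migrate to whichever processor touches them.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;                 // GPU touches the managed pages
}

int main() {
    const int n = 1 << 20;
    float *data = nullptr;

    // One allocation, one pointer, usable from CPU and GPU alike.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f;   // CPU writes (pages resident on host)

    scale<<<(n + 255) / 256, 256>>>(data, 2.0f, n); // GPU reads/writes (pages migrate)
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);            // CPU reads the result through the same pointer
    cudaFree(data);
    return 0;
}
```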
So many years waiting for that.
Cool.
I think high-bandwidth CPU memory access is not for x86 CPUs, just POWER8 and derivatives.
I do hope to find something more recent.
Try the AMD presentations evangelizing the industry.
http://sites.amd.com/la/Documents/TFE2011_001AMC.pdf
http://www.microarch.org/micro46/files/keynote1.pdf
P.S: Nvidia will actually be using the HBM tech that AMD co-developed with SK-Hynix
For Pascal, or could we expect to see it earlier?
Pretty sure the latest roadmap from March 2014 showed Pascal introducing stacked DRAM in 2016.
It says 3D rather than stacked, is there a difference between the two or is Nvidia just using a different term for the same thing?
http://www.eetimes.com/author.asp?section_id=36&doc_id=1321693&page_number=2
Pascal (the subject of a separate discussion/article) has many interesting features, not the least of which is built-in, or rather I should say, built-on, memory. Pascal will have memory stacked on top of the GPU. That not only makes a tidier package; more importantly, it will give the GPU 4x higher bandwidth (~1 TB/s), 3x larger capacity, and 4x better energy efficiency per bit.
Basically the already high-speed GPU to video memory bandwidth will go up four orders of magnitude. That alone will help speed up things, but Nvidia took it one-step further and added GPU-to-GPU links that allow multiple GPUs to look like one giant GPU.
Today a typical system has one or more GPUs connected to a CPU using PCI Express. Even at the fastest PCIe 3.0 speeds (8 Giga-transfers per second per lane) and with the widest supported links (16 lanes) the bandwidth provided over this link pales in comparison to the bandwidth available between the GPU and its system memory.
NVLink addresses this problem by providing a more energy-efficient, high-bandwidth path between the GPU and the CPU at data rates 5 to 12 times that of the current PCIe Gen3. NVLink will provide between 80 GB/s and 200 GB/s of bandwidth.
One order of magnitude, maybe (there's no universal definition of what an order of magnitude is, but at least the factor is bigger than 2, ln 10, or e!)
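To put rough numbers on that, here's a back-of-the-envelope sketch, assuming PCIe 3.0 x16 (8 GT/s per lane, 128b/130b encoding) and the 80-200 GB/s NVLink figures quoted above; all values are per direction.

```cuda
// Back-of-the-envelope check of the "5 to 12 times PCIe Gen3" claim.
#include <cstdio>
#include <cmath>

int main() {
    const double gt_per_lane = 8.0;            // PCIe 3.0: 8 giga-transfers/s per lane
    const double encoding    = 128.0 / 130.0;  // 128b/130b line encoding overhead
    const double lanes       = 16.0;

    double pcie_gbs  = gt_per_lane * encoding * lanes / 8.0;  // ~15.75 GB/s for x16
    double nvlink_lo = 80.0, nvlink_hi = 200.0;               // GB/s, from the article

    printf("PCIe 3.0 x16 : %.2f GB/s\n", pcie_gbs);
    printf("NVLink ratio : %.1fx to %.1fx\n", nvlink_lo / pcie_gbs, nvlink_hi / pcie_gbs);
    printf("orders of magnitude (base 10): %.2f to %.2f\n",
           std::log10(nvlink_lo / pcie_gbs), std::log10(nvlink_hi / pcie_gbs));
    return 0;
}
```

That works out to roughly 5x to 13x, i.e. about one decimal order of magnitude, which matches the "5 to 12 times" figure rather than "four orders of magnitude".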