Nvidia Pascal Announcement

So the first 'real world' results for the SMP (Simultaneous Multi-Projection) technology in Pascal have been benchmarked at Computerbase.de.
They see around a 20% gain when using it with 3 screens playing iRacing, which roughly aligns with the developers of Obduction, who have said the VR version's gains are between 20-30% (interesting gains when compared to 3 screens of iRacing).
Not bad considering how new the technology is.
Here is the Computerbase.de article: https://www.computerbase.de/2016-09/pascal-smp-iracing-benchmark/

Cheers
 
IBM Bests Intel's HPCs

"Designed from the ground up for the cognitive era and datacenter Big Data, our on-chip NVLink to Nvidia Tesla P100 Pascal GPUs delivers 5X faster interconnect data transfer than PCIe — a bi-directional 160 Gbytes per second from CPU to GPU compared to 32 Gbytes for PCIe," Dylan Boday, senior offering manager of Linux infrastructures, told EE Times.
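Quick sanity check of those quoted numbers (my own arithmetic, not from the article):

```python
# Interconnect figures as quoted: NVLink at 160 GB/s bi-directional CPU<->GPU,
# PCIe at 32 GB/s (a PCIe 3.0 x16 link). Checking the claimed "5X faster".
nvlink_gbs = 160
pcie_gbs = 32
speedup = nvlink_gbs / pcie_gbs
print(speedup)  # -> 5.0, matching the 5X claim
```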

[Image: rcj_Supercomputer_In-a-Slot_2.png]

"Overall IBM now offers 2.5X better performance than any Hewlett Packard x86 systems, plus we offer a water cooled option for improved efficiency and the ability to run indefinitely at peak turbo-boost speeds," Boday said. "We can also accommodate 42 percent more virtual machines (VMs) per server than Hewlett Packard's x86 systems or can accommodate the same number of users while reducing the number of required servers by two-thirds."

All comparisons were made to Intel E5v4-based Hewlett Packard servers, and when asked, IBM had no comparison data available for Intel Xeon Phi-based supercomputers. However, the U.S. Department of Energy's (DoE's) Oak Ridge National Laboratory (ORNL), Lawrence Livermore National Laboratory (LLNL) and several large corporations have already signed on for initial installations in 2017. Pricing starts at $6000.
http://www.eetimes.com/document.asp?doc_id=1330417
 
I don't understand the logic behind GDDR5 on the P40; the P4 I understand, given its low clocks (impressive power efficiency, though).
 
What is surprising is just how long the 11 and 12 Gb/s parts have been in sampling status, nearly 8 months now.
I wonder whether it is not going to plan, or whether they are trying to reduce production costs before moving to mass production.

Cheers
 
Simple: there's currently no way to build a GP102 card with more than 12GB of GDDR5X. The Titan X has 12 physical memory chips, each 8Gbit, for 12 × 8Gbit = 12GByte. If you look at Micron's site, 8Gbit is the only size available:
https://www.micron.com/products/dram/gddr/gddr5x-part-catalog#/
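The capacity math above can be sketched out (my own back-of-envelope, assuming the standard one 32-bit GDDR5X chip per channel):

```python
# GP102 / Titan X capacity math: a 384-bit bus with one 32-bit chip per
# channel gives 12 chips; at 8Gbit per chip (the only density Micron
# currently lists), that caps out at 12 GB.
bus_width_bits = 384
chip_io_bits = 32
chips = bus_width_bits // chip_io_bits         # 12 memory chips
chip_density_gbit = 8                          # 8Gbit per chip
total_gbyte = chips * chip_density_gbit // 8   # Gbit -> GByte
print(chips, total_gbyte)  # -> 12 12, i.e. 12 chips for 12 GB
```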

That 'impossible, no way' 24GB of GDDR5X was demoed working at SIGGRAPH last month. Not shipping yet, though, probably because, as you say, the denser chips are not widely available yet.

Clamshell mode, I assume
 
Samsung mentioned at the time they went into mass production with their 4GB HBM package (late January 2016) that they planned to produce an 8GB package this year.
I have not seen any news indicating whether that is still on track or suffering delays.

Cheers
Edit:
LOL, never mind, mind-blank on my part; you guys are talking about GDDR5X :)
 
As a reminder, I believe GDDR5X is pegged to eventually be available in 4Gb, 6Gb, 12Gb, and 16Gb capacities alongside the current 10Gbps 8Gb stuff that we know and love.

http://www.anandtech.com/show/10193/micron-begins-to-sample-gddr5x-memory

So a 384-bit chip like GP102 could eventually support 24GB of GDDR5X without clamshelling via high capacity 16Gb chips. Anyone know about the timeline for that?
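A quick sketch of the two routes to 24 GB being discussed (my own arithmetic, not from the articles):

```python
# Two ways a 384-bit GP102 could reach 24 GB:
chips = 384 // 32  # 12 channels, one 32-bit chip per channel

# Route 1: future 16Gbit GDDR5X chips, one chip per channel (no clamshell)
print(chips * 16 // 8)     # -> 24 (GB)

# Route 2: clamshell mode, two of today's 8Gbit chips sharing each channel
print(2 * chips * 8 // 8)  # -> 24 (GB)
```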

EDIT: As we're tangentially on the topic, does anyone know what GDDR6 is supposed to bring? Based on Samsung's discussion at Hot Chips, it superficially appears similar to GDDR5X. But I think only Micron got on board for GDDR5X, so I figure Samsung has to have their own plans for a new GDDR-type memory down the road.

http://www.anandtech.com/show/10589...as-for-future-memory-tech-ddr5-cheap-hbm-more
 