Nvidia Pascal Speculation Thread

No one thought that all chips in the Pascal and Arctic Islands families would carry HBM2.

My guess is HBM2 is going to the top Pascal model GP100 and the top 2 or 3 Radeon GPUs (e.g. Fury 2, R490 and R480).
The rest will have to make do with GDDR5, and this new-generation GDDR5X seems great for that.
 
With this refresh of GDDR5, HBM2 could be pushed to the exclusive server SKUs initially, to rake in the higher and more secure margins, until production volumes permit smoother mainstream market adoption.
 
It's a real bummer if it's still a year away from release. I'm hoping for the first half of next year.
 
Heh, I've always said that for the foreseeable future HBM only makes sense at the highest end, due to its price and to it not being necessary elsewhere, and that was before learning about GDDR5X today.

I think this seals it.

With 448GB/s out of a 256 bit bus, I really don't see why HBM is worthwhile for anything but high-BW workloads such as compute.

And who cares if the lower-performance GPUs end up bigger than the top dog? That only increases the attractiveness of the latter.
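
For anyone checking the math, that 448 number is just bus width times per-pin rate over eight; a quick sketch in Python (14 Gbps being the top of GDDR5X's announced range):

# Peak bandwidth = bus width (bits) x per-pin rate (Gbps) / 8 bits per byte
def peak_bw_gbs(bus_bits, gbps_per_pin):
    return bus_bits * gbps_per_pin / 8

print(peak_bw_gbs(256, 14))  # 448.0 GB/s -- 256-bit GDDR5X at 14 Gbps
print(peak_bw_gbs(512, 5))   # 320.0 GB/s -- 512-bit GDDR5 at 5 Gbps (Hawaii)
print(peak_bw_gbs(4096, 1))  # 512.0 GB/s -- Fiji's 4096-bit HBM1 at 1 Gbps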
 
With 448GB/s out of a 256 bit bus, I really don't see why HBM is worthwhile for anything but high-BW workloads such as compute.
The current GDDR PHYs are already quite vast; would anyone expect 448GB/s PHYs to be proportionally any smaller? I think the Radeon Fury illustrated the value of HBM quite well: as already mentioned, the less-than-stellar scaling of new semiconductor processes isn't going to allow miracles, so smaller memory interfaces will definitely be advantageous. Also, HBM production will have matured by the time NV launches a product featuring it; there's no genuine reason to fear shortages or the like without any indication of such.
 
With 448GB/s out of a 256 bit bus, I really don't see why HBM is worthwhile for anything but high-BW workloads such as compute.

And who cares if the lower-performance GPUs end up bigger than the top dog? That only increases the attractiveness of the latter.
Laptop GPUs. Both power consumption and size matter in this case. And GDDR5X makes the former even worse than original GDDR5.

I mean, how many customers of dedicated mid-range GPUs for desktops are left nowadays?
The current GDDR PHYs are already quite vast; would anyone expect 448GB/s PHYs to be proportionally any smaller?
Proportionally? No.
But not significantly bigger either. Not much is changing beyond a doubled prefetch size compared to GDDR5. That's a lot cheaper than doubling the interface width would be.
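
A rough sketch of why the doubled prefetch doubles the pin rate without touching the arrays (the clock figure below is illustrative, not from any spec):

# Per-pin data rate = internal column clock x prefetch depth.
# GDDR5X doubles the prefetch (8n -> 16n), so the pin rate doubles
# while the DRAM cell arrays keep running at the same speed.
core_clock_mhz = 875  # illustrative figure only

for name, prefetch in (("GDDR5, 8n", 8), ("GDDR5X, 16n", 16)):
    print(f"{name}: {core_clock_mhz * prefetch / 1000:.0f} Gbps per pin")
# GDDR5, 8n: 7 Gbps per pin
# GDDR5X, 16n: 14 Gbps per pin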
 
With 448GB/s out of a 256 bit bus, I really don't see why HBM is worthwhile for anything but high-BW workloads such as compute.

And who cares if the lower-performance GPUs end up bigger than the top dog? That only increases the attractiveness of the latter.

I'm curious how they will roughly double the Gbps over the highest GDDR5. It didn't seem like anyone was really planning on driving the standard GDDR5 interface to that bit rate per pin.
The article on it is rather light on details and seems primarily focused on the increase in DRAM prefetch, which is one way to better feed data to the physical interface, but that's not a huge revelation, as the slide on that point admits. Is that 256-bit bus the same as the one we currently know?
Fiji's DRAM power savings over 512 bits of 5Gbps GDDR5 was roughly estimated at 20-30W by Anandtech, so where does a half-wide GDDR5X bus at 3x the speed (over PCB?) fit?
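
Back of the envelope, using Anandtech's rough figure (the 25 W midpoint and the per-bit framing are my own assumptions):

# Turn the quoted power estimate into energy per bit moved.
gddr5_bits_per_s = 320e9 * 8   # 512-bit GDDR5 @ 5 Gbps = 320 GB/s
savings_w = 25                 # midpoint of the 20-30 W estimate
print(savings_w / gddr5_bits_per_s * 1e12)  # ~9.8 pJ/bit saved by Fiji's HBM

# GDDR5X at 448 GB/s on a 256-bit bus pushes 1.4x the data over PCB
# traces; whether its pJ/bit lands nearer GDDR5 or HBM decides its fit.
print(448 / 320)               # 1.4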

AMD's adoption of a new high-speed memory interface might be interesting to see, since they gave up much of their engineering and IP in that realm.
 
Having 8 x 8 Gb GDDR5X chips (for an 8 GB frame buffer) might be cheaper than HBM2.
Still a 256 bit bus.
Technically it would be more like a 512 bit bus, as 2 data lanes are paired for differential IO:
256 = 8 x 32 bit lanes (i.e. 8 x 64 bit lanes that are paired).
It should be able to provide more bandwidth than a 512 bit single-ended bus.
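
Quick sanity check of the numbers above (the differential pairing is my speculation, not something the GDDR5X article states):

# Capacity: 8 chips x 8 Gb each = 64 Gb = 8 GB.
print(8 * 8 / 8)   # 8.0 GB frame buffer

# Under the differential reading: 256 logical lanes (8 chips x 32 bits),
# each lane carried on a 2-wire pair = 512 physical wires.
lanes = 8 * 32
print(lanes * 2)   # 512 -- hence "more like a 512 bit bus"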
 


This doesn't bode so well for the rumor. Yes, I can see the low end and mid range using it, or maybe they'll just use GDDR5. They said they will make an announcement about it during 2016. Isn't that a bit late for the next generation of GPUs? Has mass production even started?

I have seen two announcements (financial and investment) about Micron trying everything possible to grow its market share against Samsung (massive investment in the South Korean semiconductor industry).

Looks more and more to me like the old NAND story we have already seen play out between those three actors (Hynix, Samsung and Micron).
 
I'm curious how they will roughly double the Gbps over the highest GDDR5. It didn't seem like anyone was really planning on driving the standard GDDR5 interface to that bit rate per pin.
Yes, I'm surprised by this as well. I always thought that even 10 Gbps was pushing it, and now we're going towards 14 Gbps. Pretty crazy. Maybe they're using improved PCB materials as well?
 
Laptop GPUs. Both power consumption and size matter in this case. And GDDR5X makes the former even worse than original GDDR5.
I don't think that market warrants a completely different piece of silicon.

I mean, how many customers of dedicated mid-range GPUs for desktops are left nowadays?
Judging by various forums and the shelf space at local Best Buys and Fry's, I wouldn't be surprised if it's the market segment with the largest volume. (Not necessarily the largest revenue or profit.) It makes sense when most monitors are 1080p. And $300 is still quite a bit of money.
 
Yes, I'm surprised by this as well. I always thought that even 10 Gbps was pushing it and now we're going towards 14Gbps. Pretty crazy. Maybe they're using improved PCB material as well?

I tried to explain that in a post above (though I reckon you might need to be an electrical engineer to follow it).
Differential signalling uses 2 PCB wires per pair, which enables higher frequencies and less interference.
 
That seems like a probable direction, although it's not mentioned in the article about GDDR5X.

Differential GDDR5 was one of the directions evaluated in the leadup to HBM. The power cost of whatever form of differential GDDR5 interface they were thinking of at the time was why they discarded it.
 