Will stacked RAM only improve speed, or will we also get a lot more capacity? Seems like we have been stuck at 2-3 GB for a long time.
Both, depending on your definition of a lot more capacity.
If it's first-gen HBM, getting to 4 GB would require four stacks, each providing 128 GB/s of bandwidth (give or take some variation from speed grades).
A two-stack design would provide 256 GB/s of bandwidth, which is decent but not difficult to reach with existing memory buses, and capacity would be constrained to 2 GB.
A four-stack design would provide 512 GB/s, about 60% above what a consumer Hawaii board gets, at the same 4 GB of capacity. The high-end FireStream boards would see a massive drop in capacity with first-gen HBM.
I have not seen mockups going beyond 4 stacks on the interposer. Gen 2 should significantly help with capacity and bandwidth.
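For reference, here's a quick back-of-the-envelope sketch of the math above, assuming the commonly cited first-gen HBM figures of 1 GB and 128 GB/s per stack and consumer Hawaii's 320 GB/s (512-bit GDDR5 at 5 Gbps):

```python
# Rough first-gen HBM scaling, assuming the commonly cited per-stack figures
# of 1 GB and 128 GB/s (nominal speed grade; real parts may vary).
PER_STACK_CAPACITY_GB = 1
PER_STACK_BANDWIDTH_GBS = 128
HAWAII_BANDWIDTH_GBS = 320  # consumer R9 290X: 512-bit GDDR5 at 5 Gbps

for stacks in (1, 2, 4):
    capacity_gb = stacks * PER_STACK_CAPACITY_GB
    bandwidth_gbs = stacks * PER_STACK_BANDWIDTH_GBS
    delta_vs_hawaii = bandwidth_gbs / HAWAII_BANDWIDTH_GBS - 1
    print(f"{stacks} stack(s): {capacity_gb} GB, {bandwidth_gbs} GB/s "
          f"({delta_vs_hawaii:+.0%} vs. consumer Hawaii)")
```

With those assumptions, two stacks land at 2 GB and 256 GB/s, and four stacks at 4 GB and 512 GB/s, which is where the 60% figure over Hawaii comes from.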
McConnell and Chung said AMD's next high-end graphics processing unit launch has been delayed until the second half of 2015.
Was there still anyone left who expected desktop GPUs in 20nm? Doesn't bode well for 20nm availability.
I haven't seen any new Iceland rumors lately, but hopefully that GPU will show up before 2015 H2. I find it hard to believe that AMD will be without anything new for another 8 months. They must have been doing something more than Tonga in the last year...
Sure looks like it. gm204 already shows that Nvidia thought it was worth staying with 28nm, even though 20nm is available for mass production (Apple). Are they actually skipping straight to 16?
20nm is a choice between power and speed. Can't have both. And it's only expected to become cheaper per transistor than 28nm by the time 16nm comes around. So it's not even a cost reduction.
I suppose so. But if it's both lower power and faster, at least you get some value for your money. Is 16nm not going to be expensive for a while too?