AMD: Pirate Islands (R* 3** series) Speculation/Rumor Thread

That's exactly my gripe: it looks like they wasted a fantastic piece of enabling technology on something that doesn't deserve it.
AMD is almost facing bankruptcy, so investing their barely existent resources in massive R&D on 28nm seems wrong... The upcoming Fiji cards are more or less a tech-demo/pilot/pipe-cleaner for new memory-related technologies. In reality, HBM gen2 is the first production-worthy tech for GPUs. nVidia simply doesn't need an HBM gen1 test vehicle, unlike AMD - the FinFET generation will be interesting.
 
That's exactly my gripe: it looks like they wasted a fantastic piece of enabling technology on something that doesn't deserve it.

I wonder if there is something contractual where the design partnership expects someone to actually use the first-generation product before HBM2 arrives.
I'm still not sure what changed between HBM1 and HBM2 to enable the higher stacking. A process/die thinning change?

At any rate, AMD has spun off a high-speed IO team to Synopsys and might not have much left that could go off of an interposer, besides an 832-bit GDDR bus, if such a thing would fit.

Having nice tech can enable new performance heights, or compensate for serious glass jaws. AMD's DVFS for its GPUs is pretty good, and probably gives it a speed bin or two above what could be managed otherwise, which gets it to where it is today.
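To make the "speed bin or two" point concrete, here's a minimal sketch of a boost-style DVFS loop; the states, power limit and power model below are made-up illustrations of the idea, not AMD's actual algorithm.
Code:
# Minimal sketch of a boost-style DVFS loop -- all numbers invented for
# illustration: pick the highest (clock, voltage) state whose estimated
# power fits the board limit for the sampled activity level.

DPM_STATES = [(300, 0.85), (700, 0.95), (925, 1.10), (1050, 1.20)]  # MHz, V
POWER_LIMIT_W = 250.0

def estimated_power(clock_mhz, voltage, activity):
    # crude dynamic-power model: P ~ activity * f * V^2 (arbitrary scale factor)
    return activity * clock_mhz * voltage ** 2 * 0.17

def pick_state(activity):
    # walk the states from fastest to slowest, keep the first one that fits
    for clock, volt in reversed(DPM_STATES):
        if estimated_power(clock, volt, activity) <= POWER_LIMIT_W:
            return clock, volt
    return DPM_STATES[0]  # fall back to the lowest state

print(pick_state(0.7))  # light load: top bin, (1050, 1.2)
print(pick_state(1.0))  # heavy load: drops a bin, (925, 1.1)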
 
At any rate, AMD has spun off a high-speed IO team to Synopsys and might not have much left that could go off of an interposer, besides an 832-bit GDDR bus, if such a thing would fit.
One would assume that Synopsys is still used as an IP provider for future developments. I look at it as the equivalent of the sale-and-lease-back operation of their real estate.
 
One would assume that Synopsys is still used as an IP provider for future developments. I look at it as the equivalent of the sale-and-lease-back operation of their real estate.

That would be the 832-bit GDDR5 bus, basically tacking on an extra 2/3 of Hawaii's existing controllers rather than creating a new controller, since the only new thing AMD has is HBM.
 
Various DRAM flavors can employ a burst-chop mode, but depending on the standard it may cut the number of cycles during which data is transferred, not the number of cycles in a burst. DDR3's mode will simply not select data for the first 4 or last 4 beats, but the burst isn't shorter.
The data sheets for GDDR5 that I've found didn't show that option or any burst length but 8.
Thanks for the info, didn't know that. But in relation to what I was saying, storing a compressed tile to RAM will involve 0 or more maximum length bursts. If there is any leftover data it can be transferred using shorter bursts, but since HBM only has 2 burst lengths the granularity is reduced, so I figured there are fewer opportunities for that type of saving. You will of course get savings from fewer burst transfers per tile anyway, but I figure every bit helps.

edit - of course my theory is all dependent on how variable compressed tiles are so...
 
Thanks for the info, didn't know that. But in relation to what I was saying, storing a compressed tile to RAM will involve 0 or more maximum length bursts.
There's no such thing as a shorter burst for GDDR5, as best as I can tell.

Changing the burst length requires setting a mode register, which involves resetting the device settings of the DRAM. If the option for a chopped burst exists, that allows an on-the-fly shift to the shorter burst. However, depending on the implementation it might not mean that the bus can provide useful transfers in its place.
 
Changing the burst length requires setting a mode register, which involves resetting the device settings of the DRAM. If the option for a chopped burst exists, that allows an on-the-fly shift to the shorter burst. However, depending on the implementation it might not mean that the bus can provide useful transfers in its place.
Oh, now I see what you're saying, thanks for correcting me. So the granularity of savings = Uncompressed_tile_size / current_access_granularity. Thanks again.
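To put a toy number on that, here's the kind of arithmetic being described; the tile sizes and access granularities below are invented for illustration, not real GCN/HBM parameters.
Code:
import math

def bus_bytes(tile_bytes, access_granularity_bytes):
    # bytes actually moved over the bus for one tile write: whole accesses only
    bursts = math.ceil(tile_bytes / access_granularity_bytes)
    return bursts * access_granularity_bytes

uncompressed_tile = 256   # hypothetical uncompressed tile size, bytes
compressed_tile = 96      # hypothetical size after compression, bytes

for granularity in (32, 64):   # finer vs. coarser minimum access size, bytes
    moved = bus_bytes(compressed_tile, granularity)
    print(f"{granularity}B accesses: move {moved}B, save {uncompressed_tile - moved}B per tile")

# 32B accesses: move 96B, save 160B per tile
# 64B accesses: move 128B, save 128B per tile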
 
FYI: take the benchmarks with a big grain of salt, it's only 3DMark... we still don't know the power usage, but we do know that HBM uses a lot less. And I can't forget that, according to this leaked benchmark with beta/out-of-date drivers, the Fury X beats or roughly matches the 980 Ti with only 4GB of memory, versus 6GB for the 980 Ti and 12GB for the Titan X...
And for those who think AMD will go bankrupt, I don't think so: all the current-gen consoles are based around AMD technology, and the PS4 and XBone even include AMD x86-64 tech too.
 
Hopefully, US anti-trust regulators will not allow only one "big" x86 maker and only one "big" GPU maker (sorry VIA, sorry Imagination Technologies, but that's the reality...).

As for HBM: I really, really hope that this could help stop the GPU makers from producing enormous and insane PCBs.
 
FYI: take the benchmarks with a big grain of salt, it's only 3DMark... we still don't know the power usage, but we do know that HBM uses a lot less.
If the GPU is not bottlenecked elsewhere, it should try its best to use as much power as is permissible to provide the best frame rate. HBM may free up power budget that can be put towards other components, and the GPU will usually try to exploit that.
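As a back-of-the-envelope illustration of that budget shuffling (all wattages below are assumptions for the sake of the example, not measurements):
Code:
BOARD_POWER_LIMIT_W = 275   # hypothetical total board budget

memory_power_w = {
    "GDDR5 (wide, high-clock bus)": 50,   # assumed memory + PHY power
    "HBM gen1 (4 stacks)": 20,            # assumed, per the "a lot less" claim
}

# whatever the memory subsystem does not consume is headroom for the core
for memory, watts in memory_power_w.items():
    print(f"{memory}: ~{BOARD_POWER_LIMIT_W - watts} W left for the core")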
 
As for HBM: I really, really hope that this could help stop the GPU makers from producing enormous and insane PCBs.

It does not mean that the card as a whole will be a whole lot shorter. You still need to cool the thing. So unless it's cooled with a CLC, like the Fury X WCE/Nano or whatever it's called, you are still dealing with a long metal backplate and a huge air cooler.
 
It does not mean that the card as a whole will be a whole lot shorter. You still need to cool the thing. So unless it's cooled with a CLC, like the Fury X WCE/Nano or whatever it's called, you are still dealing with a long metal backplate and a huge air cooler.
I wonder if that could create a 1+.5 or 2+.5 slot cooling solution, since half the typical card length doesn't have a PCB blocking the cooler from using the card's volume down to the level of the rear components.
 
I wonder if that could create a 1+.5 or 2+.5 slot cooling solution, since half the typical card length doesn't have a PCB blocking the cooler from using the card's volume down to the level of the rear components.
This. I had to put my Sound Blaster in a PCI-E x16 slot for that.
 
They need the most advanced and expensive memory in existence to match the performance of a mainstream solution?
What exactly are you calling 'mainstream'?

There is nothing at all 'mainstream' about the GTX 980 series; even the bottom-end one costs more than I've ever paid for a GPU.

Matching (or coming close to) NV's latest, biggest, baddest, most ludicrously expensive card in a test that, if I recall correctly, historically favors NV's architecture seems like a pretty good start to me.
 
I was talking about a mainstream *memory* solution.

That's exactly my gripe: it looks like they wasted a fantastic piece of enabling technology on something that doesn't deserve it.

Your assumptions are wrong.
GDDR5 with lower frequencies and narrower memory buses might arguably be called mainstream, BUT the solution in the competition's top cards is anything but mainstream.
Also, there has to be a first time for HBM to be used. I guess there are numerous reasons to prefer HBM over GDDR5: not only raw speed, but also smaller cards, lower power consumption, and other benefits.
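The raw-speed part is easy to put rough numbers on; the bus widths and per-pin rates below are the commonly quoted ballpark figures, so treat them as assumptions rather than spec-sheet quotes.
Code:
def bandwidth_gb_s(bus_width_bits, gbps_per_pin):
    # peak bandwidth = width * per-pin data rate, converted from bits to bytes
    return bus_width_bits * gbps_per_pin / 8

hbm1 = bandwidth_gb_s(4 * 1024, 1.0)   # 4 stacks x 1024-bit, ~1 Gbps per pin
gddr5 = bandwidth_gb_s(512, 6.0)       # 512-bit Hawaii-style bus at 6 Gbps

print(f"HBM gen1: {hbm1:.0f} GB/s")    # ~512 GB/s
print(f"GDDR5:    {gddr5:.0f} GB/s")   # ~384 GB/s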

Do you honestly believe that?

Yes, because I saw data about how the total HMC system cost is higher.
Not comparing GDDR5, but I wouldn't be surprised if GDDR5 at the same performance level is more expensive.
 
Hopefully, US anti-trust regulators will not allow only one "big" x86 maker and only one "big" GPU maker (sorry VIA, sorry Imagination Technologies, but that's the reality...).
Anti-trust regulation doesn't kick in just because you're the only major actor in a field of business. If your competitors sink themselves due to bad products and poor decisions, then you're not to blame, and will not be punished. Besides, the US hasn't split up a company under anti-trust since "Ma Bell" in...what, the 1980s? And then it was possible to split the company into regional sub-divisions; how would you reasonably split Nvidia, for example? It'd be like Solomon and the baby...

As for HBM: I really, really hope that this could help to stop the GPU makers to product enormous and insane PCBs.
While certain high-end boards are unreasonably over-engineered (and you're not forced to buy one of these cards if you don't want to... ;)), GPUs today are not only physically much larger than back in the aughts (and often have a lot more memory devices on the board, taking up space), but also require a lot more power regulation circuitry as well. Delivering 200+ amps of current, reliably and affordably, requires quite an investment in R&D, as well as board area. :)
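The 200+ amps figure is just power-over-voltage arithmetic; here's a quick sanity check with assumed (illustrative) core power and voltage values.
Code:
# current = power / voltage; assumed core power and voltage combinations
for core_power_w, core_voltage_v in [(200, 1.2), (250, 1.1), (275, 1.0)]:
    print(f"{core_power_w} W @ {core_voltage_v} V -> {core_power_w / core_voltage_v:.0f} A")

# 200 W @ 1.2 V -> 167 A
# 250 W @ 1.1 V -> 227 A
# 275 W @ 1.0 V -> 275 A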

I had one of those. Fan started rattling in a few months. Had to replace it with ICEQ.
I had one of those too. Beautiful card, simply beautiful. Ran Doom3 fast and slick like a mother-you-know-what at 1280*1024... Fan ran just dandy too until I retired that whole PC, you couldn't even hear it. Those were the days, yes? (No, I don't really long back to them; I'm not insane...)
 