> It will be very interesting to see how that all shakes out.
Well, you don't necessarily know what will happen across a plurality of titles under DX12. The bottlenecks may end up shifting to a part of the hardware that previously hasn't been a bottleneck.
Do you think the Fury X's gamble on HBM1 is a silent failure...?
Since reviews of the Fury X came out, 980 Ti interest and love-lust have spiked to madness levels on all the forums I read. People on the fence, neutrals, and even Radeon supporters are positively cheering... moving over to the green team...
However, the 980 Ti is an overclocking beast... that 30-40% of free speed is attractive to gamers, even though Maxwell by nature relies on aggressive clock boosting, and the stock reference cards already boost higher than their advertised speeds.
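(For rough numbers: a reference 980 Ti is rated for a ~1075 MHz boost clock, and commonly reported overclocks land in the 1400-1500 MHz range; 1400/1075 ≈ 1.3, which is roughly where that 30-40% figure comes from.)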
> Do you think the Fury X's gamble on HBM1 is a silent failure...?
I don't think so. I think it's a good card, but people were just expecting more.
> I don't think so. I think it's a good card, but people were just expecting more.
This is certainly not something like R600's 512bit bus. The next flagship will remain with HBM, unlike R600->RV670, which went 512bit->256bit and ring->crossbar.
> This is certainly not something like R600's 512bit bus. The next flagship will remain with HBM, unlike R600->RV670, which went 512bit->256bit and ring->crossbar.
Yes. That's an argument I made not too far back in this thread. I do think that AMD didn't really make use of the best that HBM has to offer, but calling the product a failure is going a bit far. It's roughly where it should be in price/performance versus the competition, and it has an incredible cooler (when it's working properly... I expect the cooler issues to disappear shortly). The power consumption isn't great, though, which is a significant negative for me.
HBM on its own has definite value for the Fury X, and given that it was available, it should have been used. The issue is whether prioritizing certain other investments first (perf/watt, for example) might have paid greater dividends.
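For rough numbers: Fury X's four HBM1 stacks give a 4096-bit interface at 1 Gbps per pin, so 4096 × 1 / 8 = 512 GB/s, versus the 980 Ti's 384-bit GDDR5 at 7 Gbps, so 384 × 7 / 8 = 336 GB/s, and HBM delivers it at noticeably lower memory-subsystem power.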
> It's a matter of prioritization. Certainly the development of the architecture was some time in the making, which gives AMD time to adjust its development priorities. I think what this experience shows is that AMD was short on investing in power savings and rendering efficiency.
Saying that AMD could have just prioritized one metric over the other may imply that resources are more fungible than they really are.
In terms of money, maybe, and there could be some overlap, but HBM started years ago, and designing and implementing a memory standard is not going to map onto reconfiguring a rasterizer, rewriting a DX11 driver, or revamping CU clock gating.
This assumes there weren't costs or obligations attached to the HBM project if it were to have funding or organizational support stripped from it, and since the memory standard and its 2.5D integration involved outside parties, there probably would have been complications.
As for the upside of pulling staff or funding over to improving 28nm GCN: it would have hurt time to market for the already-delayed HBM for the sake of an end-of-the-line 28nm product.
If AMD wanted to make that kind of call, it had multiple GCN revisions prior to Fiji in which to make the attempt. Fury is a bit late to start undermining the long-term play that HBM represents. AMD has already spun off its high-speed IO engineering, so some level of commitment has been made in this direction.
> As for the upside of pulling staff or funding over to improving 28nm GCN: it would have hurt time to market for the already-delayed HBM for the sake of an end-of-the-line 28nm product.
Ayup. If they can test something early and cheaply, they should. They've derisked HBM for their next architecture. And not just HBM; adaptive voltage, for example, has also been derisked.
> Why do you think AMD's headcount is static and unchanging? Also, it's not all that uncommon for engineers to switch to different disciplines.
But what does prioritization mean?
Sending the HBM signaling engineers to the ROP group?
Having the physical testing lab rewrite the WDDM memory allocator?
Asking the HBM protocol designers to work for free or for Hynix, Amkor, and UMC to just take a breather for a year or two?
> I really don't know why you seem to think that anybody is claiming that this kind of decision could have been done quickly or recently. It would have had to have been a decision made right at the start of the design for the Fury.
People who specialize in very different fields can move, but since this is all on a schedule, it is not without opportunity cost. Or they can quit: their skills would be in demand elsewhere, and they may like what they're already doing.
The needs of an interposer-based parallel signalling protocol do not mesh well with a next-gen conservative rasterizer.
Just saying AMD could trivially shift its priorities around on a spreadsheet assumes that treating an established engineering effort like Legos, and failing to meet external obligations, wouldn't lead to a worse outcome than what we've observed, now or for the next gen.
True, AMD's headcount is not static. I laid out one example where it declined precipitously when it traded away its high-speed IO assets--IP and engineers specializing in that specific field.
> I really don't know why you seem to think that anybody is claiming that this kind of decision could have been done quickly or recently. It would have had to have been a decision made right at the start of the design for the Fury.
If I may, the point is that such a decision would have to be made way before beginning detailed design of the Fury.
> Apparently GCN in its current incarnation can't go to more than 4 shader engines, more than 16 CUs per shader engine, and more than 16 ROPs per shader engine.
Everything about today's Fiji would have made sense if the competition had been 10-15% slower. IOW: Maxwell being a bit less efficient, or Nvidia deciding on, say, 20 SMs with high-speed DP instead of 24 SMs without.
It's totally fair to criticize AMD for Fiji's lackluster performance (and its terrible introduction, and its lateness), but at the time its specs were decided, with the information then at hand, they probably were the best way to go.
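If those limits hold, the arithmetic shows Fiji was already at the architecture's ceiling: 4 shader engines × 16 CUs = 64 CUs (4096 ALUs at 64 per CU), and 4 × 16 = 64 ROPs, which is exactly the configuration that shipped. There was no headroom to simply build a bigger chip within GCN as it stood.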