> At how many TFs would that 80CU GPU land at? We don't know clocks, but assume around 2 GHz?

It doesn't work like that; you can't just keep bolting on CUs. There are other things to consider as the chip gets larger: timing, synchronization, and latency become issues, all coupled with making sure everything will hold up thermally as well.
> At how many TFs would that 80CU GPU land at? We don't know clocks, but assume around 2 GHz?

Around 1.5 GHz would be a much safer assumption.
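For rough numbers: peak FP32 throughput for an RDNA-style GPU is CUs × 64 shaders × 2 FLOPs per clock × clock speed. A quick sketch of the figures being discussed (the 80CU count and both clocks are the hypotheticals from the thread, not confirmed specs):

```python
def peak_tflops(cus: int, clock_ghz: float, shaders_per_cu: int = 64) -> float:
    """Peak FP32 TFLOPS: shaders x 2 ops/clock (FMA) x clock speed."""
    return cus * shaders_per_cu * 2 * clock_ghz / 1000.0

# Hypothetical 80CU part at the two clocks discussed above
print(peak_tflops(80, 2.0))  # 20.48 TFLOPS at 2.0 GHz
print(peak_tflops(80, 1.5))  # 15.36 TFLOPS at 1.5 GHz
```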
> Is Ice Bolt the next one after Flashbolt? Samsung says Flashbolt will get to 4.2 Gbps eventually, but they're only planning to get 3.2 Gbps into mass production in H1 2020.

Yep, that is the one.
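For context on what those pin speeds mean: each HBM2(E) stack has a 1024-bit interface, so per-stack bandwidth is pin rate × bus width ÷ 8. A quick sketch of the two speed grades mentioned:

```python
def hbm_stack_bandwidth_gbs(pin_gbps: float, bus_bits: int = 1024) -> float:
    """Bandwidth of one HBM stack in GB/s: pin rate x bus width / 8 bits per byte."""
    return pin_gbps * bus_bits / 8

print(hbm_stack_bandwidth_gbs(3.2))  # 409.6 GB/s per stack (the H1 2020 grade)
print(hbm_stack_bandwidth_gbs(4.2))  # 537.6 GB/s per stack (the eventual target)
```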
> Around 1.5 GHz would be a much safer assumption.

I'm not sure AMD would clock it that low.
> MS confirming "12 teraflops" for XSX: https://www.engadget.com/2020/02/24...gn=homepage&utm_medium=internal&utm_source=dl
>
> Meaning, since it's semi-custom only, we've pretty much got a guaranteed 48CU part for upper mid-range RDNA2 coming, and the XSX has it (cut down a bit? who knows). Not that this should be surprising: it's the same pattern of slow changes AMD likes, and the 5500 chip already has that config per SE.

No no no no no no no no no no.
> They need the CU count to be stable for their arch, balanced with work distribution, bandwidth, etc. They are not going to develop an entire custom variant for a client and then not use it; the numbers line up almost perfectly with what they're going to do, and this is approximately what they did with the PS4 and Xbox One, which were just somewhat customized variants of GPUs they already had.
>
> Those previous two were just variants with things added on, the XBO's ESRAM for example. Semi-custom very much does indicate roughly what the consumer GPUs will look like overall, just without the bells and whistles the clients want to add. E.g., we don't know if the 6700 XT will have the PS5's fancy audio processing hardware (and doesn't the XSX have it too?). This goes doubly today, with taping out chips on 7nm costing hundreds of millions of dollars; neither AMD nor MS wants to spend however much of that it would cost just to add a few more CUs.

No no no no no no no no no no.

Semi-custom APU chips' CU counts have absolutely no relation whatsoever to PC GPUs' CU counts. They can have as many as they can fit (and the architecture allows), or as few as they want, and everything in between (again, with possible architectural limitations dictating an even number of CUs or the like).

Of course they need to have balanced configurations, but no, they don't need to adhere to any specific discrete GPU's CU count on any level, and doing so wouldn't be any cheaper (relatively speaking, of course; naturally it can be cheaper if the other option is bigger).
TL;DW, but advanced techniques like boot-time calibration were implemented in Bristol Ridge and later in Polaris. The aim is to prevent overvolting of "fresh"/non-degraded chips, not the other way around.
The boot-time calibration optimizes the voltage to account for aging and reliability. Typically, as silicon ages, the transistors and metal interconnects degrade and need a higher voltage to maintain stability at the same frequency. The traditional solution to this problem is to specify a voltage that is sufficiently high to guarantee reliable operation over 3-7 years under worst-case conditions, which, over the life of the processor, can require as much as 6% greater power. Since the boot-time calibration uses aging-sensitive circuits, it automatically accounts for any aging and reliability issues. As a result, Polaris-based GPUs will run at a lower voltage or higher frequency throughout the lifetime of the product, delivering more performance for gaming and compute workloads.
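That ~6% figure is plausible given that dynamic power scales roughly with V² at a fixed frequency, so even a small end-of-life voltage guard band is costly. A back-of-the-envelope sketch (the ~3% margin here is an illustrative assumption, not AMD's number):

```python
def dynamic_power_ratio(v_margin: float) -> float:
    """Extra dynamic power from a voltage guard band.

    Dynamic power scales ~ V^2 at fixed frequency, so a relative
    voltage margin of v_margin costs (1 + v_margin)^2 - 1 extra power.
    """
    return (1 + v_margin) ** 2 - 1

# An illustrative ~3% worst-case-aging voltage guard band
extra = dynamic_power_ratio(0.03)
print(f"{extra:.1%} extra power")  # ~6.1% extra power
```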
According to Jim from AdoredTV, the real reason for AMD's constant use of high voltages on its GPUs is to protect cards from silicon degradation over their lifetime. NVIDIA is not affected by this because they have "exceptional" performance per watt.
> They are basically saying that overvolting (which results in worse performance per watt) is needed because of problems which are caused by bad performance per watt.

No, chip degradation is unrelated to bad performance per watt; it's what forces them to increase voltages, to prevent it from happening down the line.
> TL;DW, but advanced techniques like boot-time calibration were implemented in Bristol Ridge and later in Polaris. The aim is to prevent overvolting of "fresh"/non-degraded chips, not the other way around.

Yes, but maybe this only mitigates the problem a little rather than solving it completely, hence the need for AMD GPUs to run higher voltages.
> From @PSman1700's tweet:

Is this a picture of a paper or a physical display? Is this supposed to be a disclosure from Hynix?
> According to Jim from AdoredTV, the real reason for AMD's constant use of high voltages on its GPUs is to protect cards from silicon degradation over their lifetime. NVIDIA is not affected by this because they have "exceptional" performance per watt.

I'm not sure "protect" is being used in its usual sense here. Physical degradation of the silicon and board components is generally proportional to voltage, so it would get worse at higher voltages, not better.
> TL;DW, but advanced techniques like boot-time calibration were implemented in Bristol Ridge and later in Polaris. The aim is to prevent overvolting of "fresh"/non-degraded chips, not the other way around.

It was claimed, though, that the VBIOS settings for many of these features were not enabled in many products. Perhaps they were intended for specific use cases and AMD neglected to mention it, or they weren't as workable or effective as the marketing stated. That might explain why AMD advertised them as new features across more than one GPU launch, if it wound up not using them effectively.
> There is a need for a voltage margin to buffer transient spikes in demand and for product consistency in the face of device aging, but I'm not sure I'd characterize that as protecting the card so much as staying true to the specs.

Then maybe this is the real reason? Upholding voltages to stay within spec and not degrade the product?
> I'm not sure AMD would clock it that low.

I'm pretty sure the laws of physics don't care what AMD would or wouldn't do.
With 96 ROPs, it would have a much lower fillrate relative to compute throughput than any Navi card so far.
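To put that ratio in numbers: pixel fillrate is ROPs × clock and compute is CUs × 64 × 2 × clock, so the clock cancels out and the balance can be compared directly from the unit counts alone. A sketch comparing Navi 10's known configuration (40 CUs, 64 ROPs) against the hypothetical 80CU/96-ROP part from the discussion:

```python
def gpix_per_tflop(cus: int, rops: int) -> float:
    """Pixel fillrate per unit of FP32 compute for an RDNA-style GPU.

    Fillrate = ROPs x clock; compute = CUs x 64 x 2 x clock.
    The clock cancels, so only CU and ROP counts matter for the ratio.
    """
    tflops_per_ghz = cus * 64 * 2 / 1000.0
    gpix_per_ghz = rops
    return gpix_per_ghz / tflops_per_ghz

print(gpix_per_tflop(40, 64))  # Navi 10 (5700 XT): 12.5 Gpix/s per TFLOP
print(gpix_per_tflop(80, 96))  # hypothetical 80CU/96ROP part: 9.375 Gpix/s per TFLOP
```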