AMD: Navi Speculation, Rumours and Discussion [2019-2020]

Yes, Lisa Su said that RDNA is for gaming. I'm not aware that she said RDNA is only(!) for gaming or that all its transistors are for gaming. Maybe they are, maybe they are not; too little is known about RDNA / Navi. On the other hand, RTX 2080 Ti (TU102) has exactly the same architectural configuration as RTX 2060 (TU106). Where are those transistors used purely for science? RT cores are used for gaming, tensor cores are used for gaming, and high-speed FP64 is not supported. Turing GPUs used for GeForce RTX, with all their fixed-function silicon, are significantly closer to Navi than to Vega.
 
About that.
Does anything use NN denoising right now?
So far the tensor cores there are a cheap data-center inferencing play.
RTX libraries may leverage NNs for denoising if developers want to use a canned NN denoiser. But I don't think developers can create their own custom NN denoising models without implementing them through DirectML (too slow), which should have been released by now. Nvidia would also have to support it (which I'm fairly positive they will).
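To make the "NN denoising" idea concrete, here is a minimal sketch of what such a post-process amounts to: a learned filter run over a noisy, low-sample-count frame. Everything in it is a hypothetical stand-in (a single 3x3 box filter instead of trained weights, a random array instead of a render); it is not NVIDIA's denoiser or the DirectML API, and real denoisers feed a trained network with auxiliary buffers (albedo, normals) as well.

```python
# Illustrative only: a single hand-rolled 3x3 "layer" standing in for a
# trained convolutional denoiser applied to a noisy 1-spp frame.
import numpy as np

def conv3x3(image, kernel):
    """Apply one 3x3 filter per channel with edge padding (toy example)."""
    padded = np.pad(image, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.zeros_like(image)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + image.shape[0],
                                           dx:dx + image.shape[1], :]
    return out

# A real denoiser would load learned weights; a box blur stands in here.
weights = np.full((3, 3), 1.0 / 9.0)

noisy = np.random.rand(64, 64, 3).astype(np.float32)  # stand-in for a 1-spp render
denoised = conv3x3(noisy, weights)
print(float(noisy.std()), float(denoised.std()))      # variance drops after filtering
```

The point of doing this with a neural network instead of a fixed filter is that the weights are learned from reference images, so edges and detail survive better than with a plain blur.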
 
It's a little late for a 7nm pipecleaner, isn't it? AMD already has 2 SKUs based on a relatively large 7nm chip, out since last year.

I was under the assumption that 7nm+ use of EUVL would be more of a transition, but it seems like TSMC is just dipping their toes in the water. Guess I haven't been following TSMC news as closely as I thought.
 
I was under the assumption that 7nm+ use of EUVL would be more of a transition, but it seems like TSMC is just dipping their toes in the water. Guess I haven't been following TSMC news as closely as I thought.
The key question is whether any 7nm+ projects became 6nm.
 
Time to market would suggest that this is unlikely. 7nm+ manufacturing has started for Apple, so I would assume that anyone who aimed for 7nm+ is quite far along. Mass production on 6nm isn’t anticipated to start until 2021. Anyone who was motivated to design for 7nm+ is unlikely to just postpone their launch for little to no benefit.
 
TSMC's 7nm+ and 6nm are different, but Samsung's 3rd-gen 7nm and 6nm seem to be very close. Those may (or may not) be identical.
 
Time to market would suggest that this is unlikely. 7nm+ manufacturing has started for Apple, so I would assume that anyone who aimed for 7nm+ is quite far along. Mass production on 6nm isn’t anticipated to start until 2021. Anyone who was motivated to design for 7nm+ is unlikely to just postpone their launch for little to no benefit.
I don’t think it’s confirmed Apple is using 7nm+. Reportedly many do not like the cost and the fact there was no upgrade path from 7nm, hence 6nm’s creation.
 
I don’t think it’s confirmed Apple is using 7nm+. Reportedly many do not like the cost and the fact there was no upgrade path from 7nm, hence 6nm’s creation.
"Confirmed" is a big word. TSMC has announced that it has commenced volume production on 7nm+ aligning with Apples typical cadence. (In lithographic circles it has been said that Apple will use a bespoke version of 7nm+. Whatever that would mean. These nodes are developed in close collaboration with their leading customers anyway.) Confirmation would basically require that we got high resolution microscopy shots of the EUV-layers to compare with the previous gen. I doubt Apple will say anything. Possibly TSMC at a conference call where they usually give information regarding revenue by node, if they choose to break out this 7nm variant.
 
Afaik Apple is skipping 7FF+. The EUV customer is HiSilicon with the new Kirin.
Interesting if so. I'm aware of HiSilicon, but those volumes can't be anywhere near an A13's. Then again, that could be a reason if EUV throughput was still in doubt at the time of the decision. We'll see; I would guess that the answer will eventually come from TSMC.
(I assume that by "skipping" you mean they would stay with their current A12 process. The waters have been muddied by those "7nm Pro"/bespoke 7nm+ rumours.)
 
Q: What part of the memory has to be unified so that two GPUs don't have to resort to the alternate-frame-rendering (AFR) method...?

Anyone?
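For context on why AFR is the usual fallback, here is a back-of-envelope sketch (all sizes hypothetical) of the memory-duplication problem it implies: each GPU renders complete frames on its own, so the full scene working set has to be resident on both cards and capacity does not add up; only if the static assets could sit in one shared/unified pool would most of that duplication disappear.

```python
# Back-of-envelope: hypothetical sizes, two cards with 8 GB each.
scene_assets_gb = 6.0   # textures + geometry, static across frames (assumed)
per_frame_gb    = 1.5   # render targets, G-buffer, etc. (assumed)
vram_per_card   = 8.0
cards           = 2

# AFR: each GPU renders whole frames alone, so everything is duplicated.
afr_footprint_per_card = scene_assets_gb + per_frame_gb
print("AFR, per card:", afr_footprint_per_card, "GB resident in", vram_per_card, "GB")
print("AFR usable pool:", vram_per_card, "GB (capacity does not add up)")

# If the static assets lived in one unified pool, only the per-frame
# buffers would need to stay private to each GPU.
unified_total = scene_assets_gb + cards * per_frame_gb
print("Unified pool, total:", unified_total, "GB across", cards * vram_per_card, "GB")
```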
 
Why's that an issue? Afaik HiSilicon volunteered as guinea pig while Apple didn't want to go for it this year. 7+ was never supposed to be a popular node.
It’s not an issue, rather a method of figuring out whether Apple used the process or not if TSMC breaks out its performance next conference call.
Whether a node is popular or not never seemed to matter much to Apple; they went for both 20nm planar and 10nm FinFET, and did well on both.
What could cause concern for them and keep them from going for a superior process is more likely guaranteed delivery. I’ve seen different claims regarding Apple's choice of process this generation, and I hope firm confirmation comes at some point. (Be that as it may, four layers of EUV can only do so much anyway; it’s the next node that is truly interesting, the end of the line for FinFETs by all appearances.)
 