The AMD Execution Thread [2007 - 2017]

Status
Not open for further replies.
Why wasn't it like this in the first place? I mean, if you hire one of the most famous engineers, it's to have him engineer for you, not to place him in a management position.

It could be that Raja has always wanted to be in upper management because he felt he could make more meaningful changes there WRT what direction to go with broad, sweeping architectural features. Or he thought that the financial increase and security were what he wanted, only to then find out that maybe it wasn't something he enjoyed, or was something he wasn't good at.

Either way it was his choice to go for that position. What we don't know is whether the potential change in his official position within the company is what he wants or what the company is forcing on him.

Regards,
SB
 

Very interesting. This puts Jim Keller's departure in an entirely new light. I wonder whether we're talking about some kind of APU with a custom CU:CPU_core ratio, something with dedicated AI hardware (tensor operation units?) or something even more customized. Traditionally—if the word can be applied to something so recent—AMD has only done semi-custom work where they reused core IP blocks without significantly modifying them, but this business might be large enough to warrant some deviation from that model.
 
AMD already said they are going to do AI-specific circuits in Navi (whatever that means). This could be Navi-based or, much like the PS4 Pro or Scorpio GPU, a midpoint in development between Vega and Navi.

So the question is just an IP licence or semi-custom, or maybe a longer-term deal starting out as semi-custom and transitioning to IP at a later date once Tesla has in-house capacity/capability.
 
AMD already said they are going to do AI-specific circuits in Navi (whatever that means)
If we're going to have AI hardware in PC GPUs, we're going to need mainstream API support for said hardware (i.e. DirectX for Windows), or the feature is going to die the same kind of death that, for example, PhysX hardware did, or ATI/AMD TruForm for that matter, or wavetracing audio soundcards, or, well, basically anything proprietary that isn't CUDA (which is a special case since it's basically exclusively for professional use...)
 
If we're going to have AI hardware in PC GPUs, we're going to need mainstream API support for said hardware (i.e. DirectX for Windows), or the feature is going to die the same kind of death that, for example, PhysX hardware did, or ATI/AMD TruForm for that matter, or wavetracing audio soundcards, or, well, basically anything proprietary that isn't CUDA (which is a special case since it's basically exclusively for professional use...)

So long as it's useful to professional users, as is apparently the case for Volta's dedicated deep learning hardware, it will live on. I'm not certain this kind of hardware can be of much use to game developers, except maybe as raw arithmetic units, provided their limited precision is sufficient. I'm not convinced Microsoft will want to add support for FP calculations with weird precision levels.
 
The "tensor" logic wouldn't be all that difficult to add as a modification. It's technically already possible with DPP, but could be more optimal. It would require a few more fixed swizzle patterns to accommodate varying non-standard matrix dimensions which are simple logic operations. The heavier modification would be getting accumulators working in parallel with the FMADs and a forwarding network to really increase FLOPs. Not difficult or absolutely necessary, but a deviation from Vega. Even that latter step could be hacked in, but likely not as efficiently as properly integrating the operations. Think output to LDS, then dedicated, parallel adders read from LDS. As opposed to connecting the output to input directly with parallel execution. Some patents mention a per SIMD LDS which could be the front end of that operation. From there it's just a matter of accumulators. Not overly useful for graphics beyond downsampling or perhaps prefix sum.
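The staged-accumulation idea described above can be sketched in plain Python: each SIMD lane writes its FMA result to a shared scratchpad (standing in for the per-SIMD LDS), and a separate accumulator stage reads and sums from it, rather than a direct output-to-input forwarding network. This is purely illustrative; the lane count and two-stage split are assumptions, not AMD's actual design.

```python
# Sketch: FMA results staged through a shared scratchpad ("LDS"),
# then reduced by a separate, parallel accumulator stage. This mimics
# the "output to LDS, dedicated adders read from LDS" path, as opposed
# to wiring FMA outputs directly back into the accumulators.

def fma_stage(a, b, c):
    # One fused multiply-add per "lane"
    return [ai * bi + ci for ai, bi, ci in zip(a, b, c)]

def accumulate_stage(scratchpad, acc):
    # Parallel adders read the scratchpad and update the accumulators
    return [s + x for s, x in zip(scratchpad, acc)]

# 4-lane example: two waves of FMAs accumulated
a = [1.0, 2.0, 3.0, 4.0]
b = [0.5, 0.5, 0.5, 0.5]
c = [0.0, 0.0, 0.0, 0.0]
acc = [0.0, 0.0, 0.0, 0.0]

for _ in range(2):            # two FMA passes
    lds = fma_stage(a, b, c)  # stage results in the "LDS"
    acc = accumulate_stage(lds, acc)

print(acc)  # [1.0, 2.0, 3.0, 4.0]
```

The point of the split is that the adders can run in parallel with the next wave of FMAs, at the cost of a trip through the scratchpad instead of a direct forwarding path.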

AMD already said they are going to do AI-specific circuits in Navi (whatever that means). This could be Navi-based or, much like the PS4 Pro or Scorpio GPU, a midpoint in development between Vega and Navi.
The GF CEO in that article said they had returned silicon to Tesla. So Navi would be close to taping out if that were the case. Modified Vega makes more sense, and bulking up on tensor FLOPs isn't difficult. AMD could probably create an Epyc backplane with 4-8 GPUs for processing power cheaper than Nvidia's offerings. More devices for redundancy isn't necessarily a bad thing either. However, if Tesla is keeping the design in house, it's difficult to see Zen being used. On the other hand, it would make software development more straightforward.

If we're going to have AI hardware in PC GPUs, we're going to need mainstream API support for said hardware (i.e. DirectX for Windows), or the feature is going to die the same kind of death that, for example, PhysX hardware did, or ATI/AMD TruForm for that matter, or wavetracing audio soundcards, or, well, basically anything proprietary that isn't CUDA (which is a special case since it's basically exclusively for professional use...)
As mentioned above the "AI" hardware is very straightforward. It's giant arrayed FMA operations that are essentially SIMDs with forwarding networks. Precision is only relevant to performance. With LLVM the coding would fall to any suitable programming language as a front end. So C, Python, Rust, C#(this can't possibly end well), etc. Audio mixing and AI acceleration on a PC is probably warranted beyond just game development. It's not inconceivable to be using voice commands/interaction on a PC in the near future considering all the home assistants hitting the market.
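The "giant arrayed FMA" view above can be made concrete with a small sketch: a matrix FMA D = A*B + C where the inputs are truncated to FP16 (using Python's half-precision `struct` format as a stand-in) and accumulation happens at full precision. The matrix size and precision split are illustrative assumptions, not any vendor's actual design.

```python
# Sketch: a "tensor op" as an array of FMAs -- D = A*B + C with inputs
# rounded to IEEE-754 half precision and accumulation in full precision.
import struct

def to_fp16(x):
    # Round-trip through half precision to emulate FP16 inputs
    return struct.unpack('e', struct.pack('e', x))[0]

def tensor_fma(A, B, C):
    n = len(A)
    # Each output element is a dot product of FP16 inputs with a
    # higher-precision accumulator -- one FMA per (i, j, k) triple
    return [[sum(to_fp16(A[i][k]) * to_fp16(B[k][j]) for k in range(n)) + C[i][j]
             for j in range(n)] for i in range(n)]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[1.0, 0.0], [0.0, 1.0]]  # identity, so A*B == A
C = [[0.5, 0.5], [0.5, 0.5]]
print(tensor_fma(A, B, C))  # [[1.5, 2.5], [3.5, 4.5]]
```

In this framing, "precision is only relevant to performance": shrinking the input format changes the multiplier width and throughput, not the structure of the arrayed-FMA operation itself.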
 
AMD already said they are going to do AI-specific circuits in Navi (whatever that means). This could be Navi-based or, much like the PS4 Pro or Scorpio GPU, a midpoint in development between Vega and Navi.

So the question is just an IP licence or semi-custom, or maybe a longer-term deal starting out as semi-custom and transitioning to IP at a later date once Tesla has in-house capacity/capability.
They did?
No they didn't; rumor mongers just said Navi might have AI-specific circuitry, which quickly turned into half the internet singing that Navi will have AI-specific circuitry.
 
No they didn't; rumor mongers just said Navi might have AI-specific circuitry, which quickly turned into half the internet singing that Navi will have AI-specific circuitry.
Half the internet also equates FP16 with deep learning alone. So by that same argument even Vega, and in turn Navi, has AI-specific circuitry. It's not a huge leap to add in a few more AI-specific circuits, so the rumor mongering should be rather accurate.
 
No they didn't; rumor mongers just said Navi might have AI-specific circuitry, which quickly turned into half the internet singing that Navi will have AI-specific circuitry.
I'm sure I saw a quote from Raja, but I could be wrong.

Also, didn't AMD say the 3rd semi-custom win was ARM CPU based?


Edit: f u phone auto correct
 
Last edited:
I'm sure I saw a quote from Raja, but I could be wrong.

Also, didn't AMD say the 3rd semi-custom win was ARM CPU based?
I thought I saw that quote as well, but then it doesn't take much to meet that criterion, as vague as it was. It's vague enough that it would be difficult for it not to be true.

I'd have to dig up another quote, but an AMD driver dev did say the drivers were being reworked for several, non-public and I believe Linux/UNIX based operating systems. So presumably there are some SFF systems running ChromeOS, Fuchsia, FireOS, Android, SteamOS, etc that may fall into a semi-custom category. Those could include car entertainment and navigation systems as well.
 
This means that AMD is getting very close to Nvidia in this regard, since Tesla's autonomous system is one of the more advanced (or the most advanced) in the world, and they need very powerful hardware to handle it, and an efficient one so as to not draw unnecessary energy from the batteries. Knowing Elon, he wouldn't do this if he didn't think it would be a big step forward, and the only way I can imagine this scenario is if AMD offered him a much better solution than Nvidia; otherwise I don't see a reason to suddenly switch chip makers.
 
AMD moving to the 12nm GF process, which starts in 1Q2018.

At the Global Foundries Technology Conference, AMD’s CTO Mark Papermaster announced that the company will be transitioning “graphics and client products” from the Global Foundries 14nm LPP FinFET process it uses today to the new 12nm LP process in 2018. Global Foundries also announced that 12LP will begin production in 1Q18.

Anandtech has some additional information
 
The "tensor" logic wouldn't be all that difficult to add as a modification. It's technically already possible with DPP, but could be more optimal. It would require a few more fixed swizzle patterns to accommodate varying non-standard matrix dimensions which are simple logic operations. The heavier modification would be getting accumulators working in parallel with the FMADs and a forwarding network to really increase FLOPs. Not difficult or absolutely necessary, but a deviation from Vega. Even that latter step could be hacked in, but likely not as efficiently as properly integrating the operations. Think output to LDS, then dedicated, parallel adders read from LDS. As opposed to connecting the output to input directly with parallel execution. Some patents mention a per SIMD LDS which could be the front end of that operation. From there it's just a matter of accumulators. Not overly useful for graphics beyond downsampling or perhaps prefix sum.


The GF CEO in that article said they had returned silicon to Tesla. So Navi would be close to taping out if that were the case. Modified Vega makes more sense, and bulking up on tensor FLOPs isn't difficult. AMD could probably create an Epyc backplane with 4-8 GPUs for processing power cheaper than Nvidia's offerings. More devices for redundancy isn't necessarily a bad thing either. However, if Tesla is keeping the design in house, it's difficult to see Zen being used. On the other hand, it would make software development more straightforward.


As mentioned above the "AI" hardware is very straightforward. It's giant arrayed FMA operations that are essentially SIMDs with forwarding networks. Precision is only relevant to performance. With LLVM the coding would fall to any suitable programming language as a front end. So C, Python, Rust, C#(this can't possibly end well), etc. Audio mixing and AI acceleration on a PC is probably warranted beyond just game development. It's not inconceivable to be using voice commands/interaction on a PC in the near future considering all the home assistants hitting the market.

I would be extremely apprehensive about drawing any sort of inferences regarding Navi or anything else from this rumor and the highlighted inference represents a tremendous leap of faith.

1) Obviously, the validity is in question.
2) "Helping" is extremely nebulous and could span anything from a major IP licensing deal to something as trivial as IC consulting for the process.
3) In the best-case scenario for AMD (a full-blown semi-custom win), it's unclear how the included IP would be relevant to any current or future product.

This report has many of the shades of "AMD-Intel DeallZZZ!" as far as I am concerned.
 
Agreed. While on the surface it might seem to create a good opportunity for both AMD and Tesla, there are quite a few undercurrents that make it seem unlikely.

Like these?

"GlobalFoundries, which fabricates chips for Advanced Micro Devices, said on Thursday that Tesla had not committed to working with it on any autonomous driving technology or product, contradicting an earlier media report."

https://www.autoblog.com/2017/09/21/tesla-amd-artificial-intelligence-chip-ai-self-driving-cars/


"AMD (NYSE: AMD) has officially denied that there is any chip deal with Tesla (NASDAQ: TSLA), as reported by CNBC Wednesday evening.

With a Globalfoundries spokesperson this morning denying a deal between it and Tesla, there was still hope among AMD bulls that there was some sort of direct IP deal between AMD and Tesla that would bypass their contract chip manufacturer. This too has now been denied directly by AMD."

https://www.streetinsider.com/dr/news.php?id=13317752&gfv=1
 
Weird happenings. I wonder if this is Tesla playing shenanigans to get a better deal from NVidia. Doubtful, but this story is very strange. The only other thing I can think of is that it would be targeted at market manipulation of some sort, seeing that AMD stock price reacted favourably, as expected.
 
Weird happenings. I wonder if this is Tesla playing shenanigans to get a better deal from NVidia. Doubtful, but this story is very strange. The only other thing I can think of is that it would be targeted at market manipulation of some sort, seeing that AMD stock price reacted favourably, as expected.

Welcome to the world of AMD stock P&D mill.

Microsoft interested in acquiring AMD!

No wait, Samsung!

Maybe TI?


Making custom iMac chips!
https://cnafinance.com/advanced-mic...ial-interest-from-texas-instruments-txn/14667
Hold the phone, Intel DialZZZ!
 