> if you say so....
That I do.
4 days left!
That's Navi.
> Why are you so sure that Navi will be a big jump while it's still GCN? (I'm not saying you're wrong, but we saw Fiji to Vega was pretty much a bunch of nothing. PS isn't available, it's very hot, and all the changes they made on paper don't seem to affect real performance, only clock speeds, etc. It's like GCN is out of gas; they can't tweak it much more, IMO.)
Occam's razor says it's far more likely a lack of R&D money, especially with all the patents that are GCN-based.
That is the instruction I mentioned earlier that was explicitly flagged as causing shaders to hang in the GFX1010 changes.
> I saw a whole bunch of HW bugs in the LLVM commit.
> https://github.com/llvm-mirror/llvm...939#diff-983f40a891aaf5604e5f0b955e4051d2R733
> Does someone have an idea of how severe they are?
If it says hazard, I'm not sure how different the situation is from before in terms of importance, or whether they're "bugs" (even though they're listed in the bug list). Hazards in the ISA docs usually involve referring to a list of required stall cycles where there could be invalid or unpredictable behavior. All architectures have them, and GCN, as a somewhat loosely integrated set of sequencers and pipelines, has a long history of them listed out in the various ISA docs. GFX10 actually has a flag indicating it may have removed a lot of those longstanding hazards.
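To make the "required stall cycles" idea concrete: a hazard workaround usually means the compiler must guarantee a minimum number of wait states between certain instruction pairs, typically by padding with s_nop. Below is a purely illustrative Python sketch of the shape of such a pass; the opcode pairs and stall counts are invented for the example, and the real logic lives in the AMDGPU backend behind the link above.

```python
# Toy model of hazard mitigation, not the actual LLVM AMDGPU code.
# A hazard entry says: if the second instruction issues too soon after the first,
# behavior is undefined, so the compiler must insert wait states (s_nop) between them.
# The opcode pairs and stall counts here are made up for illustration.

HAZARDS = {
    ("v_writelane", "v_readlane"): 4,   # hypothetical: consumer needs 4 wait states
    ("s_setreg",    "s_getreg"):   2,   # hypothetical: needs 2 wait states
}

def insert_wait_states(instructions):
    """Return a new instruction stream with enough s_nop padding between hazardous pairs."""
    out = []
    for insn in instructions:
        for gap, prev in enumerate(reversed(out)):
            if prev == "s_nop":
                continue                 # existing nops already provide separation
            deficit = HAZARDS.get((prev, insn), 0) - gap
            if deficit > 0:
                out.extend(["s_nop"] * deficit)
            break                        # only the nearest real instruction matters in this toy
        out.append(insn)
    return out

print(insert_wait_states(["v_writelane", "v_readlane", "s_setreg", "s_getreg"]))
# ['v_writelane', 's_nop', 's_nop', 's_nop', 's_nop', 'v_readlane',
#  's_setreg', 's_nop', 's_nop', 's_getreg']
```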
> General consensus says GCN and Fermi (with some arguing over whether Fermi was really that big of a change or not).
I suppose this could go either way.
> Isn't GCN an ISA? I doubt AMD would switch off it completely in the foreseeable future. Every iteration of GCN changes some functions, but the core of it still gets called GCN. Switching off GCN is kind of like switching off x86 for CPUs: even after x64 came out, people still called it x86.
GCN is something of a mix of high- and low-level architectural details, beyond just the ISA. At the very least, the instructions themselves have at times been subject to encoding whiplash that would have been completely unacceptable for x86.
> Now... with a wavefront of 32 that must be different! Although I doubt the final values are different.
> So if I reverse the calculation, for a 64 CU Navi I would get the same 40 wavefronts per CU; the difference would be that each SIMD would get 20 wavefronts.
> Not sure this is correct, or even possible, but I doubt total capacity has been reduced.
A GFX10 wavefront is listed as being 64 in the recent code commits, although changing the hardware underneath that would have implications.
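To make the capacity arithmetic above concrete, here is a quick sketch. The GCN-side numbers (4 SIMDs per CU, 10 wave64 wavefronts resident per SIMD, hence 40 per CU) are from the public ISA docs; the "Navi" side just takes the poster's guess at face value (wave32 with 20 wavefronts per SIMD, which implies 2 SIMDs per CU) and is speculation, not a confirmed spec.

```python
# Back-of-the-envelope wavefront capacity comparison.
# GCN figures are documented; the "Navi" layout is the poster's guess, not a spec.

def resident_threads(cus, simds_per_cu, waves_per_simd, wave_size):
    """Total resident threads = CUs * SIMDs/CU * wavefronts/SIMD * threads/wavefront."""
    return cus * simds_per_cu * waves_per_simd * wave_size

# Classic GCN CU: 4x SIMD16, 10 wave64 wavefronts per SIMD -> 40 wavefronts per CU.
gcn_waves_per_cu = 4 * 10                         # 40
gcn_threads = resident_threads(64, 4, 10, 64)     # 163,840 threads

# Speculated wave32 layout keeping 40 wavefronts per CU: 2 SIMDs holding 20 each.
guess_waves_per_cu = 2 * 20                       # also 40
guess_threads = resident_threads(64, 2, 20, 32)   # 81,920 threads

print(gcn_waves_per_cu, guess_waves_per_cu)   # same per-CU wavefront count
print(gcn_threads, guess_threads)             # but half the resident threads under the guess
```

So the per-CU wavefront count can indeed stay at 40 under that reading, but with wave32 the number of resident work-items per CU would halve unless something else changes.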
> I thought Vega was the one that had this limit stated.
That might not have been a point of architectural focus if it was 3-per; it might just have happened that the CU count worked out.
Anyhoo, previously on GCN...
So Polaris 10 moved to 3 CUs sharing the K$ & I$ for wiring purposes. I wonder if they're taking another step with just paired CUs while also moving to the (apparent) super-SIMD.
That has nothing to do with what I said; you're moving the goalposts. You said:
> Since Navi has been in the pipeline for years now, I doubt they benefited from noticeably increased R&D money at the time...
> (I'm not saying you're wrong, but we saw Fiji to Vega was pretty much a bunch of nothing. PS isn't available, it's very hot, and all the changes they made on paper don't seem to affect real performance, only clock speeds, etc. It's like GCN is out of gas; they can't tweak it much more, IMO.)
Who cares about patents if they don't deliver something good or it isn't implemented? Look at Vega. We'll see with Navi soon.
And what I said had something to do with your remark about money...
AMD makes their own set of compromises. While they have been criticised for less-than-stellar performance/W or performance/FLOP over the last couple of generations, what has been ignored is that they are actually pretty good at performance/mm2, and particularly shader FLOPS/mm2. (I shy away from saying performance/$, since that is too dependent on market forces.)
What that implies is that, had they so decided, they could have supported their shader cores with larger caches/buffers/queues, or multi-porting, or more registers, or more front-end/back-end processing. But that may not have made sense to them overall, since it would have come at a cost in die size that might not have paid for itself in overall performance. Nvidia has enjoyed better margins on their products and is able to ship larger dies (with each FLOP better supported). Now, the balance that AMD has struck in the past may or may not change on 7nm lithography. But even for these higher-power chips, the improvement in density at 7nm is significant, maybe enough to make paying a price in die area worthwhile. Or not, and they might choose to produce smaller chips (improving both dies/wafer and the percentage of good dies) so they have greater pricing flexibility.
I have nowhere near the detailed GPU-architecture knowledge needed to know what is and is not possible to do within the GCN ISA, but I find it dubious to assume that it would be "maxed out" from a technical standpoint. I'm more versed in CPU architecture, and similar arguments have been made about "ARM" (now-ish) and x86 (30 years ago) performance potential, and they have been proven emphatically wrong. You can always throw more hardware at a problem. The question is whether it makes overall sense to do so, and that's decided by the market.
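As a rough, hypothetical illustration of the die-size trade-off mentioned above (smaller dies mean both more candidate dies per wafer and a higher defect-free fraction), here is a sketch using the standard dies-per-wafer approximation and a simple Poisson yield model. The wafer size, defect density, and die areas are assumed numbers for the example, not figures from AMD or any foundry.

```python
import math

# Assumed inputs for illustration only.
WAFER_DIAMETER_MM = 300.0
DEFECTS_PER_CM2 = 0.2        # assumed defect density, not a real process figure

def dies_per_wafer(die_area_mm2):
    """Common approximation: gross wafer area over die area, minus an edge-loss term."""
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def defect_free_fraction(die_area_mm2):
    """Poisson yield model: exp(-area * defect_density)."""
    return math.exp(-(die_area_mm2 / 100.0) * DEFECTS_PER_CM2)

for area in (150, 250, 350, 500):            # hypothetical die sizes in mm^2
    n = dies_per_wafer(area)
    y = defect_free_fraction(area)
    print(f"{area:>3} mm^2: {n:3d} dies/wafer, ~{y:.0%} defect-free, ~{n * y:.0f} good dies")
```

Under these made-up assumptions, a 150 mm^2 die yields several times more good dies per wafer than a 500 mm^2 one, which is the pricing-flexibility argument in a nutshell.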
> Point taken, but couldn't the same be said about Bulldozer? Instead of throwing more money into it, they ditched it in favour of a new arch. And it was the right decision.
> Let's be frank: GCN is a more than 8-year-old (!) GPU architecture, with just modest updates through the years. It has its limits and weaknesses. Its development started somewhere around 2007/2008. Sure, they can "tweak it" hard at the end of its life and spend tons of money to make it better, or they can instead focus on a new GPU architecture with higher potential. One scenario here is more probable than the other, I guess...
True.
> Point taken, but couldn't the same be said about Bulldozer? Instead of throwing more money into it, they ditched it in favour of a new arch. And it was the right decision.
Zen carries a lot of BD (and overall AMD-cores-since-K7) legacy, down to it inheriting the entire BPU.
> Zen carries a lot of BD (and overall AMD-cores-since-K7) legacy, down to it inheriting the entire BPU.
Oh boy. If you take the scope of changes made between BD and Zen and project it onto GCN, you wouldn't call the result GCN anymore. That's the whole point of this "ditch GCN" move: to implement major improvements.
> Still, Zen is considered by AMD to be a new x86 architecture.
It's a new core, but it's still a very, very AMD core.
> ...and project this onto GCN, you wouldn't call that GCN anymore.
I mean, we continued calling x86 "x86" even after P6, so yes, I would.
It doesn't match what we can see? Do you look at real products instead of patents?
They have a hard time making gains with GCN; that's a fact. So you say it's because of money. All right.
> Navi will be preceded by 2 years of $320-370M/quarter. It's a lot more, but it's still anemic compared to Nvidia's >$500M/quarter during the last year (which they don't share as much with CPU development).
And both are meager versus Intel's ~$13B a year, yet their execution is as close to pathetic as a company of their size can get.
Money can't buy execution either.
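For what it's worth, annualizing the figures quoted a couple of posts up makes the gap explicit (the numbers are the posters' rounded estimates of company-wide R&D, not GPU-only budgets):

```python
# Annualized R&D comparison using the thread's own rough figures.
amd_quarterly_musd = (320, 370)    # AMD, $M per quarter (range quoted above)
nvda_quarterly_musd = 500          # Nvidia, $M per quarter (lower bound quoted above)
intel_annual_busd = 13             # Intel, ~$B per year (as quoted above)

amd_annual_busd = [q * 4 / 1000 for q in amd_quarterly_musd]
nvda_annual_busd = nvda_quarterly_musd * 4 / 1000

print(f"AMD:    ~${amd_annual_busd[0]:.2f}-{amd_annual_busd[1]:.2f}B/year")
print(f"Nvidia: >${nvda_annual_busd:.1f}B/year")
print(f"Intel:  ~${intel_annual_busd}B/year")
# Roughly $1.3-1.5B vs >$2B vs ~$13B per year.
```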