AMD: Navi Speculation, Rumours and Discussion [2019-2020]

What was wrong with Llano?
AFAIK it was a pretty capable chip, with GPU performance similar to the HD 3850/3870, which is pretty much what AMD had promised, and years ahead of any iGPU Intel would offer for a long time to come.
 
Yeah.
That's what you get for setting your roadmaps on fire.
Applies to Intel too, since Intel goes bye-bye soon.
So you're suggesting Apple wanted Llano but it was too late?
You've been posting a lot of stuff recently without anything to back it up, and honestly, this falls right into that pile too. Any references to AMD APUs and/or CPUs in macOS came years after Llano, at least as far as I'm aware, and even then they're more likely just stuff left in AMD's drivers from their common codebase than anything else.
The last set of them came up earlier this year when _rogame spotted them, but the same happened a few times earlier too.
 
So you're suggesting Apple wanted Llano but it was too late?
Yeah lol.
They were also the sole driving force for things like Crystalwell.
Any references to AMD APUs and/or CPUs in macOS came years after Llano
Because Llano Macs never reached the product development stage due to those solid-ass delays.
As did plenty of other things, like the CNL-Y Macs or the 1st-gen ARM Mac test boards and whatever else.
 
Because Llano Macs never reached the product development stage due to those solid-ass delays.

The very first time Llano appeared in a roadmap it said 2011, which is when it ultimately launched.
Back in 2007, AMD had an APU codenamed "Shrike" planned for a 2009 debut of their Fusion concept, but it was cancelled less than a year later.

So whatever you're talking about, it's probably not Llano.

I could see Apple being somewhat vindictive about what happened with e.g. Kaveri, which had two GDDR5M PHYs for high GPU performance, but fabrication of that memory was scrapped when Elpida went bankrupt, which makes it not AMD's fault. Also, I doubt Apple would ever have wanted to adopt Bulldozer CPU cores.


In the end, what I'm saying is that you're not making much sense with your Llano delay allegations. The 32nm Llano was never delayed. There was a 45nm Shrike that got cancelled, but that's a very different chip.

The fact that you consistently fail to back any of your claims with any source at all makes it all even harder to digest.
 
Llano was truly late, launching in H2 2011 given that it was just a die-shrunk K10 with a GPU. Even the completely new architecture, Bobcat, launched in January 2011. So maybe that awesome 32nm delivery from GloFo was the culprit.

Anyway, Apple migrating to a dead-end K10 architecture, followed by the utterly sad Bulldozer-based chips (OK, up to the Carrizo refresh), sounds hilarious given their premium branding. If there ever were such talks, Apple would likely have backed off the moment they were shown the APU roadmap.
 
Have you guys seen these?

20200193681 MECHANISM FOR SUPPORTING DISCARD FUNCTIONALITY IN A RAY TRACING CONTEXT
20200193682 MERGED DATA PATH FOR TRIANGLE AND BOX INTERSECTION TEST IN RAY TRACING
20200193683 ROBUST RAY-TRIANGLE INTERSECTION
20200193684 EFFICIENT DATA PATH FOR RAY TRIANGLE INTERSECTION
20200193685 WATER TIGHT RAY TRIANGLE INTERSECTION WITHOUT RESORTING TO DOUBLE PRECISION
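
All five of these cover hardware ray/triangle and ray/box testing. As a rough illustration of the basic operation involved (not the patented methods themselves), here's a minimal single-precision Möller-Trumbore ray-triangle test; note it is deliberately simple and not watertight, which is exactly the gap the last patent title points at:

Code:
// Minimal single-precision Moller-Trumbore ray-triangle intersection (illustrative only).
// Not watertight: grazing edge/vertex hits can be missed due to float rounding,
// which is the problem the "water tight" patent title is pointing at.
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns the distance t along the ray if it hits triangle (v0, v1, v2), or nothing on a miss.
std::optional<float> intersect(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2)
{
    const float eps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return std::nullopt;   // ray (nearly) parallel to the triangle plane
    float invDet = 1.0f / det;

    Vec3 tvec = sub(orig, v0);
    float u = dot(tvec, p) * invDet;                 // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return std::nullopt;

    Vec3 q = cross(tvec, e1);
    float v = dot(dir, q) * invDet;                  // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return std::nullopt;

    float t = dot(e2, q) * invDet;                   // hit distance along the ray
    if (t > eps) return t;
    return std::nullopt;
}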
 
The 16GiB VRAM + 384-bit bus rumour looks particularly suspect. There's no obvious way to do this without making memory bandwidth non-uniform, like the GTX 970 3.5GB + 0.5GB debacle.

I mean... I guess they could put 8GiB of 8Gbit GDDR6 packages on 256 bits of the bus, and the other 8GiB of 16Gbit chips on the remaining 128 bits... but it seems like you're adding an awful lot of driver and memory controller complexity for not a lot of gain.

One way to think of it is that as long as you're using up to and including 12GiB of VRAM, you still get full bandwidth to all of it, and on the rare occasions you need more than that, having the last 4GiB run at a third of the bandwidth is probably better than swapping it in and out of system memory over PCIe. Still extremely odd and highly unlikely IMO.
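
For what it's worth, here's a back-of-the-envelope sketch of that hypothetical split; the 14Gbps data rate and the 8x 8Gbit + 4x 16Gbit chip counts are my own assumptions, not anything confirmed for any Navi2x part:

Code:
// Rough numbers for a hypothetical asymmetric 384-bit / 16GiB GDDR6 layout.
// 14 Gbps per pin is an assumed speed; nothing here is confirmed hardware.
#include <cstdio>

int main()
{
    const double gbps_per_pin = 14.0;            // assumed GDDR6 data rate
    const int fast_bits = 256, slow_bits = 128;  // 8x 8Gbit chips + 4x 16Gbit chips

    double full_bw = (fast_bits + slow_bits) * gbps_per_pin / 8.0;  // GB/s across all 384 bits
    double slow_bw = slow_bits * gbps_per_pin / 8.0;                // GB/s on the 128-bit tail

    // Interleaving evenly across all channels only works until the 256-bit half
    // (8GiB) fills up: by then ~4GiB has also landed on the 128-bit half, so
    // ~12GiB is "full speed" and the last ~4GiB only sees the 128-bit slice.
    printf("first ~12GiB: %.0f GB/s\n", full_bw);  // ~672 GB/s
    printf("last  ~4GiB : %.0f GB/s\n", slow_bw);  // ~224 GB/s (one third)
    return 0;
}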
 
The 16GiB VRAM + 384-bit bus rumour looks particularly suspect. There's no obvious way to do this without making memory bandwidth non-uniform, like the GTX 970 3.5GB + 0.5GB debacle.
NVIDIA has used different-density chips at the lower end without issues in the past, so there's no reason AMD couldn't take a similar approach. But I doubt they will.
 
In any case, if it's coming after gaming Ampere and losing, it will be sort of Vega-ish... It's a very hyped chip, and I hope for their sake they can execute...

Depends on how late, wouldn't it? The rumour for Ampere is an announcement in September, but would it be available in September, or take until October to come out? An October announcement for Navi2 with cards hitting stores in November wouldn't be too bad if you can get them before, or right alongside, the big holiday games.

I think performance and pricing are more important than a month or two of difference.
 