AMD Polaris Rumors and Discussion

gamervivek

Regular
A Polaris 12 shows up in Apple's drivers:



https://www.tonymacx86.com/threads/...d-radeon-drivers.197273/page-118#post-1386344
 
Polaris 10
Polaris 11
Polaris 12
Polaris 10XT2 (dual P10?)
Vega 10

Some definite head-scratchers in there.

My 2 cents: patents and safeguarding against lawsuits. Delta-C compression is already in Skylake, FWIW.
Maybe, but I noticed something months ago: upon reaching feature completion on the Intel Linux driver stack, they decided to begin rewriting it from scratch following AMD's model. I may be completely off here, but it's possible they're making both driver stacks compatible with each other for easier future maintenance. This was just prior to their layoffs.

In addition to cutting a lot of staff from the consumer PC division, it's possible they are looking to actually outsource GPU development for APUs beyond simple patent licensing. AMD already designed their GPUs around integration with x86 CPU cores. Definitely an odd thought, but there could be something to it. Regardless, it seems likely AMD would have more useful IP than Nvidia for iGPU or APU design.
 
GDDR5X or HBM1 variations make some sense; XT2 would likely gain 15-20% with faster RAM. Going off the codenames, they made P10, then P11 as the lower-performance model. P12 I'd have to imagine is a big fat one, a dual, or an improved P10. If P12 is GDDR5X on P10, I have no idea what XT2 did for a boost. We've seen partners make dual-GPU coolers, so it stands to reason one of those exists. XT2 with GDDR5X and P12 as a dual?
 
So P12 could be dual P10 and P10XT2 is the RX485?

Then if they enable the full P11 for a desktop part, we might get RX465 and RX485 in the near future.
Dual P10 is still a very strange product, though. Unless they sell it as a cheap-ish "Premium VR" solution.

IIRC 'Tahiti XT2' launched as the 7970 GHz Edition (XT was the 7970).
The 7970 had a lot more clocking headroom to allow a higher SKU than the RX 480 does though.

Then again, we have been hearing rumors of process upgrades that are allowing for significantly higher clocks. A >1.5GHz RX485 would be a very powerful card.
 
P12 is very likely a different chip; there were early rumors of three chips from AMD before the Polaris/Vega renaming, and then there were the C99x board samples in March and their RRA certification in April.
On the more optimistic side, a Tahiti-sized chip with a 384-bit bus and GDDR5X, while the Vega chips do Hawaii and Fiji again. Or Polaris 12 could be a small chip, smaller than Polaris 11, for the very low end.

A 3200-3500 shader part with around 1.4GHz clocks would be a great replacement for the current Fury line, and the Vega cards can then bring the Fury brand back again next year.
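As a rough sanity check on that claim (a sketch assuming GCN's 2 FLOPs per shader per clock; the shader counts and clocks are the rumored figures above, not confirmed specs):

```python
# Single-precision throughput estimate for a hypothetical 3200-3500
# shader GCN part at ~1.4 GHz (GCN issues 2 FLOPs per shader per clock).
def tflops(shaders, clock_ghz, flops_per_clock=2):
    return shaders * flops_per_clock * clock_ghz / 1000.0

print(tflops(3200, 1.4))    # ~9.0 TFLOPs
print(tflops(3500, 1.4))    # ~9.8 TFLOPs
# For comparison, Fury X (4096 shaders at 1.05 GHz) lands at ~8.6 TFLOPs,
# so either configuration would indeed clear the current Fury line.
print(tflops(4096, 1.05))
```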
 
I don't see how a smaller chip fits AMD's lineup; anything smaller than P11 would land in iGPU territory. It could be a respin, or maybe a refinement of the architecture (P10's ~1% OC headroom is just too small), or maybe a chip aimed at the professional market (it doesn't have to be a consumer chip).
 
That's the more pessimistic take, but P12 being a bigger chip is the more likely scenario (if it exists outside of Apple's drivers). AMD had C99x card samples going around in March, certified in April, but Vega 10's development milestone happened much later. It's usually pegged as dual-GPU by wccftech because it's listed for more than twice the price of Polaris 10, but that's unlikely because it goes against AMD's naming convention.

There were rumors of three chips in AMD's next gen (which would turn out to be Polaris), but since Greenland was wrongly named and turned out to be an APU or something, people assumed that there were only two chips. Charlie was pretty adamant about it:

https://semiaccurate.com/forums/showthread.php?p=252525#post252525


Also there's some super secret event going on for AMD,

http://videocardz.com/64637/secret-amd-vega-tech-day-is-happening-right-now

Something seems to be afoot at AMD's Markham office in Canada. Those of you who have read up on history will know it as ATI's headquarters before they were acquired by AMD. Afterwards, it became an AMD office, but Radeon-related stuff still usually happens there.

First, it started with Eber from Hardware Canucks posting this tweet a few days ago on the 5th (which was also posted onto the subreddit earlier today). The Hardware Canucks guys (both Eber and Dimitry) are in the Southern Ontario area, which gives them the inherent advantage of being able to just drive down to AMD's Markham office, see whatever needs to be seen, leave, and then hammer away in Premiere Pro.

https://www.reddit.com/r/Amd/comments/5h67g2/theorycrafting_what_the_hell_is_going_on_in/
 
Today's news about the Radeon Instinct cards seems to have left us even more clueless about what this Polaris 12 may be.
AMD is using Polaris 10 for a 6 TFLOPs card, Fiji for 8 TFLOPs, and Vega 10 for 12.5 TFLOPs.

If there's a Polaris 10XT2, then Polaris 12 is probably not another dual-P10.
Since AMD wasn't shy about formally introducing the first Vega with this line of cards, they shouldn't have been hiding a Polaris 12 either... yet Fiji is being presented as the middle step between Polaris 10 and Vega 10.
So where does this leave Polaris 12?

Maybe P12 is coming significantly later than Vega 10, so it wouldn't make much sense to present it now?
Or maybe P12 is just a "major" revision to P10, the RX485, with the exact same ISA and feature set but with a large enough performance boost to warrant a different codename, like the RV770 -> RV790 transition.
This would probably mean the P12 has better gaming performance than a Fury Nano, but since this specific market mostly uses ALUs, it makes more sense to use the older and larger chip.
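Those Instinct numbers are consistent with the known shader counts, and working backwards gives a hint about Vega 10's clock (a back-of-envelope sketch; the Vega 10 shader count is my assumption, not a confirmed spec):

```python
# GCN throughput: shaders * 2 FLOPs/clock * clock. Checking the
# Radeon Instinct figures against known configurations.
def tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000.0

print(tflops(2304, 1.30))            # Polaris 10: ~6.0 TFLOPs
print(tflops(4096, 1.00))            # Fiji: ~8.2 TFLOPs
# If Vega 10 were to keep Fiji's 4096 shaders, 12.5 TFLOPs would
# imply a clock of roughly:
print(12.5 * 1000 / (4096 * 2))      # ~1.53 GHz
```

Which, if accurate, would line up with the higher-clock process rumors mentioned earlier in the thread.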
 
Polaris 12 could be a small GPU, maybe half of Polaris 11. It would allow building quite inexpensive laptops (with pre-Kaby Lake CPUs) supporting full acceleration of 4K content. Neither Bristol Ridge nor Skylake offers acceleration of 4K 10-bit streams, and P11 is still quite expensive for low-end notebooks.
 
Does it make sense for low-end notebooks to support 4K HDR content, though?
Their screens certainly won't support HDR, and most TVs that do are SmartTVs with compatible streaming services pre-installed.

We're talking about people who want to buy cheap laptops and need to use them to watch HDR content on a very rare (perhaps non-existent?) TV that supports 4K HDR but doesn't have pre-installed software to run it.
And even those would probably be better off just spending 70€ on a Chromecast Ultra or a Mi Box.
 
I think the biggest issue with a smaller-than-P11 chip would be the memory interface. Essentially, 64-bit DDR3 is out, otherwise it will still get beaten by the old 64-bit DDR3 GM108 (due to that chip's quite a bit higher memory efficiency). I don't believe updated video encode/decode blocks are a reason for such a chip - the typical notebook discrete low-end chip doesn't even have these blocks at all (from AMD, Mars has them, but Sun and Iceland do not).
So, going back to the memory interface, 128-bit DDR3 or 64-bit GDDR5 would do. But the former requires more space (both on die and in the surrounding area) and the latter isn't exactly popular (due to higher cost and small marketable memory sizes). Though maybe 64-bit DDR4 could do. That would still be bandwidth limited, but with ~1600MHz DDR4 that would be 60% more bandwidth than the typical DDR3 solution, certainly enough to beat the 64-bit DDR3 GM108 (and possibly the 64-bit GDDR5 GM108 as well, since that one isn't limited by bandwidth).
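The 60% figure checks out if "1600MHz DDR4" means a 1600 MHz I/O clock (i.e. DDR4-3200) compared against a typical DDR3-2000 low-end solution - an assumption on my part:

```python
# Peak memory bandwidth: (bus width in bytes) * (transfers per second).
def bandwidth_gbs(bus_bits, mts):
    """bus_bits: bus width in bits; mts: megatransfers per second."""
    return (bus_bits / 8) * mts / 1000.0

ddr3 = bandwidth_gbs(64, 2000)   # typical 64-bit DDR3 solution: 16.0 GB/s
ddr4 = bandwidth_gbs(64, 3200)   # 64-bit DDR4 at 1600 MHz I/O: 25.6 GB/s
print(ddr4 / ddr3 - 1)           # ~0.6, i.e. the 60% uplift
```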
 
Does it make sense for low-end notebooks to support 4K HDR content, though?
Their screens certainly won't support HDR, and most TVs that do are SmartTVs with compatible streaming services pre-installed.

We're talking about people who want to buy cheap laptops and need to use them to watch HDR content on a very rare (perhaps non-existent?) TV that supports 4K HDR but doesn't have pre-installed software to run it.
And even those would probably be better off just spending 70€ on a Chromecast Ultra or a Mi Box.
For OEM checkboxes for sure.
 
For OEM checkboxes for sure.

And AMD is developing a discrete GPU solely for OEM checkboxes on future low-cost laptops that will be carrying pre-Kaby Lake CPUs?
 
Maybe if there's a tech disclosure or leak someday concerning the PS4 Pro, we might learn what kind of IP mixture it is.
For the purposes of binary compatibility with GCN2, however, it seems less problematic if the CUs keep Sea Islands as their ISA basis - which would make the Pro more of a mix of certain Polaris physical-implementation features, Sea Islands CUs + some future GPU features + Sony tweaks, plus a GPU front-end mix of multiple generations.
 
And AMD is developing a discrete GPU solely for OEM checkboxes on future low-cost laptops that will be carrying pre-Kaby Lake CPUs?
+ AMD needs a dual-graphics solution for Bristol Ridge. It isn't only about 4K. Previous solutions (Mars/Oland, Cape Verde, etc.) are really outdated in terms of power consumption and monitor outputs. In previous generations AMD always had sub-100mm² solutions.
 
+ AMD needs a dual-graphics solution for Bristol Ridge. It isn't only about 4K. Previous solutions (Mars/Oland, Cape Verde, etc.) are really outdated in terms of power consumption and monitor outputs. In previous generations AMD always had sub-100mm² solutions.
Do they really? I've always wondered what kind of audience would be catered to well by such a solution. In my mind, low-end dual graphics incorporates everything I would NOT want from my graphics solution - and I don't care which company or brand it is.
 