AMD: Speculation, Rumors, and Discussion (Archive)

Status
Not open for further replies.
7nm isn't going to be viable until, what, 2020? Unless GF and Samsung pull a rabbit out of their hat and beat Intel to it... I don't see that happening.
 
If I understood the news correctly, GF has broken up with Samsung again and decided to skip 10nm (which Samsung will do) and go directly to 7nm.
 

If we believe GF can follow its roadmap, we can imagine maybe the end of 2018 or sometime in 2019...

Test chips with IP from lead customers have already started running in Fab 8. The technology is expected to be ready for customer product design starts in the second half of 2017, with ramp to risk production in early 2018.

This is where I don't really understand where this early Vega 20 on 7nm "rumor" is coming from... it doesn't really make sense. If you have Vega 10 in early 2017 and Navi 10/11 in 2019, how could a Vega 20 arrive in 2018 on 7nm?

I think some information has been mixed up somewhere.
 
It may have something to do with HPC Zen. Even if expensive, it may be worth it for that market. This would also rely on Zen being paired with an actual Vega GPU as opposed to a custom GPU design. Risk production ramps in 2018, but a potentially multi-thousand-dollar HPC chip could probably arrive earlier.
 


Can't find your post about "if AMD had the cash for critical-path optimization, it would have clocked higher", so I'll just reply like this.

In theory there's nothing stopping them except design-cost considerations, but we don't know what kind of drawbacks it would have. Would they need to decrease density to combat heat? Plus you have the memory bandwidth issue: even if Polaris could hit 1800 MHz, you're already bottlenecked by bandwidth at stock, so the same card would need a bigger bus, etc.
 
Polaris: AMD has quietly expanded the VSR capabilities of the RX 480 & Co.
The display engine in AMD's Polaris GPUs, based on GCN gen 4, has apparently received additional VSR capabilities for driver-side downsampling. On Ultra HD displays a 5K resolution can now be used, and on the exotic 2,560 × 1,600 displays 3,840 × 2,400 is supported. This could be a harbinger of Crimson's successor.
...
However, unlike Nvidia, AMD does not handle the scaling via the shader units but via the display engine. This avoids a performance hit, but the solution is not as flexible.
http://www.pcgameshardware.de/AMD-Polaris-Hardware-261587/News/VSR-Faehigkeiten-5K-1208554/
 
I don't know about Nvidia, but VSR was never done through compute shaders on AMD; it was done through dedicated hardware.
Otherwise there would be no practical limit to VSR on older cards, yet e.g. Hawaii can't do 4K on a 1080p screen.

During a reddit AMA, I asked Robert Hallock why GCN3 (Tonga, Fiji) cards could do 4K -> 1080p downsampling while GCN2 cards (Hawaii, Bonaire) couldn't. This was his response:

Robert Hallock said:
We have hardware that performs real-time frame resizing with 0% performance impact above and beyond the change in resolution selected by the user. This hardware became more advanced as we iterated GCN.

It is possible to explore shader-based methods, but that can introduce specific performance penalties of their own.

We will continue to look at adding additional resolution options to VSR.

Downsampling could be done on the older cards using shaders (I guess that's what DICE does in Frostbite when you select >100% resolution in BF: Hardline or over 42% in BF1), but AMD would rather not implement it because it would cause an unpredictable impact on performance.


It's also not a secret that the multimedia and display engines were updated in Polaris.
 

No it isn't (and neither did the article imply that it's a new feat that AMD does the downscaling not through the shader engines). But if you factor in that the Fiji GPU(s) on the Radeon Pro Duo could already do 8K VSR on a 4K screen, then something has been unlocked in the driver that was possible before and, as it seems, is also not Polaris-exclusive (think of Fiji's 8K VSR; I doubt the two GPUs could work together in a meaningful way here).
 

It was a reply to the post that mentioned the article, not the article itself.
 
Then you're replying to the article since that's the article's title as anyone can see. If you disagree with the title then say so.
 
FPGAs are becoming the new standard for broadcast decoding. Codecs can change rapidly without updating hardware and bandwidth matters. Goes along with all the cable companies and FCC fighting over consumer choice in cable boxes. Putting a Zen+FPGA APU/MCM in a TV would be a pretty potent combination for interactive TV. Makes sense if decoding many channels simultaneously. Would likely require a big/expensive TV with the cost however. Plenty of other custom applications for FPGAs too.

Bring on the Zen+Vega with 10TB SSD DVR, because overkill is a myth! Record 16 shows in 4k HDR and play them all back simultaneously.
 
Huh: http://wccftech.com/amd-polaris-revisions-performance-per-watt/

They claim a massive performance-per-watt increase for Polaris is coming. It looks to be due to some 14nm bottleneck they were dealing with initially; Nvidia uses TSMC and AMD uses Samsung/GloFo, so that would be a reason for the massive performance-per-watt difference after the jump in manufacturing process over the last year. I'd fully expect some sort of rebranded desktop cards to come out of this eventually, after the presumably larger/more profitable mobile market gets these GPUs first.

Also concerning "7nm" from GloFlo: http://www.globalfoundries.com/news...performance-offering-of-7nm-finfet-technology

This is, in reality, about the same as a "traditional" Moore's-law jump from 14nm, which is to say GloFo's 7nm ≈ Intel's 10nm. The 10nm from Samsung and TSMC is another 20nm-like "half-node", i.e. there for Apple et al. to keep up their upgrade cadence. That being said, while GlobalFoundries seems staked on yelling "first" (after Intel) for "7nm", TSMC doesn't appear to be THAT far behind, meaning AMD wouldn't have any sort of huge timetable advantage over Nvidia in switching to a new process node (assuming all goes to plan).
 
TSMC seems to be planning for 7nm risk production in 2H 2017, while GF has 7nm risk production in 1H 2018.
Taking them at their word, it would seem like GF would not be first in the foundry space.

Anandtech's article on the upcoming nodes would indicate mass production is offset by about half a year. It also makes it seem like GF's 7nm would be potentially inferior to TSMC's 7nm, but Samsung would still be on its 10nm half-node. I could be misreading the tables.
http://www.anandtech.com/show/10704/globalfoundries-updates-roadmap-7-nm-in-2h-2018
 
Costs will be interesting in terms of manufacturing, yields, and wafer price.
Cheers
 
AMD has improved Radeon RX 480, 50% better perf/watt
Improved Polaris GPUs are on the way, with AMD expecting up to 50% better perf/watt numbers on Radeon RX 400 series

A few months ago some of my industry sources said that AMD was working on making its Polaris GPUs more efficient, with some magic found in the PCB and power-delivery systems - something that was broken when AMD launched the card, with Radeon RX 480s using more power than they should have. AMD is supposedly tweaking its latest revisions of Polaris 10 and Polaris 11 for over 50% more performance per watt, which would really help the Radeon RX 480 and Radeon RX 470 stand out more than they do now.

We recently tested AMD's Radeon RX 480 in Gears of War 4 running on DX12 at a crazy 8K (7680 x 4320) resolution, and it still offers half the performance of the Titan X, which costs $1199 while the RX 480 sits at $279. The Titan X is 91% more powerful but costs 344% more money - this comes down to AMD's excellent DX12 capabilities and efficiency with Asynchronous Compute on the Polaris architecture.

The power savings should be significant, with the 150W TDP on the RX 480 dropping to just 95W and the RX 460 dropping from 75W to under 50W, as well as improved clock speeds and an increase in compute performance from 2.1 TFLOPS to 2.5 TFLOPS.

Read more: http://www.tweaktown.com/news/54433/amd-improved-radeon-rx-480-50-better-perf-watt/index.html

They don't list any sources or data to back up their claim, though.
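For what it's worth, the RX 460 figures quoted in the article can be sanity-checked with some back-of-envelope arithmetic (treating TDP as a proxy for actual power draw, which is itself an assumption; the function name is just illustrative):

```python
# Quick check of the quoted RX 460 figures: 2.1 TFLOPS at 75 W today
# vs. a claimed 2.5 TFLOPS at (under) 50 W for the revised part.

def perf_per_watt(tflops, tdp_w):
    """Performance per watt in GFLOPS/W, using TDP as a power proxy."""
    return tflops * 1000.0 / tdp_w

current = perf_per_watt(2.1, 75)  # today's RX 460
claimed = perf_per_watt(2.5, 50)  # claimed revision

print(f"current: {current:.1f} GFLOPS/W")       # 28.0
print(f"claimed: {claimed:.1f} GFLOPS/W")       # 50.0
print(f"gain:    {claimed / current - 1:.0%}")  # 79%
```

Taken at face value, those numbers imply roughly a 79% perf/watt gain, comfortably above the 50% headline - another hint that the figures are loose.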
 

Huh, guess I missed TSMC moving up its timetable. Who knows when either will be able to execute though. If Intel's troubles are anything to go by it might be 2019 till either actually gets a 7nm product out.

They don't list any sources or data to back up their claim, though.

It fits with previously accurate leaked benchmarks showing Fury X-like performance coming out of some Polaris chip. It also fits with AMD's own stated efficiencies for its mobile cards, which showed dramatically better perf/watt than the RX 480/470 etc. have. So it's hardly a stretch. The question now becomes how much they can actually improve the clock speeds, since pushing up the fmax curve makes efficiency drop off steeply. For all that's stated, the 50% improvement could be at current clock speeds, meaning either an RX 480 drops to <=100 watts, or it remains at 150 watts and gets a <50% increase in clock speed.

The Fury X-like benchmarks suggest a ~40% clock-speed increase, but what TDP that would mean is only guessable unless you work for AMD.
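The two limiting cases above are simple arithmetic; here's a back-of-envelope sketch (pure speculation on my part, treating the 150 W TDP as the power budget and assuming the perf/watt gain translates directly; function names are illustrative only):

```python
# Two limiting cases of a hypothetical 50% perf/watt improvement on an
# RX 480-class part with a 150 W power budget.

PPW_GAIN = 0.50  # claimed perf/watt improvement

def power_at_same_perf(p_old, gain):
    """Power needed for unchanged performance after a perf/watt gain."""
    return p_old / (1.0 + gain)

def perf_at_same_power(gain):
    """Throughput multiplier if power is held constant instead."""
    return 1.0 + gain

print(power_at_same_perf(150, PPW_GAIN))  # 100.0 W -- the "<=100 W" case
print(perf_at_same_power(PPW_GAIN))       # 1.5x throughput at 150 W
```

The iso-power case is an upper bound on clock gains: since dynamic power grows roughly with f·V², spending a 50% perf/watt gain entirely on frequency buys noticeably less than a 50% clock increase, which is why the post above says "<50%".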
 