AMD: Speculation, Rumors, and Discussion (Archive)

Status: Not open for further replies.
Correct me if I'm wrong, but the PS4K literally couldn't be anything other than a SoC thanks to the original's HSA implementation, could it? Games that made use of that low-latency link would presumably experience massive slowdowns on a dual-die solution.
 
The PS4 probably doesn't count as fully supporting HSA, much as Kaveri does not. That aside, the latency aspect would be one problematic area.
There are other concerns, such as the lack of a high-bandwidth interface that could serve as a conduit for the Onion bus, which Jaguar could use to plug into a discrete GPU, and the same goes in the other direction. If there were to be one, Vega or similar sounds more like the generation for it.

Another question mark is where the memory controllers would be. The GPU would like to be in close proximity, but whether the Jaguar modules can work without a direct link to the controllers is unclear.
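To put a rough number on the latency concern, here is a toy model in Python; both latency figures are assumptions picked for illustration (an on-die coherent hop versus a PCIe-class round trip), not measurements of any actual part.

```python
# Toy model: per-frame cost of fine-grained CPU<->GPU synchronization.
# Both round-trip latencies below are assumptions for illustration.

ON_DIE_ROUND_TRIP_NS = 500       # assumed: coherent on-die hop (Onion-style)
DISCRETE_ROUND_TRIP_NS = 5_000   # assumed: round trip over a discrete link

def sync_overhead_ms(round_trips: int, round_trip_ns: float) -> float:
    """Total synchronization latency per frame, in milliseconds."""
    return round_trips * round_trip_ns / 1e6

for trips in (10, 100, 1_000):
    print(f"{trips:>5} round trips/frame: "
          f"on-die {sync_overhead_ms(trips, ON_DIE_ROUND_TRIP_NS):.2f} ms, "
          f"discrete {sync_overhead_ms(trips, DISCRETE_ROUND_TRIP_NS):.2f} ms "
          f"(frame budget: ~16.7 ms at 60 fps)")
```

The point is the scaling: a game tuned to bounce work between CPU and GPU many times per frame is fine when each hop is cheap, but the same pattern over a discrete link eats a large slice of the frame budget.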
 
And what speaks against basing the SoC on Onion 3, rather than the old interconnect?

I would actually expect "Jaguar-compatible" rather than an original Jaguar core, meaning Puma+, a.k.a. Carrizo-L, or a shrink of it.
 
Sorry, I was wrong. Something was shown running. So, why the delay till 2017?

If HBM2 is in production now, there's going to be an awful lot of it lying around this autumn.
 
Big Vega: ~460-520 mm^2

To be honest, I hope we'll never get to see 14/16FF GPUs with >500 mm^2.
This last generation got unprecedentedly large chips because 20nm proved to be unprecedentedly useless and 28nm had to be used for an unprecedentedly long time.

The way I see it, if we see a 550 mm^2 GPU on 14/16nm, then it means 10nm failed, just like 20nm did.
 
No, he hasn't.
His account is a little over a year old, and out of the blue he's claiming that he used to own a hardware website that supposedly published those leaks, but which is pretty much gone from the Internet (no Google cache, no archive.org, nothing).
Quite convenient.
He also said that several websites would start breaking the news "a couple of hours" after his post... 10 hours ago.


He could be right, and that would be rather depressing for AMD (although they never promised both GPUs for the summer), but that one-year-old forum account isn't a reputable source at all.

I know that guy from the XS forums and he's an old poster there (going back at least to the Athlon 64 days). A credible guy with access to early hardware and good info sources, but that doesn't make him 100% reliable, as manufacturers can feed out fake info just to find leaks. Here is one of his posts with some info no one else published.
 
Pretty sure 10nm will be fine. The reason 20nm failed was the lack of FinFETs; with 20nm planar transistors, the leakage was probably uncontrollable for larger and more complex chips.
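For a rough sense of why that hits big chips hardest, here is an illustrative toy model in Python (every number in it is hypothetical): static power scales with transistor count, so a leakier process penalizes a large die disproportionately.

```python
# Toy model: total power = dynamic power + aggregate leakage.
# All figures below are hypothetical, chosen only to show the scaling.

def chip_power_w(transistors_billions: float, leak_nw_per_transistor: float,
                 dynamic_w: float) -> float:
    """Dynamic power plus leakage summed over every transistor."""
    static_w = transistors_billions * leak_nw_per_transistor  # 1e9 * 1e-9 cancel
    return dynamic_w + static_w

# Hypothetical FinFET-like vs. leakier-planar leakage per transistor (nW).
for name, leak in (("FinFET-like", 5.0), ("leaky planar", 25.0)):
    small = chip_power_w(2.0, leak, 30.0)    # ~2B-transistor die
    large = chip_power_w(8.0, leak, 120.0)   # ~8B-transistor die
    print(f"{name:>12}: small die {small:.0f} W, large die {large:.0f} W")
```

With the leaky settings, the small die pays an extra 40 W over its FinFET-like counterpart while the large die pays an extra 160 W, which matches the intuition that 20nm planar was survivable for small mobile SoCs but not for big GPUs.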
 
I know that guy from the XS forums and he's an old poster there (going back at least to the Athlon 64 days). A credible guy with access to early hardware and good info sources
Hum.. okay then.

but that doesn't make him 100% reliable, as manufacturers can feed out fake info just to find leaks.
What is really odd is that he's claiming Polaris 10 is being pushed back to October at practically the same time that every other news outlet is claiming Vega is being pulled forward to October.
So if there really is a reliable source in there, it might just have been a mistake.

Here is one of his posts with some info no one else published.
Well, here he's simply saying some of the Pascal cards will have 10GHz memory... a couple of hours before last Friday's announcement, and well after pics of GP104 with GDDR5X had leaked all over the internet.
Regardless, I'll take your word that he has provided true information before.
 
Sorry, I was wrong. Something was shown running. So, why the delay till 2017?

If HBM2 is in production now, there's going to be an awful lot of it lying around this autumn.
Some of it is miscommunication from certain publications; some are refusing to accept the announcement and insist there is something fundamentally wrong with Pascal and HBM2 (that would be SemiAccurate and Bits & Chips); some are not realising it is a multi-phase release, with early 2017 being for the OEM partners.
For now it is being sold directly to NVIDIA's close clients (those multi-million-dollar deals or high-profile research labs and universities). One of the first announced large orders was a European supercomputer lab associated with CERN ordering 4,500 P100s to replace their Keplers; that is one high-value client and order (considering the minimum anyone is meant to pay for these cards is $10k apiece, even for preferential clients).
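As a quick sanity check on the scale of that one order, using only the figures quoted above:

```python
# Order value from the figures in the post above (4,500 units, $10k minimum).
units = 4_500
price_usd = 10_000
print(f"Order value: ${units * price_usd / 1e6:.0f}M")  # -> $45M
```

That is a $45M floor for a single customer, which helps explain why early allocation goes to direct sales rather than board partners.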

Cheers
 
Keep reading his other posts; there are two more there. In one he specifically talks about overclocks (granted, it was on LN2), but no one knew Pascal was going to overclock like what they showed.

Once they do their announcement at the end of this month, that will clarify everything.

I agree it's hard to believe, since they had the chip demoed two quarters ago and are only now running into this problem.
 
Sorry, I was wrong. Something was shown running. So, why the delay till 2017?

If HBM2 is in production now, there's going to be an awful lot of it lying around this autumn.
Volumes for very large GPUs are unclear for 14/16nm, and then there's the volume for the interposer and assembly chain.
Unlike with HBM1, there's also the prospect of other products, like networking hardware, fighting for HBM2 and assembly volumes. Designs in that range have been touted as being in progress.

As far as having plenty of HBM2 floating around goes, that might be relative. If it's anything like other comparatively boutique forms of DRAM, such as GDDR5 and GDDR5X, the "mass" in mass production is very small compared to broadly adopted standards like DDR3 and DDR4, which simply keep factory lines running by default rather than starting and stopping manufacturing to order.

To be honest, I hope we'll never get to see 14/16FF GPUs with >500 mm^2.
This last generation got unprecedentedly large chips because 20nm proved to be unprecedentedly useless and 28nm had to be used for an unprecedentedly long time.

The way I see it, if we see a 550 mm^2 GPU on 14/16nm, then it means 10nm failed, just like 20nm did.

Competing in HPC and other areas of compute is going to mean facing competitors with very large dies.
If there is a supposed way to scale around that, AMD hasn't made note of one before Navi.
How that is meant to escape the current reality that off-die means more power is unclear. Even with tech like HBM, the goals for future HPC are so power-constrained that even today's lower interface power is not efficient enough, and it is only going to grow with bandwidth demands.

Having more die area still seems like it is going to provide more performance, or more power efficiency (which equals performance).
The highly clocked GTX 1080 has left a fair amount of efficiency on the table in terms of power per transistor.
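To put rough numbers on the interface-power point, here is a quick back-of-the-envelope calculation in Python using the GB/s-per-watt efficiency figures AMD publicized when marketing HBM (GDDR5 at roughly 10.66 GB/s per watt, HBM at 35+ GB/s per watt); the bandwidth targets below are my own assumptions for illustration, not roadmap figures.

```python
# Interface power = bandwidth / (bandwidth per watt).
# Efficiency figures are AMD's publicized HBM marketing numbers;
# the bandwidth levels are assumed for illustration.

def interface_power_w(bandwidth_gb_s: float, gb_s_per_watt: float) -> float:
    """Watts spent just moving bits across the memory interface."""
    return bandwidth_gb_s / gb_s_per_watt

for bw in (500, 1_000, 4_000):  # GB/s: current cards up to HPC-class targets
    print(f"{bw:>5} GB/s: GDDR5-class {interface_power_w(bw, 10.66):.0f} W, "
          f"HBM-class {interface_power_w(bw, 35.0):.0f} W")
```

Even at HBM-class efficiency, interface power alone passes 100 W once bandwidth climbs toward the multi-TB/s range that HPC roadmaps call for, which is the core of the problem.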
 
To be honest, I hope we'll never get to see 14/16FF GPUs with >500 mm^2.
This last generation got unprecedentedly large chips because 20nm proved to be unprecedentedly useless and 28nm had to be used for an unprecedentedly long time.

The way I see it, if we see a 550 mm^2 GPU on 14/16nm, then it means 10nm failed, just like 20nm did.
If 10nm is a year late, we could see a big GPU, since NVIDIA obviously already has GP100, which they could eventually release as a consumer card once the process ramps to really good yields and demand for the card dies down a bit.
 
And what speaks against basing the SoC on Onion 3, rather than the old interconnect?
If this is in reference to my post, I was addressing what could be difficult when it's not an SoC. Onion 3 is still an on-die APU connection.

I would actually expect "Jaguar-compatible" rather than an original Jaguar core, meaning Puma+, a.k.a. Carrizo-L, or a shrink of it.
The name Jaguar is limited to a rather specific implementation and node combination. The hop to Puma was not an architecturally significant one.
A shrink to 14/16nm seems worthy enough for AMD to give it a new name. AMD has given new names for less.
 
If 10nm is a year late, we could see a big GPU, since NVIDIA obviously already has GP100, which they could eventually release as a consumer card once the process ramps to really good yields and demand for the card dies down a bit.
A year late where?
It would seem NVIDIA is using TSMC as a foundry, and AMD is probably using GlobalFoundries. I wouldn't take for granted that the different foundries will move in perfect lockstep over the next couple of nodes.
TSMC's 10nm won't be a year late. They have already taped out a number of 10nm designs for customers, and while snafus are certainly possible, very large delays seem unlikely.
 
A year late where?
It would seem NVIDIA is using TSMC as a foundry, and AMD is probably using GlobalFoundries. I wouldn't take for granted that the different foundries will move in perfect lockstep over the next couple of nodes.
TSMC's 10nm won't be a year late. They have already taped out a number of 10nm designs for customers, and while snafus are certainly possible, very large delays seem unlikely.
TSMC also said they had taped out ~40 designs for 16nm FinFET before the end of 2014.
 
I knew I had seen it somewhere. Here are the Microsoft SoC slides pertaining to the Xbox One, courtesy of Tom's Hardware, and the PS4 would not be too different.


[Slide: HC25.26.121-fixed-XB1-20130826gnn-2.png]

[Slide: HC25.26.121-fixed-XB1-20130826gnn-4.png]


Cheers
 
Pretty sure 10nm will be fine. The reason 20nm failed was the lack of FinFETs; with 20nm planar transistors, the leakage was probably uncontrollable for larger and more complex chips.
AMD not mentioning 10nm on their 2016-2018 Radeon roadmap does not leave me hopeful.
 