AMD: Speculation, Rumors, and Discussion (Archive)

Status
Not open for further replies.
Ah, alright. Could there be any efficiency downsides to spreading work across multiple teams in multiple locations, compared to NV, which, I presume, has all its engineers assembled in one place?

Also, it would be interesting to know just what Koduri is referring to; "merely" Vega 10 tapeout, or perhaps first silicon from the fab? Something else, like, engineering samples at the lab booting up and working? I hate vague teases like this! :LOL:

"Long way to go," he says, so probably not finished silicon, I suppose. Unless the verification process, driver finalization, and reference board design take a huge amount of time... *grr!*


This would be like saying Samsung is a Texan company, because that's where they keep their biggest R&D team and their biggest fab (a world record on that one: the largest fab ever built, counting every company and workforce in the world)... Maybe Samsung is just a Texan enterprise after all... (OK, I'm a bit drunk.)

That said, there is the question of investment and manpower at AMD. Since Asia is close to my heart (my wife is from there), I have looked at this photo with a lot of interest, and perhaps with a different perspective...
 
It's still disappointing if it only manages to match the 390. The RX 480 is meant to compete with GP106 cards; 28nm cards are all very close to end of life.
 
[Image: AMD-nVidia-GPU-Roadmap-2016-2019.png]

Vega 11 is also likely one of the last AMD graphics chips based on 14/16nm and cannot be expected before spring 2017.
http://www.3dcenter.org/news/hardware-und-nachrichten-links-des-23-juni-2016
 
Isn't GP106 pretty much 50% of GP104? It should be right around the RX 480.


It depends on clocks, but yeah, it should be around there. Also, we don't know if there will be a GTX 1060 Ti; there is a big performance hole between the GTX 980 performance level and the GTX 1070.
 
You're disappointed in a card that hasn't been released because it's apparently not competing with a card that doesn't even exist yet?
Not to mention that when you compare it to the AMD 390X, the RX 480 is a bit ahead in three subtests, a bit behind in two, and roughly equal in two, so an arguably more valid statement would be that the RX 480 is slightly ahead of the 390X. Of course, there is no mention of the actual clocks of the tested cards, differences between driver versions, throttling over time, et cetera.

This thread reveals far more about forum posters than about the RX480.
 
What's the basis for that October date for Vega 10?
Alignment of chicken bones thrown to the altar on the night of the full moon.


Every Vega speculation right now is just wishful thinking. AMD does not have high-end and enthusiast offerings for the 14nm launch, so rumors fly high among the many people who wish to wait less for cards more in line with their expectations.
 
Well, the Q4 PCIe version of the P100... no FP16 support, no NVLink (which is only compatible with the IBM chipset anyway, outside of GPU-to-GPU links). And I can somewhat imagine that the most "available" one in Q4 will be the salvaged part (due to HBM2).

As for double precision, I don't know; AMD has been at a 1:2 rate for a long time now, so if they keep that up, I don't see much trouble for them in terms of performance. Of course, it all depends on the direction they have to take...
Where are you getting the PCIe P100 has no FP16 support?
This is meant to be the PCIe version of the NVLink P100; it is still GP100 and a similar-tier Tesla product.

Edit:
It does support FP32/FP16 mixed precision; according to the Nvidia spec, the PCIe model is the same as the NVLink version apart from clocks/HBM2/NVLink.
I put more details here: https://forum.beyond3d.com/threads/...mors-and-discussion.57662/page-6#post-1926135
Thanks
 
I'm not quite sure AMD wants to target it right now... maybe after Vega. Not that they have no interest in doing it, but I think they have set their priority on taking back share in the larger base of servers and workstations with Vega, instead of spending energy (and money) on deep learning (especially on the software ecosystem).

I could be wrong, and that doesn't mean they have nothing ready in this regard, just that in their current state they need to set priorities, and even if their hardware is capable of it, they won't go head-on into the deep learning market. Especially when, let's be honest, specialized architectures built for the purpose are way better than GPU architectures trying to simulate them. Using a neural architecture is surely better than any GPU.

(Of course I like Nvidia's work on deep learning, but honestly I think its future, outside of execution, lies with specialized processors. If you want to simulate neural activity, you need neural processors that are really able to simulate it, simple as that. At this rate, not even an exaflops system of GPUs will be able to do it properly.)
I thought AMD presented the S9300 X2 as their Deep Learning/training (along with other research segments) HPC product.
I see this dropping down a tier with the big Vega replacing it as their top product, similar to what Nvidia did with the Tesla range when they released the P100.
But it is not clear how they will be targeting sciences and applications focused on double-precision operations.
I thought Greenland (good point) was merged into/replaced by Vega, so it will be interesting to see what AMD is going to do, as Fiji and Tonga so far were about FP32/FP16.
So will Big Vega be a more compromise-balanced FP64/FP32-FP16 mixed-precision design like the P100, or will they again have a more dedicated double-precision card updated from the S9170 (a mix of Tahiti and Hawaii?) and its 1:2 FP64 design?
Thanks
 
The PCIe bandwidth limit is a serious problem for multi-GPU (particularly with more than 2 GPUs) and especially for VR... and of course 4K is involved in those scenarios. It's actually not a problem at all in 60Hz scenarios.
Do you have actual data to back this up?
I know PCIe is shared with everything, but even NVIDIA's new SLI HB bridges give you only 2GB/s of bandwidth, which is a far cry from PCIe's bandwidth.
 
Not sure how intensive this benchmark is, but it looks like 110W is confirmed. It's also slower than my 290.

[Image: benchmark screenshot, AnN0oUk.jpg]
That's quite a stretch from one benchmark, claiming the card would be slower than the 290 (hint: it's not).
Also, most if not all of the random leakers don't actually have the press driver with all the bells and whistles for Polaris
 
Do you have actual data to back this up?
I know PCIe is shared with everything, but even NVIDIA's new SLI HB bridges give you only 2GB/s of bandwidth, which is a far cry from PCIe's bandwidth.

●Cross GPU copies are slow regardless of parallel copy engines
●<8 GB/s on 8xPCIe3, 64 MB consumes at least 8 ms
●Cost of copying step would limit frame rate to about 60 fps on 8xPCIe 3.0 system
http://twvideo01.ubm-us.net/o1/vault/gdc2016/Presentations/Juha_Sjoholm_DX12_Explicit_Multi_GPU.pdf

Be mindful of PCIe bandwidth
● PCI 3.0 (8x) – 8GB/s (expect ~6GB/s)
● PCI 2.0 (8x) – 4GB/s (expect ~3GB/s)  Still common…
● e.g. transferring a 4k HDR buffer will limit you to ~50/100 FPS right away
http://gpuopen.com/wp-content/uploa...ogramming_Model_and_Hardware_Capabilities.pdf

Here you can find some summary tables about typical VR memory costs per frame: http://alex.vlachos.com/graphics/Alex_Vlachos_Advanced_VR_Rendering_Performance_GDC2016.pdf

With some simple math you can see, just from the frame buffers alone, how easily prohibitive VR, HDR, and 4K can become in a high frame-rate scenario...
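The "simple math" above can be sketched out. This is a back-of-the-envelope calculation, not from the slides themselves: it assumes a 4K HDR buffer is 3840x2160 pixels at 8 bytes per pixel (e.g. an RGBA16F target) and uses the "expect ~" effective bandwidths quoted from the GPUOpen deck. The numbers land in the same ballpark as the slides' ~50/100 FPS figures.

```python
# Rough PCIe copy-cost estimate for a per-frame cross-GPU buffer transfer.
# Assumptions (mine, not the slides'): 4K HDR = 3840x2160 @ 8 bytes/pixel,
# effective bandwidths of ~6 GB/s (PCIe 3.0 x8) and ~3 GB/s (PCIe 2.0 x8).

def copy_time_ms(buffer_bytes: int, bandwidth_gbs: float) -> float:
    """Time to move one buffer across the bus, in milliseconds."""
    return buffer_bytes / (bandwidth_gbs * 1e9) * 1e3

def fps_ceiling(buffer_bytes: int, bandwidth_gbs: float) -> float:
    """Frame-rate ceiling if one such copy must happen every frame."""
    return 1000.0 / copy_time_ms(buffer_bytes, bandwidth_gbs)

hdr_4k = 3840 * 2160 * 8  # ~66 MB per frame

for label, bw in [("PCIe 3.0 x8, ~6 GB/s", 6.0),
                  ("PCIe 2.0 x8, ~3 GB/s", 3.0)]:
    print(f"{label}: {copy_time_ms(hdr_4k, bw):.1f} ms per copy, "
          f"ceiling ~{fps_ceiling(hdr_4k, bw):.0f} fps")
```

Copying the buffer alone eats roughly 11 ms on PCIe 3.0 x8 and 22 ms on PCIe 2.0 x8, before any rendering work, which is why 90Hz VR at these resolutions becomes prohibitive so quickly.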
 