Linux GPU Compute Issues *fork*

Because you picked the wrong hardware; if you pick AMD hardware, you'll see where they have better support, which seems to be Windows?
I added “for compute” for a reason.

When AMD is publishing press releases about how Baidu will use Vega for their deep learning data center, they’re not going to deploy it under Windows. More generally, when AMD praises Vega for its deep learning abilities, nobody sane would expect that to be Windows.

And as somebody pointed out above: 2x FP16 does seem to work now for OpenCL on Linux but not on Windows, which kinda proves where AMD’s priorities are.
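
If anyone wants to check their own setup, here's a minimal sketch (assuming the OpenCL headers and an ICD loader are installed; error handling omitted for brevity) that queries the first GPU for the cl_khr_fp16 extension, which is what gates half precision in OpenCL C:

/* Query the first GPU's extension string and look for cl_khr_fp16. */
#include <stdio.h>
#include <string.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    char extensions[8192] = {0};

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS,
                    sizeof(extensions), extensions, NULL);

    printf("cl_khr_fp16 %s\n",
           strstr(extensions, "cl_khr_fp16") ? "supported" : "not reported");
    return 0;
}

Compile with something like gcc fp16check.c -lOpenCL and run it under each OS.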

In a way, I agree that it’d be stupid to use Vega for deep learning on Linux, but that’s only because it’d be stupid to consider AMD in general: the field is moving too fast to waste time on immature tooling.
 
Float is float, double is double, and half is half; the number is the width of the vector type (e.g. half8 is a vector of eight halves).

Linux:

[half8 ] Time: 0.020113s, 27333.33 GFLOP/s
[float8 ] Time: 0.040722s, 13500.05 GFLOP/s
[double8 ] Time: 0.630167s, 872.40 GFLOP/s

Windows:

[half8 ] Time: 0.038200s, 14391.46 GFLOP/s
[float8 ] Time: 0.038084s, 14435.29 GFLOP/s
[double8 ] Time: 0.603230s, 911.35 GFLOP/s

Rapid Packed Math is working under Linux, but not under Windows, with OpenCL (the slight differences in performance between the two OSes may be due to power profiles, etc.).
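
For context, benchmarks like this usually just run a long chain of mad() calls on the vector type in question. A hypothetical half8 variant could look roughly like the sketch below (my own illustration, not the actual benchmark source; the loop count is arbitrary). On Vega, the compiler can map these to packed 2x FP16 instructions when the driver exposes Rapid Packed Math, which is why the half8 number doubles on Linux.

#pragma OPENCL EXTENSION cl_khr_fp16 : enable

__kernel void mad_half8(__global half *out, half seed)
{
    half8 x = (half8)(seed);
    half8 y = (half8)((half)get_local_id(0));

    /* Each mad() is 2 flops per component = 16 flops per half8 op. */
    for (int i = 0; i < 1024; i++) {
        x = mad(y, x, y);
        y = mad(x, y, x);
    }

    /* Reduce and write back so the compiler can't optimize the loop away. */
    half8 s = x + y;
    out[get_global_id(0)] = s.s0 + s.s1 + s.s2 + s.s3
                          + s.s4 + s.s5 + s.s6 + s.s7;
}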

Yep, seems I did read it wrong (I was looking at the wrong pastebin). Thank you for the correction and for providing concrete proof of half-precision support on Linux at the stated spec performance. As such, this was a moot point in my complaint, but it highlights a final struggle I had finding the information and details I needed w.r.t. this card, one that hit a breaking point considering I was dealing with similar issues with other hardware I had purchased.

In a way, I agree that it’d be stupid to use Vega for deep learning on Linux, but that’s only because it’d be stupid to consider AMD in general: the field is moving too fast to waste time on immature tooling.
Correct. Reverting fully to Nvidia hardware highlighted this. While CUDA is rock solid, you still end up fighting other battles, like navigating SDKs/APIs and getting exotic frameworks and hardware working, even on a very robust development platform. Without a stable foundation of dev tools, drivers, and frameworks, it's no man's land and a lot of time is wasted going in circles. Thank you everyone for providing feedback and information alongside my frustrated posts.
Seems I will be participating more in Team Green threads from here on out, albeit on Ryzen machines :runaway:
 