xpea's Recent Activity

  1. xpea replied to the thread Nvidia Ampere Discussion [2020-05-14].

    Agree. Speed is one variable to consider. Accuracy is another, and BF16 doesn't provide enough precision for many networks...

    May 27, 2020 at 12:50 PM
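To illustrate the precision point: bfloat16 keeps FP32's 8-bit exponent but only 7 explicit mantissa bits, so values that are distinct in FP32 can collapse to the same BF16 value. A minimal sketch (the `to_bfloat16` helper is hypothetical, simulating round-to-nearest-even on the top 16 bits of a float32 bit pattern — not any particular hardware's implementation):

```python
import struct

def to_bfloat16(x: float) -> float:
    """Simulate bfloat16 rounding of an IEEE-754 float32 value.

    bfloat16 is the top 16 bits of float32 (sign + 8-bit exponent +
    7 explicit mantissa bits); here we round to nearest, ties to even.
    """
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    # Add a rounding bias before truncating the low 16 bits (tie-to-even).
    rounding_bias = 0x7FFF + ((bits >> 16) & 1)
    bits = (bits + rounding_bias) & 0xFFFF0000
    return struct.unpack("<f", struct.pack("<I", bits))[0]

# 1.0 and 1.001 differ in FP32, but the gap (0.001) is below bfloat16's
# spacing near 1.0 (2**-7 = 0.0078125), so both round to exactly 1.0:
print(to_bfloat16(1.0))    # 1.0
print(to_bfloat16(1.001))  # 1.0
```

The collapse shown above is the kind of precision loss the post is referring to: BF16 has generous range (same exponent as FP32) but only about 2-3 significant decimal digits.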
  2. xpea liked OlegSH's post in the thread Nvidia Ampere Discussion [2020-05-14].

    A good question. HPC folks I know are quite positive about changes in Ampere because their tasks are bandwidth bound in many cases....

    May 27, 2020 at 11:09 AM
  3. xpea replied to the thread Nvidia Ampere Discussion [2020-05-14].

    frankly, I like this kind of waste that provides 10 times the performance :lol: More seriously, it's obviously for backward...

    May 27, 2020 at 11:09 AM
  4. xpea replied to the thread Nvidia Ampere Discussion [2020-05-14].

    You forgot to say that TF32 operates on FP32 inputs, all internal accumulators are FP32, and the output is FP32. In training, they are no...

    May 27, 2020 at 8:51 AM
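The TF32 data flow described above (FP32 in, reduced-precision multiply, FP32 accumulate, FP32 out) can be sketched in a few lines. This is a rough numerical model only — Python floats are doubles, and the `to_tf32`/`tf32_dot` helpers are hypothetical names, not an NVIDIA API:

```python
import struct

def to_tf32(x: float) -> float:
    """Round a float32 value to TF32 precision.

    TF32 keeps FP32's 8-bit exponent but only 10 explicit mantissa bits
    (vs 23), so we truncate the 13 low mantissa bits of the float32 pattern.
    """
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits &= ~((1 << 13) - 1)  # drop the 13 low mantissa bits
    return struct.unpack("<f", struct.pack("<I", bits))[0]

def tf32_dot(a, b):
    """TF32-style dot product: inputs rounded to TF32, accumulation kept
    at full precision (on hardware, FP32; here, Python's float)."""
    acc = 0.0
    for x, y in zip(a, b):
        acc += to_tf32(x) * to_tf32(y)
    return acc
```

The key design point the post makes is visible here: only the multiplicands are reduced to TF32; the accumulator and the result stay full FP32, which is why TF32 training can track FP32 accuracy closely.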
  5. xpea liked pharma's post in the thread Nvidia Turing Product Reviews and Previews: (Super, TI, 2080, 2070, 2060, 1660, etc).

    Benchmarks: Premiere Pro with NVENC: Rendering videos in 20% of the time ( May 26, 2020...

    May 27, 2020 at 1:27 AM
  6. xpea liked Benetanegia's post in the thread Nvidia Ampere Discussion [2020-05-14].

    For the same reason Turing had 2xFP16 rate alongside the much higher Tensor FP16. One is matrix multiplication the other is not. Both...

    May 27, 2020 at 1:24 AM
  7. xpea liked DavidGraham's post in the thread Nvidia Ampere Discussion [2020-05-14].

    It is. https://devblogs.nvidia.com/nvidia-ampere-architecture-in-depth/

    May 27, 2020 at 1:22 AM
  8. xpea liked Benetanegia's post in the thread Nvidia Ampere Discussion [2020-05-14].

    They are not for DL only. They are just matrix on matrix multiplication instead of "scalar", so their use may (or may not) be more...

    May 27, 2020 at 1:22 AM
  9. xpea attached a file to the thread Nvidia Ampere Discussion [2020-05-14].

    But AI/ML is where the money is. From Hyperion Research: [ATTACH] HPC is shrinking, and Nvidia made the right decision to go all-in on AI/ML...

    Screenshot_20200527-071015__01.jpg

    May 27, 2020 at 1:13 AM
  10. xpea liked DavidGraham's post in the thread Nvidia Ampere Discussion [2020-05-14].

    Once more, FP64 Tensor format is designed for HPC simulation workloads as well as AI training. NVIDIA is encouraging developers to...

    May 26, 2020 at 3:41 PM
  11. xpea replied to the thread Nvidia Ampere Discussion [2020-05-14].

    According to Nvidia, they provide the same accuracy in training. From here:...

    May 26, 2020 at 2:44 PM
  12. xpea replied to the thread GART: Games and Applications using RayTracing.

    Next step in Ray Tracing tech, will be in Ampere Gaming new marketing push and introduced at SIGGRAPH 2020: [MEDIA] Nvidia ReSTIR...

    May 25, 2020 at 9:10 AM
  13. xpea liked pjbliverpool's post in the thread PC system impacts from tech like UE5? [Storage, RAM] *spawn*.

    I can easily see DirectStorage being a vendor agnostic version of this. GPUDirectStorage is accessed via CUDA extensions, it'd make...

    May 24, 2020 at 4:16 PM
  14. xpea replied to the thread OpenCL 3.0 [2020].

    ...Well, maybe for you, but not for Nvidia, as their >$1 billion datacenter revenue last quarter proves. CUDA adoption is accelerating...

    May 24, 2020 at 12:49 PM
  15. xpea liked Remij's post in the thread PC system impacts from tech like UE5? [Storage, RAM] *spawn*.

    I mean, surely to god MS has been anticipating this for a while now right? They've got to know that they can't just sit on this stuff...

    May 24, 2020 at 12:33 PM
  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.