AMD Execution Thread [2023]

Status
Not open for further replies.
or should we be ignoring how MI300X uses about 3x the silicon of H100
That's how you do tiles, you spam stuff.
Also they pay more upfront costs for config flexibility which is like, the reason MI300 exists.
Of course, that's an AMD thread after all :) Conveniently ignoring RT and AI workloads is part of that culture too.
someone's upset.
It's cute.
How about eliminating the x86 market and moving to an open standard while we're at it?
Replace it with what, exactly?
ARM is every bit as closed as x86.
 
and the fistfight continues
 
I wonder if AMD will supply enough information to reproduce the results. Nvidia has done so for its claim, so I'd imagine some potential customers will want to run the benchmarks on their own.
Anyone wanting to validate these claims will be able to do so because NVIDIA is sharing the information necessary to reproduce the results. The blog post contains the command lines for the scripts used by NVIDIA to build its model, alongside the benchmarking scripts used to gather the data.

It’s surprising that AMD didn’t make sure the data it shared was accurate, as it was only a matter of time until NVIDIA or LLM enthusiasts fact-checked it. If AMD wants to gain ground on NVIDIA in the race for AI market share, these kinds of mistakes need to be eliminated.
 
Isn't this TensorRT open source?
Both TensorRT and TensorRT-LLM are open source:

 
Any chance to HIPify it?
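For anyone unfamiliar with the term: "HIPify" refers to porting CUDA code to AMD's HIP API, and AMD ships tools (hipify-perl, hipify-clang) that do much of the translation automatically. A minimal illustrative sketch of the idea, not the real tool — the mapping table here is a tiny hand-picked subset of the actual CUDA-to-HIP correspondences:

```python
# Illustrative only: hipify-perl/hipify-clang perform this kind of
# CUDA -> HIP source translation, with far more complete coverage.
CUDA_TO_HIP = {
    "cuda_runtime.h": "hip/hip_runtime.h",
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
}

def hipify(source: str) -> str:
    """Naively substitute CUDA runtime identifiers with HIP equivalents."""
    for cuda_name, hip_name in CUDA_TO_HIP.items():
        source = source.replace(cuda_name, hip_name)
    return source

cuda_snippet = "#include <cuda_runtime.h>\nfloat *d; cudaMalloc(&d, 1024); cudaFree(d);"
print(hipify(cuda_snippet))
```

Since HIP's runtime API deliberately mirrors CUDA's almost one-to-one, a lot of code ports with exactly this kind of name substitution; the hard parts are the corners where the APIs diverge.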
 