You're trying to take precise calculations out of a graph whose author says it's not precise.
Yes, I highlighted this yesterday when that slide was posted here -
https://forum.beyond3d.com/threads/...itecture-for-dgpus.60999/page-25#post-2219524
But that's all we got, and yes, I prefer calculations to belief (in someone's probably misquoted, misspelled, misunderstood or overthought words).
Moreover, the slide says "Subject to revision with further testing", so the graph results are based on initial testing.
That means the graph should illustrate the tested proportions of execution time between DP4A and XMX.
Iris Xe 96 = 96 EUs * 8 ALUs = 768 shader units
I looked at specs for a discrete SKU tested here -
https://www.tomshardware.com/features/intel-xe-dg1-benchmarked
I have no idea what configs other DG1 SKUs have, or whether they all use salvaged or full parts.
With THG's specs, that's 4096 * 1.5 / 640 = 9.6 times, or 10 when rounded to the nearest integer.
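To keep the counting in one place, here's a quick Python sketch of the arithmetic as I'm using it. The EU and per-EU ALU counts come from the specs discussed above (96 EUs for integrated Iris Xe, 80 EUs for the Xe MAX / DG1 card in the THG article, 8 ALUs per EU); the 4096 * 1.5 term is simply the figure quoted above, carried over as-is:

```python
# Sanity check of the unit counting and the ratio used above.
iris_xe_units = 96 * 8   # integrated Iris Xe: 96 EUs * 8 ALUs = 768 shader units
dg1_units = 80 * 8       # Xe MAX / DG1 per THG: 80 EUs * 8 ALUs = 640 shader units

# Ratio as computed above; 4096 * 1.5 is the figure from the specs quoted earlier.
ratio = 4096 * 1.5 / dg1_units
print(iris_xe_units, dg1_units, ratio, round(ratio))  # 768 640 9.6 10
```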
Not only are you making calculations out of counting pixels from graph bars that are meant for "conceptual illustration purposes only", you're also doing said calculations wrong.
These graphs are marked "conceptual illustration purposes only" because they don't contain any performance numbers (are they too shy to share them?), but the graphs themselves are based on real measurements, so they should be perfectly fine for calculating proportions; the scale is linear anyway (see the quick sketch below).
Also, nice spin on calculations.
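To spell out what I mean by "calculating proportions": on a linear axis with a zero baseline, the ratio of two bar lengths in pixels equals the ratio of the plotted values, because the unknown scale factor cancels. A minimal sketch; the pixel lengths below are made-up placeholders, not measurements from the slide:

```python
# On a linear axis, value = scale * pixels (zero baseline),
# so the unknown scale cancels when taking the ratio of two bars.
def bar_ratio(pixels_a: float, pixels_b: float) -> float:
    """Ratio of plotted values for two bars measured in pixels."""
    return pixels_a / pixels_b

# Placeholder pixel lengths, NOT actual measurements from the XeSS slide.
dp4a_px, xmx_px = 300.0, 120.0
print(f"DP4A bar is {bar_ratio(dp4a_px, xmx_px):.2f}x the XMX bar")
```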
Perhaps because you're confusing the Xe MAX discrete GPU with 80 EUs with the integrated Iris Xe with 96 EUs.
Yes, I've been talking about discrete solutions the whole time here.
claiming they're lying about their deep-learning upscaling method running efficiently on their integrated Iris Xe using RPM DP4A truly is a very specific flex.
I've not seen any quote from Intel people about their "deep-learning upscaling method running efficiently on their integrated Iris Xe using RPM DP4A"; can you share those quotes from Intel?
Sorry, but I don't buy it; someone's retellings (which may be based on wrong assumptions) are not Intel's claims.