AMD Execution Thread [2022]

That's not where stuff goes sour.
Most bent LGA pins come from the installing idiot dropping the CPU sideways onto the naked pin grid.
I've never dropped a CPU, but I think it's pretty easy to drop an LGA CPU on the MB. They're very thin and don't have a lot of area for a good grip.
 
It's been 4 years since everyone and their mommy bolted GEMM blobs onto phone SoCs, which amounted to...
...hell yeah, nothing, just like I love it.
Couldn't be further from the truth. NPUs in mobile phones are used extensively: camera and video applications, voice commands/sound recognition, assistant agents (Google, Siri, etc.), GPS apps, face recognition, on-the-fly emoji, smart typing, text prediction, battery optimization according to user habits, touch recognition, language translation, speech-to-text... hell, the entirety of iOS as of now relies heavily on machine learning algorithms that are accelerated locally on the device.
 
but I think it's pretty easy to drop an LGA CPU on the MB
Yeah, and usually it's fine, but when it hits the pins: catastrophe.
NPUs in mobile phones are used extensively: camera and video applications, voice commands/sound recognition, assistant agents (Google, Siri, etc.), GPS apps, face recognition, on-the-fly emoji, smart typing, text prediction, battery optimization according to user habits, touch recognition, language translation, speech-to-text
Nearly everything here either isn't on-device or isn't touching the weird non-standard ML block whatsoever.
Everyone baked their processing into their ISPs and codecs, so the ML garbage sits idle even for the camera.
the entirety of iOS as of now relies heavily on machine learning algorithms that are accelerated locally on the device.
That's bullshit and you know it.
Please stop trying to give me HotChips 2018 flashbacks; it was silly back then and is silly now.
 
That's bullshit and you know it.
Please stop trying to give me HotChips 2018 flashbacks; it was silly back then and is silly now.
The only bullshit here is your statements. Mobile companies are increasingly investing in ML blocks, and Apple is unquestionably emphasizing the heavy involvement of their NPUs.

There's a whole bunch of new experiences that are powered by machine learning. And these are things like language translation, or on-device dictation, or our new features around health, like sleep and hand washing, and stuff we've released in the past around heart health and things like this. I think there are increasingly fewer and fewer places in iOS where we're not using machine learning.
the fact that all the work is done locally on the device
Have an educated read:
https://arstechnica.com/gadgets/202...s-machine-learning-across-ios-and-soon-macos/

https://www.samsung.com/global/galaxy/what-is/npu/
 
Yeah, for marketing, since CPU/GPU gains of note are no more.
Please, no more HC2018 retreads, okay?
It was silly enough back then.

Lmao I'd do that too if my CPU design teams deserted.
I seldom agree with most of what DavidGraham posts, but he’s right on this one.

iOS has been utilising the NPU on the A12 Bionic and newer (the A11 Bionic’s NPU was too underspecc’d) for photography and video applications.

As alluded to, executing ML on the CPU is a fallback option for iOS devices with an A11 Bionic or older chipset, and Intel-based Mac computers.

source: my time at Apple and discussions with my SW engineering buddies.
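
For reference, the app-side switch for this is Core ML's compute-units setting. A minimal Swift sketch, assuming a hypothetical bundled model called SomeClassifier (not a real Apple sample):
Code:
import CoreML

// Hypothetical compiled model bundled with the app ("SomeClassifier" is made up).
let modelURL = Bundle.main.url(forResource: "SomeClassifier", withExtension: "mlmodelc")!

let config = MLModelConfiguration()
// Let Core ML pick any available compute unit (CPU, GPU, or Neural Engine).
// On an A12 Bionic or newer, eligible layers run on the NPU; on an A11 or older,
// or an Intel Mac, Core ML quietly falls back to the CPU/GPU, as described above.
config.computeUnits = .all

let model = try MLModel(contentsOf: modelURL, configuration: config)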

But anyway, back to AMD discussion.
 
She's pretty damn competent, and without resorting to a fem-centric pandering narrative. Clearly AMD is doing great, but I can't help but look at Apple's M1s and wish AMD had that kind of overall graphics performance on a single SoC.
 
Clearly AMD is doing great, but I can't help but look at Apple's M1s and wish AMD had that kind of overall graphics performance on a single SoC.
The potent M1 Ultra chips are targeting devices *starting* at $4k. How large is that market?
 