I worked on the Xcode team for years and know the lengths Apple goes to make this stuff difficult to figure out.
I just wanted to say that you’ve done an excellent job and am looking forward to the 3rd installment.
> Throughout this series, “we” refers to maderix (human) and Claude Opus 4.6 (by Anthropic) working as a pair. The reverse engineering, benchmarking, and training code were developed collaboratively
Sure, "collaboratively." Why would I ever trust a vibe-coded analysis? How do I, a non-expert in this niche, know that Opus isn't pulling a fast one on both of us? LLMs write convincing bullshit that fools even experts. Have you manually verified each fact in this piece? I doubt it. Thanks for the disclaimer, it saved me from having to read it.
This article was clearly written by a human (and AI) but still has a few "LLMisms" such as:
- The key insight - [CoreML] doesn't XXX. It YYY.
With that being said, this is a highly informative article that I enjoyed thoroughly! :)
The article links to their own Github repo: https://github.com/maderix/ANE
We've got about a year before so many people are interacting with LLMs on a daily basis that their style starts to reverse-infect human speech and writing.
That said, there were people who talked like this before LLMs; it didn't develop out of whole cloth.
My honest take? You're probably right.
I have always wondered if the neural engine could be used for training - pretty excited for part 3 of this to see if the juice is actually worth the squeeze
Part 2 has benchmarks: https://maderix.substack.com/p/inside-the-m4-apple-neural-en...
6.6 FLOPS/W, plus the ability to completely turn off when not in use, so 0W at idle.
The future is bright for software engineers.
The big takeaway isn't reverse engineering the ANE per se, but what Manjeet could do with his software engineering skills when accelerated by AI.
This is a good example of the present state of software engineering. Not future state - present state.
Genuine question, not trying to throw shade or anything, but are those cores actually useful given the state Apple Intelligence is in?
If you strip away the branding, Apple has shipped, and continues to ship, a ton of algorithms that likely use the ANE, and end users can use CoreML to do the same.
Just some things people likely take for granted that, IIRC, Apple has said use the ANE, or would at least benefit from it: object recognition, subject extraction from images and video, content analysis, ARKit, spam detection, and audio transcription.
They are also used by ML models that are deeply integrated into macOS and iOS without you knowing, like object and text detection in images.
And help in Photos, Final Cut Pro, and other apps.
Apple's OSes run a lot of local ML models for many tasks that aren't branded as Apple Intelligence, and they have done so for many years now.
https://dennisforbes.ca/blog/microblog/2026/02/apple-neural-...
You can convert your own ML models to MLX to use them; Apple Intelligence is not the only application.
MLX does not run on the NPU, AFAIK; just GPU and CPU. You have to use CoreML to officially run code on the Neural Engine.
Even then, there is no transparency into how it decides what runs on the ANE vs. the GPU, etc.
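To make the point above concrete: with coremltools you can only *request* a set of compute units (e.g. `ct.ComputeUnit.CPU_AND_NE`) when converting or loading a model; the CoreML runtime still decides per-layer where things actually execute, with no public control or visibility. Below is a minimal sketch of that request side. The `requested_units` helper is hypothetical (not from the article or the library); the `ComputeUnit` enum names are coremltools' documented ones, and the import guard is only so the sketch runs where coremltools isn't installed.

```python
# Sketch: requesting ANE execution via coremltools' ComputeUnit enum.
# Note this is only a *request* -- CoreML decides per-layer placement,
# which is the opacity the comment above complains about.
try:
    import coremltools as ct  # third-party; macOS-oriented, may be absent
    HAVE_CT = True
except ImportError:
    HAVE_CT = False

def requested_units(prefer_ane: bool) -> str:
    """Return the name of the ComputeUnit we'd pass to ct.convert()
    or ct.models.MLModel(..., compute_units=...). Hypothetical helper."""
    if not HAVE_CT:
        # Fall back to the documented enum member names.
        return "CPU_AND_NE" if prefer_ane else "CPU_ONLY"
    unit = ct.ComputeUnit.CPU_AND_NE if prefer_ane else ct.ComputeUnit.CPU_ONLY
    return unit.name

print(requested_units(True))   # CPU_AND_NE
```

Even with `CPU_AND_NE`, layers the ANE can't handle silently fall back to the CPU; profiling (e.g. Xcode's CoreML performance reports) is the only way to see where a model really ran.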