Running local LLMs and VLMs on the Arduino UNO Q with yzma
Discover how to run local LLMs and VLMs directly on the Arduino UNO Q with Ron Evans' yzma project, which uses llama.cpp from Go to make edge AI inference possible on the Arduino UNO Q.
Devices & Components
1× Arduino® UNO™ Q 2GB
Software & Tools
1× Edge Impulse Studio
Project description
Code
Yzma
Run llama.cpp with Go on the Arduino UNO Q