Running local LLMs and VLMs on the Arduino UNO Q with yzma
Discover how to run local LLMs and vision language models (VLMs) directly on the Arduino UNO Q using Ron Evans' yzma project, which brings llama.cpp to Go and makes edge AI inference possible on the Arduino UNO Q.
Components and supplies
1
Arduino UNO Q
Apps and platforms
1
Edge Impulse Studio
Project description
Code
Yzma
Run llama.cpp with Go on the Arduino UNO Q