Running local LLMs and VLMs on the Arduino UNO Q with yzma

Discover how to run local LLMs and VLMs directly on the Arduino UNO Q with Ron Evans' yzma project, which drives llama.cpp from Go and makes edge AI inference possible on the board.

Feb 18, 2026


Components and supplies

1 × Arduino UNO Q

Apps and platforms

1 × Edge Impulse Studio

Project description

Code

Yzma

Run llama.cpp with Go on the Arduino UNO Q
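As a rough sketch of the underlying idea, llama.cpp can be built and run directly on the UNO Q's Debian Linux side. The commands below are assumptions for illustration, not taken from the yzma project; the model file name is a placeholder for any small quantized GGUF model that fits the board's limited RAM.

```shell
# Sketch: build llama.cpp from source on the UNO Q's Linux side.
git clone https://github.com/ggml-org/llama.cpp
cmake -B build llama.cpp
cmake --build build --config Release -j

# Run a prompt against a small quantized model (placeholder file name).
./build/bin/llama-cli -m tinyllama-q4_k_m.gguf -p "Hello from the UNO Q"
```

yzma's role is to make this capability callable from Go programs rather than only from the command line; see the repository's README for its actual API.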



marc-edgeimpulse
