Running local LLMs and VLMs on the Arduino UNO Q with yzma

Discover how to run local LLMs and VLMs directly on the Arduino UNO Q using Ron Evans' yzma project, which brings llama.cpp to Go and makes edge AI with LLMs possible on the Arduino UNO Q.

Feb 18, 2026


Apache-2.0

Devices & Components

1 × Arduino® UNO™ Q 2GB

Software & Tools

1 × Edge Impulse Studio

Project description

Code

Yzma

Run llama.cpp with Go on the Arduino UNO Q



marc-edgeimpulse

