If you were running an LLM locally on Android through llama.cpp for use as a private personal assistant, what model would you use?
Thanks in advance for any recommendations.
Maid + a VPN back to Ollama running on your own computer.
Use a Tor Onion service with client authorisation to avoid needing a domain or a static IP.
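As a rough sketch of the Tor side, assuming Ollama on its default port (11434) and a stock torrc; the directory paths and key placeholders here are illustrative, not prescriptive:

```
# --- Server side (the computer running Ollama) ---
# Expose Ollama's default port as a v3 onion service.
HiddenServiceDir /var/lib/tor/ollama/
HiddenServicePort 11434 127.0.0.1:11434

# Client authorisation: place a <name>.auth file in
# /var/lib/tor/ollama/authorized_clients/ containing a line like
#   descriptor:x25519:<base32-encoded client public key>
# Only clients holding the matching private key can even fetch
# the service descriptor.

# --- Client side (Tor on the Android device, e.g. via Orbot) ---
# Point Tor at a directory of .auth_private files, one per service,
# each containing:
#   <onion-address>:descriptor:x25519:<base32-encoded private key>
ClientOnionAuthDir /var/lib/tor/onion_auth
```

Then point Maid at http://&lt;your-address&gt;.onion:11434 through the device's Tor proxy. Since unauthorised clients can't resolve the service descriptor at all, the endpoint stays invisible without you exposing any port or IP publicly.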