Video on running an Ollama coding assistant on Hetzner. Does not mention that it runs on a Hetzner GEX44, which starts at 200 Euro/month.
- Feb 2026
-
www.youtube.com
-
-
medium.com
-
Mentions different ways of installing a local AI set-up. Also mentions much cheaper options than Anthropic, specifically OpenRouter.ai. Does not mention running your own local install on a cloud VPS.
-
-
www.youtube.com
-
Comparison video of Claude Code using Anthropic's cloud models vs local models on an M4 with 128 GB RAM. Still a heavy lift: fans spinning, memory usage almost at full capacity. But it works. This means that for my M1 with 16 GB only a smaller model will work, and you need to leave room for context loading too. One-off tasks like code generation and interactive work in shifting contexts have different needs.
-
-
docs.openclaw.ai
-
Ollama is automatically detected when running locally at http://127.0.0.1:11434/v1
OpenClaw can detect the presence of Ollama if it is visible at this specific localhost address; basically, if you have Ollama running it will be detected. Meaning I could run OpenClaw fully locally.
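That auto-detection can be checked by hand. A minimal sketch, assuming only the endpoint quoted above (the default Ollama localhost address); the `ollama_running` helper is my own name, not an OpenClaw or Ollama API:

```python
import urllib.request
import urllib.error

# Default OpenAI-compatible endpoint Ollama serves on localhost,
# as quoted from the OpenClaw docs.
OLLAMA_URL = "http://127.0.0.1:11434/v1/models"

def ollama_running(url: str = OLLAMA_URL, timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers on its default localhost port."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused or timed out: no server on that address.
        return False

print("Ollama detected" if ollama_running() else "Ollama not running")
```

Any tool doing this kind of probe only sees Ollama at that exact address, which is why a remote or non-default install would go undetected.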
-
- Jan 2026
-
docs.ollama.com
-
Recommended Models qwen3-coder glm-4.7 gpt-oss:20b gpt-oss:120b
The local models Ollama recommends for use in Claude Code: qwen3-coder, glm-4.7, gpt-oss:20b, gpt-oss:120b.
-
Ollama can also be used with Claude Code through its endpoint. This allows using open Chinese coding models like qwen3-coder.
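As a sketch of what "through its endpoint" amounts to: Ollama exposes an OpenAI-compatible chat API on localhost, so any client speaking that format can drive a local model. Assumptions here: the default port, and that the model has already been pulled (`ollama pull qwen3-coder`); the `chat` helper is mine, not an Ollama function.

```python
import json
import urllib.request

# Ollama's OpenAI-compatible chat endpoint on its default localhost port.
OLLAMA_CHAT_URL = "http://127.0.0.1:11434/v1/chat/completions"

def chat(prompt: str, model: str = "qwen3-coder",
         url: str = OLLAMA_CHAT_URL) -> str:
    """Send one chat turn to a local OpenAI-compatible endpoint
    and return the assistant's reply text."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # OpenAI-style response shape: first choice, message content.
    return data["choices"][0]["message"]["content"]
```

Swapping `model` for glm-4.7 or gpt-oss:20b is the whole difference between the recommended models; the request shape stays the same.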
-
-
lmstudio.ai
-
One can run Claude Code using local models through the LM Studio endpoint, so that you don't use Claude in the cloud.
-
-
openclaw.ai
-
Your assistant. Your machine. Your rules. Unlike SaaS assistants where your data lives on someone else’s servers, OpenClaw runs where you choose—laptop, homelab, or VPS. Your infrastructure. Your keys. Your data.
You run OpenClaw yourself. I think I saw [[Martijn Aslander p]] use it on a VPS yesterday.
-
-
www.digitaleoverheid.nl
-
Vlam.ai
VLAM ("Veilige lokale AI-modellen", secure local AI models), a pilot application of AI in the Dutch government.
-
-
www.ssc-ictspecials.nl
-
Vlam.ai pilots at the Dutch national government, mentioned in [[Artificiële intelligentie vooral een bestuurlijke uitdaging - Digitale Overheid]].
-
-
simonwillison.net
-
I have yet to try a local model that handles Bash tool calls reliably enough for me to trust that model to operate a coding agent on my device.
This. I need a better conceptual understanding of the different set-ups I have, and how I might switch between them.
-
My excitement for local LLMs was very much rekindled. The problem is that the big cloud models got better too—including those open weight models that, while freely available, were far too large (100B+) to run on my laptop.
Cloud models still improved much more than local models. Coding agents made a huge difference; with them Claude Code becomes very useful.
-
The year local models got good, but cloud models got even better
Local models improved a lot in 2025. Mentions Llama 3.3 70B, Mistral Small 3, and the Chinese 20-30B parameter models.
-
- Dec 2024
-
lmstudio.ai
-
LM Studio can run LLMs locally (I have llama and phi installed). It also has an API over a localhost webserver. I use that API to make llama available in Obsidian using the Copilot plugin.
This is the API documentation. #openvraag Which other scripts / [[Persoonlijke tools 20200619203600]] could I use this in?
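For reuse in other scripts, the relevant fact is that LM Studio's localhost server speaks the OpenAI API shape. A minimal sketch, assuming LM Studio's default port 1234; the `list_models` helper is my own name, not part of LM Studio:

```python
import json
import urllib.request

# LM Studio's local server defaults to port 1234 and mimics the OpenAI API,
# which is why plugins like Obsidian Copilot can point at it directly.
LMSTUDIO_BASE = "http://127.0.0.1:1234/v1"

def list_models(base: str = LMSTUDIO_BASE) -> list[str]:
    """Return the model ids the local LM Studio server reports."""
    with urllib.request.urlopen(base + "/models", timeout=5) as resp:
        data = json.load(resp)
    # OpenAI-style list response: {"data": [{"id": ...}, ...]}
    return [m["id"] for m in data.get("data", [])]
```

Any script that takes an OpenAI-compatible base URL could be pointed at this address instead of the cloud.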
-
- Nov 2024
-
www.heise.de
-
Exolabs.net experiment running large LLMs locally on four combined Mac Minis. Links to a preview and shared code on GitHub. For 6,600-9,360 you can run a cluster of four Minis locally. Affordable for SME outfits.
-