Vlam.ai
vlam: Secure local AI models, a pilot application of AI in the Dutch national government
vlam.ai pilots at the Dutch national government, mentioned in [[Artificiële intelligentie vooral een bestuurlijke uitdaging - Digitale Overheid]]
I have yet to try a local model that handles Bash tool calls reliably enough for me to trust that model to operate a coding agent on my device.
This. I need a better conceptual understanding of the different set-ups I have, and of how I might switch between them.
My excitement for local LLMs was very much rekindled. The problem is that the big cloud models got better too—including those open weight models that, while freely available, were far too large (100B+) to run on my laptop.
Cloud models still got much better than local models. Coding agents made a huge difference; with them, Claude Code becomes very useful.
The year local models got good, but cloud models got even better
Local models improved a lot in 2025. Mentions Llama 3.3 70B, Mistral Small 3, and the Chinese 20-30B parameter models.
LM Studio can run LLMs locally (I have llama and phi installed). It also has an API over a localhost webserver. I use that API to make llama available in Obsidian using the Copilot plugin.
This is the API documentation. #openvraag which other scripts / [[Persoonlijke tools 20200619203600]] could I use this in?
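A minimal sketch of calling that localhost API from a script, which is how other personal tools could reuse it. LM Studio serves an OpenAI-compatible API; the port (1234 is its default) and the model name below are assumptions — substitute whatever model is actually loaded.

```python
import json
import urllib.request

# LM Studio's local server; port 1234 is its default setting.
BASE_URL = "http://localhost:1234/v1"


def build_payload(prompt, model="llama-3.2-3b-instruct"):
    """Build an OpenAI-style chat-completion request body.

    The model name is a placeholder; LM Studio uses whichever
    model is loaded in the server tab.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


def ask(prompt):
    """Send the prompt to the local LM Studio server, return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask("Summarise this note in one sentence."))
```

Because the endpoint mirrors the OpenAI chat-completions shape, the same `ask()` helper should work unchanged against any other OpenAI-compatible local server by swapping `BASE_URL`.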
Exolabs.net experiment running large LLMs locally on 4 combined Mac Minis. Links to a preview and shared code on GitHub. For 6600-9360 you can run a cluster of 4 Minis locally. Affordable for SME outfits.