15 Matching Annotations
  1. Feb 2026
    1. Comparison video of Claude Code using Anthropic's cloud models vs local models on an M4 with 128 GB. Still a heavy lift: fans spinning, memory usage almost at full capacity. But it works. It means that on my M1 with 16 GB only a smaller model will work, and you need to leave room for loading context too. One-off tasks like code generation and interactive work in a moving context have different needs.
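
      The fit-on-my-machine reasoning above can be sketched as back-of-envelope arithmetic. This is my own rough estimate, not from the video; the overhead numbers are assumptions.

      ```python
      # Rough sketch: estimate RAM for a quantized model's weights and check
      # whether it fits alongside a context window. The 4 GB context overhead
      # and 4 GB system reserve are assumed ballpark figures, not measured.

      def model_ram_gb(params_billion: float, bits_per_weight: int = 4) -> float:
          """Approximate resident size of the weights alone."""
          return params_billion * 1e9 * bits_per_weight / 8 / 1e9

      def fits(ram_gb: float, params_billion: float,
               context_overhead_gb: float = 4.0,
               system_reserve_gb: float = 4.0) -> bool:
          """Leave room for the KV cache / context and the OS itself."""
          needed = model_ram_gb(params_billion) + context_overhead_gb + system_reserve_gb
          return needed <= ram_gb

      print(round(model_ram_gb(7), 1))  # a 7B model at 4-bit is ~3.5 GB of weights
      print(fits(16, 7))                # 7B fits on a 16 GB M1 -> True
      print(fits(16, 70))               # a 70B model does not -> False
      ```

      This is why a 100B+ open-weight model is simply out of reach on 16 GB, while the 128 GB M4 can host it with headroom for context.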

    1. Ollama is automatically detected when running locally at http://127.0.0.1:11434/v1

      openclaw can detect the presence of ollama if it is visible at this specific localhost address. Basically, if you have ollama running, it will be detected, meaning I could run openclaw fully locally.
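
      A minimal sketch of that kind of detection: probe the OpenAI-compatible endpoint Ollama serves at its default address. How openclaw itself performs the check is an assumption; this only illustrates the idea.

      ```python
      # Hypothetical detection sketch: see whether anything answers on Ollama's
      # default OpenAI-compatible endpoint (http://127.0.0.1:11434/v1).
      import urllib.request
      import urllib.error

      OLLAMA_BASE = "http://127.0.0.1:11434/v1"

      def ollama_running(timeout: float = 1.0) -> bool:
          """True if a server responds on Ollama's default local port."""
          try:
              with urllib.request.urlopen(f"{OLLAMA_BASE}/models", timeout=timeout) as resp:
                  return resp.status == 200
          except (urllib.error.URLError, OSError):
              return False

      print("ollama detected" if ollama_running() else "ollama not running")
      ```

      If the probe succeeds, a client can point any OpenAI-style API calls at that base URL instead of a cloud endpoint, which is what makes a fully local setup possible.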

  2. Jan 2026
    1. Your assistant. Your machine. Your rules. Unlike SaaS assistants where your data lives on someone else’s servers, OpenClaw runs where you choose—laptop, homelab, or VPS. Your infrastructure. Your keys. Your data.

      you run openclaw yourself. I think I saw [[Martijn Aslander p]] use it on a VPS yesterday.

    1. My excitement for local LLMs was very much rekindled. The problem is that the big cloud models got better too—including those open weight models that, while freely available, were far too large (100B+) to run on my laptop.

      Cloud models still got much better than local models. Coding agents made a huge difference; with them, Claude Code becomes very useful.

  3. Dec 2024
  4. Nov 2024