24 Matching Annotations
  1. Last 7 days
    1. Ollama is automatically detected when running locally at http://127.0.0.1:11434/v1

      OpenClaw detects Ollama automatically if it is reachable at this specific localhost address, so simply having Ollama running is enough. That means I could run OpenClaw fully locally.
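
      A minimal sketch of that kind of detection (the URL is from the highlight; the probe logic and timeout are my assumptions, not OpenClaw's actual code):

      ```python
      import urllib.request
      import urllib.error

      OLLAMA_URL = "http://127.0.0.1:11434/v1"  # address from the highlight

      def ollama_running(url: str = OLLAMA_URL, timeout: float = 1.0) -> bool:
          """Return True if something answers at the local Ollama address."""
          try:
              urllib.request.urlopen(url + "/models", timeout=timeout)
              return True
          except urllib.error.HTTPError:
              # The server answered, even if with an error status: Ollama is up.
              return True
          except (urllib.error.URLError, OSError):
              return False

      print("Ollama detected" if ollama_running() else "Ollama not running")
      ```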

    1. Setting context length Setting a larger context length will increase the amount of memory required to run a model. Ensure you have enough VRAM available to increase the context length.

      This setting lives in the Ollama desktop interface. Does it also apply to the CLI, or are those two separate instances?

    2. Context length is the maximum number of tokens that the model has access to in memory. The default context length in Ollama is 4096 tokens. Tasks which require large context like web search, agents, and coding tools should be set to at least 64000 tokens.

      The default Ollama context length is 4k tokens. The recommended minimum for web search, agents, and coding tools (like Claude Code or OpenCode) is 64k. I've seen 128k recommended for Claude Code.
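
      A sketch of requesting the larger context per call through Ollama's REST API (`num_ctx` is Ollama's option name for the context window; the model name and message here are just placeholder values around the 64k figure discussed above):

      ```python
      import json

      # Request payload for POST http://127.0.0.1:11434/api/chat
      # num_ctx raises the context window from the 4096-token default to 64k.
      payload = {
          "model": "llama3.1",            # example model; substitute your own
          "messages": [{"role": "user", "content": "Summarize this repo."}],
          "options": {"num_ctx": 65536},  # >=64k for agents / coding tools
      }

      print(json.dumps(payload, indent=2))
      ```

      Whether the desktop slider and the CLI share one value is exactly the open question above; setting `num_ctx` per request sidesteps it, at the cost of the extra VRAM the highlight warns about.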

  2. Jan 2026
  3. Dec 2025
    1. Supported Providers Ollama (Free & Local): Run powerful open-source models locally on your machine. This is a great option for privacy and offline use.

      Ah, so enhancement can also run locally by connecting to Ollama.

    1. Flow uses a combination of open-source models (i.e. LLAMA 3.1) and proprietary LLM providers (such as OpenAI) to provide its services. Wispr has agreements with all third party generative AI providers to ensure no data is stored or used for model training (zero data retention).

      Wispr Flow uses both open (Llama) and closed LLMs such as OpenAI's. Server-side, though, under zero-data-retention agreements.

    1. GPT-OSS OpenAI's open-weight models designed for powerful reasoning, agentic tasks, and versatile developer use cases.

      GPT-OSS is by OpenAI. It seems to be available locally through Ollama in several sizes.
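
      One way to check whether a GPT-OSS variant is already pulled is Ollama's `/api/tags` model-list endpoint; a hedged sketch (the `gpt-oss` tag prefix matches the names Ollama publishes, but verify against the current model library):

      ```python
      import json
      import urllib.request
      import urllib.error

      def local_models(base: str = "http://127.0.0.1:11434") -> list:
          """List model tags known to the local Ollama, or [] if it isn't running."""
          try:
              with urllib.request.urlopen(base + "/api/tags", timeout=2) as resp:
                  data = json.load(resp)
              return [m["name"] for m in data.get("models", [])]
          except (urllib.error.URLError, OSError):
              return []

      models = local_models()
      print("gpt-oss pulled:", any(m.startswith("gpt-oss") for m in models))
      ```

      If it is missing, `ollama pull gpt-oss:20b` (or the larger 120b tag) should fetch a variant.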

  4. Jun 2025
    1. Capabilities: - Speech transcription supports local (WhisperCpp/FasterWhisper) and online (B interface/J interface??) engines - Subtitle translation supports traditional engines and LLMs - Traditional engines: DeepL/Microsoft/Google - LLMs: Ollama, DeepSeek, SiliconFlow, and any [OpenAI-compatible API] (an LLM API relay is provided alongside)

      Installation and deployment - Windows: a one-click installer is provided - macOS: must be set up manually with Python, and the author says it is unverified 👎. Also, local Whisper is not yet supported on macOS.

  5. Oct 2024