13 Matching Annotations
  1. Apr 2026
    1. Qwen3.5 397B A17B: 15.3%, DeepSeek V3.2: 14.5%, GLM-5: 14.5%, Kimi K2.5: 11.5%, MiniMax-M2.7: 10.6%

      The US-China gap in professional-services agents becomes concretely visible here: the top US model scores 33%, while the strongest Chinese open-source models (Qwen3.5, DeepSeek, GLM-5) land around 14-15%, a gap of more than 2x. Even more notable, Zhipu AI's GLM-5 ties with DeepSeek V3.2, which suggests the leading domestic players are closely matched on this professional-services agent dimension. The strategic question for Zhipu: can this 2x gap be closed through domain specialization, for example by focusing on China's domestic finance scenarios?

    2. DeepSeek is the only product that bridges the divide.

      DeepSeek has gained significant users in China, Russia, and the US at the same time, which is extremely rare in a technologically bifurcated world. It is not just a product but a unique presence in the geopolitical seams: it sidesteps Western sanctions while also breaking out of China's closed ecosystem. This "border-crossing" quality is both a moat and a source of risk: if the three regulatory regimes come into conflict, can it sustain such a delicate balance?

    3. early-career researcher salaries at OpenAI and Anthropic are around twice as high as at DeepSeek, even after accounting for purchasing power.

      Even after adjusting for purchasing power parity, OpenAI/Anthropic still pay early-career researchers about twice what DeepSeek does, which means top talent flowing to the US is not just a matter of culture and opportunity but of cold economic calculation. In the talent war, Chinese AI companies face not only a compute gap but also a structural salary disadvantage. The fact that "the vast majority of Chinese AI researchers who move to the US choose to stay" gets its plainest explanation here.

    4. Anthropic, who accused DeepSeek, Moonshot, and MiniMax of distilling from Claude's outputs.

      Anthropic publicly accused DeepSeek, Moonshot AI, and MiniMax of distilling data from Claude's outputs, a striking business-ethics incident. The deeper implication is that these Chinese companies are being pushed into a "parasitic catch-up" strategy, using Claude as a "free teacher" to compress training costs. This is both a portrait of the technical reality and a hint at the competitive logic under "no compute advantage": when you cannot afford to train a better model, you borrow one that someone else already trained.
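      To make concrete what "distilling from a teacher model's outputs" means as a technique, here is a minimal, hypothetical sketch: prompts are sent to a teacher model through an OpenAI-compatible chat API and the completions are saved as supervised fine-tuning pairs for a student model. The endpoint URL, model name, and output path are illustrative assumptions, not details from the accusation or from any of these companies' pipelines.

      ```python
      # Hypothetical sketch of output distillation: collect a teacher model's
      # completions and store them as (prompt, response) pairs for student SFT.
      # The endpoint URL, model name, and output path are illustrative assumptions.
      import json
      from openai import OpenAI

      client = OpenAI(base_url="https://teacher.example.com/v1", api_key="sk-...")

      prompts = [
          "Explain how a B-tree differs from a binary search tree.",
          "Write a Python function that merges two sorted lists.",
      ]

      with open("distill_sft_pairs.jsonl", "w", encoding="utf-8") as f:
          for prompt in prompts:
              resp = client.chat.completions.create(
                  model="teacher-model",  # placeholder name for the teacher
                  messages=[{"role": "user", "content": prompt}],
                  temperature=0.7,
              )
              pair = {"prompt": prompt, "response": resp.choices[0].message.content}
              f.write(json.dumps(pair, ensure_ascii=False) + "\n")  # one SFT example per line
      ```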

  2. Jan 2026
  3. Jun 2025
    1. Capabilities: - Speech transcription supports local (WhisperCpp/FasterWhisper) and online (B API/J API??) backends - Subtitle translation supports traditional engines and LLMs - Traditional engines: DeepL / Microsoft / Google - LLMs: Ollama, DeepSeek, SiliconFlow, and any [OpenAI-compatible API] (a companion LLM API relay is also provided; see the sketch after this note)

      Installation and deployment - Windows gets a one-click installer - macOS requires setting it up yourself with Python, and the author says this path is unverified 👎. Also, the local whisper feature does not yet support macOS.
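      To give a sense of how the "OpenAI-compatible API" translation option works, here is a minimal sketch that sends a single subtitle line to a DeepSeek-style endpoint. The base URL, model name, and prompt wording are assumptions for illustration, not the tool's actual implementation.

      ```python
      # Minimal sketch: translating one subtitle line through an OpenAI-compatible
      # endpoint (DeepSeek shown as an example backend). Base URL, model name,
      # and prompt wording are assumptions, not the tool's actual code.
      from openai import OpenAI

      client = OpenAI(
          base_url="https://api.deepseek.com",  # any OpenAI-compatible endpoint works here
          api_key="YOUR_API_KEY",
      )

      def translate_subtitle(line: str, target_lang: str = "Chinese") -> str:
          resp = client.chat.completions.create(
              model="deepseek-chat",  # assumed model name
              messages=[
                  {"role": "system",
                   "content": f"Translate the subtitle line into {target_lang}. "
                              "Return only the translation."},
                  {"role": "user", "content": line},
              ],
              temperature=0.3,
          )
          return resp.choices[0].message.content.strip()

      print(translate_subtitle("The quick brown fox jumps over the lazy dog."))
      ```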

  4. Mar 2025
    1. The analysis uncovered an average of 11 different types of data out of the 35 possible. As mentioned earlier, Google Gemini stands out as the most data-hungry service, collecting 22 of these data types, including highly sensitive data like precise location, user content, the device's contacts list, browsing history, and more. Among the analyzed applications, only Google Gemini, Copilot, and Perplexity were found to collect precise location data. The controversial DeepSeek chatbot stands right in the middle, collecting 11 unique types of data, such as user input like chat history.
  5. Feb 2025
  6. Jan 2025
    1. Takeaways: AI will become cheaper and more efficient. - Closed-source models can cache responses and save computation on repetitive queries. - Closed-source models also have the option of iterative improvement through constant reinforcement learning. - Prioritizing capabilities and a deliberate strategy in data selection, with carefully designed training objectives.
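      A minimal sketch of the response-caching point, assuming a simple exact-match cache in front of an arbitrary model call (real systems typically normalize prompts or use semantic/KV caching, which this does not show):

      ```python
      # Minimal sketch of response caching for repeated queries: identical prompts
      # hit an in-memory dict instead of re-running the model call.
      # model_call is a stand-in for any expensive inference backend.
      import hashlib

      _cache: dict[str, str] = {}

      def model_call(prompt: str) -> str:
          # Placeholder for an actual (expensive) model inference call.
          return f"<answer to: {prompt}>"

      def cached_generate(prompt: str) -> str:
          key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()  # exact-match key
          if key in _cache:
              return _cache[key]          # cache hit: no computation spent
          answer = model_call(prompt)     # cache miss: pay the inference cost once
          _cache[key] = answer
          return answer

      print(cached_generate("What is the capital of France?"))  # computed
      print(cached_generate("What is the capital of France?"))  # served from cache
      ```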