4 Matching Annotations
  1. Last 7 days
1. Today’s LS guest, Mikhail Parakhin, CTO of Shopify, had another take on 'tasteful tokenmaxxing': you want to go for depth (e.g. do more serial autoresearch loops) rather than for breadth (e.g. solve a problem by kicking off 5, 10, 50, 500 parallel runs of the LLM slot machine). Worth thinking through.

Mikhail Parakhin's emphasis on depth over breadth suggests prioritizing deeper serial iteration on a problem over sheer quantity of parallel attempts.

  2. Apr 2026
    1. A small but directionally consistent improvement on strict instruction following. Loose evaluation is flat. Both models already follow the high-level instructions — the strict-mode gap comes down to 4.6 occasionally mishandling exact formatting where 4.7 doesn't.

This finding highlights a subtle aspect of AI capability gains: a small but precise improvement can be more valuable than a large but vague one. Claude 4.7's gain over 4.6 is only on strict instruction following, but it targets the exact-formatting errors that are common in real development work. That challenges the fixation on 'major breakthroughs' and underscores the value of precisely solving a specific problem.

  3. Oct 2020
1. Most people seem to follow one of two strategies, both of which fall under the umbrella of tree-traversal algorithms in computer science.

      Deciding whether you want to go deep into one topic, or explore more topics, can be seen as a choice between two types of tree-traversal algorithms: depth-first and breadth-first.

      This also reminds me of the Explore-Exploit problem in machine learning, which I believe is related to the Multi-Armed Bandit Problem.
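The two traversal strategies the note contrasts can be sketched in a few lines; the topic tree below is a made-up example, not from the source.

```python
from collections import deque

# Hypothetical topic tree: each topic maps to its subtopics.
tree = {
    "ML": ["Supervised", "Unsupervised"],
    "Supervised": ["Regression", "Classification"],
    "Unsupervised": ["Clustering"],
    "Regression": [], "Classification": [], "Clustering": [],
}

def dfs(tree, root):
    """Depth-first: exhaust one branch before moving on (go deep)."""
    order, stack = [], [root]
    while stack:
        node = stack.pop()
        order.append(node)
        stack.extend(reversed(tree[node]))  # leftmost child ends up on top
    return order

def bfs(tree, root):
    """Breadth-first: cover every topic at one level before going deeper."""
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(tree[node])
    return order

# dfs visits Supervised and all its subtopics before touching Unsupervised;
# bfs surveys both top-level topics first, then their subtopics.
```

The explore-exploit framing is the same trade-off stated probabilistically: breadth-first is pure exploration, depth-first is closer to exploitation of one promising branch.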

  4. Dec 2019