1 Matching Annotations
  1. Last 7 days
1. Today’s LS guest, Mikhail Parakhin, CTO of Shopify, had another take on 'tasteful tokenmaxxing': go for depth (e.g. run more serial autoresearch loops) rather than breadth (e.g. solve a problem by kicking off 5, 10, 50, or 500 parallel runs of the LLM slot machine). Worth thinking through.

      Mikhail Parakhin's emphasis on depth over breadth suggests prioritizing sustained, iterative investigation of one line of work over launching many shallow parallel attempts and picking a winner.
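
      The contrast can be sketched minimally. This is purely illustrative, not anyone's actual setup: the `llm` function below is a stub standing in for a real model call, and the loop counts are arbitrary. Depth feeds each result back into the next call; breadth fires independent one-shot runs and leaves you to pick among them.

      ```python
      def llm(prompt: str) -> str:
          # Hypothetical stand-in for a real model call; purely illustrative.
          return prompt + " +insight"

      def depth(question: str, loops: int = 5) -> str:
          # Depth: serial refinement -- each loop builds on the previous answer.
          answer = question
          for _ in range(loops):
              answer = llm(answer)
          return answer

      def breadth(question: str, runs: int = 5) -> list[str]:
          # Breadth: independent one-shot pulls of the "slot machine";
          # some external criterion must then choose among them.
          return [llm(question) for _ in range(runs)]
      ```

      The argument for depth is that the serial version compounds context across calls, while the parallel version spends the same tokens on runs that cannot learn from each other.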