2 Matching Annotations
  1. Sep 2024
    1. Cerebras differentiates itself by creating a large wafer with logic, memory, and interconnect all on-chip. This yields bandwidth roughly 10,000 times that of the A100. However, the system costs $2–3 million as compared to $10,000 for the A100, and is only available in a set of 15. Having said that, it is likely that Cerebras is cost efficient for makers of large-scale AI models.

      Does this reduce the need for interconnect enough to avoid needing such large hyperscale buildings?
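
      A rough back-of-envelope using only the figures quoted above suggests why the cost-efficiency claim is plausible. This is a sketch under stated assumptions: the $2.5M midpoint price and bandwidth-per-dollar as the comparison metric are mine, not the article's.

      ```python
      # Back-of-envelope comparison using only the figures quoted above.
      # Assumptions (not from the article): $2.5M midpoint system price,
      # bandwidth-per-dollar as the metric of interest.

      cerebras_cost = 2.5e6     # midpoint of the quoted $2-3M price
      a100_cost = 10_000        # quoted A100 price
      bandwidth_ratio = 10_000  # quoted bandwidth advantage over the A100

      cost_ratio = cerebras_cost / a100_cost                    # ~250x the price of one A100
      bandwidth_per_dollar_gain = bandwidth_ratio / cost_ratio  # ~40x more bandwidth per dollar

      print(f"~{cost_ratio:.0f}x the cost of an A100, "
            f"~{bandwidth_per_dollar_gain:.0f}x more bandwidth per dollar")
      ```

      On those numbers alone, the wafer-scale system comes out well ahead per unit of bandwidth, even before counting the interconnect hardware a cluster of A100s would need.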

  2. Oct 2020
    1. This strategy did not coalesce as planned, and AMD found itself unable to sell the Freedom interconnect to partners alongside its Opteron CPUs; it ended up trying to sell SeaMicro systems against its own partners.

      While I LOVE AMD's strategy of selling gobs and gobs of PCIe at a good price, I have to say, the loss of SeaMicro is one of the saddest, most unfortunate tales of computing.

      The hyperscalers are busy recreating many of these advantages on their own; AWS, for example, with Nitro. The big vendors have started dabbling with some related tech like Gen-Z, but are, for the most part, nowhere near this. This is such a better, more interesting way of building systems. Tech that needs to get un-lost.