4 Matching Annotations
  1. Feb 2026
    1. Airbnb picks Alibaba’s Qwen over ChatGPT in a win for Chinese open-source AI. Airbnb ‘relies heavily’ on Alibaba’s Qwen models to power its AI customer service agent, CEO Brian Chesky says.

      Even US firms are using Chinese models because they are so much cheaper. They do not use them for everything, but wherever a lower-cost option is good enough, they will take it.

    1. Low-cost Chinese AI models forge ahead, even in the US, raising the risks of a US AI bubble Nvidia’s latest earnings report reassured some. But Chinese AI models are fast gaining a following around the world, underlining concerns over an ‘AI bubble’ centered on high-investment, high-cost US models.
  2. Dec 2023
  3. Feb 2017
    1. SVM only cares that the difference is at least 10

      The margin seems to be manually set by the creator in the loss function. In the sample code, the margin is 1, so each incorrect class has to be scored lower than the correct class by at least 1.

      How is this margin determined? It seems like one would have to know the magnitude of the scores beforehand.

      Diving deeper: is the scoring magnitude always the same if the parameters are normalized by their average and scaled to be between 0 and 1? (or -1 and 1... not sure of the correct scaling implementation)

      Coming back to the topic: is this 'minimum margin' (delta) a tunable parameter?

      What effects do we see on the model by adjusting this parameter?

      What are best and worst case scenarios of playing with this parameter?
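      To make the question concrete, here is a minimal sketch of the loss being discussed: the standard multiclass SVM (hinge) loss, written with the margin `delta` exposed as an explicit parameter rather than hard-coded. The function name and example scores are illustrative, not from the source.

      ```python
      import numpy as np

      def svm_hinge_loss(scores, correct_class, delta=1.0):
          """Multiclass SVM hinge loss for a single example.

          scores: 1-D array of raw class scores
          correct_class: index of the true class
          delta: the margin hyperparameter discussed above
          """
          # Penalize any incorrect class whose score comes within
          # `delta` of the correct class's score.
          margins = np.maximum(0, scores - scores[correct_class] + delta)
          margins[correct_class] = 0  # the correct class contributes no loss
          return margins.sum()

      # Correct class (index 0) scores 13; with delta=10 the class scoring 11
      # is only 2 below it, so it contributes 10 - 2 = 8 to the loss.
      scores = np.array([13.0, -7.0, 11.0])
      print(svm_hinge_loss(scores, correct_class=0, delta=10.0))  # 8.0
      print(svm_hinge_loss(scores, correct_class=0, delta=1.0))   # 0.0
      ```

      As this sketch suggests, delta is formally tunable, but because scaling the weights scales all the scores (and hence the score differences) proportionally, its absolute value trades off against the regularization strength; it is commonly just fixed at 1.0.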