7 Matching Annotations
  1. Jan 2025
  2. Mar 2023
    1. Each model’s resource overhead is 1 CPU and 1 GB of memory. Deploying many models using the current approach will quickly use up a cluster's computing resources. With Multi-model serving, these models can be loaded into one InferenceService, and each model's average overhead drops to 0.1 CPU and 0.1 GB of memory.

      If I am not mistaken, the multi-model approach cuts the per-model resource overhead by 90% in this case (from 1 CPU / 1 GB down to 0.1 CPU / 0.1 GB per model); see the sketch after this list.

    2. While you get the benefit of better inference accuracy and data privacy by building models for each use case, it is more challenging to deploy thousands to hundreds of thousands of models on a Kubernetes cluster.

      With more separation comes the problem of distribution: a model per use case means many more artifacts to deploy and manage on the cluster.

  3. Sep 2021
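
To make the 90% figure from the Mar 2023 annotation concrete, here is a minimal back-of-the-envelope sketch. It uses only the per-model overhead numbers quoted above (1 CPU / 1 GB per standalone InferenceService vs. 0.1 CPU / 0.1 GB per model when co-loaded into one InferenceService); the model count is an arbitrary illustration, not a figure from the source.

```python
# Back-of-the-envelope comparison of cluster overhead for N models,
# based on the per-model figures quoted in the annotation above.
# The model count is an arbitrary illustration, not from the source.

N_MODELS = 1000

# One InferenceService per model (the "current approach" in the quote).
PER_MODEL_CPU, PER_MODEL_MEM_GB = 1.0, 1.0

# Many models co-loaded into a single InferenceService (multi-model serving).
MULTI_CPU, MULTI_MEM_GB = 0.1, 0.1

single_cpu = N_MODELS * PER_MODEL_CPU        # 1000 CPUs
single_mem = N_MODELS * PER_MODEL_MEM_GB     # 1000 GB
multi_cpu = N_MODELS * MULTI_CPU             # 100 CPUs
multi_mem = N_MODELS * MULTI_MEM_GB          # 100 GB

reduction = 1 - multi_cpu / single_cpu       # 0.9, i.e. the ~90% noted above

print(f"Single-model serving: {single_cpu:.0f} CPU, {single_mem:.0f} GB")
print(f"Multi-model serving:  {multi_cpu:.0f} CPU, {multi_mem:.0f} GB")
print(f"Overhead reduction:   {reduction:.0%}")
```

Run as-is, it prints 1000 CPU / 1000 GB for one-model-per-InferenceService versus 100 CPU / 100 GB for multi-model serving, i.e. the 90% overhead reduction noted in the annotation.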