3 Matching Annotations
  1. Feb 2024
  2. Oct 2023
    1. Neural operators are guaranteed to be discretization invariant, meaning that they can work on any discretization of inputs and converge to a limit upon mesh refinement. Once neural operators are trained, they can be evaluated at any resolution without re-training. In contrast, the performance of standard neural networks can degrade when the data resolution at deployment differs from the resolution used during training.

      Look this up: anyone familiar with this? Sounds complicated but very promising for domains with a large range of resolutions (medical-imaging, wildfire-management)
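
      The resolution-invariance claim can be sanity-checked with a toy sketch. This is not any library's actual API; it's a minimal, hypothetical Fourier-style layer (the core building block of a Fourier Neural Operator) with random stand-ins for learned spectral weights. Because the layer acts on a fixed number of Fourier modes rather than on a fixed number of grid points, applying the *same* parameters to the same function sampled at 64 or 256 points gives matching outputs on the shared grid points:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      K = 8  # number of retained Fourier modes (the "learned" parameters live here)
      weights = rng.normal(size=K) + 1j * rng.normal(size=K)  # stand-in for trained weights

      def spectral_layer(u):
          """One Fourier-style layer: FFT, scale the lowest K modes, inverse FFT.

          The parameter count depends only on K, not on the input resolution len(u),
          so the same layer can be applied to any discretization.
          """
          n = len(u)
          u_hat = np.fft.rfft(u)
          out_hat = np.zeros_like(u_hat)
          out_hat[:K] = u_hat[:K] * weights
          return np.fft.irfft(out_hat, n=n)

      # A band-limited test function, sampled at two different resolutions
      f = lambda x: np.sin(2 * np.pi * x) + 0.5 * np.cos(4 * np.pi * x)
      x_lo = np.linspace(0, 1, 64, endpoint=False)
      x_hi = np.linspace(0, 1, 256, endpoint=False)

      y_lo = spectral_layer(f(x_lo))   # evaluated on the coarse grid
      y_hi = spectral_layer(f(x_hi))   # SAME layer, finer grid, no re-training

      # Subsampling the fine-grid output recovers the coarse-grid output
      print(np.allclose(y_lo, y_hi[::4], atol=1e-8))  # prints True
      ```

      A standard CNN has no analogue of this check: its weights are tied to a fixed pixel spacing, so changing the input resolution changes what the kernels "see."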