15 Matching Annotations
  1. Nov 2020
    1. There are two steps in our framework: pre-training and fine-tuning. During pre-training, the model is trained on unlabeled data over different pre-training tasks. For fine-tuning, the BERT model is first initialized with the pre-trained parameters, and all of the parameters are fine-tuned using labeled data from the downstream tasks.

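      The two-step recipe the quote describes can be sketched with a toy model. This is illustrative only, not BERT: a PCA projection learned from unlabeled data stands in for the masked-LM pre-training objective, and the dimensions, learning rate, and step count are arbitrary assumptions. The point it shows is the quoted workflow: learn parameters from unlabeled data first, then initialize a downstream model with them and fine-tune all parameters on a small labeled set.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      X_unlab = rng.normal(size=(500, 10))        # unlabeled pre-training data
      w_true = rng.normal(size=10)
      X_lab = rng.normal(size=(40, 10))           # small labeled downstream set
      y_lab = (X_lab @ w_true > 0).astype(float)

      # Step 1: "pre-training" -- learn an encoder from unlabeled data alone.
      # (PCA via SVD is a stand-in for BERT's masked-LM objective.)
      _, _, Vt = np.linalg.svd(X_unlab - X_unlab.mean(0), full_matrices=False)
      encoder = Vt[:5].T                          # pre-trained parameters (10 -> 5)

      # Step 2: "fine-tuning" -- initialize with the pre-trained encoder, add a
      # task-specific head, and update *all* parameters on the labeled data.
      head = np.zeros(5)
      lr = 0.1
      for _ in range(300):
          feats = X_lab @ encoder
          p = 1 / (1 + np.exp(-feats @ head))     # sigmoid predictions
          g = (p - y_lab) / len(y_lab)            # logistic-loss gradient
          head_grad = feats.T @ g
          enc_grad = np.outer(X_lab.T @ g, head)
          head -= lr * head_grad                  # fine-tune the new head...
          encoder -= lr * enc_grad                # ...and the pre-trained encoder

      p = np.clip(p, 1e-9, 1 - 1e-9)
      loss = -np.mean(y_lab * np.log(p) + (1 - y_lab) * np.log(1 - p))
      ```

      After fine-tuning, the training loss sits below the chance-level value of log 2, which is all this sketch is meant to demonstrate; in BERT the same pattern is applied with a Transformer encoder and task-specific output layers.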

  2. May 2019
  3. Mar 2019
  4. Nov 2018