Training will continue to be expensive and require supercomputing-level resources — it costs many millions of dollars to train these models, because they consume huge amounts of data and the training process requires an enormous number of compute-intensive passes over that data to tune the model's weights. There will be efficiency improvements, but training costs for the most powerful models will rise as training datasets expand and model architectures grow.
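To make the "many millions" claim concrete, here's a rough back-of-envelope sketch using the common FLOPs ≈ 6 × parameters × tokens approximation for dense transformers. The GPU throughput, rental price, and the model/dataset sizes in the example are illustrative assumptions, not figures from any particular training run.

```python
# Back-of-envelope training cost estimate.
# Assumptions (illustrative only): effective throughput per GPU and
# rental price per GPU-hour are guesses, not vendor-quoted numbers.

def training_cost_estimate(params, tokens,
                           flops_per_gpu_per_s=1.5e14,  # assumed effective throughput
                           price_per_gpu_hour=2.0):     # assumed rental price, USD
    total_flops = 6 * params * tokens                   # standard ~6*N*D approximation
    gpu_hours = total_flops / flops_per_gpu_per_s / 3600
    return gpu_hours, gpu_hours * price_per_gpu_hour

# Example: a hypothetical 70B-parameter model trained on 1.4T tokens
hours, dollars = training_cost_estimate(70e9, 1.4e12)
print(f"~{hours:,.0f} GPU-hours, roughly ${dollars:,.0f}")
```

Under those assumptions this lands around a million GPU-hours and a few million dollars for a single run, and the bill scales directly with parameter count and dataset size.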
This is true for the state-of-the-art, most general models. But I strongly believe there will be models that are nearly as good and don't require anywhere near that expense. I bet Bloomberg didn't spend that much money on BloombergGPT (https://arxiv.org/abs/2303.17564), and Facebook's LLaMA release includes smaller models that they claim are competitive with GPT-3. See Ben's thoughts as well: https://benjaminfspector.com/ae.html — though I think his title doesn't exactly match what the article argues, by the way.