Neural networks of various kinds can generalise within the distribution of data they are exposed to, but their generalisations tend to break down beyond that distribution.
At a high level, this has always been true. Many neural networks do generalize in ways that seem surprising and impressive, but these apparent exceptions usually turn out to still be within the distribution once the network's inductive biases are taken into account.
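A minimal sketch of the in-distribution point: a small ReLU network (written here from scratch in NumPy; all sizes, seeds, and learning rates are arbitrary illustrative choices) fits y = x² well on the training range, but outside that range a ReLU net can only continue piecewise-linearly, so its predictions diverge from the true curve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function: y = x^2, with training data only on x in [-2, 2].
x_train = np.linspace(-2, 2, 256).reshape(-1, 1)
y_train = x_train ** 2

# One-hidden-layer ReLU network, trained by full-batch gradient descent.
W1 = rng.normal(0, 1, (1, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.1, (64, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)  # ReLU hidden layer
    return h, h @ W2 + b2

lr = 0.01
n = len(x_train)
for _ in range(5000):
    h, pred = forward(x_train)
    err = pred - y_train              # dLoss/dpred up to a constant factor
    gW2 = h.T @ err / n; gb2 = err.mean(0)
    dh = (err @ W2.T) * (h > 0)       # backprop through the ReLU
    gW1 = x_train.T @ dh / n; gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# In-distribution error is small; at x = 5 the net extrapolates the
# boundary slope linearly while the true function keeps curving away.
_, in_pred = forward(x_train)
in_mse = float(np.mean((in_pred - y_train) ** 2))
_, ood_pred = forward(np.array([[5.0]]))
ood_err = float(abs(ood_pred[0, 0] - 25.0))
print(f"in-distribution MSE: {in_mse:.4f}, error at x=5: {ood_err:.2f}")
```

The gap between the two numbers is the point of the quoted claim: nothing in training constrains the network's behaviour outside the region the data covers.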