1 Matching Annotations
- Dec 2023
-
www.semanticscholar.org
-
Self-attention is naturally permutation equivariant; therefore, we may think of them as set-encoders rather than sequence encoders. However, for modalities where the data does follow a specific ordering, for example agent state across different time steps, it is beneficial to break permutation equivariance and utilize the sequence information. This is commonly done through positional embeddings. For simplicity, we add learned positional embeddings for all modalities. As not all modalities are ordered, the learned positional embeddings are initially set to zero, letting the model learn if it is necessary to utilize the ordering within a modality.
In trajectory prediction, whether to use the Transformer's positional embeddings needs to be considered from multiple angles (see the code sketch below).
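The annotated passage describes adding learned, zero-initialized positional embeddings to every modality, so that unordered modalities start out order-agnostic and the model only learns non-zero offsets where ordering helps. Below is a minimal sketch of that idea, assuming a PyTorch-style module; the class name, tensor shapes, and the example modalities are illustrative assumptions, not details from the source.

```python
import torch
import torch.nn as nn


class LearnedPositionalEmbedding(nn.Module):
    """Learned positional embeddings for one modality, initialized to zero.

    Because the table starts at zero, adding it is a no-op at initialization;
    the model only learns non-zero offsets if ordering within the modality
    actually helps the task.
    """

    def __init__(self, max_tokens: int, dim: int):
        super().__init__()
        # Zero-initialized so unordered modalities can stay order-agnostic.
        self.pos = nn.Parameter(torch.zeros(max_tokens, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim); add one embedding per token position.
        return x + self.pos[: x.shape[1]].unsqueeze(0)


# Hypothetical usage: one embedding table per modality (names are illustrative).
if __name__ == "__main__":
    agent_history = torch.randn(2, 11, 256)  # e.g. agent state over time steps (ordered)
    map_polylines = torch.randn(2, 64, 256)  # e.g. map elements (no natural order)

    pe_history = LearnedPositionalEmbedding(max_tokens=11, dim=256)
    pe_map = LearnedPositionalEmbedding(max_tokens=64, dim=256)

    tokens = torch.cat([pe_history(agent_history), pe_map(map_polylines)], dim=1)
    print(tokens.shape)  # torch.Size([2, 75, 256])
```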