- Jan 2023
-
www.complexityexplorer.org
-
A common technique in natural language processing is to operationalize certain semantic concepts (e.g., "synonym") in terms of syntactic structure (two words that tend to occur nearby in a sentence are more likely to be synonyms, etc.). This is what word2vec does.
Can I use some of these methods in corpus linguistics over time to better identify calcified words or archaic phrases that stick with the language but are heavily limited to narrower (and narrowing) contexts?
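One way to probe this, as a sketch rather than an established recipe: train a separate word2vec model on each time slice of the corpus and compare a word's nearest neighbours across slices. Assuming gensim and pre-tokenized sentences per period (the overlap heuristic and all parameters below are my own placeholders):

```python
# Hypothetical sketch: one word2vec model per time slice, then compare a
# word's nearest-neighbour sets across slices. A shrinking overlap (or a
# neighbour set collapsing onto a few fixed collocations) would be one
# signal of a word surviving only in narrower, calcified contexts.
from gensim.models import Word2Vec

def train_period_model(sentences):
    # sentences: list of tokenized sentences drawn from one time period
    return Word2Vec(sentences, vector_size=100, window=5, min_count=5, workers=4)

def neighbour_overlap(model_a, model_b, word, topn=20):
    """Jaccard overlap of a word's nearest neighbours in two periods."""
    if word not in model_a.wv or word not in model_b.wv:
        return None
    a = {w for w, _ in model_a.wv.most_similar(word, topn=topn)}
    b = {w for w, _ in model_b.wv.most_similar(word, topn=topn)}
    return len(a & b) / len(a | b)
```

Comparing neighbour sets sidesteps the problem that two independently trained vector spaces are not directly comparable; explicitly aligning the spaces (e.g. with a rotation) is the other common route in diachronic embedding work.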
-
- Jul 2021
-
psyarxiv.com
-
Lee, Y. K., Jung, Y., Lee, I., Park, J. E., & Hahn, S. (2021). Building a Psychological Ground Truth Dataset with Empathy and Theory-of-Mind During the COVID-19 Pandemic. PsyArXiv. https://doi.org/10.31234/osf.io/mpn3w
-
- Dec 2019
-
nlpoverview.com
-
Traditional word embedding algorithms assign a distinct vector to each word. This makes them unable to account for polysemy. In a recent work, Upadhyay et al. (2017) provided an innovative way to address this deficit. The authors leveraged multilingual parallel data to learn multi-sense word embeddings.
- multilingual parallel data
- multi-sense word embeddings
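A toy illustration of the deficit being described (assuming gensim; the two sentences are invented): a standard word2vec model stores exactly one vector per surface form, so the "river bank" and "money bank" senses end up sharing a single embedding.

```python
# Minimal sketch of the polysemy limitation of single-vector embeddings.
from gensim.models import Word2Vec

sentences = [["she", "sat", "on", "the", "river", "bank"],
             ["he", "deposited", "money", "at", "the", "bank"]]
model = Word2Vec(sentences, vector_size=20, window=3, min_count=1)

# One 20-dimensional vector for "bank", no matter which sense the context used.
print(model.wv["bank"].shape)
```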
-
This is very important, as training embeddings from scratch requires a large amount of time and resources. Mikolov et al. (2013) tried to address this issue by proposing negative sampling, which is essentially frequency-based sampling of negative terms while training the word2vec model.
Negative sampling... negative terms?
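To make "frequency-based sampling of negative terms" concrete, here is a toy sketch of the sampler word2vec uses: negatives are drawn from the unigram distribution raised to the 3/4 power, so frequent words are sampled often but not overwhelmingly (the vocabulary and counts below are invented).

```python
# Sketch of word2vec-style negative sampling from a unigram^0.75 distribution.
import numpy as np

counts = {"the": 1000, "cat": 50, "sat": 40, "mat": 30, "quasar": 2}
words = list(counts)
probs = np.array([counts[w] for w in words], dtype=float) ** 0.75
probs /= probs.sum()  # normalize into a probability distribution

rng = np.random.default_rng(0)

def sample_negatives(target, k=5):
    # Draw k "negative" words for a (target, context) training pair,
    # skipping the target itself.
    negs = []
    while len(negs) < k:
        w = str(rng.choice(words, p=probs))
        if w != target:
            negs.append(w)
    return negs

print(sample_negatives("cat"))
```

In gensim this is controlled by the `negative` parameter of `Word2Vec` (and `ns_exponent` for the exponent), so in practice you rarely implement it by hand.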
-
A general caveat for word embeddings is that they are highly dependent on the application in which they are used. Labutov and Lipson (2013) proposed task-specific embeddings, which retrain the word embeddings to align them in the current task space.
I believe "application" here relates to context, so word embeddings are context-dependent. That seems fairly obvious at first glance. Is that what the author meant?
Retrain the embeddings to align them with the current task. "Aligning" would be nothing more than fitting the previous embeddings to the new context, is that right?
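My reading of "retrain the word embeddings to align them in the current task space", sketched with PyTorch rather than the authors' actual method (the classifier, dimensions, and random weights are placeholders): load the pretrained vectors into an embedding layer and leave it trainable, so the task loss keeps nudging the vectors toward whatever the task needs.

```python
# Hypothetical sketch: fine-tuning pretrained word vectors on a downstream task.
import torch
import torch.nn as nn

class TaskClassifier(nn.Module):
    def __init__(self, pretrained_weights, num_classes):
        super().__init__()
        # freeze=False keeps the embedding weights trainable, so gradients
        # from the task loss keep updating ("re-aligning") the vectors.
        self.emb = nn.Embedding.from_pretrained(pretrained_weights, freeze=False)
        self.fc = nn.Linear(pretrained_weights.size(1), num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len); average the word vectors and classify.
        return self.fc(self.emb(token_ids).mean(dim=1))

# Toy instantiation with random "pretrained" vectors (10k words, 100 dims).
model = TaskClassifier(torch.randn(10000, 100), num_classes=2)
```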
-
One solution to this problem, as explored by Mikolov et al. (2013), is to identify such phrases based on word co-occurrence and train embeddings for them separately. More recent methods have explored directly learning n-gram embeddings from unlabeled data (Johnson and Zhang, 2015).
Word co-occurrence I can understand, but not training the embeddings separately. Would that mean treating the co-occurring words as a single unit in the embedding, instead of each word on its own?
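My current understanding, sketched with gensim (toy sentences; deliberately permissive thresholds so the example fires): frequent co-occurrences are first detected and rewritten as single tokens, and it is that rewriting that lets word2vec learn one embedding for the whole phrase, separate from the embeddings of its parts.

```python
# Sketch: detect collocations from co-occurrence counts, merge them into
# single tokens, then train word2vec so each phrase gets its own vector.
from gensim.models import Word2Vec
from gensim.models.phrases import Phrases

sentences = [["new", "york", "is", "big"],
             ["i", "visited", "new", "york"],
             ["new", "york", "city", "traffic"]]

bigram = Phrases(sentences, min_count=1, threshold=1)  # toy settings
phrased = [bigram[s] for s in sentences]  # e.g. ["new_york", "is", "big"]

model = Word2Vec(phrased, vector_size=50, window=2, min_count=1)
print("new_york" in model.wv)  # the phrase now has its own embedding
```

So yes: the co-occurring words are merged into a single unit, and the embedding is trained for that unit rather than for the individual words.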
-
The context words are assumed to be located symmetrically to the target words within a distance equal to the window size in both directions.
What does it mean to say the context words are "symmetrically located" relative to the target words?
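"Symmetrically located" just means the window extends the same number of words to the left and to the right of the target word. A tiny sketch (the sentence is arbitrary):

```python
# For window=2, the context of the target word is the 2 words to its left
# plus the 2 words to its right (truncated at sentence boundaries).
def context_window(tokens, i, window=2):
    left = tokens[max(0, i - window):i]
    right = tokens[i + 1:i + 1 + window]
    return left + right

tokens = "the quick brown fox jumps over the lazy dog".split()
# Target "fox" (index 3) -> ['quick', 'brown', 'jumps', 'over']
print(context_window(tokens, 3))
```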
-