- Jun 2023
-
www.derstandard.de
-
Warming reduces the amount of CO2 that tropical rainforests can absorb. This feedback mechanism is not accounted for in many climate models. A new study shows that it, like the increasing forest fires, could cause global heating to advance even faster than previously assumed.
-
- Nov 2022
-
www.researchgate.net
-
In recent years, neural network based topic models have been proposed for many NLP tasks, such as information retrieval [11], aspect extraction [12] and sentiment classification [13]. The basic idea is to construct a neural network which aims to approximate the topic-word distribution in probabilistic topic models. Additional constraints, such as incorporating prior distribution [14], enforcing diversity among topics [15] or encouraging topic sparsity [16], have been explored for neural topic model learning and proved effective.
Neural topic models are often trained to mimic the behaviours of probabilistic topic models - I should come back and look at some of these works:
- R. Das, M. Zaheer, and C. Dyer, “Gaussian LDA for topic models with word embeddings,”
- P. Xie, J. Zhu, and E. P. Xing, “Diversity-promoting bayesian learning of latent variable models,”
- M. Peng, Q. Xie, H. Wang, Y. Zhang, X. Zhang, J. Huang, and G. Tian, “Neural sparse topical coding,”
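The "basic idea" in the quote above - a neural network that approximates the topic-word distribution of a probabilistic topic model - can be sketched in a few lines. This is my own toy illustration, not the model from any of the cited papers: a softmax over each row of a topic-word weight matrix stands in for LDA's beta, and a document's word distribution is reconstructed by mixing those rows with the document-topic proportions theta.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, n_topics = 20, 3

# Topic-word weight matrix: one row per topic, one column per vocabulary
# word. A softmax over each row plays the role of the topic-word
# distribution (beta) in a probabilistic topic model such as LDA.
topic_word_logits = rng.normal(size=(n_topics, vocab_size))

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def reconstruct(doc_topic_logits):
    """Mix the topic-word distributions by the document's topic proportions."""
    theta = softmax(doc_topic_logits)            # document-topic proportions
    beta = softmax(topic_word_logits, axis=1)    # topic-word distributions
    return theta @ beta                          # predicted word distribution

p = reconstruct(rng.normal(size=n_topics))
```

In a real neural topic model the logits would come from an encoder network and be trained to maximise the likelihood of the observed bag-of-words; here they are just random, which is enough to see the shape of the computation.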
-
We argue that mutual learning would benefit sentiment classification since it enriches the information required for the training of the sentiment classifier (e.g., when the word "incredible" is used to describe "acting" or "movie", the polarity should be positive)
By training a topic model whose weights are "similar" to those of the word-vector model, the sentiment task can also be improved (as per the example, "incredible" should be positive when used to describe "acting" or "movie" in this context).
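One way to picture this mutual learning is a joint objective over a shared word-embedding table: the topic side reconstructs the document's bag of words, the sentiment side classifies it, and both losses pull on the same embeddings. The sketch below is my own hypothetical simplification (uniform document-topic mix, logistic regression on the averaged embedding), not the coupling mechanism from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
vocab_size, emb_dim, n_topics = 30, 8, 4

# Shared word embeddings: the hypothetical coupling point between the
# topic model and the sentiment classifier.
word_emb = rng.normal(scale=0.1, size=(vocab_size, emb_dim))
topic_emb = rng.normal(scale=0.1, size=(n_topics, emb_dim))
clf_w = rng.normal(scale=0.1, size=emb_dim)  # sentiment classifier weights

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def joint_loss(bow, label, alpha=0.5):
    """Weighted sum of topic-reconstruction and sentiment losses.

    bow:   bag-of-words count vector for one document
    label: 1 = positive, 0 = negative sentiment
    """
    # Topic side: topic-word distributions from embedding dot products.
    beta = np.apply_along_axis(softmax, 1, topic_emb @ word_emb.T)
    theta = softmax(np.ones(n_topics))        # uniform doc-topic mix (toy)
    p_words = theta @ beta
    recon = -np.sum(bow * np.log(p_words + 1e-9))

    # Sentiment side: logistic regression on the averaged word embedding.
    doc_vec = (bow @ word_emb) / max(bow.sum(), 1)
    p_pos = 1.0 / (1.0 + np.exp(-doc_vec @ clf_w))
    clf = -(label * np.log(p_pos + 1e-9) + (1 - label) * np.log(1 - p_pos + 1e-9))

    return alpha * recon + (1 - alpha) * clf

bow = rng.integers(0, 3, size=vocab_size).astype(float)
loss = joint_loss(bow, label=1)
```

Minimising this joint loss by gradient descent would push `word_emb` to encode both topical and sentiment-relevant information, which is the intuition behind the "incredible"/"acting" example.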
-
However, such a framework is not applicable here since the learned latent topic representations in topic models cannot be shared directly with word or sentence representations learned in classifiers, due to their different inherent meanings
Latent topic representations and the word/sentence representations learned by classifiers encode different kinds of meaning, so they cannot be shared directly
-
- May 2019
-
topicmodels.west.uni-koblenz.de
-