232 Matching Annotations
  1. Dec 2018
    1. A semantic treebank is a collection of natural language sentences annotated with a meaning representation. These resources use a formal representation of each sentence's semantic structure.
  2. Nov 2018
    1. Language GANs Falling Short

      One of the Paper Summaries of this paper is especially well written!


      This paper’s high-level goal is to evaluate how well GAN-type structures for generating text perform compared to more traditional maximum likelihood methods. In the process, it zooms in on the ways that the current set of metrics for comparing text generation fails to give a well-rounded picture of how models are performing.

      In the old paradigm of maximum likelihood estimation, models were both trained and evaluated on maximizing the likelihood of each word given the prior words in a sequence. That is, models were good when they assigned high probability to true tokens, conditioned on past tokens. However, GANs work in a fundamentally different framework, in that they aren’t trained to increase the likelihood of the next (ground truth) word in a sequence, but to generate a word that will make a discriminator more likely to see the sentence as realistic. Since GANs don’t directly model the probability of token t given prior tokens, you can’t evaluate them using this maximum likelihood framework.
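      For concreteness, the maximum-likelihood objective being described is just next-token log-likelihood (notation mine, not from the paper):

      ```latex
      % MLE training objective for an autoregressive text model:
      % maximize the log-probability of each true token given its prefix.
      \mathcal{L}(\theta) = \sum_{t=1}^{T} \log p_\theta(w_t \mid w_1, \dots, w_{t-1})
      ```

      A GAN generator is instead trained to maximize a discriminator’s realism score on whole sampled sequences, which is why this quantity is unavailable for evaluating it.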

      This paper surveys a range of prior work that has evaluated GANs and MLE models on two broad categories of metrics, occasionally showing GANs to perform better on one or the other, but not really giving a way to trade off between the two.

      • The first type of metric, shorthanded as “quality”, measures how aligned the generated text is with some reference corpus of text: to what extent your generated text seems to “come from the same distribution” as the original. BLEU, a heuristic frequently used in translation and also leveraged here, measures how frequently the n-grams of the generated text occur in the reference text. N typically goes up to 4, so in addition to comparing the distributions of single tokens in the reference and generated text, BLEU also compares shared bigrams, trigrams, and 4-grams to measure more precise similarity of text.

      • The second type of metric, shorthanded as “diversity”, measures how different generated sentences are from one another. If you want to design a model to generate text, you presumably want it to be able to generate a diverse range of text - in probability terms, you want to fully sample from the distribution, rather than just taking the expected or mean value. Linguistically, a failure of diversity would show up as a generator that just produces the same sentence over and over again: that sentence can be highly representative of the original text, but the output lacks diversity. One metric used for this is the same kind of BLEU score, but computed for each generated sentence against a corpus of previously generated sentences (self-BLEU), and here the goal is for the overlap to be as low as possible. A rough sketch of both computations follows this list.
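      A rough sketch of the two metrics using NLTK’s sentence_bleu (the toy corpora and the smoothing choice are mine, not from the paper):

      ```python
      from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

      references = [s.split() for s in ["the cat sits on the mat",
                                        "a dog sleeps on the rug"]]
      generated = [s.split() for s in ["the cat sits on the rug",
                                       "the cat sits on the rug"]]  # repetitive!

      smooth = SmoothingFunction().method1

      # "Quality": score each generated sentence against the reference corpus
      # (BLEU uses n-grams up to n=4 by default); higher is better.
      quality = [sentence_bleu(references, g, smoothing_function=smooth)
                 for g in generated]

      # "Diversity" (self-BLEU): score each generated sentence against the
      # other generated sentences; here lower overlap is better.
      self_bleu = [sentence_bleu(generated[:i] + generated[i + 1:], g,
                                 smoothing_function=smooth)
                   for i, g in enumerate(generated)]

      print(sum(quality) / len(quality), sum(self_bleu) / len(self_bleu))
      ```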

      The trouble with these two metrics is that, in their raw state, they’re pretty incommensurable, and hard to trade off against one another.

      For more detail, I need to read the Paper Summary...

  3. Jul 2017
    1. This third research question led to the formulation of agile text mining, a new methodology to support the development of efficient TMAs. Agile text mining copes with the unpredictable realities of creating text-mining applications.
  4. Jun 2017
  5. Apr 2017
    1. In the skip-gram model, each word w ∈ W is associated with a vector v_w ∈ R^d and similarly each context c ∈ C is represented as a vector v_c ∈ R^d, where W is the words vocabulary, C is the contexts vocabulary, and d is the embedding dimensionality.

      Factors involved in the skip-gram model.
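      A minimal numpy sketch of these objects (the vocabulary sizes, dimensionality, and scoring function are my own illustration, not from the paper):

      ```python
      import numpy as np

      # Toy sizes: |W| words, |C| contexts, embedding dimensionality d.
      W_size, C_size, d = 10_000, 10_000, 100

      rng = np.random.default_rng(0)
      V_w = rng.normal(scale=0.1, size=(W_size, d))  # word vectors v_w in R^d
      V_c = rng.normal(scale=0.1, size=(C_size, d))  # context vectors v_c in R^d

      def score(word_id: int, context_id: int) -> float:
          # Skip-gram scores a (word, context) pair by the dot product v_w · v_c.
          return float(V_w[word_id] @ V_c[context_id])
      ```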

    1. Algorithmically, these models are similar, except that CBOW predicts target words (e.g. 'mat') from source context words ('the cat sits on the'), while the skip-gram does the inverse and predicts source context-words from the target words. This inversion might seem like an arbitrary choice, but statistically it has the effect that CBOW smoothes over a lot of the distributional information (by treating an entire context as one observation). A sketch of this inversion appears below.
    2. Word2vec is a particularly computationally-efficient predictive model for learning word embeddings from raw text. It comes in two flavors, the Continuous Bag-of-Words model (CBOW) and the Skip-Gram model (Section 3.1 and 3.2 in Mikolov et al.).
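      A numpy sketch of that inversion (sizes, initialization, and names are made up): CBOW collapses the whole context into one averaged observation, while skip-gram yields one observation per (target, context-word) pair:

      ```python
      import numpy as np

      V, d = 5_000, 64
      rng = np.random.default_rng(1)
      E_in = rng.normal(scale=0.1, size=(V, d))   # input embeddings
      E_out = rng.normal(scale=0.1, size=(V, d))  # output (softmax) embeddings

      def softmax(z):
          z = z - z.max()
          e = np.exp(z)
          return e / e.sum()

      # CBOW: average the context words' vectors and predict the one target
      # word from that average -- the whole context is a single observation.
      def cbow_probs(context_ids):
          h = E_in[context_ids].mean(axis=0)
          return softmax(E_out @ h)  # distribution over the target word

      # Skip-gram: the target word's vector predicts each context word
      # separately -- one observation per (target, context-word) pair.
      def skipgram_probs(target_id):
          h = E_in[target_id]
          return softmax(E_out @ h)  # distribution over one context word
      ```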
    1. if your goal is word representation learning, you should consider both NCE and negative sampling

      Wonder if anyone has compared these two approaches
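      For reference, a minimal sketch of the skip-gram negative-sampling loss for one (word, context) pair (NCE differs in that it corrects for the noise distribution; the function and argument names here are mine):

      ```python
      import numpy as np

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      def neg_sampling_loss(v_w, v_c_pos, v_c_negs):
          # Maximize log sigma(v_w · v_c+) plus, for each of the k sampled
          # noise contexts, log sigma(-v_w · v_ck); return the negated value
          # so it can be minimized.
          pos = np.log(sigmoid(v_w @ v_c_pos))
          neg = np.sum(np.log(sigmoid(-(v_c_negs @ v_w))))
          return -(pos + neg)
      ```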

  6. Oct 2016
    1. Distributional Hypothesis, which states that words that appear in the same contexts share semantic meaning

      The Distributional Hypothesis: words that appear in the same contexts also share the same meaning.

    1. CBOW: The input to the model could be w_{i-2}, w_{i-1}, w_{i+1}, w_{i+2}, the preceding and following words of the current word we are at. The output of the neural network will be w_i. Hence you can think of the task as "predicting the word given its context". Note that the number of words we use depends on your setting for the window size. Skip-gram: The input to the model is w_i, and the output could be w_{i-1}, w_{i-2}, w_{i+1}, w_{i+2}. So the task here is "predicting the context given a word". Also, the context is not limited to its immediate context; training instances can be created by skipping a constant number of words in its context, so for example w_{i-3}, w_{i-4}, w_{i+3}, w_{i+4}, hence the name skip-gram.

      The CBOW and Skip-gram models.
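      A small sketch of how training instances could be generated under this description (tokenization and window handling are simplified, and the helper names are mine):

      ```python
      def skipgram_pairs(tokens, window=2):
          # One (input word, output word) pair per (current word, nearby word).
          pairs = []
          for i, w in enumerate(tokens):
              lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
              pairs.extend((w, tokens[j]) for j in range(lo, hi) if j != i)
          return pairs

      def cbow_instances(tokens, window=2):
          # One (context word list, target word) instance per position.
          instances = []
          for i, w in enumerate(tokens):
              lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
              ctx = [tokens[j] for j in range(lo, hi) if j != i]
              instances.append((ctx, w))
          return instances

      print(skipgram_pairs("the cat sits on the mat".split()))
      ```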

  7. Jul 2016
    1. TS. Lê Hồng Phương
    2. PGS.TS. Nguyễn Lê Minh

      I should have let the professor co-supervise; at least I would have graduated, even though NLP was never among my interests. Sometimes you have to put up with doing work you don't much like in order to do what you like most later.

  8. Apr 2016
    1. TextpressoCentral

      Could this be used as a front end to adding content to wikidata ?

    2. Described DARPA-funded NLP research ('Big Mechanism'). The crowd was annoyed that they used untrained humans in the study, thus setting up the machines to look better.

  9. Oct 2015