PurdueNLP at SemEval-2017 Task 1: Predicting Semantic Textual Similarity with Paraphrase and Event Embeddings.

I-Ta Lee     Mahak Goindani     Chang Li     Di Jin     Kristen Johnson     Xiao Zhang     Maria Leonor Pacheco     Dan Goldwasser    
The Workshop on Semantic Evaluations (SemEval) collocated with ACL, 2017
[pdf]

Abstract

This paper describes our proposed solution for SemEval 2017 Task 1, Semantic Textual Similarity (Cer et al., 2017). The task aims at measuring the degree of semantic equivalence between pairs of English sentences. Performance is evaluated by computing the Pearson correlation between the predicted scores and human judgements. Our proposed system consists of two subsystems and one regression model for predicting STS scores. The two subsystems are designed to learn Paraphrase and Event Embeddings, which incorporate paraphrasing characteristics and sentence structure into our system. The regression model combines these embeddings to make the final predictions. The experimental results show that our system achieves a Pearson correlation of 0.8 on this task.
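As noted above, the task is scored by the Pearson correlation between system predictions and gold human judgements. A minimal sketch of that evaluation step, with hypothetical score values on the task's 0-5 similarity scale (the `pred`/`gold` lists are illustrative, not from the paper):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical predicted STS scores vs. gold human judgements (0-5 scale)
pred = [4.2, 1.1, 3.5, 0.4, 2.8]
gold = [4.5, 0.8, 3.0, 0.2, 3.1]
print(round(pearson(pred, gold), 3))  # prints 0.982
```

In practice the official evaluation uses the same statistic (e.g. `scipy.stats.pearsonr`); the hand-rolled version here just makes the computation explicit.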


Bib Entry

  @inproceedings{LGLJJZPG_ws_2017,
    author = "I-Ta Lee and Mahak Goindani and Chang Li and Di Jin and Kristen Johnson and Xiao Zhang and Maria Leonor Pacheco and Dan Goldwasser",
    title = "PurdueNLP at SemEval-2017 Task 1: Predicting Semantic Textual Similarity with Paraphrase and Event Embeddings",
    booktitle = "The Workshop on Semantic Evaluations (SemEval) collocated with ACL",
    year = "2017"
  }