Semi-supervised Structured Prediction with Neural CRF Autoencoder

Xiao Zhang     Yong Jiang     Hao Peng     Kewei Tu     Dan Goldwasser    
Empirical Methods in Natural Language Processing (EMNLP), 2017
[pdf]

Abstract

In this paper we propose an end-to-end neural CRF autoencoder (NCRF-AE) model for semi-supervised learning of sequential structured prediction problems. Our NCRF-AE consists of two parts: an encoder, which is a CRF model enhanced by deep neural networks, and a decoder, which is a generative model trying to reconstruct the input. Our model has a unified structure with different loss functions for labeled and unlabeled data, with shared parameters. We develop a variation of the EM algorithm for optimizing both the encoder and the decoder simultaneously by decoupling their parameters. Our experimental results on the Part-of-Speech (POS) tagging task over eight different languages show that the NCRF-AE model can outperform competitive systems in both supervised and semi-supervised scenarios.
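
The sketch below illustrates the general idea described in the abstract, not the authors' exact implementation: a neural CRF encoder (here assumed to be a BiLSTM with unary and transition scores) shared between a supervised loss on labeled data and a reconstruction-based loss on unlabeled data, where the decoder is assumed to be a simple per-tag categorical emission model. Class and parameter names (NeuralCRFAutoencoder, hidden sizes) are illustrative.

    # Minimal PyTorch sketch of an NCRF-AE-style model (illustrative, not the paper's code).
    import torch
    import torch.nn as nn

    class NeuralCRFAutoencoder(nn.Module):
        def __init__(self, vocab_size, num_tags, emb_dim=64, hidden_dim=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.lstm = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
            self.emit = nn.Linear(2 * hidden_dim, num_tags)              # encoder unary scores
            self.trans = nn.Parameter(torch.zeros(num_tags, num_tags))   # encoder transition scores
            # decoder: log p(word | tag), a per-tag categorical over the vocabulary (an assumption)
            self.dec_logits = nn.Parameter(torch.zeros(num_tags, vocab_size))

        def _scores(self, x):
            h, _ = self.lstm(self.embed(x))            # (B, T, 2H)
            return self.emit(h)                        # (B, T, K) unary scores

        def _forward_logsumexp(self, potentials):
            # forward algorithm: log sum over all tag sequences of exp(sum of potentials)
            alpha = potentials[:, 0]                   # (B, K)
            for t in range(1, potentials.size(1)):
                alpha = torch.logsumexp(alpha.unsqueeze(2) + self.trans, dim=1) + potentials[:, t]
            return torch.logsumexp(alpha, dim=1)       # (B,)

        def labeled_loss(self, x, y):
            # supervised branch: -log p(y | x) under the neural CRF encoder
            scores = self._scores(x)
            unary = scores.gather(2, y.unsqueeze(2)).squeeze(2).sum(1)
            trans = self.trans[y[:, :-1], y[:, 1:]].sum(1)
            return (self._forward_logsumexp(scores) - unary - trans).mean()

        def unlabeled_loss(self, x):
            # unsupervised branch: -log sum_y p(y | x) p(x | y), computed by running the
            # forward algorithm over potentials that combine encoder scores and decoder emissions
            scores = self._scores(x)                                   # (B, T, K)
            dec = torch.log_softmax(self.dec_logits, dim=1)            # (K, V)
            emit = dec[:, x].permute(1, 2, 0)                          # (B, T, K) log p(x_t | y_t)
            joint = self._forward_logsumexp(scores + emit)
            return (self._forward_logsumexp(scores) - joint).mean()

Both losses share the encoder parameters, matching the abstract's description of a unified structure with different objectives for labeled and unlabeled data; the EM-style decoupled optimization of encoder and decoder is not shown here.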


Bib Entry

  @InProceedings{ZJPTG_emnlp_2017,
    author = "Xiao Zhang and Yong Jiang and Hao Peng and Kewei Tu and Dan Goldwasser",
    title = "Semi-supervised Structured Prediction with Neural CRF Autoencoder",
    booktitle = "Empirical Methods in Natural Language Processing (EMNLP)",
    year = "2017"
  }