Implementations of VAEs for Text

  • “Generating Sentences from a Continuous Space” (Bowman et al., 2015)
  • “Improved Variational Autoencoders for Text Modeling using Dilated Convolutions” (Yang, Hu, Salakhutdinov, & Berg-Kirkpatrick, 2017)
  • “A Hybrid Convolutional Variational Autoencoder for Text Generation” (Semeniuta, Severyn, & Barth, 2017)
    • https://github.com/ryokamoi/hybrid_textvae
    • An implementation of a VAE for text with a hybrid convolutional–recurrent architecture. The model mitigates “posterior collapse” through an auxiliary task: a CNN decoder reconstructs the sentence without teacher forcing, so the latent code must carry useful information.
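The loss structure described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names (`kl_diag_gaussian`, `hybrid_vae_loss`) and the weighting parameters `kl_weight` (KL annealing coefficient) and `alpha` (auxiliary-loss weight) are my own labels, chosen to mirror the idea of combining a recurrent reconstruction loss, an annealed KL term, and an auxiliary CNN reconstruction loss.

```python
import numpy as np

def kl_diag_gaussian(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.
    # This is the standard closed-form KL term of a Gaussian VAE.
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)

def hybrid_vae_loss(rec_rnn, rec_cnn, mu, logvar, kl_weight, alpha):
    # rec_rnn: reconstruction loss of the recurrent decoder (teacher-forced)
    # rec_cnn: auxiliary reconstruction loss of the CNN decoder (no teacher
    #          forcing), which pressures the latent code to stay informative
    # kl_weight: annealing coefficient, typically ramped from 0 to 1
    # alpha: weight of the auxiliary CNN loss (a hypothetical hyperparameter)
    kl = kl_diag_gaussian(mu, logvar).mean()
    return rec_rnn + kl_weight * kl + alpha * rec_cnn

# A posterior equal to the prior (mu = 0, logvar = 0) gives zero KL — the
# degenerate solution "posterior collapse" drifts toward.
mu = np.zeros((2, 4))
logvar = np.zeros((2, 4))
print(kl_diag_gaussian(mu, logvar))          # zero KL per example
print(hybrid_vae_loss(1.0, 0.5, mu, logvar, kl_weight=0.5, alpha=0.2))
```

Because the auxiliary CNN decoder cannot rely on ground-truth previous tokens, it cannot reach a low `rec_cnn` while the KL term is zero, which is the intuition for why this term counteracts collapse.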

References

  1. Bowman, S. R., Vilnis, L., Vinyals, O., Dai, A. M., Jozefowicz, R., & Bengio, S. (2015). Generating Sentences from a Continuous Space. In SIGNLL Conference on Computational Natural Language Learning (CoNLL).
  2. Yang, Z., Hu, Z., Salakhutdinov, R., & Berg-Kirkpatrick, T. (2017). Improved Variational Autoencoders for Text Modeling using Dilated Convolutions. In International Conference on Machine Learning (ICML).
  3. Semeniuta, S., Severyn, A., & Barth, E. (2017). A Hybrid Convolutional Variational Autoencoder for Text Generation. In Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 627–637).
Ryo Kamoi
Ph.D. Student

My research interests are in improving the reliability of natural language processing systems. Ph.D. student at Penn State (2023-); M.S. from UT Austin; B.E. from Keio University.