Implementations of VAEs for Text
- “Generating Sentences from a Continuous Space” (Bowman et al., 2016)
- https://github.com/ryokamoi/original_textvae
- Implementation of the first VAE for text. The model is a simple seq2seq architecture with an LSTM encoder and an LSTM decoder, trained with word dropout, which randomly replaces the decoder's input tokens with `<unk>` so that it cannot rely only on the previous ground-truth words (a minimal sketch follows this entry).
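Word dropout is simple to illustrate. The following is a minimal sketch, assuming PyTorch; it is not code from the repository above, and `unk_id` and `keep_rate` are placeholder names:

```python
# Minimal sketch of word dropout for a text VAE decoder (PyTorch assumed;
# `unk_id` and `keep_rate` are illustrative names, not the repo's API).
import torch

def word_dropout(decoder_inputs: torch.Tensor, unk_id: int, keep_rate: float) -> torch.Tensor:
    """Randomly replace decoder input tokens with <unk> so the decoder
    has to rely on the latent code z rather than the previous gold words."""
    keep = torch.rand_like(decoder_inputs, dtype=torch.float) < keep_rate
    return torch.where(keep, decoder_inputs, torch.full_like(decoder_inputs, unk_id))

# Example: with keep_rate=0.75, roughly a quarter of the tokens become <unk>.
tokens = torch.tensor([[5, 17, 42, 8], [3, 9, 21, 4]])
noisy = word_dropout(tokens, unk_id=1, keep_rate=0.75)
```

Lowering `keep_rate` weakens the decoder's autoregressive signal and forces more information through the latent code.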
- “Improved Variational Autoencoders for Text Modeling using Dilated Convolutions” (Yang, Hu, Salakhutdinov, & Berg-Kirkpatrick, 2017)
- https://github.com/ryokamoi/dcnn_textvae
- Implementation of an improved VAE for text. This model replaces the LSTM decoder with a dilated CNN so that the decoder's capacity can be controlled through the size of its receptive field (see the sketch after this entry).
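To show why a dilated CNN decoder makes capacity easy to control, here is a minimal sketch of a causal dilated convolution stack, assuming PyTorch; the channel width, kernel size, and dilation schedule are illustrative, not the repository's actual layers:

```python
# Minimal sketch of a causal dilated CNN decoder stack (PyTorch assumed;
# channel width, kernel size, and dilation schedule are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalDilatedConv(nn.Module):
    """1-D convolution padded on the left so position t never sees t+1."""
    def __init__(self, channels: int, kernel_size: int, dilation: int):
        super().__init__()
        self.left_pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, channels, time)
        return torch.relu(self.conv(F.pad(x, (self.left_pad, 0))))

# Exponentially growing dilations enlarge the receptive field; using fewer
# layers (a smaller field) weakens the decoder, pushing more information
# into the latent variable.
decoder = nn.Sequential(*[CausalDilatedConv(256, 3, 2 ** i) for i in range(4)])
```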
- “A Hybrid Convolutional Variational Autoencoder for Text Generation” (Semeniuta, Severyn, & Barth, 2017)
- https://github.com/ryokamoi/hybrid_textvae
- Implementation of a VAE for text with a hybrid convolutional and recurrent structure. To mitigate the problem known as “posterior collapse”, the model adds an auxiliary task in which a CNN decoder must reconstruct the sentence without teacher forcing, so the latent code cannot be ignored (see the sketch after this entry).
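The auxiliary-loss idea can be sketched as follows, assuming PyTorch and a decoder split into an LSTM path and a CNN path; the function signature and the weight `alpha` are hypothetical, not the repository's actual API:

```python
# Minimal sketch of a hybrid VAE loss with an auxiliary, non-teacher-forced
# reconstruction term (PyTorch assumed; names and `alpha` are placeholders).
import torch
import torch.nn.functional as F

def hybrid_vae_loss(lstm_logits, cnn_logits, targets, mu, logvar, alpha=0.2):
    # Main reconstruction: LSTM decoder run with teacher forcing.
    # Logits are (batch, time, vocab); cross_entropy wants (batch, vocab, time).
    rec = F.cross_entropy(lstm_logits.transpose(1, 2), targets)
    # Auxiliary reconstruction: the CNN decoder predicts every token from the
    # latent code alone, so the model cannot ignore z (counters collapse).
    aux = F.cross_entropy(cnn_logits.transpose(1, 2), targets)
    # KL divergence between q(z|x) = N(mu, diag(exp(logvar))) and N(0, I).
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
    return rec + alpha * aux + kl
```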
References
- Bowman, S. R., Vilnis, L., Vinyals, O., Dai, A. M., Jozefowicz, R., & Bengio, S. (2016). Generating Sentences from a Continuous Space. In SIGNLL Conference on Computational Natural Language Learning (CoNLL).
- Yang, Z., Hu, Z., Salakhutdinov, R., & Berg-Kirkpatrick, T. (2017). Improved Variational Autoencoders for Text Modeling using Dilated Convolutions. In International Conference on Machine Learning (ICML).
- Semeniuta, S., Severyn, A., & Barth, E. (2017). A Hybrid Convolutional Variational Autoencoder for Text Generation. In Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 627–637).