
# Learning to Customize Model Structures for Few-shot Dialogue Generation Tasks

This is the implementation of our ACL 2020 paper:

Learning to Customize Model Structures for Few-shot Dialogue Generation Tasks.

Yiping Song, Zequn Liu, Wei Bi, Rui Yan, Ming Zhang

https://arxiv.org/abs/1910.14326

Please cite our paper when you use this code in your work.

## Dependencies

```
❱❱❱ pip install -r requirements.txt
```

Put the pre-trained GloVe embedding `glove.6B.300d.txt` in `/vectors/` and the trained NLI model `pytorch_model.bin` in `/data/nli_model/`, as sketched below.
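A minimal setup sketch, assuming the standard Stanford NLP distribution of the GloVe 6B vectors; the download URL is an assumption and not part of this repository, and the source of the trained NLI model is not specified here:

```
# Install Python dependencies.
pip install -r requirements.txt

# Create the expected directories.
mkdir -p vectors data/nli_model

# Fetch GloVe 6B and extract only the 300d vectors into vectors/.
# (URL assumed: the standard Stanford NLP mirror.)
wget https://nlp.stanford.edu/data/glove.6B.zip
unzip glove.6B.zip glove.6B.300d.txt -d vectors/

# Place the trained NLI model here; its download location is not given
# in this repository.
# cp /path/to/pytorch_model.bin data/nli_model/
```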

## Experiment

This code implements the experiment of our model CMAML-Seq2SPG on Persona-chat. The scripts for training and evaluation are `train.sh` and `test.sh`.

After training, set `--save_model` to the checkpoint with the lowest perplexity (PPL) on the validation set before running evaluation, as sketched below.
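A usage sketch under the assumption that both scripts are run from the repository root; the exact checkpoint path depends on your training run and is not fixed by this repository:

```
# Train CMAML-Seq2SPG on Persona-chat.
bash train.sh

# Before evaluating, edit test.sh so that --save_model points to the
# checkpoint with the lowest validation PPL from the training run above,
# then run the evaluation script.
bash test.sh
```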

## Acknowledgement

We use the framework of PAML and the Seq2Seq implementation from https://github.com/MaximumEntropy/Seq2Seq-PyTorch.
