MuseGAN is a project on music generation. In a nutshell, we aim to generate polyphonic music of multiple tracks (instruments). The proposed models can generate music either from scratch or by accompanying a track given a priori by the user.
We train the model on training data collected from the Lakh Pianoroll Dataset to generate pop song phrases consisting of bass, drums, guitar, piano and strings tracks.
Sample results are available here.
- The latest implementation is based on the network architectures presented in BinaryMuseGAN, where the temporal structure is handled by 3D convolutional layers. The advantage of this design is its smaller network size; the disadvantage is its reduced controllability, e.g., it can no longer feed different latent variables to different measures or tracks.
- The original code we used for running the experiments in the paper can be found in the `v1` folder.
- Looking for a PyTorch version? Check out this repository.
Below we assume the working directory is the repository root.
- Using pipenv (recommended)

  Make sure `pipenv` is installed. (If not, simply run `pip install pipenv`.)

  ```sh
  # Install the dependencies
  pipenv install
  # Activate the virtual environment
  pipenv shell
  ```

- Using pip

  ```sh
  # Install the dependencies
  pip install -r requirements.txt
  ```
The training data is collected from Lakh Pianoroll Dataset (LPD), a new multitrack pianoroll dataset.
```sh
# Download the training data
./scripts/download_data.sh
# Store the training data to shared memory
./scripts/process_data.sh
```
You can also download the training data manually (train_x_lpd_5_phr.npz).
As pianoroll matrices are generally sparse, we store only the indices of the nonzero elements and the array shape in an npz file to save space, and later restore the original array. To save some training data `data` in this format, simply run:

```python
import numpy as np

np.savez_compressed("data.npz", shape=data.shape, nonzero=data.nonzero())
```
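Conversely, a minimal sketch for restoring an array saved in this format (the helper name `load_sparse` is our own, not part of the repository; note that only the positions of the nonzero entries are stored, so the result is a binary array):

```python
import numpy as np

def load_sparse(path):
    """Restore a binary array from the shape/nonzero npz format above."""
    with np.load(path) as f:
        data = np.zeros(f["shape"], dtype=bool)
        # f["nonzero"] holds one row of indices per dimension
        data[tuple(f["nonzero"])] = True
    return data
```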
We provide several shell scripts for easily managing the experiments. (See here for detailed documentation.)
Below we assume the working directory is the repository root.
- Run the following command to set up a new experiment with default settings.

  ```sh
  # Set up a new experiment
  ./scripts/setup_exp.sh "./exp/my_experiment/" "Some notes on my experiment"
  ```
- Modify the configuration and model parameter files for experimental settings.
- You can either train the model:

  ```sh
  # Train the model
  ./scripts/run_train.sh "./exp/my_experiment/" "0"
  ```

  or run the experiment (training + inference + interpolation):

  ```sh
  # Run the experiment
  ./scripts/run_exp.sh "./exp/my_experiment/" "0"
  ```
Run the following command to collect training data from MIDI files.

```sh
# Collect training data
./scripts/collect_data.sh "./midi_dir/" "data/train.npy"
```
- Download the pretrained models:

  ```sh
  # Download the pretrained models
  ./scripts/download_models.sh
  ```

  You can also download the pretrained models manually (pretrained_models.tar.gz).
- You can either perform inference from a trained model:

  ```sh
  # Run inference from a pretrained model
  ./scripts/run_inference.sh "./exp/default/" "0"
  ```

  or perform interpolation from a trained model:

  ```sh
  # Run interpolation from a pretrained model
  ./scripts/run_interpolation.sh "./exp/default/" "0"
  ```
By default, samples are generated alongside the training. You can disable this behavior by setting `save_samples_steps` to zero in the configuration file (`config.yaml`). The generated samples are stored in the following three formats by default.

- `.npy`: raw numpy arrays
- `.png`: image files
- `.npz`: multitrack pianoroll files that can be loaded by the Pypianoroll package

You can disable saving in a specific format by setting `save_array_samples`, `save_image_samples` and `save_pianoroll_samples` to `False` in the configuration file.
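For reference, the relevant part of `config.yaml` might look like the following sketch. The key names come from the text above; the values shown are illustrative assumptions, not the repository defaults.

```yaml
# Illustrative values only -- check your experiment's config.yaml
save_samples_steps: 100       # set to 0 to disable sample generation during training
save_array_samples: True      # save samples as .npy raw numpy arrays
save_image_samples: True      # save samples as .png images
save_pianoroll_samples: True  # save samples as .npz multitrack pianorolls
```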
The generated pianorolls are stored in .npz format to save space and processing time. You can use the following code to write them to MIDI files.

```python
from pypianoroll import Multitrack

m = Multitrack('./test.npz')
m.write('./test.mid')
```
Some sample results can be found in the `./exp/` directory. More samples can be downloaded from the following links.

- sample_results.tar.gz (54.7 MB): sample inference and interpolation results
- training_samples.tar.gz (18.7 MB): sample generated results at different steps
Please cite the following paper if you use the code provided in this repository.
Hao-Wen Dong*, Wen-Yi Hsiao*, Li-Chia Yang and Yi-Hsuan Yang, "MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment," AAAI Conference on Artificial Intelligence (AAAI), 2018. (*equal contribution)

[homepage] [arXiv] [paper] [slides] [code]
MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment
Hao-Wen Dong*, Wen-Yi Hsiao*, Li-Chia Yang and Yi-Hsuan Yang (*equal contribution)
AAAI Conference on Artificial Intelligence (AAAI), 2018.
[homepage] [arXiv] [paper] [slides] [code]
Convolutional Generative Adversarial Networks with Binary Neurons for Polyphonic Music Generation
Hao-Wen Dong and Yi-Hsuan Yang
International Society for Music Information Retrieval Conference (ISMIR), 2018.
[homepage] [video] [paper] [slides] [slides (long)] [poster] [arXiv] [code]
MuseGAN: Demonstration of a Convolutional GAN Based Model for Generating Multi-track Piano-rolls
Hao-Wen Dong*, Wen-Yi Hsiao*, Li-Chia Yang and Yi-Hsuan Yang (*equal contribution)
ISMIR Late-Breaking Demos, 2017.
[paper] [poster]