OpenAI's DALL-E in Mesh-TensorFlow.
If this is similarly efficient to GPT-Neo, this repo should be able to train models at and above the size of OpenAI's DALL-E (12B params).
No pretrained models yet.
Thanks to Ben Wang for the tf VAE implementation as well as getting the mtf version running, and to Aran Komatsuzaki for help building the mtf VAE and input pipeline.
git clone https://github.com/EleutherAI/GPTNeo
cd GPTNeo
pip3 install -r requirements.txt
Runs on TPUs; untested on GPUs, but it should work in theory. The example configs are designed to run on a TPU v3-32 pod.
To set up TPUs, sign up for Google Cloud Platform and create a storage bucket.
Create your VM through a google shell (https://ssh.cloud.google.com/) with ctpu up --vm-only so that it can connect to your Google bucket and TPUs, and set up the repo as above.
DALL-E needs a pretrained VAE to compress images to tokens. To run the VAE pretraining, adjust the params in configs/vae_example.json to a glob path pointing to a dataset of jpgs, and adjust image_size to the appropriate size:
"dataset": {
"train_path": "gs://neo-datasets/CIFAR-10-images/train/**/*.jpg",
"eval_path": "gs://neo-datasets/CIFAR-10-images/test/**/*.jpg",
"image_size": 32
}
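Before launching a long TPU job, it can help to confirm that the glob pattern actually matches your images. A minimal sketch using Python's standard glob module (the directory layout is a local stand-in for your dataset; for gs:// paths you would use a GCS-aware API such as tf.io.gfile.glob instead, whose pattern support may differ slightly):

```python
import glob

def count_images(pattern):
    """Count files matching a glob pattern such as "train/**/*.jpg".

    recursive=True makes "**" match any number of intermediate
    directories, as in the config paths above.
    """
    return len(glob.glob(pattern, recursive=True))

# Example (local stand-in for the gs:// path in the config):
# count_images("CIFAR-10-images/train/**/*.jpg")
```

A count of zero usually means the pattern or bucket path is wrong, which otherwise surfaces only as an opaque input-pipeline error later.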
Once this is all set up, create your TPU, then run:
python train_vae_tf.py --tpu your_tpu_name --model vae_example
Training logs out image tensors and loss values; to check progress, you can run:
tensorboard --logdir your_model_dir
Once the VAE is pretrained, you can move on to DALL-E.
Currently we are training on a dummy dataset. A public, large-scale dataset for DALL-E is in the works. In the meantime, to generate some dummy data, run:
python src/data/create_tfrecords.py
This should download CIFAR-10 and generate some random captions to act as text inputs.
Custom datasets should be formatted in a folder, with a jsonl file in the root folder containing caption data and paths to the respective images, as follows:
Folder structure:
data_folder
jsonl_file
folder_1
img1
img2
...
folder_2
img1
img2
...
...
jsonl structure:
{"image_path": "folder_1/img1", "caption": "some words"}
{"image_path": "folder_2/img2", "caption": "more words"}
...
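If your captions already live in some lookup structure, a small script can emit a jsonl manifest in this shape. A minimal sketch (write_manifest and its captions mapping are hypothetical helpers, not part of this repo):

```python
import json
import os

def write_manifest(data_folder, captions, out_name="captions.jsonl"):
    """Write one JSON object per line, matching the jsonl structure above.

    `captions` maps a relative image path (e.g. "folder_1/img1.jpg")
    to its caption string; the file is written into the root folder.
    """
    out_path = os.path.join(data_folder, out_name)
    with open(out_path, "w") as f:
        for rel_path, caption in sorted(captions.items()):
            f.write(json.dumps({"image_path": rel_path, "caption": caption}) + "\n")
    return out_path
```

Each line is a standalone JSON object (so the paths must be quoted strings), which is what makes the file valid jsonl.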
You can then use the create_paired_dataset function in src/data/create_tfrecords.py to encode the dataset into tfrecords for use in training.
Once the dataset is created, copy it over to your bucket with gsutil:
gsutil cp -r DALLE-tfrecords gs://neo-datasets/
Finally, run the training with:
python train_dalle.py --tpu your_tpu_name --model dalle_example
VAE:
{
"model_type": "vae",
"dataset": {
"train_path": "gs://neo-datasets/CIFAR-10-images/train/**/*.jpg", # glob path to training images
"eval_path": "gs://neo-datasets/CIFAR-10-images/test/**/*.jpg", # glob path to eval images
"image_size": 32 # size of images (all images will be cropped / padded to this size)
},
"train_batch_size": 32,
"eval_batch_size": 32,
"predict_batch_size": 32,
"steps_per_checkpoint": 1000, # how often to save a checkpoint
"iterations": 500, # number of batches to infeed to the tpu at a time. Must be < steps_per_checkpoint
"train_steps": 100000, # total training steps
"eval_steps": 0, # run evaluation for this many steps every steps_per_checkpoint
"model_path": "gs://neo-models/vae_test2/", # directory in which to save the model
"mesh_shape": "data:16,model:2", # mapping of processors to named dimensions - see mesh-tensorflow repo for more info
"layout": "batch_dim:data", # which named dimensions of the model to split across the mesh - see mesh-tensorflow repo for more info
"num_tokens": 512, # vocab size
"dim": 512,
"hidden_dim": 64, # size of hidden dim
"n_channels": 3, # number of input channels
"bf_16": false, # if true, the model is trained with bfloat16 precision
"lr": 0.001, # learning rate [by default learning rate starts at this value, then decays to 10% of this value over the course of the training]
"num_layers": 3, # number of blocks in the encoder / decoder
"train_gumbel_hard": true, # whether to use hard or soft gumbel_softmax
"eval_gumbel_hard": true
}
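The train_gumbel_hard / eval_gumbel_hard flags choose between soft (probability-vector) and hard (one-hot) Gumbel-Softmax samples at the VAE's discrete bottleneck. A minimal pure-Python sketch of that sampling step, written here for illustration and independent of this repo's actual implementation (in a real model the hard sample would still carry the soft sample's gradient via the straight-through trick):

```python
import math
import random

def gumbel_softmax(logits, tau=1.0, hard=False):
    """Draw one Gumbel-Softmax sample over `logits` at temperature `tau`."""
    # Gumbel(0, 1) noise; the tiny epsilon avoids log(0).
    gumbels = [-math.log(-math.log(random.random() + 1e-20)) for _ in logits]
    scores = [(l + g) / tau for l, g in zip(logits, gumbels)]
    # Numerically stable softmax over the perturbed logits.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    soft = [e / total for e in exps]
    if not hard:
        return soft
    # Hard sample: one-hot at the argmax of the soft sample.
    top = soft.index(max(soft))
    return [1.0 if i == top else 0.0 for i in range(len(soft))]
```

Lower tau makes soft samples closer to one-hot; the hard variant matches what the VAE sees at inference time, which is why it is enabled for eval in the example config.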
DALL-E:
{
"model_type": "dalle",
"dataset": {
"train_path": "gs://neo-datasets/DALLE-tfrecords/*.tfrecords", # glob path to tfrecords data
"eval_path": "gs://neo-datasets/DALLE-tfrecords/*.tfrecords",
"image_size": 32 # size of images (all images will be cropped / padded to this size)
},
"train_batch_size": 32, # see above
"eval_batch_size": 32,
"predict_batch_size": 32,
"steps_per_checkpoint": 1000,
"iterations": 500,
"train_steps": 100000,
"predict_steps": 0,
"eval_steps": 0,
"n_channels": 3,
"bf_16": false,
"lr": 0.001,
"model_path": "gs://neo-models/dalle_test/",
"mesh_shape": "data:16,model:2",
"layout": "batch_dim:data",
"n_embd": 512, # size of embedding dim
"text_vocab_size": 50258, # vocabulary size of the text tokenizer
"image_vocab_size": 512, # vocabulary size of the vae - should equal num_tokens above
"text_seq_len": 256, # length of text inputs (all inputs longer / shorter will be truncated / padded)
"n_layers": 6,
"n_heads": 4, # number of attention heads. For best performance, n_embd / n_heads should equal 128
"vae_model": "vae_example" # path to or name of vae model config
}
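Several settings above constrain each other, per the inline comments: iterations must be less than steps_per_checkpoint, the DALL-E's image_vocab_size should equal the VAE's num_tokens, and n_embd / n_heads should ideally be 128. A small hypothetical sanity-check helper (not part of the repo) that catches these mismatches before a job is launched:

```python
def check_configs(vae, dalle):
    """Raise AssertionError if the configs violate their documented constraints."""
    assert dalle["iterations"] < dalle["steps_per_checkpoint"], \
        "iterations must be < steps_per_checkpoint"
    assert dalle["image_vocab_size"] == vae["num_tokens"], \
        "DALL-E image_vocab_size should equal the VAE's num_tokens"
    assert dalle["n_embd"] % dalle["n_heads"] == 0, \
        "n_embd must be divisible by n_heads"
    # 128 per head is a performance recommendation, not a hard requirement.
    if dalle["n_embd"] // dalle["n_heads"] != 128:
        print("warning: n_embd / n_heads != 128; performance may suffer")
```

With the example configs this passes: 500 < 1000, 512 == 512, and 512 / 4 = 128.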