VQGAN: Taming Transformers for High-Resolution Image Synthesis [Paper Explained]

The authors introduce VQGAN, which combines the efficiency of convolutional approaches with the expressivity of transformers.
VQGAN is essentially a convolutional autoencoder trained with an adversarial (GAN) loss: it learns a codebook of context-rich visual parts and uses it to quantize the bottleneck representation on every forward pass.
An autoregressive transformer (a self-attention model) is then trained as a prior over sequences of these codewords.
Sampling from this prior produces plausible constellations of codewords, which the decoder turns into realistic images at arbitrary resolution.
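
For intuition, here is a minimal PyTorch sketch of that quantization step (not the authors' implementation; class and parameter names are illustrative): each spatial feature from the CNN encoder is snapped to its nearest codebook entry, and a straight-through estimator lets gradients bypass the non-differentiable lookup.

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup for the autoencoder bottleneck (sketch)."""

    def __init__(self, num_codes: int = 1024, code_dim: int = 256):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, z: torch.Tensor):
        # z: (B, C, H, W) feature map from the CNN encoder
        B, C, H, W = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, C)              # (B*H*W, C)
        # squared L2 distance from every feature vector to every codeword
        dist = (flat.pow(2).sum(1, keepdim=True)
                - 2 * flat @ self.codebook.weight.t()
                + self.codebook.weight.pow(2).sum(1))
        idx = dist.argmin(dim=1)                                 # nearest codeword per position
        z_q = self.codebook(idx).view(B, H, W, C).permute(0, 3, 1, 2)
        # straight-through estimator: copy gradients from z_q back to z
        z_q = z + (z_q - z).detach()
        return z_q, idx.view(B, H, W)
```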


Paper: https://arxiv.org/pdf/2012.09841.pdf
Code: https://github.com/CompVis/taming-tra... (with pretrained models)
Colab notebook: https://colab.research.google.com/git...
Colab notebook to compare the first stage models in VQGAN and in DALL-E: https://colab.research.google.com/git...

Abstract:
Designed to learn long-range interactions on sequential data, transformers continue to show state-of-the-art results on a wide variety of tasks. In contrast to CNNs, they contain no inductive bias that prioritizes local interactions. This makes them expressive, but also computationally infeasible for long sequences, such as high-resolution images. We demonstrate how combining the effectiveness of the inductive bias of CNNs with the expressivity of transformers enables them to model and thereby synthesize high-resolution images. We show how to (i) use CNNs to learn a context-rich vocabulary of image constituents, and in turn (ii) utilize transformers to efficiently model their composition within high-resolution images. Our approach is readily applied to conditional synthesis tasks, where both non-spatial information, such as object classes, and spatial information, such as segmentations, can control the generated image. In particular, we present the first results on semantically-guided synthesis of megapixel images with transformers.
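
As a hedged illustration of step (ii), the sketch below shows how a trained autoregressive transformer could sample a grid of codebook indices one token at a time; `transformer` is a placeholder for the stage-2 model, and the start token and final decode call are assumptions for illustration, not the paper's exact interface.

```python
import torch

@torch.no_grad()
def sample_latent_grid(transformer, grid=16, num_codes=1024, temperature=1.0):
    """Autoregressively sample a grid x grid map of codebook indices (illustrative)."""
    seq = torch.zeros(1, 1, dtype=torch.long)            # hypothetical start token
    for _ in range(grid * grid):
        logits = transformer(seq)[:, -1, :]              # (1, num_codes) next-token logits
        probs = torch.softmax(logits / temperature, dim=-1)
        nxt = torch.multinomial(probs, num_samples=1)    # sample one codeword index
        seq = torch.cat([seq, nxt], dim=1)
    return seq[:, 1:].view(1, grid, grid)                # drop start token, reshape to grid

# The stage-1 decoder then renders pixels from the quantized grid, e.g. (hypothetical API):
# image = vqgan.decode(vqgan.codebook(indices).permute(0, 3, 1, 2))
```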

--
🐦 My Twitter: @artsiom_s
Subscribe to my Telegram channel to get more posts on AI and CV: https://t.me/gradientdude

--
Timecodes:
0:00 Intro
0:27 VQGAN method
2:21 Learning the transformer on codewords
4:42 Inference: generating arbitrary-size images with the transformer
5:15 Sliding attention window
5:45 Limitations
6:33 Losses used for training
7:44 Comparison to the CNN-based autoregressive model PixelSNAIL
8:05 Conditional image synthesis
9:35 Results
10:00 Conclusion
10:30 Outro

--
Background music: JNATHYN - Dioma [NCS Release]
