TR2020-014

FX-GAN: Self-Supervised GAN Learning via Feature Exchange


    •  Huang, R., Xu, W., Lee, T.-Y., Cherian, A., Wang, Y., Marks, T., "FX-GAN: Self-Supervised GAN Learning via Feature Exchange", IEEE Winter Conference on Applications of Computer Vision (WACV), February 2020, pp. 3194-3202.
      @inproceedings{Huang2020feb,
        author = {Huang, Rui and Xu, Wenju and Lee, Teng-Yok and Cherian, Anoop and Wang, Ye and Marks, Tim},
        title = {FX-GAN: Self-Supervised GAN Learning via Feature Exchange},
        booktitle = {IEEE Winter Conference on Applications of Computer Vision (WACV)},
        year = 2020,
        pages = {3194--3202},
        month = feb,
        url = {https://www.merl.com/publications/TR2020-014}
      }
  Research Areas:

    Artificial Intelligence, Computer Vision, Machine Learning

We propose a self-supervised approach to improve the training of Generative Adversarial Networks (GANs) by inducing the discriminator to examine the structural consistency of images. Although natural image samples provide ideal examples of both valid structure and valid texture, learning to reproduce both together remains an open challenge. In our approach, we augment the training set of natural images with modified examples that have degraded structural consistency. These degraded examples are created automatically by randomly exchanging pairs of patches in an image’s convolutional feature map. We call this approach feature exchange. With this setup, we propose a novel GAN formulation, termed Feature eXchange GAN (FX-GAN), in which the discriminator is trained not only to distinguish real from generated images, but also to perform the auxiliary task of distinguishing real images from structurally corrupted (feature-exchanged) real images. This auxiliary task causes the discriminator to learn the proper feature structure of natural images, which in turn guides the generator to produce images with more realistic structure. Compared with strong GAN baselines, our proposed self-supervision approach improves generated image quality, diversity, and training stability in both the unconditional and class-conditional settings.
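To make the feature-exchange operation concrete, the following is a minimal NumPy sketch of randomly swapping pairs of spatial patches in a convolutional feature map. This is an illustrative reconstruction from the abstract, not the paper's implementation: the patch size, number of swaps, and sampling scheme are hypothetical choices, and in FX-GAN the operation is applied to the discriminator's feature maps during training rather than to standalone arrays.

```python
import numpy as np

def feature_exchange(fmap, patch=2, n_swaps=1, rng=None):
    """Corrupt the spatial structure of a feature map by swapping patches.

    fmap    : array of shape (C, H, W), a convolutional feature map.
    patch   : side length of the square patches to exchange (hypothetical
              parameter; the paper's choice may differ).
    n_swaps : number of patch pairs to exchange.
    """
    rng = np.random.default_rng() if rng is None else rng
    out = fmap.copy()
    _, H, W = out.shape
    for _ in range(n_swaps):
        # Sample two patch locations; resample until they differ so the
        # swap actually changes the map. Partial overlap is allowed here,
        # which still corrupts structure, just not as a clean exchange.
        y1, x1 = rng.integers(0, H - patch + 1), rng.integers(0, W - patch + 1)
        y2, x2 = y1, x1
        while (y2, x2) == (y1, x1):
            y2, x2 = rng.integers(0, H - patch + 1), rng.integers(0, W - patch + 1)
        tmp = out[:, y1:y1 + patch, x1:x1 + patch].copy()
        out[:, y1:y1 + patch, x1:x1 + patch] = out[:, y2:y2 + patch, x2:x2 + patch]
        out[:, y2:y2 + patch, x2:x2 + patch] = tmp
    return out
```

In the FX-GAN training loop, such feature-exchanged maps would serve as the "structurally corrupted real" class for the discriminator's auxiliary classification task, alongside ordinary real and generated examples.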