Inferent and Generative Adversarial Networks (IGAN)

Version 1.0.0 (6.21 KB) by Luc VIGNAUD
The IGAN Discriminative & Generative architecture will "crack open the bone and suck out the substantific marrow" of your favorite dataset.
78 Downloads
Updated 21 Oct 2021


"Crack open the bone and suck out the substantific marrow", wrote Rabelais in Gargantua.
That is exactly what the IGAN architecture intends to do with your favorite dataset: build a Discriminative and Generative Deep Learning model, i.e. a self-organized bijective relation between data with complex statistics and a much simpler latent-space model.
Thus, IGAN can generate plausible "fake" data from random latents (like a classical GAN) AND encode real data into latents (like a Variational Auto-Encoder, for instance), with the magic trick that close data (in the semantic sense) are encoded into close latents (in the L2 sense), and close latents generate close data!
Many applications benefit from much simpler processing in the latent space than in the data space, such as self-supervision, domain transfer, semantic data interpolation and arithmetic, etc.
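As an illustration of latent-space interpolation, here is a minimal NumPy sketch (not part of the IGAN code; the function name is hypothetical). Thanks to IGAN's continuity property, decoding each interpolated latent with the Generator G should yield a smooth semantic morph between the two corresponding data samples.

```python
import numpy as np

def interpolate_latents(z_a, z_b, steps=5):
    """Linear interpolation between two latent vectors.
    Returns an array of shape (steps, latent_dim), going from z_a to z_b."""
    alphas = np.linspace(0.0, 1.0, steps)
    return np.stack([(1 - a) * z_a + a * z_b for a in alphas])

z_a = np.zeros(4)
z_b = np.ones(4)
path = interpolate_latents(z_a, z_b, steps=5)
print(path.shape)  # (5, 4)
print(path[2])     # midpoint: [0.5 0.5 0.5 0.5]
```

Each row of `path` would then be fed to the Generator to produce the interpolated images.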
Unlike existing similar methods (AAE, VAEGAN, AGE, AVB, VEEGAN, AEGAN, ALI, BiGAN, ALAE, ALICE, DALI, LIAGAN, ...), IGAN sets up a coupled adversarial game on both the data and latent spaces to follow the joint probability p(X,Z) = p(G(Z), E(X)), where X represents real data, Z the model latents, G a Generator network, and E an Encoder network. IGAN ensures that both the generated latents and the generated data spread over all of the available space. Furthermore, the self-supervised data-to-latent correspondence (and its reverse) is obtained solely through a latent cyclo-reconstruction loss. Because of the dimensionality reduction between the data and latent spaces, details and outliers are automatically filtered out by the algorithm. A data cyclo-reconstruction loss is often a source of blurring due to this filtering.
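The latent cyclo-reconstruction loss mentioned above can be sketched as the L2 error between latents Z and their round trip E(G(Z)). The toy below uses linear stand-ins for E and G (an assumption purely for illustration; in the real IGAN these are deep networks, and the matrix names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins for the Generator G and Encoder E.
data_dim, latent_dim = 8, 2
A_g = rng.standard_normal((data_dim, latent_dim))  # G: Z -> X
A_e = np.linalg.pinv(A_g)                          # E: X -> Z (pseudo-inverse toy)

def G(Z):
    # Generate data from latents.
    return Z @ A_g.T

def E(X):
    # Encode data into latents.
    return X @ A_e.T

def latent_cyclo_reconstruction_loss(Z):
    """Mean squared L2 error between latents Z and their cyclo-reconstruction E(G(Z))."""
    return np.mean((Z - E(G(Z))) ** 2)

Z = rng.standard_normal((25, latent_dim))
print(latent_cyclo_reconstruction_loss(Z))  # ~0 here, since E inverts G exactly
```

In the actual training loop this scalar would be minimized jointly with the adversarial losses on both spaces, and monitored as a convergence indicator.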
IGAN remains frugal: it adds only a single Encoder network to the classical GAN scheme and runs on a single lab PC with a GPU.
IGAN brings inference, convergence, and stability to the classical GAN; its performance during training is monitored by checking the data and latent reconstruction errors.
IGAN outputs on the flower photos dataset should look like the following:
Figure 5: network adversarial scores along training
Figure 2: reconstruction errors (data and latents) along training time
Example of 25 random validation images:
Figure 3: reconstructions in image space. Upper left: some images used for training; lower left: reconstructions of these training images; upper right: IGAN-generated "fake" images; lower right: reconstructions of these generated images.
Figure 4: reconstructions in latent space. Upper left: 25 latent vectors (one per column) used to generate the fake images of Figure 3, upper right; upper center: reconstructions of these latents; upper right: visualisation of the reconstruction error; lower left: latents encoded from real training images (those of Figure 3, upper left); lower center: reconstructions of these latents; lower right: reconstruction error.
Thanks for playing with IGAN on your favorite dataset!
I would be delighted to receive your comments, criticism, encouragement, bug reports, improvement suggestions, application ideas...

Cite As

Luc VIGNAUD (2024). Inferent and Generative Adversarial Networks (IGAN) (https://www.mathworks.com/matlabcentral/fileexchange/100923-inferent-and-generative-adversarial-networks-igan), MATLAB Central File Exchange. Retrieved .

Vignaud, Luc. "IGAN: Inferent and Generative Adversarial Networks." arXiv preprint arXiv:2109.13360 (2021).

MATLAB Release Compatibility
Created with R2021a
Compatible with R2021a and later releases
Platform Compatibility
Windows macOS Linux
