Face Attribute Editing with Disentangled Latent Vectors

Bilkent University

VecGAN performs disentangled semantic edits while preserving image details. VecGAN was presented at ECCV 2022. We additionally propose VecGAN++, which introduces further improvements and analysis. This page hosts both works.

Abstract

We propose an image-to-image translation framework for facial attribute editing with disentangled interpretable latent directions.

The facial attribute editing task faces two challenges: editing a targeted attribute with controllable strength, and disentangling attribute representations so that other attributes are preserved during edits. To this end, inspired by latent space factorization works on fixed pretrained GANs, we design attribute editing via latent space factorization and, for each attribute, learn a linear direction that is orthogonal to the others.

To project images to semantically organized latent spaces, we use an encoder-decoder architecture with attention-based skip connections. We extensively compare our approach with previous image translation algorithms and with methods that edit via pretrained GANs. Our experiments show that our method significantly improves over the state of the art.

Method


VecGAN edits images with an encoder-decoder architecture that relies on learned, disentangled latent directions. The encoder maps an image to a latent representation, in which we change a selected tag i, e.g. hair color, along a learnable direction Ai with scale ɑ. The scale is computed by subtracting the source style scale from the target style scale; this operation corresponds to removing the source attribute and adding the target attribute. To remove the image's attribute, the source style scale is encoded and projected from the source image. To add the target attribute, the target style scale is either sampled from a distribution learned for the chosen attribute value j, e.g. black or blonde, or encoded and projected from a reference image. In VecGAN++, we additionally introduce attention-based skip connections to selectively carry information from the encoder to the decoder.
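The editing operation above can be sketched as a simple shift in latent space. This is a minimal NumPy illustration, not VecGAN's actual implementation: the latent dimension, number of tags, and helper names are assumptions, and orthonormal directions are obtained here via a QR decomposition purely for demonstration.

```python
import numpy as np

# Hypothetical sizes for illustration only; VecGAN's real shapes differ.
LATENT_DIM = 512
NUM_TAGS = 6
rng = np.random.default_rng(0)

# One direction per attribute tag; the columns are made orthonormal,
# mirroring the orthogonality constraint on the learned directions.
A = np.linalg.qr(rng.normal(size=(LATENT_DIM, NUM_TAGS)))[0]

def edit(z, tag, alpha_source, alpha_target):
    """Shift latent z along direction A[:, tag].

    alpha = target scale - source scale, which corresponds to removing
    the source attribute and adding the target attribute.
    """
    alpha = alpha_target - alpha_source
    return z + alpha * A[:, tag]

z = rng.normal(size=LATENT_DIM)  # stands in for Encoder(image)
z_edit = edit(z, tag=0, alpha_source=0.2, alpha_target=1.0)
```

Because the directions are orthogonal, shifting along one tag's direction leaves the latent's components along the other tags' directions unchanged, which is what preserves the remaining attributes.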

Video for VecGAN - ECCV 2022

Interpolation Results

VecGAN++ can interpolate attribute edits by changing the translation strength of the desired semantic. Our model can edit the smile, bangs, eyeglasses, hair color, age, and gender tags; interpolation results are given below.
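Interpolation amounts to sweeping the edit strength ɑ along one tag's direction. A toy sketch under assumed names and shapes (not VecGAN's code):

```python
import numpy as np

rng = np.random.default_rng(1)
direction = rng.normal(size=512)
direction /= np.linalg.norm(direction)  # unit direction for one tag, e.g. smile
z = rng.normal(size=512)                # stands in for the encoded source image

# Intermediate latents between "no edit" (alpha=0) and full edit strength.
frames = [z + alpha * direction for alpha in np.linspace(0.0, 1.0, num=5)]
```

Decoding each intermediate latent yields the gradual transitions shown in the interpolation grids below.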

Input

Smile

Bangs

Eyeglasses

Hair

Age

Gender

Generalization Results

VecGAN++ also generalizes well to out-of-domain images. Below we provide smile interpolation results on samples from the MetFaces dataset.

BibTeX

@inproceedings{dalva2022vecgan,
    title={VecGAN: Image-to-Image Translation with Interpretable Latent Directions},
    author={Dalva, Yusuf and Alt{\i}ndi{\c{s}}, Said Fahri and Dundar, Aysegul},
    booktitle={European Conference on Computer Vision},
    pages={153--169},
    year={2022},
    organization={Springer}
    }

@article{dalva2023face,
    title={Face Attribute Editing with Disentangled Latent Vectors},
    author={Dalva, Yusuf and Pehlivan, Hamza and Moran, Cansu and Hatipo{\u{g}}lu, {\"O}yk{\"u} Irmak and D{\"u}ndar, Ay{\c{s}}eg{\"u}l},
    journal={arXiv preprint arXiv:2301.04628},
    year={2023}
    }