Title
Attribute2image: Conditional Image Generation From Visual Attributes
Abstract
This paper investigates the novel problem of generating images from visual attributes. We model an image as a composite of foreground and background and develop a layered generative model with disentangled latent variables, trained end-to-end as a variational auto-encoder. We experiment with natural images of faces and birds and demonstrate that the proposed models generate realistic and diverse samples with disentangled latent representations. For novel images, we infer the latent variables with a general energy-minimization algorithm; the learned generative models achieve strong quantitative and visual results on the tasks of attribute-conditioned image reconstruction and completion.
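The two core mechanics the abstract describes can be sketched compactly: the VAE reparameterization trick for sampling latent variables, and the layered composition that gates each pixel between a generated foreground and background. The sketch below is illustrative only; the function names, shapes, and the simple elementwise gating are assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar):
    """VAE reparameterization: z = mu + sigma * eps, eps ~ N(0, I).

    Keeps sampling differentiable w.r.t. mu and logvar in a real framework;
    here we just show the arithmetic with numpy.
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def layered_compose(foreground, background, mask):
    """Gated layered composition (illustrative form of the layered model):

    Each pixel comes from the foreground where mask ~ 1 and from the
    background where mask ~ 0, i.e. x = m * fg + (1 - m) * bg.
    """
    return mask * foreground + (1.0 - mask) * background

# Toy usage: a white foreground over a black background with a full mask.
fg = np.ones((4, 4, 3))
bg = np.zeros((4, 4, 3))
mask = np.ones((4, 4, 1))
composite = layered_compose(fg, bg, mask)
```

With a mask of all ones the composite equals the foreground; with all zeros it equals the background, so disentangled latents controlling `fg`, `bg`, and `mask` separately can vary one layer without disturbing the other.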
Year
2015
DOI
10.1007/978-3-319-46493-0_47
Venue
COMPUTER VISION - ECCV 2016, PT IV
Keywords
Face Image, Image Generation, Convolutional Neural Network, Deep Neural Network, Recognition Model
Field
Image generation, Bayesian inference, Computer science, Artificial intelligence, Rendering (computer graphics), Machine learning, Feature learning, Generative model
DocType
Journal
Volume
9908
ISSN
0302-9743
Citations
132
PageRank
4.77
References
38
Authors
4
Name           Order  Citations  PageRank
Xinchen Yan    1      415        16.71
Jimei Yang     2      1083       40.68
Kihyuk Sohn    3      629        32.95
Honglak Lee    4      62473      98.39