Semi-supervised Synthesis of High-Resolution
Editable Textures for 3D Humans

CVPR 2021

Bindita Chaudhuri1
Nikolaos Sarafianos2
Linda Shapiro1
Tony Tung2
1 University of Washington     2 Facebook Reality Labs Research, Sausalito



Our end-to-end framework. The style encoder extracts region-wise styles from the input image, which our ReAVAE then uses to learn region-specific style distributions. The generator synthesizes texture maps given per-class style vectors and a desired layout (segmentation map). The generated texture is then upscaled and used to render 3D human meshes.


We introduce a novel approach to generate diverse high-fidelity texture maps for 3D human meshes in a semi-supervised setup. Given a segmentation mask defining the layout of the semantic regions in the texture map, our network generates high-resolution textures in a variety of styles, which are then used for rendering purposes. To accomplish this task, we propose a Region-adaptive Adversarial Variational AutoEncoder (ReAVAE) that learns the probability distribution of the style of each region individually, so that the style of the generated texture can be controlled by sampling from the region-specific distributions. In addition, we introduce a data generation technique to augment our training set with data lifted from single-view RGB inputs. Our training strategy allows the mixing of reference image styles with arbitrary styles for different regions, a property which can be valuable for virtual try-on AR/VR applications. Experimental results show that our method synthesizes better texture maps compared to prior work while enabling independent layout and style controllability.
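The core mechanics described above, sampling a style vector per semantic region via the VAE reparameterization trick and conditioning generation on the segmentation layout, can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the region count, style dimensionality, and the random stand-ins for the encoder's predicted Gaussian parameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_REGIONS = 6   # hypothetical region count (e.g. skin, hair, top, pants, ...)
STYLE_DIM = 8     # hypothetical per-region style code size
H = W = 4         # tiny texture map for illustration

# Per-region Gaussian parameters the style encoder would predict
# (random placeholders here).
mu = rng.normal(size=(NUM_REGIONS, STYLE_DIM))
log_var = rng.normal(scale=0.1, size=(NUM_REGIONS, STYLE_DIM))

def sample_region_styles(mu, log_var, rng):
    """Reparameterization trick: z = mu + sigma * eps,
    drawing one style vector per semantic region."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def broadcast_styles(seg_map, styles):
    """Place each region's style vector at every pixel of that region,
    giving the generator an (H, W, STYLE_DIM) conditioning tensor."""
    return styles[seg_map]  # integer-array gather over the region axis

seg_map = rng.integers(0, NUM_REGIONS, size=(H, W))  # toy segmentation map
styles = sample_region_styles(mu, log_var, rng)
cond = broadcast_styles(seg_map, styles)
assert cond.shape == (H, W, STYLE_DIM)

# Swapping a single region's style (say region 3, "pants") changes the
# conditioning only at that region's pixels -- the independent
# layout/style control the method provides.
styles_edit = styles.copy()
styles_edit[3] = rng.normal(size=STYLE_DIM)
cond_edit = broadcast_styles(seg_map, styles_edit)
changed = np.any(cond != cond_edit, axis=-1)
assert np.array_equal(changed, seg_map == 3)
```

In the actual model the conditioning step is learned (a generator network rather than a gather), but the sketch shows why sampling from region-specific distributions gives per-region style control.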

Qualitative Results

Given a segmentation map defining the layout of semantic regions in a texture map, our proposed method generates diverse high-resolution texture maps which are then used to render 3D humans. Each example shows a UV map in the inset and the corresponding 3D mesh rendered with that map. The style of each region/class can be controlled individually by manipulating the input style vectors. Note that along each column, the styles of the same classes match (for example, the man in the first row and the woman in the second row of the first column are both wearing green pants).

User interface

Unfortunately, we are unable to release our dataset and code due to confidentiality restrictions.


We would like to thank Christoph Lassner, Olivier Maury, Yuanlu Xu and Ronald Mallet from Facebook Reality Labs for valuable discussions. Thanks to the authors of SEAN for sharing their code and for the webpage template.