Abstract:
We propose an approach for digitally altering people's outfits in images. Given images of a person and a desired clothing style, our method generates a new clothing-item image. The new item displays the color and pattern of the desired style while geometrically mimicking the person's original item. Through superimposition, the altered image is made to look as if the person is wearing the new item. Unlike recent works based on full-image synthesis, our method relies on segment synthesis, which yields benefits for virtual try-on. For the synthesis process, we assume two underlying factors characterizing clothing segments: geometry and style. These two factors are disentangled via preprocessing and combined using a neural network. We explore several network designs and identify important aspects of the architecture and learning process. Our experimental results are three-fold: 1) on images from fashion-parsing datasets, we demonstrate the generation of high-quality clothing segments with fine-level style control; 2) on a virtual try-on benchmark, our method outperforms prior synthesis methods; and 3) in transferring clothing styles, we visualize the differences between our method and neural style transfer.
Published in: IEEE Transactions on Multimedia (Volume: 22, Issue: 2, February 2020)
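The abstract's core mechanism, disentangling a geometry factor from a style factor and recombining them with a neural network to synthesize a clothing segment, can be illustrated with a minimal sketch. The code below is a hypothetical PyTorch illustration, not the authors' implementation: the class and attribute names (SegmentSynthesizer, geometry_encoder, style_proj), the mask-plus-embedding input format, and all layer sizes are assumptions made for clarity.

# Minimal sketch (not the paper's code): fuse a disentangled geometry
# factor (here, a binary clothing-segment mask) with a style factor
# (here, a learned color/pattern embedding) to synthesize a new segment.
# All module names, shapes, and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class SegmentSynthesizer(nn.Module):
    def __init__(self, style_dim=128):
        super().__init__()
        # Encode the person's original segment geometry from a 1-channel mask.
        self.geometry_encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Project the desired style vector so it can be broadcast spatially.
        self.style_proj = nn.Linear(style_dim, 64)
        # Decode the fused representation back to an RGB clothing segment.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, mask, style):
        g = self.geometry_encoder(mask)                    # (B, 64, H/4, W/4)
        s = self.style_proj(style)                         # (B, 64)
        # Tile the global style code over the spatial geometry map.
        s = s[:, :, None, None].expand(-1, -1, g.size(2), g.size(3))
        return self.decoder(torch.cat([g, s], dim=1))      # (B, 3, H, W)

# Usage: synthesize a segment from a 256x256 mask and a 128-d style code.
model = SegmentSynthesizer()
segment = model(torch.rand(1, 1, 256, 256), torch.rand(1, 128))

Broadcasting a global style code over the spatial geometry features is one common way to combine a non-spatial factor with a spatial one; the paper's actual fusion architecture may differ.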