Behind the Scenes of TryOnDiffusion

Written by backpropagation | Published 2024/10/06
Tech Story Tags: ai-in-fashion | deep-learning | tryondiffusion | parallel-unet | photorealistic-fashion | fashion-technology | body-pose-adaptation | image-based-virtual-try-on

TLDR The proposed virtual try-on method takes an image of a person and an image of a garment worn by another person, and synthesizes a realistic visualization of the first person wearing that garment. The process begins with preprocessing: predicting human parsing maps and 2D pose keypoints. The images are then transformed into clothing-agnostic representations that minimize leakage of the original garment's appearance. Finally, diffusion models perform an iterative denoising process to generate high-quality outputs conditioned on these inputs.
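The iterative denoising loop described in the summary can be sketched as follows. This is a minimal toy illustration, not the paper's actual implementation: the `eps_model` stand-in replaces the real Parallel-UNet, the DDIM-style update and the noise schedule are generic diffusion conventions, and the conditioning dictionary keys (`person_agnostic`, `garment`, `pose_keypoints`) are hypothetical names for the inputs the summary mentions.

```python
import numpy as np

def denoise_step(z_t, t, cond, alphas_cumprod, eps_model):
    """One reverse-diffusion step: predict the noise, then move z_t toward z_{t-1}.

    `eps_model` stands in for the try-on network, which the summary says is
    conditioned on a clothing-agnostic person image, a garment image, and
    2D pose keypoints (bundled here in `cond`).
    """
    a_t = alphas_cumprod[t]
    a_prev = alphas_cumprod[t - 1] if t > 0 else 1.0
    eps = eps_model(z_t, t, cond)                        # predicted noise
    x0 = (z_t - np.sqrt(1 - a_t) * eps) / np.sqrt(a_t)   # estimate of the clean image
    # DDIM-style deterministic update toward the previous noise level
    return np.sqrt(a_prev) * x0 + np.sqrt(1 - a_prev) * eps

def sample(shape, cond, eps_model, steps=10):
    """Iteratively denoise pure Gaussian noise into an image estimate."""
    betas = np.linspace(1e-4, 0.2, steps)
    alphas_cumprod = np.cumprod(1.0 - betas)
    z = np.random.default_rng(0).standard_normal(shape)
    for t in reversed(range(steps)):
        z = denoise_step(z, t, cond, alphas_cumprod, eps_model)
    return z

# Toy conditioning inputs: stand-ins for the clothing-agnostic person image,
# the garment image, and the predicted 2D pose keypoints.
cond = {
    "person_agnostic": np.zeros((8, 8, 3)),
    "garment": np.ones((8, 8, 3)),
    "pose_keypoints": np.zeros((17, 2)),
}

# Dummy noise predictor; a trained Parallel-UNet would go here.
eps_model = lambda z, t, cond: z * 0.1

out = sample((8, 8, 3), cond, eps_model)
```

The point of the sketch is the control flow: sampling starts from Gaussian noise and repeatedly calls a conditioned noise predictor, which is why the quality of the conditioning signals (parsing maps, pose, garment segmentation) matters so much for the final output.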

Authors:

(1) Luyang Zhu, University of Washington and Google Research, and work done while the author was an intern at Google;

(2) Dawei Yang, Google Research;

(3) Tyler Zhu, Google Research;

(4) Fitsum Reda, Google Research;

(5) William Chan, Google Research;

(6) Chitwan Saharia, Google Research;

(7) Mohammad Norouzi, Google Research;

(8) Ira Kemelmacher-Shlizerman, University of Washington and Google Research.

Table of Links

Abstract and 1. Introduction

2. Related Work

3. Method

3.1. Cascaded Diffusion Models for Try-On

3.2. Parallel-UNet

4. Experiments

5. Summary and Future Work and References

Appendix

A. Implementation Details

B. Additional Results

3. Method

This paper is available on arxiv under CC BY-NC-ND 4.0 DEED license.
