Have you ever been frustrated by the gap between a seller's polished product photos and what buyers actually receive when shopping online? The same garment looks stylish on a model, yet somehow disappointing when it arrives. A machine learning team at Bielefeld University in Germany has developed an AI tool called TryOffDiff to address this problem. Given a photo of a person wearing a garment, it "removes" the person, leaving only the clothing itself, and generates a high-quality, standardized product display image, helping to bridge the gap between the seller's presentation and the buyer's experience.
The technology builds on diffusion models: from a single photo, it identifies the garment's shape, color, texture, and other attributes, then reconstructs that information as a high-definition product image. The resulting pictures are sharp and faithful in detail, with the background removed automatically, much like the work of a professional product photographer.
How does TryOffDiff work? Simply put, it operates like a skilled tailor, in two stages. First, an image encoder called SigLIP extracts the garment's characteristic information from the photo, including color, texture, and pattern, much as a tailor studies a piece of cloth. This information is then fed as conditioning to the Stable Diffusion image generation model, which, like a magical sewing machine, synthesizes images from the input it receives. Guided by the extracted garment features, Stable Diffusion produces a standardized, catalog-style product image of the clothing on its own.
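The two-stage idea above can be sketched in miniature. This is an illustrative toy, not the authors' implementation: the real system uses a pretrained SigLIP encoder and a fine-tuned Stable Diffusion model, which are stood in for here by a per-channel color average and a simple conditioned denoising loop.

```python
import numpy as np

# Toy sketch of TryOffDiff's two-stage pipeline (illustrative only):
# 1) an encoder reduces the worn-garment photo to a feature vector,
# 2) a denoiser iteratively refines random noise into an image,
#    conditioned on those features at every step.

rng = np.random.default_rng(0)

def encode_garment(photo: np.ndarray) -> np.ndarray:
    """Stand-in for SigLIP: summarize the photo as a small feature vector
    (here, simply the average color per channel)."""
    return photo.mean(axis=(0, 1))

def denoise_step(x: np.ndarray, features: np.ndarray, t: float) -> np.ndarray:
    """Stand-in for one diffusion step: nudge the noisy image toward a
    target determined by the conditioning features."""
    target = np.broadcast_to(features, x.shape)
    return x + t * (target - x)

def generate(photo: np.ndarray, steps: int = 20) -> np.ndarray:
    """Run the full toy pipeline: encode, then iteratively denoise."""
    features = encode_garment(photo)
    x = rng.normal(size=photo.shape)          # start from pure noise
    for _ in range(steps):
        x = denoise_step(x, features, t=0.3)  # condition every step
    return x

photo = rng.uniform(0, 1, size=(64, 64, 3))   # fake "person wearing garment"
product_image = generate(photo)
print(product_image.shape)                    # (64, 64, 3)
```

The key design point survives even in this toy: generation starts from noise, and the garment photo influences the output only through the encoder's features, which is what lets the model discard the person while keeping the clothing's attributes.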
To evaluate TryOffDiff, the researchers trained and tested it on a dataset called VITON-HD. The experimental results show that it performs very well: the generated clothing images are detailed and realistic, approaching the quality of professionally shot product photos. Compared with existing virtual try-on methods, TryOffDiff is particularly strong at preserving garment details such as patterns and logos.
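Evaluations like this typically compare each generated garment image against a ground-truth product photo with image-similarity metrics. The snippet below shows one common pixel-level metric, PSNR, as a generic illustration; the TryOffDiff authors may rely on additional perceptual metrics, and this is not a claim about their exact evaluation protocol.

```python
import numpy as np

def psnr(generated: np.ndarray, reference: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((generated - reference) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * np.log10(max_val ** 2 / mse)

# A uniform pixel error of 0.1 gives MSE = 0.01, i.e. 20 dB.
reference = np.zeros((8, 8, 3))
generated = reference + 0.1
print(round(psnr(generated, reference), 1))  # 20.0
```

Pixel metrics like PSNR reward exact reconstruction, which is why preserving fine details such as logos and prints, where TryOffDiff reportedly excels, matters so much for the scores.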
The application prospects for this technology are broad. It can help consumers better understand what they are buying, and help e-commerce platforms improve product displays and reduce return rates. In the future, you may only need to upload a photo of yourself to see how different clothes would look on you, with no more worrying that the item delivered will differ from what the listing showed.
Online experience: https://huggingface.co/spaces/rizavelioglu/tryoffdiff
Project address: https://rizavelioglu.github.io/tryoffdiff/
The emergence of TryOffDiff offers a new way to close the gap between the "seller's show" and the "buyer's show" in online shopping. The technology promises to improve the online shopping experience and bring greater convenience to both consumers and e-commerce platforms. Looking ahead, we can expect ever more polished virtual fitting experiences.