Virtual Model Alternation

Introduction

Virtual Model Alternation automatically retains the clothing area in a model image and intelligently matches the model type to generate diverse models from different countries, showcasing how the product really looks when worn by different models. This enables localized content for cross-border markets, accurately capturing user preferences while reducing photography costs.

Use Cases

  • Content localization: Localize product images when selling in multiple regions, improving the product's appeal.

  • Image optimization: Photograph, process, and optimize images of fashion products sold by e-commerce sellers.

[Example images: original image (input) and result images 1–4 (output)]

Key Features

  • It can generate models of various age groups and genders from over 40 different countries, meeting the needs of diverse global markets.

  • It supports retaining or changing background content, and can generate diverse background styles to enhance display effects.

  • It supports the training and generation of exclusive brand models.

Pricing

To use the API, you are required to choose and purchase an API resource pack from us on a subscription basis.

  • Each resource pack is valid for one calendar year after purchase and allows you to call the API up to the number of requests specified in the pack. No refunds are provided.

  • If you need to purchase more QPS due to business requirements, please contact us via the navigation bar or by email (aidge_support@service.alibaba.com).

  • Resource packs cannot be used across different products. For example, if you need to use both product text translation and image translation, you must purchase separate resource packs for each.

The prices are as follows:

| Capacity | Price (USD) | Unit Price (USD) | Maximum QPS |
| --- | --- | --- | --- |
| 100 images | 25 | $0.25 /image | 1 |
| 1,000 images | 250 | $0.25 /image | 1 |
| 10,000 images | 2,500 | $0.25 /image | 1 |

Interface

It supports both API and editor integration methods.

  • It is recommended to use the editor integration method for a better experience and generation effect. The editor integration capabilities will be launched soon. Please contact us via the navigation bar or by email (aidge_support@service.alibaba.com) if you have any needs before the launch.

| Interface | Supported features | Competitive advantages | Limitations | Use cases |
| --- | --- | --- | --- | --- |
| API interface | 1) Automatically segments and retains the clothing area of the model in the input image, redrawing the remaining areas of the model. 2) Identifies the gender and age of the model in the input image and automatically generates new model images. | 1) Simpler and more convenient than the editor. | 1) The area to be retained cannot be adjusted, so the effect is limited. | Suitable for users with fewer customization requirements and a large volume of images to process; also for those who have high demands for automated processing and ease of integration. |
| Editor interface | 1) Provides a user-facing control page. 2) The area to be retained can be adjusted. 3) Users can select models, backgrounds, and other parameters based on their needs to generate new model images. | 1) You can edit the defined region and select more functional parameters, such as hair color. 2) Supports WYSIWYG (What You See Is What You Get) interactive generation. | 1) Higher development cost than the API interface. 2) Cannot process images in bulk. | Suitable for users who require a high degree of customization or have relatively few images to process; also suitable for systems with corresponding user interfaces and interactions. |

Quick Start

Description of the API Calling Process

Step 1: Call the Virtual Model Alternation Submit API to process the product image based on the given model requirements. This initiates a generation task and returns a task ID.

Step 2: Call the Virtual Model Alternation Query API with the task ID to obtain the corresponding generated result.

For related request and response samples, please refer to the content of each API reference.
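Below is a minimal Python sketch of this submit-and-poll flow. The endpoint paths, request and response field names, status values, and the authentication header are illustrative assumptions only; replace them with the actual values from the API reference.

```python
import time
import requests

# NOTE: BASE_URL, the header, and all field names below are placeholders,
# not the official API definition; consult the API reference for real values.
BASE_URL = "https://api.example.com/virtual-model-alternation"
HEADERS = {"Authorization": "Bearer <your-api-key>"}


def submit_task(image_url: str) -> str:
    """Step 1: submit a generation task and return its task ID."""
    resp = requests.post(
        f"{BASE_URL}/submit",
        headers=HEADERS,
        json={"imageUrl": image_url},  # hypothetical request field
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["taskId"]  # hypothetical response field


def query_result(task_id: str, interval: float = 3.0, retries: int = 20) -> dict:
    """Step 2: poll the query endpoint until the task finishes or fails."""
    for _ in range(retries):
        resp = requests.get(
            f"{BASE_URL}/query",
            headers=HEADERS,
            params={"taskId": task_id},
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()
        if data.get("status") in ("finished", "failed"):  # hypothetical statuses
            return data
        time.sleep(interval)
    raise TimeoutError(f"Task {task_id} did not finish in time")


if __name__ == "__main__":
    task_id = submit_task("https://example.com/product-image.jpg")
    print(query_result(task_id))
```

Because generation is asynchronous, the query step should poll at a modest interval rather than immediately after submission; the interval and retry count above are arbitrary and should be tuned to your expected task duration.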

FAQ

  1. Why do the shoes, accessories, and other items in the image change?

A: Currently, the API only supports automatic segmentation and retention for the "clothing" category and does not yet support segmentation for other specified product categories. Items that are not recognized will be redrawn during the generation process, leading to changes in content. It is recommended to use the editor and manually adjust the areas of the image that need to be retained.

  2. Why do some generated models appear strange, such as having deformed fingers?

A: Virtual Model Alternation is an AI generation capability, which entails a certain degree of randomness and uncontrollability. The generated results are influenced by the input image. For example, if the model in the input image has an unusual pose or significant facial obstructions, this will greatly affect the generated outcome. It is recommended to generate multiple images at once in such cases to improve the quality rate of the results.

  3. Why have some models not been alternated, or only been partially alternated?

A: Virtual Model Alternation requires first segmenting the clothing in the image, and the segmentation algorithm may fail to accurately identify some areas, such as those with minimal exposed skin or where skin and clothing are mixed together. This may result in those parts not being generated. It is recommended to use the editor and manually adjust the areas of the image that need to be generated.

  4. Will the generated models be very similar?

A: No. All models are randomly generated by AI, so there will be some variation. However, the generated model is greatly influenced by the appearance of the model in the original image. If the input images feature a model with a similar angle and the same appearance, the similarity of the generated models will be relatively high.
