A free and open-source inpainting & outpainting tool powered by SOTA AI models.
| Erase(LaMa) | Replace Object(PowerPaint) |
| --- | --- |
| IOPaint-erase-markdown.mp4 | iopaint-inpaint-markdown.mp4 |

| Draw Text(AnyText) | Out-painting(PowerPaint) |
| --- | --- |
| AnyText-markdown.mp4 | outpainting.mp4 |
- Completely free and open-source, fully self-hosted, supports CPU, GPU & Apple Silicon
- Windows 1-Click Installer
- OptiClean: macOS & iOS App for object erase
Supports various AI models to perform erase, inpainting, or outpainting tasks:

- Erase models: These models can be used to remove unwanted objects, defects, watermarks, or people from an image.
- Diffusion models: These models can be used to replace objects or perform outpainting. Some popular models include:
  - runwayml/stable-diffusion-inpainting
  - diffusers/stable-diffusion-xl-1.0-inpainting-0.1
  - andregn/Realistic_Vision_V3.0-inpainting
  - Lykon/dreamshaper-8-inpainting
  - Sanster/anything-4.0-inpainting
  - BrushNet
  - PowerPaintV2
  - Sanster/AnyText
  - Fantasy-Studio/Paint-by-Example
Plugins:

- Segment Anything: accurate and fast interactive object segmentation
- RemoveBG: remove the image background or generate masks for foreground objects
- Anime Segmentation: similar to RemoveBG, but specifically trained on anime images
- RealESRGAN: super resolution
- GFPGAN: face restoration
- RestoreFormer: face restoration
- FileManager: browse your pictures conveniently and save them directly to the output directory
IOPaint provides a convenient webui for using the latest AI models to edit your images. You can install and start IOPaint easily by running the following command:
```bash
# In order to use GPU, install the cuda version of pytorch first.
# pip3 install torch==2.1.2 torchvision==0.16.2 --index-url https://download.pytorch.org/whl/cu118

# AMD GPU users, please use the following command. It only works on Linux, as pytorch is not yet supported on Windows with ROCm.
# pip3 install torch==2.1.2 torchvision==0.16.2 --index-url https://download.pytorch.org/whl/rocm5.6

pip3 install iopaint
iopaint start --model=lama --device=cpu --port=8080
```
That's it! You can start using IOPaint by visiting http://localhost:8080 in your web browser.
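If you installed the CUDA build of PyTorch, or you are on Apple Silicon, the same command can target a different device. The `cuda` and `mps` values below are the usual choices for `--device`, but check `iopaint start --help` to confirm what your version accepts:

```bash
# Run the lama model on an NVIDIA GPU
iopaint start --model=lama --device=cuda --port=8080
# Or on Apple Silicon
# iopaint start --model=lama --device=mps --port=8080
```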
All models will be downloaded automatically at startup. If you want to change the download directory, you can add `--model-dir`. More documentation can be found here.
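For example, using a hypothetical path for the download directory:

```bash
# Download and cache model weights under a custom directory
iopaint start --model=lama --device=cpu --port=8080 --model-dir=/path/to/models
```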
You can see other supported models here, and how to use a local sd ckpt/safetensors file here.
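For example, a diffusion model from the list above can be selected by passing its ID to `--model`; the device and port values here are illustrative:

```bash
# Use a Stable Diffusion inpainting model for object replacement or outpainting
iopaint start --model=runwayml/stable-diffusion-inpainting --device=cuda --port=8080
```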
You can specify which plugins to use when starting the service, and you can view the commands for enabling plugins by running `iopaint start --help`.
More demonstrations of the plugins can be seen here.
```bash
iopaint start --enable-interactive-seg --interactive-seg-device=cuda
```
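Multiple plugins can be enabled in one command. The flags below follow the same `--enable-*` pattern as the example above, but treat the exact names as assumptions; run `iopaint start --help` for the flags your version supports:

```bash
# Enable interactive segmentation, background removal and super resolution together
iopaint start --model=lama --device=cpu \
  --enable-interactive-seg --interactive-seg-device=cuda \
  --enable-remove-bg \
  --enable-realesrgan
```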
You can also use IOPaint from the command line to batch process images:
```bash
iopaint run --model=lama --device=cpu --image=/path/to/image_folder --mask=/path/to/mask_folder --output=output_dir
```
`--image` is the folder containing input images, and `--mask` is the folder containing corresponding mask images. When `--mask` is a path to a single mask file, all images will be processed using that mask.
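For example, to apply one mask file to every image in a folder (the paths are placeholders):

```bash
# A single mask file is reused for all input images
iopaint run --model=lama --device=cpu \
  --image=/path/to/image_folder \
  --mask=/path/to/mask.png \
  --output=output_dir
```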
You can see more information about the available models and plugins supported by IOPaint below.
Install Node.js, then install the frontend dependencies.
```bash
git clone https://github.com/Sanster/IOPaint.git
cd IOPaint/web_app
npm install
npm run build
cp -r dist/ ../iopaint/web_app
```
Create a `.env.local` file in `web_app` and fill in the backend IP and port.
```
VITE_BACKEND=http://127.0.0.1:8080
```
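If you prefer the shell, the same file can be created with a one-liner (run it inside `web_app` and adjust the address to wherever your backend listens):

```bash
echo "VITE_BACKEND=http://127.0.0.1:8080" > .env.local
```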
Start the front-end development environment:

```bash
npm run dev
```
Install the back-end requirements and start the backend service:

```bash
pip install -r requirements.txt
python3 main.py start --model lama --port 8080
```
Then you can visit http://localhost:5173/ for development. The frontend code will automatically update after being modified, but the backend service needs to be restarted after modifying the Python code.