An online AI image search engine based on the CLIP model and the Qdrant vector database. Supports keyword search and similar-image search.
Chinese documentation (中文文档)
The above screenshots may contain copyrighted images from different artists; please do not use them for other purposes.
In most cases, we recommend using the Qdrant database to store metadata. The Qdrant database provides efficient retrieval performance, flexible scalability, and better data security.
Please deploy the Qdrant database according to the Qdrant documentation. It is recommended to use Docker for deployment.
If you don't want to deploy Qdrant yourself, you can use the online service provided by Qdrant.
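If you deploy Qdrant yourself, a minimal Docker invocation is sketched below, assuming the default REST/gRPC ports and a local storage directory; see the Qdrant documentation for authoritative options:

```shell
# Run Qdrant in the background, persisting data to ./qdrant_storage on the host.
docker run -d --name qdrant \
  -p 6333:6333 -p 6334:6334 \
  -v "$(pwd)/qdrant_storage:/qdrant/storage" \
  qdrant/qdrant
```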
Local file storage directly stores image metadata (including feature vectors, etc.) in a local SQLite database. It is only recommended for small-scale or development deployments.
Local file storage does not require an additional database deployment process, but has the following disadvantages:
- The time complexity of search is O(n). Therefore, if the data scale is large, the performance of search and indexing will decrease.

To deploy NekoImageGallery locally, clone the project to your PC or server and check out a specific version tag (like v1.0.0).

It's recommended to install the project's dependencies in a Python virtual environment:

```shell
python -m venv .venv
. .venv/bin/activate
```
Install PyTorch according to the official PyTorch documentation. If you want to use CUDA acceleration for inference, be sure to install a CUDA-supported PyTorch version in this step. After installation, you can use `torch.cuda.is_available()` to confirm whether CUDA is available.
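For example, a quick check from the shell:

```shell
# Prints True if the installed PyTorch build can see a usable CUDA device.
python -c "import torch; print(torch.cuda.is_available())"
```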
Then install the remaining dependencies:

```shell
pip install -r requirements.txt
```
The configuration files are located in `config/`. You can edit `default.env` directly, but it's recommended to create a new file named `local.env` and override the configuration in `default.env`.
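One possible workflow is to start `local.env` from a copy of the defaults and keep only the entries you want to change (settings in `local.env` override those in `default.env`):

```shell
# Copy the defaults, then edit local.env and delete everything you don't override.
cp config/default.env config/local.env
```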
Run the application:

```shell
python main.py
```
You can use `--host` to specify the IP address you want to bind to (default is 0.0.0.0) and `--port` to specify the port you want to bind to (default is 8000). Run `python main.py --help` to see all available commands and options.
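For example, to bind only to localhost on a different port:

```shell
python main.py --host 127.0.0.1 --port 8080
```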
NekoImageGallery's Docker images are built and released on Docker Hub, including several variants:
| Tags | Description | Latest Image Size |
|------|-------------|-------------------|
| `edgeneko/neko-image-gallery:<version>`<br>`edgeneko/neko-image-gallery:<version>-cuda`<br>`edgeneko/neko-image-gallery:<version>-cuda12.1` | Supports GPU inferencing with CUDA12.1 | |
| `edgeneko/neko-image-gallery:<version>-cuda11.8` | Supports GPU inferencing with CUDA11.8 | |
| `edgeneko/neko-image-gallery:<version>-cpu` | Only supports CPU inferencing | |
Where `<version>` is the version number or version alias of NekoImageGallery, as follows:

| Version | Description |
|---------|-------------|
| `latest` | The latest stable version of NekoImageGallery |
| `v*.*.*` / `v*.*` | A specific version number (corresponds to Git tags) |
| `edge` | The latest development version of NekoImageGallery; may contain unstable features and breaking changes |
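For example, a version alias can be combined with a variant suffix from the tables above (assuming the corresponding tags are published on Docker Hub):

```shell
# Latest stable image with CUDA 12.1 support
docker pull edgeneko/neko-image-gallery:latest-cuda
# Latest stable CPU-only image
docker pull edgeneko/neko-image-gallery:latest-cpu
```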
In each image, we have bundled the necessary dependencies, `openai/clip-vit-large-patch14` model weights, `bert-base-chinese` model weights and `easy-paddle-ocr` models to provide a complete and ready-to-use image.
The images use `/opt/NekoImageGallery/static` as a volume to store image files; mount it to your own volume or directory if local storage is required.
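As a minimal sketch of such a mount with plain `docker run` (assuming the default port 8000 and the CPU-only image; the host path is a placeholder), it would look like this. The Docker Compose deployment described below is the usual route and handles this in `docker-compose.yml`.

```shell
# Bind-mount a host directory so locally stored images persist outside the container.
docker run -d --name neko-image-gallery \
  -p 8000:8000 \
  -v /path/on/host/images:/opt/NekoImageGallery/static \
  edgeneko/neko-image-gallery:latest-cpu
```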
For configuration, we suggest using environment variables to override the default configuration. Secrets (like API tokens) can be provided by docker secrets.
CUDA users only: if you want to use CUDA acceleration, you need to install `nvidia-container-runtime` on your system. Please refer to the official documentation for installation.
Related documents:
- https://docs.docker.com/config/containers/resource_constraints/#gpu
- https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker
- https://nvidia.github.io/nvidia-container-runtime/
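To confirm that Docker can access the GPU after installing the runtime, a quick check is the standard NVIDIA test container (the CUDA image tag here is only an example):

```shell
# Should print the same nvidia-smi output as on the host.
docker run --rm --gpus all nvidia/cuda:12.1.0-base-ubuntu22.04 nvidia-smi
```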
Download the `docker-compose.yml` file from the repository:
```shell
# For CUDA deployment (default)
wget https://raw.githubusercontent.com/hv0905/NekoImageGallery/master/docker-compose.yml
# For CPU-only deployment
wget https://raw.githubusercontent.com/hv0905/NekoImageGallery/master/docker-compose-cpu.yml && mv docker-compose-cpu.yml docker-compose.yml
```

Then start the containers:

```shell
# Start in foreground
docker compose up
# Start in background (detached mode)
docker compose up -d
```
There are several ways to upload images to NekoImageGallery. To index a local directory, run:

```shell
python main.py local-index <path-to-your-image-directory>
```

See `python main.py local-index --help` for more information.

The API documentation is provided by FastAPI's built-in Swagger UI. You can access it by visiting the `/docs` or `/redoc` path of the server (for example, `http://localhost:8000/docs` with the default configuration).
These projects work with NekoImageGallery :D
There are many ways to contribute to the project: logging bugs, submitting pull requests, reporting issues, and creating suggestions.
Even if you have push access to the repository, you should create personal feature branches when you need them. This keeps the main repository clean and your workflow cruft out of sight.
We're also interested in your feedback on the future of this project. You can submit a suggestion or feature request through the issue tracker. To make this process more effective, we're asking that these include more information to help define them more clearly.
Copyright 2023 EdgeNeko
Licensed under AGPLv3 license.