```shell
pip install SwanAPI -i https://pypi.org/simple
```
1️⃣ Write a `predict.py` file. Here we use the example of converting an image to black and white.

If you have written Gradio before, this style will feel familiar: it is very similar to defining a `gr.Interface`.
```python
from SwanAPI import SwanInference
import cv2

# A simple task: convert an image to black and white
def predict(im):
    result_image = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    return "success", result_image

if __name__ == "__main__":
    api = SwanInference()
    api.inference(predict,
                  inputs=['image'],
                  outputs=['text', 'image'],
                  description="a simple test")
    api.launch()
```
2⃣️ Run `python predict.py` to start an API inference service at http://127.0.0.1:8000/:
```console
$ python predict.py
 * Serving Flask app "SwanAPI Server" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://0.0.0.0:8000/ (Press CTRL+C to quit)
```
3⃣️ Call the API

```python
from SwanAPI import SwanRequests, Files

response = SwanRequests(
    url="http://127.0.0.1:8000/predictions/",
    inputs={'im': Files("/path/to/image")})  # local path to the image file

print(response)
```
If you use `curl` to send the request:

```shell
curl --location 'http://127.0.0.1:8000/predictions/' \
--form 'im=@"path/to/image"'
```
When `outputs` is set to `'image'`, a base64-encoded byte stream is returned; in Python it can be decoded back into an `np.ndarray`:
```python
from SwanAPI import SwanRequests, Files
import base64
import numpy as np
import cv2

response = SwanRequests(
    url="http://127.0.0.1:8000/predictions/",
    inputs={'im': Files("../Feedback/assets/FeedBack.png")})  # local path to the image file

image_base64 = response[str(1)]['content']
nparr = np.frombuffer(base64.b64decode(image_base64), np.uint8)
img_restore = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
cv2.imwrite("output.jpg", img_restore)
```
After `predict.py` is complete:

1⃣️ Create a `swan.yaml` file, which will guide your image build:
```yaml
build:
  gpu: false
  system_packages:
    - "libgl1-mesa-glx"
    - "libglib2.0-0"
  python_version: "3.10"
  python_packages:
    - "numpy"
    - "opencv-python"
predict:
  port: 8000
```
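Since `swan.yaml` is plain YAML, a quick way to catch indentation mistakes before building is to load it with PyYAML. A sketch, assuming `pyyaml` is installed:

```python
import yaml

# The same configuration as the swan.yaml above, inlined for illustration
config_text = """
build:
  gpu: false
  system_packages:
    - "libgl1-mesa-glx"
    - "libglib2.0-0"
  python_version: "3.10"
  python_packages:
    - "numpy"
    - "opencv-python"
predict:
  port: 8000
"""

config = yaml.safe_load(config_text)
print(config["build"]["python_version"])  # 3.10
print(config["predict"]["port"])          # 8000
```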
build:
- `gpu`: Whether to enable GPU mode. `true` automatically selects the best NVIDIA support based on your hardware configuration, `python_version`, and deep learning framework.
- `system_packages`: Basic Linux system libraries; they will be installed with `apt-get install`.
- `python_version`: The Python version the image runs on; 3.8, 3.9, and 3.10 are supported.
- `python_packages`: The Python libraries your model depends on. An entry can carry its own index URL, e.g. `"torch==2.0.0 --index-url https://download.pytorch.org/whl/cpu"`.
- `python_source`: The download source for Python libraries, either `cn` or `us`; the default is `us`. Selecting `cn` uses the Tsinghua mirror.

predict:
- `port`: The port the inference service listens on.

2⃣️ Build the image
```shell
swan build -t my-dl-image
```
Optional parameters for `swan build`:
- `-t`: required. The name of the image to build, e.g. `my-dl-image`.
- `-r`: optional. If set, the container is run after the image is built, with port mapping: `swan build -r -t my-dl-image`
- `-s`: optional. If set, the Dockerfile is saved in the directory after the image is built.

3⃣️ Run the container
```shell
docker run my-dl-image
```

If running on a GPU:

```shell
docker run --gpus all my-dl-image
```
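Note that the service listens on port 8000 inside the container; to reach it from the host, publish the port with Docker's standard `-p` flag (not SwanAPI-specific):

```shell
# Map container port 8000 (the predict.port in swan.yaml) to host port 8000
docker run -p 8000:8000 my-dl-image

# The same, with GPU support
docker run --gpus all -p 8000:8000 my-dl-image
```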