This project is a unique demonstration of PyTorch's capabilities, created for the PyTorch Conference 2024. It combines computer vision and audio synthesis to generate melodic sounds from input images: a PyTorch neural network analyzes each image and extracts features, which are then used to create varied, electronic-style music. This cloud-native, open-source project showcases the power of machine learning in creative applications.
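To give a feel for the idea, here is a minimal sketch of an image-to-melody pipeline. It is illustrative only: the pretrained ResNet-18 backbone, the `image_to_notes` helper, and the feature-to-scale mapping are assumptions for demonstration, not the project's actual model or code.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Illustrative sketch: extract image features with a pretrained CNN,
# then map them onto a pentatonic scale. The real project's model and
# mapping may differ; all names here are hypothetical.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Identity()  # expose the 512-dim feature vector
model.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def image_to_notes(path: str, length: int = 16) -> list[int]:
    """Map CNN features to MIDI notes in a C-major pentatonic scale."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        feats = model(x).squeeze(0)   # shape: (512,)
    scale = [60, 62, 64, 67, 69]      # C, D, E, G, A
    idx = (feats[:length].abs() * 100).long() % len(scale)
    return [scale[i] for i in idx.tolist()]
```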
1. Clone the repository:

   ```bash
   git clone https://github.com/onlydole/pytorch-keynote-2024.git
   cd pytorch-keynote-2024
   ```

2. Build and run the Docker container:

   ```bash
   docker compose up --build
   ```

3. Open your web browser and navigate to http://localhost:8080 (or run the smoke test below).
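If you would rather script a quick check than open a browser, a minimal smoke test in Python looks like this. It assumes the web UI is served at the root path:

```python
import urllib.request

# Minimal smoke test: confirm the app answers on localhost:8080.
# Assumes the web UI is served at the root path.
with urllib.request.urlopen("http://localhost:8080", timeout=10) as resp:
    print(resp.status)  # expect 200 once the container is ready
```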
To deploy on Kubernetes instead of Docker Compose:

1. If you don't have a Kubernetes cluster, you can use Kind to create one locally:

   ```bash
   kind create cluster --config cluster.yml
   ```

2. Apply the Kubernetes configurations:

   ```bash
   kubectl apply -f kubernetes/
   ```

3. Access the application. For Kind, use port forwarding to reach the service:

   ```bash
   kubectl port-forward service/pytorch-music-service 8080:8080
   ```

4. Open your web browser and navigate to http://localhost:8080 (a scripted check follows below).
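To verify the rollout from a script rather than by reading raw kubectl output, the official Kubernetes Python client can list pod phases. This is a sketch: it assumes the `kubernetes` package is installed and that the manifests deploy into the `default` namespace.

```python
from kubernetes import client, config

# Print each pod's phase to verify the deployment came up.
# Assumes the manifests under kubernetes/ target the default namespace.
config.load_kube_config()  # uses your current kubectl context (e.g., the Kind cluster)
for pod in client.CoreV1Api().list_namespaced_pod("default").items:
    print(f"{pod.metadata.name}: {pod.status.phase}")
```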
- `startup.sh`: Script to start the application
- `shutdown.sh`: Script to shut down the application

We welcome contributions! Please feel free to submit a Pull Request.
This project uses GitHub Actions for building and publishing the container image. You can view the latest run status using the badges at the top of this README.
This project is licensed under the Apache License 2.0. See the LICENSE file for details.