The deep learning framework to pretrain, finetune and deploy AI models.
NEW: Deploying models? Check out LitServe, the PyTorch Lightning for model serving.
Quick start • Examples • PyTorch Lightning • Fabric • Lightning AI • Community • Docs
PyTorch Lightning: Train and deploy PyTorch at scale.
Lightning Fabric: Expert control.
Lightning gives you granular control over how much abstraction you want to add over PyTorch.
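For example, Fabric leaves the training loop in plain PyTorch and only takes over device placement, precision, and (optionally) distribution. A minimal sketch, where the model, optimizer, and data are toy placeholders rather than part of the official quick start:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as data
import lightning as L

# Plain PyTorch model, optimizer, and data (placeholders for your own).
model = nn.Linear(32, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
dataset = data.TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,)))
dataloader = data.DataLoader(dataset, batch_size=8)

# Fabric handles accelerator selection, precision, and device placement.
fabric = L.Fabric(accelerator="auto", devices=1)
fabric.launch()
model, optimizer = fabric.setup(model, optimizer)
dataloader = fabric.setup_dataloaders(dataloader)

# The training loop stays yours; only backward() goes through Fabric.
model.train()
for x, y in dataloader:
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    fabric.backward(loss)
    optimizer.step()
```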
Install Lightning:
pip install lightning
pip install 'lightning[extra]'
conda install lightning -c conda-forge
Install the future release from source:
pip install https://github.com/Lightning-AI/lightning/archive/refs/heads/release/stable.zip -U
Install the nightly build from source (no guarantees):
pip install https://github.com/Lightning-AI/lightning/archive/refs/heads/master.zip -U
or from Test PyPI:
pip install -U -i https://test.pypi.org/simple/ pytorch-lightning
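Whichever route you take, you can confirm which build you ended up with by printing the package version (a quick sanity check, not an official install step):

python -c "import lightning; print(lightning.__version__)"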
Define the training workflow. Here's a toy example (explore real examples):
```python
# main.py
# ! pip install torchvision
import torch, torch.nn as nn, torch.utils.data as data, torchvision as tv, torch.nn.functional as F
import lightning as L

# --------------------------------
# Step 1: Define a LightningModule
# --------------------------------
# A LightningModule (nn.Module subclass) defines a full *system*
# (ie: an LLM, diffusion model, autoencoder, or simple image classifier).


class LitAutoEncoder(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 3))
        self.decoder = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 28 * 28))

    def forward(self, x):
        # in lightning, forward defines the prediction/inference actions
        embedding = self.encoder(x)
        return embedding

    def training_step(self, batch, batch_idx):
        # training_step defines the train loop. It is independent of forward
        x, _ = batch
        x = x.view(x.size(0), -1)
        z = self.encoder(x)
        x_hat = self.decoder(z)
        loss = F.mse_loss(x_hat, x)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        return optimizer


# -------------------
# Step 2: Define data
# -------------------
dataset = tv.datasets.MNIST(".", download=True, transform=tv.transforms.ToTensor())
train, val = data.random_split(dataset, [55000, 5000])

# -------------------
# Step 3: Train
# -------------------
autoencoder = LitAutoEncoder()
trainer = L.Trainer()
trainer.fit(autoencoder, data.DataLoader(train), data.DataLoader(val))
```
Run the model in your terminal:
pip install torchvision
python main.py
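After trainer.fit finishes, the Trainer writes checkpoints under lightning_logs/ by default, and the trained encoder can be reused on its own for inference. A minimal sketch, assuming the LitAutoEncoder class above is in scope and substituting the checkpoint path produced by your own run:

```python
import torch

# Path is an example; use the checkpoint written by your own run.
checkpoint = "./lightning_logs/version_0/checkpoints/epoch=0-step=100.ckpt"
autoencoder = LitAutoEncoder.load_from_checkpoint(checkpoint)

# Pull out the trained encoder and embed a batch of fake images.
encoder = autoencoder.encoder
encoder.eval()
with torch.no_grad():
    fake_image_batch = torch.rand(4, 28 * 28, device=autoencoder.device)
    embeddings = encoder(fake_image_batch)
print("Embeddings shape:", embeddings.shape)  # torch.Size([4, 3])
```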
PyTorch Lightning is just organized PyTorch: Lightning disentangles PyTorch code to decouple the science from the engineering.
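For instance, the engineering concerns (hardware, distribution, precision, training length) are arguments to the Trainer, so the LitAutoEncoder above never changes when you scale up. The values below are illustrative, not a recommended configuration:

```python
import lightning as L

# Same LightningModule, different engineering: all handled by Trainer flags.
trainer = L.Trainer(
    max_epochs=10,          # training length
    accelerator="gpu",      # "cpu", "gpu", "tpu", or "auto"
    devices=4,              # number of devices on this machine
    strategy="ddp",         # distributed data-parallel training
    precision="16-mixed",   # mixed-precision training
)
# trainer.fit(autoencoder, data.DataLoader(train), data.DataLoader(val))
```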
The Lightning community is maintained by
10+ core contributors who are a mix of professional engineers, research scientists, and PhD students from top AI labs.
800+ community contributors.
Want to help us build Lightning and reduce boilerplate for thousands of researchers? Learn how to make your first contribution here.
Lightning is also part of the PyTorch ecosystem, which requires projects to have solid testing, documentation, and support.
If you have any questions please:
Read the docs.
Search through existing Discussions, or add a new question.
Join our Discord.
Continuous integration: tested on macOS (multiple Python versions) and Windows (multiple Python versions).