PyTorch Lightning 2.5.1.post0

Common Workflows

Customize and extend Lightning for things like custom hardware or distributed strategies.

Avoid overfitting

Add a training and test loop.

Build a model

Steps to build a model.

Configure hyperparameters from the CLI

Enable basic CLI with Lightning.

Customize the progress bar

Change the progress bar behavior.

Deploy models into production

Deploy models with different levels of scale.

Effective Training Techniques

Explore advanced training techniques.

Eliminate config boilerplate

Control your training via CLI and YAML.

Find bottlenecks in your code

Learn to find bottlenecks in your code.

Finetune a model

Learn to use pretrained models.

Manage Experiments

Learn to track and visualize experiments.

Run on a multi-node cluster

Learn to run multi-node in the cloud or on your cluster.

Save and load model progress

Save and load progress with checkpoints.

Save memory with half-precision

Enable half-precision to train faster and save memory.

Train models with billions of parameters

Scale GPU training to models with billions of parameters.

Train in a notebook

Train models in interactive notebooks (Jupyter, Colab, Kaggle, etc.).

Train on single or multiple GPUs

Train models faster with GPUs.

Train on single or multiple HPUs

Train models faster with HPUs.

Train on single or multiple TPUs

Train models faster with TPUs.

Track and Visualize Experiments

Learn to track and visualize experiments.

Use a pure PyTorch training loop

Run your pure PyTorch loop with Lightning.

