Tensor Atelier is a clean, modular PyTorch training framework for developers who want powerful ML capabilities without the complexity. It provides automatic optimization, built-in profiling, and a flexible architecture that grows with your needs.
- 🔄 Automatic Optimization - Handles gradients, backprop, and optimization steps automatically
- 📊 Built-in Profiling - Monitor training performance with custom profiler support
- ⚡ Multi-Accelerator - CPU, GPU, and custom accelerator support
- 🧩 Modular Design - Clean separation of concerns with extensible components
- 📦 Smart DataLoader - Automatic device placement and train/validation splitting
## Quick Start

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

from tensoratelier import AtelierModule, AtelierTrainer

# Define your model
class MyModel(AtelierModule):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(10, 1)
        self.loss_fn = nn.MSELoss()

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self.linear(x)
        return self.loss_fn(y_hat, y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=0.001)

# Create data and trainer
x, y = torch.randn(1000, 10), torch.randn(1000, 1)
dataloader = DataLoader(TensorDataset(x, y), batch_size=32)
trainer = AtelierTrainer(max_epochs=10, accelerator="cpu")
trainer.fit(MyModel(), dataloader)
```
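Under the hood, `trainer.fit` takes care of zeroing gradients, calling `backward()`, and stepping the optimizer for you. Conceptually it behaves like the sketch below (a simplified illustration with a hypothetical `naive_fit` helper, not the framework's actual implementation):

```python
# Rough sketch of what automatic optimization does each epoch.
# The real AtelierTrainer also handles device placement and profiling.
def naive_fit(model, dataloader, max_epochs):
    optimizer = model.configure_optimizers()
    for epoch in range(max_epochs):
        for batch_idx, batch in enumerate(dataloader):
            loss = model.training_step(batch, batch_idx)  # your hook
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```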
## AtelierModule - Your model base class

```python
class MyModel(AtelierModule):
    def training_step(self, batch, batch_idx):
        # Define your training logic and return the loss
        return loss

    def configure_optimizers(self):
        # Return your optimizer
        return torch.optim.Adam(self.parameters())
```
## AtelierTrainer - Training orchestrator

```python
trainer = AtelierTrainer(
    max_epochs=10,
    accelerator="cpu",              # or "cuda", "mps"
    profiler=AtelierBaseProfiler()  # optional
)
```
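Since the accelerator is just a string, you can pick it at runtime based on the hardware available (a small sketch using only the accelerator strings listed above):

```python
import torch

# Prefer CUDA when present, otherwise fall back to CPU.
accelerator = "cuda" if torch.cuda.is_available() else "cpu"
trainer = AtelierTrainer(max_epochs=10, accelerator=accelerator)
```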
## AtelierDataLoader - Enhanced data loading

```python
# Automatic device placement and train/val splitting
dataloader = AtelierDataLoader(
    original_dataloader,
    trainer,
    lengths=[0.8, 0.2],  # 80% train, 20% val
    device="cpu"
)
```
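Putting the pieces together, you wrap an ordinary `DataLoader` and hand the result to the trainer (a sketch that assumes `trainer.fit` accepts the wrapped loader directly):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from tensoratelier import AtelierDataLoader

raw = DataLoader(TensorDataset(torch.randn(1000, 10), torch.randn(1000, 1)), batch_size=32)
loader = AtelierDataLoader(raw, trainer, lengths=[0.8, 0.2], device="cpu")
trainer.fit(MyModel(), loader)  # the 80/20 train/val split is handled for you
```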
## Custom Profilers

Create custom profilers to monitor your training:

```python
import time

from tensoratelier.profilers import AtelierBaseProfiler

class TimeProfiler(AtelierBaseProfiler):
    def start(self, desc, **kwargs):
        self.active_profiles[desc] = time.perf_counter()

    def stop(self, desc, context, **kwargs):
        elapsed = time.perf_counter() - self.active_profiles[desc]
        print(f"{desc}: {elapsed:.4f}s")

# Use it
trainer = AtelierTrainer(
    max_epochs=10,
    accelerator="cpu",
    profiler=TimeProfiler()
)
```
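The trainer is expected to call `start` and `stop` around the phases it measures, so you can also exercise a profiler by hand when testing it (the `desc` string and `context` value below are illustrative assumptions, not the trainer's actual arguments):

```python
import time

profiler = TimeProfiler()

# Drive the start/stop contract manually.
profiler.start("training_step")
time.sleep(0.1)  # stand-in for real work
profiler.stop("training_step", context=None)  # prints something like "training_step: 0.1001s"
```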
## Installation

```bash
# From source (recommended for development)
git clone https://github.com/tensor-atelier/tensor-atelier.git
cd tensor-atelier
pip install -e .
```

## Examples

- Basic Training - Simple linear model training
- Custom Profiler - Implementing a custom profiler
## Design Principles

- Simplicity - Minimal boilerplate for common tasks
- Modularity - Pluggable components for extensibility
- Performance - Efficient training loops with profiling
- Type Safety - Full type hints for better development experience
## Roadmap

- Model checkpointing
- Learning rate scheduling
- Multi-GPU support
- Distributed training
- CLI interface
## Contributing

We welcome contributions! Please see our Contributing Guidelines or:
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request