Tensor Atelier

A minimalist PyTorch training framework with automatic optimization and profiling

Python 3.8+ · PyTorch · License: Apache 2.0


🎯 What is Tensor Atelier?

Tensor Atelier is a clean, modular PyTorch training framework designed for developers who want powerful training workflows without the usual boilerplate. It provides automatic optimization, built-in profiling, and a flexible architecture that grows with your needs.

✨ Key Features

  • 🔄 Automatic Optimization - Handles gradient zeroing, backpropagation, and optimizer steps automatically
  • 📊 Built-in Profiling - Monitor training performance with custom profiler support
  • ⚡ Multi-Accelerator - CPU, GPU, and custom accelerator support
  • 🧩 Modular Design - Clean separation of concerns with extensible components
  • 📦 Smart DataLoader - Automatic device placement and train/validation splitting

🚀 Quick Start

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from tensoratelier import AtelierModule, AtelierTrainer

# Define your model
class MyModel(AtelierModule):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(10, 1)
        self.loss_fn = nn.MSELoss()

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self.linear(x)
        return self.loss_fn(y_hat, y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=0.001)

# Create data and trainer
x, y = torch.randn(1000, 10), torch.randn(1000, 1)
dataloader = DataLoader(TensorDataset(x, y), batch_size=32)

trainer = AtelierTrainer(max_epochs=10, accelerator="cpu")
trainer.fit(MyModel(), dataloader)
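Under the hood, trainer.fit automates roughly the standard PyTorch loop below, calling the hooks you defined on the module. This is an illustrative sketch of what "automatic optimization" replaces, not the framework's actual implementation:

# Roughly what trainer.fit(model, dataloader) does on your behalf (simplified)
model = MyModel()
optimizer = model.configure_optimizers()
for epoch in range(10):  # max_epochs
    for batch_idx, batch in enumerate(dataloader):
        optimizer.zero_grad()                         # reset accumulated gradients
        loss = model.training_step(batch, batch_idx)  # your hook computes the loss
        loss.backward()                               # backpropagate
        optimizer.step()                              # apply the parameter update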

🏗️ Architecture

Core Components

AtelierModule - Your model base class

class MyModel(AtelierModule):
    def training_step(self, batch, batch_idx):
        # Define your training logic
        return loss
    
    def configure_optimizers(self):
        # Return your optimizer
        return torch.optim.Adam(self.parameters())

AtelierTrainer - Training orchestrator

from tensoratelier.profilers import AtelierBaseProfiler

trainer = AtelierTrainer(
    max_epochs=10,
    accelerator="cpu",  # or "cuda", "mps"
    profiler=AtelierBaseProfiler()  # optional
)

AtelierDataLoader - Enhanced data loading

# Automatic device placement and train/val splitting
dataloader = AtelierDataLoader(
    original_dataloader,  # any existing torch.utils.data.DataLoader
    trainer,
    lengths=[0.8, 0.2],  # 80% train, 20% val
    device="cpu"
)
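For example, the plain DataLoader from the Quick Start can be wrapped before fitting. A minimal sketch, assuming AtelierDataLoader is importable from the top-level tensoratelier package (this README does not show its import path):

from tensoratelier import AtelierDataLoader  # assumed import path

# Wrap the Quick Start DataLoader to get an 80/20 train/val split on CPU
train_val_loader = AtelierDataLoader(
    dataloader,  # the DataLoader built in the Quick Start
    trainer,
    lengths=[0.8, 0.2],
    device="cpu",
)
trainer.fit(MyModel(), train_val_loader)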

Profiling System

Create custom profilers to monitor your training:

from tensoratelier.profilers import AtelierBaseProfiler
import time

class TimeProfiler(AtelierBaseProfiler):
    def start(self, desc, **kwargs):
        self.active_profiles[desc] = time.perf_counter()

    def stop(self, desc, context, **kwargs):
        # Pop the entry so finished profiles don't accumulate
        elapsed = time.perf_counter() - self.active_profiles.pop(desc)
        print(f"{desc}: {elapsed:.4f}s")

# Use it
trainer = AtelierTrainer(
    max_epochs=10, 
    accelerator="cpu", 
    profiler=TimeProfiler()
)
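Profilers can also aggregate measurements rather than printing each one. The sketch below builds only on the start/stop hooks shown above and assumes, as the TimeProfiler example implies, that the base class initializes active_profiles:

import time
from collections import defaultdict

from tensoratelier.profilers import AtelierBaseProfiler

class AggregatingProfiler(AtelierBaseProfiler):
    """Accumulates per-description timings and prints a summary on demand."""

    def __init__(self):
        super().__init__()
        self.totals = defaultdict(float)  # total seconds per description
        self.counts = defaultdict(int)    # number of measurements per description

    def start(self, desc, **kwargs):
        self.active_profiles[desc] = time.perf_counter()

    def stop(self, desc, context, **kwargs):
        elapsed = time.perf_counter() - self.active_profiles.pop(desc)
        self.totals[desc] += elapsed
        self.counts[desc] += 1

    def summary(self):
        for desc, total in self.totals.items():
            n = self.counts[desc]
            print(f"{desc}: {total:.4f}s total over {n} calls, {total / n:.4f}s avg")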

📦 Installation

# From source (recommended for development)
git clone https://github.com/mdelriolanse/tensor-atelier.git
cd tensor-atelier
pip install -e .

📚 Examples

🎨 Design Philosophy

  • Simplicity - Minimal boilerplate for common tasks
  • Modularity - Pluggable components for extensibility
  • Performance - Efficient training loops with profiling
  • Type Safety - Full type hints for better development experience

🔮 Roadmap

  • Model checkpointing
  • Learning rate scheduling
  • Multi-GPU support
  • Distributed training
  • CLI interface

🤝 Contributing

We welcome contributions! Please see our Contributing Guidelines or:

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests
  5. Submit a pull request

📄 License

Apache License 2.0 - see LICENSE for details.

Built with ❤️ for the PyTorch community
