Mapping high-dimensional biology & chemistry into intuitive, navigable spaces with Generative Topographic Mapping (GTM).
ChemographyKit is a comprehensive Python library for exploring chemical space and other high-dimensional data using Generative Topographic Mapping (GTM). GTM is a probabilistic dimensionality reduction technique that creates non-linear mappings from a high-dimensional data space to an interpretable low-dimensional latent space via a generative model built on radial basis functions. The library provides a PyTorch-based implementation of the GTM algorithm that runs on the GPU and includes functions for building landscapes and computing GTM-specific metrics.
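At its core, GTM places a regular grid of nodes in a 2D latent space and maps each node into data space through a set of Gaussian radial basis functions, y(x) = W·φ(x). The sketch below illustrates that generative mapping in plain NumPy; the variable names and grid sizes are ours for illustration, not ChemographyKit's API:

```python
import numpy as np

rng = np.random.default_rng(0)

# 2D latent grid of nodes and a coarser grid of RBF centers
grid = np.stack(
    np.meshgrid(np.linspace(-1, 1, 10), np.linspace(-1, 1, 10)), -1
).reshape(-1, 2)      # (100, 2) latent nodes (10x10 grid)
centers = np.stack(
    np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5)), -1
).reshape(-1, 2)      # (25, 2) RBF centers (5x5 grid)
width = 0.3

# Gaussian RBF design matrix: phi[i, j] = exp(-||node_i - center_j||^2 / (2 w^2))
d2 = ((grid[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
phi = np.exp(-d2 / (2 * width**2))        # (100, 25)

# The generative mapping y = phi @ W sends every latent node into data space;
# in GTM the weights W are learned by an EM algorithm, here they are random.
W = rng.standard_normal((25, 50))
Y = phi @ W                                # (100, 50) node images in data space
print(Y.shape)
```

Training then amounts to fitting W (and a noise variance) so that Gaussians centered on the node images Y explain the observed data.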
> **Tip:** Pair GTM maps with interactive notebooks or dashboards to let users zoom from global chemical space down to neighborhood-level structure–activity patterns.
- **GPU-Accelerated**: PyTorch-based implementation with CUDA support for fast computation
- **Multiple GTM Variants**:
  - `VanillaGTM`: basic GTM implementation with random initialization
  - `GTM`: enhanced version with PCA-based initialization for better convergence
- **Comprehensive Visualization**:
  - Interactive landscapes using Plotly (smooth heatmaps)
  - Static visualizations using Altair (discrete grid-based plots)
  - Support for density, classification, and regression landscapes
- **Advanced Analytics**:
  - Responsibility Patterns (RP) for chemical space coverage analysis
  - Classification and regression landscape analysis
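Conceptually, a Responsibility Pattern discretizes a responsibility vector over the map's nodes into a small number of integer levels, yielding a compact fingerprint that datasets can be compared on. A minimal NumPy sketch of that idea (illustrative only; the library's `resp_to_pattern` may differ in its exact binning):

```python
import numpy as np

def resp_to_pattern_sketch(resp, n_bins=10, threshold=0.01):
    """Discretize a responsibility vector into integer levels 0..n_bins.

    Illustration of the RP idea only, not ChemographyKit's implementation.
    """
    resp = np.where(resp < threshold, 0.0, resp)  # drop negligible nodes
    return np.ceil(resp * n_bins).astype(int)     # bin remaining values

# A toy responsibility vector over 5 nodes (sums to 1)
resp = np.array([0.70, 0.25, 0.04, 0.005, 0.005])
print(resp_to_pattern_sketch(resp))  # [7 3 1 0 0]
```

Because the fingerprints are small integer vectors, overlap between a reference and a test set can then be scored node by node, which is what the coverage metrics build on.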
- Python ≥ 3.11
- PyTorch ≥ 2.7.1
- NumPy ≥ 2.3.2
- Pandas ≥ 2.3.2
- Scikit-learn ≥ 1.7.1
- Altair ≥ 5.5.0
- Plotly ≥ 6.3.0
- tqdm ≥ 4.67.1
- matplotlib ≥ 3.7.0 (for tutorials)
Using PDM:

```bash
git clone https://github.com/your-username/ChemographyKit.git
cd ChemographyKit
pdm install
```

Or with pip:

```bash
git clone <repository-url>
cd ChemographyKit
pip install -e .
```

Or install the dependencies manually (the version specifiers are quoted so the shell does not interpret `>=` as a redirect):

```bash
pip install "numpy>=2.3.2" "torch>=2.7.1" "pandas>=2.3.2" "altair>=5.5.0" "plotly>=6.3.0" "scikit-learn>=1.7.1" "tqdm>=4.67.1" "matplotlib>=3.7.0"
```

> **Note:** For GPU acceleration, install a CUDA-enabled build of PyTorch appropriate for your system.
```python
import torch
import numpy as np
from chemographykit.gtm import GTM

# Generate sample data
data = torch.randn(1000, 50, dtype=torch.float64)  # 1000 samples, 50 features

# Create the GTM model
gtm = GTM(
    num_nodes=100,           # 10x10 grid in latent space
    num_basis_functions=25,  # 5x5 RBF centers
    basis_width=0.3,         # RBF width parameter
    reg_coeff=0.01,          # Regularization coefficient
    device="cuda",           # Use GPU if available
)

# Fit the model and transform the data
latent_coords = gtm.fit_transform(data)
print(f"Latent coordinates shape: {latent_coords.shape}")  # (2, 1000)

# Get responsibilities for landscape analysis
responsibilities, log_likelihoods = gtm.project(data)
print(f"Responsibilities shape: {responsibilities.shape}")  # (100, 1000)
```

This example demonstrates the full workflow using synthetic S-curve data (adapted from `tutorials/Synthetic_data.ipynb`):
```python
import os

import altair as alt
import numpy as np
import torch
from sklearn.datasets import make_s_curve

# GTM and utilities
from chemographykit.gtm import GTM
from chemographykit.utils.molecules import calculate_latent_coords
from chemographykit.utils.regression import get_reg_density_matrix, reg_density_to_table
from chemographykit.plots.altair_landscapes import (
    altair_points_chart,
    altair_discrete_regression_landscape,
)

# 1. Create the S-curve dataset
rng = np.random.RandomState(0)
n_samples = 5000
s_curve_3d, s_curve_color = make_s_curve(n_samples, random_state=rng)

# Add an extra noise dimension and convert to a tensor
extra_dims = rng.randn(n_samples, 1)
X = np.hstack([s_curve_3d, extra_dims])
device = "cuda" if torch.cuda.is_available() else "cpu"
X_t = torch.tensor(X, dtype=torch.float64, device=device)

# 2. Fit the GTM model
gtm = GTM(
    num_nodes=225,            # 15 x 15 grid
    num_basis_functions=100,  # 10 x 10 RBF centers
    basis_width=1.0,
    reg_coeff=1.0,
    device=device,
    standardize=False,
    pca_scale=True,
    pca_engine="torch",
    max_iter=200,
)
gtm.fit(X_t)

# 3. Transform the data
Z = gtm.transform(X_t)  # Latent coordinates
responsibilities, _ = gtm.project(X_t)
R_np = responsibilities.detach().to("cpu").numpy()
if R_np.shape[0] != n_samples:
    R_np = R_np.T

# 4. Create a regression landscape using the curve parameter as the target
density, reg_density = get_reg_density_matrix(R_np, s_curve_color)
reg_table = reg_density_to_table(density, reg_density, node_threshold=0.10)

# 5. Generate visualizations
coords = calculate_latent_coords(R_np, correction=True, return_node=True)
coords["color"] = s_curve_color

# Points and landscape charts
points_chart = altair_points_chart(
    coords.sample(min(5000, len(coords)), random_state=0),
    num_nodes=15,  # sqrt(225)
    points_size=120,
    coloring_scheme="viridis",
    coloring_column="color",
)
reg_chart = altair_discrete_regression_landscape(
    reg_table,
    title="GTM Regression Landscape",
    colorset="viridis",
)

# Combine and save
combined = alt.hconcat(points_chart, reg_chart).properties(
    title="Latent points (colored by target) — GTM Regression Landscape"
)
os.makedirs("plots", exist_ok=True)
combined.save("plots/gtm_scurve_regression.html")
print("Saved visualization to plots/gtm_scurve_regression.html")
```

To run the complete tutorial notebook:
```bash
# Using PDM
pdm run jupyter notebook tutorials/Synthetic_data.ipynb

# Or with a pip installation
jupyter notebook tutorials/Synthetic_data.ipynb
```

The tutorial demonstrates:
- **Data preparation**: creating synthetic S-curve data with additional noise dimensions
- **GTM training**: fitting a 15×15 grid GTM with 10×10 RBF centers
- **Visualization**: both matplotlib 3D/2D plots and interactive Altair landscapes
- **Regression analysis**: using the intrinsic curve parameter as the regression target
- **Interactive landscapes**: generating HTML visualizations saved to `plots/`
```python
import numpy as np
from chemographykit.metrics import resp_to_pattern, compute_rp_coverage

# Convert responsibilities to RP fingerprints
rp_fingerprints = np.array([
    resp_to_pattern(resp, n_bins=10, threshold=0.01)
    for resp in responsibilities_np
])

# Calculate coverage between datasets
reference_fps = rp_fingerprints[:500]  # First 500 as reference
test_fps = rp_fingerprints[500:]       # Last 500 as test

coverage = compute_rp_coverage(reference_fps, test_fps, use_weight=True)
print(f"Weighted coverage: {coverage:.3f}")
```

```python
from chemographykit.utils.molecules import calculate_latent_coords

# Calculate molecular coordinates for plotting
mol_coords = calculate_latent_coords(
    responsibilities_np,
    correction=True,   # Adjust for visualization
    return_node=True,  # Include the most responsible node
)
print(mol_coords.head())
```

```python
from chemographykit.plots.altair_landscapes import (
    altair_discrete_density_landscape,
    altair_discrete_class_landscape,
    altair_points_chart,
)

# Create a discrete density landscape
density_chart = altair_discrete_density_landscape(
    density_table,
    title="GTM Density Map",
)

# Overlay molecular points
points_chart = altair_points_chart(
    mol_coords,
    num_nodes=100,
    points_size=50,
    points_color="red",
)

# Combine charts
combined = density_chart + points_chart
combined.show()
```

- `num_nodes`: Number of latent space grid nodes (must be a perfect square for 2D)
- `num_basis_functions`: Number of RBF centers (must be a perfect square for 2D)
- `basis_width`: RBF width parameter (controls smoothness)
- `reg_coeff`: Regularization coefficient (prevents overfitting)
- `standardize`: Whether to standardize the input data (recommended: `True`)
- `max_iter`: Maximum number of EM iterations
- `tolerance`: Convergence tolerance
- `device`: Computation device (`"cpu"` or `"cuda"`)
- `pca_engine`: PCA implementation (`"sklearn"` or `"torch"`)
- `pca_scale`: Scale eigenvectors by the square root of the eigenvalues
- `pca_lowrank`: Use a low-rank PCA approximation for large datasets
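Since `num_nodes` and `num_basis_functions` must be perfect squares, it can be convenient to think in grid side lengths instead. A hypothetical helper (our own, not part of ChemographyKit) sketching that conversion:

```python
def grid_params(map_side: int, rbf_side: int) -> dict:
    """Derive perfect-square GTM parameters from 2D grid side lengths.

    Hypothetical convenience helper, not a ChemographyKit function.
    """
    return {"num_nodes": map_side ** 2, "num_basis_functions": rbf_side ** 2}

params = grid_params(15, 10)
print(params)  # {'num_nodes': 225, 'num_basis_functions': 100}
```

These are exactly the values used in the S-curve tutorial (a 15×15 map with 10×10 RBF centers).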
- `gtm.py`: Main GTM implementations (`BaseGTM`, `VanillaGTM`, `GTM`)
- `metrics.py`: RP fingerprints and coverage metrics
- `utils/`: Specialized analysis modules
  - `classification.py`: Binary/multi-class analysis
  - `regression.py`: Continuous property analysis
  - `density.py`: Density calculations and grid mapping
  - `molecules.py`: Molecular coordinate calculations
- `plots/`: Visualization modules
  - `plotly_landscapes.py`: Interactive smooth heatmaps
  - `altair_landscapes.py`: Static discrete visualizations

Key classes:

- `DataStandardizer`: Robust data preprocessing with NaN handling
- `BaseGTM`: Abstract base class defining the GTM interface
- `VanillaGTM`: Basic GTM with random initialization
- `GTM`: Enhanced GTM with PCA-based initialization
- **Use the GPU**: set `device="cuda"` for a significant speedup on large datasets
- **Choose an appropriate grid size**: balance resolution against computational cost
- **PCA initialization**: use the `GTM` class instead of `VanillaGTM` for better convergence
- **Data standardization**: always enable it for numerical stability
- **Batch processing**: process large datasets in chunks if memory is limited
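The batch-processing tip can be sketched generically. `apply_in_chunks` below is our illustrative helper, not a library function, and `fn` stands in for any per-sample step such as a fitted model's projection:

```python
import numpy as np

def apply_in_chunks(fn, data, chunk_size=1024):
    """Apply `fn` to `data` in fixed-size chunks to bound peak memory,
    then concatenate the per-chunk results along the sample axis."""
    parts = [fn(data[i:i + chunk_size]) for i in range(0, len(data), chunk_size)]
    return np.concatenate(parts, axis=0)

# Toy usage: a cheap stand-in for an expensive projection
x = np.random.default_rng(0).random((5000, 4))
y = apply_in_chunks(lambda a: a * 2.0, x, chunk_size=1024)
print(y.shape)  # (5000, 4)
```

The same pattern works with `torch.split` on GPU tensors, moving each chunk's result back to the CPU before concatenating.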
Run the comprehensive test suite:
```bash
pdm run pytest tests/ -v
```

Run with coverage:

```bash
pdm run pytest tests/ --cov=src/chemographykit --cov-report=html
```

To contribute:

- Fork the repository
- Create a feature branch
- Make your changes with tests
- Run quality checks: `pdm run pre-commit run --all-files`
- Submit a pull request
If you use this code, please cite future_url.
GTM has been applied to biological data and studied extensively for analyzing large chemical datasets and exploring chemical space, including virtual screening, library comparison and design, de novo compound design, and multi-scale visualization.
| Domain | Representative study |
|---|---|
| Genomes | Molecular Informatics (2023) |
| Proteins | Bioinformatics (2022) |
| Peptides | bioRxiv (2024) |
Virtual screening
Library comparison & design
- Molecular Informatics (2011)
- Journal of Chemical Information and Modeling (2015)
- PubMed (2019)
- Molecular Informatics (2021)
- Journal of Chemical Information and Modeling (2023)
De novo design of chemical compounds
Multi-scale visualization of large chemical spaces
The authors thank Dr. Arkadii Lin and Dr. Yuliana Zabolotna for their contributions to the development of the initial versions of the functions for density and classification landscape building, as well as GTM-derived metric calculation.
This project is licensed under the MIT License — see the LICENSE file for details.
Contact: varnek@unistra.fr