Framework for deep-learning-driven floor plan generation.
Current data-driven floor plan generation methods often reproduce the ergonomic inefficiencies found in real-world training datasets. To address this, we propose a novel approach that integrates architectural design principles directly into a transformer-based generative process. We formulate differentiable loss functions based on established architectural standards from literature to optimize room adjacency and proximity. By guiding the model with these ergonomic priors during training, our method produces layouts with significantly improved livability metrics. Comparative evaluations show that our approach outperforms baselines in ergonomic compliance while maintaining high structural validity.
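As a rough illustration of how such an ergonomic prior can be made differentiable (a hypothetical sketch, not the loss actually used in this repo; the function name and threshold are invented for illustration), one can penalize predicted room centers that violate a desired adjacency:

```python
import torch

def proximity_loss(centroids: torch.Tensor, desired_adjacency: torch.Tensor,
                   target_sq_dist: float = 0.05) -> torch.Tensor:
    """Hinge penalty on room pairs that should be adjacent but lie far apart.

    centroids: (N, 2) predicted room centers in normalized [0, 1] coordinates.
    desired_adjacency: (N, N) binary matrix marking pairs that should touch.
    """
    # Pairwise squared distances between room centers, shape (N, N).
    diff = centroids.unsqueeze(1) - centroids.unsqueeze(0)
    sq_dists = (diff ** 2).sum(dim=-1)
    # Only distances above the target threshold contribute to the penalty.
    excess = torch.relu(sq_dists - target_sq_dist)
    # Average over the desired-adjacent pairs only.
    return (excess * desired_adjacency).sum() / desired_adjacency.sum().clamp(min=1)

# Three rooms; rooms 0 and 1 are required to be adjacent but start far apart.
centroids = torch.tensor([[0.1, 0.1], [0.8, 0.8], [0.5, 0.1]], requires_grad=True)
adj = torch.zeros(3, 3)
adj[0, 1] = adj[1, 0] = 1.0
loss = proximity_loss(centroids, adj)
loss.backward()  # gradients pull the offending rooms toward each other
```

Because the penalty is a smooth function of the predicted coordinates, it can be added to the standard language-modeling loss during training.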
Key Features:
- Framework for training GPT-2-based models on the RPLAN dataset
- Custom losses derived from expert architectural knowledge
- Masked inference, which discards invalid model outputs, for more accurate generation
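Masked inference as listed above can be sketched as constrained sampling (a minimal illustration, assuming the model exposes next-token logits; the function name and validity mask are hypothetical):

```python
import torch

def masked_sample(logits: torch.Tensor, valid_mask: torch.Tensor) -> int:
    """Sample the next token, ignoring tokens invalid in the current state.

    logits: (vocab_size,) raw model scores for the next token.
    valid_mask: (vocab_size,) boolean, True where the token is valid.
    """
    # Set invalid tokens to -inf so softmax assigns them zero probability.
    masked = logits.masked_fill(~valid_mask, float("-inf"))
    probs = torch.softmax(masked, dim=-1)
    return int(torch.multinomial(probs, num_samples=1).item())

# Example: vocabulary of 5 tokens, only tokens 1 and 3 valid at this step.
logits = torch.randn(5)
valid = torch.tensor([False, True, False, True, False])
token = masked_sample(logits, valid)  # always returns 1 or 3
```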
Repository structure:
- `src/` - source code
- `configs/` - YAML configuration files for dataset preprocessing and model training
- `data/` - input data for training and testing
- `preprocess.py`, `train.py`, `evaluation.py`, `synthesis.py` - main entry points
Requirements:
- Linux is the recommended operating system; not tested on Windows or macOS.
- CUDA 12.6
- Python 3.12+
- See `requirements.txt` for Python dependencies.
Run these commands from the repository root:

```shell
# Create virtual environment
python3 -m venv venv

# Activate virtual environment
source venv/bin/activate

# Upgrade pip and install dependencies
python -m pip install --upgrade pip
pip install -r requirements.txt
```

Dataset setup:
- Download the already preprocessed version of the RPLAN dataset from the Graph2Plan paper repository.
- Extract the `data.zip` file.
- In the extracted folder, find the `data.mat` file in the `Network/` folder.
- Copy the `data.mat` file to the `data/` folder in this repo.
Preprocessing performs data augmentation and converts floor plans into sequences that are used for training:

```shell
python preprocess.py <path to config> <path to .mat file with dataset>
```

Preprocessing configs are stored in `configs/preprocessing`. The same folder also contains `config_explanations.yaml`, which documents all available preprocessing options.
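To illustrate the sequence idea (the repo's actual token format is defined by the preprocessing config, so the token names and fields below are hypothetical), a floor plan can be flattened into a sequence of room types and quantized geometry:

```python
# Hypothetical example: serialize rooms into a flat token sequence.
rooms = [
    {"type": "living", "x": 12, "y": 7, "w": 30, "h": 20},
    {"type": "bed", "x": 42, "y": 7, "w": 18, "h": 20},
]

def to_sequence(rooms):
    """Flatten room records into tokens a language model can be trained on."""
    tokens = ["<bos>"]
    for r in rooms:
        tokens += [r["type"], str(r["x"]), str(r["y"]),
                   str(r["w"]), str(r["h"]), "<sep>"]
    tokens.append("<eos>")
    return tokens

seq = to_sequence(rooms)
# e.g. ['<bos>', 'living', '12', '7', '30', '20', '<sep>', 'bed', ..., '<eos>']
```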
To train a model, use:

```shell
python train.py --config <path to training config>
```

Training configs are stored in the `configs/` directory. As with preprocessing, `config_explanations.yaml` contains all supported options with explanations.

Training results appear in the `runs/` directory. The learning process can be examined with TensorBoard:

```shell
tensorboard --logdir runs/
```

To continue training from a checkpoint, refer to the training script documentation:

```shell
python train.py --help
```

The following script evaluates the model by synthesizing a large number of floor plans and computing various metrics:

```shell
python evaluation.py <path to model> <path to preprocessed dataset folder>
```

For additional options, such as saving the created floor plans or enabling masked inference, refer to:

```shell
python evaluation.py --help
```

The following script synthesizes 10 new floor plans from scratch:

```shell
python synthesis.py runs/sota/model 10 --draw_imgs
```

For additional options, such as saving the generated floor plans, see:

```shell
python synthesis.py --help
```

To reproduce the results from the paper "What a Comfortable World: Ergonomic Principles Guided Apartment Layout Generation", run these commands from the project root directory:
```shell
python preprocess.py configs/preprocessing/paper.yaml <path to .mat file with dataset>

# Baseline training
python train.py --config configs/paper_baseline.yaml

# Our method
python train.py --config configs/paper_our.yaml

# Evaluation
python evaluation.py <path to model> data/paper
```

Notes:
- The repo supports pyenv via the `.python-version` file.
