BlendMR: A Computational Method to Create Ambient Mixed Reality Interfaces

This repository contains the implementation for the paper:

BlendMR: A Computational Method to Create Ambient Mixed Reality Interfaces
Violet Yinuo Han, Hyunsung Cho, Kiyosu Maeda, Alexandra Ion, David Lindlbauer
Proceedings of the ACM on Human-Computer Interaction, Volume 7, Issue ISS (October 2023)
https://doi.org/10.1145/3626472

Overview

BlendMR is an optimization-based Mixed Reality (MR) system that blends virtual content onto physical objects in users' environments so that they serve as ambient information displays. It takes existing 2D applications and meshes of physical objects as input, analyzes the objects' geometry to identify regions suitable for virtual elements, and uses integer programming to optimally map selected elements of the 2D applications onto the objects.

Key Features

  • Mesh Segmentation: Automatic segmentation of 3D meshes using spectral clustering
  • UV Parameterization: Least squares conformal maps (LSCM) for texture mapping
  • Optimization-based Layout: Integer programming formulation considering:
    • Geometry constraints (curvature, distortion)
    • Viewing angle optimization
    • UI element importance
    • Spatial relationships between elements
  • Texture Synthesis: Automatic generation of texture maps with mapped UI elements
  • UI Annotation Tool: Interactive tool for annotating UI screenshots with bounding boxes and importance scores
  • 3D Visualization: PyVista-based visualization of results
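The actual segmentation lives in src/blendmr/mesh/segment.py; as a standalone illustration of the general idea behind spectral clustering (not the repository's code), the sketch below bipartitions a face-adjacency graph using the sign of the Fiedler vector of the graph Laplacian:

```python
import numpy as np

def spectral_bipartition(adjacency: np.ndarray) -> np.ndarray:
    """Split a graph into two clusters via the sign of the Fiedler vector.

    `adjacency` is a symmetric 0/1 matrix; in mesh segmentation the nodes
    would be faces, with edges connecting faces that share a mesh edge.
    """
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency        # unnormalized graph Laplacian
    _, eigvecs = np.linalg.eigh(laplacian)  # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]               # eigenvector of 2nd-smallest eigenvalue
    return (fiedler > 0).astype(int)      # sign gives a 2-way partition

# Two triangle cliques {0,1,2} and {3,4,5} joined by a single edge (2-3):
A = np.zeros((6, 6), int)
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
labels = spectral_bipartition(A)
# The two cliques land in different clusters.
```

The repository clusters into n_seg segments rather than two; k-way spectral clustering generalizes this by running k-means on the first k Laplacian eigenvectors.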

Installation

Prerequisites

  • Python 3.9+
  • Gurobi Optimizer (requires a license; free academic licenses are available)

Using UV (Recommended)

UV is a fast Python package manager. Install it first:

# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"

Then install BlendMR:

# Clone the repository
git clone https://github.com/AugmentedPerception/blendmr.git
cd blendmr

# Create virtual environment and install dependencies
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install the package in development mode
uv pip install -e ".[dev]"

Using pip

# Clone and install
git clone https://github.com/AugmentedPerception/blendmr.git
cd blendmr
pip install -e ".[dev]"

Gurobi Setup

  1. Download the Gurobi Optimizer from https://www.gurobi.com/downloads/
  2. Obtain a license (free for academics)
  3. Run grbgetkey <license-key> to activate it

Quick Start

Example Data

The repository includes example meshes and annotated UI screenshots for testing:

Meshes (data/mesh/):

Mesh         Vertices   Faces   Description
bunny        3,115      4,148   Stanford bunny
speaker      3,002      4,000   Bluetooth speaker
pig_plush    4,998      6,664   Plush toy
humidifier   5,002      6,668   Desktop humidifier
house_deco   4,002      8,000   House decoration

UI Screenshots (data/ui/):

UI Index   Elements   Description
6          19         Weather/calendar widget
10         2          Simple notification
7373       26         Music player interface
17825      32         Dashboard with multiple widgets
70532      38         Complex multi-element UI

Process a Mesh

# Process the bunny mesh with 5 segments
blendmr-mesh --mesh_name bunny --n_seg 5

# Or try other meshes
blendmr-mesh --mesh_name speaker --n_seg 4
blendmr-mesh --mesh_name pig_plush --n_seg 6

This generates outputs in outputs/:

  • outputs/meshes/bunny.obj - Processed mesh with UV coordinates
  • outputs/meshes/bunny_*.pickle - Metadata files
  • outputs/visualizations/bunny_input.png - Input geometry (gray)
  • outputs/visualizations/bunny_segmentation.png - Colored segments
  • outputs/visualizations/bunny_uv_atlas.png - UV atlas with grid
  • outputs/visualizations/bunny_alpha_shapes.png - Alpha shapes

Run Layout Optimization

# Map UI 6 onto bunny with 50x50 grid
blendmr-layout --mesh_name bunny --ui_idx 6 --n_slots 50 \
               --geo_w 0.25 --imp_w 0.25 --dis_w 0.25 --scale 0.5

# Try a complex UI on speaker
blendmr-layout --mesh_name speaker --ui_idx 7373 --n_slots 30 \
               --geo_w 0.25 --imp_w 0.25 --dis_w 0.25 --scale 0.3

Visualize Results

View the optimized UI texture mapped onto the 3D mesh:

# List available textures
blendmr-visualize --list

# Interactive 3D viewer
blendmr-visualize --mesh bunny --texture mbunny_ui6_s50_imp0.25_geo0.25_dis0.25_scale0.5_tex.png -i

# Save screenshot
blendmr-visualize --mesh bunny --texture mbunny_ui6_s50_imp0.25_geo0.25_dis0.25_scale0.5_tex.png --save result.png

# Show grid mapping on mesh
blendmr-visualize --mesh bunny --grid

Data Preparation

3D Mesh Input

Place your 3D mesh files (.obj format) in the data/mesh/ directory:

data/
└── mesh/
    ├── your_mesh.obj
    └── temp/          # Generated during processing

Requirements:

  • Mesh should be a watertight triangle mesh
  • File format: Wavefront OBJ
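Watertightness can be sanity-checked before processing. The sketch below (a standalone illustration using only the standard library, not part of the package) relies on the fact that in a watertight triangle mesh every edge borders exactly two faces:

```python
from collections import Counter

def obj_is_watertight(obj_text: str) -> bool:
    """Return True if every edge of the triangle mesh borders exactly 2 faces."""
    edge_count = Counter()
    for line in obj_text.splitlines():
        parts = line.split()
        if not parts or parts[0] != "f":
            continue
        # OBJ face entries look like "f v1 v2 v3" or "f v1/vt1/vn1 ..."
        verts = [int(p.split("/")[0]) for p in parts[1:]]
        for a, b in zip(verts, verts[1:] + verts[:1]):
            edge_count[frozenset((a, b))] += 1
    return bool(edge_count) and all(n == 2 for n in edge_count.values())

# A tetrahedron is watertight; a single open triangle is not.
TETRA = """
v 0 0 0
v 1 0 0
v 0 1 0
v 0 0 1
f 1 2 3
f 1 2 4
f 1 3 4
f 2 3 4
"""
```

For real meshes, a library such as trimesh offers the same check (mesh.is_watertight) along with repair utilities.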

2D UI Input

Place UI data in the data/ui/ directory:

data/
└── ui/
    ├── 0.json         # UI element annotations
    ├── 0.jpg          # UI screenshot
    └── ...

JSON format (element annotations):

[
  {
    "bounds": [x1, y1, x2, y2],
    "importance": 5
  }
]
  • bounds: Bounding box coordinates [left, top, right, bottom]
  • importance: Integer from 1 (low) to 5 (high)
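A small loader illustrating the schema above (the validation rules are inferred from this README, not taken from the package):

```python
import json

def load_annotations(json_text: str) -> list[dict]:
    """Parse and validate UI element annotations in the format described above."""
    elements = json.loads(json_text)
    for el in elements:
        x1, y1, x2, y2 = el["bounds"]            # [left, top, right, bottom]
        if not (x1 < x2 and y1 < y2):
            raise ValueError(f"degenerate bounding box: {el['bounds']}")
        if el["importance"] not in range(1, 6):  # integer importance, 1..5
            raise ValueError(f"importance out of range: {el['importance']}")
    return elements

sample = '[{"bounds": [10, 20, 110, 60], "importance": 5}]'
elements = load_annotations(sample)
```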

UI Annotation Tool

Use the built-in annotation tool to create UI annotations interactively:

# Open file picker to select a UI screenshot
blendmr-annotate

# Annotate a specific image
blendmr-annotate --image /path/to/screenshot.jpg

# Edit existing annotation
blendmr-annotate --edit 0

Controls:

  • Click & drag to draw bounding boxes
  • After drawing, select importance (1-5) in the popup
  • Right-click on a box to delete it
  • Press 'S' to save
  • Press 'Z' to undo last box
  • Press 'Q' to quit without saving

Python API Usage

from blendmr.mesh import process_mesh, read_mesh
from blendmr.optimization import run_optimization, MeshData, UIData, PlacementGrid, PenaltyComputer
from blendmr.visualization import create_texture
from blendmr.config import BlendMRConfig
import pickle

# Configure paths
config = BlendMRConfig.from_base_dir("/path/to/project")
config.setup()

# Step 1: Process mesh
mesh_uv, alpha_shapes = process_mesh("my_object", n_segments=5, config=config)

# Step 2: Build placement grid
grid = PlacementGrid.from_alpha_shapes(alpha_shapes, n_slots_per_side=50)
grid.build_vertex_lookups(mesh_uv)

# Step 3: Compute penalties
penalty_computer = PenaltyComputer(grid, config.paths.mesh_output_dir / "my_object.obj")
geometry_penalty, view_penalty = penalty_computer.compute_combined_penalty()

# Step 4: Run optimization
mesh_data = MeshData.from_grid(grid, geometry_penalty, view_penalty)
ui_data = UIData.from_json(ui_idx=0, config=config)
ui_data.set_scale(0.25)

result = run_optimization(mesh_data, ui_data, config=config)

# Step 5: Generate texture
create_texture(ui_data, mesh_data, result.assignments, "output", config=config)

Project Structure

blendmr/
├── data/                     # Input data
│   ├── mesh/                 # Input mesh files (.obj)
│   └── ui/                   # UI screenshots and annotations
├── outputs/                  # All generated outputs
│   ├── meshes/               # Processed meshes and metadata
│   ├── visualizations/       # All visualization images
│   └── textures/             # Generated texture maps
├── src/blendmr/              # Main package
│   ├── __init__.py
│   ├── config.py             # Configuration dataclasses
│   ├── cli.py                # Command-line interface
│   ├── mesh/                 # Mesh processing
│   │   ├── io.py             # Mesh I/O
│   │   ├── segment.py        # Spectral clustering segmentation
│   │   ├── parameterize.py   # LSCM parameterization
│   │   ├── pack.py           # UV atlas packing
│   │   ├── rotate.py         # UV alignment
│   │   ├── alpha_shape.py    # Alpha shape computation
│   │   ├── visualize.py      # PyVista mesh visualizations
│   │   └── process.py        # Main pipeline
│   ├── optimization/         # Optimization model
│   │   ├── grid.py           # Placement grid
│   │   ├── penalties.py      # Penalty computation
│   │   ├── data.py           # Data structures
│   │   └── model.py          # Gurobi IP model
│   ├── tools/                # User-facing tools
│   │   ├── annotate.py       # UI annotation tool
│   │   └── visualize.py      # Result visualization
│   └── visualization/        # Output generation
│       └── texture.py        # Texture synthesis
├── tests/                    # Unit tests
├── pyproject.toml            # Package configuration
└── README.md

CLI Commands

After installation, the following commands are available:

Command             Description
blendmr-mesh        Process a 3D mesh (segmentation, UV mapping, alpha shapes)
blendmr-layout      Run layout optimization to map UI elements onto a mesh
blendmr-annotate    Interactive UI annotation tool
blendmr-visualize   Visualize results with 3D mesh and texture

Optimization Objectives

The optimization balances multiple objectives:

  1. Geometry Penalty (F_geo): Penalizes placement on high-curvature or distorted regions
  2. View Penalty (F_view): Penalizes placement on regions not facing the user
  3. Importance Penalty (F_imp): Penalizes leaving out important UI elements
  4. Distance Penalty (F_dis): Keeps related UI elements close together

Adjust weights via command-line arguments to prioritize different aspects.
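As a toy stand-in for how the weights trade off against each other, the sketch below scores candidate slots for a single UI element as a weighted sum of per-slot penalties and picks the minimizer. The real system solves a joint integer program in Gurobi over all elements at once (including the importance term for dropped elements, omitted here); the weight names merely mirror the CLI flags --geo_w and --dis_w:

```python
import numpy as np

rng = np.random.default_rng(0)
n_slots = 6
F_geo = rng.random(n_slots)   # curvature/distortion penalty per slot
F_view = rng.random(n_slots)  # viewing-angle penalty per slot
F_dis = rng.random(n_slots)   # distance penalty to already-placed elements

# Weighted objective for one element; lower is better.
geo_w, view_w, dis_w = 0.25, 0.25, 0.25
objective = geo_w * F_geo + view_w * F_view + dis_w * F_dis
best_slot = int(np.argmin(objective))
```

Raising geo_w shifts placements toward flat, low-distortion regions even if they face away from the user; raising dis_w keeps related elements clustered at the cost of geometry quality.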

Troubleshooting

Gurobi License Issues

  • If Gurobi reports a missing or invalid license, check that grbgetkey completed successfully and that the GRB_LICENSE_FILE environment variable (if set) points to your gurobi.lic file

Memory Issues with Large Meshes

  • Reduce --n_slots for faster computation
  • Use fewer segments (--n_seg)
  • Simplify the mesh before processing

Optimization Timeout

  • Default timeout is 10 minutes
  • Reduce grid resolution or number of UI elements for faster results

Citation

If you use this code in your research, please cite:

@article{han2023blendmr,
  author = {Han, Violet Yinuo and Cho, Hyunsung and Maeda, Kiyosu and Ion, Alexandra and Lindlbauer, David},
  title = {BlendMR: A Computational Method to Create Ambient Mixed Reality Interfaces},
  journal = {Proc. ACM Hum.-Comput. Interact.},
  volume = {7},
  number = {ISS},
  articleno = {436},
  year = {2023},
  publisher = {Association for Computing Machinery},
  doi = {10.1145/3626472}
}

License

This work is licensed under a Creative Commons Attribution 4.0 International License.

Contact

About

BlendMR: A Computational Method to Create Ambient Mixed Reality Interfaces. Best paper at ACM ISS'23.
