
Much Ado About Noising: Dispelling the Myths of Generative Robotic Control

A PyTorch framework for behavior cloning with flow matching and generative models

Quick Example

```bash
# Install dependencies
uv sync --extra robomimic

# Train a flow matching policy on Robomimic Lift task
uv run examples/train_robomimic.py \
    task=lift_ph_state \
    network=chiunet \
    optimization.loss_type=flow \
    log.wandb_mode=online
```

Key Features

Multiple Training Objectives

This repository supports a variety of training objectives for generative behavior cloning:

  • Flow Matching (flow): Standard continuous normalizing flow objective (a minimal sketch follows this list)
  • Regression (regression): Direct supervised-learning baseline with the same architecture as flow matching
  • MIP (mip): Minimum Iterative Policy with two-step sampling (from Much Ado About Noising)
  • TSD (tsd): Two-Stage Denoising (from Much Ado About Noising)
  • CTM (ctm): Consistency Trajectory Model
  • PSD (psd): Progressive Self-Distillation
  • LSD (lsd): Lagrangian Self-Distillation (supports only differentiable networks)
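
For orientation, the flow objective fits in a few lines of PyTorch. The following is a minimal, self-contained sketch of conditional flow matching with a linear interpolation path, not the repository's implementation; the `velocity_net(x_t, t, obs_emb)` signature and the function names are assumptions for illustration.

```python
import torch
import torch.nn as nn


def flow_matching_loss(velocity_net: nn.Module,
                       actions: torch.Tensor,
                       obs_emb: torch.Tensor) -> torch.Tensor:
    """Conditional flow matching loss along a linear noise-to-action path."""
    noise = torch.randn_like(actions)                        # x_0 ~ N(0, I)
    t = torch.rand(actions.shape[0], device=actions.device)  # t ~ U[0, 1]
    t_b = t.view(-1, *([1] * (actions.dim() - 1)))           # broadcast over action dims
    x_t = (1.0 - t_b) * noise + t_b * actions                # point on the linear path
    target = actions - noise                                 # constant target velocity
    pred = velocity_net(x_t, t, obs_emb)                     # assumed signature
    return torch.mean((pred - target) ** 2)


@torch.no_grad()
def sample(velocity_net: nn.Module, obs_emb: torch.Tensor,
           action_shape: tuple, steps: int = 10) -> torch.Tensor:
    """Draw an action by Euler-integrating the learned velocity field."""
    x = torch.randn(obs_emb.shape[0], *action_shape, device=obs_emb.device)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((x.shape[0],), i * dt, device=x.device)
        x = x + dt * velocity_net(x, t, obs_emb)
    return x
```

Broadly, the remaining objectives in the list are different ways of reducing the number of sampling steps that this integration requires.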

Flexible Network Architectures

Choose from multiple proven architectures:

  • MLP: Enhanced MLP with FiLM conditioning, residual connections, and layer normalization (see the sketch after this list)
  • ChiUNet: U-Net from Diffusion Policy (Chi et al.), well suited to smooth control tasks
  • ChiTransformer: Transformer architecture from Diffusion Policy, more expressive than ChiUNet but less stable to train
  • SudeepDiT: Diffusion Transformer (DiT) architecture incorporating best practices from Sudeep et al.
  • RNN: LSTM/GRU-based recurrent networks, included for baseline comparisons
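
The enhanced MLP's key ingredient is FiLM conditioning. Below is a minimal sketch of one such block; `FiLMResidualBlock` is a hypothetical name for illustration, not the repository's actual class.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FiLMResidualBlock(nn.Module):
    """Residual MLP block with FiLM conditioning and layer normalization."""

    def __init__(self, dim: int, cond_dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.fc = nn.Linear(dim, dim)
        # FiLM: the conditioning vector is mapped to a per-feature
        # scale and shift that modulate the normalized activations.
        self.film = nn.Linear(cond_dim, 2 * dim)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        scale, shift = self.film(cond).chunk(2, dim=-1)
        h = self.norm(x) * (1.0 + scale) + shift  # FiLM modulation
        h = self.fc(F.gelu(h))
        return x + h                              # residual connection
```

Stacking several such blocks, with the observation embedding and a timestep embedding combined into `cond`, gives the general shape of the network described above.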

Comprehensive Task Support

Pre-configured environments and datasets:

  • Robomimic: Manipulation tasks (Lift, Can, Square, Transport, Tool Hang)
  • Kitchen: Multi-task kitchen environment
  • PushT: 2D pushing task with multiple observation modalities


Citation

If you use this repository in your research, please cite:

```bibtex
```

Released under the MIT License.