We’re seeking outstanding ML engineers and researchers to collaborate across multiple active Bradbury Group projects and papers.

Bradbury Group is a non-commercial, volunteer-only research lab pushing the frontier of open-source AI. Our work spans diffusion models, model compression, geospatial ML, cancer research, and more, with one focus: producing conference-grade research that moves the field forward.

We are highly selective and hold only a handful of open positions at any time. If there is no fit right now, your application will be kept on file for future openings.

What you’ll gain

  • Co-authorship on multiple live papers and projects
  • Access to funded compute for large-scale experiments
  • Collaboration with world-class peers from top universities and labs
  • A fully remote, async-friendly environment (volunteer/unpaid)

What we need

  • Deep PyTorch and model-design expertise: you can design and train custom models with clean, well-structured code.
  • Familiarity with other common Python libraries such as einops, OpenCV, pandas, matplotlib, Hydra, and wandb.
  • Strong ML fundamentals: transformers, ViTs, diffusion models, DiT/MMDiT, geometric ML, matrix math.
  • Proven experience building real ML systems (beyond toy repos).
  • Compute efficiency: profile bottlenecks, optimize hot paths, scale training, and spend budget wisely.
  • Linux fluency: shell scripting, tmux/screen, perf tools.
  • Disciplined workflows: clean code, async communication, clear documentation, small PRs, and careful use of LLM coding tools.
  • Novel ideas and a scientific, experiment-driven approach to testing them.

Nice to have

  • Leadership of multi-person teams or end-to-end projects.
  • Experience with large-scale compute, DDP/FSDP, and Triton/CUDA.
  • Domain expertise (geospatial, medical imaging, molecular biology).
  • First-author publications or strong open-source track record.
  • Strong drive and available time to contribute.

Not required

  • A specific time zone, visa, or work authorization.
  • A PhD; some of our strongest contributors are exceptional undergraduates or industry engineers.

[APPLY]