Open Positions

All roles are part-time volunteer positions; three roles are currently open.

Group-Wide ML Expectations

We group all machine-learning roles under a single category because most of our work spans multiple projects and demands cross-project responsibility. The expectations below apply broadly to all ML roles, with reasonable flexibility for positions that are primarily engineering-focused or rely on specialised domain knowledge (e.g., protein-modeling expertise).

Required
  • Team-based ML experience. Has worked as part of a research lab or engineering team (academic or industry), with code review, iteration cycles, and shared responsibilities.
  • Real project experience beyond toy work. Has completed ML projects involving non-trivial datasets, non-trivial training, or multi-week experimentation, not just notebooks or coursework.
  • Experience implementing non-standard architectures. Has reproduced or adapted ideas from papers (custom attention blocks, diffusion variants, special-purpose modules, etc.) rather than only using standard libraries.
  • Independent contributor with no need for hand-holding. Able to pick up a task, define the missing pieces, and drive it to completion.
  • Strong PyTorch experience.
  • Strong experimental discipline. Understands baselines, controls/ablation, reproducibility, basic tracking tools, and proper evaluation.
Preferred
  • Meaningful contribution to a published or submitted paper (ICLR/CVPR/NeurIPS or equivalent).
  • First-author experience on a peer-reviewed or arXiv-preprint research project.
  • Deep familiarity with diffusion models, transformers, and flow matching.
  • Has designed or proposed original architectures or training schemes, not purely reproductions.
  • Current PhD student or completed MSc in a relevant field, or equivalent research experience. (We readily acknowledge that skilled undergraduates often outperform more senior candidates; this is not a rigid selection criterion.)
Nice to Have
  • Completed a PhD, especially in ML, applied math, CV, or scientific ML.
  • Experience with large-scale compute environments: SLURM, multi-GPU training (DDP, FSDP).
  • First-author paper at a top ML conference (ICLR/CVPR/NeurIPS, etc.).
  • Prior work on niche domains (e.g., protein modelling, medical imaging, generative editing, scientific ML).
  • Experience writing Triton or CUDA kernels.
  • Leadership experience, or interest in first-author or project-management roles.
