The most important scientific discoveries of our time won't happen in a traditional lab. We're an AI and physical sciences company building state-of-the-art models to accelerate breakthroughs across materials, energy, and beyond. Backed by world-class investors and growing rapidly, we operate at the pace the frontier requires. Our team brings deep expertise, genuine ownership, and an insatiable drive to push the boundaries of what's scientifically possible.
You will own the systems layer that makes our frontier model training and inference fast, efficient, and tightly coupled to the RL feedback loop that drives scientific discovery.
This is not a pure infrastructure role and it is not a pure research role — it sits at their intersection. You will go deep into the stack: scheduling, kernels, RDMA, weight synchronization, and communication primitives, while working shoulder to shoulder with researchers to co-design algorithms and infrastructure.
The RL loop is central to how Periodic Labs works. Models propose experiments, experiments generate data, data feeds back into training. The speed and reliability of that loop are a direct multiplier on the pace of scientific discovery. You will own the infrastructure that makes it fast.
Build rack- and topology-aware scheduling for GB series GPUs across Ray, Slurm, and Kubernetes, minimizing latency and maximizing utilization across heterogeneous cluster configurations
Build online and offline profilers that surface bottlenecks across the training and inference stack and translate findings into actionable optimizations
Implement direct S3 checkpoint streaming to eliminate I/O bottlenecks in large-scale training runs
Run methodical benchmarking to identify optimal RL training configurations across model sizes, batch strategies, and hardware topologies
Write and optimize communication and GPU kernels to extract maximum throughput from the hardware
Design and implement zero-copy RDMA weight synchronization between training and inference to keep the RL loop tight and low-latency
Build fast sandbox execution environments that allow rapid rollout of model-generated actions and return of rewards without blocking the training pipeline
Engage directly with the SGLang, Megatron, and Ray communities — contributing upstream, influencing roadmaps, and pulling in improvements that benefit Periodic Labs’ workloads
Work in close collaboration with RL and pretraining researchers to co-design algorithms and infrastructure — you will shape what is possible at the research level by knowing what is achievable at the systems level, and vice versa
The net result: high-throughput, fault-tolerant training and inference systems tightly coupled with a low-latency RL feedback loop that accelerates scientific discovery at every turn.
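To make the scheduling work above concrete: rack- and topology-aware placement often reduces to scoring candidate placements by hardware locality, preferring to keep a job inside one rack's fast interconnect before spilling across the slower inter-rack fabric. A minimal greedy sketch in Python — the `Node` type, rack labels, and placement policy are all invented for illustration, not Periodic Labs' actual scheduler:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    rack: str
    free_gpus: int

def place_job(nodes, gpus_needed):
    """Greedy rack-aware placement: try to satisfy the request from
    the rack with the most free GPUs before spilling to other racks.
    Returns a list of (node_name, gpus_taken), or None if infeasible."""
    # Group candidate nodes by rack.
    racks = {}
    for n in nodes:
        racks.setdefault(n.rack, []).append(n)
    # Visit racks in order of total free capacity, so the job is least
    # likely to straddle the slower inter-rack fabric.
    by_capacity = sorted(
        racks.values(),
        key=lambda ns: sum(n.free_gpus for n in ns),
        reverse=True,
    )
    placement, remaining = [], gpus_needed
    for rack_nodes in by_capacity:
        for n in sorted(rack_nodes, key=lambda n: n.free_gpus, reverse=True):
            if remaining == 0:
                return placement
            take = min(n.free_gpus, remaining)
            if take:
                placement.append((n.name, take))
                remaining -= take
    return placement if remaining == 0 else None
```

A production scheduler would also weigh NVLink domains, gang-scheduling constraints, and preemption; this sketch only shows the locality-scoring core.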
Large-scale inference infrastructure: load balancing, traffic shifting, scheduling, and serving architecture at production scale
Low-level systems programming: RDMA, NVLink, kernel-level work, and network stack optimization
GPU cluster scheduling and orchestration across Ray, Slurm, or Kubernetes, with awareness of rack topology and hardware locality
Writing and optimizing CUDA kernels, communication primitives, or distributed training collective operations
Profiling and benchmarking distributed ML systems to identify and eliminate bottlenecks across compute, memory, and network
Checkpoint management and streaming at scale, including direct cloud storage integration
Building or contributing to open source ML infrastructure projects (e.g., SGLang, Megatron-LM, vLLM, Ray)
Working directly with ML researchers on algorithm-infrastructure co-design — you understand the research well enough to make systems decisions that serve it
The pace of scientific discovery at Periodic Labs is directly governed by the speed of our RL loop. Our models learn by doing: they generate hypotheses, run experiments, receive graded results, and train on the outcomes. Every inefficiency in that cycle — every idle GPU, every blocked weight sync, every slow rollout — compounds into slower science. Right now, our researchers are running frontier-scale RL on thousands of GPUs across Megatron and SGLang/vLLM, and the infrastructure constraints are real and active. Trainer idle time, node pressure, weight sync reliability, and time-to-first-batch are not abstract concerns — they are daily rate limiters on what our researchers can explore and how fast they can learn.
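The idle time described here is easiest to see in a fully serialized loop, where the trainer waits through generation, experiment execution, and grading before it can take a gradient step. A toy timing harness, with all four stage callables as placeholders rather than real components:

```python
import time

def timed(fn, *args):
    """Run fn(*args), returning (result, elapsed_seconds)."""
    t0 = time.monotonic()
    out = fn(*args)
    return out, time.monotonic() - t0

def rl_step(generate, run_experiment, grade, train):
    """One fully serialized RL iteration. The trainer is idle during
    every stage except `train`; returning per-stage timings makes the
    idle fraction of the loop visible."""
    actions, t_gen = timed(generate)
    results, t_exp = timed(run_experiment, actions)
    rewards, t_grade = timed(grade, results)
    _, t_train = timed(train, actions, rewards)
    total = t_gen + t_exp + t_grade + t_train
    return {
        "idle_fraction": (total - t_train) / total,
        "stage_seconds": (t_gen, t_exp, t_grade, t_train),
    }
```

With expensive verification stages (Rietveld refinements, DFT runs), the idle fraction of a loop like this dominates, which is exactly the leverage point this role owns.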
We are building out internal inference platforms with OSS libraries such as SGLang and vLLM, using prefill-decode disaggregation to optimize throughput, working to compress our node footprint so more researchers can run experiments in parallel, and designing a more modular RL infrastructure that decouples inference replicas from training jobs. These are not future roadmap items — they are problems being worked on today, by researchers who should be focused on the science. The person in this role will take those problems off their plate and own them end-to-end, with the technical depth and judgment to make the right architectural calls without being told what to do.
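Decoupling inference replicas from training jobs can be pictured as a producer/consumer split: rollout workers push finished trajectories into a bounded queue, and the trainer drains it, so neither side blocks the other except through back-pressure. A toy sketch with Python threads — names and structure are invented for illustration; a real system would span processes and hosts with RDMA weight sync:

```python
import queue
import threading

def run_decoupled(num_rollouts, make_rollout, train_on):
    """Rollout workers (stand-ins for inference replicas) fill a
    bounded queue; the trainer consumes from it. Returns the list
    of training outputs in consumption order."""
    rollouts = queue.Queue(maxsize=8)  # bounded: back-pressure, not unbounded memory

    def producer():
        for i in range(num_rollouts):
            rollouts.put(make_rollout(i))
        rollouts.put(None)  # sentinel: generation finished

    trained = []

    def consumer():
        while True:
            item = rollouts.get()
            if item is None:
                break
            trained.append(train_on(item))

    t_prod = threading.Thread(target=producer)
    t_cons = threading.Thread(target=consumer)
    t_prod.start(); t_cons.start()
    t_prod.join(); t_cons.join()
    return trained
```

The design point is the bounded queue: generation and training proceed concurrently, and the only coupling left is capacity, which is what makes it possible to scale inference replicas independently of the training job.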
There is also a deeper reason this role matters. Periodic Labs’ scientific tasks — XRD phase identification, crystal structure prediction, synthesis planning — have unusually long and expensive verification loops compared to math or code benchmarks. A model rollout that requires running a Rietveld refinement or executing a DFT calculation is fundamentally different from one that checks a unit test. That asymmetry means inference throughput, sandbox execution speed, and RL loop latency have outsized leverage on our research velocity in ways they simply do not at other labs. Getting this infrastructure right is not a supporting function — it is a primary research accelerant.
The person who fills this role will work at the center of everything: tightly coupled to the research team, directly influencing what science gets done and how fast, and building systems that no one else in the world is building for exactly this problem. That is a rare opportunity, and we are looking for someone who recognizes it.
Minimum education: Bachelor’s degree or an equivalent combination of education and training or experience
Location: Our lab is in Menlo Park. We prefer candidates based in Menlo Park or San Francisco, but can be flexible depending on the role.
Compensation: The annual compensation range for this role is $300,000–$400,000.
Visa sponsorship: Yes. We sponsor visas and, with support from our legal team, will do everything we can to assist in the process.
We’re building a team of the world’s best — the scientists, engineers, and problem-solvers who don’t just follow the frontier, they define it. If you’re driven to bring AI to life in the physical world and make discoveries that have never been made before, you belong here.