We’re looking for an AI LLM Scientist who can push the boundaries of model intelligence, reasoning, and agentic capabilities. If you enjoy diving deep into transformer architectures, experimenting with frontier models, optimizing training pipelines, and prototyping novel ways LLMs can plan, act, and collaborate—this role will feel like home.
You’ll play a key part in shaping our next generation of AI-driven products: smarter agents, efficient pipelines, adaptive workflows, and robust AI behaviors built on top of cutting-edge research.
What You’ll Work On
What You Should Bring
Strong understanding of LLMs, transformers, embeddings, and NLP fundamentals.
Hands-on experience with training and fine-tuning frameworks such as PyTorch, JAX, Hugging Face, DeepSpeed, and Ray.
Experience building or optimizing LLM-powered applications and agents.
Ability to read, interpret, and apply research papers (NeurIPS, ICML, ICLR, ACL, etc.).
Solid grasp of distributed systems, model optimization, or inference engineering.
Bonus points for:
Publications or open-source research contributions
Experience training smaller foundation models
Knowledge of RLHF pipelines
Exposure to retrieval systems, vector DBs, or tool-use frameworks
Familiarity with frontier safety research