At LinkedIn, our approach to flexible work is centered on trust and optimized for culture, connection, clarity, and the evolving needs of our business. The work location of this role is hybrid, meaning it will be performed both from home and from a LinkedIn office on select days, as determined by the business needs of the team. This role will be based in Sunnyvale, CA.
The Generative AI (GenAI) Safety team sits at the heart of LinkedIn's Responsible AI & Governance (RAI&G) organization, with a mission to set the gold standard for AI safety across all AI applications company-wide. We ensure that every generative AI product is developed and deployed responsibly, ethically, and securely. By combining rigorous governance with cutting-edge ML research, we identify and mitigate risks such as bias, hallucination, misuse, and privacy leakage.
As both the AI Safety Research team and the central AI safety engineering function, we build safety guardrails, evaluation pipelines, and alignment techniques that enable safe innovation at scale. Our work is foundational to the company's AI strategy and influences standards across the industry. We partner closely with Legal, Compliance, AI Infrastructure, and Product teams to embed safety into every stage of the AI lifecycle.
Responsibilities
Drive GenAI Safety Strategy: Serve as the senior technical leader shaping the company's generative AI safety direction. Define the roadmap for safety alignment research, model evaluation, and system-level protections.
Lead AI Safety Research & Innovation: Guide LinkedIn's research agenda in alignment, robustness, and responsible model behaviors. Stay ahead of academic and industry advances, rapidly translating insights into practical, production-ready solutions.
Design Safety-First Foundations: Provide architectural leadership for scalable safety systems (benchmarking, red-teaming, content safety, privacy-preserving training, and real-time guardrails), ensuring they are reliable, performant, and deeply integrated into AI infrastructure.
Deliver High-Impact Solutions in Ambiguous Spaces: Tackle LinkedIn's toughest ethical, regulatory, and risk-driven problems. Bring clarity and direction in areas with evolving standards, ensuring the company ships safe GenAI experiences at speed.
Liaise With Product Engineering: Partner closely with product engineering teams to stay current on emerging experiments, venture bets, and product innovations, ensuring safety research and tooling anticipate and support the next wave of product development.
Cross-Functional Leadership: Collaborate with Legal, Compliance, Privacy, Infra, and Policy teams to operationalize safety requirements, translate regulatory guidance into technical specifications, and ensure end-to-end alignment across disciplines.
Technical Mentorship: Mentor and grow a team of ~15 engineers across research, ML, and systems. Elevate engineering rigor, set a high bar for execution, and nurture future technical leaders in AI safety.
Company-Wide Impact: Ensure safety techniques, tools, and evaluations are deployed across all GenAI products, safeguarding member trust while enabling safe, scalable innovation.