We’re working to reshape the way manufacturing runs by putting advanced AI solutions into practice — improving productivity, streamlining workflows, and driving efficiency at scale.
As an MLOps & AI Platform Engineer, you’ll build the backbone that makes AI in production possible. You’ll design and operate the infrastructure, tools, and workflows that enable data scientists and ML engineers to deliver reliable, scalable, and automated AI solutions — from classical ML to large language models (LLMs) and generative AI.
This isn’t just about keeping systems running. You’ll drive automation, standardization, and monitoring practices that ensure AI becomes a trusted and sustainable part of manufacturing operations.
Your Mission
- Engineer ML & AI Platforms – design and maintain scalable infrastructure that supports the entire ML lifecycle, including training, fine-tuning, and deploying both traditional models and large-scale LLMs.
- Enable Automation – implement CI/CD pipelines, container orchestration, and workflow automation for seamless deployments across ML and GenAI workloads.
- Standardize ML Workflows – establish best practices for reproducibility, versioning, model packaging, and deployment across hybrid (on-premises and cloud) environments.
- Deploy & Optimize LLMs – manage GPU clusters, inference servers, and vector databases; optimize performance (throughput, latency, token usage) for generative AI applications.
- Monitor & Govern – develop observability for ML and GenAI systems (drift, token usage, hallucinations, bias, GPU utilization) while ensuring reliability, security, and compliance.
- Collaborate Across Teams – work closely with data scientists, AI engineers, and IT to ensure platforms are easy to use, robust, and future-ready.
- Explore Emerging Tech – evaluate and integrate new MLOps and GenAI frameworks (MLflow, Ray, vLLM, Hugging Face TGI, Triton, orchestration agents) to keep the platform state of the art.