We believe conversations will become the #1 way to shop.
At Gorgias, we’re building the platform that makes this real: a unified AI agent that sells, supports, and re-engages customers across the entire journey. Conversational Commerce is the future of ecommerce, and we’re leading that shift.
Our mission is to turn every interaction between a brand and its customers into a relationship: personal, seamless, and intelligent. By combining deep product expertise with the latest in AI, we’re making shopping feel more natural, human, and connected than ever before.
To win, we focus relentlessly on:
Quality: conversations that feel authentic and on-brand.
Experience: effortless shopping from chat to checkout.
Re-engagement: personal, 1:1 dialogue instead of noisy marketing.
The opportunity is massive. As AI reshapes how people buy, Gorgias is building the foundation for the next decade of ecommerce, where every brand has its own intelligent agent and every customer feels understood.
Join us to make Conversational Commerce real.
Gorgias is an AI-first company building products powered by LLMs and agent-based systems.
As we scale our AI capabilities, we need to improve how we evaluate, iterate, and operate these systems in production. Today, parts of this process remain manual or fragmented, especially around prompt iteration, validation, and evaluation workflows.
This role will focus on building and scaling the systems that support AI evaluation and iteration, helping the team move faster and more reliably.
You’ll have a chance to:
🤖 Work on production AI systems used by thousands of businesses
🧠 Define how we evaluate and improve AI performance at scale
🏗️ Build internal platforms and tooling used by AI and engineering teams
📈 Reduce manual processes and improve iteration speed on AI features
👥 Collaborate across AI, ML, and product teams
🧑‍🏫 Raise the engineering bar and mentor others
Design and build systems for evaluating AI performance (offline and online)
Develop workflows and tooling for prompt iteration and validation
Improve the reliability, scalability, and observability of AI systems in production
Work closely with ML Engineers and AI Engineers to integrate evaluation into development workflows
Collaborate with product teams to ship AI-powered features
Take ownership of systems end-to-end, from design to production and monitoring
Contribute to improving engineering practices around AI development and evaluation
Mentor engineers and help structure how the team approaches AI engineering
8+ years of experience in software engineering
Strong backend engineering background
Experience building and scaling production systems
Experience working with AI systems (LLMs, agents, or similar)
You likely come from one of the following backgrounds:
Staff or senior software engineer who has moved into AI systems
Engineer working on applied AI products in production
Engineer who has built internal platforms or developer tooling
1. Architect the Evaluation "Factory"
End-to-End Platform Ownership: Architect and lead the development of our internal evaluation platform, moving us from manual testing to a fully automated lifecycle (from LLM-as-a-judge creation to production monitoring).
Accelerate Time-to-Market: Directly impact our primary KPI by designing tools and workflows that drastically reduce the time it takes to deliver a calibrated, production-ready agent.
Infrastructure Collaboration: Partner with the Orchestration team to build the robust, scalable infrastructure required to run complex evals and agentic simulations at scale.
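To make the "evaluation factory" idea concrete, here is a minimal sketch of the kind of harness such a platform automates. The names (`EvalCase`, `run_judge`) are invented for illustration, and the token-overlap judge is a toy stand-in for a real LLM-as-a-judge call:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    question: str   # what the customer asked
    answer: str     # what the agent replied
    reference: str  # the ideal reply

def run_judge(cases: list[EvalCase],
              judge: Callable[[EvalCase], float],
              threshold: float = 0.8) -> dict:
    """Score each case with a judge function and aggregate a pass rate."""
    scores = [judge(c) for c in cases]
    passed = sum(s >= threshold for s in scores)
    return {"pass_rate": passed / len(cases), "scores": scores}

def overlap_judge(case: EvalCase) -> float:
    # Toy judge: word overlap with the reference. In production this
    # would be an LLM-as-a-judge call returning a calibrated score.
    ref = set(case.reference.lower().split())
    ans = set(case.answer.lower().split())
    return len(ref & ans) / max(len(ref), 1)
```

The same `run_judge` loop can run offline against a golden dataset in CI, or online against sampled production traffic, which is what lets the pass rate become a trusted release gate.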
2. Scaling AI Expertise
Squad Empowerment: Serve as the "AI Technical Lead" for product squads, guiding them through the complexities of agent design, failure analysis, and prompting best practices.
Decentralize Quality: Instead of being a bottleneck, you will build the "paved road" that allows product squads to become autonomous in measuring and maintaining their own agent quality.
Standard Setting: Define what "good" looks like for AI at Gorgias. You’ll translate non-deterministic AI behavior into predictable engineering metrics that the whole organization can trust.
3. Engineering Leadership
Mentor & Level Up: Bridge the gap between traditional software engineering and AI. You’ll mentor engineers on how to apply rigorous system design to the world of LLMs and agents.
Continuous Observability: Take ownership of the feedback loop, ensuring that production insights from our agents directly inform the next iteration of our evaluation datasets.
8+ Years of Engineering Excellence: You are a Staff-level engineer first. You’ve built systems that handle high scale, and you know how to architect for long-term maintainability and performance.
Agentic Curiosity: You’ve moved beyond the "chatbot" phase and are actively experimenting with AI Agents. You understand that the challenge isn't the prompt, but the orchestration, state management, and reliability of the agent's actions.
Systems Thinker (Non-Deterministic Mindset): You recognize that AI is probabilistic. You are excited by the challenge of building deterministic "wrappers" and evaluation loops around models to make them safe for production.
The "Applied" Edge: You likely come from a background in distributed systems, internal platforms, or developer tooling, and you're now applying that rigor to the AI stack.
Beyond the Wrapper: You have serious experience moving beyond simple API calls to architecting multi-stage AI orchestrations (agents, chained workflows, or custom runtime logic).
Orchestration Experience: Even if you aren't an AI researcher, you have experience building complex, multi-step workflows (e.g., workflow engines like Temporal, state machines, or event-driven architectures) and want to apply this to agentic loops.
Reliability Obsession: You understand why "vibes-based" testing doesn't work. You’ve started exploring or building eval frameworks to measure how models perform against real-world data.
Infrastructure Mindset: You are comfortable with the "glue" that makes AI work: vector databases, semantic caching, and API integration with third-party tools.
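A deterministic "wrapper" in the sense used above might look like the following sketch. Everything here is illustrative: the schema, the function names, and the stubbed `call_model` (which stands in for a real LLM API) are invented for the example:

```python
import json

def call_model(prompt: str) -> str:
    # Stand-in for a real LLM call. Real model output may be
    # malformed or off-schema, which is what the wrapper guards against.
    return '{"intent": "refund", "confidence": 0.92}'

def classify_with_guardrails(prompt: str, retries: int = 3) -> dict:
    """Wrap a probabilistic model in deterministic validation:
    parse the output, check it against a fixed schema, and retry
    on failure, so downstream code always sees a known shape."""
    for _ in range(retries):
        raw = call_model(prompt)
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed JSON: retry
        if isinstance(parsed.get("intent"), str) and isinstance(parsed.get("confidence"), float):
            return parsed
    raise ValueError("model output failed validation after retries")
```

The design choice is that the model stays free to be probabilistic, while the surface exposed to the rest of the system is typed and predictable, and every validation failure becomes a measurable event rather than a silent bug.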
Strong backend experience (Python preferred)
Experience with distributed systems and event-driven architectures
Familiarity with tools like Kafka, Pub/Sub, or equivalent
Experience working with LLMs (prompting, RAG, agents, evaluation workflows)
Experience building APIs and scalable services
Understanding of monitoring, observability, and system performance