Function: Quality

We’re Hitachi Vantara, the data foundation trusted by the world’s innovators. Our resilient, high-performance data infrastructure means that customers – from banks to theme parks – can focus on achieving the incredible with data.
If you’ve seen the Las Vegas Sphere, you’ve seen just one example of how we empower businesses to automate, optimize, innovate – and wow their customers. Right now, we’re laying the foundation for our next wave of growth. We’re looking for people who love being part of a diverse, global team – and who get excited about making a real-world impact with data.
About Us
We’re Hitachi Vantara, a global infrastructure business. Our people are the force of meaningful progress. We enable the incredible with data – from taking theme park fans on magical rides and conserving natural resources to protecting rainforests and saving lives. We empower businesses to automate, optimize, and advance innovation. Together, we create a sustainable future for all.
Imagine the sheer breadth of talent it takes to inspire the future. We don’t expect you to ‘fit’ every requirement – your life experience, character, perspective, and passion for achieving great things in the world are equally important to us.
Role Overview
Hitachi Vantara is seeking an AI SDET to join our Global Quality Assurance team. In this role, you will design and execute test strategies for AI-driven systems, including LLM-powered features, machine learning components, and intelligent automation workflows.
You will collaborate closely with product teams, data scientists, ML engineers, and platform teams to validate AI behavior across the full lifecycle – from data ingestion and model inference to system integration and user-facing functionality. This includes evaluating model outputs, ensuring reliability and safety, validating data pipelines, and building tooling that supports predictable, high-quality AI experiences.
This role is ideal for an engineer who enjoys blending software testing, system integration, and applied AI behavior analysis.
Experience: 7–10+ years
Job Location: Kolkata, India preferred
Key Responsibilities
AI System Validation: Test and validate AI/ML features across internal models, embedded AI components, and integrated AI services.
Model Behavior Evaluation:Assess model inference quality, prompt/response behavior, performance, safety, and robustness across diverse scenarios.
Data Pipeline & Schema Verification: Validate data contracts, feature schemas, and payload structures used by AI models and downstream systems.
End-to-End Workflow Testing: Ensure AI-driven workflows function reliably across internal components, orchestration layers, and user-facing applications.
Tooling & Automation:Build tools to simulate AI behavior, automate evaluation workflows, generate test datasets, and support continuous validation.
AI Quality & Reliability: Identify edge cases, failure modes, drift indicators, and quality gaps in AI-powered features.
Security & Responsible AI Checks: Validate compliance with security, privacy, fairness, and responsible AI guidelines.
Cross-Functional Collaboration: Work with product, engineering, and data science teams to troubleshoot issues, validate fixes, and improve AI feature readiness.
Documentation:Produce clear documentation for AI test plans, expected behaviors, evaluation criteria, and validation procedures.
Continuous Improvement:Propose enhancements to improve AI reliability, observability, and test coverage across the lifecycle.
Required Skills
7–10+ years of experience in software testing or SDET roles, with 2+ years focused on AI/ML systems.
Strong proficiency in Python for test development, data validation, and evaluation tooling.
Strong foundational knowledge of storage technologies such as block storage, object storage (S3-compatible systems), NFS, or distributed file systems.
Knowledge of Hitachi products such as VSP360 and OpsCenter will be an added advantage.
Experience with AI/ML frameworks or platforms such as PyTorch, TensorFlow, Scikit-learn, Hugging Face, Azure AI, AWS AI/ML, or Google Vertex AI.
Understanding of AI/ML concepts including:
model inference behavior
evaluation metrics
prompt/response quality
drift, bias, and fairness considerations
Experience testing LLM-based systems or ML-powered features.
Familiarity with MLOps tools and workflows (MLflow, Kubeflow, SageMaker, Vertex AI, etc.).
Strong understanding of QA methodologies, SDLC, and agile practices.
Experience with Linux/UNIX environments and containerized workflows (Docker).
Excellent communication skills and ability to collaborate across global teams.