The AI Trust Innovation Technologist strengthens SGS's Digital Research & Ventures capabilities by actively building, testing, and analysing AI systems to develop credible independent validation and monitoring services. The role combines deep AI engineering expertise with venture-oriented innovation, evaluating emerging technologies and startups while translating hands-on experimentation into scalable Digital Trust validation solutions.
Key responsibilities:
- Analyse emerging AI technologies and real-world AI system architectures (e.g., LLM-based systems, ML pipelines, multimodal systems) to identify where independent validation, testing, or monitoring by SGS is technically feasible and valuable.
- Technically assess AI risks, including robustness failures, bias/fairness issues, explainability limits, data integrity risks, cybersecurity vulnerabilities, and misuse scenarios (e.g., deepfakes, hallucinations), and translate them into potential validation or monitoring service opportunities.
- Develop, prototype, and evaluate AI validation approaches (e.g., adversarial testing, dataset validation, interpretability methods, provenance/watermarking) to assess technical feasibility and scalability for Digital Trust services.
- Interpret AI regulations and standards (e.g., EU AI Act, ISO/IEC AI standards, NIST AI RMF) and translate their technical implications into viable validation, monitoring, or independent evaluation approaches.
- Engage with universities, AI research labs, startups, and technology leaders to track cutting-edge AI system developments and explore collaboration, experimentation, and validation opportunities.
- Assess AI startups, tools, and platforms for technical maturity, architectural soundness, evaluation robustness, and strategic fit with SGS's AI Trust ambitions.
- Provide technical insight and hands-on validation input for AI-related build-buy-partner-invest evaluations, including assessment of model architectures, evaluation methodologies, and system scalability.
- Contribute expert insight to Digital Trust marketing, thought leadership, and internal education on AI trust issues.
- Work cross-functionally with business lines, M&A, R&D, innovation teams, and IT to technically assess AI systems, prototype validation approaches, and support early-stage AI trust initiatives.
- Build and experiment with AI systems directly to deeply understand system behaviour, validation challenges, and potential service design implications.
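To give a concrete flavour of the hands-on validation work described above (e.g., the adversarial and robustness testing mentioned under prototyping), a minimal robustness probe might look like the following sketch. Everything here is illustrative: `toy_classifier` stands in for a real model under test, and the perturbation and metric are deliberately simple assumptions, not a prescribed SGS methodology.

```python
import random
import string

def toy_classifier(text: str) -> str:
    """Stand-in for a model under test: naive keyword-based sentiment rule."""
    positive = {"good", "great", "excellent"}
    return "positive" if set(text.lower().split()) & positive else "negative"

def perturb(text: str, rng: random.Random) -> str:
    """Character-level noise: replace one random character with a random letter."""
    if not text:
        return text
    i = rng.randrange(len(text))
    return text[:i] + rng.choice(string.ascii_lowercase) + text[i + 1:]

def flip_rate(texts, n_trials: int = 100, seed: int = 0) -> float:
    """Fraction of random perturbations that change the classifier's label."""
    rng = random.Random(seed)
    flips = total = 0
    for text in texts:
        baseline = toy_classifier(text)
        for _ in range(n_trials):
            if toy_classifier(perturb(text, rng)) != baseline:
                flips += 1
            total += 1
    return flips / total

samples = ["the service was excellent", "a good result overall", "nothing special here"]
print(f"label flip rate under noise: {flip_rate(samples):.2%}")
```

A real validation prototype would swap in the actual model, domain-appropriate perturbations (paraphrases, typos, adversarial suffixes), and acceptance thresholds agreed with the service design, but the structure (baseline, perturb, measure divergence) is the same.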