- Lead and manage the global IT data engineering team, which develops all technical artefacts as code in professional IDEs, with full version control and CI/CD automation. The team combines lakehouse modeling of common and business use case artefacts & semantics with generalist data integration & metadata services.
- Ensure high-quality delivery of data engineering assets that enable business analytics, AI/ML integration, and data governance at scale.
- Act as the delivery and people manager for the data engineering team co-located in Bengaluru, collaborating globally with platform, business, and other IT stakeholders.
- Drive consistency, engineering excellence, and cross-domain reuse across the entire data engineering lifecycle, from data acquisition to semantic-layer delivery, while applying rigorous software engineering practices such as modular design, test-driven development, and artefact reuse in all implementations.
- Direct management of approx. 10–15 data engineers (generalists and specialists). Reports to the global Head of Data & Analytics within the IT Competence Center.
- The team delivers data engineering & analytics assets to all business domains via the Product Owner for Data & Analytics.
- Collaborates with Product Owners, Lead Architects & Lead Engineers, Data Governance, Infrastructure & Cybersecurity, and domain-aligned functional IT teams globally.
Main Tasks
- Line management for a high-performing, cross-functional data engineering team.
- Drive skill development, mentorship, and performance management.
- Foster a culture of accountability and trust.
- Own timely delivery of data & analytics assets from data acquisition to semantic layers.
- Align work with business priorities and architectural standards.
- Ensure quality gates and documentation.
- Act as primary escalation and coordination point across business domains.
- Bridge infrastructure, functional IT, cybersecurity, and platform decisions.
- Advocate for team in global forums.
- Guide adoption of engineering best practices (TDD, CI/CD, IaC) and the building of all technical artefacts as code, including scalable batch and streaming pipelines in Azure Databricks using PySpark and/or Scala.
- Lead the design and operation of scalable batch/stream pipelines in Databricks, including ingestion from structured and semi-structured sources and implementation of bronze/silver/gold layers under lakehouse governance (see the PySpark sketch after this list).
- Oversee dimensional modeling and curated data marts for analytics use cases, while ensuring semantic-layer compatibility and collaborating on enterprise 3NF warehouse integration.
- Ensure high-quality engineering practices across data validation, CI/CD-enabled TDD (see the test sketch after this list), performance tuning, metadata governance, and stakeholder collaboration via agile methods.
- Build an inclusive, high-performance team culture in Bengaluru.
- Champion DevSecOps, reuse, automation, and reliability. Commit all artefacts to version control with peer review and CI/CD integration.
- Ensure documentation, knowledge sharing, and continuous improvement.
- Lead the design and operation of scalable, secure ingestion services, including CDC, delta, full-load, and SAP extractions via tools such as Theobald Xtract Universal (see the CDC sketch after this list).
- Oversee integration with APIs, legacy systems, Salesforce, and file-based sources, while aligning all interfaces with cybersecurity standards and compliance protocols.
- Drive the development of the enterprise data catalog application, supporting dataset discoverability, metadata quality, and Unity Catalog-aligned access workflows.
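
To make the medallion responsibilities above concrete, here is a minimal PySpark sketch of a single bronze-to-silver batch step; the table names, columns, and validation rules are illustrative assumptions, not prescriptions from this role profile:

```python
# Minimal bronze -> silver step in a Databricks lakehouse.
# Table names, columns, and rules are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Bronze: raw, append-only ingest of semi-structured source data.
bronze = spark.read.table("lakehouse.bronze.sales_orders")

# Silver: cleansed, typed, de-duplicated records.
silver = (
    bronze
    .filter(F.col("order_id").isNotNull())               # basic validation
    .withColumn("order_ts", F.to_timestamp("order_ts"))  # enforce types
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .dropDuplicates(["order_id"])                         # idempotent re-runs
)

# Overwrite keeps the sketch simple; production jobs would more likely
# use incremental MERGE or Structured Streaming.
silver.write.mode("overwrite").saveAsTable("lakehouse.silver.sales_orders")
```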
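
Likewise, a hedged sketch of what CI/CD-enabled TDD can look like for such pipelines, written with pytest against a hypothetical clean_orders() transformation (the module path and function are assumed for illustration):

```python
# pytest sketch for a transformation under test in CI.
# pipelines.silver.orders.clean_orders is a hypothetical function.
import pytest
from pyspark.sql import SparkSession

from pipelines.silver.orders import clean_orders  # assumed module


@pytest.fixture(scope="session")
def spark():
    # Small local session so the suite runs in any CI runner.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


def test_clean_orders_drops_null_keys(spark):
    raw = spark.createDataFrame(
        [("o-1", "10.50"), (None, "3.20")], ["order_id", "amount"]
    )
    assert clean_orders(raw).filter("order_id IS NULL").count() == 0


def test_clean_orders_casts_amount(spark):
    raw = spark.createDataFrame([("o-1", "10.50")], ["order_id", "amount"])
    assert dict(clean_orders(raw).dtypes)["amount"] == "decimal(18,2)"
```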
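
Finally, a sketch of applying a CDC batch to a curated Delta table with MERGE, one common pattern behind the ingestion services named above; the source and target names and the "op" change-flag column are assumptions:

```python
# Applying a CDC batch to a Delta target via MERGE.
# Table names and the op column ('I'/'U'/'D') are assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cdc_apply").getOrCreate()

changes = spark.read.table("lakehouse.bronze.customer_changes")  # CDC feed
target = DeltaTable.forName(spark, "lakehouse.silver.customers")

(
    target.alias("t")
    .merge(changes.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedDelete(condition="s.op = 'D'")      # source-side deletes
    .whenMatchedUpdateAll(condition="s.op = 'U'")   # updates
    .whenNotMatchedInsertAll(condition="s.op = 'I'")
    .execute()
)
```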