Minimum 7 to 12 years of DevOps + Data Engineering related technology experience
Mandatory -
- Expert-level understanding and experience in the DevOps area.
- Expert-level understanding and working experience with Kubernetes in the areas below:
- GitOps (ArgoCD)
- (Nice to have) Experience provisioning data platforms (Kubeflow, ThingsBoard, Apache Superset, Dremio, etc.)
- Experience with Kubernetes resource templating using Helm charts and/or Kustomize
- (Nice to have) Helmfile
- Ingress Controllers (Traefik and Nginx)
- cert-manager / Let's Encrypt
- Load Balancers
- HAProxy
- Hands-on experience with the following Azure services:
- Private Node Pools
- Key Vault
- Application Gateway
- PostgreSQL
- Event Hubs
- VNETs and Subnets Configuration
- Virtual Machines
- Container Registries
- Cost Management and Billing
- Entra ID
- Log Analytics
- Expert-level understanding and working experience in IaC (Infrastructure as Code) with Terraform
- Understanding of Docker, Dockerfiles, Docker registries, and automated builds
- Hands-on experience with Azure DevOps (CI/CD pipelines, releases, project administration, package registries for pip, Maven, etc.)
- Understanding of GitFlow and Semantic Versioning
- Practitioner of Agile methodology (Scrum/Kanban)
- Knowledge and experience in code management, code versioning, GitFlow, and release planning.
- Excellent communication, presentation, and documentation skills.
- Mindset: takes initiative, team player, keen to learn, adapts to change.
Good to have -
- Expert-level understanding of distributed computing principles (big data processing).
- Working experience as a Data Engineer in a cloud environment (Microsoft Azure).
- Understanding of designing data pipelines for ETL processes (Databricks/Delta tables/Spark).
- Good knowledge and hands-on experience with Apache Spark (batch and streaming data).
- Understanding of and experience in designing and setting up Delta lakehouse architecture (Data Vault 2.0, data mart/star schema, Snowflake)
- Exposure to query technologies like Dremio.