Join our exciting team as a Data Engineer with the following job scope:
- Architect and implement data solutions on our large-scale Hadoop data platform using PySpark.
- Write efficient, well-structured SQL queries to extract and transform data from various sources.
- Design, build, and maintain scalable and reliable data pipelines using tools like Apache Airflow and Kafka.
- Develop and optimize data processing jobs on the Microsoft Azure cloud platform, leveraging services such as Azure Data Lake, Databricks, and Azure Data Factory.
- Collaborate with data analysts and other stakeholders to understand data requirements and deliver high-quality data products.
- Ensure data quality and integrity across all systems.
- Monitor, troubleshoot, and optimize the performance of our data infrastructure.
- Stay up to date with the latest trends and technologies in the Big Data space.