About the Role
We are seeking a Data Engineer Lead to develop, maintain, and optimize data pipelines and systems that support the acquisition, storage, transformation, and analysis of large volumes of data.
You will collaborate with cross-functional teams, including analysts, software engineers, and operations, to ensure the availability, reliability, and integrity of data for various business needs.
* Design, develop, and maintain data pipelines in Azure for ingesting, transforming, and loading data from various sources into centralized Azure data lakes, Databricks Delta Lake, and Snowflake.
* Implement efficient ELT/ETL processes that ensure data quality, consistency, and reliability throughout the pipeline.
* Develop transformation processes to clean, aggregate, and enrich raw data, ensuring it is in the appropriate format for downstream analysis and consumption.
You will work closely with cross-functional teams to understand data requirements and translate them into technical solutions. You will also monitor data pipelines, troubleshoot issues, and ensure data integrity and security.
This role requires strong technical expertise in data engineering principles, database management, and programming. We are looking for a candidate with solid knowledge of big data technologies and strong development experience with Databricks and Snowflake.