Job Summary
We are seeking an experienced Data Engineer to join our team. Your primary responsibility will be to design, develop, and maintain data pipelines and solutions using a range of technologies. You should have expertise in SQL with a focus on data warehousing, as well as experience with Azure Databricks, PySpark, Azure Data Factory, and Azure Data Lake. Strong knowledge of data engineering fundamentals and of working with Parquet/Delta tables in Azure Data Lake is also required.
Job Responsibilities
- Designing and developing data pipelines to extract, transform, and load data from various sources into the data warehouse.
- Writing complex SQL queries for data extraction and manipulation from the data warehouse.
- Building and maintaining ETL processes using Azure Databricks with PySpark.
- Implementing data integration workflows using Azure Data Factory.
- Collaborating with cross-functional teams, including developers, data analysts, and business stakeholders, to understand requirements and deliver high-quality solutions.
- Optimizing performance of the data pipelines and ensuring scalability and reliability of the systems.
- Monitoring data quality and troubleshooting issues in collaboration with the operations team.
- Maintaining documentation of the design and implementation of the data pipelines.
Basic Qualifications
- Expertise in SQL, ideally with experience working with data warehousing concepts and technologies.
- Strong hands-on experience with Azure Databricks with PySpark.
- Proficiency in designing and implementing data integration workflows using Azure Data Factory.
- Solid understanding of data engineering fundamentals including data modeling, data transformation, and performance optimization techniques.
- Experience working with Azure Data Lake for storing large data sets, maintaining Parquet/Delta tables, and performing efficient querying.
Country Restrictions: Not available in Venezuela or Cuba
Job Type: Remote
Job Location: LATAM