We are seeking a highly skilled Data Engineer with deep expertise in PySpark and the Cloudera Data Platform (CDP) to join our data engineering team. As a Data Engineer, you will be responsible for designing, developing, and maintaining scalable data pipelines that ensure high data quality and availability across the organization. This role requires a strong background in big data ecosystems, cloud-native tools, and advanced data processing techniques.
The ideal candidate has hands-on experience with data ingestion, transformation, and optimization on the Cloudera Data Platform, along with a proven track record of applying data engineering best practices. You will work closely with data engineers, analysts, and business stakeholders to build solutions that drive impactful business insights.
Responsibilities:
- Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
- Data Ingestion: Implement and manage data ingestion processes from various sources (relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
- Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into formats that serve analytical and business requirements (a simplified sketch of such a pipeline follows this list).
- Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing ETL process runtime.
- Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
- Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar within the Cloudera ecosystem.
- Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
- Collaboration: Work closely with other data engineers, analysts, product managers, and stakeholders to understand data requirements and support various data-driven initiatives.
- Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.
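
To give a concrete sense of the day-to-day work described above, here is a deliberately simplified PySpark sketch of such a pipeline: it ingests raw files from a landing zone, cleanses and transforms them, applies a basic data quality check, and writes the result to a Hive-managed table. All paths, table names, and quality rules are hypothetical placeholders rather than a description of our actual pipelines.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical example: names, paths, and rules are placeholders.
spark = (
    SparkSession.builder
    .appName("orders_etl_example")
    .enableHiveSupport()  # assumes a Hive metastore is available on CDP
    .getOrCreate()
)

# Ingest: read raw CSV files landed in the data lake (path is illustrative).
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("/data/landing/orders/")
)

# Transform and cleanse: normalize column names, drop rows without a key,
# and derive a typed order_date column.
clean = (
    raw.select([F.col(c).alias(c.strip().lower()) for c in raw.columns])
       .filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
)

# Data quality check: fail fast if any amounts are negative (illustrative rule).
bad_rows = clean.filter(F.col("amount") < 0).count()
if bad_rows > 0:
    raise ValueError(f"Data quality check failed: {bad_rows} rows with negative amount")

# Load: write the result as a partitioned, Parquet-backed Hive table.
(
    clean.write
    .mode("overwrite")
    .partitionBy("order_date")
    .format("parquet")
    .saveAsTable("analytics.orders_clean")
)
```

In practice, a job like this would be parameterized and scheduled through an orchestrator such as Apache Airflow or Oozie, and monitored as described above.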
Qualifications:
- Education and Experience: Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field, plus relevant professional data engineering experience (detailed under Technical Skills).
- Technical Skills:
  - 3+ years of experience as a Data Engineer, with a focus on PySpark and the Cloudera Data Platform.
  - Proficiency in PySpark programming.
  - Strong understanding of Spark architecture, data partitioning, and repartitioning.
  - Experience with the Hadoop ecosystem and Hive (Oracle experience is also acceptable).
  - Ability to work with large data structures and perform transformations in Spark.
  - Basic knowledge of Git operations.
- Key Areas:
  - Basic Spark architecture.
  - Data movement in Spark.
  - Spark troubleshooting and optimization.
  - SQL query writing and explanation (a brief illustrative sketch follows this list).
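
For candidates gauging the expected depth in these areas, the snippet below is a minimal, hypothetical illustration (not part of our codebase): it contrasts repartition(), which performs a full shuffle, with coalesce(), which merges partitions without one, and shows how a Spark SQL query plan can be inspected with explain(). All dataset, view, and column names are invented for the example.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("spark_key_areas_example").getOrCreate()

# Invented dataset: one million rows with a derived "region" key.
df = spark.range(1_000_000).withColumn("region", (F.col("id") % 4).cast("string"))

# repartition() triggers a full shuffle; useful for rebalancing data
# or co-locating rows by key before a wide operation such as a join.
by_region = df.repartition(8, "region")

# coalesce() reduces the partition count without a full shuffle;
# commonly used before writes to avoid producing many small files.
compact = by_region.coalesce(2)

# SQL writing and explanation: register a temporary view, run a query,
# and inspect the physical plan (Exchange nodes mark data movement).
df.createOrReplaceTempView("events")
agg = spark.sql("SELECT region, COUNT(*) AS cnt FROM events GROUP BY region")
agg.explain()
agg.show()
```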
Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent verbal and written communication abilities.
- Ability to work independently and collaboratively in a team environment.
- Attention to detail and commitment to data quality.
Language Requirements: English proficiency is required; additional languages are a plus.
Location: Dubai, United Arab Emirates
Work Conditions: On-site, Full-time