Posted on: 14/09/2025
Job Summary :
The ideal candidate will have extensive hands-on experience with Python and AWS data services, including Redshift, Glue, and Lambda. You will be responsible for building robust data pipelines, managing data warehouses and data lakes, and integrating diverse data sources into a reliable, well-governed flow of information. This role requires a strong technical background, excellent communication skills, and a customer-focused mindset to drive business success.
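For a concrete sense of the serverless side of this stack, the sketch below shows one way a pipeline step might be wired together: an AWS Lambda handler that starts a Glue job when a new file lands in S3. This is an illustrative sketch only; the bucket layout and the Glue job name "raw-to-lake-etl" are hypothetical, not details of this role.

    import boto3

    glue = boto3.client("glue")

    def handler(event, context):
        # Triggered by an S3 put event; starts a Glue ETL run for each new object.
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            response = glue.start_job_run(
                JobName="raw-to-lake-etl",  # hypothetical Glue job name
                Arguments={"--source_path": f"s3://{bucket}/{key}"},
            )
            print(f"Started Glue run {response['JobRunId']} for s3://{bucket}/{key}")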
Key Responsibilities :
Data Pipeline & Integration :
- Utilize workflow management tools such as Apache Airflow, Luigi, or Azkaban to orchestrate complex data flows (a minimal Airflow sketch follows this list).
- Integrate data from various sources into a centralized Data Lake and Data Warehouse.
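As a rough illustration of the orchestration described above, here is a minimal Apache Airflow DAG. The DAG id, task names, and callables are placeholders; a real pipeline would be shaped by the sources and targets involved.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Placeholder: pull data from a source system into the data lake.
        pass

    def load_to_warehouse():
        # Placeholder: load curated data into the warehouse (e.g. Redshift).
        pass

    with DAG(
        dag_id="example_daily_pipeline",  # hypothetical name
        start_date=datetime(2025, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
        extract_task >> load_task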
Big Data Development :
- Develop and optimize SQL queries for data manipulation and analysis within the data warehouse (see the example below).
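Purely for illustration, a query of this kind might be run from Python against Redshift, which is accessed over the PostgreSQL protocol; the cluster endpoint, credentials, and the "sales" table below are hypothetical.

    import os

    import psycopg2

    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # hypothetical endpoint
        port=5439,
        dbname="analytics",
        user="etl_user",
        password=os.environ.get("REDSHIFT_PASSWORD", ""),
    )

    query = """
        SELECT customer_id, SUM(amount) AS total_spend
        FROM sales                      -- hypothetical fact table
        WHERE sale_date >= %s
        GROUP BY customer_id
        ORDER BY total_spend DESC
        LIMIT 100;
    """

    with conn.cursor() as cur:
        cur.execute(query, ("2025-01-01",))
        for customer_id, total_spend in cur.fetchall():
            print(customer_id, total_spend)

    conn.close()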
Platform Management :
- Ensure the data infrastructure is performant, secure, and scalable to meet growing business needs.
Required Skills & Qualifications :
Core Experience :
- Hands-on experience with Python coding is a must.
- Proven experience with data engineering, data integration, and data pipeline development.
Technical Proficiency :
- Proficiency in writing Spark code in Python using PySpark (see the sketch after this list).
- Expertise in SQL.
- Experience with data pipeline and workflow management tools like Azkaban, Luigi, or Apache Airflow.
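To ground what PySpark proficiency looks like in practice, the sketch below reads raw data from a data lake, aggregates it with the DataFrame API, expresses the same logic in SQL, and writes a curated dataset back out. Paths, table names, and columns are hypothetical.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("example_transform").getOrCreate()

    # Hypothetical data-lake location and schema.
    orders = spark.read.parquet("s3://example-data-lake/raw/orders/")

    daily_revenue = (
        orders
        .filter(F.col("status") == "COMPLETED")
        .groupBy("order_date")
        .agg(F.sum("amount").alias("revenue"))
    )

    # The same aggregation expressed in SQL against a temporary view.
    orders.createOrReplaceTempView("orders")
    daily_revenue_sql = spark.sql("""
        SELECT order_date, SUM(amount) AS revenue
        FROM orders
        WHERE status = 'COMPLETED'
        GROUP BY order_date
    """)

    daily_revenue.write.mode("overwrite").parquet("s3://example-data-lake/curated/daily_revenue/")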
Professional Attributes :
- Strong consultative and management skills.
- Excellent communication and interpersonal skills.
Preferred Skills :
- Certification in a cloud platform (AWS or GCP).
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1545907