Posted on: 21/10/2025
Description:
We are looking for a highly skilled Senior Data Engineer to join our data engineering team.
The ideal candidate will design, build, and optimize robust and scalable data pipelines and platforms to support our data analytics, reporting, and machine learning initiatives.
You will work closely with data scientists, analysts, and software engineers to ensure efficient data flow, high data quality, and availability across multiple systems.
This role requires strong expertise in modern data engineering tools, cloud platforms, and best practices for managing large-scale data infrastructures.
Key Responsibilities:
- Design, develop, and maintain scalable and reliable data pipelines and ETL/ELT workflows for batch and real-time data processing.
- Architect and implement data ingestion, transformation, and storage solutions using cloud services (AWS, Azure, or GCP) and big data technologies.
- Build and manage data lakes, data warehouses, and data marts to enable efficient data analytics and business intelligence.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and translate them into technical solutions.
- Optimize data processing workflows to improve performance, reduce latency, and control costs.
- Implement data quality, validation, and monitoring frameworks to ensure accuracy and reliability.
- Develop and maintain documentation, including data dictionaries, pipeline architecture, and operational procedures.
- Enforce data governance, security, and compliance standards to protect sensitive information.
- Mentor junior data engineers and support continuous improvement of engineering practices.
- Stay updated with the latest trends and advancements in data engineering, cloud platforms, and big data ecosystems.
Required Skills & Experience:
- 5+ years of hands-on experience in data engineering or related roles.
- Strong proficiency in Python, SQL, and data processing frameworks such as Apache Spark, Flink, or Beam.
- Experience designing and implementing ETL/ELT pipelines using tools like Airflow, AWS Glue, Databricks, or similar orchestration platforms.
- Hands-on expertise with cloud platforms and services like AWS (S3, Redshift, EMR, Lambda), Azure Data Factory, Google Cloud Dataflow, or equivalent.
- Experience with data warehousing concepts and tools such as Snowflake, BigQuery, Redshift, or Azure Synapse Analytics.
- Familiarity with NoSQL and relational databases (e.g., MongoDB, Cassandra, PostgreSQL, MySQL).
- Strong understanding of data modeling, schema design, and database optimization.
- Experience with containerization (Docker) and container orchestration (Kubernetes) is a plus.
- Knowledge of data governance, security best practices, and compliance standards (GDPR, HIPAA).
- Excellent problem-solving skills and ability to troubleshoot data issues in complex distributed environments.
- Strong communication and collaboration skills, with experience working in Agile teams.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1562908