Posted on: 27/11/2025
Description:
Job Overview:
- We are looking for a skilled Data Engineer to design, build, and maintain scalable data pipelines, ETL processes, and modern data platforms.
- The ideal candidate will have strong experience in data engineering, cloud technologies, distributed processing, and a solid understanding of data modeling and warehousing principles.
- You will work closely with analytics, data science, and engineering teams to enable efficient, reliable, and secure data flows across the organization.
Key Responsibilities:
- Design, build, and optimize ETL/ELT pipelines to ingest, transform, and deliver data across systems.
- Automate batch and streaming data workflows using modern orchestration tools (an illustrative sketch follows this list).
- Implement high-quality, reusable, and scalable pipeline components.
- Design dimensional, relational, and/or data lake/lakehouse models based on business needs.
- Build and maintain data warehouses and data marts (Snowflake, BigQuery, Redshift, Synapse, Databricks).
- Apply best practices in schema design, partitioning, indexing, and optimization.
- Work with distributed data processing frameworks such as Spark, PySpark, Hadoop, Kafka, Flink, or Beam.
- Develop large-scale transformations and ensure performance tuning and efficient resource utilization.
- Handle structured, semi-structured, and unstructured data workflows.
- Develop and deploy data pipelines on AWS, Azure, or GCP.
- Work with cloud storage (S3, ADLS, GCS), compute services, serverless components, and infrastructure-as-code (Terraform/CloudFormation).
- Implement secure, scalable, and cost-efficient cloud architectures.
- Implement data validation, quality checks, and monitoring across pipelines.
- Maintain metadata, lineage, and documentation.
- Ensure compliance with security, privacy, and governance standards (RBAC, encryption, PII handling, etc.).
- Work closely with data analysts, BI teams, and data scientists to understand data requirements.
- Support analytics and ML workloads by provisioning clean, curated, and reliable datasets.
- Participate in Agile/Scrum processes and contribute to sprint planning.
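To give a concrete flavor of the orchestration responsibilities above, here is a minimal sketch of a daily batch ETL DAG, assuming Apache Airflow 2.4+; the DAG id, task logic, and data shapes are hypothetical placeholders, not part of this role's actual stack.

```python
# Minimal daily batch ETL DAG sketch (assumes Apache Airflow 2.4+).
# All names and data below are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull raw records from a source system.
    return [{"id": 1, "amount": 42.0}]


def transform(ti, **context):
    # Pull the upstream task's return value from XCom and enrich it.
    rows = ti.xcom_pull(task_ids="extract")
    return [{**row, "amount_usd": row["amount"]} for row in rows]


def load(ti, **context):
    # Placeholder: write curated rows to the warehouse.
    rows = ti.xcom_pull(task_ids="transform")
    print(f"Loaded {len(rows)} rows")


with DAG(
    dag_id="daily_sales_etl",  # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```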
Required Skills & Qualifications:
- Strong programming experience in Python, Scala, or Java (Python preferred).
- Hands-on experience with Spark/PySpark and distributed data systems (a brief sketch follows this list).
- Proficiency in SQL and query optimization.
- Solid understanding of ETL/ELT concepts, data modeling, and data warehousing.
- Experience with cloud platforms (AWS/Azure/GCP) and related data services.
- Familiarity with CI/CD practices using Git, Jenkins, GitHub Actions, Azure DevOps, or similar tools.
- Knowledge of API integration, data ingestion patterns, and workflow orchestration (Airflow, ADF, Databricks Workflows, Dagster, Prefect).
- Strong problem-solving, debugging, and performance optimization skills.
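As a brief illustration of the Spark/PySpark skills listed above, here is a minimal transform sketch, assuming Spark 3.x; the storage paths and column names are hypothetical placeholders.

```python
# Minimal PySpark transform sketch (assumes Spark 3.x).
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_transform").getOrCreate()

# Read semi-structured raw data landed in cloud storage.
raw = spark.read.json("s3://raw-bucket/orders/")  # hypothetical path

# Simple data-quality gate: require a key, derive a partition column.
clean = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
)

# Write columnar output partitioned by date for cheap downstream scans.
(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://curated-bucket/orders/"))  # hypothetical path
```

Partitioning curated output by a date column is one common way to prune scans in downstream warehouse and BI queries.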
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1581580