Posted on: 28/11/2025
Description:
Job Summary:
We are seeking a skilled Data Engineer to design, build, and maintain scalable data pipelines, data warehouses, and analytics platforms.
The ideal candidate will have strong expertise in ETL/ELT processes, big data frameworks, cloud platforms, and SQL/NoSQL databases, and will collaborate with data scientists, analysts, and product teams to enable data-driven decision-making across the organization.
Key Responsibilities:
- Design, build, and maintain robust, scalable ETL/ELT pipelines for batch and real-time data processing.
- Ingest, transform, and load data from multiple sources including databases, APIs, and streaming platforms.
- Ensure data quality, reliability, and consistency across all pipelines.
- Design and implement data models, schemas, and warehouse structures (star and snowflake schemas, normalized/denormalized models).
- Work with data lakes, cloud warehouses, and operational databases.
- Optimize data storage and retrieval for query performance and scalability.
- Implement data processing using Apache Spark, Hadoop, Kafka, or similar frameworks (a minimal sketch follows this list).
- Work with cloud platforms like AWS, Azure, or GCP, leveraging services such as S3, Redshift, BigQuery, Synapse, Databricks, Glue, or EMR.
- Ensure pipelines are secure, maintainable, and compliant with organizational standards.
- Work closely with Data Scientists, Analysts, and BI teams to understand data requirements.
- Support analytics, reporting, and machine learning initiatives by providing clean, structured, and timely datasets.
- Troubleshoot, monitor, and optimize data workflows and pipelines.
- Implement unit testing, integration testing, and validation checks for all pipelines.
- Maintain technical documentation, metadata, and data lineage for datasets and workflows.
- Enforce best practices for data governance, monitoring, and observability.
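For illustration only, here is a minimal sketch of the kind of batch ETL step described above, written in PySpark. The bucket paths, column names, and quality rules are hypothetical placeholders, not part of this role's actual stack:

from pyspark.sql import SparkSession, functions as F

# All paths and column names below are hypothetical examples.
spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

# Ingest: read raw order events from a data lake landing zone.
raw = spark.read.json("s3://example-lake/landing/orders/2025-11-28/")

# Transform: apply basic quality gates, normalize types, derive columns.
clean = (
    raw.filter(F.col("order_id").isNotNull())      # drop incomplete records
       .dropDuplicates(["order_id"])               # enforce consistency
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("revenue", F.col("quantity") * F.col("unit_price"))
)

# Load: write a partitioned, query-friendly table to the warehouse zone.
(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-lake/warehouse/orders/"))

The same pattern generalizes to the streaming side of the role: the read/transform/load stages stay the same, with Kafka as the source and Spark Structured Streaming as the engine.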
Required Skills & Technical Expertise:
- Strong programming skills in Python, Java, or Scala.
- Hands-on experience with ETL frameworks and big data technologies: Apache Spark, Hadoop, Hive, Kafka, Flink.
- Strong SQL skills and experience with relational databases (PostgreSQL, MySQL, SQL Server).
- Experience with cloud data platforms: AWS (Redshift, Glue, S3, EMR), GCP (BigQuery, Dataflow), or Azure (Synapse, Data Lake).
- Knowledge of data modeling, warehousing, and OLAP/OLTP systems.
- Familiarity with workflow orchestration and transformation tools such as Apache Airflow, Luigi, or dbt (see the sketch after this list).
- Strong debugging, performance tuning, and optimization skills.
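As a hedged illustration of the orchestration skills listed above, the sketch below shows a minimal Apache Airflow DAG (assuming Airflow 2.4+). The DAG id, schedule, and task bodies are hypothetical placeholders:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical task callables; a real pipeline would trigger Spark jobs,
# warehouse loads, or dbt runs instead of printing.
def extract():
    print("pull data from source systems")

def transform():
    print("clean and model the data")

def load():
    print("load curated tables into the warehouse")

with DAG(
    dag_id="example_daily_pipeline",   # hypothetical name
    start_date=datetime(2025, 11, 28),
    schedule="@daily",                 # one run per day
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Dependencies: extract -> transform -> load.
    extract_task >> transform_task >> load_task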
Posted By
Riya Jain
Senior Talent Acquisition Specialist at MARKTINE TECHNOLOGY SOLUTIONS PRIVATE LIMITED
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1582238