hirist

Data Engineer - Python/SQL/ETL

MARKTINE TECHNOLOGY SOLUTIONS PRIVATE LIMITED
Anywhere in India/Multiple Locations
6 - 7 Years

Posted on: 28/11/2025

Job Description

Job Summary:

We are seeking a skilled Data Engineer to design, build, and maintain scalable data pipelines, data warehouses, and analytics platforms.

The ideal candidate will have strong expertise in ETL/ELT processes, big data frameworks, cloud platforms, and SQL/NoSQL databases, and will collaborate with data scientists, analysts, and product teams to enable data-driven decision-making across the organization.

Key Responsibilities:

- Design, build, and maintain robust, scalable ETL/ELT pipelines for batch and real-time data processing.

- Ingest, transform, and load data from multiple sources including databases, APIs, and streaming platforms.

- Ensure data quality, reliability, and consistency across all pipelines.

- Design and implement data models, schemas, and warehouse structures (star schema, snowflake, normalized/denormalized models).

- Work with data lakes, cloud warehouses, and operational databases.

- Optimize data storage and retrieval for query performance and scalability.

- Implement data processing using Apache Spark, Hadoop, Kafka, or similar frameworks.

- Work with cloud platforms like AWS, Azure, or GCP, leveraging services such as S3, Redshift, BigQuery, Synapse, Databricks, Glue, or EMR.

- Ensure pipelines are secure, maintainable, and compliant with organizational standards.

- Work closely with Data Scientists, Analysts, and BI teams to understand data requirements.

- Support analytics, reporting, and machine learning initiatives by providing clean, structured, and timely datasets.

- Troubleshoot, monitor, and optimize data workflows and pipelines.

- Implement unit testing, integration testing, and validation checks for all pipelines.

- Maintain technical documentation, metadata, and data lineage for datasets and workflows.

- Enforce best practices for data governance, monitoring, and observability.

Required Skills & Technical Expertise:

- Strong programming skills in Python, Java, or Scala.

- Hands-on experience with ETL frameworks and big data technologies: Apache Spark, Hadoop, Hive, Kafka, and Flink.

- Strong SQL skills and experience with relational databases (PostgreSQL, MySQL, SQL Server).

- Experience with cloud data platforms: AWS (Redshift, Glue, S3, EMR), GCP (BigQuery, Dataflow), or Azure (Synapse, Data Lake).

- Knowledge of data modeling, warehousing, and OLAP/OLTP systems.

- Familiarity with workflow orchestration and transformation tools such as Apache Airflow, Luigi, or dbt.

- Strong debugging, performance tuning, and optimization skills.
