
MonoSpear Technologies - Senior Data Engineer - ETL/ELT Workflows

MonoSpear Technologies Pvt Ltd
Anywhere in India/Multiple Locations
5 - 10 Years

Posted on: 10/11/2025

Job Description

About the Role :

We are looking for an experienced Senior Data Engineer to design, build, and optimize scalable data pipelines and infrastructure that power our analytics, reporting, and data science initiatives.

The ideal candidate will have a deep understanding of modern data architectures, strong programming skills, and experience working with large-scale distributed data systems.

You will play a key role in enabling data-driven decision-making across the organization.

Key Responsibilities :

- Design, develop, and maintain robust, scalable, and high-performance data pipelines for batch and real-time data processing.

- Build and optimize ETL/ELT workflows to ingest, transform, and integrate data from multiple sources.

- Architect and implement data warehouse and data lake solutions to support analytics and BI needs.

- Collaborate with data scientists, analysts, and business teams to understand data requirements and deliver reliable solutions.

- Ensure data quality, consistency, security, and governance across all systems.

- Implement best practices for data modeling, metadata management, and documentation.

- Optimize and monitor data workflows for performance, cost efficiency, and scalability.

- Mentor junior engineers and contribute to establishing engineering standards and best practices.

Required Skills & Qualifications :

- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.

- 5-10 years of hands-on experience as a Data Engineer or in a similar role.

- Strong programming skills in Python, Java, or Scala.

- Expertise in SQL and data modeling for analytical and transactional systems.

- Hands-on experience with ETL/ELT frameworks (e.g., Airflow, dbt, Luigi, Apache Beam).

- Proficiency with cloud data platforms such as AWS (Redshift, Glue, S3), GCP (BigQuery, Dataflow), or Azure (Synapse, Data Factory).

- Experience with big data technologies like Apache Spark, Kafka, Hadoop, or equivalent.

- Strong understanding of data warehousing concepts, performance tuning, and partitioning strategies.

- Familiarity with DevOps practices, version control (Git), and CI/CD for data systems.

