Posted on: 12/11/2025
Job Designation: Senior Data Ops Engineer Team Lead
Job Location: Bengaluru
Key Responsibilities:
Leadership & Team Management:
- Lead and mentor a team of DataOps engineers in designing and maintaining robust data pipelines.
- Plan, assign, and review team tasks to ensure timely and quality delivery.
- Collaborate with data engineers, data scientists, and business teams to prioritize data needs and ensure alignment with organizational goals.
- Drive best practices in coding standards, documentation, and deployment automation.
Technical Delivery:
- Manage real-time and batch data ingestion using Kafka for streaming and MySQL/Snowflake for storage and transformation.
- Implement and maintain data quality checks, validation, and reconciliation frameworks.
- Ensure pipeline observability, error handling, and alerting mechanisms for proactive issue resolution.
- Optimize Snowflake and MySQL queries for performance and cost efficiency.
- Lead migration or modernization initiatives (e.g., on-prem to Snowflake/cloud).
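The data quality, validation, and reconciliation work described above can be sketched in Python. This is a minimal, illustrative example only; the function name, the tolerance parameter, and the inline counts are hypothetical, and in practice the counts would come from MySQL/Snowflake queries (e.g. `SELECT COUNT(*) ...`) after a batch load:

```python
# Minimal sketch of a row-count reconciliation check between a source
# and a target table. Counts are passed in directly here; a real check
# would fetch them from the source and target databases.

def reconcile_counts(source_count: int, target_count: int,
                     tolerance: float = 0.0) -> dict:
    """Compare source vs. target row counts within a relative tolerance."""
    diff = abs(source_count - target_count)
    allowed = int(source_count * tolerance)
    return {
        "source": source_count,
        "target": target_count,
        "difference": diff,
        "passed": diff <= allowed,
    }

# Example: a load that dropped 5 of 1,000 rows fails a 0.1% tolerance.
result = reconcile_counts(1000, 995, tolerance=0.001)
print(result["passed"])  # False: drift of 5 rows exceeds the 1 row allowed
```

Checks like this typically run as a post-load step in the pipeline, with failures feeding the alerting mechanisms mentioned above.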
Governance & Operations:
- Maintain data security, access control, and compliance with enterprise standards.
- Define and track DataOps KPIs such as pipeline success rates, latency, and data quality metrics.
- Partner with Infrastructure and DevOps teams for seamless environment management and scalability.
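KPIs like the ones listed (pipeline success rate, latency) are typically derived from run metadata. A hedged sketch, where the run-record shape (`status`, `latency_s`) is an assumption made for illustration, not a format named in this posting:

```python
import statistics

# Hypothetical run records, as a pipeline scheduler might emit them.
runs = [
    {"status": "success", "latency_s": 42.0},
    {"status": "success", "latency_s": 55.0},
    {"status": "failed",  "latency_s": 610.0},
    {"status": "success", "latency_s": 48.0},
]

def pipeline_kpis(runs: list) -> dict:
    """Compute success rate and median latency over a window of runs."""
    ok = [r for r in runs if r["status"] == "success"]
    return {
        "success_rate": len(ok) / len(runs),
        "median_latency_s": statistics.median(r["latency_s"] for r in ok),
    }

print(pipeline_kpis(runs))  # {'success_rate': 0.75, 'median_latency_s': 48.0}
```

In production these numbers would be exported to a tool like Grafana or Prometheus rather than printed.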
Technical Skills Required:
Databases:
- Strong expertise in MySQL (query optimization, stored procedures, schema design).
- Advanced knowledge of Snowflake (data modelling, performance tuning, cost optimization).
ETL & Data Pipeline Tools:
- Hands-on experience with Pentaho Data Integration (Kettle) and/or StreamSets for ETL/ELT automation.
Streaming:
- In-depth understanding of Apache Kafka (topic configuration, producer/consumer setup, schema registry, stream processing).
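The producer/consumer setup mentioned above generally follows a poll-process-commit loop. A minimal stand-in sketch in plain Python: the `FakeConsumer` class below is an in-memory substitute invented for this example, not a real Kafka client API (a real setup would use a library such as confluent-kafka, whose interface this only loosely approximates):

```python
# At-least-once consumer loop shape: poll a message, process it, and
# commit the offset only after processing succeeds. FakeConsumer is an
# in-memory stand-in for a real Kafka consumer.

class FakeConsumer:
    def __init__(self, messages):
        self._messages = list(messages)
        self.committed = 0  # next offset to read; advances on commit

    def poll(self):
        """Return (offset, message) for the next uncommitted message, or None."""
        if self.committed < len(self._messages):
            return self.committed, self._messages[self.committed]
        return None

    def commit(self, offset):
        self.committed = offset + 1

def run_loop(consumer, handler):
    """Drain the consumer, committing each offset only after handling it."""
    processed = []
    while (item := consumer.poll()) is not None:
        offset, msg = item
        processed.append(handler(msg))
        consumer.commit(offset)  # committing after processing gives at-least-once
    return processed

c = FakeConsumer(["a", "b", "c"])
print(run_loop(c, str.upper))  # ['A', 'B', 'C']
```

Committing before processing would instead give at-most-once delivery; choosing between the two (or layering exactly-once semantics on top) is a core part of the stream-processing work this role covers.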
Programming:
- Proficient in Python for data automation, transformation scripts, and integration with APIs.
Monitoring & Observability:
- Familiarity with Grafana, Prometheus, or similar tools for performance and error tracking.
Cloud:
- Exposure to AWS/Azure/GCP data stack (S3, Lambda, Glue, Dataflow, etc.).
Posted in: Data Engineering
Functional Area: Big Data / Data Warehousing / ETL
Job Code: 1573239