Posted on: 12/01/2026
Role Overview:
We are looking for a Senior Data Engineer II to design, build, and optimize scalable data platforms that power analytics, reporting, and advanced data science use cases. The role requires strong hands-on expertise in data engineering, the ability to own complex pipelines end to end, and close collaboration with product, analytics, and ML teams. As a senior contributor, you will play a key role in shaping data architecture, improving reliability and performance, and mentoring junior engineers while driving best practices across the data ecosystem.
Key Responsibilities:
- Design, build, and maintain scalable, fault-tolerant data pipelines (batch and streaming)
- Develop and optimize data models to support analytics, BI, and downstream ML use cases
- Own end-to-end data workflows from ingestion to consumption
- Ensure high data quality, availability, and performance across platforms
- Build and manage ETL/ELT pipelines using modern data engineering frameworks
- Optimize query performance, storage, and compute costs
- Implement robust data validation, monitoring, and alerting mechanisms
- Support real-time and near-real-time data processing use cases where applicable
- Partner with Analytics, Data Science, Product, and Engineering teams to understand data needs
- Enable self-service analytics by creating reliable, well-documented datasets
- Translate business requirements into scalable technical solutions
- Enforce best practices around coding standards, version control, testing, and CI/CD
- Perform code reviews and provide technical guidance to junior engineers
- Contribute to architectural discussions and long-term data platform roadmap
Required Skills & Experience:
- Strong proficiency in SQL and at least one programming language such as Python or Scala
- Proven experience building data pipelines at scale
- Strong experience with data warehouses and lakehouses (Snowflake, BigQuery, Redshift, Databricks, etc.)
- Hands-on experience with ETL/ELT tools and frameworks (Airflow, dbt, Spark, Flink, etc.)
- Solid understanding of data modeling (dimensional, fact-based, and analytical models)
- Experience working in cloud environments (AWS, GCP, or Azure)
- Understanding of distributed systems, storage formats (Parquet, ORC), and data partitioning strategies
- Familiarity with CI/CD pipelines, Git-based workflows, and automation
- Experience implementing data quality checks, monitoring, and lineage
- Understanding of data security, access controls, and compliance best practices
Nice to Have:
- Exposure to streaming platforms (Kafka, Kinesis, Pub/Sub)
- Familiarity with analytics tools and BI consumption patterns
- Prior experience mentoring engineers or acting as a technical lead
Education:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
What We Look For:
- Strong problem-solving and analytical mindset
- Ability to work in fast-paced, data-driven environments
- Ownership mentality with a focus on reliability and scalability
- Clear communication and collaboration skills
Functional Area: Data Engineering
Job Code: 1600110