Posted on: 12/12/2025
Data Engineer I (10 to 12 Years Experience)
Position Overview:
We are seeking an experienced Data Engineer I with 10 to 12 years of expertise in designing, building, and optimizing data systems, pipelines, and workflows.
The role requires strong technical skills, analytical thinking, and the ability to work closely with cross-functional teams to ensure reliable, scalable, and high-performance data infrastructure.
Experience: 10 to 12 Years
Location: On-site / Hybrid (as required)
Employment Type: Full-Time
Key Responsibilities (KRAs):
- Design, develop, and maintain scalable data pipelines and ETL/ELT processes.
- Build and optimize data models, data lakes, and data warehouse systems to support analytics and product requirements.
- Ensure data quality, accuracy, consistency, and reliability across all data touchpoints.
- Collaborate with Product, Engineering, and Analytics teams to support data-driven decision-making.
- Implement performance optimization techniques for query processing, workflow execution, and data storage.
- Identify bottlenecks in data flow and resolve them through architectural improvements and automation.
- Develop data ingestion frameworks integrating structured, semi-structured, and unstructured datasets.
- Manage end-to-end data lifecycle including ingestion, transformation, storage, governance, and security.
- Use monitoring tools to track pipeline health, resolve issues, and minimize downtime.
- Contribute to best practices in data engineering, coding standards, version control, and documentation.
- Support organizational change initiatives related to data modernization and platform upgrades.
- Ensure alignment of data infrastructure with business goals, compliance standards, and operational KPIs.
Required Skills & Competencies:
- Strong proficiency in SQL, data modeling, and relational and NoSQL databases.
- Hands-on experience with modern ETL/ELT tools and pipeline orchestration frameworks.
- Expertise in Python, Scala, or Java for data engineering tasks.
- Deep familiarity with cloud platforms such as AWS, Azure, or GCP.
- Experience with big data technologies such as Spark, Hadoop, Kafka, or Databricks.
- Strong understanding of data warehousing concepts and distributed processing.
- Ability to diagnose performance issues, optimize pipelines, and improve processing workflows.
- Hands-on proficiency with data visualization and analytics tools for insight generation.
- Excellent communication and collaboration skills with cross-functional teams.
- Data-driven mindset with the ability to interpret operational KPIs and drive measurable improvements.
- Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1589099