Posted on: 26/11/2025
Description:
We are looking for a skilled Data Engineer to build, optimize, and maintain scalable data pipelines and data platforms. The role involves working with diverse data sources, designing efficient ETL/ELT workflows, and supplying analytics and reporting teams with high-quality, reliable datasets.
Core responsibilities include the following:
Data Engineering and Pipelines:
- Develop and maintain scalable data pipelines for ingestion, transformation, and integration across multiple systems.
- Work with structured and semi-structured data formats (CSV, JSON, Parquet) and various data sources (APIs, databases).
- Design and implement robust data models, ETL/ELT workflows, and validation processes (a small validation sketch follows this list).
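By way of illustration, here is a minimal SQL validation sketch; the staging_customers table and its columns are hypothetical, and real checks would depend on the actual data model:

    -- Flag duplicate business keys in a hypothetical staging table.
    SELECT customer_id, COUNT(*) AS duplicate_rows
    FROM staging_customers
    GROUP BY customer_id
    HAVING COUNT(*) > 1;

    -- Count rows missing a required field before loading downstream.
    SELECT COUNT(*) AS missing_email_rows
    FROM staging_customers
    WHERE email IS NULL;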
Data Platforms and Storage:
- Build and maintain data warehouses and data lakes for analytics and reporting use cases.
- Prepare and optimize datasets for Power BI dashboards and advanced analytics.
Programming and Optimization:
- Write high-quality SQL and Python (preferably Pandas) for data extraction, transformation, and loading.
- Debug, optimize, and improve stored procedures (SPs) for performance and reliability.
- Apply best-practice performance tuning techniques, including indexing, execution plan analysis, and query optimization.
- Use advanced SQL concepts such as CTEs, joins, dynamic queries, and window functions (LEAD, LAG), with a clear understanding of temporary vs. physical tables (a short sketch follows this list).
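As a brief illustration of how these concepts combine, the sketch below aggregates a hypothetical sales table in a CTE and uses LAG/LEAD to compare each day with its neighbours; the index supports the date-based grouping mentioned under performance tuning. Table, column, and index names are invented for the example:

    -- Hypothetical index supporting the date-based aggregation below.
    CREATE INDEX ix_sales_order_date ON sales (order_date);

    -- The CTE computes daily totals; LAG/LEAD compare each day with its neighbours.
    WITH daily_totals AS (
        SELECT order_date, SUM(amount) AS total_amount
        FROM sales
        GROUP BY order_date
    )
    SELECT
        order_date,
        total_amount,
        LAG(total_amount)  OVER (ORDER BY order_date) AS prev_day_total,
        LEAD(total_amount) OVER (ORDER BY order_date) AS next_day_total,
        total_amount - LAG(total_amount) OVER (ORDER BY order_date) AS day_over_day_change
    FROM daily_totals
    ORDER BY order_date;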
Collaboration and Governance:
- Work closely with BI developers and analytics teams to ensure seamless data availability.
- Support data governance, quality, and security initiatives.
- Document workflows and participate in testing and debugging activities.
Requirements:
- 3-5 years of experience in data engineering or similar roles (internship/project experience may be considered).
- Strong SQL expertise and experience with relational databases (SQL Server, PostgreSQL, MySQL).
- Hands-on experience with Python for data handling (Pandas preferred).
- Exposure to distributed processing frameworks such as Apache Spark.
- Familiarity with cloud platforms (Azure/AWS); willingness to learn both.
- Experience with Snowflake is a strong advantage.
- Experience creating data models and datasets for Power BI.
- Bachelor's degree in Computer Science, Engineering, or related field.
- Solid understanding of ETL/ELT processes and data integration frameworks.
- Strong problem-solving skills, attention to detail, and a learning mindset.
- Good communication skills and ability to work in collaborative teams.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1581082