Posted on: 29/10/2025
Description:
Key Responsibilities:
- Write complex and optimized SQL queries for data extraction, transformation, and loading (ETL).
- Utilize PySpark for big data processing and integration with Incorta.
- Collaborate with business analysts and stakeholders to understand reporting requirements and translate them into technical solutions.
- Optimize data models for performance and scalability.
- Conduct data validation and ensure data accuracy across systems.
- Troubleshoot and resolve Incorta and ETL-related issues.
- Prepare and maintain technical documentation for solutions developed.
Requirements:
- Experience in data engineering, BI development, or related roles.
- Proven expertise in Incorta platform (schema design, data loading, dashboard development).
- Strong SQL development skills, including query optimization and performance tuning.
- Working knowledge of PySpark for big data processing.
- Experience with ETL processes and data integration techniques.
- Good understanding of relational databases and data warehousing concepts.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.
Preferred Skills:
- Exposure to cloud platforms such as AWS, Azure, or GCP.
- Familiarity with Agile/Scrum methodologies.
- Knowledge of other BI tools is an added advantage.
Functional Area: Data Engineering
Job Code: 1567156