Posted on: 12/12/2025
Description :
You'll play a key role in building and optimizing data pipelines, ensuring data is clean, secure, and accessible for analytics and operations. This role offers the chance to work closely with talented teams, solve real-world data challenges, and make a direct impact on business decisions.
Key Responsibilities :
- Design and develop scalable, efficient data pipelines and ETL/ELT workflows.
- Build and automate data processing solutions using Python, Azure Data Factory, and Databricks.
- Develop and maintain data models, schemas, and warehouse structures on Azure.
- Work with SQL Server, ClickHouse, and MongoDB for data querying, modeling, and performance optimization.
- Create dashboards and data visualizations using Bold BI, Power BI, Superset, or similar tools.
- Integrate external systems and data sources using REST APIs and connectors.
- Optimize data ingestion, transformation, and storage for performance and reliability.
- Implement and manage version control workflows using Git.
- Collaborate with business stakeholders to understand data requirements and deliver high-quality solutions.
- Conduct data quality checks, troubleshoot issues, and ensure reliable data flows across systems.
- Maintain documentation for pipelines, models, SQL logic, and data workflows.
Required Technical Expertise :
- Proven experience in creating client-ready dashboards using Bold BI, Power BI, Superset, or similar BI tools.
- Experience in data modeling, ETL, and database management with SQL Server, ClickHouse, and MongoDB.
- Strong skills in Python for building data pipelines and automation.
- Hands-on expertise with Azure Data Factory and Databricks for data integration and analytics.
- Solid understanding of data warehousing concepts and experience working on Azure Cloud.
- Ability to design and optimize scalable data pipelines and architectures.
- Familiarity with Git for version control and teamwork.
- Knowledge of REST APIs for connecting to external data sources.
- Strong ability to convert business needs into data-driven insights and visualizations.
- Comfort with Linux commands, RegEx, and XML for data handling.
- Strong problem-solving mindset with a passion for tackling data challenges.
- Eagerness to learn, adapt, and work on new data tools and technologies.
What's in it for you :
- Opportunity to work autonomously while being part of a dynamic team.
- Tackle a wide range of challenges daily across multiple customers, continuously growing your expertise.
- Eternal respect from your colleagues as you improve the system.
Posted in
Data Engineering
Functional Area
Data Engineering
Job Code
1588796