Posted on: 22/09/2025
Key Responsibilities:
- Design, develop, and support robust ETL pipelines to extract, transform, and load data into analytical products that drive strategic organizational goals.
- Develop and maintain data workflows on platforms like Databricks and Apache Spark using Python and Scala.
- Create and support data visualizations using tools such as MicroStrategy, Power BI, or Tableau, with a preference for MicroStrategy.
- Implement streaming data solutions utilizing frameworks like Kafka for real-time data processing.
- Collaborate with cross-functional teams to gather requirements, design solutions, and ensure smooth data operations.
- Manage data storage and processing in cloud environments, with strong experience in AWS cloud services.
- Use knowledge of data warehousing, data modeling, and SQL to optimize data flow and accessibility.
- Develop scripts and automation tools using Linux shell scripting and other languages as needed.
- Ensure continuous integration and continuous delivery (CI/CD) practices are followed for data pipeline deployments using containerization and orchestration technologies.
- Troubleshoot production issues, optimize system performance, and ensure data accuracy and integrity.
- Work effectively within Agile development teams and contribute to sprint planning, reviews, and retrospectives.
Required Skills & Experience:
- At least 5 years of experience in developing ETL pipelines and data engineering workflows.
- A minimum of 3 years of hands-on experience in ETL development and support using Python/Scala on Databricks/Spark platforms.
- Strong experience with data visualization tools, preferably MicroStrategy, Power BI, or Tableau.
- Proficient in Python, Apache Spark, Hive, and SQL.
- Solid understanding of data warehousing concepts, data modeling techniques, and analytics tools.
- Experience working with streaming data frameworks such as Kafka.
- Working knowledge of Core Java, Linux, SQL, and at least one scripting language.
- Experience with relational databases, preferably Oracle.
- Hands-on experience with AWS cloud platform services related to data engineering.
- Familiarity with CI/CD pipelines, containerization, and orchestration tools (e.g., Docker, Kubernetes).
- Exposure to Agile development methodologies.
- Strong interpersonal, communication, and collaboration skills.
- Ability and eagerness to quickly learn and adapt to new technologies.
Preferred Qualifications:
- Experience working in large-scale, enterprise data environments.
- Prior experience with cloud-native big data solutions and data governance best practices.
Posted in: Data Engineering
Functional Area: Big Data / Data Warehousing / ETL
Job Code: 1550267