Role Overview:
- We are looking for a skilled Data Engineer to design, build, and maintain scalable data pipelines and infrastructure.
- The role focuses on enabling data-driven decision-making by ensuring reliable data availability, quality, and performance across systems.
Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT data pipelines
- Build and optimize data architectures for both structured and unstructured data
- Ensure data quality, integrity, and governance across systems
- Collaborate with data scientists and analysts to enable advanced analytics use cases
- Implement data warehousing solutions and optimize query performance
- Work with cloud platforms (AWS/GCP/Azure) for data ingestion and processing
- Monitor pipeline performance and troubleshoot issues
Requirements:
- 7 to 10 years of experience in data engineering
- Strong expertise in SQL, Python, or Scala
- Experience with ETL/ELT tools and frameworks
- Hands-on experience with cloud platforms (AWS/GCP/Azure)
- Knowledge of big data technologies (Spark, Hadoop, Kafka)
- Experience with data warehousing solutions (Snowflake, Redshift, BigQuery)