Posted on: 29/08/2025
Job Summary:
The ideal candidate will have deep expertise in data architecture, ETL processes, and cloud data platforms.
You will collaborate closely with data scientists, analysts, and software engineers to ensure high-quality, scalable, and reliable data solutions.
Key Responsibilities:
- Design, develop, and maintain scalable and efficient data pipelines for batch and real-time processing.
- Build and optimize data architectures, data models, and storage solutions to support analytics and business intelligence needs.
- Collaborate with cross-functional teams including data scientists, analysts, and product managers to understand data requirements and deliver data solutions.
- Implement ETL/ELT processes to ingest data from various structured and unstructured sources.
- Ensure data quality, consistency, and reliability through robust validation and monitoring frameworks.
- Develop and maintain data warehouses, data lakes, and related infrastructure using cloud platforms (AWS, Azure, GCP).
- Optimize database performance and manage large-scale distributed data processing systems.
- Implement data security and governance best practices to ensure compliance with regulations.
- Mentor junior data engineers and contribute to the continuous improvement of engineering standards and practices.
- Stay up to date with emerging data technologies and recommend innovative solutions.
Technical Skills Required:
- Strong experience with big data technologies such as Apache Spark, Hadoop, Kafka, or Flink.
- Expertise in SQL and working with relational databases (PostgreSQL, MySQL, SQL Server).
- Experience with cloud data platforms such as AWS (Redshift, S3, Glue), Azure (Synapse, Data Factory), or Google Cloud (BigQuery, Dataflow).
- Familiarity with data warehousing concepts and tools such as Snowflake, Redshift, or Google BigQuery.
- Hands-on experience with ETL/ELT tools and frameworks.
- Knowledge of containerization (Docker) and orchestration tools (Kubernetes) is a plus.
- Understanding of data governance, security, and compliance standards.
- Experience with version control (Git) and CI/CD pipelines for data engineering workflows.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1537947