Posted on: 27/10/2025
Role Overview:
We are seeking a highly experienced Senior Data Engineer to lead the design, implementation, and optimization of application data stores using PostgreSQL, DynamoDB, and advanced SQL. You will play a critical role in building scalable, performant, and secure data infrastructure to support business-critical applications.
Job Description:
- Proven expertise in building batch and streaming data pipelines using Databricks (PySpark) and Snowflake.
- Strong programming skills in SQL, Python, and PySpark for scalable data processing.
- In-depth knowledge of Azure and AWS data ecosystems, including Microsoft Fabric and distributed computing frameworks.
- Proficient in processing structured (CSV, SQL tables), semi-structured (JSON, XML), and unstructured (PDF, logs) data.
- Experience designing low-latency, high-throughput pipelines using Spark and cloud-native tools.
- Solid understanding of CI/CD, automation, schema versioning, and data security in production environments.
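To illustrate the kind of data processing described above, here is a minimal, hedged sketch in plain Python (the field names and records are invented for illustration): it flattens semi-structured JSON records into structured rows, the sort of transformation a PySpark pipeline in this role would perform at scale.

```python
import json

# Hypothetical semi-structured input: JSON lines, as might land in a raw zone.
raw_records = [
    '{"id": 1, "user": {"name": "Ada"}, "events": ["login", "query"]}',
    '{"id": 2, "user": {"name": "Lin"}, "events": ["login"]}',
]

def flatten(record_json):
    """Flatten one nested JSON record into a flat dict (a structured row)."""
    rec = json.loads(record_json)
    return {
        "id": rec["id"],
        "user_name": rec["user"]["name"],
        "event_count": len(rec["events"]),
    }

# In a real pipeline this map would run distributed (e.g. over a Spark RDD
# or DataFrame); here a list comprehension stands in for that step.
rows = [flatten(r) for r in raw_records]
print(rows)
```

In Databricks, the same logic would typically be expressed with PySpark DataFrame operations rather than a Python loop, so it can scale across a cluster.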
Nice to Have:
- Experience with caching layers like Redis or Memcached.
- Knowledge of data security, encryption, and compliance standards (GDPR, HIPAA).
- Exposure to CI/CD pipelines involving DB migrations and automation tools like Flyway or Liquibase.
- Understanding of event-driven architecture or stream processing (Kafka, Kinesis).
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1565602