Posted on: 24/10/2025
Description:
- Design, Build, and Maintain Systems: Develop robust software solutions and implement RESTful APIs that handle high volumes of data in real time, leveraging message queues (Google Cloud Pub/Sub, Kafka, RabbitMQ) and event-driven architectures (a consumer sketch follows this list).
- Data Pipeline Development: Design, develop, and maintain data pipelines (ETL/ELT) to process structured and unstructured data from various sources.
- Data Storage and Warehousing: Build and optimize databases, data lakes, and data warehouses (e.g., Snowflake) for high-performance querying.
- Data Integration: Ingest and transform data from API, batch, and streaming sources.
- Performance Optimization: Optimize queries, indexing, and partitioning for efficient data retrieval (a partitioning sketch follows this list).
- Collaboration: Work with data analysts, data scientists, software developers, and product teams to understand requirements and deliver scalable solutions.
- Monitoring and Debugging: Set up logging, monitoring, and alerting to ensure data pipelines run reliably (a logging sketch follows this list).
- Ownership and Problem-Solving: Proactively identify issues or bottlenecks and propose innovative solutions to address them.
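
To make the messaging bullet concrete, here is a minimal sketch of an event-driven consumer using the Google Cloud Pub/Sub Python client. The project and subscription names are hypothetical, and the processing step is a placeholder.

    from concurrent.futures import TimeoutError

    from google.cloud import pubsub_v1

    # Hypothetical project and subscription names, for illustration only.
    PROJECT_ID = "example-project"
    SUBSCRIPTION_ID = "events-sub"

    subscriber = pubsub_v1.SubscriberClient()
    subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)

    def callback(message: pubsub_v1.subscriber.message.Message) -> None:
        # Process the event payload, then ack so Pub/Sub does not redeliver it.
        print(f"Received: {message.data!r}")
        message.ack()

    # Messages are pulled and handled on background threads.
    streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)

    with subscriber:
        try:
            streaming_pull_future.result(timeout=60)
        except TimeoutError:
            streaming_pull_future.cancel()
            streaming_pull_future.result()  # block until shutdown completes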
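
For the performance bullet, a sketch of time-based partitioning and indexing in PostgreSQL, run through psycopg2. The connection string, table, and column names are hypothetical; the idea is that queries filtering on event_time scan only the matching partition.

    import psycopg2

    # Hypothetical DSN and schema, for illustration only.
    conn = psycopg2.connect("dbname=analytics user=etl")
    conn.autocommit = True

    ddl = """
    -- Range-partition a large events table by month so time-filtered
    -- queries touch only the relevant partition.
    CREATE TABLE IF NOT EXISTS events (
        event_id   bigint      NOT NULL,
        event_time timestamptz NOT NULL,
        payload    jsonb
    ) PARTITION BY RANGE (event_time);

    CREATE TABLE IF NOT EXISTS events_2025_10
        PARTITION OF events
        FOR VALUES FROM ('2025-10-01') TO ('2025-11-01');

    -- Index the columns that hot queries filter or join on.
    CREATE INDEX IF NOT EXISTS idx_events_2025_10_event_id
        ON events_2025_10 (event_id);
    """

    with conn.cursor() as cur:
        cur.execute(ddl)
    conn.close()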
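
And for monitoring and debugging, a standard-library-only sketch of the logging-plus-retry pattern a pipeline task typically starts with; the task name and thresholds are illustrative.

    import logging
    import time

    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(name)s %(message)s",
    )
    log = logging.getLogger("pipeline.load_orders")  # hypothetical task name

    def run_with_retries(task, attempts=3, backoff_s=5.0):
        """Run a pipeline step, logging each attempt and escalating on failure."""
        for attempt in range(1, attempts + 1):
            started = time.monotonic()
            try:
                result = task()
                log.info("succeeded in %.1fs (attempt %d)",
                         time.monotonic() - started, attempt)
                return result
            except Exception:
                log.exception("attempt %d/%d failed", attempt, attempts)
                if attempt == attempts:
                    raise  # let the scheduler's alerting see the failure
                time.sleep(backoff_s * attempt)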
Requirements:
- 4+ years of experience in software development.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Strong Problem-Solving Skills: Ability to debug and optimize data processing workflows.
- Programming Fundamentals: Solid understanding of data structures, algorithms, and software design patterns.
- Software Engineering Experience: Demonstrated experience (SDE II/III level) in designing, developing, and delivering software solutions using modern languages and frameworks (Node.js, JavaScript, Python, TypeScript, SQL, Scala, or Java).
- ETL Tools and Frameworks: Experience with Airflow, dbt, Apache Spark, Kafka, Flink, or similar technologies (a minimal DAG sketch follows this list).
- Cloud Platforms: Hands-on experience with GCP (Pub/Sub, Dataflow, Cloud Storage) or AWS (S3, Glue, Redshift).
- Databases and Warehousing: Strong experience with PostgreSQL, MySQL, Snowflake, and NoSQL databases (MongoDB, Firestore, Elasticsearch).
- Version Control and CI/CD: Familiarity with Git, Jenkins, Docker, Kubernetes, and CI/CD pipelines for deployment.
- Communication: Excellent verbal and written communication skills, with the ability to work effectively in a collaborative environment.
- Experience with data visualization tools (e.g., Superset, Tableau), infrastructure as code (Terraform), ML/AI data pipelines, and DevOps practices is a plus.
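
As referenced in the ETL tools bullet above, a minimal Airflow DAG sketch, assuming Airflow 2.x; the DAG id and the extract/load callables are hypothetical placeholders.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Hypothetical task bodies, for illustration only.
    def extract():
        print("pull raw data from the source")

    def load():
        print("write transformed rows to the warehouse")

    with DAG(
        dag_id="daily_orders_etl",
        start_date=datetime(2025, 10, 1),
        schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)

        extract_task >> load_task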
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1564343