hirist

Job Description

Required Qualifications:

Education & Experience:

- Experience: 3-5 years of professional experience in a Data Engineering, Software Engineering, or similar role.

- Academic: Bachelor of Engineering or Master of Computer Applications in Computer Science, Information Technology, or a related quantitative field.

Core Technical Skills:

- Stream Processing: Deep expertise in Apache Flink (or similar technologies such as Kafka Streams or Spark Streaming).

- Programming: Strong proficiency in Java; Python is a significant plus.

- Data Warehousing: Hands-on experience with cloud data warehouses, specifically Snowflake.

- Databases: Expertise in analytical databases such as StarRocks, plus strong familiarity with traditional relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases.

- Data Fundamentals: Strong understanding of data engineering principles, including data modeling (e.g., Dimensional Modeling, Data Vault), schema design, and data governance.

Desired Industry-Standard Skills and Tools:

- Cloud Platforms: Experience with data-related services on major cloud providers (AWS, Azure, or GCP), e.g., S3/ADLS/GCS, EMR/Dataproc, Lambda/Cloud Functions.

- Data Orchestration: Proficiency with workflow management tools such as Apache Airflow or similar (e.g., Dagster, Prefect).

- Big Data Ecosystem: Familiarity with the broader Apache ecosystem, particularly Apache Kafka for message queuing and Apache Spark for batch processing.

- Containerization: Working knowledge of Docker and Kubernetes for deploying and managing data services.

- DataOps/DevOps: Experience with CI/CD practices and tools (e.g., Git, Jenkins) applied to data pipelines.

- Data Governance & Quality: Understanding of tools and methods for metadata management, lineage tracking, and automated data quality checks (e.g., Great Expectations).

- Other Skillsets: Practical application of statistical modeling and sampling techniques in a data pipeline context.

Key Responsibilities:

- Design, develop, and maintain real-time and batch data pipelines using modern data engineering frameworks

- Implement stream processing solutions using Apache Flink or similar technologies (Kafka Streams, Spark Streaming)

- Build and optimize data ingestion, transformation, and storage workflows

- Develop and maintain data models following Dimensional Modeling, Data Vault, and other best practices

- Ensure data quality, governance, lineage, and metadata management across pipelines

- Collaborate with analytics, data science, and product teams to support business use cases

- Implement CI/CD pipelines and DataOps best practices for data engineering workloads

- Monitor, troubleshoot, and optimize data pipelines for performance and reliability
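As an illustration only (not part of this posting), the tumbling-window aggregation at the heart of stream processors like Flink or Kafka Streams can be sketched in plain Python; the function name, event format, and sample data below are hypothetical and chosen purely for the sketch:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size):
    """Assign (timestamp, key) events to fixed-size tumbling windows
    and count occurrences of each key per window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Each event belongs to exactly one non-overlapping window,
        # identified by the window's start timestamp.
        window_start = (ts // window_size) * window_size
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(1, "click"), (3, "view"), (6, "click"), (7, "click")]
print(tumbling_window_counts(events, 5))
# {0: {'click': 1, 'view': 1}, 5: {'click': 2}}
```

A real Flink job would express the same idea declaratively (`keyBy` followed by a window and an aggregate), with the framework handling state, watermarks, and parallelism.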
