hirist

Job Description

Job Requirements:

Responsibilities:

- Translate business and functional requirements into robust, scalable solutions that work well within the overall data architecture.

- Develop and maintain scalable data pipelines and build new API integrations.

- Design, develop, implement, test, document, and operate large-scale, high-volume, low-latency applications.

- Design data integrations and a data quality framework.

- Participate in the full development life cycle, end-to-end: design, implementation, testing, documentation, delivery, support, and maintenance.

- Deliver end-to-end projects independently.

Work Experience:

Must have:

- Excellent knowledge of Python programming.

- 3+ years of big data development experience using Spark.

- Experience with dimensional modeling, data warehousing, and building ETL pipelines.

- Strong expertise in SQL and experience writing complex SQL queries.

- Experience building stream-processing platforms using Kafka and Spark Streaming.

Nice to have:

- Knowledge of job orchestration frameworks such as Airflow, Oozie, or Luigi.

- Experience with AWS services such as S3, EMR, and RDS.

- Good understanding of cloud data warehouses such as Snowflake.

- Good understanding of distributed SQL query engines such as Presto and Druid.

- Knowledge of stream-processing frameworks such as Flink.

- Knowledge of NoSQL databases such as HBase and Cassandra.

Benefits:

- Competitive salary for a startup.

- Gain experience rapidly.

- Work directly with the executive team.

- Fast-paced work environment.
