
Job Description

Role : Senior Data Engineer

Notice Period : Immediate Joiner

Work Location : Bangalore (4 days from office)

Responsibilities :

- Design and build reusable components, frameworks and libraries at scale to support analytics products

- Design and implement product features in collaboration with business and technology stakeholders

- Anticipate, identify and solve issues concerning data management to improve data quality

- Clean, prepare and optimize data at scale for ingestion and consumption

- Drive the implementation of new data management projects and the restructuring of the current data architecture

- Implement complex automated workflows and routines using workflow scheduling tools

- Build continuous integration, test-driven development and production deployment frameworks

- Drive collaborative reviews of designs, code, test plans, and dataset implementations produced by other data engineers to maintain data engineering standards

- Analyze and profile data for the purpose of designing scalable solutions

- Troubleshoot complex data issues and perform root cause analysis to proactively resolve product and operational issues

- Mentor and develop other data engineers in adopting best practices

Qualifications :

- 8 years' experience developing scalable Big Data applications or solutions on distributed platforms

- Able to partner with others in solving complex problems by taking a broad perspective to identify innovative solutions

- Strong skills in building positive relationships across Product and Engineering

- Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders

- Able to quickly pick up new programming languages, technologies, and frameworks

- Experience working in Agile and Scrum development processes

- Experience working in a fast-paced, results-oriented environment

- Experience with Amazon Web Services (AWS) or other cloud platforms

- Experience working with data warehousing tools, including DynamoDB, SQL, Amazon Redshift, and Snowflake

- Experience architecting data products in streaming, serverless, and microservices architectures and platforms

- Experience working with data platforms, including EMR, Databricks, etc.

- Experience working with distributed technology tools, including Spark, Presto, Scala, Python, Databricks, and Airflow

- Working knowledge of data warehousing, data modelling, governance, and data architecture

- Working knowledge of reporting and analytical tools such as Tableau, Amazon QuickSight, etc.

- Demonstrated experience in learning new technologies and skills

- Bachelor's degree in Computer Science, Information Systems, Business, or another relevant subject area

Key Skills Required :

- AWS/Azure : Experience with cloud computing platforms like AWS or Azure to design, deploy, and manage scalable infrastructure and services.

- Databricks : Proficient in using Databricks for big data processing and analytics, leveraging Apache Spark for optimized data workflows.

- Airflow : Skilled in Apache Airflow for designing, scheduling, and monitoring complex data pipelines and workflows (see the DAG sketch after this list).

- Python & PySpark : Strong programming skills in Python and PySpark for developing data processing scripts and performing large-scale data transformations (see the PySpark sketch after this list).

- Kafka (or any streaming platform) : Hands-on experience with Kafka or similar streaming platforms for real-time data ingestion, processing, and event-driven architectures (see the consumer sketch after this list).
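
To make the Python & PySpark expectation concrete, here is a minimal sketch of the kind of batch transformation the Databricks and PySpark bullets describe; the dataset layout, column names, and S3 paths are all hypothetical:

```python
# Minimal PySpark sketch: clean raw events and aggregate per user per day.
# All names (raw_events, user_id, amount, event_ts) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-aggregation").getOrCreate()

# Read the raw event data (hypothetical S3 location).
events = spark.read.parquet("s3://example-bucket/raw_events/")

# Drop incomplete records, derive a date column, and aggregate.
daily_totals = (
    events
    .filter(F.col("amount").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("user_id", "event_date")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("event_count"),
    )
)

# Write the curated output partitioned by date for downstream consumers.
(daily_totals.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/daily_totals/"))
```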
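
Similarly, a minimal Apache Airflow sketch of the pipeline scheduling and dependency wiring the Airflow bullet refers to; the DAG id, task names, and callables are hypothetical:

```python
# Minimal Airflow DAG sketch: a daily extract-then-load pipeline with retries.
# DAG id, task ids, and the callables are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pulling data from the source system")


def load():
    print("loading curated data into the warehouse")


with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # renamed to `schedule` in newer Airflow releases
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # load runs only after extract succeeds
```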
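
And a minimal streaming-ingestion sketch using the confluent-kafka Python client, of the kind the Kafka bullet points at; the broker address, topic, and consumer group are hypothetical:

```python
# Minimal Kafka consumer sketch: poll a topic and hand each event to
# downstream processing. Broker, topic, and group id are hypothetical.
import json

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "example-ingestion-group",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["example-events"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)  # wait up to 1s for a record
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        event = json.loads(msg.value())
        print(f"received event: {event}")  # real processing would go here
finally:
    consumer.close()
```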

