hirist

Job Description

Position Summary:

As a Data Engineer, you will be responsible for designing and developing robust data delivery systems within our data warehouse infrastructure on AWS. This data will primarily be used to generate visual dashboards and reports, empowering teams to gain deeper insights into trends and opportunities.

Role Responsibilities and Duties:

- Design and develop robust data delivery systems within our data warehouse infrastructure on AWS.

- Collaborate with cross-functional teams to understand data requirements and ensure timely and accurate delivery.

- Implement data pipelines and ETL processes to efficiently extract, transform, and load data from various sources into the warehouse.

- Optimize data storage and retrieval mechanisms to ensure scalability, reliability, and performance.

- Work closely with stakeholders to define data schemas and ensure data quality and integrity.

- Develop and maintain documentation for data processes, standards, and best practices.

- Monitor data pipelines and warehouse performance, troubleshoot issues, and implement solutions as needed.

- Support the creation of visual dashboards and reports to enable teams to analyze trends and insights for improving market share and songwriter deals.

- Perform ad-hoc analyses of data stored within the analytics data warehouse, and write SQL scripts, stored procedures, functions, and views.

- Use data to discover tasks that can be automated.

Required Skills and Qualifications:

- B.Tech/M.Tech in Computer Science or equivalent. An excellent understanding of data warehouses/data marts and dimensional data models.

- 5+ years of experience in database development, programming, design, and analysis.

- 5+ years of experience in SQL and a variety of database technologies: Snowflake, Teradata, Oracle.

- 3+ years of experience with NoSQL databases (DynamoDB, Cassandra).

- Experience in data architecture, including data modelling, data mining, and data ingestion.

- 3+ years of experience with AWS services (EKS, S3, EC2, Kinesis, DynamoDB, Glue, etc.).

- 3+ years of experience with data and ETL programming (Databricks and Ab Initio).

- 5+ years of experience with coding and scripting languages: Java, Python, JavaScript, Bash, batch files, Korn shell.

- 5 years of experience with Spark, Scala, and PySpark.

- 3 years of experience with streaming services: Flink, Kafka.

- Strong problem-solving skills and the ability to work both independently and in a team environment.

- Excellent communication and presentation skills, with the ability to explain complex data insights to non-technical stakeholders.

- Knowledge of data privacy and security best practices, and experience with data governance.

- Knowledge of and experience with SDLC methodologies, e.g., Agile, Waterfall.

- Knowledge of dimensional modelling.

What We Offer:

- Bootstrapped and financially stable, with a high pre-money valuation.

- Above-industry remuneration.

- Additional compensation tied to Renewal and Pilot Project Execution.

- Additional, lucrative business development compensation.

- Chance to work closely with industry experts driving strategy with data and analytics.

- Firm-building opportunities that offer a stage for holistic professional development, growth, and branding.

- An empathetic, excellence- and results-driven organization that believes in mentoring and growing its team, with a constant emphasis on learning.
