
Job Description

Job Title: Data Engineer

Location: Hybrid (Hyderabad)

Experience: 3+ Years

Job Description:

We are looking for a skilled Data Engineer to design, build, and maintain scalable data pipelines using modern cloud and data engineering tools. The ideal candidate should have hands-on experience with Databricks, AWS, and Python-based frameworks, and be capable of creating reliable ETL workflows, APIs, and data governance processes to support data analytics and business operations.

Key Responsibilities:

- Design, develop, and maintain robust ETL pipelines using Databricks, PySpark, and AWS services
- Automate and orchestrate data workflows using Apache Airflow
- Build scalable REST APIs using FastAPI for seamless data access
- Manage structured and unstructured data storage in AWS S3, implementing Databricks Unity Catalog for governance
- Collaborate with data analysts, scientists, and stakeholders to understand data needs and deliver solutions
- Develop data validation, cleansing, and transformation routines to ensure data integrity and quality
- Implement monitoring and alerting mechanisms for data pipeline health and performance
- Create and maintain Spotfire dashboards and visualizations for actionable insights
- Apply best practices for data security, access control, and compliance
- Document data flows, pipelines, and architecture for transparency and collaboration
- Participate in code reviews, contribute to CI/CD pipeline improvements, and ensure version control with Git

Required Skills:

Databricks, PySpark, Python, SQL, Apache Airflow, FastAPI, AWS (S3, IAM, Lambda, ECR), Spotfire

