hirist

Job Description

Role : Senior Data Engineer

This is a full-time position with D Square Consulting Services Pvt Ltd

Location : Bangalore (Hybrid)

Experience : 5+ years

Notice Period : Candidates available to join within 15 days are preferred


Job Summary :


We are seeking a skilled and experienced Senior Data Engineer with strong Python expertise, API development experience, and a deep understanding of containerized, CI/CD-driven workflows.

You will play a key role in designing, building, and scaling data pipelines and backend services that support our analytics and business intelligence platforms.

This is a hands-on engineering role that requires a strong technical foundation and a collaborative mindset.


Key Responsibilities :


- Design, implement, and optimize robust, scalable data pipelines and ETL workflows using modern Python tools and libraries.

- Build and maintain production-grade RESTful and/or GraphQL APIs to serve data to internal and external stakeholders.

- Collaborate with Data Analysts, Data Scientists, and Engineering teams to enable end-to-end data solutions.

- Containerize data services using Docker and manage deployments within Kubernetes environments.

- Develop and maintain CI/CD pipelines using GitHub Actions to automate testing, data validations, and deployment processes.

- Ensure code quality through rigorous unit testing, type annotations, and adherence to Python best practices.

- Participate in architecture reviews, design discussions, and code reviews in an agile development process.

- Proactively identify opportunities to optimize data access, transformation, and governance.


Required Skills & Qualifications :


- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.

- 5+ years of hands-on experience in data engineering or backend development roles.

- Expert-level Python skills, with strong understanding of idiomatic patterns, async programming, and typing.

- Proven experience in building production-grade RESTful or GraphQL APIs using frameworks like FastAPI, Graphene, or Strawberry.

- Hands-on experience with Docker, container-based workflows, and CI/CD automation using GitHub Actions.

- Experience working with Kubernetes for orchestrating deployments in production environments.

- Proficient with SQL and data modeling; familiarity with ETL tools, data lakes, or warehousing concepts is a plus.

- Strong communicator with a proactive and self-driven approach to problem-solving and collaboration.


Nice-to-Have Skills :


- Familiarity with data orchestration tools (e.g., Airflow, Prefect).

- Experience with streaming data platforms like Kafka or Spark.

- Knowledge of data governance, security, and observability best practices.

- Exposure to cloud platforms like AWS, GCP, or Azure.

