Data Engineer

D2N Solutions
Remote
5-15 Years

Posted on: 23/11/2025

Job Description

Data Engineer (Mid-Senior | 5-10 Years | Immediate Joiners Only)

Type: Full-Time / Contract-to-Hire

Experience: 5-10 Years

Location: Remote / Flexible

Hiring Priority: Immediate joiners only (must be able to clear Codility with 70%+)

Assessment: Advanced SQL + Python (Codility)

About the Role

We're seeking mid-to-senior Data Engineers with sharp coding instincts, deep data engineering expertise, and the ability to build powerful, resilient end-to-end pipelines: someone who thrives in the details but sees the big picture, from ingestion all the way to cloud deployment.

This team moves fast. One round. High standards. If you've got the SQL chops and Python grit to shine on Codility, step forward.

Responsibilities:

Data Engineering & Pipeline Work:

- Build end-to-end data pipelines (ingestion → transformation → loading)

- Design upstream/downstream integrations across structured & semi-structured data

- Develop Python-based ETL frameworks with reusable modules

- Orchestrate jobs using Prefect (preferred) or Airflow; a minimal Prefect sketch follows this list

- Build performant SQL logic: complex joins, CTEs, window functions, stored procedures

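To make the orchestration and SQL expectations concrete, here is a minimal sketch of an ingest → transform → load flow in Prefect 2.x (Airflow would express the same work as a DAG of operators). The table names, sample data, connection string, and SQL are all hypothetical; it needs the prefect, pandas, and sqlalchemy packages.

import pandas as pd
from prefect import flow, task
from sqlalchemy import create_engine

# Hypothetical SQL illustrating the CTE + window-function style the role asks for.
TRANSFORM_SQL = """
WITH daily AS (
    SELECT customer_id, order_date, amount
    FROM raw_orders
)
SELECT customer_id,
       order_date,
       SUM(amount) OVER (
           PARTITION BY customer_id
           ORDER BY order_date
       ) AS running_total
FROM daily
"""

@task(retries=3, retry_delay_seconds=30)
def ingest(engine) -> None:
    # Stand-in for pulling from a real source system; here we land sample rows.
    rows = pd.DataFrame({
        "customer_id": [1, 1, 2],
        "order_date": ["2025-01-01", "2025-01-02", "2025-01-01"],
        "amount": [100.0, 50.0, 75.0],
    })
    rows.to_sql("raw_orders", engine, if_exists="replace", index=False)

@task
def transform(engine) -> pd.DataFrame:
    # Push the heavy lifting down to the database via the CTE above.
    return pd.read_sql(TRANSFORM_SQL, engine)

@task
def load(df: pd.DataFrame, engine) -> None:
    # Write the transformed result to a reporting table.
    df.to_sql("customer_running_totals", engine, if_exists="replace", index=False)

@flow
def orders_pipeline(conn_str: str = "sqlite:///example.db") -> None:
    engine = create_engine(conn_str)
    ingest(engine)                  # synchronous task calls block, so this runs first
    load(transform(engine), engine)

if __name__ == "__main__":
    orders_pipeline()
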
Cloud & Deployment:

- Deploy containerized Python apps (Docker) on Azure Container Apps / AKS or AWS ECS/EKS

- Manage environment variables and secret stores (Key Vault / Secrets Manager); see the configuration sketch after this list

- Work hands-on with Azure SQL, SQL Server, Snowflake

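A sketch of how a containerized app might separate plain configuration (environment variables injected by the Container App / ECS task definition) from credentials (fetched at runtime from Azure Key Vault). The vault URL, secret name, and setting names are hypothetical; it assumes the azure-identity and azure-keyvault-secrets packages, and AWS Secrets Manager via boto3 would follow the same pattern.

import os

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Non-sensitive settings arrive as environment variables.
VAULT_URL = os.environ.get("KEY_VAULT_URL", "https://my-vault.vault.azure.net")
TARGET_DB = os.environ.get("TARGET_DB", "analytics")

def get_db_password() -> str:
    # DefaultAzureCredential resolves across local dev (az login),
    # managed identity on AKS / Container Apps, and service principals.
    credential = DefaultAzureCredential()
    client = SecretClient(vault_url=VAULT_URL, credential=credential)
    return client.get_secret("db-password").value  # hypothetical secret name

if __name__ == "__main__":
    print(f"Connecting to {TARGET_DB} with a secret fetched at runtime")
    password = get_db_password()  # never logged or baked into the image
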
API & Integration:

- Build REST API integrations (OAuth, pagination, retries), as sketched after this list

- Work with structured/unstructured data (JSON, nested data, dictionaries)

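Here is one way those three concerns (token auth, retries with backoff, cursor pagination) might fit together using the standard requests/urllib3 stack. The endpoint URL, response shape, and pagination fields are hypothetical; a real API's contract would differ.

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

BASE_URL = "https://api.example.com/v1/records"  # hypothetical endpoint

def make_session(token: str) -> requests.Session:
    session = requests.Session()
    # Retry transient failures with exponential backoff.
    retry = Retry(total=5, backoff_factor=1.0,
                  status_forcelist=[429, 500, 502, 503, 504])
    session.mount("https://", HTTPAdapter(max_retries=retry))
    # OAuth 2.0 bearer token, obtained beforehand from the provider's token endpoint.
    session.headers.update({"Authorization": f"Bearer {token}"})
    return session

def fetch_all(session: requests.Session) -> list[dict]:
    records, cursor = [], None
    while True:
        params = {"limit": 100}
        if cursor:
            params["cursor"] = cursor
        resp = session.get(BASE_URL, params=params, timeout=30)
        resp.raise_for_status()
        page = resp.json()
        records.extend(page["data"])      # assumed response shape
        cursor = page.get("next_cursor")  # assumed pagination field
        if not cursor:
            return records
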
AI/ML Enablement:

- Integrate ML inference via REST (see the scoring sketch after this list)

- Support feature engineering pipelines

- Work with dbt Core / dbt Cloud

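A sketch of wiring ML inference into a pipeline by POSTing engineered features to a model's REST endpoint. The scoring URL, feature names, and payload shape are hypothetical; real scoring services (Azure ML, SageMaker, a custom FastAPI app) each define their own contract, and in practice the feature step would live in dbt or a Pandas pipeline.

import requests

SCORING_URL = "https://models.example.com/churn/score"  # hypothetical endpoint

def engineer_features(raw: dict) -> dict:
    # Tiny stand-in for a feature engineering step.
    return {
        "tenure_months": raw["tenure_days"] / 30.0,
        "avg_order_value": raw["total_spend"] / max(raw["order_count"], 1),
    }

def score(raw: dict, token: str) -> float:
    features = engineer_features(raw)
    resp = requests.post(
        SCORING_URL,
        json={"rows": [features]},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["predictions"][0]  # assumed response shape
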
Mandatory Skills:

- SQL, Python, OOP, REST, OAuth, ETL, Pipelines, Prefect, Airflow, Docker, Azure, AWS, Snowflake, dbt, APIs, JSON, Pandas, Kubernetes, DevOps, CI/CD, Profiling, Debugging
