Posted on: 22/11/2025
Description:
About:
Arcgate is a dynamic and rapidly growing team of 2500+ professionals passionate about data and technology.
We deliver cutting-edge solutions to clients ranging from some of the world's most innovative startups to market leaders, across application development, quality engineering, AI data preparation, data enrichment, search relevance, and content.
Responsibilities:
- Design, build, and optimize Python-based data pipelines that handle large, complex, and messy datasets efficiently.
- Develop and manage scalable data infrastructure, including databases, data warehouses such as Snowflake, and tools such as Azure Data Factory, ensuring reliability and performance.
- Build, maintain, and optimize change data capture (CDC) processes that integrate data from multiple sources into the data warehouse.
- Collaborate closely with data scientists, analysts, and operations teams to gather requirements and deliver high-quality data solutions.
- Perform data quality checks, validation, and verification to ensure data integrity and consistency.
- Support and optimize data flows, ingestion, transformation, and publishing across various systems.
- Work with AWS infrastructure (ECS, RDS, S3), manage deployments using Docker, and package services into containers.
- Use tools such as Prefect, Dagster, and dbt to orchestrate and transform data workflows (see the sketch after this list).
- Implement CI/CD pipelines using Harness and GitHub Actions.
- Monitor system health and performance using DataDog.
- Manage infrastructure orchestration with Terraform and Terragrunt.
- Stay current with industry trends, emerging tools, and best practices in data engineering.
- Coach and mentor junior team members, promoting best practices and skill development.
- Contribute across diverse projects, demonstrating flexibility.
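For illustration only (not part of the formal role description): a minimal sketch of the kind of orchestrated extract-validate-load pipeline this role builds, assuming Prefect 2.x; the file path, column, and function names are hypothetical.

    import csv

    from prefect import flow, task

    @task
    def extract(path: str) -> list[dict]:
        # Read raw rows from a CSV source file.
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    @task
    def validate(rows: list[dict]) -> list[dict]:
        # Simple data quality gate: drop rows missing the primary key.
        return [row for row in rows if row.get("id")]

    @task
    def load(rows: list[dict]) -> int:
        # Placeholder for a warehouse load step (e.g. a Snowflake COPY INTO).
        print(f"Loading {len(rows)} validated rows")
        return len(rows)

    @flow
    def daily_pipeline(path: str = "orders.csv") -> None:
        # Prefect records each task run, giving retries and observability.
        load(validate(extract(path)))

    if __name__ == "__main__":
        daily_pipeline()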
Requirements:
- Bachelor's degree in Computer Science, Engineering, Mathematics, Physics, or a related field.
- 5+ years of demonstrable experience building reliable, scalable data pipelines in production environments.
- Strong experience with Python, SQL, and data architecture (see the sketch after this list).
- Hands-on experience with data modeling in Data Lake or Data Warehouse environments (Snowflake preferred).
- Familiarity with Prefect, Dagster, dbt, and ETL/ELT pipeline frameworks.
- Experience with AWS services (ECS, RDS, S3) and containerization using Docker.
- Knowledge of TypeScript, React, and Node.js is a plus for collaborating on the application platform.
- Strong command of GitHub for source control and Jira for change management.
- Strong analytical and problem-solving skills, with a hands-on mindset for wrangling data and solving complex challenges.
- Excellent communication and collaboration skills; ability to work effectively with cross-functional teams.
- A proactive, start-up mindset, adaptable, ambitious, responsible, and ready to contribute wherever needed.
- Passion for delivering high-quality solutions with meticulous attention to detail.
- Enjoyment of working in an inclusive, respectful, and highly collaborative environment where every voice matters.
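Again for illustration, a small sketch of the Python-plus-SQL data quality checks the role involves, using SQLite so it is self-contained; the table names are hypothetical, and a production version would target the warehouse instead.

    import sqlite3

    def row_count(conn: sqlite3.Connection, table: str) -> int:
        # Count rows in a table; table names here come from trusted config.
        return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

    def counts_reconcile(conn: sqlite3.Connection, source: str, target: str) -> bool:
        # A publish step might gate on source and target row counts matching.
        return row_count(conn, source) == row_count(conn, target)

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE staging_orders (id INTEGER PRIMARY KEY)")
        conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")
        conn.execute("INSERT INTO staging_orders VALUES (1)")
        # False: the target table is missing a row that staging has.
        print(counts_reconcile(conn, "staging_orders", "orders"))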
Benefits:
- Competitive salary package.
- Opportunities for growth, learning, and professional development.
- Dynamic, collaborative, and innovation-driven work culture.
Posted in: Data Engineering
Functional Area: Big Data / Data Warehousing / ETL
Job Code: 1578689