Posted on: 02/02/2026
Description:
What We're Looking For:
- Bachelor's or master's degree in Computer Science, Data Engineering, Information Systems, or a related technical field.
- 4+ years of hands-on experience in data engineering with a focus on data integrations, warehousing, and analytics pipelines.
- Hands-on experience with:
- Google BigQuery as a centralized data warehousing and analytics platform.
- Python for scripting, data processing, and integration logic.
- SQL for data transformation, complex querying, and performance tuning.
- dbt for building modular, maintainable, and reusable transformation models.
- Airflow / Cloud Composer for orchestration, dependency management, and job scheduling (see the illustrative sketch after this list).
- Solid understanding of ETL/ELT pipeline development and cloud-based data architecture.
- Strong knowledge of data quality frameworks, validation methods, and governance best practices.
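To give candidates a concrete feel for this stack, here is a minimal sketch of an Airflow DAG that schedules a BigQuery SQL transformation. It is illustrative only; the DAG name, dataset, and table names are hypothetical placeholders, not details of this role.

```python
# Minimal illustrative Airflow DAG: schedule a daily BigQuery SQL transform.
# All project/dataset/table names below are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator,
)

with DAG(
    dag_id="daily_orders_transform",  # hypothetical pipeline name
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Submit a query job to BigQuery; Airflow tracks it to completion.
    transform_orders = BigQueryInsertJobOperator(
        task_id="transform_orders",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE analytics.daily_orders AS
                    SELECT order_date, COUNT(*) AS order_count
                    FROM raw.orders
                    GROUP BY order_date
                """,
                "useLegacySql": False,
            }
        },
        location="US",
    )
```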
Preferred Skills (Optional):
- Experience with Ascend.io or comparable ETL platforms such as Databricks, Dataflow, or Fivetran.
- Familiarity with data cataloging and governance tools like Collibra.
- Knowledge of CI/CD practices, Git-based workflows, and infrastructure automation tools.
- Exposure to event-driven or real-time streaming pipelines using tools like Pub/Sub or Kafka (see the sketch after this list).
- Strong problem-solving and analytical mindset, with the ability to think broadly, identify innovative solutions, and quickly learn new technologies, programming languages, and frameworks.
- Excellent communication skills, both written and verbal.
- Ability to work in a fast-paced and collaborative environment.
- Ability to provide on-the-ground troubleshooting and diagnosis of architecture and design challenges.
- Solid experience with Agile methodologies such as Scrum and Kanban, and with managing IT backlogs.
- Ability to serve as a go-to expert for data technologies and solutions.
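As a rough illustration of the streaming exposure mentioned above, the following sketch publishes an event to Google Cloud Pub/Sub with the official Python client. The project and topic names are hypothetical placeholders.

```python
# Minimal illustrative Pub/Sub publisher using google-cloud-pubsub.
# "my-project" and "order-events" are hypothetical placeholders.
import json

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "order-events")

event = {"order_id": 123, "status": "shipped"}
# Pub/Sub messages carry bytes; serialize the event as UTF-8 JSON.
future = publisher.publish(topic_path, data=json.dumps(event).encode("utf-8"))
print(f"Published message ID: {future.result()}")
```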
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1608923