
Job Description

About Horizontal:

Horizontal is a multinational organization built on the belief that people and technology work best when they work together. We operate through two strong pillars:

Horizontal Digital, our creative and technology powerhouse, specializes in platforms like Sitecore and Salesforce. We've delivered impactful digital solutions for leading brands such as Mahindra, enabling India's first fully online car-buying experience by an automobile manufacturer, and Crocs, where we supported next-generation digital customer experiences.

Horizontal Talent, which I represent, is our global talent solutions arm focused on IT, marketing, and business strategy roles. We are ranked among the top 2% of talent solutions firms globally, with a presence across the US, Malaysia, Australia, and India, and have been recognized by Forbes as one of America's Best Recruiting Firms.

With 20+ years of experience and partnerships with 200+ global clients, including several Fortune 500 companies, we bring strong domain and technology expertise to every role we support.

JOB DESCRIPTION:

Minimum Qualifications:

- Bachelor's degree in Computer Science, Engineering, or equivalent practical experience

- 5+ years of data engineering experience with Big Data technologies and distributed computing frameworks (Hive, HDFS, MapReduce, Spark)

- 4+ years of hands-on experience building and operating cloud-native data platforms, preferably on GCP (BigQuery, Dataproc, Looker, Dataflow, Vertex AI)

- Proficiency in Scala and/or Python, with strong understanding of object-oriented and functional programming principles

- Experience designing and developing batch and streaming data pipelines using technologies such as Spark, Kafka, and Flink

- Experience with test-driven development, automated testing frameworks, and observability/monitoring tools

- Experience with workflow orchestration, dependency management, and end-to-end pipeline optimization

- Strong understanding of data modeling, schema design, and performance optimization for analytical and operational workloads

- Experience with CI/CD pipelines and production deployment best practices

- Knowledge of data integration patterns, APIs, ETL/ELT frameworks, and real-time streaming architectures

- Strong analytical and problem-solving skills with the ability to identify data quality issues and drive actionable insights

Good to Have Qualifications:

- Experience with data visualization and BI tools such as Looker and Power BI

- Experience building semantic data models using Looker or Power BI

- Proven ability to collaborate cross-functionally with product, analytics, and platform teams

- Hands-on experience with Docker-based containerization

- Strong written and verbal communication skills, including the ability to explain complex technical concepts to non-technical stakeholders

- Demonstrated commitment to engineering best practices, operational excellence, and continuous learning

Responsibilities:

- Design, build, and maintain scalable, reliable, and high-performance data pipelines

- Ensure data quality, reliability, and governance across data platforms

- Optimize data workflows for cost, performance, and scalability

- Partner with cross-functional teams to translate business requirements into robust data solutions

- Drive best practices in code quality, testing, CI/CD, and system design

- Stay current with emerging data engineering technologies and industry best practices

Note: Only candidates with a confirmed last working day will be considered.

