
Job Description

COMPANY DESCRIPTION :

PRESCIENCE DECISION SOLUTIONS - A MOVATE COMPANY

A Bangalore-based company specializing in Data Science, Advanced Analytics, AI, and Data Engineering, Prescience Decision Solutions delivers high-impact, data-driven solutions across a wide range of industries. We partner with top-tier global MNCs and contribute to critical projects with industry leaders and MOVATE, while also developing and deploying innovative solutions for our own flagship initiatives. Our expertise spans advanced analytics, AI/ML modeling, decision intelligence, and digital transformation, empowering businesses to unlock actionable insights and drive strategic outcomes. At Prescience, we don't just analyze data; we transform it into a decisive competitive advantage.

JOB DESCRIPTION :

As a Senior Data Engineer on our team, you will work on our Hadoop-based data warehouse, contributing to scalable and reliable big data solutions for analytics and business insights. This is a hands-on role focused on building, optimizing, and maintaining large-scale data pipelines and data warehouse infrastructure.

Key Responsibilities :

- Design, develop, and maintain robust data pipelines in Hadoop and related ecosystems, ensuring data reliability, scalability, and performance.

- Implement ETL processes for batch and streaming analytics requirements.

- Optimize and troubleshoot distributed systems for ingestion, storage, and processing.

- Collaborate with data engineers, analysts, and platform engineers to align solutions with business needs.

- Ensure data security, integrity, and compliance throughout the infrastructure.

- Maintain documentation and contribute to architecture reviews.

- Participate in incident response and operational excellence initiatives for the data warehouse.

- Maintain a continuous-learning mindset and apply new Hadoop ecosystem tools and data technologies.

Required Skills and Experience :

- Proficiency in the Hadoop ecosystem, including Spark, HDFS, Hive, Iceberg, and Spark SQL.

- Extensive experience with Apache Kafka, Apache Flink, and other relevant streaming technologies.

- Proven ability to design and implement automated data pipelines and materialized views.

- Proficiency in Python, Unix shell scripting, or similar languages.

- Good understanding of SQL (Oracle, SQL Server, or similar databases).

- Ops & CI/CD : Monitoring (Prometheus/Grafana), logging, pipelines (Jenkins/GitHub Actions).

- Core Engineering : Data structures/algorithms, testing (JUnit/pytest), Git, clean code.

- 5+ years of directly applicable experience.

- BS in Computer Science, Engineering, or equivalent experience.

