Posted on: 22/10/2025
Job Title: Senior Data Engineer / DevOps, Enterprise Big Data Platform
Job Description:
In this role, you will be part of a growing, global team of data engineers who collaborate in a DevOps model to enable the business with state-of-the-art technology, leveraging data as an asset to make better-informed decisions.
The Enabling Functions Data Office Team is responsible for designing, developing, testing, and supporting automated end-to-end data pipelines and applications on the Enabling Functions data management and analytics platform (Palantir Foundry, AWS, and other components).
The Foundry platform comprises multiple technology stacks, hosted on Amazon Web Services (AWS) infrastructure or in the company's own data centers. Developing pipelines and applications on Foundry requires:
- Proficiency in SQL, Scala, or Python (Python is required; proficiency in all three is not)
- Proficiency in PySpark for distributed computation (see the pipeline sketch after this list)
- Proficiency with Ontology and Slate; familiarity with the Workshop app, including basic design/visual competency
- Familiarity with common databases (e.g., Oracle, MySQL, Microsoft SQL Server); not all are required
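As a rough illustration of this kind of pipeline work, here is a minimal PySpark sketch using Foundry's transforms API; the dataset paths and column names are hypothetical:

```python
from pyspark.sql import functions as F
from transforms.api import transform_df, Input, Output


@transform_df(
    Output("/Company/clean/orders"),          # hypothetical output dataset path
    raw_orders=Input("/Company/raw/orders"),  # hypothetical input dataset path
)
def clean_orders(raw_orders):
    # Drop rows with a missing key and normalize a timestamp column;
    # Spark executes this transformation in a distributed fashion.
    return (
        raw_orders
        .filter(F.col("order_id").isNotNull())
        .withColumn("order_ts", F.to_timestamp("order_ts"))
    )
```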
This position is project-based and may involve working across multiple smaller projects or a single large project, following an agile project methodology.
Roles & Responsibilities:
- Strong experience in Big Data & Data Analytics
- Experience building robust ETL pipelines for both batch and streaming ingestion
- Experience with Palantir Foundry. Most important Foundry apps: Code Repository, Data Lineage and Scheduling, Ontology Manager, Contour, Object View Editor, Object Explorer, Quiver, Workshop, Vertex
- Experience with Data Connection, external transforms, Foundry APIs, SDK and Webhooks is a plus
- Experience interacting with RESTful APIs, including authentication via SAML and OAuth2 (see the sketch after this list)
- Experience with test driven development and CI/CD workflows
- Knowledge of Git for source control management
- Agile experience in Scrum environments, using tools such as Jira
- Experience with visualization tools such as Tableau or Qlik is a plus
- Experience with Palantir Foundry, AWS, or Snowflake is an advantage
- Basic knowledge of statistics and machine learning is beneficial
- Strong problem-solving abilities
- Proficient in English, with strong written and verbal communication skills
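As an illustrative sketch of the RESTful API interaction mentioned above, the following shows an OAuth2 client-credentials exchange using the `requests` library; the endpoints are hypothetical placeholders:

```python
import requests

# Hypothetical endpoints; real values come from the API provider's documentation.
TOKEN_URL = "https://auth.example.com/oauth2/token"
API_URL = "https://api.example.com/v1/datasets"


def fetch_token(client_id: str, client_secret: str) -> str:
    # OAuth2 client-credentials grant: exchange application credentials
    # for a short-lived bearer token.
    resp = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials"},
        auth=(client_id, client_secret),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


def list_datasets(token: str) -> list:
    # Authenticated GET against a RESTful endpoint using the bearer token.
    resp = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```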
Primary Responsibilities:
- Industrialize data pipelines
- Establish a continuous quality-improvement process to systematically optimize data quality (a minimal check is sketched below)
- Collaborate with various stakeholders, including business and IT
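A minimal sketch of one such automated quality check, assuming a PySpark pipeline (the column name and threshold are illustrative):

```python
from pyspark.sql import DataFrame, functions as F


def assert_null_rate(df: DataFrame, column: str, max_null_rate: float = 0.01) -> DataFrame:
    # Fail the pipeline run if the null rate of `column` exceeds the threshold,
    # so data-quality regressions surface as build failures rather than bad data.
    total = df.count()
    nulls = df.filter(F.col(column).isNull()).count()
    rate = nulls / total if total else 0.0
    if rate > max_null_rate:
        raise ValueError(f"{column}: null rate {rate:.2%} exceeds limit {max_null_rate:.2%}")
    return df
```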
Education:
- 4+ years of engineering experience, including ETL-type work with databases and Hadoop platforms
Skills:
- Knowledge of Spark and the differences between Spark and MapReduce
- Familiarity with encryption and security in a Hadoop cluster
- Must be proficient in technical data management tasks, i.e., writing code to read, transform, and store data
- XML/JSON knowledge
- Experience working with REST APIs.
- Spark: Experience launching Spark jobs in both client and cluster mode
- Familiarity with Spark job configuration properties and their performance implications (see the configuration sketch after this list)
- Experience developing ELT/ETL processes, including loading data from enterprise-scale RDBMSs such as Oracle, DB2, and MySQL
- Authorization: Basic understanding of user authorization (Apache Ranger preferred)
- Programming: Must be able to code in Python, or be an expert in at least one high-level language such as Java, C, or Scala
- SQL: Must be an expert in manipulating database data using SQL
- Familiarity with views, functions, stored procedures, and exception handling
- AWS: General knowledge of the AWS stack (EC2, S3, EBS, …)
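To illustrate the Spark launch modes and property settings mentioned in the list above, here is a brief sketch; the resource values and S3 paths are illustrative, not recommendations:

```python
# Launching the same job in client vs. cluster mode (only the driver's
# location differs; executor behavior is identical):
#   spark-submit --master yarn --deploy-mode client  etl_job.py
#   spark-submit --master yarn --deploy-mode cluster etl_job.py

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("etl_job")
    # Illustrative property settings; each has direct performance implications:
    .config("spark.sql.shuffle.partitions", "400")  # parallelism after wide shuffles
    .config("spark.executor.memory", "8g")          # heap available per executor
    .config("spark.executor.instances", "10")       # total executors requested
    .getOrCreate()
)

# Hypothetical S3 paths: read raw JSON and persist a curated Parquet copy.
df = spark.read.json("s3://my-bucket/raw/")
df.write.mode("overwrite").parquet("s3://my-bucket/curated/")
```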
Specific Information Related to the Position:
- Flexibility to work CEST and US EST time zones (according to the team rotation plan)
- Willingness to travel to Germany, the US, and potentially other locations (per project demand)
At YASH, you are empowered to create a career that will take you where you want to go, while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided by technology, for continuous learning, unlearning, and relearning at a rapid pace and scale.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1563438