Posted on: 24/07/2025
Role Overview:
We are hiring a Senior Palantir Data Engineer with deep expertise in building large-scale, cloud-native data pipelines, orchestration frameworks, and operational data services.
This role demands strong experience with Palantir Foundry, AWS, Python, and PySpark, along with a proven track record in architecting data solutions in complex enterprise environments.
As a senior engineer, you will lead design decisions, mentor junior team members, and drive the implementation of secure, scalable, and high-performance data infrastructure.
Key Responsibilities:
- Architect and develop advanced data pipelines, ingestion frameworks, and orchestration flows using PySpark, Palantir Foundry, and AWS services.
- Lead the development of domain-specific Foundry ontologies, code workbooks, and operational pipelines.
- Design reusable, scalable components and workflows for data cleansing, enrichment, and transformation.
- Drive infrastructure-as-code adoption and CI/CD pipeline integration using GitLab and AWS developer tools.
- Champion observability and operational excellence by implementing robust monitoring and alerting systems.
- Collaborate closely with stakeholders across business, analytics, and engineering to align platform capabilities with strategic goals.
- Provide technical leadership in architectural reviews, design sessions, and mentoring of junior engineers.
- Contribute to platform governance and policy enforcement, including data security and compliance (e.g., IAM best practices).
Technical Skills Required:
- Programming: Expert-level skills in Python and PySpark
- Cloud Platform: Advanced hands-on experience with AWS (S3, Glue, EMR, Lambda, IAM, CloudWatch, ECS)
- Platform Expertise: In-depth experience with Palantir Foundry, including Ontology, Object Builders, Code Workbooks, and pipeline orchestration
- DevOps & CI/CD: Git, GitLab, Bitbucket, Jenkins, AWS CodePipeline/CodeBuild
- Data Modeling & SQL: Strong SQL optimization and schema design skills
- Linux: Proficiency in system-level scripting and data management in Unix environments
- Monitoring & Logging: Tools such as Prometheus, Datadog, the ELK stack, or AWS-native tools
Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field
- 10+ years of experience in designing, building, and operating large-scale data systems
- Proven experience leading high-impact engineering teams or large-scale data transformation programs
- Ability to assess business data needs and translate them into scalable technical architectures
- Strong stakeholder management and communication skills across engineering and product teams
Functional Area: Data Engineering
Job Code: 1519025