hirist

DBP - Palantir Developer - Data Modeling

DBP Offshore Pvt Ltd
Bangalore
3 - 6 Years

Posted on: 25/08/2025

Job Description

Responsibilities:


- Design, develop, and implement end-to-end data pipelines and analytical solutions within the Palantir Foundry or Gotham platform.

- Integrate diverse datasets from various source systems (databases, APIs, files, streaming data) into the Palantir environment.

- Develop and optimize complex data models within Palantir, ensuring data quality, consistency, and usability for analytical purposes.

- Write efficient and scalable data transformation logic using Python and PySpark to cleanse, enrich, and prepare data for analysis.

- Utilize SQL for data extraction, manipulation, and querying across various data sources and within the Palantir platform.

- Collaborate closely with data scientists, analysts, product managers, and business stakeholders to understand requirements and translate them into technical solutions.

- Build and maintain custom applications, dashboards, and workflows within Palantir to support specific business needs.

- Troubleshoot and debug data issues, performance bottlenecks, and application errors within the Palantir ecosystem.

- Ensure data governance, security, and compliance standards are met throughout the data lifecycle within Palantir.

- Document technical designs, data flows, and operational procedures for developed solutions.

- Stay updated with the latest features and best practices of the Palantir platform and related technologies.


Required Qualifications:


- Bachelor's degree in Computer Science, Information Technology, Data Science, or a related quantitative field.

- 3+ years of professional experience in data engineering, software development, or a similar role with a strong focus on data.

- Proven hands-on experience with Palantir Foundry or Gotham, including data integration, modeling, and application development.

- Strong proficiency in Python for data manipulation, scripting, and automation.

- Solid experience with PySpark for large-scale data processing and transformations.

- Expertise in SQL for complex querying, data extraction, and database interactions.

- Demonstrated ability to design and implement robust data integration strategies from various sources.
- Experience with data modeling principles and practices, including relational and dimensional modeling.

- Excellent problem-solving skills and the ability to work independently and as part of a team.

- Strong communication and interpersonal skills to effectively collaborate with technical and non-technical stakeholders.
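The SQL expertise listed above can be sketched with a small self-contained example using Python's built-in sqlite3 module; the table, columns, and threshold are hypothetical, chosen only to show aggregation and filtering of grouped results.

```python
import sqlite3

# Hypothetical example of the kind of SQL querying described above:
# per-country revenue totals from a small in-memory table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id TEXT, country TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("A1", "in", 2500.0), ("B2", "in", 300.0), ("C3", "us", 900.0)],
)

# Aggregate per country, then keep only countries above a threshold.
rows = conn.execute(
    """
    SELECT country, SUM(revenue) AS total_revenue
    FROM orders
    GROUP BY country
    HAVING SUM(revenue) > 1000
    ORDER BY total_revenue DESC
    """
).fetchall()
# rows -> [("in", 2800.0)]
```

The same query shape (GROUP BY with a HAVING filter) carries over directly to warehouse SQL dialects and to SQL run inside the Palantir platform.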


Preferred Qualifications:


- Master's degree in a relevant field.
- Experience with cloud platforms such as AWS, Azure, or GCP.

- Familiarity with other big data technologies (Apache Spark, Hadoop, Kafka).

- Knowledge of data visualization tools and techniques.

- Experience with version control systems (Git).

- Understanding of data governance, data security, and privacy best practices.

