Big Data Developer

Populace World Solutions
Pune
7 - 10 Years

Posted on: 10/10/2025

Job Description

Description : Big Data Developer

Location : Pune, India

Experience : 7+ Years

About the Role :

We are seeking a highly experienced and technically proficient Big Data Developer to join our team in Pune. In this role, you will be responsible for designing, developing, and optimizing large-scale data processing pipelines and platforms. The ideal candidate has a deep background in the modern data ecosystem, with expertise in Python, PySpark, and cloud-based data analytics services, and a proven ability to solve complex data transformation challenges.

Key Responsibilities :

- Big Data Development : Design, build, and maintain robust, scalable, and highly performant data pipelines using Python and PySpark.

- Data Architecture : Work extensively with core Big Data technologies, including Hadoop and Apache Spark, to process vast datasets.

- Platform Expertise : Leverage deep knowledge of the Databricks platform to manage and execute data workloads efficiently.

- Data Structuring : Implement best practices for data storage and querying, making extensive, hands-on use of Delta Tables and file formats such as JSON and Parquet.

- Cloud Integration : Develop and deploy solutions leveraging AWS data analytics services.

- Problem Solving : Tackle and resolve complex data processing and transformation problems, ensuring data quality, accuracy, and performance.

- Database Management : Apply solid knowledge of both NoSQL and RDBMS databases in data ingestion and integration workflows.

- Collaboration : Communicate clearly and collaborate effectively with cross-functional teams, including data scientists, analysts, and other engineering teams.

Required Qualifications :

- Experience : 7+ years of professional experience as a Big Data Engineer or in a similar role.

- Programming Proficiency : Must be proficient with Python and PySpark.

- Core Big Data : In-depth, practical knowledge of Hadoop and Apache Spark.

- Data Lake/Warehouse Technologies : Extensive hands-on experience with Databricks and deep familiarity with Delta Tables.

- File Formats : Must have extensive experience working with and optimizing data stored in JSON and Parquet file formats.

- Database Knowledge : Solid understanding and working knowledge of both NoSQL and RDBMS databases.

- Analytical Skills : Proven ability to analyze and solve complex data processing, transformation, and optimization problems.

Desired/Good-to-Have Qualifications :

- Cloud Data Services : Experience with AWS data analytics services such as Athena, Glue, Redshift, and EMR.

- Data Warehousing : Familiarity and practical experience with Data Warehousing concepts and methodologies.

