
Data Engineer - PySpark/SQL

True Tech Professionals
Gurgaon/Gurugram
3 - 8 Years

Posted on: 15/01/2026

Job Description

Company Overview :

True Tech Professionals is a leading technology solutions provider specializing in data engineering and analytics services for businesses across various sectors, including finance, healthcare, and e-commerce. We empower organizations to leverage their data assets effectively, driving informed decision-making and achieving significant business outcomes. Our commitment to innovation and client success has established us as a trusted partner for companies seeking to unlock the power of their data.

Role Overview :

As a Data Engineer at True Tech Professionals, you will play a crucial role in building and maintaining our data infrastructure, enabling data-driven insights for our clients. You will collaborate closely with data scientists, analysts, and other engineers to design, develop, and deploy scalable data pipelines, ensuring data quality and accessibility. Your work will directly impact our clients' ability to make informed decisions, optimize their operations, and gain a competitive edge in their respective industries.


Key Responsibilities :

- Design and implement robust and scalable data pipelines to ingest, process, and transform large datasets from various sources for client projects.

- Develop and maintain data warehouses and data lakes, ensuring data quality, consistency, and accessibility for data scientists and analysts.

- Optimize data infrastructure for performance, reliability, and cost-effectiveness, ensuring efficient data processing and storage.

- Collaborate with data scientists and analysts to understand their data requirements and provide them with the necessary data infrastructure and tools.

- Implement data governance policies and procedures to ensure data security, privacy, and compliance with relevant regulations for our clients.

- Troubleshoot and resolve data-related issues, ensuring minimal disruption to data pipelines and data availability for critical business operations.

Required Skillset :

- Demonstrated ability to design, develop, and maintain data pipelines using technologies such as Spark, Kafka, and Hadoop.

- Strong proficiency in Python and PySpark for data engineering tasks.

- Proven experience in building and managing data warehouses and data lakes using cloud platforms like AWS, Azure, or GCP.

- Proficiency in SQL and NoSQL databases, including data modeling, query optimization, and database administration.

- Strong understanding of data governance principles and practices, including data quality, data security, and data privacy.

- Excellent problem-solving and analytical skills, with the ability to identify and resolve data-related issues effectively.

- Strong communication and collaboration skills, with the ability to work effectively in a team environment.

- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.


