Posted on: 07/04/2026
Description:
Website:
- Geektrust: https://www.geektrust.com/
- Target: https://www.target.com/
- Location: Bangalore (Manyata Tech Park)
- Work mode: Hybrid (3 days work from office)
LinkedIn:
- Geektrust: https://www.linkedin.com/company/geektrust/
- Target: https://www.linkedin.com/company/target/
Company information:
About Geektrust Labs:
Geektrust Labs, the services division of Geektrust, partners with more than 10 prominent companies, including Thoughtworks, Target, Pharmarack, and Tesco. We specialise in screening and interviewing candidates, allowing our clients to concentrate solely on the final stages of the hiring process. Successful candidates will join Geektrust as employees and will be deployed to Target to work on its projects. On completing the six-month engagement, based on your performance, you can convert to a full-time employee at Target.
Target:
Target is a Fortune 50 company and one of the world's most recognised brands. With more than 350,000 team members worldwide, it is one of America's largest big-box department store chains. Target offers outstanding value, inspiration, innovation and an exceptional guest experience that no other retailer can deliver. A responsible corporate citizen with ethical business practices, environmental stewardship and generous community support, Target's goal is to work as one team to fulfil its unique brand promise to guests, wherever and whenever they choose to shop.
Role overview:
We are seeking a highly skilled Data Engineer (Contractor) to support the development and optimization of data pipelines within our Google Cloud Platform (GCP) environment. The ideal candidate will have strong experience building Spark pipelines (Scala preferred), working with BigQuery, and writing efficient Unix scripts to support large-scale data processing.
You will collaborate with data engineering teams to design, build, and maintain high-performance data pipelines that enable analytics, reporting, and business insights.
Key Responsibilities:
- Design, develop, and deploy data ingestion and transformation pipelines using Apache Spark (Scala).
- Work within Google Cloud Platform (GCP) components, including Dataproc, BigQuery, and Cloud Storage.
- Write complex SQL queries for data analysis, validation, and transformation in BigQuery.
- Develop Unix shell scripts for automation, data movement, and pipeline orchestration.
- Optimize and troubleshoot data pipelines for performance, scalability, and reliability.
- Collaborate with data engineers, analysts, and business stakeholders to understand requirements and ensure data quality.
- Contribute to code reviews, documentation, and CI/CD integration of data workflows.
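The automation and orchestration duties above are often handled by thin shell wrappers around GCP's command-line tools. As a minimal sketch (the bucket, dataset, and table names are hypothetical, not from this posting), a daily partitioned load into BigQuery might be composed like this:

```shell
#!/usr/bin/env bash
set -euo pipefail

# build_load_cmd: compose a bq load command for one day's partition.
# Uses BigQuery's table$YYYYMMDD partition decorator to target a single
# daily partition, and --replace to make reruns idempotent.
build_load_cmd() {
  local run_date="$1"                                        # e.g. 2026-04-07
  local src="gs://example-bucket/sales/dt=${run_date}/*.parquet"  # hypothetical path
  local dest="analytics.sales_daily\$${run_date//-/}"        # hypothetical table
  printf 'bq load --source_format=PARQUET --replace %s %s\n' "$dest" "$src"
}

# Print the command rather than executing it; a real pipeline would run it
# (or hand it to a scheduler) after checking that the source files exist.
build_load_cmd "2026-04-07"
```

Composing and logging the command before executing it keeps such scripts easy to dry-run and debug, which matters when they are invoked unattended by an orchestrator.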
Required Skills & Qualifications:
- 4-6 years of hands-on experience in data engineering or related roles.
- Proven experience developing Spark applications in Scala.
- Strong experience working in Google Cloud Platform (GCP) ecosystem including Dataproc, BigQuery, Cloud Storage.
- Proficient in SQL (especially BigQuery SQL dialects).
- Strong experience with Unix/Linux scripting for data automation.
- Familiarity with version control (Git) and CI/CD processes.
- Excellent problem-solving and debugging skills.
- Strong communication and documentation abilities.
Nice-to-Have Skills:
- Experience with Python for data processing or automation.
- Knowledge of data governance, data quality, or metadata management best practices.
Education:
- Bachelor's degree in Computer Science, Engineering, or a related technical discipline (or equivalent work experience).
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1626529