Posted on: 24/03/2026
Role: GCP Architect
Experience: 14+ years
Location: Bangalore, Hyderabad, Chennai, Kolkata, Pune, Gurgaon
Skills: GCP Data Architect, Python, PySpark
Description:
We are looking for a Data Engineer who will work on collecting, storing, processing, and analyzing very large data sets.
The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them.
You will also be responsible for integrating them with the architecture used across our clients.
Primary Roles and Responsibilities:
- Develop and maintain data pipelines implementing ETL processes, monitor performance, and advise on any necessary infrastructure changes.
- Translate complex technical and functional requirements into detailed designs.
- Investigate and analyze alternative solutions for data storage and processing to ensure the most streamlined approaches are implemented.
- Serve as a mentor to junior staff by conducting technical training sessions and reviewing project outputs.
Skills and Qualifications:
- Proficient understanding of distributed computing principles: Hadoop v2, MapReduce, HDFS.
- Strong data engineering skills on GCP: Airflow, Cloud Composer, Data Fusion, Dataflow, Dataproc, BigQuery.
- Experience building stream-processing systems using solutions such as Storm or Spark Streaming.
- Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala.
- Experience with Spark, SQL, and Linux.
- Knowledge of various ETL techniques and frameworks, such as Flume, Apache NiFi, or dbt.
- Experience with various messaging systems, such as Kafka or RabbitMQ.
- Good understanding of Lambda Architecture, along with its advantages and drawbacks.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1623209