Posted on: 11/11/2025
Experience:
- 7-10 years of experience in design, architecture, or development in Analytics and Data Warehousing.
- Experience building end-to-end solutions on Big Data platforms with Spark or Scala programming.
- 5 years of solid experience building ETL pipelines with Spark or a Scala programming framework, with knowledge of developing UNIX shell scripts and Oracle SQL/PL-SQL.
- Experience with Big Data platforms for ETL development on the AWS cloud.
- Proficiency in AWS cloud services, specifically EC2, S3, Lambda, Athena, Kinesis, Redshift, Glue, EMR, DynamoDB, IAM, Secrets Manager, Step Functions, SQS, SNS, and CloudWatch.
- Excellent skills in Python-based framework development are mandatory.
- Should have experience with Oracle SQL database programming, SQL performance tuning, and relational model analysis.
- Extensive experience with Teradata data warehouses and Cloudera Hadoop. Proficient across Enterprise Analytics/BI/DW/ETL technologies such as Teradata Control Framework, Tableau, OBIEE, SAS, Apache Spark, and Hive.
- Appreciation of Analytics & BI architecture and broad experience across all technology disciplines.
- Experience working within a Data Delivery Life Cycle framework and Agile methodology.
- Extensive experience in large enterprise environments handling large volumes of data under high SLAs.
- Good knowledge of developing UNIX scripts, Oracle SQL/PL-SQL, and Autosys JIL scripts.
- Well versed in AI-powered engineering tools such as Cline and GitHub Copilot.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1572100