Posted on: 04/08/2025
Accountabilities:
- Data Pipeline: Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity.
- Data Integration: Connect offline and online data to continuously improve the overall understanding of customer behavior and journeys for personalization. Handle data pre-processing, including collecting, parsing, managing, analyzing, and visualizing large data sets.
- Data Quality Management: Cleanse data and improve data quality and readiness for analysis. Drive standards, define and implement data governance strategies, and enforce best practices to scale data analysis across platforms.
- Data Transformation: Process data by cleansing it and transforming it into a storage structure suited to querying and analysis, using ETL and ELT processes (a minimal sketch follows this list).
- Data Enablement: Ensure data is accessible and usable to the wider enterprise to enable a deeper and more timely understanding of operations.
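
For illustration only, a minimal PySpark sketch of the kind of cleanse-and-transform (ETL) work described above. The paths, column names, and storage layout are hypothetical placeholders, not part of FedEx's actual systems.

# Illustrative sketch only: hypothetical paths, schema, and column names.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("customer-journey-etl").getOrCreate()

# Extract: read raw event data (hypothetical location).
raw = spark.read.json("/mnt/raw/customer_events/")

# Transform: deduplicate, drop records without a customer, normalize types,
# and derive a partition column.
clean = (
    raw.dropDuplicates(["event_id"])
       .filter(F.col("customer_id").isNotNull())
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
)

# Load: write to a partitioned, query-friendly structure (Parquet here;
# Delta Lake would be the natural choice on Databricks).
clean.write.mode("overwrite").partitionBy("event_date").parquet("/mnt/curated/customer_events/")
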
Qualifications & Specifications:
- Master's/Bachelor's degree in Engineering, Computer Science, Math, Statistics, or equivalent.
- Strong programming skills in Python/PySpark/SAS.
- Proven experience with large data sets and related technologies such as Hadoop, Hive, distributed computing systems, and Spark optimization.
- Experience with cloud platforms (preferably Azure) and their services, such as Azure Data Factory (ADF), ADLS Storage, and Azure DevOps.
- Hands-on experience with Databricks, Delta Lake, and Workflows.
- Knowledge of DevOps processes and tools such as Docker, CI/CD, Kubernetes, Terraform, and Octopus.
- Hands-on experience with SQL and data modeling to support the organization's data storage and analysis needs (see the sketch after this list).
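
Again for illustration only, a minimal sketch of persisting curated data as a Delta table and querying it with SQL, assuming a Databricks/Delta Lake environment; the schema, table, and column names are hypothetical.

# Illustrative sketch only: hypothetical "analytics" schema and table names.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-modeling-demo").getOrCreate()

# Persist curated data as a managed Delta table registered in the metastore.
df = spark.read.parquet("/mnt/curated/customer_events/")
df.write.format("delta").mode("overwrite").saveAsTable("analytics.customer_events")

# Downstream consumers can then model and query it with plain SQL.
daily = spark.sql("""
    SELECT event_date,
           COUNT(DISTINCT customer_id) AS active_customers
    FROM analytics.customer_events
    GROUP BY event_date
    ORDER BY event_date
""")
daily.show()
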
Posted By: FedEx
Talent Acquisition Specialist at FEDEX EXPRESS TRANSPORTATION AND SUPPLY CHAIN SERV
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1524004