Posted on: 10/12/2025
Description:
Experience: 5-10 years
Salary: 15-30
Location: Remote
Position: Contract
Mode: Remote
Skills required:
- 5+ years of experience supporting Big Data and Azure cloud platforms.
- Strong skills in Spark, Python, Linux, Shell scripting, and DevOps/CI/CD tools (e.g., Jenkins).
- Hands-on experience with Azure data services (Databricks, Data Factory, ADLS) and infrastructure automation (Terraform).
- Strong communication, documentation, and collaboration skills.
- Ability to work independently and deliver in a fast-paced environment.
Key Responsibilities:
- Design, build, and optimize big data pipelines and transformations using Spark (PySpark/Scala); see the illustrative sketch after this list.
- Develop scalable data ingestion, processing, and storage solutions to support analytics and business intelligence needs.
- Optimize distributed data processing workloads for performance, reliability, and cost efficiency.
- Build and maintain data solutions leveraging Azure Databricks, Azure Data Factory (ADF), Azure Data Lake Storage (ADLS), and Azure Synapse (where applicable).
- Develop automation scripts using Python, Shell, and Linux tools to streamline deployments and operational tasks.
- Work with CI/CD pipelines using tools such as Jenkins, Azure DevOps, or GitHub Actions to automate build, testing, and deployment processes.
- Ensure data accuracy, integrity, and consistency across multiple systems.
- Implement monitoring, alerting, and logging frameworks for pipelines and environments.
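To give candidates a concrete picture of the pipeline work in the first responsibility, here is a minimal PySpark sketch of a batch transformation that reads raw events from ADLS, cleanses and aggregates them, and writes a curated, partitioned output. The storage account (examplestorage), container names, and columns (event_id, event_timestamp, event_type) are hypothetical, and the cluster is assumed to already hold credentials for the storage account.

```python
# A minimal sketch only; paths, storage account, and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-daily-aggregation").getOrCreate()

# Read raw JSON events from a (hypothetical) ADLS Gen2 container.
raw = spark.read.json("abfss://raw@examplestorage.dfs.core.windows.net/events/")

# Basic cleansing: drop records without an event id, derive an event date.
cleaned = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("event_date", F.to_date("event_timestamp"))
)

# Aggregate per day and event type for downstream BI consumption.
daily = cleaned.groupBy("event_date", "event_type").agg(
    F.count("*").alias("event_count")
)

# Write partitioned Parquet back to a curated zone.
(daily.write.mode("overwrite")
      .partitionBy("event_date")
      .parquet("abfss://curated@examplestorage.dfs.core.windows.net/events_daily/"))
```

Partitioning the curated output by date keeps downstream reads incremental, which is typical of the performance and cost-efficiency goals listed above.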
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1588435