
Job Description

Shift: 1 pm to 10 pm / 3 pm to 12 am

Key Responsibilities:

- Design, develop, and optimize data pipelines and ETL workflows using Databricks (a minimal sketch of such a pipeline follows this list).

- Implement scalable data integration solutions for large datasets across diverse data sources.

- Build and maintain data architectures for real-time and batch processing.

- Collaborate with data scientists, analysts, and stakeholders to ensure the delivery of high-quality data solutions.

- Monitor, troubleshoot, and optimize data workflows for performance and cost-efficiency.

- Ensure data governance, security, and compliance in all processes.
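
To give candidates a concrete picture of the day-to-day work, here is a minimal sketch of the kind of Databricks ETL job described above. It is illustrative only: the paths, table names, and columns are hypothetical.

```python
# Minimal sketch of a batch ETL job (PySpark on Databricks, writing
# to Delta Lake). All paths, table names, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

# On Databricks a preconfigured `spark` session already exists;
# the builder is only needed when running this sketch elsewhere.
spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw JSON files landed by an upstream feed.
raw = spark.read.json("/mnt/raw/orders/")

# Transform: drop malformed rows, normalize types, stamp load time.
clean = (
    raw.dropna(subset=["order_id", "amount"])
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .withColumn("load_ts", F.current_timestamp())
)

# Load: append into a managed Delta table for downstream consumers.
clean.write.format("delta").mode("append").saveAsTable("analytics.orders")
```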

Qualifications:

- Very clear understanding of data lake, data warehouse, and lakehouse concepts.

- Strong background in data engineering practices, cloud platforms, and big data processing frameworks.

- Strong hands-on experience with Databricks, including Spark and Delta Lake.

- 3+ years of hands-on experience with Spark (mandatory), using Databricks, Azure/AWS, and associated data services.

- 3+ years of hands-on experience with SQL, Unix and advanced Unix shell scripting, and Kafka.

- Expertise in Python, Java, or Scala.

- Experience with Git, SVN, build tools such as Ant and Maven, and CI/CD pipelines.

- Hands-on experience with file transfer mechanisms (NDM, SFTP, etc.).

- Knowledge of schedulers such as Airflow and TWS (a scheduling sketch follows this list).

- Strong problem-solving skills and the ability to work in an agile environment.
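
For the scheduling requirement, here is a minimal sketch of an Airflow DAG that runs a daily Spark job of the kind sketched earlier. The DAG id, schedule, and spark-submit command are hypothetical examples.

```python
# Minimal sketch of an Airflow DAG that schedules a daily Spark job.
# The DAG id, schedule, and submit command are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",  # Airflow 2.4+ style; runs daily at 02:00
    catchup=False,
) as dag:
    # Submit the PySpark ETL job sketched earlier to the cluster.
    run_etl = BashOperator(
        task_id="run_orders_etl",
        bash_command="spark-submit /opt/jobs/orders_etl.py",
    )
```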

Educational Background:

- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

