Role: Senior ETL Developer (DataStage)
Experience: 5-8 years
Location: Bengaluru, Karnataka, India
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and passionate Senior ETL Developer with extensive experience in DataStage to join our growing team in Bengaluru. In this role, you will be instrumental in designing, developing, and maintaining robust, efficient ETL solutions, primarily using IBM DataStage, while also leveraging modern cloud data technologies such as Databricks and AWS. If you are an expert in data integration, have a strong understanding of cloud-based data architecture, and thrive in a dynamic environment, we encourage you to apply.
Key Responsibilities:
ETL Development & Design:
- Design, develop, test, and deploy complex ETL solutions using IBM DataStage (versions 8.x, 9.x, 11.x, or higher) to extract, transform, and load data from various source systems into target data warehouses and data lakes.
- Develop and optimize DataStage jobs, sequences, routines, and shared containers for high performance and scalability.
- Implement robust error handling, data validation, and data quality checks within ETL processes.
- Collaborate with data architects, data modelers, and business analysts to understand data requirements and translate them into efficient ETL designs.
Cloud Data Integration:
- Utilize Databricks for data transformation, analysis, and building data pipelines, leveraging PySpark or Scala.
- Explore and integrate other relevant cloud technologies to enhance data processing capabilities.
Scripting & Automation:
- Automate operational tasks to improve efficiency and reduce manual intervention.
Data Analysis & SQL Proficiency:
- Perform data profiling and analysis to understand data structures, identify data quality issues, and propose solutions.
Performance Tuning & Optimization:
- Propose and implement optimization techniques for improved data processing efficiency.
Documentation & Best Practices:
- Adhere to and promote best practices in ETL development, coding standards, and version control.
Collaboration & Communication:
- Clearly communicate technical concepts and solutions to both technical and non-technical stakeholders.
- Participate in code reviews and provide constructive feedback to peers.
Qualifications:
- Proficiency in UNIX shell scripting for automation and job orchestration.
- Strong understanding and practical experience with cloud-based data architecture, particularly AWS services.
- Experience with Databricks for data processing and transformations is highly desirable.
- Expertise in writing and optimizing SQL queries.
- Solid grasp of data warehousing concepts, dimensional modeling, and ETL methodologies.
- Excellent analytical, problem-solving, and debugging skills.
- Exceptional communication (written and verbal) and interpersonal skills.
- Ability to work independently and as part of a collaborative team in a fast-paced environment.
- Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
Bonus Points (Nice to Have):
- Knowledge of Big Data technologies (e.g., Hadoop, Spark).
- Familiarity with CI/CD pipelines for data solutions.
- Experience in an Agile development environment.
Posted in: Data Engineering
Functional Area: Big Data / Data Warehousing / ETL
Job Code: 1515852