Job Description

Job Title : ETL/DataStage SQL Engineer


Key Responsibilities :


- Act as a key member of the Colocation and Site Strategy team, supporting both on-premises and cloud-based data platforms to ensure seamless data operations.


- Design, develop, and maintain robust ETL workflows and data pipelines using DataStage and other ETL tools, ensuring high data quality, consistency, and reliability.


- Support, monitor, and optimize on-prem Teradata environments while planning, migrating, and integrating data with modern cloud platforms such as Snowflake and Databricks.


- Analyze, optimize, and troubleshoot complex SQL queries, ensuring optimal database performance and minimal latency in data processing (a brief query-tuning sketch follows this list).


- Implement, maintain, and improve workflow orchestration using Airflow to automate data pipelines efficiently (see the DAG sketch after this list).


- Integrate modern DevOps best practices into ETL development, including version control via GitHub, automated CI/CD pipelines, and leveraging GitHub Copilot for enhanced coding efficiency.


- Collaborate closely with business analysts, data architects, and technical teams to design and deliver scalable, reliable, and reusable data solutions that meet evolving business needs.


- Explore and leverage AI/ML frameworks and tools to automate repetitive tasks, optimize processes, and enhance data engineering efficiency.


- Document technical workflows, ETL processes, and best practices to ensure knowledge transfer and maintain operational continuity.


- Participate actively in a hybrid work model, spending a minimum of three days per week onsite at HCL's Hyderabad or Noida offices while collaborating remotely for flexibility.
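
As a brief, hedged illustration of the query-tuning work described above (all table and column names are hypothetical, not taken from this posting), the Python snippet below contrasts a slow query shape with a tuned rewrite. Replacing SELECT * and an IN-subquery with an explicit join and a narrow column list often lets the optimizer choose a single hash join and scan far less data; on Teradata or Snowflake, the plans can be compared by prefixing each query with EXPLAIN.

# Hypothetical query-tuning example; orders/customers are illustrative tables.

# Before: SELECT * drags every column through the plan, and the
# IN-subquery may be re-evaluated per outer row on older optimizers.
slow_query = """
SELECT *
FROM orders o
WHERE o.customer_id IN (
    SELECT c.customer_id FROM customers c WHERE c.region = 'EMEA'
);
"""

# After: an explicit join plus a narrow projection; the engine reads only
# the columns it needs and can pick one hash join for the whole query.
fast_query = """
SELECT o.order_id, o.order_date, o.amount
FROM orders o
JOIN customers c
  ON c.customer_id = o.customer_id
WHERE c.region = 'EMEA';
"""

# With any PEP 249 (DB-API) connection the plan can be inspected, e.g.:
#     cursor.execute("EXPLAIN " + fast_query)
print(fast_query)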

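Below is a minimal Airflow sketch of the orchestration pattern referenced in the responsibilities. The DAG id, schedule, and callables are hypothetical; the retries, retry delay, and failure callback follow standard recent Airflow 2.x conventions.

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical extract/load steps; in practice these would trigger
# DataStage jobs, Teradata loads, or Snowflake/Databricks tasks.
def extract():
    print("pulling source data")

def load():
    print("writing to the warehouse")

def alert_on_failure(context):
    # Placeholder hook: wire this to email/Slack/paging as required.
    print(f"task failed: {context['task_instance'].task_id}")

default_args = {
    "retries": 2,                       # retry transient failures
    "retry_delay": timedelta(minutes=5),
    "on_failure_callback": alert_on_failure,
}

with DAG(
    dag_id="nightly_etl",               # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",               # daily at 02:00
    catchup=False,
    default_args=default_args,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task           # load runs only after extract succeeds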

Required Skills & Experience :


- Strong hands-on experience with UNIX/Linux environments, including shell scripting and command-line tools.


- Expertise in ETL design and development, with hands-on experience in DataStage or equivalent ETL platforms.


- Solid understanding of Data Warehousing concepts, dimensional modeling, and SQL query optimization.


- Proficiency in Snowflake, Databricks, and Teradata environments, with experience in data migration and integration.


- Advanced programming skills in Python and PySpark for data manipulation, automation, and pipeline optimization (a short PySpark sketch follows this list).


- Experience in workflow orchestration using Airflow, including scheduling, monitoring, and error handling.


- Familiarity with GitHub Copilot, modern coding practices, and DevOps workflows to improve development efficiency.


- Awareness of AI/ML concepts and the practical ability to apply automation and data-driven solutions.


- Strong analytical, problem-solving, and communication skills, with the ability to work collaboratively in a fast-paced environment.


- (Nice to have) Prior experience in the Healthcare domain, including HIPAA-compliant data handling and reporting.
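
As a short, hedged illustration of the Python/PySpark skills listed above (the input path, schema, and output location are hypothetical), the sketch below shows the typical shape of a pipeline step: read raw data, apply a basic quality gate, aggregate, and write a columnar output.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl_sketch").getOrCreate()

# Hypothetical input; the header/CSV settings are illustrative only.
orders = spark.read.option("header", True).csv("/data/raw/orders.csv")

daily_totals = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull())           # basic data-quality gate
    .groupBy("order_date")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.countDistinct("customer_id").alias("unique_customers"),
    )
)

# Partitioned, columnar output for downstream consumers.
daily_totals.write.mode("overwrite").parquet("/data/curated/daily_totals")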

