Posted on: 02/12/2025
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL processes in the cloud.
- Work with stakeholders to understand data requirements and deliver solutions.
- Develop & optimize SQL queries for data extraction, transformation, and reporting.
- Build and maintain scripts and automation using Python.
- Integrate with and perform basic operations within the Snowflake data warehouse.
- Ensure data quality, performance, and reliability of systems.
- Collaborate with cross-functional teams including Data Scientists, Analysts, and DevOps.
- Design and implement scalable database structures and ensure data integrity across systems.
- Analyze large and complex datasets, troubleshoot issues, and support data-driven decision-making.
- Participate in data architecture discussions and contribute to best practices in database design and data governance.
- Monitor and optimize data pipeline performance, troubleshoot bottlenecks, and recommend tuning improvements.
- Participate in structured peer code reviews and deployment processes to ensure production readiness and reduce risk.
- Take part in initiatives to continuously improve version control practices and deployment workflows.
- Participate in root cause analysis sessions to identify the impact of data issues and improve long-term system stability.
Required Skills & Experience:
- Strong SQL development skills and solid understanding of database design principles, indexing strategies, and normalization.
- Expert-level proficiency in SQL (e.g., T-SQL, PL/SQL) and query optimization techniques.
- Strong experience with AWS cloud services (e.g., S3, Lambda, Glue, Redshift).
- Proficiency in Python for data engineering and automation tasks.
- Experience with Snowflake or similar modern cloud-based data platforms.
- Familiarity with modern data architecture patterns and ETL best practices.
- Experience with Git and CI/CD tools for version control and deployment automation.
- Experience with orchestration tools such as Airflow and container platforms such as Kubernetes.
- 5+ years of hands-on experience as a Data Engineer or in a similar data-focused role.
- Experience with large, complex datasets and enterprise-scale data environments.
- Experience participating in or managing data deployment processes, with understanding of production release and risk mitigation practices.
- Detail-oriented mindset with a focus on quality, performance, and stability.
- Ability to understand the broader system and business impact of technical solutions.
- Strong analytical, problem-solving, and communication skills.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1583743