Posted on: 05/01/2026
Responsibilities:
- Design, develop, and maintain scalable and reliable data pipelines using TAVS and other relevant technologies.
- Implement data ingestion, transformation, and storage solutions for various data sources.
- Optimize data pipelines for performance and efficiency.
- Build and maintain data warehouses and data lakes.
- Develop and implement data quality monitoring and alerting systems.
- Collaborate with data scientists and analysts to understand their data needs and provide solutions.
- Troubleshoot and resolve data-related issues.
- Participate in code reviews and contribute to the development of best practices.
- Stay up-to-date with the latest trends and technologies in data engineering.
- Contribute to the documentation of data pipelines and systems.
- Mentor junior data engineers.
- Work with cloud platforms (e.g., AWS, Azure, GCP) to deploy and manage data infrastructure.
- Implement data security and governance policies.
Requirements:
- Strong understanding of data warehousing concepts and principles.
- Proficiency in SQL and experience with relational databases (e.g., MySQL, PostgreSQL).
- Experience with data processing frameworks (e.g., Spark, Hadoop).
- Experience with cloud platforms (e.g., AWS, Azure, GCP).
- Experience with data ingestion tools (e.g., Kafka, Flume).
- Experience with ETL tools (e.g., Informatica, Talend).
- Strong programming skills in Python or Java.
- Experience with data visualization tools (e.g., Tableau, Power BI) is a plus.
- Excellent problem-solving and communication skills.
- Experience with TAVS.
- Experience with data modeling and schema design.
- Experience with DevOps practices and tools (e.g., Docker, Kubernetes).
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1596569