Posted on: 05/02/2026
Description:
Responsibilities:
Data Pipeline Development:
- Design, implement, and optimize end-to-end data pipelines for ingesting, processing, and transforming large volumes of structured and unstructured data.
- Develop robust ETL (Extract, Transform, Load) processes to integrate data from diverse sources into our data ecosystem.
- Implement data validation and quality checks to ensure accuracy and consistency.
Data Modeling and Architecture:
- Design and maintain data models, schemas, and database structures to support analytical and operational use cases.
- Optimize data storage and retrieval mechanisms for performance and scalability.
- Evaluate and implement data storage solutions, including relational databases, NoSQL databases, data lakes, and cloud storage services.
Data Integration and API Development:
- Build and maintain integrations with internal and external data sources and APIs.
- Implement RESTful APIs and web services for data access and consumption.
- Ensure compatibility and interoperability between different systems and platforms.
Data Infrastructure Management:
- Configure and manage data infrastructure components, including databases, data warehouses, data lakes, and distributed computing frameworks.
- Monitor system performance, troubleshoot issues, and implement optimizations to enhance reliability and efficiency.
- Implement data security controls and access management policies to protect sensitive information.
Collaboration and Documentation:
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver tailored solutions.
- Document technical designs, workflows, and best practices to facilitate knowledge sharing and maintain system documentation.
- Provide technical guidance and support to team members and stakeholders as needed.
Requirements:
- Bachelor's degree in Computer Science, Engineering, Information Systems, or related field.
- 9+ years of proven experience in data engineering, software development, or related roles.
- 9+ years of experience with programming languages commonly used in data engineering (e.g., Python, Java, SQL, stored procedures, Scala).
- 9+ years of experience with database systems and data modeling techniques, with strong SQL proficiency.
- 9+ years of proficiency with ETL tools commonly used in data engineering (e.g., SSIS, Snowflake, Databricks, Azure Data Factory, stored procedures).
- Experience with big data technologies and frameworks (e.g., Hadoop, Spark, Kafka).
- 7+ years of experience with cloud platforms and services (e.g., AWS, Azure, Google Cloud Platform).
- Excellent problem-solving skills and attention to detail.
- Effective communication and collaboration skills in a team-oriented environment.
- Ability to adapt to evolving technologies and business requirements.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1610229