Posted on: 13/10/2025
Job Summary:
We are seeking a highly proficient Data Engineer with 4–6 years of specialized experience in designing, building, and maintaining robust, scalable, and secure data infrastructure.
This role is pivotal in enabling advanced analytics, reporting, and machine learning use cases by focusing on data integration, transformation (ETL/ELT), and optimal storage solutions.
The engineer will collaborate closely with data analysts, data scientists, and software engineers to ensure high-quality, well-structured datasets are accessible, accurate, and ready for critical business decision-making and product innovation.
Job Description:
Data Pipeline Architecture and Development:
- Demonstrate expertise in relational databases (e.g., PostgreSQL, MySQL, SQL Server), including advanced schema design, indexing, and tuning for analytical efficiency.
- Possess strong proficiency in SQL for complex data querying, manipulation, and stored procedure development.
- Apply hands-on experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB) to architect solutions for unstructured or high-volume, low-latency data storage needs.
- Implement effective data modeling techniques to optimize database structures for both transaction speed and analytical reporting.
Analytics, Integration, and Collaboration:
- Ensure data readiness and availability to support diverse Analytics & Integration use cases, including complex data science and machine learning (ML) workloads.
- Apply solid knowledge of API integration principles to reliably ingest data from external services and internal microservices.
- Work closely with cross-functional teams (Data Scientists, Analysts, Software Engineers) to understand data requirements and deliver datasets that support business decision-making and product innovation.
- Serve as a technical resource for data consumers, advising on optimal data access patterns and governance standards.
Required Skills & Expertise:
Programming:
- Mandatory proficiency in Python, Java, or Scala for high-performance data pipeline construction.
Databases (Relational):
- Expert knowledge and hands-on experience with relational databases (PostgreSQL, MySQL, SQL Server) and strong SQL mastery.
Databases (NoSQL):
- Hands-on experience with NoSQL databases (MongoDB, Cassandra, DynamoDB).
Streaming/Big Data:
- Experience with stream processing technologies (Kafka, Kinesis) and distributed data processing frameworks (e.g., Apache Spark).
Data Integration:
- Expertise in ETL/ELT processes, data modeling, and API integration for data sourcing.
Quality & Performance:
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1560004