We are looking for a skilled and detail-oriented Data Engineer to design, build, and maintain scalable data pipelines and architectures. The ideal candidate will work closely with data scientists, analysts, and business stakeholders to ensure that high-quality, reliable data is available for analytics and business decision-making.
Key Responsibilities:
- Design, develop, and maintain ETL/ELT pipelines for structured and unstructured data.
- Build and optimize data warehouses, data lakes, and data marts to support analytics and reporting.
- Collaborate with data scientists and analysts to understand data requirements and implement solutions.
- Ensure data quality, integrity, and consistency across multiple sources.
- Implement data governance, security, and compliance best practices.
- Monitor and troubleshoot data pipeline performance and optimize for efficiency.
- Evaluate and recommend new tools, technologies, and frameworks for data processing and storage.
- Document processes, pipelines, and systems for operational transparency.
Qualifications & Skills:
- Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
- Proven experience as a Data Engineer, Big Data Engineer, or similar role.
- Strong programming skills in Python, Java, or Scala.
- Hands-on experience with SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra).
- Experience with cloud platforms such as AWS, Azure, or GCP, including their data storage and processing services.
- Familiarity with big data tools such as Hadoop, Spark, Kafka, or Airflow.
- Strong problem-solving, analytical, and communication skills.
- Knowledge of data modeling, ETL frameworks, and data warehousing concepts.
- Experience with CI/CD pipelines and version control (e.g., Git) is a plus.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1537471