Posted on: 07/10/2025
Description :
Role : Data Engineer
Location : Remote
Experience Required : 5 to 10 years
About the Role :
We are seeking an experienced Data Engineer to design, build, and maintain robust data pipelines and infrastructure. The ideal candidate will have a strong background in data integration, real-time and batch processing, and a deep understanding of modern data platforms across cloud environments.
Key Responsibilities :
- Design, develop, and maintain ETL/ELT pipelines to enable efficient data flow across systems.
- Ingest and transform data from various sources: APIs, databases, files, and streaming platforms.
- Build and optimize real-time and batch data processing solutions.
- Implement data validation, quality, and cleansing frameworks to ensure accuracy and consistency.
- Translate business requirements into scalable data models and architectures.
- Ensure data security, access control, and compliance with organizational and regulatory standards.
- Collaborate with cross-functional teams to enhance data availability and usability.
- Maintain comprehensive documentation, enforce best practices, and contribute to continuous improvement initiatives.
Technical Skills & Tools :
- Cloud Platforms : AWS, GCP, Azure
- Data Warehousing : Snowflake, BigQuery, Redshift, Azure Synapse
- Data Lake & Processing : Databricks (Lakehouse, Spark), Apache Spark, Hadoop
- Data Integration & Workflow Tools : Airflow, dbt, Kafka, AWS Glue, Azure Data Factory
- Programming & Query Languages : Python (PySpark), SQL
- Data Quality & Governance : Informatica, Talend, Collibra
- Experience in real-time and batch pipeline development and management
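To illustrate the kind of batch ETL work described above, here is a minimal extract–transform–load sketch using only the Python standard library. The feed, table name, and validation rules are invented for illustration; in this role the same pattern would run on PySpark, Airflow, or a cloud warehouse rather than sqlite3.

```python
import csv
import io
import sqlite3

# Hypothetical raw feed: in practice this would arrive from an API,
# file drop, or streaming source such as Kafka.
RAW_CSV = """user_id,event,amount
1,purchase,19.99
2,refund,-5.00
,purchase,3.50
3,purchase,not_a_number
"""

def extract(raw):
    """Parse CSV text into row dicts (the 'extract' step)."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows):
    """Validate and clean rows (the data-quality 'transform' step)."""
    clean = []
    for row in rows:
        if not row["user_id"]:
            continue  # drop rows missing the key column
        try:
            amount = float(row["amount"])
        except ValueError:
            continue  # drop rows with an unparseable amount
        clean.append((int(row["user_id"]), row["event"], amount))
    return clean

def load(rows, conn):
    """Write clean rows to the target table (the 'load' step)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events (user_id INT, event TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # two of the four raw rows pass validation
```

The separation into `extract`, `transform`, and `load` functions mirrors how pipeline stages are typically kept independent and individually testable, whatever orchestration tool runs them.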
What We Offer :
- Fully remote work environment
- Opportunity to work on cutting-edge data infrastructure
- Collaborative and growth-oriented culture
Posted in : Data Engineering
Functional Area : Data Engineering
Job Code : 1557022