Job Summary:
We are looking for an experienced Data Engineer to develop, optimize, and maintain large-scale PostgreSQL data pipelines, with a focus on performance, data quality, and compliance in high-concurrency environments.
Key Responsibilities:
- Develop, optimize, and troubleshoot SQL queries in PostgreSQL, including in high-concurrency environments
- Work with PostgreSQL at massive scale (e.g., 5,000+ parallel processes or instances, or similarly large setups)
- Manage data ingestion from multiple sources, ensuring data integrity, consistency, and availability
- Monitor data workflows, identify bottlenecks, and apply performance tuning (see the tuning sketch after this list)
- Collaborate with data architects, analysts, and stakeholders to define and fulfill data requirements
- Ensure data quality, validation, and reconciliation across systems
- Create and maintain documentation for data processes, models, and architecture
- Ensure ETL pipelines meet security, privacy, and compliance standards
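
As an illustration of the query-tuning responsibility above, here is a minimal Python sketch that fetches a query's execution plan with EXPLAIN ANALYZE and flags sequential scans; the DSN, table, and query are hypothetical placeholders, not details from this role.

```python
# Minimal sketch: surface tuning targets in a slow query's plan.
# The DSN, table, and query below are hypothetical placeholders.
import psycopg2

SLOW_QUERY = "SELECT * FROM orders WHERE customer_id = %s"

def get_plan(dsn, query, params):
    """Fetch the JSON execution plan PostgreSQL produces for `query`."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute("EXPLAIN (ANALYZE, FORMAT JSON) " + query, params)
            return cur.fetchone()[0][0]["Plan"]  # psycopg2 parses json columns

def flag_seq_scans(plan):
    """Recursively report sequential scans, a common tuning target."""
    if plan.get("Node Type") == "Seq Scan":
        print(f"Seq Scan on {plan['Relation Name']}: "
              f"{plan['Actual Total Time']:.1f} ms")
    for child in plan.get("Plans", []):
        flag_seq_scans(child)

if __name__ == "__main__":
    flag_seq_scans(get_plan("dbname=analytics", SLOW_QUERY, (42,)))
```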
Required Skills & Experience:
- Strong hands-on experience with PostgreSQL, including optimization at scale
- Proven ability to manage and process data across massively parallel systems (e.g., 5,000-processor environments)
- Proficient in SQL, PL/pgSQL, and performance tuning
- Experience with ETL tools such as Talend, Apache NiFi, Informatica, and Airflow
- Familiarity with big data ecosystems (Hadoop, Spark, Kafka) is a plus
- Strong understanding of data modeling, warehousing, and data governance (see the reconciliation sketch after this list)
- Excellent analytical, debugging, and problem-solving skills
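
To illustrate the data-quality and reconciliation work this role involves, here is a minimal Python sketch that compares row counts between staging and warehouse copies of the same tables; the DSN and table names are hypothetical placeholders.

```python
# Minimal sketch: reconcile row counts between staging and warehouse copies.
# The DSN and table pairs below are hypothetical placeholders.
import sys
import psycopg2

CHECKS = [
    ("staging.orders", "warehouse.orders"),
    ("staging.customers", "warehouse.customers"),
]

def row_count(cur, table):
    # Table names come from the hard-coded list above, never from user
    # input, so plain string formatting is acceptable here.
    cur.execute(f"SELECT count(*) FROM {table}")
    return cur.fetchone()[0]

def reconcile(dsn):
    ok = True
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        for source, target in CHECKS:
            src, tgt = row_count(cur, source), row_count(cur, target)
            if src != tgt:
                ok = False
                print(f"MISMATCH: {source}={src} vs {target}={tgt}")
    return ok

if __name__ == "__main__":
    sys.exit(0 if reconcile("dbname=analytics") else 1)
```

The non-zero exit code lets a scheduler treat a mismatch as a failed run and alert on it.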
Preferred Qualifications:
- Familiarity with DevOps and CI/CD practices for data pipelines (see the Airflow sketch after this list)
- Exposure to real-time streaming data processing
- Knowledge of scripting languages (Python, Bash, etc.)
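
As a taste of pipelines-as-code, relevant to the CI/CD and scripting points above, here is a minimal Airflow DAG sketch, assuming Airflow 2.4+; the DAG id and task bodies are hypothetical placeholders.

```python
# Minimal sketch: a daily extract-transform-load DAG, assuming Airflow 2.4+.
# The dag_id and task bodies are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source systems")

def transform():
    print("validate and reshape")

def load():
    print("write to the warehouse")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # `schedule` replaced `schedule_interval` in 2.4
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> transform_task >> load_task
```

Because a DAG file like this is plain Python, it can be versioned in Git and deployed through an ordinary CI/CD review pipeline.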
Education:
Posted in: Data Engineering
Functional Area: Big Data / Data Warehousing / ETL
Job Code: 1551143