Posted on: 10/08/2025
Job Description:
- Experience with SQL Server, Oracle, and/or cloud databases.
- Experience with data warehousing and data marts, including Star and Snowflake schema models.
- Experience loading data into databases from other databases and flat files.
- Experience in analyzing and drawing design conclusions from data profiling results.
- Understanding of business processes and the relationships between systems and applications.
- Must be comfortable conversing with end-users.
- Must be able to manage multiple projects/clients simultaneously.
- Excellent analytical, verbal, and written communication skills.
Role and Responsibilities:
- Work with business stakeholders and build data solutions to address analytical & reporting requirements.
- Work with application developers and business analysts to implement and optimize Databricks/AWS-based solutions that meet data requirements.
- Design, develop, and optimize data pipelines using Databricks (Delta Lake, Spark SQL, PySpark), AWS Glue, and Apache Airflow.
- Implement and manage ETL workflows using Databricks notebooks, PySpark, and AWS Glue for efficient data transformation.
- Develop and optimize SQL scripts, queries, views, and stored procedures to enhance data models and improve query performance on managed databases.
- Conduct root cause analysis and resolve production problems and data issues.
- Create and maintain up-to-date documentation of the data model, data flows, and field-level mappings.
- Provide support for production problems and daily batch processing.
- Provide ongoing maintenance and optimization of database schemas, data lake structures (Delta Tables, Parquet), and views to ensure data integrity and performance.
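The responsibilities above center on loading file- and database-sourced data into a star-schema model and exposing views for reporting. A minimal sketch of that pattern follows, using Python's standard-library sqlite3 in place of the actual Databricks/AWS stack; the table names, columns, and amounts are illustrative assumptions, not taken from the posting:

```python
import sqlite3

# In-memory database stands in for a managed warehouse (the real stack
# in this role would be Databricks/Delta or AWS-hosted databases).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Minimal star schema: one dimension table and one fact table.
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("""CREATE TABLE fact_sales (
    sale_id INTEGER PRIMARY KEY,
    product_id INTEGER REFERENCES dim_product(product_id),
    amount REAL)""")

# Load rows from a flat-file-style source (here, hard-coded tuples).
cur.executemany("INSERT INTO dim_product VALUES (?, ?)",
                [(1, "widget"), (2, "gadget")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(10, 1, 19.75), (11, 1, 5.25), (12, 2, 42.5)])

# A reporting view joining fact to dimension, as the SQL
# responsibilities above describe.
cur.execute("""CREATE VIEW v_sales_by_product AS
    SELECT p.name, SUM(f.amount) AS total
    FROM fact_sales f JOIN dim_product p USING (product_id)
    GROUP BY p.name""")

print(cur.execute("SELECT name, total FROM v_sales_by_product ORDER BY name").fetchall())
# [('gadget', 42.5), ('widget', 25.0)]
```

In the production stack described above, the tables would be Delta Tables and the view would be defined in Spark SQL, but the fact/dimension join pattern is the same.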
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1527277