- Minimum of 5 years' experience in data engineering, with hands-on expertise in AWS services, Databricks, and/or Informatica IDMC.
- Strong proficiency in programming languages such as Python, Java, or Scala to build and maintain data pipelines.
- Skilled in assessing technical solutions and providing recommendations to resolve data issues, with a focus on optimizing complex transformations and long-running processes.
- Working knowledge of both SQL and NoSQL databases.
- Familiarity with data modeling principles and schema design best practices.
- Demonstrated problem-solving ability and strong analytical thinking.
- Excellent communication and collaboration skills for cross-functional teamwork.
- AWS, Databricks, or Informatica certifications (e.g., AWS Certified Data Analytics – Specialty) are considered an advantage.
- Experience working with big data technologies such as Apache Spark and Hadoop within Databricks environments.
- Knowledge of containerization and orchestration tools, including Docker and Kubernetes, is preferred.