
Job Description

About Apexon :

Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences.

We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation.

Apexon brings together distinct core competencies in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering and UX, and our deep expertise in BFSI, healthcare, and life sciences to help businesses capitalize on the unlimited opportunities digital offers.

Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients' toughest technology problems, and a commitment to continuous improvement.

Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centers) across four continents.

Job Title : Databricks ETL Developer.

Experience : 4-6 Years.

Location : Hybrid, preferably in Bangalore.

Job Description :

We are seeking a skilled Databricks ETL Developer with 4 to 6 years of experience in building and maintaining scalable data pipelines and transformation workflows on the Azure Databricks platform.

Key Responsibilities :

- Design, develop, and optimize ETL pipelines using Azure Databricks (Spark); a minimal pipeline sketch follows this list.

- Ingest data from various structured and unstructured sources (Azure Data Lake, SQL DBs, APIs).

- Implement data transformation and cleansing logic in PySpark or Scala.

- Collaborate with data architects, analysts, and business stakeholders to understand data requirements.

- Ensure data quality, tune performance, and build error handling into data workflows.

- Schedule and monitor ETL jobs using Azure Data Factory or Databricks Workflows.

- Participate in code reviews and maintain coding best practices.
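To make the responsibilities above concrete, here is a minimal PySpark sketch of such a pipeline: ingest raw files from Azure Data Lake, cleanse and type the data, and write a Delta table. The storage account, container paths, and column names are illustrative assumptions, not details from this posting.

```python
# Minimal PySpark ETL sketch. All paths and column names are hypothetical;
# assumes a Databricks cluster already authenticated to the storage account.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Ingest: raw CSV landed in Azure Data Lake.
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/"))

# Transform/cleanse: deduplicate, drop rows missing the key, normalize types.
clean = (raw.dropDuplicates(["order_id"])
            .filter(F.col("order_id").isNotNull())
            .withColumn("order_date", F.to_date("order_date"))
            .withColumn("amount", F.col("amount").cast("decimal(18,2)")))

# Load: write as a Delta table, partitioned for downstream query performance.
(clean.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .save("abfss://curated@examplelake.dfs.core.windows.net/orders/"))
```

A job like this would typically be scheduled as a Databricks Workflows task or an Azure Data Factory pipeline activity, with alerting on failure.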

Required Skills :

- Hands-on experience with Azure Databricks and Spark (PySpark/Scala).

- Strong ETL development experience handling large-scale data.

- Proficiency in SQL and experience working with relational databases.

- Familiarity with Azure Data Lake, Data Factory, and Delta Lake (see the Delta upsert sketch after this list).

- Experience with version control tools like Git.

- Good understanding of data warehousing concepts and data modeling.
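For context on the Delta Lake skill above, the following is a representative upsert (MERGE) pattern used to maintain warehouse tables on Databricks. The table paths and join key are again hypothetical, shown only to illustrate the kind of work involved.

```python
# Hedged sketch of a Delta Lake upsert (MERGE); paths and key are illustrative.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Target warehouse table and a staged batch of changed/new rows.
target = DeltaTable.forPath(
    spark, "abfss://curated@examplelake.dfs.core.windows.net/orders/")
updates = spark.read.format("delta").load(
    "abfss://staging@examplelake.dfs.core.windows.net/orders_updates/")

# Upsert: update rows whose key already exists, insert the rest.
(target.alias("t")
       .merge(updates.alias("u"), "t.order_id = u.order_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```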

Preferred :

- Experience in CI/CD for data pipelines.

- Exposure to BI tools like Power BI for data validation.

