
Job Description

Company Description :


CodeChavo is a global provider of digital transformation solutions, working collaboratively with leading technology companies to deliver impactful results.

By combining cutting-edge technology, innovative strategies, and a people-first approach, CodeChavo integrates transformation and agility into its clients' organizations.

With deep industry expertise, the company ensures high-quality project execution from design to operation.

CodeChavo also supports businesses by outsourcing digital projects and building skilled technology teams.

Work location : Noida (work from office).

Job Summary :

- Build systems for collection & transformation of complex data sets for use in production systems


- Collaborate with engineers on building & maintaining back-end services


- Implement data schema and data management improvements for scale and performance


- Provide insights into key performance indicators for the product and customer usage


- Serve as the team's authority on data infrastructure, privacy controls, and data security.

- Collaborate with appropriate stakeholders to understand user requirements


- Support efforts for continuous improvement, metrics and test automation


- Maintain operations of live service as issues arise on a rotational, on-call basis


- Verify that the data architecture meets security and compliance requirements and expectations.

- Should be able to learn quickly and adapt at a rapid pace to Java/Scala and SQL.

Minimum Qualifications :


- Bachelor's degree in computer science, computer engineering or a related field, or equivalent experience.

- 3+ years of progressive experience demonstrating strong architecture, programming and engineering skills.

- Firm grasp of data structures and algorithms, with fluency in programming languages such as Java, Python, and Scala.

- Strong SQL skills, including the ability to write complex queries.

- Strong experience with orchestration tools such as Airflow.

- Demonstrated ability to lead, partner, and collaborate cross-functionally across many engineering organizations.

- Experience with streaming technologies such as Apache Spark, Kafka, and Flink (a small illustrative sketch follows this list).

- Backend experience including Apache Cassandra, MongoDB, and relational databases such as Oracle and PostgreSQL; solid hands-on experience (4+ years) with AWS/GCP.

- Strong communication and soft skills.

- Knowledge of and/or experience with containerized environments, Kubernetes, and Docker.

- Experience implementing and maintaining highly scalable microservices using REST, Spring Boot, and gRPC.

- Appetite for trying new things and building rapid POCs.
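
To give a sense of the streaming work named above, here is a minimal, hypothetical Spark Structured Streaming sketch in Scala that consumes a Kafka topic and counts events per minute. The broker address, topic name, and windowing choices are placeholders for illustration, not details of this role, and the job assumes the spark-sql-kafka connector is on the classpath.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ClickStreamSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("click-stream-sketch").getOrCreate()
    import spark.implicits._

    // Read raw events from a Kafka topic (broker and topic are placeholders).
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "click-events")
      .load()
      .selectExpr("CAST(value AS STRING) AS raw", "timestamp")

    // Count events per one-minute window, tolerating five minutes of late data.
    val counts = events
      .withWatermark("timestamp", "5 minutes")
      .groupBy(window($"timestamp", "1 minute"))
      .count()

    // Write running counts to the console; a real job would target a sink
    // such as a warehouse table or another topic.
    counts.writeStream
      .outputMode("update")
      .format("console")
      .start()
      .awaitTermination()
  }
}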

Key Responsibilities :


- Design, develop, and maintain scalable data pipelines to support data ingestion, processing, and storage.

- Implement data integration solutions to consolidate data from multiple sources into a centralized data warehouse or data lake.

- Collaborate with data scientists and analysts to understand data requirements and translate them into technical specifications.

- Ensure data quality and integrity by implementing robust data validation and cleansing processes.

- Optimize data pipelines for performance, scalability, and reliability.

- Develop and maintain ETL (Extract, Transform, Load) processes using tools such as Apache Spark, Apache NiFi, or similar technologies (a small illustrative sketch follows this list).

- Monitor and troubleshoot data pipeline issues, ensuring timely resolution and minimal downtime.

- Implement best practices for data management, security, and compliance.

- Document data engineering processes, workflows, and technical specifications.

- Stay up-to-date with industry trends and emerging technologies in data engineering and big data.
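
As a rough illustration of the ETL responsibilities above, the following Scala sketch shows a simple batch pipeline with Apache Spark: extract from a relational source, apply basic validation, and load partitioned Parquet into a data lake. Connection details, table names, and paths are illustrative placeholders only.

import org.apache.spark.sql.{SaveMode, SparkSession}
import org.apache.spark.sql.functions._

object OrdersEtlSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("orders-etl-sketch").getOrCreate()

    // Extract: read an orders table from PostgreSQL (connection details are placeholders).
    val orders = spark.read
      .format("jdbc")
      .option("url", "jdbc:postgresql://db-host:5432/shop")
      .option("dbtable", "public.orders")
      .option("user", "etl_user")
      .option("password", sys.env.getOrElse("DB_PASSWORD", ""))
      .load()

    // Transform: basic validation and cleansing, plus a partition column.
    val cleaned = orders
      .filter(col("order_id").isNotNull && col("amount") >= 0)
      .withColumn("order_date", to_date(col("created_at")))

    // Load: write partitioned Parquet into the data lake (path is a placeholder).
    cleaned.write
      .mode(SaveMode.Overwrite)
      .partitionBy("order_date")
      .parquet("s3a://example-data-lake/curated/orders/")

    spark.stop()
  }
}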
