Posted on: 05/06/2025
Job Description:
Responsibilities:
- Support BlueVine's business and research Data teams by designing and developing data-centric infrastructure tools to facilitate and enhance Data Analytics and Research, both in real time and offline.
- Help design and implement next-generation ML workloads, focusing on modern, efficient Big Data technologies.
- Use pipeline orchestration tools such as Airflow, ML platforms such as SageMaker, and data frameworks to design and develop best-in-class solutions for the financial sector.
- Design and develop end-to-end data pipelines, from data collection, through data validation and transformation, to making the data available to processes and stakeholders.
- Work closely with BlueVine's Data Science, Data Engineering, and Data Analytics teams.
- Work closely with BlueVine's other R&D teams to help incorporate industry best practices, define and enforce data governance procedures, and monitor system performance.
Requirements:
- Bachelor's or Master's in Computer Science or a related field.
- 5+ years of full-time work experience (not including internships, study, personal/school projects) as a Backend engineer/ML Infra engineer in a fast-paced, data-centric environment.
- 5+ years of hands-on Python programming experience (not including internships, study, personal/school projects).
- Experience with the AWS ecosystem and container technology (e.g., Docker, Kubernetes).
- Exceptional communication skills; able to collaborate smoothly with people of different backgrounds and convey complex ideas clearly.
- Quick learner, adaptable, and able to work independently or as part of a team in a fast-paced environment.
- Ability to quickly and independently learn new technologies, frameworks, and algorithms.
- Proactive, results-driven multi-tasker; creative but committed to meeting deadlines.
- Excellent written and verbal English.
Bonus points if you also have:
- Experience with AWS SageMaker.
- Experience with databases such as PostgreSQL, Redshift, Neptune.
- Experience with monitoring tools such as Grafana, Sentry, and OpenSearch.
- Experience working with Python ML libraries (e.g., pandas, NumPy, scikit-learn).
- Experience working with Airflow.
- Experience with Jenkins-based CI/CD.
- Experience with streaming and real-time analytics systems.
Posted By
hitanshi darmwal
Last Login: N/A (the recruiter posted this job through a third-party tool).
Posted in
AI/ML
Functional Area
ML / DL Engineering
Job Code
1491013