Posted on: 03/10/2025
Data Engineering Foundations :
- Design & Development : Design and implement scalable data architectures and datasets that support the organization's evolving data needs, providing the technical foundations for our analytics team and business users.
- Data Engineering : Build and support large datasets for batch and real-time analytical solutions, leveraging data transformation technologies.
- Data Security & Scalability : Enable robust data-level security features and build scalable solutions that support dynamic cloud environments, including cost (financial) considerations.
- Process Improvement : Perform code reviews with peers and make recommendations on how to improve our end-to-end development processes.
AI/ML Innovation & Business Impact :
Develop & Deploy Classical ML Models : Own the end-to-end lifecycle of machine learning projects.
- You'll build and productionize sophisticated models for critical business areas such as marketing attribution, customer churn prediction, case escalation, and other post-sales use cases.
Optimize AI Agentic Systems : Play a key role in our generative AI initiatives.
- You will be responsible for characterizing, evaluating, and fine-tuning AI agents, such as conversational systems that allow users to query massive datasets using natural language, to improve their accuracy, efficiency, and reliability.
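For context only, a minimal sketch of what an offline evaluation of such an agent could look like; the `run_agent` stub and the gold question/answer pairs below are hypothetical placeholders, not part of our actual stack:

```python
# Minimal sketch of an offline evaluation harness for a natural-language
# query agent (illustrative; `run_agent` is a hypothetical stand-in for
# whatever conversational system is being characterized).
import time

def run_agent(question: str) -> str:
    # Placeholder: a real agent would translate the question into a query
    # against the dataset and return an answer string.
    canned = {"How many open cases are there?": "42"}
    return canned.get(question, "unknown")

# Hypothetical gold set of question/answer pairs used for regression testing.
gold = [
    ("How many open cases are there?", "42"),
    ("Which region had the most escalations last quarter?", "EMEA"),
]

correct, latencies = 0, []
for question, expected in gold:
    start = time.perf_counter()
    answer = run_agent(question)
    latencies.append(time.perf_counter() - start)
    correct += int(answer.strip().lower() == expected.strip().lower())

print(f"accuracy: {correct / len(gold):.2f}")
print(f"mean latency: {sum(latencies) / len(latencies) * 1000:.1f} ms")
```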
Partner with Business Stakeholders :
- Act as an internal consultant to our Go-to-Market (GTM), Global Customer Services (GCS), Product, and Finance teams.
- You'll translate business challenges into data science use-cases, identify opportunities for AI-driven solutions, and present your findings in a clear, actionable manner.
Own the Full Data Science Lifecycle :
- Your responsibilities cover the entire project workflow: working with the business to understand the problem, charting a path to a solution, feature engineering, model selection and training, robust evaluation, deployment, and, in partnership with the data platform team, ongoing monitoring for performance degradation.
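As a rough, illustrative sketch of that lifecycle (synthetic data; the churn-style feature names, model choice, and metrics are assumptions, not a prescribed approach):

```python
# Minimal sketch of the lifecycle described above: hypothetical churn-style
# features, feature preprocessing, model training, and held-out evaluation.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(42)
n = 2_000
df = pd.DataFrame({
    "tenure_months": rng.integers(1, 60, n),
    "support_cases_90d": rng.poisson(2, n),
    "plan_tier": rng.choice(["basic", "pro", "enterprise"], n),
})
# Hypothetical label: churn is more likely with short tenure and many cases.
df["churned"] = (
    (df["support_cases_90d"] - df["tenure_months"] / 12 + rng.normal(0, 1, n)) > 1
).astype(int)

X, y = df.drop(columns="churned"), df["churned"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# Feature engineering + model selection wrapped in one pipeline.
model = Pipeline([
    ("prep", ColumnTransformer([
        ("num", StandardScaler(), ["tenure_months", "support_cases_90d"]),
        ("cat", OneHotEncoder(handle_unknown="ignore"), ["plan_tier"]),
    ])),
    ("clf", GradientBoostingClassifier(random_state=0)),
])
model.fit(X_train, y_train)

# Robust evaluation on the held-out split before any deployment decision.
proba = model.predict_proba(X_test)[:, 1]
print("ROC AUC:", round(roc_auc_score(y_test, proba), 3))
print(classification_report(y_test, model.predict(X_test)))
```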
Qualifications :
- 4 to 7+ years' experience building and maintaining data pipelines for reporting, analysis, and feature engineering.
- Experience building and optimizing clean, well-structured analytical datasets for business and data science use cases, including implementing and supporting big data solutions for both batch (scheduled) and real-time (streaming) analytics.
- Prior experience working extensively within dynamic cloud environments, specifically Google Cloud (BigQuery and Vertex AI).
- Prior experience developing dashboards in Tableau, Looker, or a similar data visualization platform.
- Nice to have : Experience implementing and managing data-level security features to ensure data is protected and access is properly controlled.
- Expert-level programming skills in Python and familiarity with core data science and machine learning libraries (e.g., scikit-learn, Pandas, PyTorch/TensorFlow, XGBoost).
- A solid command of SQL for complex querying and data manipulation.
- Proven ability to work autonomously, navigate ambiguity, and drive projects from concept to completion.
Preferred Qualifications :
- Prior working experience in the customer analytics space and with customer experience use cases, e.g., escalation, risk prediction, renewals, and efficiency of project delivery in the Professional Services space.
- Direct experience with generative AI, including hands-on work with LLMs and frameworks like LangChain, LlamaIndex, or the Hugging Face ecosystem.
- Experience in evaluating and optimizing the performance of AI systems or agents.
- Demonstrated expertise in specialized modeling domains such as causal inference or time-series analysis.
- An MS or PhD in a quantitative field such as Computer Science, AI, or Statistics, or equivalent practical or military experience.