Posted on: 15/12/2025
Key Responsibilities:
- Apply SQL and Python skills, along with therapeutic area knowledge, to conduct exploratory analysis on pharma datasets (e.g., patient claims, sales, payer), build reports from scratch, and recommend dataset applications for existing or new use cases.
- Collaborate with data ingestion teams to ensure that integration of commercial datasets from providers such as IQVIA and Symphony follows best practices and includes appropriate quality checks.
- Apply AWS services (Glue, Lambda, S3) to support the development and maintenance of scalable data solutions.
- Translate business requirements into technical specifications and mock-ups, applying best practices and an enterprise mindset.
- Support automation and innovation efforts by leveraging GenAI tools and scalable frameworks to enhance analytics delivery.
- Contribute to data quality efforts by applying and enhancing existing QC frameworks to ensure reliability and consistency across domains.
- Partner with cross-functional teams, including data engineering, forecasting, and therapeutic area leads, to align on business rules and metric definitions used in building patient journey, market access, and adherence solutions.
- Assist in data strategy activities such as brand planning, data budget forecasting, and Statement of Work (SoW) management.
- Ensure adherence to data access protocols and compliance standards, especially when working with sensitive patient-level data.
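To give a flavour of the first responsibility above, here is a minimal, purely illustrative sketch of an exploratory SQL-in-Python analysis on a patient-claims dataset. All table, column, and patient names are invented for the example; real claims data from providers like IQVIA or Symphony would have a far richer schema.

```python
# Illustrative sketch only: a tiny hypothetical patient-claims table and a
# first-pass exploratory query (all names and values are invented).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE claims (
           patient_id  TEXT,
           fill_date   TEXT,     -- ISO date of the prescription fill
           brand       TEXT,
           days_supply INTEGER
       )"""
)
conn.executemany(
    "INSERT INTO claims VALUES (?, ?, ?, ?)",
    [
        ("P001", "2025-01-05", "BrandA", 30),
        ("P001", "2025-02-04", "BrandA", 30),
        ("P002", "2025-01-10", "BrandA", 30),
    ],
)

# Fills and total days supplied per patient: the kind of adherence-style
# summary an analyst might run before building a report.
rows = conn.execute(
    """SELECT patient_id,
              COUNT(*)         AS n_fills,
              SUM(days_supply) AS total_days
       FROM claims
       GROUP BY patient_id
       ORDER BY patient_id"""
).fetchall()
print(rows)  # -> [('P001', 2, 60), ('P002', 1, 30)]
```

In practice the same query pattern would run against a warehouse such as Redshift or Snowflake rather than an in-memory SQLite database.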
Qualifications & Experience:
- Bachelor's or master's degree in Engineering, Statistics, Data Science, or a related field.
- Minimum of 4-5 years of experience in a Data Analyst role within the biopharma or pharmaceutical industry.
- Prior experience working with commercial real-world data, including prescription, claims, and sales datasets.
- Strong analytical and problem-solving skills, with the ability to interpret complex data and deliver actionable insights.
- Effective communication and stakeholder management skills, with the ability to work independently or collaboratively, manage multiple priorities, and deliver with integrity, urgency, and accountability.
- Strong proficiency in SQL and Python.
- Strong proficiency and hands-on experience with BI tools, including Power BI and Tableau.
- Exposure to platforms such as Domino and Databricks, along with experience using Redshift and Snowflake, is a plus.
- Familiarity with AWS services (Glue, Lambda, S3) and cloud-based data engineering practices is a plus.
- Experience with GitHub, JIRA, and Confluence is a plus.
- Understanding of data architecture, ETL processes, and data modelling is a plus.
Posted in: Data Analytics & BI
Functional Area: Data Mining / Analysis
Job Code: 1590753