Posted on: 14/07/2025
Job Description:
Responsibilities:
- Identify patterns, trends, and anomalies in log data to generate actionable insights for product enhancements and feature optimization.
- Collaborate with cross-functional teams to gather business requirements and translate them into functional and technical specifications.
- Manage and organize large volumes of application log data using Google BigQuery.
- Design and develop interactive dashboards to visualize key metrics and insights using tools such as Tableau, Power BI, or ThoughtSpot AI.
- Create intuitive, impactful visualizations to communicate findings to teams, including customer success and leadership.
- Ensure data integrity, consistency, and accessibility for analytical purposes.
- Analyse application logs to extract metrics and statistics related to product performance, customer behaviour, and user sentiment.
- Work closely with product teams to understand log data generated by Python-based applications.
- Collaborate with stakeholders to define key performance indicators (KPIs) and success metrics.
- Optimize data pipelines and storage in BigQuery (an illustrative query sketch follows this list).
- Participate in the development of proof-of-concepts (POCs) and pilot projects.
- Articulate ideas and points of view clearly to the team.
- Take ownership of data analytics and data engineering solutions.
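For illustration only (not part of the formal responsibilities): below is a minimal Python sketch of the kind of BigQuery log analysis described above, using the google-cloud-bigquery client. The project, dataset, table, and column names (my-project.app_logs.events, event_timestamp, user_id, feature, latency_ms) are hypothetical placeholders, not details taken from this posting.

# Illustrative sketch only: assumes a hypothetical BigQuery table `app_logs.events`
# with `event_timestamp`, `user_id`, `feature`, and `latency_ms` columns.
from google.cloud import bigquery

client = bigquery.Client()  # uses Application Default Credentials

QUERY = """
SELECT
  DATE(event_timestamp)   AS day,
  feature,
  COUNT(DISTINCT user_id) AS daily_active_users,
  APPROX_QUANTILES(latency_ms, 100)[OFFSET(95)] AS p95_latency_ms
FROM `my-project.app_logs.events`
WHERE event_timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY day, feature
ORDER BY day, feature
"""

# Run the query and print one metric row per day/feature pair.
for row in client.query(QUERY).result():
    print(row.day, row.feature, row.daily_active_users, row.p95_latency_ms)

In practice, results of this kind would feed the dashboards mentioned above, built in Tableau, Power BI, or ThoughtSpot AI.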
Requirements:
- 5+ years of experience in dashboard storytelling, dashboard creation, and data engineering pipelines.
- Hands-on experience with log analytics, user engagement metrics, and product performance metrics.
- Strong communication and teamwork skills.
- Ability to learn quickly and adapt to new technologies.
- Excellent problem-solving skills.
Nice-to-Haves:
- Knowledge of Generative AI (GenAI) and LLM-based solutions.
- Experience in designing and developing dashboards using ThoughtSpot AI.
- Good exposure to Google Cloud Platform (GCP).
- Data engineering experience with modern data warehouse architectures.
- Experience working with large datasets and distributed data processing tools such as Apache Spark or Hadoop (see the sketch after this list).
- Familiarity with Agile development methodologies and version control systems like Git.
- Familiarity with ETL tools such as Informatica or Azure Data Factory.
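For illustration only: a minimal PySpark sketch of the distributed log processing mentioned above. The bucket path and field names (gs://my-bucket/logs/*.json, event_timestamp, user_id, feature) are hypothetical placeholders.

# Illustrative sketch only: aggregates JSON application logs with PySpark,
# assuming hypothetical `event_timestamp`, `user_id`, and `feature` fields.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("log-metrics-sketch").getOrCreate()

# Read newline-delimited JSON logs from a hypothetical storage path.
logs = spark.read.json("gs://my-bucket/logs/*.json")

# Daily active users per feature, computed in a distributed fashion.
daily_usage = (
    logs.withColumn("day", F.to_date("event_timestamp"))
        .groupBy("day", "feature")
        .agg(F.countDistinct("user_id").alias("daily_active_users"))
)

daily_usage.show()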
Posted By: Manind
Last Active: NA (the recruiter posted this job through a third-party tool)
Posted in: Data Analytics & BI
Functional Area: Data Mining / Analysis
Job Code: 1512964