Posted on: 04/11/2025
Description:
Company Profile: Solytics Partners is a global analytics firm, recognized with multiple industry awards for innovation and excellence.
- Our team comprises experts with deep knowledge in risk, analytics, AI/ML, AML/FCC, and fraud.
- By converging this expertise with cutting-edge technologies like AI, Machine Learning, Generative AI, and Large Language Models (LLMs), we deliver powerful automated platforms and incisive point solutions.
- Our offerings enable clients to streamline and future-proof their risk, AML, and analytics processes, comply seamlessly with global regulations, and safeguard financial systems.
- Whether it's solving complex challenges or driving operational efficiency, Solytics Partners is committed to empowering organizations with transformative tools to stay ahead in an evolving regulatory landscape.
Job Title: Sr. Data Engineer
Experience: 4-7 years of relevant experience
Location & Timings: Pune (work from office); 11:00 AM to 8:00 PM
Education Qualification: Master's or Bachelor's degree in Computer Science, IT, or another relevant discipline from a reputed institute
Role Type: Permanent / Full Time
Job Description:
- We are seeking a skilled and detail-oriented Data Engineer with 4-7 years of experience, with particular strength in Python and PySpark/Spark.
- The ideal candidate will have hands-on expertise in ETL transformation processes and will play a key role in analyzing data, building data pipelines, and delivering insights to support business decisions.
Responsibilities:
- Develop and maintain scalable ETL processes for data extraction, transformation, and loading using Python and Spark.
- Analyze complex data sets to identify trends, patterns, and actionable insights.
- Collaborate with data engineers, data scientists, and business stakeholders to define data requirements and support analytics initiatives.
- Develop efficient data processing scripts using Python (Pandas, PySpark, etc.).
- Build and optimize data models, schemas, and data warehouses to support analytics and reporting needs.
- Collaborate with cross-functional teams to ensure data quality, integrity, and availability.
- Integrate data from multiple sources such as APIs, databases, and streaming platforms (Kafka, Kinesis, etc.).
- Implement and maintain data monitoring, validation, and governance frameworks.
- Work with cloud data platforms (e.g., AWS, GCP, or Azure) and related services (e.g., AWS Glue, Lambda, S3, Redshift).
- Participate in code reviews, maintain documentation, and follow best practices in CI/CD and version control (Git).
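To illustrate the kind of ETL work the responsibilities above describe, here is a minimal sketch in plain Python. The data, field names, and validation rule are all hypothetical stand-ins; a production pipeline at this scale would typically use PySpark and write to a warehouse rather than an in-memory dict.

```python
# Minimal ETL sketch (hypothetical records and field names).
# Extract: stand-in for rows pulled from an API or database.
raw_records = [
    {"user_id": 1, "amount": "120.50", "country": "IN"},
    {"user_id": 2, "amount": "bad",    "country": "US"},
    {"user_id": 3, "amount": "75.00",  "country": "IN"},
]

def transform(records):
    """Drop malformed rows and cast the amount field to float."""
    cleaned = []
    for row in records:
        try:
            cleaned.append({**row, "amount": float(row["amount"])})
        except ValueError:
            continue  # skip rows that fail validation
    return cleaned

def load(records):
    """Aggregate total amount per country (stand-in for a warehouse write)."""
    totals = {}
    for row in records:
        totals[row["country"]] = totals.get(row["country"], 0.0) + row["amount"]
    return totals

summary = load(transform(raw_records))
print(summary)  # {'IN': 195.5}
```

The same extract/transform/load shape carries over to PySpark, where `transform` becomes DataFrame operations and `load` becomes a write to a table or object store.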
Required Key Skills:
- 4-7 years of experience as a Data Engineer or in a similar data-focused role.
- Proficiency in Python for data manipulation, automation, and scripting.
- Strong hands-on experience with PySpark/Spark for large-scale data processing.
- Solid understanding and experience in ETL transformation and data pipeline development.
- Experience handling large datasets across SQL and NoSQL data storage systems.
- Familiarity with data visualization tools (e.g., Tableau, Power BI) is a plus.
- Experience with data orchestration tools (Airflow, Prefect, etc.) and cloud data platforms (AWS/GCP/Azure).
- Understanding of data warehousing, data modeling, and performance tuning.
- Excellent problem-solving skills, analytical mindset, and attention to detail.
- Strong communication and collaboration skills with the ability to work in cross-functional Agile teams.
Good to Have:
- Experience with real-time streaming data pipelines (Kafka, Kinesis).
- Exposure to machine learning data pipelines or MLOps workflows.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1569340