Posted on: 21/01/2026
About the job :
At Opsera, we share a vision for the future of software delivery. We believe DevOps has transformed from an aspiration into a practical science. We built our platform to help organizations accelerate their DevOps adoption and reach peak innovation velocity. Our culture reflects the values and philosophy that inspired the foundations of the Opsera platform. Opsera is the only DevOps orchestration platform that enables integration, orchestration, and intelligence across the entire software development life cycle. With easy-to-use self-service integrations, pipeline catalogs and templates, a centralized tool registry, and unified intelligence to tie it all together, Opsera gives developers support for their tool choices, operations teams the efficiency they need, and business leaders unparalleled visibility into their DevOps environment.
The Role :
As a valued member of Opsera's Engineering Team, you will work alongside ambitious, driven engineers and team members across the globe who are committed to solving the hard challenges in today's DevOps space and to pioneering new concepts and capabilities around Artificial Intelligence in a field ripe with potential. You'll contribute to fast-paced cycles of innovation and develop new user experiences and industry-defining insights and analysis standards in the DevOps space. As an experienced Data Engineer, you will be at the forefront of our data engineering efforts, responsible for ensuring the performance, reliability, and efficiency of our data ecosystem. Your expertise will be crucial in optimizing existing queries, designing and modeling data structures, and collaborating closely with cross-functional teams to drive data-related initiatives. Your proactive approach, problem-solving skills, and deep knowledge of big data, Spark, Python, and SQL databases will directly contribute to our success.
What You Will Be Doing :
- Partner with product, data analysis, and engineering teams to build foundational data sets that are trusted, well understood, and aligned with business strategy
- Translate business requirements into data models that are easy to understand and usable across different subject areas
- Design and implement data models and pipelines that deliver data with measurable quality within the SLA
- Identify, document, and promote standard data engineering methodologies
What You Should Have :
- Minimum of 5 years of experience in data engineering, data analysis, and data modeling
- Must have experience working on data engineering workloads in Databricks
- A consistent track record of scaling and optimizing schemas and performance-tuning SQL and ELT/ETL pipelines in OLAP and data warehouse environments
- Deep understanding of Spark architecture & its components.
- Proficiency in coding with SQL, Spark, and Python
- Experience troubleshooting Spark jobs and effectively resolving issues with them
- Familiarity with data governance frameworks, the SDLC, and Agile methodology
- Excellent written and verbal communication and interpersonal skills, and the ability to collaborate efficiently with technical and business partners
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent training, fellowship, or work experience
- An understanding of ML, DL, and agentic AI is preferred
Bonus Points :
- Familiarity with DevOps processes and practices
- Experience working with cloud technologies such as AWS, GCP or Azure
- Hands-on experience with NoSQL Data Stores (MongoDB)
- Hands-on experience with Spark SQL & Big Data
- Experience around Batch & Real-time processing
- Exposure to AI technologies and an understanding of ML & LLMs
Posted in : Data Engineering
Functional Area : Data Engineering
Job Code : 1604377