Description :
Are you obsessed with data, partner success, taking action, and changing the game? If you have a whole lot of hustle and a touch of nerd, come work with Pattern! We want you to use your skills to push one of the fastest-growing companies in the US to the top of the list.
Pattern is a leading eCommerce data and growth company headquartered in the Silicon Slopes tech hub, with global offices in Europe, China, India, Australia, the Middle East, and Canada. Named one of the fastest-growing companies in the US by the Inc. 500, Pattern has made its mark in the industry.
Some of the biggest consumer brands, like Skullcandy, Nestle, Clorox, Kong, Panasonic, and Tumi, trust Pattern with their business. Pattern has recruited talent from companies like Amazon, eBay, Adobe, PepsiCo, Apple, Google, and Oracle. Think you have what it takes to work at Pattern? If you have a whole lot of hustle and a touch of nerd, Pattern is the place for you.
We're looking for a dynamic, people-oriented, programming-language-agnostic, and highly analytical person to join our development team in our Pune office. You would work with a team of 4-6 highly skilled engineers to scale Pattern's Data Platform.
What You'll Need :
- Experience building data ingestion and transformation pipelines using modern ETL frameworks.
- Strong capabilities in at least one programming language used for data pipelines (e.g., Go preferred; Ruby on Rails/Python/Scala a plus).
- Working knowledge of relational (Postgres/MySQL) and NoSQL databases.
- Advanced SQL query skills for performance and scale.
- Strong experience with AWS services, including S3, EMR/Glue, Athena, Lambda, EC2, RDS, IAM, and cost monitoring.
- Demonstrated experience optimizing large-scale data pipelines for performance, including techniques such as parallel processing, multithreading, and/or massively parallel processing (MPP).
- Understanding of bottlenecks in data pipelines and ability to optimize for high throughput and low latency.
- Familiarity with DevOps and SRE principles, and experience with infrastructure as code (Terraform/CloudFormation).
- Experience writing and maintaining automated tests.
- Clear, concise technical documentation skills and a commitment to championing clean code.
What Could Set You Apart ?
- Deep hands-on expertise in building workflows with Apache Airflow, including orchestrating complex DAGs, integrations, and monitoring.
- Extensive experience with data streaming systems like Apache Kafka for high-throughput, low-latency ingestion and event-driven architectures.
- Solid understanding of monitoring and observability stacks, including Prometheus for metrics and Grafana for dashboarding and incident response.
- Experience building dashboards and visualizations in Apache Superset (or similar tools).
- Bachelor's or Master's degree in Computer Science, Information Systems, or a relevant field.
- Familiarity with Snowflake, Trino, Spark, and similar distributed query/execution engines.
- Prior experience implementing end-to-end monitoring strategies and incident response processes.
- Strong experience working with remote/global teams.
- Exceptional written, verbal, and visual communication skills, including presenting complex data topics to non-technical stakeholders.
- Experience operating in a highly regulated or security-conscious environment.
- Demonstrated thought leadership in optimizing and automating cost and resource use in cloud data solutions.
- Good to have : hands-on experience leveraging GPU acceleration (e.g., CUDA, RAPIDS, or similar) for distributed data processing or analytics workloads.
- Hands-on experience applying AI applications and tools to software engineering tasks such as code generation, code review, automated testing, or researching solutions.
What We're About ?
We are looking for individuals who are :
Game Changers :
A game changer is someone who looks at problems with an open mind and shares new ideas with team members; regularly reassesses existing plans and attaches realistic timelines to goals; makes profitable, productive, and innovative contributions; and actively pursues improvements to Pattern's processes and outcomes.
Data Fanatics :
A data fanatic is someone who recognizes problems and seeks to understand them through data, draws unbiased conclusions based on data that lead to actionable solutions, and continues to track the effects of the solutions using data.
Posted in
Data Engineering
Functional Area
Data Engineering
Job Code
1621403