Job Description : Full Stack Engineer

About the Company :

USEReady helps businesses become self-reliant with data. Having grown more than 3000% since inception, USEReady ranked #113 on the 2015 Inc. 500 list and was named among Red Herring's Top 100 companies in North America in 2015. USEReady is built on a strong entrepreneurial spirit and offers unprecedented opportunities for career growth.

USEReady helps enterprises apply AI and agentic intelligence to improve decisions, automate operations, and build smarter, more autonomous business systems. For more than a decade, we have modernized BI environments, migrated legacy platforms, improved data quality, and enabled governed, cloud-first architectures. These foundations now support AI-driven insights, automated intelligence, and agent-powered decision support that reduce complexity and accelerate outcomes.

We work closely with technology leaders such as AWS, Elementum, Snowflake, Tableau, and Databricks. Headquartered in New York City with 450+ experts across the US, Canada, India, and Singapore, USEReady serves the financial services, healthcare, manufacturing, government, education, and retail industries.

Role Summary :

We are seeking a Full Stack Engineer with strong Python expertise and proven experience building end-to-end ETL pipelines to design, develop, and maintain scalable data workflows. The role focuses on extracting data from multiple sources, transforming it based on business requirements, and loading it into analytics and data warehouse platforms (primarily Snowflake).

This position plays a critical role in enabling data-driven decision-making by ensuring reliable, high-quality, and production-ready data pipelines.
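
By way of illustration, a minimal pandas-based ETL sketch of the kind described above might look like the following. This is a sketch only, not a prescribed implementation; the file name, table name, and Snowflake connection parameters are hypothetical placeholders.

    # Minimal ETL sketch: extract a CSV, apply a business rule, load to Snowflake.
    # All file, table, and connection names below are hypothetical placeholders.
    import pandas as pd
    import snowflake.connector
    from snowflake.connector.pandas_tools import write_pandas

    # Extract: read a structured source file.
    orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

    # Transform: apply business logic (keep completed orders, derive revenue).
    completed = orders[orders["status"] == "COMPLETED"].copy()
    completed["revenue"] = completed["quantity"] * completed["unit_price"]

    # Load: write the transformed frame into a Snowflake table.
    conn = snowflake.connector.connect(
        account="my_account", user="etl_user", password="***",
        warehouse="ETL_WH", database="ANALYTICS", schema="SALES",
    )
    write_pandas(conn, completed, table_name="FACT_ORDERS", auto_create_table=True)
    conn.close()

In production, logic like this would typically run inside an orchestrated pipeline (Azure Data Factory, Databricks, or Airflow) rather than as a standalone script.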

Roles & Responsibilities :

- Design, develop, and maintain end-to-end ETL/ELT pipelines using cloud-based data integration tools such as Azure Data Factory or Databricks

- Build and optimize Python-based data processing workflows using libraries such as pandas and PySpark

- Extract data from diverse structured and semi-structured sources and transform it according to defined business logic and data standards

- Load transformed data into Snowflake and other analytics platforms, ensuring performance, scalability, and reliability

- Write and optimize complex SQL queries for data transformation, validation, and performance tuning

- Ensure data quality, accuracy, and consistency by implementing validation checks and monitoring mechanisms

- Collaborate closely with analytics, BI, and business teams to understand data requirements and translate them into scalable technical solutions

- Monitor, troubleshoot, and resolve data pipeline failures in production environments

- Implement best practices for workflow orchestration, including dependency management, retries, and scheduling (a minimal orchestration sketch follows this list)

- Support data testing initiatives by integrating validation frameworks and contributing to robust data governance practices

- Document data pipelines, transformations, and operational processes to ensure maintainability and knowledge sharing

- Participate in code reviews and contribute to continuous improvement of data engineering standards and practices
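
To make the orchestration responsibility above concrete, here is a minimal Apache Airflow sketch (one of the tools named under Desirable Skills) showing scheduling, retries, and task dependencies. The DAG id and the three callables are hypothetical placeholders, not part of the role description.

    # Minimal Airflow DAG sketch: daily schedule, retries, explicit dependencies.
    # The dag_id and the callables are hypothetical placeholders.
    from datetime import datetime, timedelta
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull data from source systems")

    def transform():
        print("apply business logic")

    def load():
        print("load into the warehouse")

    with DAG(
        dag_id="daily_sales_etl",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                 # scheduling (Airflow 2.4+; older versions use schedule_interval)
        default_args={
            "retries": 2,                  # retry failed tasks automatically
            "retry_delay": timedelta(minutes=5),
        },
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)
        t_extract >> t_transform >> t_load # dependency management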

Mandatory Skills :

- 3-4 years of hands-on experience in data engineering, with strong exposure to end-to-end ETL pipeline development

- Experience using Azure Data Factory, Databricks, or similar cloud-based data integration tools

- Strong Python programming skills for data processing and transformation

- Hands-on experience with pandas, PySpark, or similar data libraries

- Proven experience implementing ETL/ELT pipelines in production environments

- Strong SQL skills for querying, transforming, and optimizing large datasets

Desirable Skills :

- Hands-on experience with Snowflake, including data loading, performance tuning, tasks, streams, and query optimization

- Exposure to Azure Data Lake Storage, Azure Synapse Analytics, or Databricks

- Familiarity with workflow orchestration tools such as Apache Airflow or Prefect

- Understanding of orchestration concepts such as retries, alerts, and dependency management

- Experience with data testing and validation frameworks such as Great Expectations or dbt tests
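
As a minimal illustration of the validation checks these frameworks formalize, the hand-rolled pandas sketch below enforces a few common expectations (non-null keys, uniqueness, value ranges). Column and file names are hypothetical placeholders; tools such as Great Expectations or dbt tests express the same ideas declaratively.

    # Minimal data-quality checks in plain pandas; validation frameworks such as
    # Great Expectations or dbt tests formalize these checks declaratively.
    # Column and file names are hypothetical placeholders.
    import pandas as pd

    def validate(df: pd.DataFrame) -> list[str]:
        failures = []
        if df["order_id"].isna().any():
            failures.append("order_id contains nulls")
        if df["order_id"].duplicated().any():
            failures.append("order_id is not unique")
        if (df["revenue"] < 0).any():
            failures.append("revenue contains negative values")
        return failures

    issues = validate(pd.read_csv("orders.csv"))
    if issues:
        raise ValueError("Data quality check failed: " + "; ".join(issues))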

