Posted on: 29/10/2025
We're looking for a Python Engineer (2-4 years) who is strong in backend development and has hands-on experience implementing aggregation and data-computation use cases such as device-level rollups, metrics computation, time-based summaries, and multi-source joins.
You'll work closely with platform, data, and product teams to design efficient aggregation logic and APIs that serve real-time and historical analytics, helping make Condense's data platform more intelligent and scalable.
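To make the kind of work concrete: a device-level rollup typically buckets raw telemetry into fixed time windows and computes a per-device summary. The sketch below is a minimal, stdlib-only illustration, assuming telemetry arrives as `(device_id, epoch_ts, value)` tuples; the function name and data shape are illustrative, not part of Condense's codebase.

```python
from collections import defaultdict

def rollup(readings, window_s=300):
    """Aggregate (device_id, epoch_ts, value) readings into per-device,
    fixed-window averages - a simple device-level rollup."""
    acc = defaultdict(lambda: [0.0, 0])  # (device, window_start) -> [sum, count]
    for device_id, ts, value in readings:
        window_start = int(ts) // window_s * window_s
        acc[(device_id, window_start)][0] += value
        acc[(device_id, window_start)][1] += 1
    return {key: total / count for key, (total, count) in acc.items()}

readings = [
    ("dev-1", 0, 10.0),
    ("dev-1", 120, 20.0),   # same 5-minute window as the first reading
    ("dev-1", 400, 5.0),    # falls into the next window (starts at 300)
    ("dev-2", 60, 7.0),
]
print(rollup(readings))
# {('dev-1', 0): 15.0, ('dev-1', 300): 5.0, ('dev-2', 0): 7.0}
```

In production the same shape usually runs incrementally over a stream or a partitioned store rather than over an in-memory list.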
Key Responsibilities:
- Design and implement data aggregation logic for device-, customer-, or time-window-based metrics using Python.
- Build clean, maintainable backend services or microservices that perform aggregations and expose results through APIs or data sinks.
- Work with internal teams to translate business or analytics needs into efficient aggregation pipelines.
- Optimize data handling: caching, indexing, and computation efficiency for large-scale telemetry data.
- Collaborate with DevOps and data teams to integrate with databases, message queues, or streaming systems.
- Write high-quality, tested, and observable code, ensuring performance and reliability in production.
- Contribute to design discussions, reviews, and documentation across backend and data infrastructure components.
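One responsibility above, translating analytics needs into aggregation pipelines, often reduces to joining telemetry with reference data from another source. A minimal hash-join sketch, assuming hypothetical row shapes (dicts keyed by `device_id`); none of these names come from Condense's actual schema:

```python
def join_sources(telemetry, devices):
    """Enrich telemetry rows with device metadata via an in-memory
    hash join (inner-join semantics: unmatched rows are dropped)."""
    meta = {d["device_id"]: d for d in devices}
    joined = []
    for row in telemetry:
        device = meta.get(row["device_id"])
        if device is None:
            continue  # no matching device record -> drop the row
        joined.append({**row, "customer": device["customer"]})
    return joined

telemetry = [
    {"device_id": "dev-1", "value": 10},
    {"device_id": "dev-9", "value": 3},  # unknown device, will be dropped
]
devices = [{"device_id": "dev-1", "customer": "acme"}]
print(join_sources(telemetry, devices))
# [{'device_id': 'dev-1', 'value': 10, 'customer': 'acme'}]
```

The same pattern scales out as a broadcast join when the reference side is small, or as a keyed join in a streaming framework when it is not.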
Required Qualifications:
- 2-4 years of professional experience as a Python Developer / Backend Engineer.
- Strong proficiency with Python (async programming, data structures, I/O, concurrency).
- Experience with data aggregation, metrics computation, or analytics workflows (batch or incremental).
- Sound understanding of REST APIs, microservice architecture, and database design (SQL/NoSQL).
- Familiarity with cloud-native development and containerized deployment (Docker, Kubernetes).
- Hands-on experience with data access and transformation using libraries such as pandas and SQLAlchemy, and with FastAPI or Flask for backend services.
- Excellent debugging, profiling, and optimization skills.
Good to Have:
- Exposure to real-time data pipelines (Kafka, Kinesis, Pulsar, etc.) or streaming frameworks (Kafka Streams, ksqlDB, Faust).
- Experience with time-series databases or analytics stores (ClickHouse, Timescale, Druid, etc.).
- Understanding of event-driven or stateful aggregation patterns (tumbling/sliding windows, deduplication).
- Familiarity with CI/CD, observability tools (Prometheus, Grafana), and monitoring best practices.
- Experience working in IoT, mobility, or telemetry-heavy product environments.
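The stateful aggregation patterns mentioned above (tumbling windows with deduplication) can be sketched in a few lines of plain Python. This is a toy in-memory version, assuming events carry a unique `event_id`; real deployments would back the state with a streaming framework or a store, and the class name is purely illustrative.

```python
class TumblingCounter:
    """Counts events per tumbling window, deduplicating on event_id
    within each window (at-least-once delivery counted exactly once)."""

    def __init__(self, window_s):
        self.window_s = window_s
        self._windows = {}  # window_start -> set of seen event_ids

    def add(self, ts, event_id):
        window_start = int(ts) // self.window_s * self.window_s
        self._windows.setdefault(window_start, set()).add(event_id)

    def counts(self):
        return {start: len(ids) for start, ids in sorted(self._windows.items())}

agg = TumblingCounter(window_s=60)
agg.add(5, "e1")
agg.add(10, "e1")   # duplicate delivery of e1, counted once
agg.add(30, "e2")
agg.add(65, "e3")   # lands in the next 60-second window
print(agg.counts())
# {0: 2, 60: 1}
```

A sliding window differs only in that each event is assigned to every window it overlaps rather than exactly one.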
Posted in: Backend Development
Functional Area: Backend Development
Job Code: 1566988