Posted on: 17/11/2025
Description:
Responsibilities:
- Execute R&D of distributed, highly scalable, and fault-tolerant microservices.
- Use test-driven development techniques to develop beautiful, efficient, and secure code.
- Create and scale high-performance services that bring new capabilities to Arctic Wolf's data science organisations.
- Identify problems proactively and propose novel solutions to solve them.
- Continuously learn and expand your technical horizons.
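The test-driven development responsibility above can be illustrated with a minimal, self-contained sketch (the function and its domain are hypothetical, not from the posting): tests are written first and drive the implementation they exercise.

```python
import unittest


def classify_severity(score: int) -> str:
    """Hypothetical helper: map a numeric risk score to a severity label.

    Written to satisfy the tests below, in test-first style.
    """
    if score >= 80:
        return "critical"
    if score >= 50:
        return "high"
    return "low"


class TestClassifySeverity(unittest.TestCase):
    # In TDD these cases exist before the implementation and define its contract.
    def test_critical(self):
        self.assertEqual(classify_severity(90), "critical")

    def test_high_boundary(self):
        self.assertEqual(classify_severity(50), "high")

    def test_low(self):
        self.assertEqual(classify_severity(10), "low")


if __name__ == "__main__":
    unittest.main()
```

The boundary test (`score == 50`) is the kind of edge case that test-first design surfaces before the code is written.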
Requirements:
- Will collaborate closely with our data science and detection research teams across different cybersecurity domains to define research detection infrastructure requirements and build critical data services.
- Has proficiency in big data technologies such as Apache Spark, Databricks, Kafka, SQL, and Terraform.
- Has experience interacting with LLMs and authoring workflows for them, such as prompts or tools, in AWS Bedrock.
- Has extensive experience with data pipeline tools (Flink, Spark or Ray) and orchestration tools such as Airflow, Dagster or Step Functions.
- Has knowledge of Data Lake technologies, data storage formats (Parquet, ORC, Avro), and query engines (Athena, Presto, Dremio) and associated concepts for building optimised solutions at scale.
- Maintains an expert level in one of the following programming languages or similar: Python, Java, Go, Scala.
- Is an expert in implementing data streaming and event-based data solutions (Kafka, Kinesis, SQS/SNS or the like).
- Has experience deploying software with CI/CD tools, including Jenkins, Harness, Terraform, etc.
- Has hands-on experience implementing data pipeline infrastructure for data ingestion and transformation, with near-real-time availability of data for applications and ETL pipelines.
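The streaming and near-real-time ingestion requirements above follow a common pattern: consume events, transform each, and emit batches downstream. A minimal, library-free Python sketch of that loop (all names hypothetical; a real pipeline would sit on Kafka/Kinesis plus Flink, Spark, or Ray):

```python
from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class Event:
    """Hypothetical event record, standing in for a Kafka/Kinesis message."""
    key: str
    value: int


def transform(event: Event) -> Event:
    # Example per-event transformation: normalise the key, scale the value.
    return Event(key=event.key.lower(), value=event.value * 2)


def process_stream(events: Iterable[Event], batch_size: int = 2) -> List[List[Event]]:
    """Consume events, transform each, and emit fixed-size batches.

    This is the core consume-transform-batch loop behind near-real-time
    ingestion; a production system would replace `events` with a broker
    consumer and write batches to a sink (e.g. Parquet files in a data lake).
    """
    batches: List[List[Event]] = []
    current: List[Event] = []
    for event in events:
        current.append(transform(event))
        if len(current) == batch_size:
            batches.append(current)
            current = []
    if current:  # flush the final partial batch
        batches.append(current)
    return batches
```

Batching by count (or, in practice, by time window) is the usual trade-off between per-event latency and per-write overhead on columnar sinks such as Parquet.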
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1576134