hirist

Job Description

We have vacancies for the AWS Big Data Specialist position.

We are looking for a highly skilled Senior AWS Data Engineer to join our growing data engineering team. The ideal candidate will have deep hands-on experience in AWS cloud services, big data processing frameworks, and advanced SQL programming. You will be responsible for building and maintaining scalable, high-performance data solutions that enable advanced analytics and data-driven decision-making across the organization.

This is a great opportunity for someone passionate about working with modern data technologies and optimizing large-scale distributed systems.

Key Responsibilities:

- Design, build, and optimize data pipelines and ETL processes using AWS Glue, Lambda, RDS, and S3.

- Develop scalable batch and streaming data processing workflows using Apache Spark, Spark Streaming, and Kafka.

- Work with SQL and NoSQL databases (MySQL, PostgreSQL, Elasticsearch) to design schemas and optimize queries for high performance.

- Write efficient and optimized SQL for data extraction, transformation, and analytics.

- Build and maintain data ingestion, transformation, and integration solutions across multiple systems.

- Collaborate with cross-functional teams (data science, analytics, engineering) to provide reliable, high-quality data.

- Use Scala, Java, and Python for data processing and automation scripts.

- Implement monitoring, logging, and alerting for data pipelines using AWS services (CloudWatch, CloudTrail).

- Ensure data security, compliance, and governance using AWS best practices.

Required Skills & Experience:

- 5+ years of experience with AWS Services including RDS, AWS Lambda, AWS Glue, EMR, S3, and related big data technologies.

- 5+ years of experience working with Apache Spark, Spark Streaming, Kafka, and Hive.

- Strong experience in SQL and NoSQL databases (MySQL, PostgreSQL, Elasticsearch).

- Proficiency in Java and Scala, with hands-on scripting experience in Python and Unix/Linux shell.

- Deep understanding of Spark programming paradigms (batch and stream processing).

- Advanced knowledge of SQL, including query optimization, indexing, and performance tuning.

- Strong analytical and problem-solving skills with the ability to handle large datasets efficiently.
