hirist

Astrosoft - AWS Data Engineer - ETL/Scala

Posted on: 18/07/2025

Job Description


About the Role:

We are looking for a highly skilled and experienced AWS Data Engineer to join our dynamic team in Hyderabad. As an AWS Data Engineer, you will be instrumental in designing, building, and maintaining robust, scalable, and efficient data pipelines and data solutions on the Amazon Web Services (AWS) platform. You will work with diverse data sources, ensuring data is accurately collected, stored, processed, and made available for analysis, supporting our clients' critical business intelligence and analytical needs. This is an excellent opportunity for someone who is passionate about data and cloud technologies and is eager to make an immediate impact.

Key Responsibilities:

Data Pipeline Design & Development:

- Design, build, and maintain highly scalable and fault-tolerant ETL/ELT data pipelines on AWS using services like AWS Glue, AWS Step Functions, AWS Lambda, AWS Batch, and Apache Airflow.

- Develop robust solutions for data ingestion from various sources (e.g., relational databases, APIs, streaming data, flat files) into AWS.

- Implement data transformation logic to clean, enrich, and prepare data for analytical consumption.
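As a toy illustration of the transform step described above (pure Python, with hypothetical field names; in practice this logic would run inside AWS Glue, Lambda, or a Spark job):

```python
from datetime import datetime, timezone

def transform(records):
    """Clean and enrich raw records before loading them for analytics.

    `records` is a list of dicts as they might arrive from an ingestion
    source; the field names here are illustrative only.
    """
    cleaned = []
    for rec in records:
        # Clean: drop rows missing a primary key, normalise string fields.
        if not rec.get("order_id"):
            continue
        rec["customer"] = rec.get("customer", "").strip().lower()
        # Enrich: derive a load timestamp and a revenue column.
        rec["loaded_at"] = datetime.now(timezone.utc).isoformat()
        rec["revenue"] = rec.get("quantity", 0) * rec.get("unit_price", 0.0)
        cleaned.append(rec)
    return cleaned

rows = transform([
    {"order_id": "A1", "customer": "  ACME ", "quantity": 3, "unit_price": 2.5},
    {"order_id": None, "customer": "dropped"},
])
```

The same clean-then-enrich shape scales up unchanged when the loop body becomes a Spark or Glue DynamicFrame transformation.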

AWS Data Services Expertise:

- Proficiently utilize and optimize core AWS data services such as Amazon S3 (for data lake storage), Amazon Redshift (data warehousing), Amazon RDS (relational databases), Amazon DynamoDB (NoSQL), Amazon Kinesis (real-time data streaming), and Amazon Athena (ad-hoc querying).

- Leverage AWS Lake Formation for managing data access and security within the data lake.

- Implement effective security measures and access controls in alignment with AWS best practices and company policies (IAM).
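For instance, least-privilege access to a data-lake bucket is typically expressed as an IAM policy attached to the pipeline's execution role; the bucket name and prefix below are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadDataLakeRaw",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-data-lake",
        "arn:aws:s3:::example-data-lake/raw/*"
      ]
    }
  ]
}
```

Note that `s3:ListBucket` applies to the bucket ARN while `s3:GetObject` applies to object ARNs, which is why both resource forms appear.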

Data Modeling & Architecture:

- Design and implement efficient data models and schemas for both transactional and analytical workloads, considering performance, scalability, and cost optimization.

- Contribute to the overall data architecture strategy, ensuring alignment with business requirements and future growth.

Programming & Scripting:

- Develop and optimize code in Python (primary) and/or Scala/Java for data processing, automation, and custom AWS Lambda functions.

- Write and optimize complex SQL queries for data extraction, manipulation, and analysis.
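A small self-contained sketch of the kind of analytical SQL involved (sqlite3 is used here purely so the example runs anywhere; on AWS the same query shape would target Redshift or Athena, and the table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id TEXT, region TEXT, revenue REAL);
    INSERT INTO orders VALUES
        ('A1', 'emea', 120.0),
        ('A2', 'emea', 80.0),
        ('A3', 'apac', 50.0);
""")

# Aggregate revenue per region and rank regions by total revenue --
# the shape of query a BI dashboard might need.
query = """
    SELECT region,
           SUM(revenue) AS total_revenue,
           RANK() OVER (ORDER BY SUM(revenue) DESC) AS revenue_rank
    FROM orders
    GROUP BY region
    ORDER BY revenue_rank;
"""
top = conn.execute(query).fetchall()
```

Combining `GROUP BY` aggregation with window functions like `RANK()` is a common pattern in warehouse queries; the window is evaluated over the grouped rows.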

Performance Optimization & Troubleshooting:

- Monitor, troubleshoot, and optimize data pipelines and data processing jobs to ensure high performance, reliability, and data quality.

- Identify and resolve performance bottlenecks in data loads, queries, and transformations.

Collaboration & Documentation:

- Collaborate closely with data scientists, data analysts, business intelligence developers, and other engineering teams to understand data requirements and deliver appropriate solutions.

- Create and maintain comprehensive technical documentation for data pipelines, data models, and AWS infrastructure.

- Participate in code reviews, design discussions, and knowledge sharing sessions.

DevOps & Automation:

- Work with CI/CD pipelines for deploying data solutions, leveraging tools like AWS CodeCommit, CodeBuild, CodePipeline, and Jenkins.

- Automate operational tasks related to data infrastructure and monitoring.

Qualifications:

- 5-10 years of professional experience in Data Engineering, with a significant focus on AWS data services.

- Proven hands-on experience in designing, building, and maintaining ETL/ELT pipelines on AWS.

- Strong proficiency in Python and SQL.

- In-depth knowledge and practical experience with AWS data services including (but not limited to) S3, Redshift, Glue, Lambda, Kinesis, Athena, RDS, DynamoDB, and IAM.

- Solid understanding of data warehousing concepts, dimensional modeling, and data lake architectures.

- Experience with Apache Spark (especially via AWS Glue or EMR) for big data processing.

- Familiarity with distributed computing principles.

- Excellent analytical, problem-solving, and debugging skills.

- Strong communication (written and verbal) and interpersonal skills, with the ability to explain complex technical concepts clearly.

- Ability to work effectively in a hybrid work environment (3 days from office in Hyderabad).

- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.

What We Are Looking For (Early Joiner):

- Candidates who are available to join immediately or within a short notice period (ideally 0-15 days).

- Self-starters who are proactive, adaptable, and eager to contribute from day one.

- Individuals with a strong sense of ownership and a commitment to delivering high-quality solutions.

