
AWS Data Engineer

Employer Active

1 Vacancy
Jobs by Experience

5-10 years

Job Location

India

Monthly Salary

Not Disclosed



Job Description

Position Overview
We are looking for an experienced AWS Data Engineer to design, develop, and maintain data solutions using core AWS services. The ideal candidate will have hands-on experience with tools like Amazon S3, Redshift, AWS Glue, and DynamoDB, and will build scalable, efficient, and secure data pipelines and architectures. The role also requires strong expertise in PySpark, SQL, and workflow orchestration tools such as Airflow.


Key Responsibilities


1. Data Pipeline Development
Develop and manage ETL/ELT workflows using AWS Glue and PySpark to process large datasets.
Automate data workflows using Apache Airflow and other orchestration tools.
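To make the pipeline work above concrete, here is a minimal sketch of one ETL step such a role involves. Plain Python stands in for the Glue/PySpark job so the example is self-contained; the schema, file content, and filter rule are illustrative assumptions, not taken from this posting.

```python
# Illustrative extract-transform-load step. In practice this logic
# would live in a Glue PySpark job reading from and writing to S3;
# the column names and "completed" filter below are hypothetical.
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    """Parse raw CSV (e.g. pulled from an S3 object) into rows."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Keep completed orders and normalize the amount column."""
    return [
        {"order_id": r["order_id"], "amount": float(r["amount"])}
        for r in rows
        if r["status"] == "completed"
    ]

def load(rows: list[dict]) -> str:
    """Serialize back to CSV, ready to write to the target store."""
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["order_id", "amount"])
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

raw = "order_id,status,amount\n1,completed,10.5\n2,pending,3.0\n3,completed,7.25\n"
result = load(transform(extract(raw)))
```

In an orchestrated setup, each of these functions would typically map to a task in an Airflow DAG so failures can be retried per step.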


2. Data Storage and Management
Architect and manage data storage in Amazon S3, ensuring performance, cost-efficiency, and security.
Create and optimize Amazon Redshift clusters for data warehousing and analytics workloads.
Design scalable NoSQL solutions using Amazon DynamoDB for real-time data needs.
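For the DynamoDB design work, a table definition of the kind boto3's `create_table` accepts can be sketched as a plain dict. The table name, attributes, and billing mode are hypothetical choices for illustration; no AWS call is made here.

```python
# Hypothetical single-table design for real-time order events.
# This dict matches the argument shape of boto3's DynamoDB
# create_table; names and billing mode are illustrative only.
table_spec = {
    "TableName": "order_events",
    "AttributeDefinitions": [
        {"AttributeName": "customer_id", "AttributeType": "S"},
        {"AttributeName": "event_ts", "AttributeType": "S"},
    ],
    "KeySchema": [
        # Partition key spreads load across customers...
        {"AttributeName": "customer_id", "KeyType": "HASH"},
        # ...sort key enables time-range queries per customer.
        {"AttributeName": "event_ts", "KeyType": "RANGE"},
    ],
    # On-demand billing avoids up-front capacity planning.
    "BillingMode": "PAY_PER_REQUEST",
}
```

In a real job this spec would be passed to `boto3.client("dynamodb").create_table(**table_spec)`.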


3. Compute and Serverless
Build and deploy serverless solutions using AWS Lambda for event-driven data processing.
Configure and manage virtual machine instances using Amazon EC2 for custom data processing tasks.
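An event-driven Lambda handler for S3 object-created notifications might look like the following sketch. The event shape follows the standard S3 notification format; the bucket, key, and the processing itself are stubbed assumptions.

```python
# Minimal sketch of a Lambda handler triggered by S3 object-created
# notifications. Real processing (e.g. starting a Glue job) is
# stubbed; the event shape follows the standard S3 record format.
def handler(event, context):
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Real code would read the object and kick off processing here.
        processed.append(f"s3://{bucket}/{key}")
    return {"status": "ok", "objects": processed}

# Sample event in the shape S3 delivers to Lambda (names hypothetical).
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "raw-data"},
                "object": {"key": "orders/2024/01.csv"}}}
    ]
}
result = handler(sample_event, context=None)
```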


4. Security and Monitoring
Implement fine-grained access controls with AWS IAM to ensure data security.
Set up monitoring, logging, and alerts using AWS CloudWatch for proactive system health management.
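Fine-grained access control of the kind described above is usually expressed as a least-privilege IAM policy document. A sketch as a Python dict, where the bucket name and prefix are hypothetical:

```python
import json

# Hypothetical least-privilege policy: read-only access to one
# prefix of one bucket, nothing else. Bucket ARN is illustrative.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-data-lake/raw/*",
        }
    ],
}
policy_json = json.dumps(policy, indent=2)
```

Attached to the pipeline's execution role, a policy like this ensures a compromised job can read only its own input prefix.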


5. Data Integration and Transformation
Create efficient SQL queries to handle data transformations and analytics.
Integrate and process structured and unstructured data from multiple sources.
Design data models and implement them in Redshift or DynamoDB for optimized query performance.
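The transformation-query side of the role can be illustrated with a small aggregation. sqlite3 is used here only so the sketch runs anywhere; the table and columns are hypothetical, and this simple GROUP BY is equally valid Redshift SQL.

```python
import sqlite3

# Hypothetical orders table; the GROUP BY below is the kind of
# transformation query the role involves. sqlite3 is used purely
# for portability of the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("north", 10.0), ("north", 5.0), ("south", 7.5)],
)
rows = conn.execute(
    "SELECT region, SUM(amount) AS total "
    "FROM orders GROUP BY region ORDER BY total DESC"
).fetchall()
```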


6. Collaboration and Optimization
Collaborate with data scientists, analysts, and stakeholders to gather requirements and deliver solutions.
Continuously optimize data processing workflows to improve performance and reduce costs.



Requirements

Required Skills and Qualifications



Core AWS Expertise
Strong knowledge of Amazon S3, Redshift, AWS Glue, Lambda, DynamoDB, EC2, IAM, and CloudWatch.




Technical Proficiency
Hands-on experience with PySpark for big data processing.
Proficient in writing complex SQL queries for data manipulation and analysis.
Expertise in workflow orchestration using Apache Airflow or similar tools.




Experience
4 years of experience in data engineering, with a focus on AWS technologies.
Experience in designing scalable, fault-tolerant data pipelines.
Familiarity with data modeling, ETL/ELT, and data warehousing principles.




Soft Skills
Strong problem-solving and analytical skills.
Excellent communication and collaboration abilities.
Ability to manage priorities in a fast-paced environment.




AWS, S3, Glue, Redshift, CloudWatch, SQL, PySpark

Employment Type

Full Time

Company Industry

Key Skills

  • Apache Hive
  • S3
  • Hadoop
  • Redshift
  • Spark
  • AWS
  • Apache Pig
  • NoSQL
  • Big Data
  • Data Warehouse
  • Kafka
  • Scala
