AWS Data Engineer

Jobs by Experience

5-10 years

Job Location

Hyderabad - India

Monthly Salary

Not Disclosed

Vacancy

1 Vacancy

Job Description

AWS Data Engineer

We are seeking a highly motivated AWS Data Engineer to join our team. In this role, you will be responsible for designing, developing, and implementing data pipelines that ingest, transform, and load data into our data warehouse and data lake on AWS.

Responsibilities:

  • Design, develop, and maintain scalable and efficient data pipelines for batch and real-time data sources, utilizing AWS services such as S3, Redshift, Glue, Lambda, DynamoDB, and Kinesis.
  • Build ETL/ELT pipelines to handle both structured and unstructured data.
  • Utilize Python and PySpark to process and transform large datasets.
  • Implement ETL (Extract, Transform, Load) processes using a combination of Informatica PowerCenter and custom-built Python scripts.
  • Build and maintain data models to optimize data storage, retrieval, and analysis.
  • Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and translate them into technical solutions.
  • Develop and implement unit tests and integration tests to ensure data quality and pipeline reliability.
  • Monitor and troubleshoot data pipelines to identify and resolve issues proactively.
  • Automate data pipeline deployment processes using AWS CodePipeline or similar tools.
  • Stay up to date on the latest trends and technologies in cloud data engineering.
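For candidates unfamiliar with the extract-transform-load pattern the responsibilities above revolve around, the sketch below illustrates it in plain Python. It is a minimal, self-contained stand-in: in the actual role, "extract" would read from sources such as S3 or Kinesis and "load" would write to Redshift or a data lake, typically via PySpark or Glue; the data and function names here are purely illustrative.

```python
import csv
import io
import json

# Illustrative raw input (stand-in for a file pulled from S3).
RAW_CSV = """order_id,amount,currency
1001,250.00,USD
1002,99.50,usd
1003,,USD
"""

def extract(raw: str) -> list:
    """Extract: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list) -> list:
    """Transform: drop incomplete records, cast types, normalize currency."""
    cleaned = []
    for row in rows:
        if not row["amount"]:
            continue  # skip rows missing an amount
        cleaned.append({
            "order_id": int(row["order_id"]),
            "amount": float(row["amount"]),
            "currency": row["currency"].upper(),
        })
    return cleaned

def load(rows: list) -> str:
    """Load: serialize to JSON lines (stand-in for a warehouse write)."""
    return "\n".join(json.dumps(r) for r in rows)

result = load(transform(extract(RAW_CSV)))
print(result)
```

The same three-stage shape carries over to PySpark jobs, where each stage becomes a DataFrame read, a chain of transformations, and a write to the target store.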

Previous Working Experience:

  • Experience building data ingestion (ETL/ELT) pipelines for various data sources.
  • Experience building data warehouses and data lakes.


Requirements


Qualifications:

  • 5-10 years of experience in data engineering or a related field.
  • Proven experience with AWS cloud services, including S3, Redshift, Glue, Lambda, DynamoDB, and Kinesis.
  • Proficiency in Python and PySpark for data processing and transformation.
  • Experience with ETL tools like Informatica PowerCenter.
  • Strong understanding of data modeling concepts and techniques.
  • Excellent problem-solving and analytical skills.
  • Experience working in a collaborative and fast-paced environment.
  • Excellent communication and interpersonal skills.

Bonus Points:

  • Experience with CI/CD pipelines for data engineering.
  • Experience with cloud security best practices.
  • Experience with data governance and compliance.
  • Experience with data visualization tools.


Benefits

  • Opportunity to work with cutting-edge technologies in a fast-paced environment.
  • Be part of a collaborative and supportive team.
  • Work on impactful projects that make a difference.




Employment Type

Full Time
