Responsible for designing, building, and maintaining data pipelines using AWS services to extract, transform, and load (ETL) data from various sources into a data warehouse or data lake, requiring expertise in AWS technologies such as Glue, S3, and Redshift.
Proficiency in Python and in Spark/PySpark to ensure efficient data processing and analysis within the cloud environment.
Architect and implement robust ETL pipelines using AWS Glue, defining data extraction methods, transformation logic, and data loading procedures across different data sources.
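For illustration only, a minimal sketch of such a Glue ETL job in PySpark; the catalog database (sales_db), table (raw_orders), and S3 target path are hypothetical placeholders, not names from this posting:

    import sys
    from awsglue.transforms import ApplyMapping
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Extract: read the source table registered in the Glue Data Catalog
    # (placeholder database/table names).
    source = glue_context.create_dynamic_frame.from_catalog(
        database="sales_db", table_name="raw_orders")

    # Transform: rename and cast columns with ApplyMapping.
    mapped = ApplyMapping.apply(
        frame=source,
        mappings=[("order_id", "string", "order_id", "string"),
                  ("amount", "string", "amount", "double")])

    # Load: write Parquet to a placeholder data-lake path on S3.
    glue_context.write_dynamic_frame.from_options(
        frame=mapped,
        connection_type="s3",
        connection_options={"path": "s3://example-lake/curated/orders/"},
        format="parquet")

    job.commit()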
Develop scripts to extract data from diverse sources such as databases, APIs, flat files, and mainframe applications, using AWS services such as S3, RDS, and Kinesis.
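A brief sketch of scripted extraction with boto3, assuming a flat file landed in S3 and an event stream in Kinesis; the bucket, key, stream, and shard names are illustrative placeholders:

    import boto3

    # Pull a flat file landed in S3 by an upstream (e.g., mainframe) feed.
    s3 = boto3.client("s3")
    obj = s3.get_object(Bucket="example-landing", Key="feeds/customers.csv")
    rows = obj["Body"].read().decode("utf-8").splitlines()

    # Read a batch of streaming records from a Kinesis shard.
    kinesis = boto3.client("kinesis")
    shard_it = kinesis.get_shard_iterator(
        StreamName="example-events",
        ShardId="shardId-000000000000",
        ShardIteratorType="TRIM_HORIZON")["ShardIterator"]
    records = kinesis.get_records(ShardIterator=shard_it, Limit=100)["Records"]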
AWS ETL and Oracle data migration experience (specific to Exadata)
AWS DMS for delta (change data capture) workloads; a task-definition sketch follows this list
AWS EFS and S3
AWS RDS for Oracle
Experience in handling compressed tables on Oracle Exadata
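As referenced above for delta workloads, a sketch of defining a DMS change-data-capture (CDC) task with boto3; the ARNs, task identifier, and schema name are all hypothetical:

    import json
    import boto3

    dms = boto3.client("dms")
    dms.create_replication_task(
        ReplicationTaskIdentifier="orders-cdc-task",
        SourceEndpointArn="arn:aws:dms:...:endpoint:source-oracle",
        TargetEndpointArn="arn:aws:dms:...:endpoint:target-redshift",
        ReplicationInstanceArn="arn:aws:dms:...:rep:example-instance",
        MigrationType="cdc",  # replicate only ongoing changes (deltas)
        TableMappings=json.dumps({
            "rules": [{
                "rule-type": "selection", "rule-id": "1", "rule-name": "1",
                "object-locator": {"schema-name": "SALES", "table-name": "%"},
                "rule-action": "include"}]}))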