Design, build, and maintain robust and scalable ETL (Extract, Transform, Load) pipelines for collecting, processing, and transforming large datasets.
Implement, configure, and optimize database architectures, including relational (MySQL, PostgreSQL) and NoSQL (MongoDB, Cassandra) databases.
Monitor database performance, troubleshoot issues, and tune SQL queries for optimal efficiency.
Ensure data integrity, accuracy, and consistency across all database systems through regular maintenance and performance checks.
Integrate data from various sources (APIs, flat files, streaming, etc.) to create a unified and efficient data ecosystem.
Automate database backup, restoration, and recovery operations to ensure data availability.
Develop and maintain data warehouse solutions to support analytics, reporting, and business intelligence needs.
Implement data governance policies to manage data security, privacy, and compliance across all systems.
Work closely with data analysts, data scientists, and business stakeholders to understand data requirements and deliver tailored solutions that align with business objectives.
Automate manual processes for data extraction and transformation, and optimize data pipelines for performance and scalability.
Implement database replication, archiving, and table partitioning strategies to enhance performance and manageability.
Maintain detailed technical documentation on data architectures, data flows, database configurations, and system processes.
Identify and resolve database-related issues and performance bottlenecks, providing technical support to application developers and users as needed.
Requirements:
Minimum 5-7 years of proven experience as a Data Engineer with DBA expertise.
Hands-on experience in building and managing ETL processes and data pipelines.
Familiarity with big data tools such as Hadoop, Spark, or Kafka.
Experience with cloud platforms (AWS, Azure, or GCP) for data storage and processing.
Strong knowledge of data visualization tools such as Power BI, Power Apps, or Tableau.
Proficient in SQL, R, Python, and other data-processing languages.
Strong knowledge of database management, including relational (MySQL, PostgreSQL, etc.) and NoSQL (MongoDB, Cassandra) databases.
Familiarity with database backup and recovery strategies, security practices, and compliance regulations.
Experience with database replication, archiving, table partitioning, RDS, Unix, and DBMS administration.
Experience with data warehousing solutions like Amazon Redshift, Google BigQuery, or Snowflake.
Knowledge of data modeling, schema design, and data versioning.
Experience with workflow automation tools like Airflow or Luigi.