Big Data: Spark, HDFS, Kafka, AWS, Azure, GCP, ETL, Hadoop, Hive, etc.
Strong experience in programming languages such as Python, Java, or Scala for data manipulation and engineering tasks.
Expertise in SQL and NoSQL databases.
Hands-on experience with big data technologies like Hadoop, Spark, Kafka, and Hive to handle large-scale data processing and real-time data streams.
In-depth knowledge of data warehousing solutions such as Amazon Redshift, Google BigQuery, and Snowflake for building and managing data warehouses.
Proficiency in designing, developing, and maintaining ETL (Extract, Transform, Load) processes using tools like Apache NiFi, Talend, or Informatica.
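As a minimal illustration of the ETL pattern referenced above, the sketch below extracts rows from CSV text, transforms them, and loads them into SQLite using only the Python standard library; the data, table name, and schema are hypothetical, and production pipelines would use a tool like Apache NiFi, Talend, or Informatica instead.

```python
import csv
import io
import sqlite3

# Illustrative source data; a real pipeline would read from files,
# APIs, or message queues such as Kafka.
raw_csv = "name,amount\nalice,10\nbob,25\n"

# Extract: parse the CSV text into dictionaries.
rows = list(csv.DictReader(io.StringIO(raw_csv)))

# Transform: normalize names and cast amounts to integers.
records = [(r["name"].title(), int(r["amount"])) for r in rows]

# Load: insert the cleaned records into a target table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (name TEXT, amount INTEGER)")
conn.executemany("INSERT INTO payments VALUES (?, ?)", records)

total = conn.execute("SELECT SUM(amount) FROM payments").fetchone()[0]
print(total)  # 35
```

The same extract/transform/load stages map directly onto the visual flow designers in the ETL tools named above.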
Familiarity with cloud platforms like AWS, Azure, or Google Cloud for deploying, managing, and scaling data infrastructure and services.
Strong understanding of data modeling concepts and techniques to create efficient and scalable data models.
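One common data-modeling technique in warehousing is the star schema, where a fact table references dimension tables. The sketch below builds a toy star schema in SQLite; all table and column names are hypothetical, and a real warehouse (Redshift, BigQuery, Snowflake) would use its own DDL dialect.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# A dimension table holds descriptive attributes; a fact table holds
# measurable events keyed to its dimensions.
conn.executescript("""
CREATE TABLE dim_customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT
);
CREATE TABLE fact_sales (
    sale_id     INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES dim_customer(customer_id),
    amount      REAL
);
""")
conn.execute("INSERT INTO dim_customer VALUES (1, 'Alice')")
conn.execute("INSERT INTO fact_sales VALUES (100, 1, 9.99)")

# Analytical queries join facts back to their dimensions.
row = conn.execute("""
    SELECT c.name, f.amount
    FROM fact_sales f JOIN dim_customer c USING (customer_id)
""").fetchone()
print(row)  # ('Alice', 9.99)
```

Keeping measures in narrow fact tables and attributes in dimensions is what makes such models efficient to scan and scalable to extend.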
Experience with version control systems such as Git for code management and collaboration.
Knowledge of data governance, data quality standards, and data security practices to ensure compliance and protection of sensitive information.