As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements using Python. Your typical day will involve working with Oracle Procedural Language Extensions to SQL (PL/SQL) and collaborating with cross-functional teams to deliver impactful, data-driven solutions.

Key Responsibilities:
- Work on client projects to deliver AWS, PySpark, and Databricks-based data engineering and analytics solutions.
- Build and operate very large data warehouses and data lakes.
- Optimize ETL workloads: design, code, and tune big data processes using Apache Spark.
- Build data pipelines and applications to stream and process datasets at low latencies.
- Handle data efficiently: track data lineage, ensure data quality, and improve the discoverability of data.

Technical Experience:
- Minimum of 1 year of experience delivering Databricks engineering solutions on the AWS Cloud platform using PySpark.
- Minimum of 3 years of experience in ETL, Big Data/Hadoop, and data warehouse architecture and delivery.
- Minimum of 2 years of experience in one or more programming languages: Python, Java, or Scala.
- Experience using Airflow for data pipelines in at least 1 project.
- 1 year of experience developing CI/CD pipelines using Git, Jenkins, Docker, Kubernetes, shell scripting, and Terraform.

Candidates must be willing to work in B shift. This position is based at our Hyderabad office.