This role is for one of Weekdays' clients.
As a Backend Developer focusing on Data Science, you will be responsible for building, optimizing, and maintaining server-side applications that support data processing and machine learning models. You will collaborate closely with data scientists, data engineers, and other developers to create efficient data pipelines, scalable APIs, and robust backend services that power data-driven applications.
Key Responsibilities:
- Backend Development: Design, develop, and maintain scalable backend systems, services, and APIs that handle large datasets and machine learning models.
- Data Pipeline Integration: Collaborate with data scientists to integrate machine learning models and to process large volumes of structured and unstructured data efficiently.
- API Development: Develop RESTful APIs to expose data science models and data-driven services to internal and external applications.
- Database Management: Design and optimize database structures for storing and querying large datasets, ensuring high performance and scalability.
- Collaboration: Work closely with data engineers, data scientists, and frontend developers to deliver end-to-end data solutions.
- Performance Tuning: Identify bottlenecks and implement solutions for high performance and low latency in data processing systems.
- Security: Implement security best practices to protect sensitive data and ensure compliance with relevant regulations.
Required Skills and Qualifications:
- Programming Languages: Proficiency in languages such as Python, Java, or Node.js.
- Data Science Integration: Familiarity with integrating data science libraries such as TensorFlow, scikit-learn, or PyTorch.
- API Development: Experience building and consuming RESTful APIs, and with microservices architecture.
- Database Management: Strong knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
- Data Pipeline Tools: Experience with data processing tools and frameworks such as Apache Kafka, Spark, or Hadoop.
- Cloud Platforms: Experience with cloud environments such as AWS, GCP, or Azure for deploying data-centric applications.
- Version Control: Proficiency with version control systems like Git.
Preferred Skills:
- Familiarity with containerization (Docker) and orchestration (Kubernetes).
- Experience with data streaming tools such as Apache Flink or Storm.
- Strong understanding of the machine learning lifecycle and model deployment.
- Familiarity with data governance, privacy laws, and data security best practices.