We’re a young startup providing computer vision (with deep learning) services for agriculture. Our first products are smartphone-based alternatives to the expensive hardware analyzers used for cereal quality measurement.
After a successful proof of concept in 2018, which allowed us to acquire our first customers, we are now expanding to more use cases and more clients, both in France and abroad. Over the last 12 months we have grown from a team of 2 to 12+, analysed tens of thousands of tons of cereal, and won BPIFrance’s famous startup prize: iLab.
About the product
Our client built a Deep Learning platform to provide prediction services for agriculture at scale. The platform is backed by several bleeding-edge technologies such as Kubeflow and ArgoCD, running on Kubernetes clusters hosted on AWS and GCP.
Those frameworks are integrated into a custom-built platform which allows the company to deliver production Deep Learning at scale, while empowering a very dynamic Data Science team which works on many product candidates and constantly improves and refreshes existing ones. Unlike more classic vision applications, building products for agriculture is particularly challenging because crops vary widely over time and across geographic zones.
In this respect, the company has specialized in delivering not just static AI products, but in putting refresh processes in place so we constantly catch up with Mother Nature’s variability.
About the position
As our second DevOps / MLOps engineer, you will be in charge of the usual story: cloud-based black magic, CI/CD secret weapons, and measuring & dashboarding to navigate our digital spaceships. Yet it will be in the largely uncharted and dynamic territory of production Deep Learning, with its specific challenges.
You’ll be on the lookout for new technologies that provide reliable, cost-effective, fast, and scalable solutions for our GPU-enabled, client-serving ML applications and our data-hungry model-training workflows. Our business also requires constant model refreshment and deployment, so CI/CD automation and self-service are crucial.
You’ll fight the ML – Dev gap: you’ll work hand in hand with the Data Science team to understand their workloads and their outputs (model weights and other training artifacts, predictions, masks, images, etc.) and find, adapt, or create best-in-class tools to leverage their work. You’ll also play an active role in teaching them both the basics and the latest in cloud and DevOps, along with providing support for their day-to-day work.
Our client works with Nature, and by Nature’s cycles. During the summer, harvests are key moments when our clients often work around the clock to gather a year’s worth of food and drink. You’ll sometimes be on call during this period to allow a prompt reaction if something goes wrong.
As you have now understood, this job is as challenging as it is rewarding. We don’t expect you to know everything already, and as the company evolves, the position will too. You’ll have the opportunity to learn a lot and teach us a lot too. You’ll be managed by the Head of Software Engineering, and will work in very close collaboration with the Data Science team and, obviously, with our current DevOps / MLOps.
About the stack
We don’t really need sentences here, do we?
K8s, Kubeflow, Helm, Terraform, ArgoCD, Argo Workflows, Prometheus, Grafana, Datadog, NodeJS, Python, Flask, Pub/Sub, Filestore, S3, GCS, PostgreSQL, GitHub, Slack
- Be in charge of creating, maintaining, and updating our cloud infrastructure
- Enable rapid iterations with efficient and easy-to-use (i.e. automatable) CI/CD tools and processes
- Put in place, maintain and improve extensive analytics, monitoring and alerting systems
- Teach the tech team and provide constant support on our tools and processes
- Dig into the latest tech and experiment with it to find production-worthy tools to improve our spaceships, in particular for the specificities of ML / Deep Learning / vision workloads
- Plan for bad things, and when our plans fail, investigate promptly and deeply to find the least damaging fix while learning from our failures to be even more resilient later.
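In practice, bullets like the CI/CD one above often come down to small, testable pieces of automation. As a purely illustrative sketch (the artifact naming convention and helper function are hypothetical, not our actual pipeline), here is the kind of glue code that picks the newest model artifact for deployment:

```python
from datetime import datetime

def latest_model(artifact_keys):
    """Pick the most recent model artifact from a list of object-store keys.

    Keys are assumed (hypothetically) to follow the convention:
    models/<product>/<YYYY-MM-DD>/model.h5
    """
    def key_date(key):
        # Extract the date segment of the key and parse it.
        date_part = key.split("/")[2]
        return datetime.strptime(date_part, "%Y-%m-%d")

    # The newest artifact is the one with the latest date segment.
    return max(artifact_keys, key=key_date)

keys = [
    "models/wheat-quality/2021-05-01/model.h5",
    "models/wheat-quality/2021-07-15/model.h5",
    "models/wheat-quality/2021-06-30/model.h5",
]
print(latest_model(keys))  # models/wheat-quality/2021-07-15/model.h5
```

In a real setup, a snippet like this would sit inside a CI/CD step that resolves which model version a serving deployment should roll out.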
Requirements and qualifications
- 2+ years of work experience as a DevOps / SRE / MLOps engineer or similar
- Experience with Kubernetes in production
- Work experience with AWS or GCP
- Work experience with a CI/CD toolchain
- Docker, docker-compose
- English (very good written English and good spoken English)
- Love to learn, teach and exchange, even at the boundary of your expertise
- Rigorous, proactive, autonomous, reliable