August 27, 2022

User-friendly, Wilderness-proof MLOps

Developing a CI/CD pipeline that automatically retrains the existing poacher-detection model so it can be redeployed to the drone with improved performance.

The aim of the CI/CD team was to develop a functioning CI/CD pipeline that automatically processes new data to retrain an existing model, which can then be deployed to the SPOTS drone with improved performance.

In order to do so, we established the following subgoals:

  • Create a CI/CD pipeline to automate the building, testing and deployment of the code used to train the model.
  • Create an MLOps pipeline to run the complete machine learning workflow - fetching data, preprocessing it, training the model, evaluating, optimizing, and deploying it - using code deployed from GitLab.
  • Allow a user to trigger the MLOps pipeline manually with a simple button click, if possible.
  • Make the pipeline abstract and loosely coupled with the infrastructure it runs on, so that it can be transferred from one cloud service provider to another, or to an on-premise set-up, with minimal overhead (a sketch of this decoupling follows this list).
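To illustrate that last subgoal, each component can resolve provider-specific values from its environment instead of hard-coding them; moving to another cloud or on-premise then only means changing deployment configuration. The variable names and defaults below are hypothetical, not taken from the actual codebase:

```python
import os

# Hypothetical configuration: components resolve their registry and model
# store from environment variables, so switching providers (IBM Cloud,
# another cloud, on-premise) only means changing the deployment config.
REGISTRY_URL = os.environ.get("REGISTRY_URL", "us.icr.io/spots")
MODEL_STORE = os.environ.get("MODEL_STORE", "gdrive://spots-models")

def resolve_image(component: str, tag: str = "latest") -> str:
    """Build the full image reference for a component from the configured registry."""
    return f"{REGISTRY_URL}/{component}:{tag}"

print(resolve_image("data-preprocessing"))  # -> us.icr.io/spots/data-preprocessing:latest
```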

We started off with research into the available technologies and chose GitLab for the CI/CD pipeline. GitLab is also where we stored the code and converted it into Docker images. These images were stored in IBM Cloud Container Registry, from which the components in Kubeflow pull them. Together, these components form the machine learning pipeline that fetches the data, preprocesses it, trains and evaluates the model, and then stores the model in Google Drive.

“The challenge not only improved my machine learning and coding skills but led to my growth as an individual. I learned how to better work with people, to collaborate with team members from all over the world in a virtual environment, all that while balancing other areas of my life. I’m glad I could contribute to saving endangered species.” - Barbra Apilli, CI/CD Team

Easy-to-use MLOps Pipeline

After thorough consideration of how the user will interact with the system in the future, we created the following workflow:

  1. The user first uploads new data to Google Drive.
  2. The user then starts a new run of the pipeline from the Kubeflow dashboard (a programmatic way to trigger a run is sketched after this list).
  3. After the run, code in the pipeline uploads the model to Google Drive.
  4. From there, the model is fetched and uploaded onto the drone.
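Step 2 is a manual action in the Kubeflow dashboard. The same run could also be triggered programmatically with the kfp SDK, which is one route towards the one-click trigger mentioned in the subgoals; the host URL, package name, and parameter below are placeholders:

```python
import kfp

# Connect to the Kubeflow Pipelines API; the host URL is a placeholder.
client = kfp.Client(host="https://<kubeflow-host>/pipeline")

# Start a run from a compiled pipeline package. The package name and the
# pipeline parameter are assumptions for illustration.
result = client.create_run_from_pipeline_package(
    "spots_retraining_pipeline.yaml",
    arguments={"drive_folder_id": "<new-data-folder-id>"},
    run_name="retrain-on-new-data",
)
print(f"Started run {result.run_id}")
```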

The detailed workflow showing how the different technologies interact with each other 
*When a new run is started, code from GitLab is converted into Docker images by a process known as containerisation. Code for the different machine learning components resides in individual branches of the GitLab repository. Each branch contains code informing Kubeflow about the pipeline structure and the paths to the inputs and outputs of each component. This code is known as a pipeline definition. Any modification of the code triggers the CI/CD pipeline to generate a new image with the updates. This image is then pushed to IBM Cloud Container Registry.*
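To make this concrete, a pipeline definition might look roughly like the sketch below, written with the Kubeflow Pipelines (kfp v1) SDK. The component names and image paths are assumptions, and for brevity the sketch only declares execution order, assuming the components exchange data through shared storage:

```python
import kfp
from kfp import dsl

REGISTRY = "us.icr.io/spots"  # illustrative IBM Cloud Container Registry namespace

def component(name, arguments=None):
    """Wrap a Docker image from the registry as one pipeline step."""
    return dsl.ContainerOp(
        name=name,
        image=f"{REGISTRY}/{name}:latest",
        arguments=arguments or [],
    )

@dsl.pipeline(name="spots-retraining",
              description="Retrain the poacher-detection model on newly uploaded data.")
def spots_pipeline(drive_folder_id: str = ""):
    # .after() declares the execution order Kubeflow must respect.
    retrieval = component("data-retrieval", arguments=["--folder-id", drive_folder_id])
    preprocess = component("data-preprocessing").after(retrieval)
    training = component("model-training").after(preprocess)
    component("model-deployment").after(training)

# Compiling the definition produces the package a new run is started from.
kfp.compiler.Compiler().compile(spots_pipeline, "spots_retraining_pipeline.yaml")
```

Because each component is just a Docker image, the same definition can run on any Kubeflow deployment, cloud or on-premise.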

The MLOps pipeline in Kubeflow retrieves these images and runs the code in the four machine learning workflow components:

  • *Data Retrieval* contains the code that fetches the newly uploaded data from Google Drive.
  • *Data Preprocessing* handles the separation of the uploaded data into the different datasets used in training the model: a training, a validation, and a testing dataset. Within this component:
    • The data is cleaned: grayed and blurry images (presumed to be large bodies of water) are removed, since they don’t contain relevant information.
    • The cleaned data is augmented.
    • The directory structure of the datasets is then modified to fit the YOLOv5 models.
  • *Model Training.* The preprocessed training dataset is used to train the model. The training code is obtained from the official Ultralytics YOLOv5 Docker image.
  • *Model Deployment.* When the pipeline run has finished, the trained model is pushed to Google Drive (a sketch of this step follows this list).
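As an illustration of the *Model Deployment* step, the sketch below pushes trained weights to Google Drive with the google-api-python-client library. The service-account key file, folder ID, and weights path are placeholders rather than the project’s actual values:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

# Authenticate with a service account (the key file path is a placeholder).
creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/drive"],
)
drive = build("drive", "v3", credentials=creds)

# Upload the trained YOLOv5 weights into a hypothetical models folder.
# runs/train/exp/weights/best.pt is YOLOv5's default output location.
metadata = {"name": "best.pt", "parents": ["<models-folder-id>"]}
media = MediaFileUpload("runs/train/exp/weights/best.pt", resumable=True)
uploaded = drive.files().create(body=metadata, media_body=media, fields="id").execute()
print(f"Uploaded model as Drive file {uploaded['id']}")
```

The *Data Retrieval* component would do the reverse, downloading newly uploaded data with the same client.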

Dataset samples - a ‘grayed’ image on the left and an image with relevant information on the right
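The cleaning step above could be approximated with two simple heuristics: low colour saturation suggests a ‘grayed’ image, and low variance of the Laplacian suggests blur. The thresholds in this sketch are illustrative guesses, not the values the team used:

```python
import cv2

def is_irrelevant(path, sat_thresh=15.0, blur_thresh=100.0):
    """Flag images that are likely 'grayed' (low colour saturation)
    or blurry (low variance of the Laplacian)."""
    image = cv2.imread(path)
    if image is None:
        return True  # unreadable files are dropped as well
    saturation = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)[:, :, 1].mean()
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return saturation < sat_thresh or sharpness < blur_thresh
```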
The Kubeflow framework can be deployed on all major cloud platforms as well as on-premise. The cloud platform we used was IBM Cloud Kubernetes Service.

The Follow-up

The biggest challenge for us was to grasp the GitLab CI/CD and Kubeflow pipelines and to ensure that all the technologies worked well with each other. We were able to create the pipeline, yet there is still room for improvement. 

We’ve outlined the follow-up path for the next CI/CD team to start with:  

  1. Implement model evaluation and create tests for the different components.
  2. Fetch an available pre-trained model from Google Drive to start a new run.
  3. Automate the manual processes within the pipeline so non-technical users don’t have to worry about picking up the tech.
  4. Ensure a safe and secure storage of credentials used to run the pipeline.
  5. Create a web application to trigger the machine learning workflow (a minimal sketch follows this list).
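For that last item, the web application could start as small as a single endpoint that kicks off a pipeline run. A hypothetical Flask sketch, reusing the placeholder names from the earlier kfp example:

```python
import kfp
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/retrain", methods=["POST"])
def retrain():
    # Host and package name are placeholders; see the earlier kfp sketch.
    client = kfp.Client(host="https://<kubeflow-host>/pipeline")
    result = client.create_run_from_pipeline_package(
        "spots_retraining_pipeline.yaml",
        arguments={"drive_folder_id": "<new-data-folder-id>"},
        run_name="web-triggered-retraining",
    )
    return jsonify({"run_id": result.run_id})

if __name__ == "__main__":
    app.run(port=8080)
```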

*CI/CD Team*

Aisha Kala, Mayur Ranchod, Sabelo Makhanya, Rowanne Trapmann, Ethan Kraus, Barbra Apilli, Samantha Biegel, Adrian Azoitei

AI for Wildlife Engineers

Model Optimization: Sahil Chachra, Manu Chauhan, Jaka Cikač, Sabelo Makhanya, Sinan Robillard

CI/CD: Aisha Kala, Mayur Ranchod, Sabelo Makhanya, Rowanne Trapmann, Ethan Kraus, Barbra Apilli, Samantha Biegel, Adrian Azoitei

Hardware: Kamalen Reddy, Estine Clasen, Maks Kulicki, Thembinkosi Malefo, Michael Bernhardt

Autonomous Flight: Thanasis Trantas, Emile Dhifallah, Nima Negarandeh, Gerson Foks, Ryan Wolf
