February 21, 2024

Saving Marine Ecosystems with Artificial Intelligence

Discover how the AI for Coral Reefs Challenge enhances coral reef monitoring through advanced technology, offering hope for marine ecosystem preservation.

By developing an image segmentation pipeline using state-of-the-art AI models, the challenge focused on rapidly analyzing underwater imagery to provide insights into coral reef health. This approach aimed to reduce the manual labor involved in data processing, offering a quicker, more accurate understanding of coral conditions globally. To give an idea of how much manual labor is currently involved: a four-hour dive photographing one hectare of coral results in 40 hours of image labeling.

The use of AI in coral reef monitoring is a game-changer, offering precise, timely insights that can guide conservation efforts effectively. This technology enables targeted interventions, ensuring resources are allocated where they're needed most, ultimately contributing to the global effort to protect these vital ecosystems. 

How did we tackle this Challenge?

The challenge adopted a dual approach. Given the availability of two distinct types of datasets, dense segmentation masks and sparse point labels, the participants were initially split into two groups, Supervised Learning and Unsupervised Learning, to address the problem statement with differing methodologies. The Supervised Learning group was subdivided further to capitalize on a diverse array of cutting-edge segmentation models, including You Only Look Once V8 (YOLOv8), the Segment Anything Model (SAM), and Mask R-CNN.

This blog focuses on the research and experimentation conducted with SAM and YOLOv8, as these models proved to perform the best. Given the commonalities in the model training process for both dense and sparse inputs, members from both the Supervised Learning and Unsupervised Learning groups collaborated to execute experiments related to fine-tuning.

The following objectives were set for the subgroup working with supervised learning:

  1. Train and fine-tune a YOLOv8 model for detection and segmentation.
  2. Fine-tune SAM using dense segmentation masks to achieve semantic segmentation of hard coral and soft coral.
  3. Fine-tune SAM using sparse point labels to achieve semantic segmentation of hard coral and soft coral.
  4. Employ the object detection capability of YOLOv8 to provide SAM with bounding boxes, facilitating accurate segmentation of hard coral and soft coral.
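The fourth objective chains the two models: YOLOv8 proposes class-labelled bounding boxes, and SAM turns each box into a mask. Below is a minimal sketch of the merging step only, with a stub segmenter standing in for SAM; the function and variable names are illustrative, not the challenge's actual code.

```python
def merge_box_masks(boxes, segment_fn, height, width):
    """Combine per-box binary masks into one semantic mask.

    boxes: list of (x1, y1, x2, y2, class_id) tuples, e.g. from a
    YOLOv8 detector. segment_fn(box) returns a binary mask (list of
    rows) for that box, e.g. a SAM prediction prompted with the box.
    Pixels claimed twice keep the later box's label (a simple tie-break).
    """
    semantic = [[0] * width for _ in range(height)]  # 0 = background
    for (x1, y1, x2, y2, cls) in boxes:
        mask = segment_fn((x1, y1, x2, y2))
        for y in range(height):
            for x in range(width):
                if mask[y][x]:
                    semantic[y][x] = cls
    return semantic

# Stub segmenter: fills the whole box (SAM would return a tighter mask).
def box_fill(box):
    x1, y1, x2, y2 = box
    return [[1 if x1 <= x < x2 and y1 <= y < y2 else 0
             for x in range(8)] for y in range(8)]

boxes = [(0, 0, 4, 4, 1),   # class 1: hard coral
         (4, 4, 8, 8, 2)]   # class 2: soft coral
mask = merge_box_masks(boxes, box_fill, 8, 8)
```

In the real pipeline, `segment_fn` would wrap a SAM predictor prompted with each box, but the merging logic stays the same.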

The unsupervised approach focused on developing a label propagator that uses point labels to create segmentation masks. The results of using this label propagator can be a starting point for the labeling process, which would only require some human refinement. This would significantly ease the labeling process in marine research, saving hours of work.
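The simplest form of such a propagator assigns every pixel the label of its nearest annotated point. The sketch below shows that baseline idea in pure Python; real methods such as Point Label Aware Superpixels additionally respect image content and boundaries, which this toy version ignores entirely.

```python
def propagate_labels(points, height, width):
    """Expand sparse point labels into a dense mask.

    points: list of (y, x, label) annotations. Each pixel takes the
    label of the nearest labelled point (squared Euclidean distance);
    superpixel-based propagators refine this using the image itself.
    """
    mask = []
    for y in range(height):
        row = []
        for x in range(width):
            _, label = min(((py - y) ** 2 + (px - x) ** 2, lab)
                           for py, px, lab in points)
            row.append(label)
        mask.append(row)
    return mask

# Two point labels on an 8x8 grid: hard coral (1) and soft coral (2).
points = [(1, 1, 1), (6, 6, 2)]
dense = propagate_labels(points, 8, 8)
```

A human annotator could then correct the propagated mask instead of labeling every pixel from scratch.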

The Dataset

Reef Support has curated a diverse array of datasets encompassing coral reef ecosystems worldwide, incorporating two widely utilized datasets in coral reef research: Seaview and CoralSeg. 

Furthermore, the Reef Support team meticulously crafted dense segmentation masks for one-third of the Seaview dataset, ensuring adequate coverage across all biological realms of coral reefs.

Sample image along with the corresponding point labels and dense segmentation mask that can be used for training and/or evaluation of models.
Some examples of images from the different datasets.

The challenge led to several significant milestones. Despite encountering challenges such as data leakage and dataset quality, the teams persevered, refining their methodologies and enhancing model performance.


The best model turned out to be YOLOv8l. The table below summarizes the performance of the different YOLOv8 models, all trained on the same training set and evaluated on the same test set.

The top-performing model is the large (l) variant, as indicated in the table above. As the model size decreases, there is a slight degradation in performance, from an mIoU of 0.85 to 0.83. However, the advantage of smaller models lies in their faster execution and compatibility with smaller hardware devices.
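The mIoU scores compared throughout this section are the intersection-over-union of predicted and ground-truth masks, averaged over the classes present. A minimal per-class computation looks like this (an illustrative sketch, not the challenge's evaluation code):

```python
def mean_iou(pred, truth, classes):
    """Mean intersection-over-union over the given class ids.

    pred and truth are equal-length flat lists of per-pixel class ids.
    Classes absent from both masks are skipped so they don't drag the
    average down.
    """
    ious = []
    for c in classes:
        inter = sum(p == c and t == c for p, t in zip(pred, truth))
        union = sum(p == c or t == c for p, t in zip(pred, truth))
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy 6-pixel example with hard coral (1) and soft coral (2).
pred  = [1, 1, 2, 2, 0, 0]
truth = [1, 2, 2, 2, 0, 0]
score = mean_iou(pred, truth, classes=[1, 2])
```

Here class 1 scores 1/2 and class 2 scores 2/3, so the mean is 7/12.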

While the results presented in the table may seem exceptionally favorable for this computer vision task given the dataset, it is important to note that performance varies significantly across different regions. The average results summarized in the table above should be interpreted cautiously.

The subsequent table provides a summary of the performance of the model on the test sets for each region:

For a qualitative assessment of the model’s performance, a random sample of images was drawn from the test set. This enables a direct comparison between the ground truth masks and the predicted masks. 

The predictions presented here were generated using the large (l) model.

In comparison, the SAM models reached a mean IoU of 76.05%.
The unsupervised learning team's innovative approaches, like Point Label Aware Superpixels, achieved an mIoU of 55%.

Looking Ahead: The Future of AI in Marine Conservation

The AI for Coral Reefs Challenge has set a precedent for the use of AI in environmental science, offering promising new directions for coral reef preservation. The collaborative effort of the global AI community has not only advanced our understanding of coral ecosystems but also paved the way for future innovations in marine conservation.

Big thank you to everyone who made this Challenge such a great success!

Arthur Caillau, Bart Emons, Bogumila Soroka, Cas Rooijackers, Icxa Khandelwal, Julieta Millán, Laurens Potiau, Leo Hyams, Masum Patel, Pierre Le Roux, Shadi Andishmand, Sohane Le Roux, Sumit Sakarkar, Thomas Burger, Timo Scheidel, Mohsen Nabil, Sonny Burniston, Joanne Lijbers, Ponniah Kameswaran & of course Yohan Runhaar!

Tags: AI for Wildlife, Artificial Intelligence, Challenge results, Computer vision, Deep Learning, Object detection