Thursday, February 1, 2024
AI for Good
AI for Wildlife
Artificial Intelligence
Community story
Computer vision
Deep Learning
Object detection
Segmentation
Challenge results

Tracking Turtles: How AI helps conservationists to re-identify sea turtles

Discover how AI is transforming sea turtle conservation, enabling accurate re-identification with non-intrusive methods. This collaboration with Sea Turtle Conservation Bonaire uses AI to match sea turtle photos against a database, improving our understanding of migratory patterns and aiding preservation. The solution combines turtle face detection, feature extraction, and matching, all of which proved pivotal for habitat preservation efforts.

Beneath the surface of the ocean, amid the silent ballet of currents, lies a world where sea turtles have lived for millions of years. Today, seven species remain, six of which are listed as endangered or threatened. These ancient wanderers are migratory, so conservationists around the world track these magnificent creatures. By studying their migratory patterns, we gain insights crucial for habitat preservation and the mitigation of potential risks.

Among the organizations engaged in this crucial work is Sea Turtle Conservation Bonaire (STCB). Over the years, they have amassed data on over 3,000 sea turtles in the Caribbean through photography and tagging. Given the limitations of manual searches in a growing database, and a preference for non-intrusive methods, STCB sought alternative solutions for re-identifying captured turtles. This led to a collaboration with Fruitpunch AI on the challenge of developing a time-efficient, non-intrusive method for sea turtle re-identification. Challenge accepted!

The Challenge

The primary objective of this challenge was to develop an application enabling users to upload a turtle photo and receive the top 5 matches from our turtle database, accompanied by the probability of encountering an entirely new turtle. To achieve this, the team was divided into three units: one dedicated to face detection, another focusing on feature extraction, and the third dedicated to building the application.

Turtle Face Detection, Segmentation and Rotation

The detection system serves as a pre-processing step for the re-identification algorithm. It locates the turtle in the image, removes distracting background pixels, and aligns the turtle's head. As a result, the images sent to the final recognition system always contain the turtle's head at a fixed size, horizontally aligned in the center of the image. This lets the re-identification system focus on its core task: recognizing which turtle this is.
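The fixed-size, centered normalization described above can be sketched as a simple crop-and-resize step. The function name, box format, and output size below are illustrative assumptions, not STCB's actual code; a production pipeline would use bilinear interpolation rather than nearest-neighbour indexing.

```python
import numpy as np

def normalize_head_crop(image: np.ndarray, box: tuple, out_size: int = 224) -> np.ndarray:
    """Crop a detected head box out of `image` and resize it to a fixed
    square, so every image fed to re-identification shows the head at
    the same scale and position. Nearest-neighbour resize keeps this
    sketch dependency-free."""
    x0, y0, x1, y1 = box
    crop = image[y0:y1, x0:x1]
    # Nearest-neighbour index maps from the output grid to the crop grid.
    rows = (np.arange(out_size) * crop.shape[0] / out_size).astype(int)
    cols = (np.arange(out_size) * crop.shape[1] / out_size).astype(int)
    return crop[rows][:, cols]

# Example: a synthetic 100x150 grayscale image with a bright "head" region.
img = np.zeros((100, 150), dtype=np.uint8)
img[20:60, 30:90] = 255
patch = normalize_head_crop(img, (30, 20, 90, 60), out_size=64)
print(patch.shape)  # (64, 64)
```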

Let's briefly walk through the three components. The face detector is a YOLOv8 model that produces a rectangular bounding box for each detected turtle head. The 1,000 manual annotations used for training proved sufficient for this challenge, although the model is slightly overfitted to this context. Next, the segmentation system removes background pixels around the turtle head. Because training a segmentation model requires annotating a large number of pixel-accurate segmentation masks, we opted for a zero-shot solution, and found HQ-SAM, a high-quality variant of the Segment Anything Model (SAM) foundation model, very suitable. Despite being computationally costly, this model segments the turtle head without any explicit training stage. Finally, the rotation subsystem aligns the face so it is always horizontal. It is implemented as a neural network that regresses the rotation angle from the image and then rotates the image accordingly.
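Once a segmentation mask is available (from SAM or any other segmenter), the background-removal step reduces to masking out non-head pixels. This is a minimal numpy sketch of that step only; it assumes a boolean mask and does not reproduce the actual SAM inference call.

```python
import numpy as np

def remove_background(image: np.ndarray, mask: np.ndarray, fill: int = 0) -> np.ndarray:
    """Blank out pixels outside the (e.g. SAM-produced) head mask so the
    re-identification model never sees distracting background. `mask`
    is a boolean array with the same height/width as `image`."""
    out = np.full_like(image, fill)
    out[mask] = image[mask]
    return out

img = np.random.default_rng(0).integers(0, 256, size=(8, 8), dtype=np.uint8)
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True            # pretend this region is the segmented head
clean = remove_background(img, mask)
```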

Feature Extraction and Matching

Following an extensive literature review, the team investigated three distinct feature extraction methods: metric learning, LoFTR, and LightGlue. LightGlue emerged as the top-performing algorithm, with an impressive 92% matching accuracy and 86% accuracy in detecting novelty, as illustrated below. Its main drawback is that it requires a GPU to achieve reasonable inference times in the application.

LightGlue uses the traditional SIFT (Scale-Invariant Feature Transform) algorithm to extract a collection of keypoints (blue dots) from the input images, then applies a matching algorithm to pair those keypoints (depicted with the green lines), assigning a confidence score to each keypoint pair. This process runs both on the images in the turtle database and on the input image.
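The core idea of keypoint matching with per-pair confidences can be illustrated with a much simpler stand-in: mutual nearest-neighbour matching on descriptor vectors, using cosine similarity as the confidence. LightGlue's learned attention-based matcher is far more robust than this sketch, which only shows the matching-plus-confidence concept.

```python
import numpy as np

def mutual_nn_matches(desc_a: np.ndarray, desc_b: np.ndarray):
    """Match keypoint descriptors from two images by mutual nearest
    neighbour in cosine similarity. Returns (index_a, index_b, score)
    triples; the score plays the role of LightGlue's confidence."""
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    sim = a @ b.T                    # pairwise cosine similarities
    best_b = sim.argmax(axis=1)      # best partner in B for each A keypoint
    best_a = sim.argmax(axis=0)      # best partner in A for each B keypoint
    return [(i, j, float(sim[i, j]))
            for i, j in enumerate(best_b) if best_a[j] == i]

rng = np.random.default_rng(1)
d1 = rng.normal(size=(5, 32))
d2 = d1 + 0.01 * rng.normal(size=(5, 32))  # same keypoints, tiny noise
matches = mutual_nn_matches(d1, d2)
print(len(matches))  # 5: every keypoint finds its noisy twin
```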


Next, the top 5 matching turtles are determined by computing a Wasserstein score. This metric examines the distribution of confidence scores over keypoint pairs, an approach not widely used in the existing LightGlue literature. To assess whether the turtle is new, a novelty threshold was determined from the distribution of Wasserstein scores between matching and non-matching turtles.

Remarkably, LightGlue performed so well that it exposed a mislabeled turtle within the dataset, an accomplishment the STCB team had long dreamed of.

The Application

The application team aimed to create a user-friendly GUI for local use, allowing end-users to run turtle detection models from the face detection and feature extraction teams. In response to STCB's user requirements, the challenge scope included features like image upload, exporting turtle lists, model re-training, turtle database management, and offline/mobile application support.

Based on these requirements, different frameworks were explored. KoBoToolbox was initially considered but lacked sufficient documentation. Gradio was adopted to visualize model functionality. Wireframes, developed in Figma, were shared and refined with end-users and the face detection/feature extraction teams to ensure a shared understanding of the system's operation and model integration points.

Drawing on the wireframes, Next.js was chosen for frontend development, which currently integrates the models from the face detection and feature extraction teams. Below, you'll find screenshots providing a glimpse of the user interface when performing turtle detection on the webpage.

Below is a table summarizing the infrastructures that were explored along with the hurdles they faced:

Conclusion: AI Empowers Sea Turtle Conservation (if GPUs are available) 🐢

With an advanced pipeline integrating face detection, segmentation, and rotation, coupled with a cutting-edge feature extraction and matching algorithm, visualized in a beautifully designed web app, the project appears to be a resounding success. However, one crucial requirement, offline operation on a laptop, remained unfulfilled. Unfortunately, the LightGlue model requires a GPU for reasonable inference times, making it unsuitable for a standard laptop. As a workaround, re-identification currently runs on Google Colab, albeit without the sophisticated front-end features. If you're reading this and would like to contribute a free or affordable GPU to STCB, enabling them to use the application and enhance sea turtle conservation efforts, please contact STCB or Fruitpunch AI. Together, we can make a difference for our marine friends!

A Dream Come True

During the concluding presentation, Kaj and Daan from STCB were genuinely moved by the dedication poured into the project, and witnessing the final results was nothing short of a "dream come true." The fact that all this was achieved in just 10 weeks is truly remarkable, and a heartfelt shoutout goes to the dedicated efforts of data engineers, data scientists, and front-end developers. The application and its code will be open-sourced, providing an opportunity for anyone interested to use and contribute to the development of a turtle re-identification application.

Big thank you to everyone who worked hard for 10 weeks to reach these results!

Anton van Werkum, Hamzah Al-Qadasi, Jeroen Verboom, Davide Coppola, Debajyoti Debnath, Eelke Folmer, Laurenz Schindler, Lennart van de Guchte, Marcus Leiwe, Miruna-Maria Morarasu, Rodrigo Mattos, Thor Veen, Alexandre Capt, Barbra Apilli, Jeroen Vermunt, Sonny Burniston, Rob Wijnhoven and of course Kaj Schut and Daan Zeegers
