AI-powered Wildlife Conservation in Africa
An account of a 10-week team effort developing multiple machine learning and hardware pipelines to bring production-ready AI to edge hardware on flying rangers.
The conservation of threatened species in South Africa has seen remarkable successes in recent years, but illegal poaching still poses a grave danger to a range of animals living in the reserves: 415 rhinos were killed in 2021 alone. The team of Strategic Protection Of Threatened Species (SPOTS) has been working consistently over the years to put an end to illegal poaching. But monitoring such a vast area with a limited number of rangers, day and night, is a huge challenge.
One way to tackle the problem is using UAVs (Unmanned Aerial Vehicles). SPOTS teamed up with FruitPunch AI to crowdsource a team of AI for Good engineers to develop a poacher detection system and put it onto an autonomous drone. Over a series of AI for Wildlife Challenges, thousands of engineering hours have been invested into developing this AI-powered virtual flying ranger. Built for the unique operating conditions of South African wildlife reserves, its goal is to detect poachers autonomously at any time of day.
This is an account of the 10-week effort of our team of 22 engineers, who picked up where the first two AI for Wildlife Challenges left off and developed multiple software and hardware pipelines improving the efficiency and autonomy of the UAV.
Our AI for Wildlife team was divided into four subteams, each addressing one subgoal: Model Optimization, CI/CD, Hardware, and Autonomous Flight.
With the subteams formed, each of the 4 groups set out to define a more detailed framework of subgoals to work towards over the course of the 10-week challenge. Due to overlap in technology and methods needed to reach the subgoals, the Model Optimization and the CI/CD team worked more closely together, as did the Hardware and the Autonomous Flight team.
Team by team, we’ll be documenting how the solutions formed and what our teams experienced while building these solutions. We’ll draw conclusions from our work, implications for the real world and outline next steps for the follow-up projects.
The Model Optimization team's goal for the challenge was to optimize the YOLOv5 model for the NVIDIA Jetson Nano, a small computer for AI IoT applications, in order to i) increase the inference speed and ii) reduce the memory footprint. Our focus was mainly on inference speed rather than absolute mAP.
We started by reviewing the literature, and our preliminary research resulted in four potential paths to explore:
We experimented with optimizing sparse models with ONNX. For CPU inference, the DeepSparse Engine produced a speedup; for GPU inference, however, it was much slower than native PyTorch.
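For illustration, here is a rough sketch of how such a CPU benchmark with the DeepSparse Engine could look on an ONNX export of YOLOv5s. The model path, input shape and loop count are assumptions, not our exact setup.

```python
# Hedged sketch: benchmark CPU inference of an ONNX YOLOv5s export with DeepSparse.
# The model path and input shape are illustrative assumptions.
import time
import numpy as np
from deepsparse import compile_model

onnx_path = "yolov5s.onnx"                     # hypothetical ONNX export of YOLOv5s
engine = compile_model(onnx_path, batch_size=1)

dummy_input = [np.random.rand(1, 3, 640, 640).astype(np.float32)]
engine.run(dummy_input)                        # warm-up run

runs = 50
start = time.perf_counter()
for _ in range(runs):
    engine.run(dummy_input)
elapsed = time.perf_counter() - start
print(f"DeepSparse CPU throughput: {runs / elapsed:.1f} inferences/s")
```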
We also ran experiments with the recipe-based optimization libraries NeuralMagic and Nebullvm. The former did not produce significant improvements in the results; the latter proved difficult to set up.
We also tried converting the YOLOv5 models from the previous challenge to a TensorRT engine with INT8 calibration for the Jetson Nano 4GB. It failed: TensorRT engines turned out to be hardware-specific, so one cannot build an INT8-calibrated engine on one device and run inference with it on the Jetson Nano. However, we could build and run an FP16 engine directly on the Nano.
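As a minimal sketch of the working path, this is roughly how an FP16 engine can be built on the Nano itself from an ONNX export using the TensorRT Python API that ships with JetPack. File names and the workspace size are assumptions for illustration.

```python
# Hedged sketch: build an FP16 TensorRT engine on the Jetson Nano from an ONNX export.
# Assumes the TensorRT Python API bundled with JetPack; paths are illustrative.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

def build_fp16_engine(onnx_path="yolov5s.onnx", engine_path="yolov5s_fp16.engine"):
    builder = trt.Builder(TRT_LOGGER)
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("Failed to parse the ONNX model")

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28          # 256 MiB, modest enough for the 4 GB Nano
    config.set_flag(trt.BuilderFlag.FP16)        # the key step: FP16 instead of INT8

    engine = builder.build_engine(network, config)
    with open(engine_path, "wb") as f:
        f.write(engine.serialize())               # the resulting engine is only valid on this device

build_fp16_engine()
```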
Our results showed that YOLOv5 Small with an image size of 640 x 640 in FP16 mode was the most sensible option for the current dataset on a desktop GPU.
Final Jetson Nano Results
The results showed that we don't have to use the Nano's higher power mode: the lower power mode delivers the same performance and accuracy.
We concluded that YOLOv5s was the better choice given the accuracy and inference speed in our results. An input image size of 640 x 640 is suitable for the current dataset(s). FP16 precision was the way to go, since we didn't lose accuracy while boosting inference speed. On the Jetson Nano, TensorRT was the best choice, with the best performance of all the optimizations we tried, and it also fits well into the CI/CD automation and the codebase.
As for next steps, it would be worthwhile to explore different hardware accelerators. On-drone tests would come in handy to establish a baseline and define SMART goals for further model optimization. And with the YOLOv5 architecture being upgraded constantly, there is always potential to explore structured pruning.
“At the beginning of the challenge, I felt I did not belong or not as skilled/ knowledgeable as the other members. I could barely understand the jargon and how I would be of value to the team or the challenge. However, through engagement and asking questions (everyone was friendly and helpful), I quickly understood that FruitPunchAI challenges are about learning, impact and networks. I came to understand it is a platform to enhance my DS / ML skills and our society with AI. By the end of the challenge, I had gained confidence and an appreciation of how not knowing is an opportunity to learn. It is also encouraging that our contributions will be helping the rangers.” - Sabelo Mcebo Makhanya, Model Optimization Team
Watch the summary of the 10 weeks of the Challenge and the Model Optimization team’s final results, presented by Jaka Cikač >>>
The aim of the CI/CD team was to develop a functioning CI/CD pipeline that automatically processes new data to train an existing model, which can be deployed to the SPOTS drone with improved performance.
In order to do so, we established the following subgoals:
We started off with research into the available technologies. We chose GitLab to create the CI/CD pipeline; this is also where we stored the code and converted it into Docker images. These images were stored in an IBM Cloud container registry, from which the components in Kubeflow pull them. These components form the machine learning pipeline that gets the data, preprocesses it, trains and evaluates the model, and then stores it in Google Drive.
After a thorough consideration of how the user will interact with the system in the future, we created a workflow:
When a new run is started, code from GitLab is converted into Docker images by a process known as containerisation. Code for the different machine learning components resides in individual branches of the GitLab repository. Each branch contains code informing Kubeflow about the pipeline structure and the paths to the inputs and outputs of each component; this code is known as the pipeline definition. Any modification of the code triggers the CI/CD pipeline to generate an updated image, which is then pushed to the IBM Cloud Container Registry.
The MLOps pipeline in Kubeflow retrieves these images and runs the code in the four machine learning workflow components outlined above: retrieving the data, preprocessing it, training the model, and evaluating it before the resulting model is stored in Google Drive.
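To make the pipeline-definition idea concrete, below is a minimal sketch, using the Kubeflow Pipelines (KFP v1) SDK, of how four prebuilt component images could be wired together. The registry namespace, image names and arguments are hypothetical placeholders, not our actual definitions.

```python
# Hedged sketch of a Kubeflow pipeline definition using the KFP v1 SDK.
# Registry namespace, image names and arguments are illustrative assumptions.
import kfp
from kfp import dsl

REGISTRY = "us.icr.io/ai-for-wildlife"   # hypothetical IBM Cloud Container Registry namespace

def component(name, image, arguments):
    """Wrap one of the prebuilt Docker images as a pipeline step."""
    return dsl.ContainerOp(name=name, image=f"{REGISTRY}/{image}", arguments=arguments)

@dsl.pipeline(name="poacher-detector-retraining",
              description="Retrain the YOLOv5 detector whenever new drone footage arrives")
def retraining_pipeline(dataset_uri: str = "gs://spots-footage/latest"):
    get_data   = component("get-data",   "get-data:latest",   ["--source", dataset_uri])
    preprocess = component("preprocess", "preprocess:latest", ["--input", dataset_uri]).after(get_data)
    train      = component("train",      "train:latest",      ["--epochs", "100"]).after(preprocess)
    component("evaluate-and-store", "evaluate:latest", ["--upload-to", "gdrive"]).after(train)

if __name__ == "__main__":
    # Compile into a pipeline spec that can be uploaded to the Kubeflow Pipelines UI
    kfp.compiler.Compiler().compile(retraining_pipeline, "retraining_pipeline.yaml")
```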
The Kubeflow framework can be deployed on all major cloud platforms as well as on-premise. The cloud platform we used was IBM Cloud Kubernetes Service.
The biggest challenge for us was to grasp the GitLab CI/CD and Kubeflow pipelines and to ensure that all the technologies worked well with each other. We were able to create the pipeline, yet there is still room for improvement.
We’ve outlined the follow-up path for the next CI/CD team to start with:
“The challenge not only improved my machine learning and coding skills but led to my growth as an individual. I learned how to better work with people, to collaborate with team members from all over the world in a virtual environment, all that while balancing other areas of my life. I’m glad I could contribute to saving endangered species.” - Barbra Apilli, CI/CD Team
The goal of the Hardware Team was to connect different hardware components on the drone, which included:
We had to make sure that the information flows smoothly in the whole system, with as little delay as possible. Within seconds, the drone could have already moved tens of meters, so the detection results had to be processed and sent to the ground really quickly.
Our goals were:
We were able to run the model on the Jetson Nano, but we didn't find a way to skip frames so that inference keeps up with the live video. We did manage to set up the system to send the information to the ground; it required plugging an HDMI cable between the Jetson Nano and the Herelink transmitter. We also tried to set up the ground display to show the original video alongside the model output on the same screen, but that turned out to be more difficult than expected.
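One way frame skipping could be handled, sketched here purely for illustration and not as the code deployed on the drone, is to read the camera in a background thread and always hand the detector only the newest frame, silently dropping whatever it could not keep up with:

```python
# Hedged sketch: always run detection on the newest camera frame, dropping stale ones.
import threading
import time
import cv2

class LatestFrameReader:
    """Continuously read from the camera and keep only the most recent frame."""

    def __init__(self, source=0):
        self.cap = cv2.VideoCapture(source)   # 0 = default camera; a CSI/GStreamer source would go here
        self.frame = None
        self.lock = threading.Lock()
        threading.Thread(target=self._reader, daemon=True).start()

    def _reader(self):
        while self.cap.isOpened():
            ok, frame = self.cap.read()
            if not ok:
                break
            with self.lock:
                self.frame = frame            # older, unprocessed frames are simply dropped

    def read(self):
        with self.lock:
            return self.frame

reader = LatestFrameReader(0)
while True:
    frame = reader.read()
    if frame is None:
        time.sleep(0.01)
        continue
    # detections = run_detection(frame)       # hypothetical call into the YOLOv5/TensorRT engine
    time.sleep(0.05)                          # stand-in for the time inference would take
```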
“Being a part of the AI for Wildlife Challenge was a great learning opportunity. My background is in machine learning, so I was able to learn about different parts of the pipeline required to create a successful real-life solution. The biggest challenge was to work on the hardware remotely (I was in Amsterdam, 9000 km from the actual drone), but I still managed to contribute to the project. It was great to meet people from all over the world and work together towards a common goal for a good cause.” - Maks Kulicki, Hardware Team
Although the plane is guided by a mission planner in flight, the landing is not executed on autopilot. Extreme caution has to be taken when landing a fixed-wing plane with a four-metre wingspan in the densely packed landscape of the South African bush. Roads are only slightly wider than the plane, and landing carries high risks, including hitting overhanging tree branches, uneven terrain and passing animals. Because of this, the basic auto-landing features were originally replaced by manual landing with expert drone pilots.
But manual landing requires a lot of resources, and SPOTS wanted to implement a smart auto-landing feature as soon as possible. Our team set out to develop a prototype for landing the plane autonomously.
We first set out to list all the constraints the plane has to adhere to: things like weather conditions, runway conditions and the plane's dimensions have to be encoded accurately when handing control of the plane over to our software.
We approached the landing problem from two perspectives:
The first technique involves setting up two GPS beacons, one at the start and one at the end of the runway, which serve as pointers for the plane. When the drone approaches the first runway pointer, a script calculates the desired landing approach direction and uploads a flight plan to a PX4 Flight Control Unit (FCU). The FCU monitors the sensors on board the plane and executes the landing manoeuvre, ensuring that the touchdown is soft enough and does not deviate too much from the middle of the runway.
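As a small illustration (our own sketch, not the actual script), the approach direction can be derived from the two beacon coordinates with a standard initial-bearing calculation; the resulting heading is then used to lay out the landing waypoints that get uploaded to the PX4 FCU. The beacon positions below are hypothetical.

```python
# Hedged sketch: compute the landing approach heading from the two runway GPS beacons.
import math

def approach_bearing_deg(start, end):
    """Initial great-circle bearing from the first beacon to the second, in degrees.
    `start` and `end` are (latitude, longitude) pairs in decimal degrees."""
    lat1, lon1 = map(math.radians, start)
    lat2, lon2 = map(math.radians, end)
    dlon = lon2 - lon1
    x = math.sin(dlon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

# Hypothetical runway beacon positions (decimal degrees)
runway_start = (-24.000000, 31.000000)
runway_end   = (-24.000500, 31.000800)
heading = approach_bearing_deg(runway_start, runway_end)
print(f"Approach heading: {heading:.1f} deg")
# This heading would feed into the landing waypoints uploaded to the PX4 FCU,
# which then executes the actual manoeuvre using its onboard sensors.
```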
Using a Gazebo simulation environment, we tested the auto-landing script successfully. The only setback we faced was a discrepancy in the simulation that caused the plane to land below the surface of the map. After fixing this, the method is ready to be tried on a test version of the real plane in South Africa.
In parallel, we started to build a second simulation environment in which we could train a model of the plane to land with the help of a Reinforcement Learning (RL) agent. In a highly realistic environment built in Unreal Engine 4, we could model the descent of the plane using an AirSim simulation. The goal here was to connect the plane's controls to OpenAI's Gym, a widely used toolkit for setting up and running RL experiments.
Training an RL agent to land the plane involves defining what counts as 'good'. Specifically, we had to define the state the agent observes, the actions it can take to control the plane, the reward signal that scores a soft and centred touchdown, and the conditions that end a landing attempt.
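The sketch below shows, purely for illustration and with simplified assumptions throughout (placeholder dynamics, no actual AirSim calls), how such a landing task can be framed as a Gym environment with a discrete action set that a Deep Q-Learning agent can work with:

```python
# Hedged sketch of a Gym environment for the landing task; the state, actions,
# reward and dynamics are simplified assumptions, and AirSim calls are omitted.
import numpy as np
import gym
from gym import spaces

class LandingEnv(gym.Env):
    """Toy fixed-wing landing environment with placeholder dynamics."""

    def __init__(self):
        # Observation: altitude (m), airspeed (m/s), pitch, roll, lateral offset from the centreline (m)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(5,), dtype=np.float32)
        # Coarse discrete actions keep the task compatible with Deep Q-Learning
        self.action_space = spaces.Discrete(5)   # 0: no-op, 1/2: pitch up/down, 3/4: roll left/right

    def reset(self):
        # A real implementation would reset the AirSim plane to the start of the approach
        self.state = np.array([50.0, 20.0, 0.0, 0.0, 2.0], dtype=np.float32)
        return self.state.copy()

    def step(self, action):
        # A real implementation would send the command to AirSim and read the new state back
        self.state[0] -= 1.0                                  # placeholder: constant descent rate
        if action in (3, 4):
            self.state[4] += -0.5 if action == 3 else 0.5     # placeholder lateral drift
        altitude, _, _, _, offset = self.state
        done = bool(altitude <= 0.0)
        # Reward staying on the centreline; give a bonus for a centred touchdown
        reward = -0.1 * abs(offset) + (10.0 if done and abs(offset) < 2.0 else 0.0)
        return self.state.copy(), float(reward), done, {}
```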
After encoding all of this information, the agent could start learning the landing process. We made a first attempt at doing so using Deep Q-Learning, but progress was unfortunately halted by a lack of compute resources. Other obstacles we faced in the RL pipeline included a lack of definition in the Unreal Engine environment, which is essential for transferring the capabilities of the reinforcement learning agent to the real-life plane.
Both of our development trajectories need more refinement before real-life testing, but they have shown promise as a replacement for the manual landing of the SPOTS plane. We're looking forward to seeing the techniques above implemented on the drone in the future.
“The diversity of the tasks involved challenged me to learn a lot of new things in the overlap of hardware and software. The interactive team environment and interaction with the people from SPOTS was a great experience for someone learning to apply AI in real-life scenarios. Moreover, working on a case as beautiful as wildlife conservation has been inspiring, it showed how applying engineering knowledge can have a positive impact in the real world.” - Emile Dhifallah, Autonomous Flight Team
Watch the presentation of final results of the CI/CD team by Aisha Kala, of the Hardware team by Kamalen Reddy and of the Autonomous Flight team by Ryan Wolf >>>
The protection of endangered species by the SPOTS UAV already in use has become more efficient thanks to the results of this challenge. The machine learning algorithms that spot the poachers have become more accurate, faster and more automated. Thanks to this higher efficiency, more poachers can be apprehended. This will lead to better conservation of endangered species and of the local ecosystems in which they live.
We made progress on all fronts, but the components are not yet fully ready for deployment on the edge. Automated flight still needs to be moved onto the drone in a way that does not collide with object detection: both functions strain the GPU, so they can't run at the same time yet. More engineering hours will be needed to make all parts of the solution work together.
Who would've thought that bringing TinyML to the edge on an autonomous drone in remote areas of the African savanna could be a bit of a challenge ;)
But we have a mission. We did it … and we will keep doing it for the rhino and the communities to which these beloved animals are so important. The rhino is a symbol on South African banknotes. It is a totem of the Lango community in Uganda. And as one of Africa's Big Five animals, the rhino contributes immensely to the tourism sector. Moreover, it is a global symbol of the importance of preserving wildlife on Earth for generations to come.
Team AI for Wildlife 3
Model Optimization: Sahil Chachra, Manu Chauhan, Jaka Cikač, Sabelo Makhanya, Sinan Robillard
CI/CD: Aisha Kala, Mayur Ranchod, Sabelo Makhanya, Rowanne Trapmann, Ethan Kraus, Barbra Apilli, Samantha Biegel, Adrian Azoitei
Hardware: Kamalen Reddy, Estine Clasen, Maks Kulicki, Thembinkosi Malefo, Michael Bernhardt
Autonomous Flight: Thanasis Trantas, Emile Dhifallah, Nima Negarandeh, Gerson Foks, Ryan Wolf