The immediate response to natural disasters like large cyclones, hurricanes, and floods involves massive and complex search and rescue operations. Oliver Cottray at Esri posted an in-depth piece about the United Nations World Food Programme’s (WFP) innovative use of drones to aid in critical disaster relief and risk prediction in Mozambique.
The World Food Programme is undertaking a study funded by the European Civil Protection and Humanitarian Aid Operations (ECHO). The first phase, conducted in November 2021, demonstrated the feasibility of leveraging drones for the search phase of search and rescue operations. It proved easier to identify individuals in need of rescue in images collected by drones flying in grid patterns over a target area than in live video feeds broadcast from helicopters and drones.
The problem is that a single drone flight can collect hundreds to thousands of images — simply too many to scan through manually. A machine-learning solution that rapidly identifies people in need of rescue is a huge asset to any search and rescue operation, where every minute counts.
“The faster we can go through them, the faster we can get to the people that need our help,” Patrick McKay, drone data operations manager for the World Food Programme, said in Esri’s blog post.
This is why the World Food Programme is collaborating with Synthetaic (now RAIC Labs) to deploy RAIC (Rapid Automatic Image Classification), an end-to-end AI solution for disaster-response efforts around the world.
RAIC works from a single example image and finds similar objects in an unlabeled dataset in minutes — whether that dataset is large-scale aerial drone imagery, imagery from other electro-optical sources such as satellites, or full motion video (FMV).
A key advantage of RAIC is that detections (such as those needed in search and rescue missions) can be carried out without any training on human-labeled data.
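RAIC’s internals aren’t described in the post, but the general idea of single-example detection without labeled training data can be illustrated with a similarity search over image embeddings. The sketch below is a hypothetical stand-in, not RAIC itself: it tiles each drone frame, embeds every tile with an off-the-shelf pretrained backbone, and ranks tiles by cosine similarity to one operator-selected example chip. The file paths and tile size are assumptions made for illustration.

```python
# Illustrative sketch only -- RAIC is proprietary; this shows the general idea of
# single-example similarity search over tiled drone imagery using an off-the-shelf
# embedding model, with no task-specific training or human-labeled data.
import torch
import torchvision.transforms as T
from torchvision import models
from PIL import Image
from pathlib import Path

# Pretrained backbone used purely as a feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # strip the classifier head; keep 2048-d embeddings
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(img: Image.Image) -> torch.Tensor:
    """Return a single embedding vector for one image or image tile."""
    return backbone(preprocess(img).unsqueeze(0)).squeeze(0)

def tiles(img: Image.Image, size: int = 512):
    """Slide a fixed-size window across a large aerial frame."""
    w, h = img.size
    for x in range(0, w - size + 1, size):
        for y in range(0, h - size + 1, size):
            yield (x, y), img.crop((x, y, x + size, y + size))

# One example chip of a person, selected by the operator from a single frame
# (hypothetical file name).
query = embed(Image.open("example_person_chip.png").convert("RGB"))

# Rank every tile of every drone frame by similarity to the example chip.
hits = []
for frame_path in Path("drone_frames").glob("*.jpg"):
    frame = Image.open(frame_path).convert("RGB")
    for (x, y), tile in tiles(frame):
        score = torch.nn.functional.cosine_similarity(query, embed(tile), dim=0).item()
        hits.append((score, frame_path.name, x, y))

# Analysts review only the highest-scoring tiles instead of every image.
for score, name, x, y in sorted(hits, reverse=True)[:20]:
    print(f"{score:.3f}  {name}  tile at ({x}, {y})")
```

In this kind of workflow the heavy lifting is in the pretrained embeddings, which is why no mission-specific labeling is needed before the model can be useful.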
In June 2022, Synthetaic CTO Brian Goodwin traveled to Salisbury, England, to participate in rescue simulations conducted by WFP. Human subjects were situated across hundreds of acres, and drone operators then captured aerial imagery of the entire simulation space. In less than 10 minutes, Goodwin uploaded this image data and used RAIC to train an AI model. Less than an hour later, he had successfully identified and located every subject in the simulation exercise — including those wearing camouflage.
By using RAIC, search and rescue teams don’t need a model built ahead of time; instead, they can acquire imagery on the fly and rapidly build and run a high-performing model in minutes — one that is trained on, and understands, the unique features of the environment they are operating in.
“Before now, trying to build an image-based detection model for search and rescue operations would have been a waste of effort. By the time the model was ready, the emergency would be long over,” said McKay. “We tried this and it took 20 people three weeks to achieve. Literally over a year’s worth of time was spent, and the result was a model based on only one environment. Now there’s a way to do this in minutes. RAIC is going to be a game changer for search and rescue and other emergency response operations across the globe.”