Title

Revised Method for Differentiating Water from Land in Aerial Imagery

Poster Number

10a

Lead Author Affiliation

Cybersecurity

Lead Author Status

Masters Student

Second Author Affiliation

ECPE

Second Author Status

Faculty

Introduction

As society’s encroachment on wetlands continues, and our impact on those wetlands becomes more pronounced, there is a growing need to accurately monitor these environmentally important areas. However, these areas tend to be vast in size, difficult to access, and sensitive to change, making in-person monitoring problematic. The recent emergence of Unmanned Aerial Vehicle (UAV) technologies offers environmental researchers a reliable, affordable, and minimally invasive way of monitoring these areas.

Purpose

This project addresses one key area of such a UAV-based system: How can an autonomous UAV differentiate between water and land, given the limitations of time and computational power onboard a small UAV? Notable challenges include:

  • An ever-changing landscape sculpted by tidal fluctuations and seasonal flows.
  • A wide range of possible weather and lighting conditions.
  • Sensor data limited to GPS and a commodity camera providing data only in the Red-Green-Blue (RGB) color space.
This effort supports a larger UAV research project led by Dr. Elizabeth Basha and builds on prior work, yielding substantial improvements in both accuracy and efficiency.

Method

The algorithm is based on the following observation: in an aerial image, the color and brightness of water tend to have low variability compared to other features in the image. By searching for areas of low variability and applying machine learning (ML) techniques to compensate for lighting and other variables, we can identify areas of water with increased accuracy and efficiency. The method starts with the offline training of a classifier (Water/Not Water) using images from previous flights. To conserve the computational resources onboard the UAV, a metaclassifier identifies the subset of features that yields optimal classification, along with the ML algorithm best suited to those features. Through this process, the metaclassifier ultimately produces the trained ML model deployed onboard the UAV. Then, for each image taken during a flight, the algorithm first computes the minimum, mean, maximum, and standard deviation of each image channel (Red, Green, Blue, Hue, Saturation, Value). It then recursively does the following:

  • Submits the values of the current region to the classifier, with the output of the classifier (Water/Not Water) becoming an additional feature in subsequent iterations.
  • Divides the current region evenly into four (4) sub-regions.
  • Repeats until it reaches the stop condition (a pre-defined magnification factor).
Image regions that have the least variability, and that are classified as water at multiple magnification levels, are identified as water with a high degree of accuracy.
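The per-region statistics and recursive subdivision described above can be sketched as follows. This is a minimal illustration, not the project's implementation: the `classify` rule (a low-standard-deviation threshold), the threshold value, and the use of RGB channels only (omitting Hue, Saturation, and Value for brevity) are all assumptions.

```python
import numpy as np

def region_features(region, parent_label=None):
    """Minimum, mean, maximum, and standard deviation of each channel
    (RGB shown here; the full method also uses Hue, Saturation, Value).
    On deeper levels, the parent region's label is an extra feature."""
    feats = []
    for c in range(region.shape[2]):
        channel = region[:, :, c].astype(float)
        feats += [channel.min(), channel.mean(), channel.max(), channel.std()]
    if parent_label is not None:
        feats.append(float(parent_label))
    return feats

def classify(features):
    """Stand-in for the trained model produced offline by the
    metaclassifier. Hypothetical rule: low standard deviation on
    every channel suggests water (the threshold of 12 is assumed)."""
    stds = features[3:12:4]  # the three per-channel std entries
    return all(s < 12.0 for s in stds)

def label_regions(image, depth, parent_label=None, origin=(0, 0), labels=None):
    """Classify the current region, divide it evenly into four
    sub-regions, and repeat until the stop condition (depth, standing
    in for the pre-defined magnification factor) is reached."""
    if labels is None:
        labels = {}
    label = classify(region_features(image, parent_label))
    labels[(origin, image.shape[:2])] = label
    if depth > 0:
        h, w = image.shape[0] // 2, image.shape[1] // 2
        for dy, dx in ((0, 0), (0, w), (h, 0), (h, w)):
            sub = image[dy:dy + h, dx:dx + w]
            label_regions(sub, depth - 1, label,
                          (origin[0] + dy, origin[1] + dx), labels)
    return labels

# Synthetic example: a calm blue "water" image with one noisy "land" quadrant.
rng = np.random.default_rng(0)
img = np.zeros((64, 64, 3))
img[:, :, 2] = 140.0                                  # uniform blue "water"
img[:32, :32, :] = rng.uniform(0, 255, (32, 32, 3))   # noisy "land" quadrant
labels = label_regions(img, depth=2)
```

In this sketch, the calm quadrants stay below the variability threshold at every level and are labeled water, while the noisy quadrant is not, matching the intuition in the paragraphs above.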

Results

The previous classifier used by the UAV parent project had a verification accuracy of 89.5% and required approximately one hundred (100) seconds to process each image onboard the UAV. In contrast, the algorithm described here achieves a verification accuracy of 97.5% and requires less than five (5) seconds of processing per image on the same dataset. Opportunities for further improvement and optimization have been identified, but no usable results were available at the time of this report.

Significance

The UAV currently in use has a flight time of approximately twenty (20) minutes per battery charge. With image processing exceeding one (1) minute per image, researchers using the previous algorithm were faced with a difficult choice:

  • Keep flying between images:
    • Wastes precious flight time waiting for each image to be processed.
    • Reduces the number of images per flight to approximately twenty (20).
  • Land while processing each image:
    • Risks conflict with vegetation during each landing/takeoff cycle.
    • Risks inadvertently landing on water due to high classification error rate.
With this new classification algorithm, researchers can focus on flight operations, since each image is processed quickly as the UAV travels to its next waypoint. If the UAV must land for any reason, the improved accuracy of the classifier reduces the risk of a mishap.

Location

DeRosa University Center

Format

Poster Presentation

Poster Session

Afternoon 1pm-3pm

Apr 28th, 1:00 PM to 3:00 PM
