One bird, two bird? Red bird, blue bird? Analyzing bird songs using wavelet transforms, image processing, and neural networks

Student Author(s)

Alli VanderStoep
Taylor Rink

Faculty Mentor(s)

Dr. Paul Pearson, Mathematics

Biologists, ecologists, and bird enthusiasts want to estimate bird population trends in order to monitor changes in ecosystems. Recently, time-consuming field observations have been augmented by audio recordings of bird songs. In the lab, we identified the species of birds singing in these recordings. We used wavelet transforms to convert audio signals into images called scalograms. A scalogram displays a bird song in a format similar to sheet music: it shows how the pitch and volume of the song change over time. After applying denoising methods to these images, we trained a neural network to identify the birds from the processed images.
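The first step described above, converting an audio signal into a scalogram with a wavelet transform, can be sketched in a few lines of NumPy. This is an illustrative sketch only: the Morlet wavelet, the synthetic chirp signal, the choice of scales, and the simple threshold denoising step are assumptions for demonstration, not the pipeline the authors actually used.

```python
import numpy as np

def morlet(t, w=5.0):
    # Complex Morlet wavelet (an assumed, commonly used analysis wavelet)
    return np.exp(1j * w * t) * np.exp(-t**2 / 2) * np.pi**-0.25

def scalogram(signal, scales, fs, w=5.0):
    # Continuous wavelet transform magnitude: convolve the signal with
    # scaled, energy-normalized copies of the wavelet at each scale.
    out = np.empty((len(scales), len(signal)))
    for i, s in enumerate(scales):
        n = min(10 * int(s * fs), len(signal))   # truncate wavelet support
        t = (np.arange(n) - n // 2) / fs
        wav = morlet(t / s, w) / np.sqrt(s)
        out[i] = np.abs(np.convolve(signal, wav, mode="same"))
    return out

fs = 8000                                # assumed sampling rate (Hz)
t = np.arange(4000) / fs                 # 0.5 s of audio
# Synthetic rising chirp standing in for a bird song
sig = np.sin(2 * np.pi * (1000 + 2000 * t) * t)
scales = 1.0 / np.linspace(500, 3000, 64)  # scale is roughly 1/frequency
S = scalogram(sig, scales, fs)
print(S.shape)                           # → (64, 4000)
# A crude denoising step: zero out coefficients below a hard threshold
S_denoised = np.where(S > 0.2 * S.max(), S, 0.0)
```

The resulting array `S` can be rendered as an image (scales on the vertical axis, time on the horizontal), which is the "sheet music" view the abstract describes; the thresholded version would then be fed to a classifier.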


This research was supported by Dr. Dave VanWylen.
