8.1 Cornell Lab of Ornithology Externship Day 2: a talk with Jay McGowan
Thanks to Archie for connecting me with Jay. He gave me a tour of their archive storage, which houses an extensive collection of reels and tapes, the oldest dating back to the 1930s. Their library (yeah, I get why it's called the Macaulay Library now) is remarkable, and they recently received a new donation of tapes from a French ornithologist who passed away.
Jay explained how birdwatching in the United States often begins with learning bird vocalizations and understanding the environment, rather than focusing solely on visual identification. By integrating auditory and environmental cues, American birders develop a more holistic understanding of avian species, in contrast with the predominant reliance on visual identification among many Chinese birders. This difference could partly explain why American volunteers in initiatives like the Breeding Bird Survey tend to be more efficient and professional: they possess both visual and auditory identification skills, whereas many Chinese birders are limited to recognizing physical appearances.
Jay also suggested that when designing citizen science projects, it's important not to focus solely on teaching participants to identify birds by their calls, as vocalizations can be challenging to describe. Instead, leveraging tools like the Merlin app's spectrogram visualization can help participants form a mental image of the bird sounds. This method could be particularly effective in educating children or engaging new birders in China, where visual learning might resonate more strongly.
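To illustrate what a spectrogram actually shows, here is a minimal sketch using a synthetic chirp in place of a real recording (Merlin's own processing is certainly more sophisticated; the sample rate and window settings below are just illustrative assumptions):

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic "bird call": a one-second rising chirp standing in for a recording.
fs = 22050  # sample rate in Hz (assumed, for illustration)
t = np.linspace(0, 1.0, fs, endpoint=False)
chirp = np.sin(2 * np.pi * (2000 + 3000 * t) * t)  # frequency sweeps upward

# A spectrogram maps the sound to a picture: time on one axis, frequency on
# the other, with loudness as intensity -- the visual image a learner can
# memorize instead of trying to describe the call in words.
freqs, times, Sxx = spectrogram(chirp, fs=fs, nperseg=512, noverlap=256)

print(Sxx.shape)  # (frequency bins, time frames)
```

Plotting `Sxx` (e.g. with `matplotlib.pyplot.pcolormesh`) would show the chirp as a rising bright band, which is exactly the kind of shape-based cue that is easier to teach than a verbal description of a call.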
He introduced me to three tools used for bird sound recording: parabolic reflectors, shotgun microphones, and handheld recorders, emphasizing that parabolic reflectors offer the highest sensitivity.
Jay also shared insights into their training process for machine learning systems. Initially, they annotated only the prominent bird sounds identified by users, ignoring background species. However, as the systems improved, they began recognizing background species' calls, making annotation increasingly complex. Now, sound reviewers must meticulously annotate all audible bird species in recordings, even when the sound quality is poor. While AI is not yet capable of performing this task with sufficient accuracy, human reviewers also face challenges in achieving near-perfect results. I will be trying some of this annotation work tomorrow!
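The shift from labeling only the target species to labeling everything audible can be sketched as a simple data structure (the field names and the example IDs below are my own illustration, not the Macaulay Library's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class RecordingAnnotation:
    """Toy annotation record for one audio recording (hypothetical schema)."""
    recording_id: str
    foreground: list[str] = field(default_factory=list)  # species the recordist targeted
    background: list[str] = field(default_factory=list)  # everything else audible

    def all_species(self) -> list[str]:
        # The newer, more demanding workflow: every audible species counts,
        # not just the one the recordist set out to capture.
        return sorted(set(self.foreground) | set(self.background))

# Hypothetical example record.
ann = RecordingAnnotation(
    recording_id="example-001",
    foreground=["Northern Cardinal"],
    background=["Blue Jay", "American Crow"],
)
print(ann.all_species())
```

The point of the structure is that training data built from `foreground` alone teaches a model to ignore the very background calls it later learns to hear, which is why the reviewers' job grew as the models improved.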