Chasing the bird: 3D acoustic tracking of aerial flight displays with a minimal planar microphone array

ABSTRACT Tracking the flight patterns of birds and bats in three-dimensional space is central to key questions in evolutionary ecology but remains a difficult technical challenge. For example, complex aerial flight displays are common among birds breeding in open habitats, but information on flight performance is limited. Here, we demonstrate the feasibility of using a large ground-based 4-microphone planar array to track the aerial flight displays of the cryptic Jack Snipe Lymnocryptes minimus. The main element of male display flights sounds, at a distance, like a galloping horse. When the signal-to-noise ratio was sufficient and the flight passed over the microphone array, we successfully tracked male snipe in 3D space for up to 25 seconds with a total flight path of 280 m. During the 'gallop' phase, male snipe dropped from ca. 141 to 64 m above ground at an average velocity of 77 km/h, reaching up to 92 km/h. Our project is one of the first applications of bioacoustics to measure 3D flight paths of birds under field conditions, and our results were consistent with our visual observations. Our microphone array and post-processing workflow provide a standardised protocol that could be used to collect comparative data on birds with complex aerial flight displays.


Introduction
Animal movements have traditionally been studied in a two-dimensional plane, but effective methods for tracking in three dimensions are necessary to understand the diving behaviour of fish and marine mammals in aquatic environments, burrowing or arboreal species of reptiles and mammals, and the flight patterns of bats and birds (Cooper et al. 2014; Koblitz 2018; Aspillaga et al. 2019). For example, complex aerial flight displays to attract mates are a common feature of the social systems of waders, hummingbirds and songbirds (Figuerola 1999; Mikula et al. 2022; Wilcox et al. 2022). Aerobatic males often have higher mating success (Mather and Robertson 1992; Grønstøl 1996; Blomqvist et al. 1997), and selection for agility in aerial displays is one of the main hypotheses proposed to explain the evolution of female-biased sexual size dimorphism in birds (Székely et al. 2007). The flight performance of birds and bats under field conditions is relevant to key questions in evolutionary ecology, but the collection of detailed information on movements has proven to be a difficult technical challenge. Studies of aerial flight displays have often focused on the frequency and duration of displays (Møller 1991; Hedenström and Alerstam 1996; Lanctot et al. 2000) but have been unable to directly measure the length of the flight path, flight speed or turning radius.
Existing methods for animal tracking are not adequate for investigating the complex aerial flight displays of birds. Radar studies have provided valuable information on bird movement patterns, but it can be difficult to identify individual species (Hedenström and Alerstam 1996; Gauthreaux and Belser 2003). New methods for laser remote sensing with LIDAR systems are promising for both tracking and identification of flying animals (Jansson et al. 2017; Malmqvist et al. 2018). High-speed stereo cameras can be used to reconstruct flight trajectories but work best at relatively small spatial scales (Henningsson et al. 2010; Prinsloo et al. 2021). Optical rangefinders are effective for estimating flight speed, height and orientation at larger spatial scales, but require visual tracking of the target and work best for straight-line flights (Hedenström 1995; Stantial and Cohen 2015; Borkenhagen et al. 2017). New classes of GPS tags with altimeters and accelerometers allow tracking of 3D movements, but they require capture and individual tagging, are restricted to use with large-bodied birds, and their estimates of altitude are often highly biased (Bouten et al. 2013; Péron et al. 2020). Bioacoustic localisation of animals with microphone arrays provides a promising alternative because the methods allow for species identification, can be deployed at large spatial scales for any type of flight pattern, and do not require individual tagging to track free-living individuals under natural conditions (Rhinehart et al. 2020).
The use of microphone arrays to localise sound-producing animals has a long history (Magyar et al. 1978), and different geometrical configurations of microphones or of microphone arrays have been used to address a range of questions in ecology and evolution (Blumstein et al. 2011; Rhinehart et al. 2020; Verreycken et al. 2021). Signals from the different microphones must be synchronised if the locations of vocalising animals are to be triangulated from time differences of arrival of sounds. The most straightforward solution is to use cables to connect multiple microphones to a multi-channel recorder. Distant observation points in large microphone arrays can be synchronised with wireless connections, or several small arrays with local storage can be combined by using angle of arrival (Ali et al. 2009). Most applications of bioacoustic methods have focused on locating animals in a horizontal plane (Mennill et al. 2012; Wilson and Bayne 2018; Matsubayashi et al. 2022), but a few authors have used sounds to locate animals in three dimensions (Stepanian et al. 2016; Gayk and Mennill 2020; Verreycken et al. 2021). For example, Stepanian et al. (2016) deployed a 6-microphone array on three > 9.14-metre poles in an equilateral triangle with 20 m vertices (0.05 ha) and showed that the array could locate controlled sound sources up to 130 m above ground. Similarly, Gayk and Mennill (2020) deployed an 8-microphone array on four > 7.1-metre poles in a square with 25 m vertices (0.06 ha), and accuracy was < 5 m when locating birds from their calls. In both studies, microphones were placed in a bucket to minimise ground reflections. More recently, Matsubayashi et al. (2023) used a single compact 16-microphone array on a tripod together with a robot audition system to study courtship flights of Latham's Snipe Gallinago hardwickii, which successfully provided estimates of azimuth and elevation but not the 3D coordinates of bird positions.
The only examples of three-dimensional flight paths from bioacoustic methods are for bats in the laboratory (Verreycken et al. 2021) and in the field (Grodzinski et al. 2009; Koblitz 2018). Bats are good candidates for bioacoustic tracking because they regularly use sonar to navigate during flight, but a disadvantage is that their high-frequency sounds attenuate rapidly in air. Thus, the active space of sound transmission is limited, and the detection range of compact microphone arrays is restricted as well. Nevertheless, Grodzinski et al. (2009) obtained flight paths for Pipistrellus kuhlii with travel distances over 31 m and durations up to 6 sec by using time-of-arrival differences between eight microphones arranged as two symmetrical stars in two arrays placed on tripods.
The objective of our project was to conduct a field test of the potential for acoustic sampling of flight trajectories with a minimal array of four low-self-noise microphones connected by cables to a single high-quality multi-channel recorder. We successfully estimated the 3D flight path and components of flight performance for the aerial flight display of a poorly known species of wader: the Jack Snipe Lymnocryptes minimus. The Jack Snipe is one of a suite of migratory waders that are characteristic of open mires in the taiga wetlands of northern Eurasia (Järvinen et al. 1978; Cramp and Simmons 1983; Van Gils et al. 2020), including Common Snipe Gallinago gallinago, Broad-billed Sandpiper Calidris falcinellus and Spotted Redshank Tringa erythropus. Many of these waders are difficult to observe and monitor in their mire habitats but have characteristic aerial flight displays during the pre-laying period (Armstrong and Westall 1953; Sutton 1981; Svensson 1987). For example, the aerial flight display of male Jack Snipe includes three discrete stages: an ascending phase without vocalisations where the bird climbs in altitude for ~1 minute, a diving phase where the bird drops steeply and produces a characteristic 'galloping' vocalisation that lasts for ~15 seconds, and a final hovering phase with short 'grunt' calls that lasts for ~10 seconds (Olivier 2007; authors' pers. obs.). The galloping vocalisation is expressed continuously during the diving phase, whereas the grunt calls are repeated a few times during the hovering phase. During a complete display flight, the ascending and diving phases can be repeated multiple times such that the galloping vocalisation is repeated at ~1-2 minute intervals (Nilsson and Nilsson 1978). We have deployed automatic audio recorders to conduct acoustic monitoring of waders at Kautokeino in northern Norway since 2016 and found that Jack Snipe were quite active at some of our recording locations. However, the acoustic flight displays of the males were not completely captured because of significant self-noise from the microphones and limitations of the audio hardware associated with cost-effective automatic recording devices (Darras et al. 2020). Conventional acoustic monitoring with non-synchronised recording channels is of limited use for studying flight behaviour, the kinematics of flight, or the active space of vocalisations.
Our paper is organised as follows. First, we describe our field procedures and the setup of the 4-microphone array that we deployed to record acoustic flight displays of Jack Snipe (Section 2). Second, we describe our workflow for post-processing the sound recordings, with two steps: manual annotation of multichannel recordings, and extraction of a 3D movement track, which was formulated as a non-linear optimisation problem. Third, we demonstrate that our method can be used to collect quantitative data on the aerial flight displays of birds. We present examples of flight paths and ground tracks, and estimates of flight parameters such as the distance covered during the galloping phase and the flight speed at the start of this phase (Section 3). Finally, we provide recommendations for future field studies, including methodological considerations for flush-mounted microphones, planar array topology and micrometeorology, viable alternatives to manual annotation of sound recordings, and potential applications with other species of birds (Section 4).

Study area and recording sessions
Two recording sessions were conducted in Kautokeino, Finnmark, Norway, in June 2022. Two locations were selected based on successful detections of birds in field surveys completed by the authors in previous years. The mire complexes are typical breeding habitats for Jack Snipe. Continuous audio recordings were collected at location 2 (Figure 1) from June 15th at 17:56 to June 16th at 10:32, and at location 1 (Figure 1) from June 16th at 16:25 to June 17th at 04:04. Temperature data were compiled from the nearest weather station at Kautokeino (Station SN93700; Norwegian Meteorological Institute). During the two recording periods, air temperatures ranged between 3°C and 9°C. Wind speed was measured at the start of each recording period at 1.5 m height with a handheld anemometer (Windmate type WM100) and was always less than 2 m/s.
Our intention was to place three microphones at the vertices of an equilateral triangle and a fourth microphone at the centroid, but this configuration could not be achieved exactly in practice because of field conditions and the presence of shrubs, flower patches and standing water. The actual geometry of the microphone array was an approximation of the equilateral triangle (Figure 2) and had a footprint of about 1 ha. We expected the arrays to capture the flight patterns of male Jack Snipe because the detectable range of their distinctive vocalisations is ca. 1 km, whereas the ceiling for the display flights is usually 150-250 m in height (Nilsson and Nilsson 1978; Cramp and Simmons 1983; Olivier 2007).
The geodetic coordinates of the microphones were measured with a handheld GPS receiver (Garmin GPSmap 65). The absolute uncertainty on the horizontal position reported by the GPS was within 5 m at the 95% confidence level. In our uncertainty analysis of the estimated 3D flight path, we used empirical statistical distributions of latitude and longitude estimates provided by GPS receivers (Specht 2021), where the standard deviation is lower for the latitude than for the longitude. Both parameters were represented as Gaussian random deviates in our simulations. Since the terrain encompassed by the triangle and the surrounding areas was essentially flat and horizontal, with deviations below 0.5 m, the altitude was not measured, and we considered the microphones to be all at the same altitude.

Recording acoustic flight displays
In Kautokeino, the ground underlying the 'palsa' mire complexes where Jack Snipe breed is often permafrost. Access to the breeding sites by car or all-terrain vehicle is impossible, and all equipment must be carried by hand over unstable ground, so the amount of equipment had to be minimised. Moreover, the frozen ground makes it difficult to anchor equipment with pegs and poles. Buckets are an efficient measure against ground reflections but are likely to cause significant distortion of the received signals and to reduce the aperture of the sensors. Another constraint of our experiment was the goal of capturing the complete dive and hovering phases of the flight display, starting at 50-250 m above the ground (Cramp and Simmons 1983; Olivier 2007). Therefore, our focus was on maximising the signal-to-noise ratio and the amplitude of the received signal. This combination of constraints led us to set up an array that differed from the designs used in previous field studies (Stepanian et al. 2016; Gayk and Mennill 2020; Verreycken et al. 2021).
The microphones were not installed on a pole or tripod, but were instead flush-mounted on a heavy-coated marine plywood plate that was placed on the ground (Figure 3). The reflective coating of the plate and its dimensions (50 × 70 cm) comply with the requirements of the ISO 1996-2 standard for the measurement of sound pressure levels at a building façade (ISO 2017). Our configuration eliminates the so-called comb filter effect in the recordings (Hartmann 1997), where interference is caused by the delay between the direct sound and the reflection from the ground. Moreover, in a flush-mounted configuration, a 6 dB increase is observed in the strength of the signal because the direct and the reflected sound have the same amplitude and the same phase (Kinsler et al. 2000). According to the ISO 1996-2 standard, the 6 dB increase due to flush-mounting holds up to 4 kHz. Furthermore, our flush-mounted configuration minimised wind-induced noise compared with a microphone deployed on a tripod. Four pre-polarised 1/2-inch measurement microphones were used in combination with constant-current preamplifiers. For the points at the vertices of the triangle, a combination of a Bruel & Kjaer (B&K, Naerum, Denmark) type 4964 infrasound capsule and a B&K 2671 preamplifier was used. For the centroid, a B&K 4189 class 1 microphone capsule (IEC 2013) was combined with a B&K 2671 preamplifier. The use of different microphones and of infrasound models was due to the availability of sensors in our laboratory at the time of the experiment and did not compromise our results. In particular, the upper cut-off frequency of the infrasound capsules is well above the maximum frequency of the bird sounds that were recorded. Protection against wind was provided by 9 cm spherical windscreens cut in half, in addition to flush-mounting (Figure 3).
A single audio recorder was used to collect the signals delivered by the microphones. At location 1, a multichannel audio recorder (Sound Devices, Reedsburg, Wisconsin, USA, type MixPre6) was used until 21:27 on June 16th. Afterwards, this recorder was replaced by a Head Acoustics (Herzogenrath, Germany) SQuadriga III class-I-compliant portable measurement system. At location 2, the recorder was the same MixPre6 used at location 1. Long coaxial cables were used to transfer both the power supply and the signals between the microphones and the recorder. Connection via cables guaranteed the perfect synchronisation of the signals: since signals propagate along RG58 coaxial cable at 65.9% of the speed of light, for the cable lengths used in our experiments the propagation delays caused by the cables are much shorter than the sampling period of ca. 21 microseconds that corresponds to the sampling frequency of 48 kHz. The signals were recorded in 24-bit PCM format at 48 kHz and stored either as WAV files with the MixPre6 or as proprietary HDF files with the SQuadriga III. In the latter case, the files were later converted to WAV for post-processing.
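The negligibility of the cable delay can be checked with a quick calculation (our own illustration; the 250 m cable length is an assumed example, not a figure reported in the study):

```python
# Rough check: propagation delay in RG58 coaxial cable versus the 48 kHz
# sampling period. The 250 m cable length below is an assumed example.
SPEED_OF_LIGHT = 299_792_458.0   # m/s
VELOCITY_FACTOR = 0.659          # velocity factor of RG58 coaxial cable
SAMPLING_RATE = 48_000           # Hz

def cable_delay_s(length_m: float) -> float:
    """Signal propagation delay along a coaxial cable of the given length."""
    return length_m / (VELOCITY_FACTOR * SPEED_OF_LIGHT)

sampling_period = 1.0 / SAMPLING_RATE   # ~20.8 microseconds
delay = cable_delay_s(250.0)            # ~1.3 microseconds for 250 m
print(f"delay = {delay * 1e6:.2f} us, sampling period = {sampling_period * 1e6:.2f} us")
```

Even for cables several hundred metres long, the delay stays more than an order of magnitude below one sampling period, so inter-channel skew from the cabling is irrelevant.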

Annotating sound spectrograms
The Audacity sound editor (Audacity Team, release 3.2) was used to annotate the recordings displayed as spectrograms (2048 points, Hann window, 50% overlap). For each audio recording of the acoustic flight display of a male Jack Snipe where the complete so-called 'galloping phase' (GP) was visible in the spectrogram of each of the 4 channels, a single label file was populated with standardised annotations. The annotation was carried out by hand by the first author. The annotated features were the most energetic notes. The first channel to be annotated was that of the first arrival at the beginning of the GP. The marker was typically placed at the beginning of the note. The general annotation pattern is [1-4][bme][a-z]{1,2} according to the POSIX syntax of regular expressions (ISO/IEC/IEEE 2009). The first number specifies the channel. In the second block, b stands for beginning, m for middle (i.e. the GP itself) and e for end (Figures 4 and 5). The last block consists of one or two lowercase letters. As an example, the 3rd (resp. 28th) label in chronological order for the GP in channel 3 is 3mc (resp. 3mab). With this naming convention, a single label file can be used to identify the different occurrences of the same note in the different channels. When exporting the label track to a text file, Audacity generates one row per marker with the instant in seconds reported to 5 decimal places and the label.
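A parser for such a label file might look like the following sketch (not the authors' tooling; it assumes Audacity's standard three-column tab-separated export and the naming convention described above):

```python
# Sketch: parse Audacity label-track exports and group arrival times of the
# same note across channels, following the [1-4][bme][a-z]{1,2} convention.
import re
from collections import defaultdict

LABEL_RE = re.compile(r"^([1-4])([bme])([a-z]{1,2})$")

def parse_labels(lines):
    """Group arrival times by note identity, keyed per channel.

    Each Audacity row reads: start_time<TAB>end_time<TAB>label.
    Returns {(phase, note): {channel: arrival_time_s}}.
    """
    arrivals = defaultdict(dict)
    for line in lines:
        start, _end, label = line.strip().split("\t")
        m = LABEL_RE.match(label)
        if not m:
            continue  # skip labels that do not follow the convention
        channel, phase, note = int(m.group(1)), m.group(2), m.group(3)
        arrivals[(phase, note)][channel] = float(start)
    return arrivals

rows = ["12.34567\t12.34567\t1mc",
        "12.41002\t12.41002\t3mc"]
notes = parse_labels(rows)
print(notes[("m", "c")])  # arrival time of note 'mc' on channels 1 and 3
```

Grouping by note identity directly yields, for each annotated feature, the per-channel arrival times from which the TDOA are formed.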

Signal processing
A homogeneous atmospheric layer was assumed between the height where birds were being tracked and the microphones at ground level. The sound speed c was then calculated from the air temperature T close to the ground. Considering the large distances between the microphones and between the source and the microphones, the occasional phase mismatch between measurement channels, which are never perfectly identical, was deemed negligible.
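The calculation of c from T can be sketched as follows (our sketch; we assume the common dry-air approximation, as the paper does not spell out the exact formula used):

```python
import math

# Assumed approximation (not stated in the paper): speed of sound in dry air
# as a function of air temperature in degrees Celsius.
def sound_speed(temp_c: float) -> float:
    """Speed of sound in m/s for a homogeneous layer at temperature temp_c."""
    return 331.3 * math.sqrt(1.0 + temp_c / 273.15)

for t in (3.0, 9.0):  # the temperature range reported during the recordings
    print(f"T = {t:.0f} C -> c = {sound_speed(t):.1f} m/s")
```

Across the 3-9°C range observed in the field, c varies by only about 1%, which supports treating it as a constant within a recording.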
The 3D coordinates of the bird with respect to the centre of the microphone array were obtained using time differences of arrival (TDOA) (Li et al. 2016). Let kS(x, y, z) be the unknown position of the source corresponding to label k from the manual annotation, and let d(kS, M_i) denote the Euclidean distance between kS and microphone M_i. Taking the centroid microphone M_4 as the reference, the following optimisation problem was defined:

    minimise over (x, y, z):   Σ_{i=1}^{3} (t_{i,mod} − t_{i,meas})² + P(z)    (2)

where t_{i,mod} = [d(kS, M_i) − d(kS, M_4)] / c is the modelled TDOA for microphone i ∈ {1, 2, 3}, t_{i,meas} is the measured TDOA for microphone i, and P(z) is a penalty term (Equation 3) that prevents the optimisation algorithm from exploring negative altitudes, which are mathematically acceptable because of the symmetry of the problem but physically impossible for an aerial flight display. The measured TDOA were readily obtained from the label file made in Audacity, which provided the arrival time of each note for each microphone. The non-linear optimisation problem (2) was solved repeatedly for all k using the Nelder-Mead optimisation algorithm (Nelder and Mead 1965). For every k, we used (x, y, z) = (0, 0, 50) as the initial guess for kS. These steps generated a series of points kS forming a 3D path from the first acoustic feature of the display labelled on all 4 tracks to the last feature satisfying the same condition. The raw path included small offsets due to random variation in the point estimates. For further analyses of the flight speed and the estimation of the distance travelled, the raw path was therefore replaced with a smoothed line estimated by a polynomial fit M_fit(t).
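The localisation step can be sketched as follows (our Python illustration, not the authors' Julia code; the microphone geometry, sound speed, quadratic penalty form and source position are all assumed for the example, and the centroid microphone is taken as the TDOA reference):

```python
# Sketch of TDOA localisation with Nelder-Mead and a penalty against
# negative altitudes. All numeric values below are assumed for illustration.
import numpy as np
from scipy.optimize import minimize

C = 334.0  # assumed sound speed (m/s) at a few degrees Celsius

# Planar array: three vertices roughly 100 m from the centroid reference M4.
mics = np.array([[100.0, 0.0, 0.0],
                 [-50.0, 86.6, 0.0],
                 [-50.0, -86.6, 0.0],
                 [0.0, 0.0, 0.0]])  # M4 = reference microphone at the centroid

def model_tdoa(pos):
    """Modelled TDOA of microphones 1..3 relative to the reference M4."""
    d = np.linalg.norm(mics - pos, axis=1)
    return (d[:3] - d[3]) / C

def cost(pos, tdoa_meas):
    resid = model_tdoa(pos) - tdoa_meas
    penalty = 1e3 * max(0.0, -pos[2]) ** 2  # discourage negative altitudes
    return np.sum(resid ** 2) + penalty

true_pos = np.array([20.0, -10.0, 120.0])   # synthetic source for the demo
tdoa_meas = model_tdoa(true_pos)            # noiseless "measured" TDOA

# Same initial guess as in the paper: (x, y, z) = (0, 0, 50).
x0 = np.array([0.0, 0.0, 50.0])
simplex = x0 + np.vstack([np.zeros(3), 10.0 * np.eye(3)])  # well-scaled start
res = minimize(cost, x0, args=(tdoa_meas,), method="Nelder-Mead",
               options={"initial_simplex": simplex, "xatol": 1e-6,
                        "fatol": 1e-16, "maxiter": 5000})
print(np.round(res.x, 1))
```

With noiseless TDOA the minimiser recovers the synthetic source position; with annotated field data, each label k yields one such point estimate along the path.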
The different steps outlined in this section for reconstructing the flight path were implemented in Julia version 1.8, a flexible programming language (Bezanson et al. 2017), with functions from the packages Optimisation version 3.10 and CurveFit version 0.5.0.

Assessing the accuracy of the 3D flight track
The accuracy of the 3D flight tracking system was evaluated by simulating errors in the positions of the different microphones in our planar array. Taking the coordinates displayed on site by the GPS receiver as a reference, and under the assumptions made above (Section 2.1), errors on the horizontal positions of the 4 microphones were introduced before computing the flight track by sampling two centred Gaussian random deviates, one for latitude and one for longitude. These steps were repeated 1000 times so that the corresponding standard deviations of the space coordinates could be computed.
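The bootstrap can be sketched as follows (our illustration; the standard deviations are stand-ins rather than the empirical values from Specht 2021, and the derived quantity is chosen only for demonstration):

```python
# Sketch: perturb microphone coordinates with centred Gaussian noise and
# inspect the spread of a derived quantity over 1000 draws.
import numpy as np

rng = np.random.default_rng(42)
# Nominal planar positions (m): three vertices and the centroid microphone.
mics = np.array([[100.0, 0.0], [-50.0, 86.6], [-50.0, -86.6], [0.0, 0.0]])
SD_EAST, SD_NORTH = 2.5, 1.5  # assumed: longitude less precise than latitude

def perturb(positions):
    """Add independent Gaussian errors to every microphone position."""
    noise = np.column_stack([rng.normal(0.0, SD_EAST, len(positions)),
                             rng.normal(0.0, SD_NORTH, len(positions))])
    return positions + noise

# Example derived quantity: distance between vertex mic 1 and the centroid.
draws = [np.linalg.norm(p[0] - p[3]) for p in (perturb(mics) for _ in range(1000))]
print(f"mean = {np.mean(draws):.1f} m, sd = {np.std(draws):.1f} m")
```

In the full analysis, the derived quantity is the entire reconstructed flight track, so the 1000 draws yield per-coordinate standard deviations along the path.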

Data collected
In total, 33 acoustic flight displays were collected during the two sessions of field recordings, 19 at location 1 and 15 at location 2. The complete galloping phase was present on all 4 channels for 17 displays at location 1 and 6 at location 2. We observed two males displaying at the same time at both locations, but the different males were easy to distinguish because their distances from the microphones differed and the song elements did not overlap in our sound recordings. However, background noise from leaves rustling in the wind can occasionally be heard in the recordings at both locations. At location 2, additional sound from a distant stream was present in the recordings. Therefore, especially for location 2, the background noise proved too strong to allow reliable annotation of the 4 tracks for more than 3 of the recorded displays. For location 1, the dropout rate was slightly lower, with 6 displays that could be annotated. To make matters worse, the ground track of the aerial flight display was often located outside the footprint of the area encompassed by the microphone array for a significant part of the flight display. Poor coverage caused divergence of the optimisation routine and led to estimates of non-physical heights of flight, either 0 m or arbitrarily high values. As a consequence, flight tracking could be successfully completed for two acoustic flight displays recorded at location 1 and one at location 2. At location 1, one flight display occurred in almost ideal conditions, with little background noise and with a flight track almost perfectly within the area encompassed by the array. Here, it was possible to annotate each of the 4 audio tracks over 25 seconds. The total duration of the flight display was significantly longer than the galloping phase (x̄ = 11.7 s, σ = 1.06 s, n = 23).

Flight path
Kinematic parameters were successfully determined from smoothed flight paths for the 3 successful recordings of the acoustic flight displays of male Jack Snipe (Table 1). The flight path that corresponds to the first row of Table 1 is presented in Figure 6. For these 3 displays, the flight path during the galloping phase consists of a steep dive followed by two sharp turns (Figure 6). The corresponding ground track is shown in Figure 7, and the space coordinates as a function of time in Figure 8. In Figures 6 and 7, a twelfth-degree polynomial was fitted to the raw flight path to obtain a smooth path. The effect of errors in the coordinates of the microphones was simulated by bootstrapping. The effect of simulated spatial errors in the microphone positions was mostly visible on the vertical coordinate (Figure 8).
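The polynomial smoothing step can be illustrated on synthetic data (our sketch; the descent profile and noise level are invented for the example and do not come from Table 1):

```python
# Sketch: fit a twelfth-degree polynomial to a noisy coordinate series, as
# done for the raw flight paths. Synthetic data, invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 25.0, 120)                # ~25 s of localised notes
z_true = 141.0 - 3.0 * t                       # schematic descent, not real data
z_raw = z_true + rng.normal(0.0, 4.0, t.size)  # random localisation offsets

# Polynomial.fit rescales the abscissa internally, keeping the twelfth-degree
# fit numerically well conditioned.
fit = np.polynomial.Polynomial.fit(t, z_raw, deg=12)
z_fit = fit(t)

rms_raw = np.sqrt(np.mean((z_raw - z_true) ** 2))
rms_fit = np.sqrt(np.mean((z_fit - z_true) ** 2))
print(f"RMS error raw: {rms_raw:.1f} m, smoothed: {rms_fit:.1f} m")
```

Averaging over many samples per polynomial coefficient is what suppresses the random point-to-point offsets in the raw path.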

Flight pattern
The flight path revealed by analysing the acoustic signals received on the ground was consistent with previous reports of display flights for Jack Snipe (Olivier 2007) and with our visual observations of birds in flight during fieldwork at other sites and in other years. The flight pattern described for the galloping phase was observed at both locations 1 and 2. Our two study locations were more than 10 km apart. Therefore, we are confident that the flight displays recorded here were performed by at least two different male snipe. Thus, we expect our new data on the flight pattern to be representative of the aerial flight display of Jack Snipe.

Active space
The likely function of aerial flight displays is to attract mates, both in songbirds (Møller 1991; Mather and Robertson 1992; Hedenström and Alerstam 1996) and waders (Grønstøl 1996; Blomqvist et al. 1997; Lanctot et al. 2000). Male Jack Snipe will likely start flight displays at heights such that their vocalisations can be detected by females resting on the ground. Neglecting directivity effects and assuming a homogeneous atmosphere, the sound pressure level L_p on the ground takes its maximum value L_p⊥ at the point O where the vertical line through the bird intersects the ground. At the start of the flight display, considering typical intensity discrimination thresholds in birds (Dooling et al. 2000), a reasonable assumption is that the call will be detectable on the ground provided that L_p ≥ L_p⊥ − 3 dB. Based on Table 1, the average start height for the galloping phase was 141 m. Assuming spherical spreading, the −3 dB criterion corresponds to a maximum distance of 199 m from the bird, which implies that the beginning of the galloping phase would be detectable within a circle of radius 141 m centred on O.
A radius of 141 m corresponds to an area of ca. 6.2 ha. If one further assumes that the source level is constant across the galloping phase and follows the same approach, the ground area where the call would be detectable at the end of the galloping phase amounts to ca. 11 ha. For comparison, visual estimates of the active space used for flight displays of male Jack Snipe have been 10-20 ha at heights of 100-250+ m with a total flight path of 0.5-1.5 km (Olivier 2007).
[Caption of Figure 8: Effect of errors on microphone horizontal positions caused by uncertainties inherent to the GPS system. For each space coordinate x, y and z, the solid line corresponds to the flight path shown in Figure 6, using the coordinates read on the GPS, whereas the ribbon corresponds to ±1 σ for 1000 flight paths where the four microphone positions were sampled from Gaussian random deviates that reflect the uncertainty on latitude and longitude when measured with GPS (Specht 2021).]
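The detectability geometry can be checked numerically (our sketch, assuming spherical spreading and a threshold fixed 3 dB below the maximum level directly under the bird at the start of the phase):

```python
import math

# Spherical spreading: a 3 dB drop corresponds to a sqrt(2) distance ratio,
# so the maximum slant distance is sqrt(2) times the start height.
def detectable_radius(h_start: float, h_now: float) -> float:
    """Ground radius within which the call stays above the -3 dB threshold."""
    slant_max = math.sqrt(2.0) * h_start
    return math.sqrt(slant_max ** 2 - h_now ** 2)

r_start = detectable_radius(141.0, 141.0)  # bird at the start height
r_end = detectable_radius(141.0, 64.0)     # bird at the end of the gallop
for r in (r_start, r_end):
    print(f"radius {r:.0f} m, area {math.pi * r ** 2 / 1e4:.1f} ha")
```

At the start height the radius equals the height itself (141 m, ca. 6.2 ha); at 64 m the same threshold reaches a larger ground circle (ca. 11 ha), matching the figures quoted above.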

Flush-mounted microphone
To our knowledge, our study is the first published use of a microphone mounted on a ground plate in a bioacoustics context. The use of flush-mounted microphones was effective because it led to much clearer and stronger signals due to: (1) an absence of echoes in an open landscape without obstacles, where reflections arrive only from the ground, and (2) constructive interference between the direct and reflected sound.
A ground-based microphone configuration has some potential drawbacks. The microphones are more exposed to damage from ground-dwelling animals: rodents can cut cables by gnawing, and ungulates might step on a microphone or drag it away. A microphone on the ground is also more exposed to precipitation and dew. Such a configuration is therefore mostly relevant for short-term measurements when weather conditions are good and the microphone array can be attended. Moreover, it is not ideal for recording sound produced close to the ground, as the ground effect is stronger in that case (Embleton 1996). Furthermore, the 6 dB gain only holds for angles of incidence that are sufficiently close to the vertical. Care must also be taken with the frequencies of interest, as the wavelengths should remain large with respect to the diameter of the microphone membrane. These two conditions were always satisfied in our field study.

Array topology and extension
An extended planar horizontal microphone array can deliver plausible results that are relatively insensitive to errors in the positions of the sensors, but it is not an optimal configuration if accurate vertical localisation of the sound source is desired. For our study site with open mires in Finnmark, it was the only reasonable choice given the absence of tall trees, the lack of firm ground, difficult site access and a limited budget. Improving the vertical accuracy would have required a mast high enough to make a difference. While the uncertainties in the coordinates of the 4 microphones were acceptable considering the large distances between them, the use of differential GPS would have reduced the overall uncertainty in the estimation of the flight path.
Based on the theory of TDOA source localisation, four microphones are the absolute minimum needed to avoid dual solutions in 2D (Schmidt 1972). For the same reasons, the absolute minimum in 3D is 5 microphones (Spiesberger 2001; Spencer 2007). The microphone array we used meets this requirement in 2D, in the plane defined by the ground. In 3D, our microphone count falls one short of the minimum, but only solutions in the upper half-space are of interest, and the penalty term defined by Equation 3 prevents solutions with negative altitudes. For our horizontal flush-mounted star array, the absolute minimum for 3D localisation is therefore four microphones as well.
Even with 100 metres from each vertex of the triangle to its centroid, the ca. 1.0 ha area encompassed was relatively small compared to the space covered by a male Jack Snipe during a flight display. The usable recordings were a small subset of our data because in the majority of recordings the flight track was at the margins of the array, a situation where the accuracy of localisation deteriorates rapidly and often leads to invalid altitudes.
Examples are visible in the vertical z coordinate in Figure 8 between 7 and 10 seconds, and again after 17 seconds. Here, the greater uncertainty corresponds to periods when the bird's acoustic flight display moved outside the area encompassed by the microphone array, as shown in Figure 7. Location 2 was also less suitable because of its relatively high level of background noise.
With longer recording periods and a better selection of locations, it would have been possible to collect more recordings. However, longer deployment periods would also have increased the risk of exposing microphones to adverse weather. It would instead be desirable to deploy more microphones to cover a larger area. A larger array would require more than one recorder to limit the total cable length. Unless a signal were shared between the recorders, some form of synchronisation, such as time stamps delivered by a GNSS receiver, would be necessary. In our case, a need for GNSS synchronisation would rule out the Sound Devices MixPre6, whereas the SQuadriga III has a built-in GNSS receiver.

Signal processing
Manual annotation can be time consuming, since the number of relevant acoustic features can reach 66 elements for the galloping phase alone, and annotations must be performed separately for each of the 4 channels. One option would be to annotate only a subset of the song elements, such as the double notes of the galloping phase of the flight display (Figure 5), without compromising the accuracy of flight tracking.
Automating the annotation process should be feasible using classical image segmentation techniques such as binarisation, thresholding and connected components (Castleman 1979), at least when the signal-to-noise ratio is high enough. In our field study, automating the annotation was not worth the extra effort because the number of acoustic flight displays that could be analysed was relatively small. Whatever annotation technique is used, it would be interesting to apply Kalman filtering when estimating the flight path (Kalman 1960). Such filtering is efficient for modelling consecutive points and is likely to lead to a more accurate and less noisy flight path. With a lower degree of random error in the flight path, polynomial fitting may become unnecessary.
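The suggested filtering can be illustrated as follows (our sketch, not part of the published workflow; a constant-velocity model applied to one coordinate, with assumed process and measurement noise parameters and synthetic data):

```python
# Sketch: constant-velocity Kalman filter smoothing one noisy coordinate of
# a flight path. Noise parameters and data are assumed for illustration.
import numpy as np

def kalman_1d(z_meas, dt, q=1.0, r=16.0):
    """Filter positions z_meas (m) sampled every dt seconds.

    State: [position, velocity]; q = process noise intensity,
    r = measurement variance (m^2).
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition
    H = np.array([[1.0, 0.0]])             # we observe position only
    Q = q * np.array([[dt ** 4 / 4, dt ** 3 / 2],
                      [dt ** 3 / 2, dt ** 2]])
    x = np.array([z_meas[0], 0.0])
    P = np.eye(2) * 100.0
    out = []
    for z in z_meas:
        x = F @ x                          # predict
        P = F @ P @ F.T + Q
        y = z - H @ x                      # innovation
        S = H @ P @ H.T + r
        K = (P @ H.T) / S                  # Kalman gain (2x1)
        x = x + (K * y).ravel()            # update state
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

rng = np.random.default_rng(1)
t = np.arange(0.0, 25.0, 0.25)
z_true = 141.0 - 3.0 * t                   # schematic descent
z_noisy = z_true + rng.normal(0.0, 4.0, t.size)
z_filt = kalman_1d(z_noisy, dt=0.25)
```

Because the filter exploits the temporal continuity between consecutive points, the smoothed track is less noisy than the raw point estimates without a global polynomial fit.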

Micrometeorology
Ground topography and wind or temperature gradients in the atmosphere can affect sound propagation in natural environments (Embleton 1996). In our field study, we deployed microphones on flat ground in the mires and restricted our analyses to recordings without background noise when wind conditions were still. Assuming a flat horizontal ground, a rule of thumb is that the effect of atmospheric refraction on sound propagation is modest when

D ≤ 10 (z_s + z_r),     (Equation 4)

where z_s is the source altitude, z_r is the receiver altitude and D is the horizontal distance between the source and the receiver (ISO 2017; see Embleton 1996 for more details about sound propagation outdoors). In our case, z_s ∈ [50, 150] m for the bird, z_r = 0 for all the microphones and D ≤ 200 m, so the condition defined in Equation 4 was always met. This suggests that potential temperature gradients could be neglected. The flight tracking procedure used here indeed assumed a homogeneous atmosphere; this assumption was reasonable for our field recordings because the geometrical criterion described above was satisfied. It is, however, not certain whether this rule of thumb applies to aspects of sound propagation other than attenuation.
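Taking the rule of thumb in the form D ≤ 10 (z_s + z_r), as in the zero meteorological correction of ISO 9613-2, the worst case of a recording geometry can be checked in a few lines. The function name is ours, and the numbers correspond to the least favourable case in this study (lowest bird height, largest horizontal range):

```python
def refraction_negligible(z_s: float, z_r: float, D: float) -> bool:
    """Rule-of-thumb check that atmospheric refraction has a modest
    effect on propagation: true when D <= 10 (z_s + z_r), with the
    source altitude z_s, receiver altitude z_r and horizontal
    distance D all in metres."""
    return D <= 10.0 * (z_s + z_r)

# Worst case here: bird at 50 m, microphone on the ground, 200 m range.
print(refraction_negligible(z_s=50.0, z_r=0.0, D=200.0))  # True
```

The same check would fail for a low-flying source, e.g. z_s = 5 m at D = 200 m, which is one reason why arrays for low-displaying species need shorter microphone spacings.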
While there was little wind at both locations during the two recording sessions, cloud cover was not monitored, and the temperature drop observed on the second night suggests that a moderate temperature inversion might have developed. In the event of a temperature inversion, the recorded time differences of arrival would correspond to propagation along curved paths between S and the microphones M_i. As a consequence, the heights above ground obtained under the assumption of a homogeneous atmosphere, and thereby of straight paths, would be slightly overestimated. It would be worthwhile to assess the vertical sound speed gradient in order to evaluate whether the propagation conditions deviate significantly from a homogeneous atmosphere; if significant atmospheric refraction occurred, it would be necessary to consider curved propagation paths between the bird and the microphones (Embleton 1996). Quantifying turbulence would also be useful, because low turbulence means that the vocalisations are less degraded during sound propagation.
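As a first step, the vertical sound speed gradient could be estimated from temperatures measured at two heights. A minimal sketch, using the standard dry-air approximation c(T) ≈ 331.3 √(1 + T/273.15) and a hypothetical 3 °C inversion over 150 m (the temperatures below are illustrative, not measurements from this study):

```python
import math

def sound_speed(temp_c: float) -> float:
    """Speed of sound in dry air (m/s) at temperature temp_c (deg C)."""
    return 331.3 * math.sqrt(1.0 + temp_c / 273.15)

# Hypothetical inversion: 5 deg C at the ground, 8 deg C at 150 m.
c0, c150 = sound_speed(5.0), sound_speed(8.0)
gradient = (c150 - c0) / 150.0   # dc/dz, in 1/s; positive under inversion
print(round(c0, 1), round(c150, 1), round(gradient, 4))
```

A positive dc/dz bends rays downwards, which is the geometric origin of the slight height overestimate discussed above; the size of the gradient indicates whether curved-path corrections are worth pursuing.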

Potential for tracking aerial flight displays
The microphone array used in our field study had a footprint of 1 ha at ground level but was effective for tracking the flight paths of birds at heights up to 150 m. We recorded nocturnal birds under low light levels and successfully measured flight speeds up to 93 km/h with rapid changes in direction. Thus, our bioacoustic method has great potential for a broad range of applications with other study systems. The microphone array, annotation of recordings, and signal processing provide a standardised protocol that could be used to collect comparative data on the flight displays of other species of snipe, including Common Snipe Gallinago gallinago (Sutton 1981; Hoodless et al. 2006), Pintail Snipe G. stenura (Sutton 1981; Byrkjedal 1990), Latham's Snipe G. hardwickii (Ida 1995; Matsubayashi et al. 2023), Swinhoe's Snipe G. megala (Morozov 2004) and Subantarctic snipe Coenocorypha spp. (Miskelly 1990; Miskelly et al. 2006). Our method could also be used to investigate other species of waders that breed in the same taiga mires and also have aerial flight displays, including Whimbrel Numenius phaeopus, Broad-billed Sandpipers, and Spotted Redshanks Tringa erythropus (Armstrong and Westall 1953; Skeel 1978; Svensson 1987). Aerial flight displays are common among songbirds in open habitats (Mikula et al. 2022), and Bluethroats Luscinia svecica, Meadow Pipits Anthus pratensis, and Skylarks Alauda arvensis would be good candidates for 3D flight tracking with their complex flight displays and rich repertoires of song elements (Armstrong and Westall 1953; Merilä and Sorjonen 1994; Hedenström 1995).
We anticipate several challenges in applying our bioacoustic method to other bird species. The acoustic flight tracking presented here is perhaps best suited to species like Jack Snipe and Latham's Snipe that produce sounds continuously during the different phases of their flight displays (Matsubayashi et al. 2023; this study). For species that vocalise continuously in flight but at lower heights, planar microphone arrays deployed on the ground could still be used, but with shorter distances among microphones. Some species are silent during the ascent and produce vocalisations or mechanical sounds only during the descent phase of the song flight, including Common Snipe and Lapland Buntings Calcarius lapponicus (Byrkjedal et al. 2016). In such cases, a potential issue is to identify and label a sufficient number of one-to-one correspondences across the recording channels, but our method could still be used to measure part of the flight display. Eurasian Golden Plovers Pluvialis apricaria and Eurasian Woodcocks Scolopax rusticola produce intermittent sounds during their display flights (Byrkjedal and Thompson 1998; Bristow et al. 2023). For such species, a widely spaced array of microphones could be used to measure a sequence of discrete positions, but not continuous flight paths. The same approach would also apply to raptors that follow thermals during sunny days and vocalise sporadically, with the additional issue of atmospheric turbulence, which may make it more difficult to use TDOA to triangulate positions. A final challenge will be to estimate separate flight paths for different individuals when multiple birds are recorded simultaneously. Male Jack Snipe primarily display solitarily, but identifying individuals will be especially challenging for birds that fly together in tight groups, as is the case with 'screaming parties' of Common Swifts Apus apus (Henningsson et al. 2010).
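For readers wishing to adapt the approach, TDOA localisation itself can be sketched compactly. The example below is not the exact solver used in this study: it recovers a synthetic source position from exact time differences of arrival, assuming straight-line propagation at a nominal 343 m/s and an illustrative 100 m square array:

```python
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # nominal speed of sound, m/s (homogeneous atmosphere assumed)

def locate(mics, tdoa, x0):
    """Least-squares source position from time differences of arrival
    measured relative to microphone 0."""
    def residuals(p):
        d = np.linalg.norm(mics - p, axis=1)   # source-to-mic distances
        return (d[1:] - d[0]) / C - tdoa       # predicted minus observed
    return least_squares(residuals, x0).x

# Synthetic demo: 4-mic planar array and a source 120 m above ground.
mics = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [100, 100, 0]], float)
src = np.array([40.0, 60.0, 120.0])
d = np.linalg.norm(mics - src, axis=1)
tdoa = (d[1:] - d[0]) / C                      # noise-free TDOAs
est = locate(mics, tdoa, x0=np.array([50.0, 50.0, 100.0]))
print(np.round(est, 1))
```

Note the initial guess is placed above the array plane: a planar array cannot distinguish a source at +z from its mirror image at -z, so the physically meaningful (positive-height) solution must be selected.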

Conclusions
Our bioacoustic method was designed as a minimal planar array of only 4 microphones mounted on ground plates. We successfully demonstrated that it is possible to track the 3D flight path of a male Jack Snipe for up to 25 seconds and a total flight distance greater than 280 m. Furthermore, the continuous estimation of positions during this part of the aerial flight display also allowed us to measure instantaneous and average flight speeds. To our knowledge, our results are the first report of the 3D flight path of a bird obtained in the field from acoustic measurements. Moreover, it was possible here to follow the entire acoustic flight display. Our results provide new estimates of the duration of song elements and of travelled distances that appear to be much greater than any published data on animal flight obtained from acoustic signals picked up by ground-borne sensors.
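Once a continuous sequence of positions is available, instantaneous speed follows directly from finite differences. A minimal sketch with an illustrative constant-velocity track (the time stamps and velocity components are made-up values, not tracking results from this study):

```python
import numpy as np

def flight_speed(t, xyz):
    """Instantaneous speed (m/s) from time-stamped 3D positions (n x 3)
    via central finite differences."""
    v = np.gradient(xyz, t, axis=0)        # per-axis velocity components
    return np.linalg.norm(v, axis=1)

# Illustrative straight descent at constant velocity (~74 km/h).
t = np.linspace(0.0, 10.0, 51)
xyz = np.outer(t, [15.0, 10.0, -10.0])     # x, y, z in metres
speed = flight_speed(t, xyz)
print(round(speed.mean() * 3.6, 1))        # mean speed in km/h
```

On real, noisy position estimates the same differences amplify random error, which is precisely why smoothing (polynomial fitting or Kalman filtering, as discussed above) matters before reporting instantaneous speeds.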
A horizontal array configuration is not optimal for vertical accuracy, but our results suggest that the third dimension can be resolved with confidence by this kind of array. This approach will benefit from the wider availability of more accurate positioning techniques such as differential GPS. For attended short-term recording sessions, the benefits of the ground plate approach include (1) a shorter deployment time compared to setting up poles with guy lines, (2) stronger and (3) anechoic signals due to the superposition of the real and the image source, and (4) less sensitivity to disturbance from wind and other sources of ambient noise.
Acoustic-based 3D flight tracking provides a valuable new approach for systematic investigations of the aerial flight displays of snipe but could also be applied to field studies of other waders, hummingbirds, raptors and songbirds that vocalise or produce mechanical sounds during flight.

Figure 1 .
Figure 1. Overview of the recording sites at Location 1 Ráigeluovttajeaggi and Location 2 Suvdošjohka, west of Kautokeino, Finnmark, Norway. Map from Norgeskart.no. Drawings of microphone arrays are not to scale.

Figure 4 .
Figure 4. Typical spectrogram of one dive cycle of the acoustic flight display of a male Jack Snipe L. minimus illustrating the three discrete phases: beginning phase at the start, galloping vocalization produced during the dive phase, and the grunt sounds produced during a hovering phase after completion of the dive.

Figure 5 .
Figure 5. Close-up of 3 repetitions (rectangles) of the rhythmic pattern of the galloping phase of the flight display shown in Figure 4. The ellipses delineate the song features that were annotated. The second ellipse from the left shows a double note that was repeated 3 times in this song fragment.

Figure 6 .
Figure 6. 3D view of an acoustic flight display.

Figure 7 .
Figure 7. Ground track of the acoustic flight display shown in Figure 6.

Figure 8 .
Figure 8. Estimation of uncertainties in the flight path shown in Figure 6, owing to errors in the horizontal positions of the microphones caused by uncertainties inherent to the GPS system. For each space coordinate x, y and z, the solid line corresponds to the flight path shown in Figure 6, using the coordinates read from the GPS, whereas the ribbon corresponds to ±1 σ for 1000 flight paths in which the four microphone positions were sampled from Gaussian random deviates reflecting the uncertainty in latitude and longitude when measured with GPS according to Specht (2021).

Table 1 .
Kinematic parameters of the galloping phase of 3 acoustic flight displays of L. minimus. H is the height above ground (m), L is the total distance (m) travelled between the start and the end of the galloping phase, and v is the flight velocity (km/h).