Project description: Transcriptional profiling of courtship-song-stimulated females in Drosophila melanogaster, comparing females exposed to conspecific song with those exposed to either white noise (control) or heterospecific song (D. simulans). Three-condition experiment: D. melanogaster song-stimulated (mel_song) vs. control (no_song) vs. D. simulans song-stimulated (sim_song) females. Biological replicates: 7 for D_mel_song, 4 for no_song, 3 for D_sim_song; one replicate per array. Each sample contains 120 pooled female heads collected from three independent experiments. Technical replication: two arrays with reverse labeling for each contrast.
Project description: The human capacity for speech and vocal music depends on vocal imitation. Songbirds, in contrast to non-human primates, share this vocal production learning with humans. The processes through which birds and humans learn many of their vocalizations, as well as the underlying neural systems, exhibit a number of striking parallels and have been widely researched. In contrast, rhythm, a key feature of language and music, has received surprisingly little attention in songbirds. Investigating temporal periodicity in birdsong has the potential to inform the relationship between neural mechanisms and behavioral output and can also provide insight into the biology and evolution of musicality. Here we present a method to analyze birdsong for an underlying rhythmic regularity. Using the intervals from one note onset to the next as input, we found for each bird an isochronous sequence of time stamps, a "signal-derived pulse," or pulse(S), a subset of which aligned with all note onsets of the bird's song. Fourier analysis corroborated these results. To determine whether this finding was merely a byproduct of the note and interval durations typical for zebra finches, rather than dependent on the individual durations of elements and the sequence in which they are sung, we compared natural songs to models of artificial songs. Note onsets of natural song deviated from the pulse(S) significantly less than those of artificial songs with randomized note and gap durations. Thus, male zebra finch song has the regularity required for a listener to extract a perceived pulse (pulse(P)), though this perception remains untested. Strikingly, in our study, the pulses(S) that best fit note onsets often also coincided with transitions between sub-note elements within complex notes, corresponding to neuromuscular gestures. Gesture durations often equaled one or more pulse(S) periods. This suggests that gesture duration constitutes the basic element of the temporal hierarchy of zebra finch song rhythm, an interesting parallel to the hierarchically structured components of regular rhythms in human music.
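The core idea above — finding an isochronous pulse whose ticks align with all note onsets — can be sketched as a search over candidate periods and phases, scoring each by how far onsets fall from the nearest pulse tick. This is a minimal illustrative sketch, not the authors' implementation: the grid-search strategy, the deviation measure, and all function names are assumptions.

```python
import numpy as np

def pulse_deviation(onsets, period, phase):
    """Mean absolute deviation of each note onset from the nearest tick
    of an isochronous pulse (given period and phase), as a fraction of
    the period (0 = perfect alignment, 0.5 = worst possible)."""
    # Fold each onset's offset within one pulse cycle into [-0.5, 0.5)
    frac = ((onsets - phase) / period + 0.5) % 1.0 - 0.5
    return np.mean(np.abs(frac))

def fit_pulse(onsets, candidate_periods):
    """Grid search: for each candidate period, try phases anchored at
    each onset and keep the (period, phase) with the smallest deviation."""
    best_period, best_phase, best_dev = None, None, np.inf
    for p in candidate_periods:
        for phase in onsets:
            d = pulse_deviation(onsets, p, phase)
            if d < best_dev:
                best_period, best_phase, best_dev = p, phase, d
    return best_period, best_phase, best_dev

# Hypothetical note-onset times in seconds, lying near a common pulse grid
onsets = np.array([0.00, 0.10, 0.15, 0.30, 0.45, 0.50])
period, phase, dev = fit_pulse(onsets, np.linspace(0.02, 0.12, 101))
```

Because every multiple of a well-fitting period also fits, a real analysis would need an additional criterion (e.g. preferring the longest period below some deviation threshold) to pick a musically meaningful pulse; the sketch simply returns the first best-scoring candidate.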
Project description: The song of the adult male zebra finch is a well-studied example of a learned motor sequence. Song bouts begin with a variable number of introductory notes (INs) before actual song production. Previous studies have shown that INs progress from a variable initial state to a stereotyped final state before each song. This progression is thought to represent motor preparation, but the underlying mechanisms remain poorly understood. Here, we assessed the role of sensory feedback in the progression of INs to song. We found that the mean number of INs before song and the progression of INs to song were not affected by removal of two sensory feedback pathways (auditory or proprioceptive). In both feedback-intact and feedback-deprived birds, the presence of calls (other non-song vocalizations) just before the first IN was correlated with fewer INs before song and an initial state closer to song. Finally, the initial IN state correlated with the time to song initiation. Overall, these results show that INs do not require real-time sensory feedback for progression to song. Rather, our results suggest that changes in IN features and their transition to song are controlled by internal neural processes, possibly involved in getting the brain ready to initiate a learned movement sequence.