Project description:This study compared spatial speech-in-noise performance in two cochlear implant (CI) patient groups: bimodal listeners, who use a hearing aid contralaterally to support their impaired acoustic hearing, and listeners with contralateral normal hearing, i.e., who were single-sided deaf before implantation. Using a laboratory setting that controls for head movements and that simulates spatial acoustic scenes, speech reception thresholds were measured for frontal speech in stationary noise presented from the front, the left, or the right side. Spatial release from masking (SRM) was then extracted from speech reception thresholds for monaural and binaural listening. SRM was significantly lower in bimodal CI listeners than in single-sided deaf CI listeners. Within each listener group, the SRM extracted from monaural listening did not differ from the SRM extracted from binaural listening. In contrast, a normal-hearing control group showed a significant improvement in SRM when using two ears in comparison to one. Neither CI group showed a binaural summation effect; that is, their performance was not improved by using two devices instead of the best monaural device in each spatial scenario. The results confirm a "listening with the better ear" strategy in the two CI patient groups, where patients benefited from using two ears/devices instead of one by selectively attending to the better one. Which one is the better ear, however, depends on the spatial scenario and on the individual configuration of hearing loss.
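The quantities above can be sketched in code. The following is a minimal illustration of how SRM is conventionally derived from speech reception thresholds (SRTs, in dB SNR) and how a "better-ear" prediction is formed; the function names and numeric values are illustrative assumptions, not data or code from the study.

```python
def spatial_release_from_masking(srt_colocated_db, srt_separated_db):
    """SRM is the SRT improvement (in dB) obtained when speech and
    noise sources are spatially separated rather than co-located.
    Positive values indicate a benefit from spatial separation."""
    return srt_colocated_db - srt_separated_db

def better_ear_srt(srt_left_db, srt_right_db):
    """Under a 'listening with the better ear' strategy, performance
    tracks the lower (better) of the two monaural SRTs in each
    spatial scenario."""
    return min(srt_left_db, srt_right_db)

# Illustrative example values (not results from the study):
srm_db = spatial_release_from_masking(srt_colocated_db=-2.0,
                                      srt_separated_db=-8.5)  # 6.5 dB
```

An absent binaural summation effect, as reported for both CI groups, would correspond to the binaural SRT not falling below `better_ear_srt` in any scenario.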
Project description:Cochlear implants (CIs) can restore hearing function in profoundly deaf individuals. Due to the degradation of the stimulus by CI signal processing, implanted individuals with single-sided deafness (SSD) face the specific challenge that the input differs greatly between their ears. The present study compared normal-hearing (NH) listeners (N = 10) and left- and right-ear implanted SSD CI users (N = 10 left, N = 9 right) to evaluate cortical speech processing between CI and NH ears and to explore side-of-implantation effects. The participants performed a two-deviant oddball task, separately with the left and the right ear. Auditory event-related potentials (ERPs) in response to syllables were compared between proficient and non-proficient CI users, as well as between CI and NH ears. The effect of the side of implantation was analysed at the sensor and the source level. CI proficiency could be distinguished based on the ERP amplitudes of the N1 and the P3b. Moreover, syllable processing via the CI ear, when compared to the NH ear, resulted in attenuated and delayed ERPs. In addition, the left-ear implanted SSD CI users revealed a more pronounced functional asymmetry in the auditory cortex than right-ear implanted SSD CI users, regardless of whether the syllables were perceived via the CI or the NH ear. Our findings reveal that speech-discrimination proficiency in SSD CI users can be assessed by N1 and P3b ERPs. The results contribute to a better understanding of the rehabilitation success in SSD CI users by showing that cortical speech processing in SSD CI users is affected by CI-related stimulus degradation and experience-related functional changes in the auditory cortex.
Project description:Purpose Our aim was to make audible for normal-hearing listeners the Mickey Mouse™ sound quality of cochlear implants (CIs) often found following device activation. Method The listeners were 3 single-sided deaf patients fit with a CI who had 6 months or less of CI experience. Computed tomography imaging established the location of each electrode contact in the cochlea and allowed an estimate of the place frequency of the tissue nearest each electrode. For the most apical electrodes, this estimate ranged from 650 to 780 Hz. To determine CI sound quality, a clean signal (a sentence) was presented to the CI ear via a direct-connect cable, and candidate CI-like signals were presented to the ear with normal hearing via an insert receiver. The listeners rated the similarity of the candidate signals to the sound of the CI on a 1- to 10-point scale, with 10 being a complete match. Results To make the match to CI sound quality, all 3 patients needed an upshift in formant frequencies (300-800 Hz) and a metallic sound quality. Two of the 3 patients also needed an upshift in voice pitch (10-80 Hz) and a muffling of sound quality. Similarity scores ranged from 8 to 9.7. Conclusion The formant frequency upshifts, fundamental frequency upshifts, and metallic sound quality experienced by the listeners can be linked to the relatively basal locations of the electrode contacts and short duration of experience with their devices. The perceptual consequence was not the voice quality of Mickey Mouse™ but rather that of Munchkins in The Wizard of Oz, for whom both formant frequencies and voice pitch were upshifted. Supplemental Material https://doi.org/10.23641/asha.9341651.
Project description:Despite the difficulties experienced by cochlear implant (CI) users in perceiving pitch and harmony, it is not uncommon to see CI users listening to music, or even playing an instrument. Listening to music is a complex process that relies not only on low-level percepts, such as pitch or timbre, but also on emotional reactions or the ability to perceive musical sequences as patterns of tension and release. CI users engaged in musical activities might experience some of these higher-level musical features. The goal of this study is to evaluate CI users' ability to perceive musical tension. Nine CI listeners (CIL) and nine normal-hearing listeners (NHL) were asked to rate musical tension on a continuous visual analog slider during music listening. The subjects listened to a 4 min recording of Mozart's Piano Sonata No. 4 (K282) performed by an experienced pianist. In addition to the original piece, four modified versions were also tested to identify which features might influence the responses to the music in the two groups. In each version, one musical feature of the piece was altered: tone pitch, intensity, rhythm, or tempo. Surprisingly, CIL and NHL rated overall musical tension in a very similar way in the original piece. However, the results from the different modifications revealed that while NHL ratings were strongly affected by music with random pitch tones (but preserved intensity and timing information), CIL ratings were not. Rating judgments of both groups were similarly affected by modifications of rhythm and tempo. Our study indicates that CI users can understand higher-level musical aspects as indexed by musical tension ratings. The results suggest that although most CI users have difficulties perceiving pitch, additional music cues, such as tempo and dynamics might contribute positively to their experience of music.
Project description:Cochlear implantation in subjects with single-sided deafness (SSD) offers a unique opportunity to directly compare the percepts evoked by a cochlear implant (CI) with those evoked acoustically. Here, nine SSD-CI users performed a forced-choice task evaluating the similarity of speech processed by their CI with speech processed by several vocoders presented to their healthy ear. In each trial, subjects heard two intervals: their CI followed by a certain vocoder in Interval 1 and their CI followed by a different vocoder in Interval 2. The vocoders differed either (i) in carrier type (sinusoidal [SINE], bandfiltered noise [NOISE], or pulse-spreading harmonic complex [PSHC]) or (ii) in frequency mismatch between the analysis and synthesis frequency ranges (no mismatch, and two frequency-mismatched conditions of 2 and 4 equivalent rectangular bandwidths [ERBs]). Subjects had to state in which of the two intervals the CI and vocoder sounds were more similar. Despite a large intersubject variability, the PSHC vocoder was judged significantly more similar to the CI than the SINE or NOISE vocoders. Furthermore, the No-mismatch and 2-ERB mismatch vocoders were judged significantly more similar to the CI than the 4-ERB mismatch vocoder. The mismatch data were also interpreted by comparing spiral ganglion characteristic frequencies with electrode contact positions determined from postoperative computed tomography scans. Only one subject demonstrated a pattern of preference consistent with adaptation to the CI sound processor frequency-to-electrode allocation table, and two subjects showed possible partial adaptation. Those subjects with adaptation patterns presented overall small and consistent frequency mismatches across their electrode arrays.
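The 2-ERB and 4-ERB mismatch conditions are defined on the ERB-number scale. As a hedged sketch, assuming the standard Glasberg and Moore (1990) ERB-number formula (the helper names here are invented for illustration), a mismatch between analysis and synthesis bands can be quantified like this:

```python
import math

def erb_number(freq_hz):
    """Convert a frequency in Hz to ERB-number units (Cams) using the
    Glasberg & Moore (1990) formula: E = 21.4 * log10(4.37*f/1000 + 1)."""
    return 21.4 * math.log10(4.37 * freq_hz / 1000.0 + 1.0)

def mismatch_in_erbs(analysis_hz, synthesis_hz):
    """Frequency mismatch between an analysis band and the corresponding
    synthesis band, expressed in ERB units."""
    return erb_number(synthesis_hz) - erb_number(analysis_hz)
```

On this scale, a fixed mismatch in ERBs corresponds to a roughly constant perceptual shift across the frequency range, which is why vocoder mismatch conditions are specified in ERBs rather than in Hz.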
Project description:Prelingually deaf children listening through cochlear implants (CIs) face severe limitations on their experience of music, since the hearing device degrades relevant details of the acoustic input. An important parameter of music is harmony, which conveys emotional as well as syntactic information. The present study addresses musical harmony in three psychoacoustic experiments in young, prelingually deaf CI listeners and normal-hearing (NH) peers. The discrimination and preference of typical musical chords were studied, as well as cadence sequences conveying musical syntax. The ability to discriminate chords depended on the hearing age of the CI listeners, and was less accurate than for the NH peers. The groups did not differ with respect to the preference of certain chord types. NH listeners were able to categorize cadences, and performance improved with age at testing. In contrast, CI listeners were largely unable to categorize cadences. This dissociation is in accordance with data found in postlingually deafened adults. Consequently, while musical harmony is available to a limited degree to CI listeners, they are unable to use harmony to interpret musical syntax.
Project description:Background: In electric-acoustic pitch matching experiments in patients with single-sided deafness and a cochlear implant, the observed "mismatch" between perceived pitch and predicted pitch, based on the amended Greenwood frequency map, ranges from -1 to -2 octaves. It is unknown if and how this mismatch differs for perimodiolar versus lateral wall electrode arrays. Objectives: We aimed to investigate whether the type of electrode array design influences the electric-acoustic pitch match. Method: Fourteen patients (n = 8 with CI422 + lateral wall electrode array, n = 6 with CI512 + perimodiolar electrode array; Cochlear Ltd.) compared the pitch of acoustic stimuli to the pitch of electric stimuli at two test sessions (average interval 4.3 months). We plotted these "pitch matches" per electrode contact against insertion angle, calculated from high-resolution computed tomography scans. The difference between these pitch matches and two references (the spiral ganglion map and the default frequency allocation by Cochlear Ltd.) was defined as "mismatch." Results: We found average mismatches of -2.2 octaves for the CI422 group and -1.3 octaves for the CI512 group. For any given electrode contact, the mismatch was smaller for the CI512 electrode array than for the CI422 electrode array. For all electrode contacts together, there was a significant difference between the mismatches of the two groups (p < 0.05). Results remained stable over time, with no significant difference between the two test sessions considering all electrode contacts. Neither group showed a significant correlation between the mismatch and phoneme recognition scores. Conclusion: The pitch mismatch was smaller for the perimodiolar electrode array than for the lateral wall electrode array.
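The place-based pitch predictions discussed above derive from the Greenwood frequency-position function. The following sketch, assuming the standard human constants of the Greenwood map (A = 165.4, a = 2.1, k = 0.88) and invented helper names, shows how a place-frequency prediction and an octave-valued mismatch are typically computed; it is an illustration, not the study's analysis code:

```python
import math

def greenwood_frequency(x_apex_fraction):
    """Greenwood place-frequency map for the human cochlea.
    x_apex_fraction: relative distance from the apex (0 = apex, 1 = base).
    Returns the characteristic frequency in Hz."""
    return 165.4 * (10.0 ** (2.1 * x_apex_fraction) - 0.88)

def mismatch_octaves(matched_hz, predicted_hz):
    """Electric-acoustic mismatch in octaves. Negative values mean the
    pitch match lies below the place-based prediction, as reported for
    both electrode array types."""
    return math.log2(matched_hz / predicted_hz)
```

For example, a pitch match of 500 Hz at a place predicted to be 2000 Hz yields a mismatch of -2 octaves, at the lower edge of the range reported in the literature cited above.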
Project description:Psychophysical and neuroimaging studies in both animal and human subjects have clearly demonstrated that cortical plasticity following sensory deprivation leads to a brain functional reorganization that favors the spared modalities. In postlingually deaf patients, the use of a cochlear implant (CI) allows a recovery of the auditory function, which will probably counteract the cortical crossmodal reorganization induced by hearing loss. To study the dynamics of such reversed crossmodal plasticity, we designed a longitudinal neuroimaging study involving the follow-up of 10 postlingually deaf adult CI users engaged in a visual speechreading task. While speechreading activates Broca's area in normally hearing subjects (NHS), the activity level elicited in this region in CI patients is abnormally low and increases progressively with post-implantation time. Furthermore, speechreading in CI patients induces abnormal crossmodal activations in right anterior regions of the superior temporal cortex normally devoted to processing human voice stimuli (temporal voice-sensitive areas, TVA). These abnormal activity levels diminish with post-implantation time and tend towards the levels observed in NHS. First, our study revealed that the neuroplasticity after cochlear implantation involves not only auditory but also visual and audiovisual speech processing networks. Second, our results suggest that during deafness, the functional links between cortical regions specialized in face and voice processing are reallocated to support speech-related visual processing through cross-modal reorganization. Such reorganization allows a more efficient audiovisual integration of speech after cochlear implantation. These compensatory sensory strategies are later completed by the progressive restoration of the visuo-audio-motor speech processing loop, including Broca's area.
Project description:Psychophysical tests of spectro-temporal resolution may aid the evaluation of methods for improving hearing by cochlear implant (CI) listeners. Here the STRIPES (Spectro-Temporal Ripple for Investigating Processor EffectivenesS) test is described and validated. Like speech, the test requires both spectral and temporal processing to perform well. Listeners discriminate between complexes of sine sweeps which increase or decrease in frequency; difficulty is controlled by changing the stimulus spectro-temporal density. Care was taken to minimize extraneous cues, forcing listeners to perform the task only on the direction of the sweeps. Vocoder simulations with normal hearing listeners showed that the STRIPES test was sensitive to the number of channels and temporal information fidelity. An evaluation with CI listeners compared a standard processing strategy with one having very wide filters, thereby spectrally blurring the stimulus. Psychometric functions were monotonic for both strategies and five of six participants performed better with the standard strategy. An adaptive procedure revealed significant differences, all in favour of the standard strategy, at the individual listener level for six of eight CI listeners. Subsequent measures validated a faster version of the test, and showed that STRIPES could be performed by recently implanted listeners having no experience of psychophysical testing.
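The STRIPES stimuli are built from complexes of sine sweeps gliding up or down in frequency. As a simplified sketch (not the actual STRIPES implementation, whose parameters and sweep densities are specified in the original work), a single exponentially gliding sweep can be generated like this:

```python
import math

def sine_sweep(f_start_hz, f_end_hz, duration_s, sample_rate_hz=44100):
    """Generate one exponential sine sweep from f_start_hz to f_end_hz.
    The instantaneous frequency follows f(t) = f_start * (f_end/f_start)^(t/T),
    so the phase is the closed-form integral of that trajectory."""
    n = int(duration_s * sample_rate_hz)
    k = math.log(f_end_hz / f_start_hz)  # log of the frequency ratio
    samples = []
    for i in range(n):
        t = i / sample_rate_hz
        phase = (2.0 * math.pi * f_start_hz * duration_s / k
                 * (math.exp(k * t / duration_s) - 1.0))
        samples.append(math.sin(phase))
    return samples
```

In the test itself, listeners hear dense complexes of such sweeps and must judge only the sweep direction (upward vs. downward), with extraneous cues minimized so that both spectral and temporal resolution are required, and the spectro-temporal density of the complex controls difficulty.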
Project description:Auditory stream segregation is a perceptual process by which the human auditory system groups sounds from different sources into perceptually meaningful elements (e.g., a voice or a melody). The perceptual segregation of sounds is important, for example, for the understanding of speech in noisy scenarios, a particularly challenging task for listeners with a cochlear implant (CI). It has been suggested that some aspects of stream segregation may be explained by relatively basic neural mechanisms at a cortical level. During the past decades, a variety of models have been proposed to account for the data from stream segregation experiments in normal-hearing (NH) listeners. However, little attention has been given to corresponding findings in CI listeners. The present study investigated whether a neural model of sequential stream segregation, proposed to describe the behavioral effects observed in NH listeners, can account for behavioral data from CI listeners. The model operates on the stimulus features at the cortical level and includes a competition stage between the neuronal units encoding the different percepts. The competition arises from a combination of mutual inhibition, adaptation, and additive noise. The model was found to capture the main trends in the behavioral data from CI listeners, such as the increased probability of a segregated percept with increasing feature difference between the sounds, as well as the build-up effect. Importantly, this was achieved without any modification to the model's competition stage, suggesting that stream segregation could be mediated by a similar mechanism in both groups of listeners.
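The competition stage described above (mutual inhibition, adaptation, and additive noise between percept-encoding units) can be illustrated with a toy two-unit simulation. All parameter values and function names below are illustrative assumptions; this is a generic sketch of this class of model, not the study's implementation:

```python
import math
import random

def simulate_competition(steps=2000, dt=0.001, seed=0, drive=1.0,
                         inhibition=2.0, adaptation_gain=1.0,
                         tau=0.01, tau_adapt=1.0, noise=0.05):
    """Two threshold-linear units compete via mutual inhibition;
    each unit's adaptation variable slowly tracks its own rate,
    and additive Gaussian noise perturbs the rates. Returns the
    index (0 or 1) of the dominant unit at each time step."""
    rng = random.Random(seed)
    r = [0.6, 0.4]   # firing rates of the two percept units
    a = [0.0, 0.0]   # adaptation variables
    dominant = []
    for _ in range(steps):
        for i in range(2):
            j = 1 - i
            inp = drive - inhibition * r[j] - adaptation_gain * a[i]
            target = max(0.0, inp)  # threshold-linear activation
            r[i] += dt / tau * (-r[i] + target) \
                    + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            r[i] = max(0.0, r[i])
            a[i] += dt / tau_adapt * (-a[i] + r[i])
        dominant.append(0 if r[0] > r[1] else 1)
    return dominant
```

In such models, stronger inhibition makes one percept dominate (an integrated or a single-stream percept), while the slow build-up of adaptation and the noise drive switches between percepts, which is the kind of dynamics used to account for the build-up effect in the behavioral data.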