ORIGINAL RESEARCH
Year : 2021  |  Volume : 40  |  Issue : 1  |  Page : 23-30

Speech perception in noise and localization performance of digital noise reduction algorithm in hearing aids with ear-to-ear synchronization


1 Department of Audiology, All India Institute of Speech and Hearing, Mysuru, Karnataka, India
2 Department of Rehabilitation Health Sciences, College of Applied Medical Sciences, King Saud University, Riyadh, Kingdom of Saudi Arabia
3 Hearfon Systems Pvt Ltd, Bangalore, Karnataka, India

Date of Submission01-Nov-2021
Date of Acceptance27-Apr-2022
Date of Web Publication06-Sep-2022

Correspondence Address:
Dr. Geetha Chinnaraj
Department of Audiology, All India Institute of Speech and Hearing, Mysuru 570006, Karnataka
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jose.JOSE_4_21

  Abstract

Purpose: The present study aimed to compare speech perception in noise and horizontal sound-source localization with and without digital noise reduction (DNR) activated, in hearing aids with and without ear-to-ear synchronization.

Materials and Methods: Twenty-five listeners with bilateral mild-to-moderate sensorineural hearing loss, aged 18 to 55 years, participated. Each participant's horizontal sound-source localization performance was measured as the root-mean-square degree of error. Speech recognition in speech babble noise was measured as the signal-to-noise ratio required for a 50% recognition score (SNR-50). SNR-50 was measured with the noise source in four different directions, in four aided conditions formed by independently activating the wireless link and DNR.

Results: Wireless synchronization technology in hearing aids improved localization and speech perception in noise under certain conditions. Activating DNR as well further improved overall performance on the horizontal sound-source localization task. However, the amount of improvement in speech perception in noise with wireless synchronization and/or DNR depended on the spatial separation between the directions of speech and noise.

Conclusions: Activating DNR and wireless synchronization in hearing aids improved performance on the parameters assessed in the current study. However, the improvement in scores may or may not benefit the listener, depending on the directions of noise and speech.

Keywords: Hearing aids, sound localization, speech perception, wireless


How to cite this article:
Chinnaraj G, Tanniru K, Rajan Raveendran R. Speech perception in noise and localization performance of digital noise reduction algorithm in hearing aids with ear-to-ear synchronization. J All India Inst Speech Hear 2021;40:23-30




Several signal-processing strategies are employed in modern digital hearing aids to improve output sound quality. Among these, digital noise reduction (DNR) algorithms analyze the acoustic characteristics of the incoming signal (mostly its temporal modulations) and adjust the gain parameters accordingly (Bentler & Chiou, 2006). DNR in a conventional digital hearing aid can reduce the gain for steady-state noise (Bentler & Chiou, 2006). However, its efficiency tends to decline when the background noise has modulations similar to speech (Bentler & Chiou, 2006; Tawfik et al., 2010).

Keidser et al. (2006) studied the effects of different digital signal-processing algorithms on sound localization in conventional digital hearing aids. The authors reported that when the hearing aids in the two ears acted independently, multichannel wide dynamic range compression (WDRC) and noise reduction altered the interaural differences and the spectral shape of the signal. Such spectral and interaural distortions may make it difficult for the cortex to construct an authentic stereophonic image (Renvall et al., 2016). Individual cortical neurons can signal locations panoramically, and such neurons are distributed throughout the auditory cortex; they transmit information about sound-source location through spike counts and spike timing. Several studies have shown that the accuracy of location estimates based on cortical spike patterns degrades when temporal information is degraded or eliminated (Middlebrooks & Knudsen, 1984; Middlebrooks et al., 2002; Mickey & Middlebrooks, 2003).

Binaural signal processing enabled by a radio link between the two otherwise independent hearing aids, called wireless synchronization (WS), can overcome such difficulties. Several authors have studied the efficiency of this ear-to-ear synchronization technology.

Kreisman et al. (2010) evaluated the effect of binaural wireless advanced digital hearing aids (with coordinated compression, noise reduction, and directional features) on speech perception in noise in 36 participants (aged 39-79 years) with symmetrical sensorineural hearing loss. Participants had significantly better speech-in-noise (SIN) scores with binaural wireless technology in all noise conditions. Similarly, Sockalingam et al. (2009) evaluated wireless hearing aids with optimized gain and compression settings and with DNR and directionality synchronized between the two ears. Thirty listeners underwent sound quality and localization experiments in simulated cafeteria, garden, and street environments. Subjective sound quality analysis revealed a statistically significantly higher rating for naturalness with wireless activation only in the cafeteria environment.

Ibrahim et al. (2013) evaluated speech perception and localization in 20 listeners (eight with normal hearing, mean age 26 years; 12 with bilateral symmetrical moderate-to-severe sensorineural hearing loss, mean age 69 years) using binaural wireless technology. With only wireless WDRC synchronization enabled (directionality and DNR disabled), speech intelligibility showed no statistically significant enhancement, but localization improved. Our earlier study (Geetha et al., 2017) showed better SIN recognition and better localization with directional microphones wirelessly synchronized between the hearing aids.

Sockalingam et al. (2009) demonstrated that the DNR algorithm, along with other features in wireless hearing aids, improved localization. Although DNR may not increase speech intelligibility, it has been found to benefit many hearing aid users in other respects (Bentler, 2005). Boymans and Dreschler (2000) reported that the majority of listeners preferred DNR "on" over DNR "off," and that DNR improved the sound quality and listening comfort of hearing aid users. Alcantara et al. (2003) found listening effort to be similar with noise reduction "on" and "off," whereas Bentler et al. (2008) found reduced listening effort with noise reduction "on."

Though several studies have used perceptual measures to examine the functionality of DNR (Brons et al., 2013; Ricketts & Hornsby, 2005), the improvement expected from the DNR algorithm alone in wireless hearing aids has not been reported. Hence, the current study aimed to assess the independent and combined effects of DNR algorithms and WS in binaural hearing aids on speech perception in noisy environments and on horizontal localization. If wireless synchronized binaural hearing aids prove superior, the study's outcome can serve as perceptual evidence for selecting them over conventional binaural digital hearing aids.


  Materials and Methods


The study included speech perception and localization experiments. Informed consent was taken from all the participants.

Participants

A total of 25 listeners with bilateral sensorineural hearing loss participated (mean age = 39 years; age range = 18-55 years; standard deviation [SD] = 9.1). All participants had bilateral symmetrical postlingual mild-to-moderate sensorineural hearing loss with no history of middle ear disorders, neurological involvement, or psychological complaints. All were native speakers of Kannada with no prior experience of amplification devices. Hearing was assessed with a calibrated two-channel diagnostic audiometer using the modified Hughson-Westlake procedure (Carhart & Jerger, 1959). Speech recognition thresholds and speech identification scores (SIS) were obtained as part of speech audiometry, and both were in agreement with the degree of hearing loss. Immittance evaluation (GSI-TympStar middle ear analyzer) showed "A"- or "As"-type tympanograms with acoustic reflex thresholds appropriate to the degree of hearing loss. The presence of acoustic reflexes, together with the absence of an air-bone gap on audiometry, indicated no middle ear pathology.

Stimuli

Sentence perception in noise was assessed using the Kannada sentence test (25 lists of 10 sentences each) developed by Geetha et al. (2014). The recorded version spoken by a female speaker was used, as these sentences are standardized on normal-hearing and hearing-impaired individuals. A car horn sound (duration 260 ms) served as the stimulus for the localization task. The human neural system needs more than 150 ms to process and respond to novel sound locations (Makous & Middlebrooks, 1990), and Fujiki et al. (2002) suggested that binaural intensity and duration cues require around 100-150 ms to process; hence, a duration of more than 150 ms was preferred. The stimulus had a complex tonal structure with a spectral peak at 735 Hz in its long-term average spectrum, and was chosen to approximate the localization of vehicle sounds. Its temporal onsets and offsets were, however, modified so that it could be presented through the experimental loudspeakers without audible transients.

Procedure for hearing aid fitting

All participants were naïve to amplification and wore 16-channel digital WDRC receiver-in-the-canal hearing aids (with domes) in both ears. These hearing aids were selected because they could binaurally synchronize the noise reduction and directionality features and optimize compression settings; they used near-field magnetic inductive coupling to transfer a full-audio-bandwidth signal from one hearing aid to the other. A computer running NOAH software was connected to the hearing aids via a NOAHlink with appropriate cables. Hearing aids were fine-tuned, if necessary, based on the participant's word identification score in quiet at 40 dB (only minor gain adjustments, to achieve at least an 80% aided word identification score). As the study aimed to assess the benefit of the DNR algorithm, other features such as feedback management and directionality were disabled in all test conditions.

Localization procedure

A car horn sound (duration 260 ms, 22,050 Hz sampling rate, 32-bit resolution) was selected from available digital resources. A car horn was chosen because it elicits a reflexive response to locate the sound source, and most listeners know what the sound means and what to do in consequence (Lemaitre et al., 2009). The selected stimulus was symmetrically ramped with 50-ms onset and offset ramps to eliminate abrupt onset and offset effects, and its root-mean-square (RMS) amplitude was normalized to a -10 dB gain factor. Stimuli were delivered through a professional analog-to-digital/digital-to-analog converter (32-bit resolution) from Lynx Studio Technology (USA) with Cubase 6 software and Genelec loudspeakers (model 8020B), in a sound-treated room within permissible noise limits (Frank, 2000). Eight loudspeakers were placed in a circle covering 360°, and the listener sat in the middle of the setup at 1.5 m from the loudspeakers. [Figure 1] shows the localization setup. The stimulus was routed through the Lynx Aurora signal router to the respective loudspeaker, and its level was calibrated to deliver 70 dB SPL from each loudspeaker, one at a time. The order of presentation was randomized for each condition, with the stimulus presented from each loudspeaker three times, for a total of 24 stimuli per condition. A 5-s silence between successive presentations allowed participants to point to the previous stimulus and reset their head position to the front. No feedback was provided after each stimulus, to avoid learning and memory effects across the multiple presentations.
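The ramping and RMS normalization described above can be sketched as follows. This is an illustrative NumPy snippet, not the authors' actual processing chain; the function name and the raised-cosine ramp shape are our assumptions.

```python
import numpy as np

def prepare_stimulus(x, fs=22050, ramp_ms=50.0, target_db=-10.0):
    """Apply symmetric 50-ms onset/offset ramps and normalize the RMS
    amplitude to a -10 dB gain factor (re full scale). Illustrative
    sketch only; the raised-cosine ramp shape is an assumption."""
    n = int(fs * ramp_ms / 1000.0)
    ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(n) / n))  # rises 0 -> 1
    y = x.astype(float).copy()
    y[:n] *= ramp           # fade in over the first 50 ms
    y[-n:] *= ramp[::-1]    # fade out over the last 50 ms
    target_rms = 10.0 ** (target_db / 20.0)
    return y * (target_rms / np.sqrt(np.mean(y ** 2)))
```

Because the scaling is applied after ramping, the output RMS lands exactly on the target regardless of the input level.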
Figure 1: Localization setup



Participants pointed to the direction of the stimulus, and the examiner documented the angle of the indicated loudspeaker. The degree of error (DOE) was calculated for each condition (listed in [Table 1]). The DOE is a standard measure for quantifying the subjective localization of an auditory signal: it is the shortest difference, in degrees, between the azimuth of the loudspeaker that actually presented the stimulus and the azimuth of the loudspeaker the participant identified as the source.
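The shortest-difference rule for the DOE can be written as a small helper. This is a hypothetical sketch of the computation (the function name is ours), with 360° wrap-around handled so that, for example, 315° versus 0° gives 45° rather than 315°:

```python
def degree_of_error(presented_az, reported_az):
    """Shortest angular difference (in degrees) between the azimuth of
    the loudspeaker that presented the stimulus and the azimuth the
    participant reported, accounting for 360-degree wrap-around."""
    diff = abs(presented_az - reported_az) % 360
    return min(diff, 360 - diff)
```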
Table 1: Test conditions used in the experiments



[Table 1] shows the four conditions in which the hearing aids were preprogrammed and tested. Whenever binaural synchronization was activated, the WDRC was synchronized between the instruments. For each condition listed in [Table 1], the DOE was computed separately, and the root-mean-square DOE (rmsDOE) was calculated using the following formula (adapted from Ching, Incerti, & Hill, 2004):

rmsDOE = sqrt( (DOE_1^2 + DOE_2^2 + ... + DOE_N^2) / N )

where DOE_i is the degree of error on the ith presentation and N is the total number of presentations in that condition.
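The rmsDOE for a condition can then be computed as follows; this is an illustrative sketch (the function name is ours):

```python
def rms_doe(errors):
    """Root-mean-square degree of error over the 24 presentations of a
    condition (formula adapted from Ching, Incerti, & Hill, 2004)."""
    return (sum(e ** 2 for e in errors) / len(errors)) ** 0.5
```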
Speech intelligibility in noise procedure

A personal laptop was connected to the auxiliary input of a Maico MA-52 audiometer to present the stimuli for speech intelligibility in noise, using the same loudspeaker setup as the localization experiment. Stimulus calibration followed the localization procedure (an equivalent 1000-Hz pure tone was used to set the RMS amplitudes of the speech and speech babble, measured with the B-weighting network of a Bruel & Kjaer 2250 sound level meter). The Kannada sentences were presented with (1) noise from 0° azimuth, (2) noise from 90° azimuth, (3) noise from 270° azimuth, and (4) noise from both 90° and 270° azimuths.

In all conditions, the sentence lists were presented from 0° azimuth. Sentences were presented at different signal-to-noise ratios (SNRs) against four-talker Kannada speech babble fixed at 70 dB SPL. The SNR required for a 50% recognition score (SNR-50) was measured in a QuickSIN-like format: each sentence in a list was presented at a fixed SNR, with the first sentence at +20 dB SNR and each subsequent sentence reduced in 3-dB steps down to -7 dB SNR. This choice of SNRs was based on a pilot study on individuals with hearing loss, conducted for an earlier study, which identified the SNRs yielding maximum and minimum speech recognition scores and a suitable step size for estimating SNR-50 (Sanjeev, 2017). Each sentence contained four keywords, scored one point each, for a maximum of 40 points per list. The Spearman-Karber equation (Finney, 1952) was used to compute SNR-50.

The equation is as follows:

SNR-50 = i + (d / 2) - (d x TC) / w

where i = the initial presentation level (dB SNR); d = the attenuation step size (dB); w = the number of keywords per decrement; and TC = the total number of correct keywords.
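With the study's parameters (i = +20 dB SNR, d = 3 dB, w = 4 keywords), the Spearman-Karber computation reduces to a one-liner; a minimal sketch follows (the function name is ours):

```python
def snr_50(tc, i=20.0, d=3.0, w=4):
    """Spearman-Karber estimate of SNR-50: i + d/2 - (d * TC) / w,
    where i is the starting SNR (dB), d the step size (dB), w the
    keywords per step, and TC the total correct keywords (0-40 here)."""
    return i + d / 2.0 - (d * tc) / w
```

For example, a listener who repeats 20 of the 40 keywords correctly obtains SNR-50 = 20 + 1.5 - 15 = 6.5 dB.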

The sentence level was thus varied across each list to estimate the SNR yielding 50% correct (SNR-50). Listeners repeated the sentences, and the tester recorded the responses. One sentence list was presented per condition; test conditions were randomized, and each list was used only once. Testing took place in a sound-treated room within permissible noise limits, and participants kept their heads still during the assessment.


  Results


Data were analyzed with SPSS (version 20). The Shapiro-Wilk test was used to examine the distribution of the data: the localization data followed a normal distribution, whereas the SIN data did not. Hence, parametric statistics were used for the localization data and nonparametric statistics for the speech-intelligibility-in-noise data.

Localization experiment

[Figure 2] provides the mean and SD of rmsDOE in the localization experiment in all the aided conditions. Mean rmsDOE ranged from 28.8° to 36.9° across different conditions, with lower rmsDOE indicating better localization ability. The localization performance was better when the wireless binaural synchronization was activated.
Figure 2: Mean and SD of rmsDOE (in degree azimuth) in various conditions (n = 25)



Repeated-measures ANOVA was carried out for statistical comparison of the effect of binaural synchronization and DNR on localization ability. Results showed a significant effect of binaural synchronization and DNR on localization ability (F (4.625, 110.994) = 49.6, P < 0.01). The Bonferroni test was used for the pair-wise analysis [Table 2].
Table 2: Results of Bonferroni pair-wise comparison of rmsDOE (n = 25)



[Table 2] shows that rmsDOE differed significantly between all conditions except "WS Off/DNR On" versus "WS On/DNR Off." Activating DNR resulted in better localization than not activating it, and wireless binaural synchronization likewise led to significantly better localization (P < 0.01) than no synchronization between the ears. The "WS On/DNR On" condition showed significantly fewer localization errors (P < 0.01) than all other conditions, indicating that activating DNR together with synchronization further reduced localization errors. Errors were greatest when both synchronization and DNR were deactivated.

Speech intelligibility in noise experiment

The mean and SD of SNR-50 in all conditions are given in [Table 3]. A smaller SNR-50 value indicated better performance.
Table 3: Mean and SD of SNR-50 (in dB) in all the conditions with the noise from 0°, 90°, 270°, and 90° and 270° azimuth (n = 25)



The Friedman test revealed a significant difference in SNR-50 across the four conditions (χ²(31) = 470.1, P < 0.01) for the 0°, 90°, 270°, and 90°-and-270° azimuth noise configurations. Pair-wise comparisons used the Wilcoxon signed-rank test; the output for each noise configuration is presented in [Table 4].
Table 4: Results of Wilcoxon signed-rank test of SNR-50 with the noise from different angles



When both the speech material and the speech babble were presented at 0° azimuth, SNR-50 did not differ significantly between any of the tested conditions except "WS Off/DNR Off" versus "WS On/DNR On." That is, neither WS nor the DNR algorithm alone significantly improved speech perception when speech and babble were co-located at 0° azimuth. When the speech was at 0° azimuth and the babble at either 90° or 270° azimuth, [Table 4] shows significant differences in SNR-50 between all tested conditions except "WS On/DNR On" versus "WS On/DNR Off." These results indicate that activating the DNR algorithm, alone or together with WS, improves speech perception scores when the direction of the speech is separated from that of the noise.

No significant difference in SNR-50 was noted between "WS Off/DNR On" and "WS On/DNR Off" when noise came from both 90° and 270° azimuths. Wireless binaural synchronization nevertheless led to significantly better SNR-50 (P < 0.05) when the speech babble arrived from 90° or 270° azimuth and when it was presented from both the 90° and 270° loudspeakers. Further, with wireless binaural synchronization enabled, activating DNR yielded significantly better SNR-50 (P < 0.01) than when both DNR and synchronization were disabled ("WS Off/DNR Off"). The observed effect sizes (ηp²) ranged from 0.3 to 0.6 across conditions, representing moderate-to-large effects; individual effect sizes are given in [Table 4].


  Discussion


The study aimed to assess the efficacy of DNR algorithms in binaural hearing aids that communicate with each other. Efficacy was assessed in terms of horizontal localization, measured as the DOE, and speech intelligibility, measured as SNR-50. Results showed that wireless binaural synchronization between the two ears significantly improved horizontal localization and speech perception in noise in individuals with mild-to-moderate hearing impairment, except when noise and speech came from the same angle. Activating wireless binaural synchronization alone reduced the RMS localization error by 6°. These findings are consistent with Ibrahim et al. (2013) and Sockalingam et al. (2009). The ear-to-ear synchronization feature enables the right and left hearing aids to coordinate and thus better preserve localization cues (Ibrahim et al., 2013; Kreisman et al., 2010; Sockalingam et al., 2009). Standard binaural hearing aids process signals separately in the two ears, so gain and timing differences tend to arise between them, whereas wireless hearing aids coordinate their outputs, helping to preserve interaural time difference (ITD) and interaural level difference (ILD) cues and hence localization (Geetha et al., 2015). In Sockalingam et al. (2009), by contrast, localization error was reduced by 14%; this difference could be due to the localization setup used, as their loudspeakers were separated by 15° and spanned only 0° to 105° azimuth.

The difference noted between "WS Off/DNR On" and "WS On/DNR On" is consistent with the findings of Keidser et al. (2006), who assessed localization with and without DNR in conventional bilateral hearing aids. Although noise reduction operated in both ears, the ILD cues became distorted because the amount and frequency bands of noise reduction differed between the two ears. The ILD, the difference in a signal's intensity level between the two ears, is an essential cue for localization. These findings support the present results: when noise reduction in wirelessly synchronized binaural hearing aids is activated, binaural cues are better preserved and localization errors are fewer. Similar results have been reported by Van den Bogaert et al. (2006).

The differences could also be attributed to the signal properties of the stimulus. As the stimulus is a predominantly low-frequency (735 Hz) signal, minute temporal changes in the hearing aid output caused by enabling either DNR or synchronization could have produced the performance differences. Overall, the lowest error obtained in the current study was 28.8°, whereas Nisha and Uppunda (2016) reported an average rmsDOE of less than 10° in normal-hearing adults. This indicates that even with wireless synchronization and DNR, normal localization performance may not be attained by individuals with bilateral mild-to-moderate hearing loss using amplification.

Second, the results of the study indicated an improvement in speech perception in noise performance because of the coordinated output, except when noise and speech are from the same angle. These results are in concurrence with those of Kreisman et al. (2010). Kreisman et al. assessed speech intelligibility using the Quick SIN and HINT tests, and they showed improved performance with wireless binaural synchronization.

When noise and speech come from the same angle (0° azimuth), there is no significant difference in SNR-50 with and without the wireless link, because the two hearing aids receive the same input with little separation between speech and noise, making modulation-based separation of the two difficult. The improvement in speech perception when the noise comes from a different angle can be attributed to the spatial separation between the target signal and the interfering noise, which is essential for better speech perception (Hawley et al., 1999; Ricketts, 2001). DNR works better when the SNR is positive (Mueller et al., 2006); when the signal and noise come from different directions, especially with speech from the front and noise from the back, the SNR tends to be better, which might have led to better results. The current results indicate that even DNR algorithms require spatial separation between speech and noise to work effectively.

Our earlier study (Geetha et al., 2017) assessed the efficacy of directionality in wireless hearing aids using the same setup and showed that it significantly improves speech perception in noise and localization. Compared with that study, even with spatial separation present, speech perception scores with DNR were poorer than with directionality (Geetha et al., 2017). Similarly, on the localization task, the binaurally synchronized directionality algorithm yielded an rmsDOE of 27.9° (as reported in Geetha et al., 2017), whereas the binaurally synchronized DNR algorithm yielded 28.8°. These differences could be attributed to the lower efficiency of the DNR algorithm in separating the speech signal from speech babble, owing to the similar temporal characteristics of the two (Bentler & Chiou, 2006). Overall, however, activating DNR along with WS appears beneficial for the tasks considered in this study.


  Conclusions


It can be concluded that DNR in wirelessly synchronized binaural hearing aids improves localization and speech perception in noise in some conditions, albeit perhaps not to the extent of the directionality algorithm. It is also evident that the effectiveness of DNR algorithms depends on the direction of the speech babble. These results validate the benefit provided by wireless hearing aids for better understanding of speech in the presence of speech babble, and support counseling individuals with mild-to-moderate sensorineural hearing loss on the advantages of wireless hearing aids when suggesting suitable algorithms. However, the findings are limited to bilateral mild-to-moderate sensorineural hearing loss and to the hearing aids tested in the current study.

Acknowledgements

The present study data are a part of the project funded by the AIISH Research Fund. The authors extend their gratitude to the Director, All India Institute of Speech and Hearing, Mysore, and all the participants of the present study. The authors appreciate the support from the Research Center at the College of Applied Medical Sciences at King Saud University, Kingdom of Saudi Arabia.

Financial support and sponsorship

This study is part of a project funded by the AIISH Research Fund (ARF).

Conflicts of interest

There are no conflicts of interest.



 
  References

1. Alcantara, J., Moore, B., Kühnel, V., & Launer, S. (2003). Evaluation of the noise reduction system in a commercial hearing aid. International Journal of Audiology, 42: 34-42.
2. Bentler, R. A. (2005). Effectiveness of directional microphones and noise reduction schemes in hearing aids: A systematic review of the evidence. Journal of the American Academy of Audiology, 16(7): 473-484.
3. Bentler, R., & Chiou, L. K. (2006). Digital noise reduction: An overview. Trends in Amplification, 10(2): 67-82.
4. Bentler, R., Wu, Y.-H., Kettel, J., & Hurtig, R. (2008). Digital noise reduction: Outcomes from field and lab studies. International Journal of Audiology, 47(8): 447-460.
5. Boymans, M., & Dreschler, W. A. (2000). Field trials using a digital hearing aid with active noise reduction and dual-microphone directionality. Audiology: Official Organ of the International Society of Audiology, 39(5): 260-268.
6. Brons, I., Houben, R., & Dreschler, W. A. (2013). Perceptual effects of noise reduction concerning personal preference, speech intelligibility, and listening effort. Ear and Hearing, 34(1): 29-41.
7. Carhart, R., & Jerger, J. F. (1959). Preferred method for clinical determination of pure-tone thresholds. Journal of Speech and Hearing Disorders, 24(4): 330-345.
8. Ching, T. Y., Incerti, P., & Hill, M. (2004). Binaural benefits for adults who use hearing aids and cochlear implants in opposite ears. Ear and Hearing, 25(1): 9-21.
9. Finney, D. J. (1952). Probit Analysis. Cambridge, England: Cambridge University Press.
10. Frank, T. (2000). ANSI update: Maximum permissible ambient noise levels for audiometric test rooms. American Journal of Audiology, 9(1): 3-8.
11. Fujiki, N., Riederer, K. A., Jousmäki, V., Mäkelä, J. P., & Hari, R. (2002). Human cortical representation of virtual auditory space: Differences between sound azimuth and elevation. European Journal of Neuroscience, 16(11): 2207-2213.
12. Geetha, C., Kumar, K. S., Manjula, P., & Pavan, M. (2014). Development and standardization of the sentence identification test in the Kannada language. Journal of Hearing Science, 4(1): 18-26.
13. Geetha, C., Rajan, R. R., & Tanniru, K. (2015). A review of the performance of wireless synchronized hearing aids. Journal of Hearing Science, 5(4): 9-12.
14. Geetha, C., Tanniru, K., & Rajan, R. R. (2017). Efficacy of directional microphones in hearing aids equipped with wireless synchronization technology. The Journal of International Advanced Otology, 13(1): 113.
15. Hawley, M. L., Litovsky, R. Y., & Colburn, H. S. (1999). Speech intelligibility and localization in a multi-source environment. Journal of the Acoustical Society of America, 105(6): 3436-3448.
16. Ibrahim, I., Parsa, V., Macpherson, E., & Cheesman, M. (2013). Evaluation of speech intelligibility and sound localization abilities with hearing aids using binaural wireless technology. Audiology Research, 3(1): e1.
17. Keidser, G., Rohrseitz, K., Dillon, H., Hamacher, V., Carter, L., Rass, U., & Convery, E. (2006). The effect of multichannel wide dynamic range compression, noise reduction, and the directional microphone on horizontal localization performance in hearing aid wearers. International Journal of Audiology, 45(10): 563-579.
18. Kreisman, B. M., Mazevski, A. G., Schum, D. J., & Sockalingam, R. (2010). Improvements in speech understanding with wireless binaural broadband digital hearing instruments in adults with sensorineural hearing loss. Trends in Amplification, 14(1): 3-11.
19. Lemaitre, G., Susini, P., Winsberg, S., McAdams, S., & Letinturier, B. (2009). The sound quality of car horns: Designing new representative sounds. Acta Acustica united with Acustica, 95: 356-372.
20. Makous, J. C., & Middlebrooks, J. C. (1990). Two-dimensional sound localization by human listeners. Journal of the Acoustical Society of America, 87(5): 2188-2200.
21. Mickey, B. J., & Middlebrooks, J. C. (2003). Representation of auditory space by cortical neurons in awake cats. Journal of Neuroscience, 23: 8649-8663.
22. Middlebrooks, J., & Knudsen, E. (1984). A neural code for auditory space in the cat's superior colliculus. Journal of Neuroscience, 4: 2621-2634.
23. Middlebrooks, J. C., Xu, L., Furukawa, S., & Macpherson, E. A. (2002). Cortical neurons that localize sounds. The Neuroscientist, 8(1): 73-83.
24. Mueller, H. G., Weber, J., & Hornsby, B. W. Y. (2006). The effects of digital noise reduction on the acceptance of background noise. Trends in Amplification, 10: 83-93.
25. Nisha, K. V., & Uppunda, A. K. (2016). Effect of localization training in a horizontal plane on auditory spatial processing skills in listeners with normal hearing. Journal of Indian Speech Language & Hearing Association, 30(2): 28-39.
26. Renvall, H., Staeren, N., Barz, C. S., Ley, A., & Formisano, E. (2016). Attention modulates the auditory cortical processing of spatial and category cues in naturalistic auditory scenes. Frontiers in Neuroscience, 10: 254.
27. Ricketts, T. A. (2001). Directional hearing aids. Trends in Amplification, 5(4): 139-176.
28. Ricketts, T. A., & Hornsby, B. W. (2005). Sound quality measures for speech in noise through a commercial hearing aid implementing digital noise reduction. Journal of the American Academy of Audiology, 16(5): 270-277.
29. Sanjeev, M. R. (2017). Effect of noise on the sentence identification test in Kannada in individuals with hearing loss. Unpublished master's dissertation, Mysuru, India: University of Mysore.
30. Sockalingam, R., Holmberg, M., Eneroth, K., & Schulte, M. (2009). Binaural hearing aid communication shown to improve sound quality and localization. The Hearing Journal, 62(10): 46-47.
31. Tawfik, S., El Danasoury, I. M., AbuMoussa, H., & Naguib, M. F. (2010). Enhancement of speech intelligibility in digital hearing aids using directional microphone/noise reduction algorithm. The Journal of International Advanced Otology, 6(1): 74-82.
32. Van den Bogaert, T., Klasen, T. J., Moonen, M., Van Deun, L., & Wouters, J. (2006). Horizontal localization with bilateral hearing aids: Without is better than with. Journal of the Acoustical Society of America, 119(1): 515-526.

